External modularised pipeline for Jenkins (realized in Python)

Robert Diers
3 min read · Dec 15, 2023


image source: https://en.wikipedia.org/wiki/Jenkins_%28software%29

Typically, a Jenkinsfile is used, and any kind of pipeline can be built with it. For reusing existing build steps there are shared libraries or the MPL (Modular Pipeline Library).

https://www.jenkins.io/blog/2019/01/08/mpl-modular-pipeline-library/

Unfortunately, you need in-depth knowledge of Jenkins to use them, and ultimately they are optional: if you want, you can continue to do whatever you want in your Jenkinsfile.

What is our goal?

Use one pipeline throughout the department or the whole company, which can be controlled by configuration and supports different versions as an independent product (including a product owner).

Our solution

Fortunately, there is a plugin that makes it possible to store the pipeline externally: https://plugins.jenkins.io/remote-file/

There are two approaches to supporting different versions of the pipeline:

  • The plugin allows selecting the branch of the external repository containing the Jenkinsfile, so you can use a different branch/tag per version.
  • Add the version to the configuration file in the application repository and build versioning on your own within the master branch (use folders, filenames, whatever; be creative).

It is a question of responsibility: if the project itself manages the Jenkins pipeline, the branch-based approach is recommended (sadly, it conflicts with our goal).

If the pipeline is specified by a central team and the applications themselves have no access to the configuration of the multi-branch build, it is easier to implement the versions within the master branch. From my personal point of view, this also simplifies migrations, and information and warnings for older versions can be provided without great administrative effort, for example that a version has reached end-of-life.
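To illustrate the in-master approach, here is a minimal sketch (folder layout, file names and keys are hypothetical, not our actual implementation): an entry script reads the requested version from the project's configuration file, warns about end-of-life versions and dispatches into the matching version folder.

```python
# pipeline.py - hypothetical entry point of the externalized pipeline;
# assumes one folder per supported version in master, e.g. versions/v1, versions/v2
import importlib
import sys

import yaml  # PyYAML

SUPPORTED_VERSIONS = {"v1", "v2"}
EOL_VERSIONS = {"v1"}  # versions that have reached end-of-life

def main(config_path: str = "build-config.yml") -> None:
    with open(config_path, encoding="utf-8") as fh:
        config = yaml.safe_load(fh)

    version = config.get("pipeline_version", "v2")
    if version not in SUPPORTED_VERSIONS:
        sys.exit(f"unknown pipeline version: {version}")
    if version in EOL_VERSIONS:
        print(f"WARNING: pipeline version {version} has reached end-of-life, please migrate")

    # dispatch into the versioned implementation, e.g. versions/v2/pipeline.py
    importlib.import_module(f"versions.{version}.pipeline").run(config)

if __name__ == "__main__":
    main()
```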

Configuration

We have a YAML file in each project that serves as the configuration for the build pipeline (and as the activator for the remote plugin). In addition to the desired version of the build pipeline, this file describes the desired artifacts, but not the exact build steps.

For example, you may want to create a Python package and make it available via PyPI. Or a container should be created from a Dockerfile and automatically pushed to the company's own Docker registry.
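To give an idea, such a configuration could look like the following sketch; the key names are invented for this illustration and are not the actual schema:

```python
import yaml  # PyYAML

# Hypothetical contents of a per-project build configuration file:
# it declares the desired artifacts, never concrete build steps.
EXAMPLE_CONFIG = """
pipeline_version: v2
artifacts:
  - type: python-package   # build a wheel and upload it to PyPI
    path: .
  - type: docker-image     # build from a Dockerfile, push to the company registry
    dockerfile: Dockerfile
    name: my-service
"""

config = yaml.safe_load(EXAMPLE_CONFIG)
print(config["artifacts"][0]["type"])  # -> python-package
```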

Reasons for use

  • automatic versioning of build artifacts (for example based on the branch name of the application)
  • automatic artifact upload as snapshot (git branch) or release (git tag); see the sketch after this list
  • additional build steps like static code scanners or security tools
  • simplified infrastructure management through the use of an additional config directly in the pipeline repository
  • additional quality checks — for example, a release must not use snapshot components internally
  • automatic documentation generation (from code or git history)
  • automatic SBOM, dependency-tree, CVE scan result collection, …
  • and much more…
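As an illustration of the first two points, the version string can be derived from what Jenkins checked out: multibranch pipelines expose BRANCH_NAME (and TAG_NAME for tag builds). The Maven-style -SNAPSHOT suffix is only an assumption for this sketch:

```python
import os
import re

def artifact_version() -> str:
    # Multibranch pipelines set TAG_NAME for tag builds and BRANCH_NAME otherwise.
    tag = os.environ.get("TAG_NAME")
    if tag:
        return tag.lstrip("v")                     # tag v1.4.0 -> release 1.4.0
    branch = os.environ.get("BRANCH_NAME", "local")
    safe = re.sub(r"[^A-Za-z0-9.]+", "-", branch)  # feature/x -> feature-x
    return f"{safe}-SNAPSHOT"                      # branch build -> snapshot

print(artifact_version())  # e.g. main-SNAPSHOT or 1.4.0
```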

Why do we use Python for the pipeline?

It is very inconvenient to work with YAML in the shell. Python also offers many possibilities to improve build steps, for example by using multi-threading.
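As a sketch, independent build steps can run in parallel in a thread pool; the commands below are only placeholders:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Steps that do not depend on each other (commands are placeholders).
STEPS = {
    "unit-tests": ["pytest", "-q"],
    "lint": ["ruff", "check", "."],
}

def run_step(name: str, cmd: list[str]) -> None:
    print(f"[{name}] starting")
    subprocess.run(cmd, check=True)  # raises on failure and fails the build
    print(f"[{name}] done")

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_step, name, cmd) for name, cmd in STEPS.items()]
    for future in futures:
        future.result()  # re-raises any step failure here
```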

The disadvantage is that you need Python in every container required for the build. A central installation script simplifies adding further required Python packages to all of them.
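Such a central script could be as simple as this sketch (the package list is illustrative):

```python
# bootstrap.py - run inside every build container before the first pipeline step.
import subprocess
import sys

# Shared package set of the pipeline; extend the list here and
# every build container picks it up automatically.
REQUIRED_PACKAGES = ["pyyaml", "requests"]

subprocess.run(
    [sys.executable, "-m", "pip", "install", "--quiet", *REQUIRED_PACKAGES],
    check=True,
)
```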

Basically, however, the use of Python is just one of many possibilities. If your DevOps team has more experience with Go, Rust or shell, then go for it; the only important thing is that you can successfully execute build and test commands in the containers or directly with the agents.

Oh yes, and code sharing: since the pipeline is a product, it can of course also be built by itself and publish reusable modules to PyPI :-)
