
Jenkins November 2022 Newsletter


November 2022

Jenkins November 2022 Newsletter

Welcome to the Jenkins Newsletter! This is a compilation of progress within the project, highlighted by Jenkins Special Interest Groups (SIGs) for the month of November.

Got Inspiration? We would love to highlight your cool Jenkins innovations. Share your story and you could be in the next Jenkins newsletter. View previous editions of the Jenkins Newsletter here!

Happy reading!

Highlights:

  • Congratulations to the new officers and members of the governance board.

  • Pipeline Utility Steps Plugin was impacted by CVE-2022-33980, leading to a remote code execution in Jenkins.

  • New "jenkinsci" NPM official account for Jenkins project.

  • The ssh-agent Docker image now uses a correct volume for the agent workspace.

  • Jenkins 2.375.1 was released with more improvements to the user interface.

  • Jenkins.io is now using Algolia v3 for its search feature.

  • Jenkins in Google Summer of Code 2023: preparations are underway to help new contributors write effective proposals.

Governance Update

Contributed by: Mark Waite

The 2022 Jenkins elections are complete. Thanks to Kevin Martens for the blog post announcing the results of the election and thanks to Damien Duportal for running the election process. We are glad to have received 65 registrations to vote in the election. That is fewer than the 81 registered voters we had in 2021.

Elected officers and members of the Jenkins governance board will begin their service December 3, 2022. Special thanks to the members of the governance board that have completed their two-year service:

Congratulations to the new members of the governance board.

Congratulations also to the newly elected officers:

Governance board meetings are held every two weeks. Meeting notes and recordings of the meetings are available on the Jenkins community site.

Security Update

Contributed by: Wadeck Follonier

  • One security advisory during November about plugins

    • While vulnerabilities in third-party dependencies usually do not impact the final product, it is worth mentioning when there is a true positive: the Pipeline Utility Steps Plugin was impacted by CVE-2022-33980, leading to remote code execution in Jenkins.

Infrastructure Update

Contributed by: Damien Duportal

  • Upgrade of our controllers to the latest LTS 2.375.1.

  • New "jenkinsci" NPM official account for Jenkins project.

  • JDK17 for Windows provided to developers.

  • The https://meetings.jenkins-ci.org/ website (archive of board meeting minutes) has been recovered and is back online.

  • Deprecated "windows-slaves" plugin removed.

  • Azure networks are now managed as code to ease maintenance (paving the way for future IPv6 support and better network performance).

Platform Update

Contributed by: Bruno Verachten

  • Docker Images for Jenkins changes:

  • New platforms:

    • Experiments with Windows 2022: working with Jenkins Infrastructure to build these images.

  • Experiments with JDK19: ANTLR 2 to ANTLR 4 transition complete, Jenkins core compiles.

User Experience Update

Contributed by: Mark Waite

Jenkins 2.375.1 was released on November 30, 2022 with more improvements to the user interface. Special thanks to Jan Faracik, Tim Jacomb, Alexander Brandes, Wadeck Follonier, Daniel Beck, and many others that were involved in the most recent improvements to the user experience.

The weather icons have been updated to be more recognizable. They continue to work well in both light themes and dark themes.


The plugin manager navigation has moved from the top of the page to the side panel. The search field is more visible.


The User Experience SIG is also pleased to note a valuable improvement in the Jenkins 2.380 weekly release. The tooltips that were previously provided by the unmaintained and long outdated YahooUI JavaScript library are now being provided by the Tippy.js JavaScript framework. Special thanks to Jan Faracik for his work removing that use of the YahooUI JavaScript library.

Documentation Update

Contributed by: Kevin Martens

Jenkins.io is now using Algolia v3 for its search feature. This update has not only improved searching on the Jenkins site, but also provided a new search UI, which provides helpful suggestions. Massive thanks to Gavin Mogan for working on this and improving the Jenkins.io search.


Algolia has graciously upgraded our search from their legacy v2 to the super pretty and useful v3 apis. This includes a new fully accessible popup. I just love being able to goto jenkins.io and hitting ctrl+k to search.

said Gavin Mogan, current Jenkins Board Member, maintainer of the Jenkins plugin site, and plugin site API.

Advocacy & Outreach Update

Contributed by: Alyssa Tong

Jenkins gets ready for Google Summer of Code 2023!

Google recently announced the GSoC 2023 program timeline, and the Advocacy & Outreach SIG has responded! We’ve published the GSoC early preparations for applicants - steps to effective submission post to help future contributors with the process. On December 20, 2022 at 4PM UTC there will be a walkthrough of this process via a webinar. We would like this to be an interactive webinar, so bring your questions. See the Event Calendar (see GSoC 2023 - Contributor webinar: How to get ready) for login details.

We are still in great need of project idea proposals and mentors.

  • GSoC project ideas are coding projects that potential GSoC contributors can accomplish in 10-22 weeks.

  • The coding projects can be new features, plugins, test frameworks, infrastructure, etc. Anyone can submit a project idea.

  • Mentoring takes about 5 to 8 hours of work per week (more at the start, less at the end).

  • Mentors provide guidance, coaching, and sometimes a bit of cheerleading. They review GSoC contributor proposals, pull requests and contributor presentations at the evaluation phase.


[GSoC] 'Tis the Season to Give the Gift of Mentorship


Jenkins GSoC

'Tis the season of gift giving, and what better gift to give than mentorship?

Jenkins is a big supporter of and contributor to programs like Google Summer of Code (GSoC), She Code Africa, and Outreachy. We believe in the value of helping new, and in many cases under-represented, contributors get started in open source. There is also a special 'je ne sais quoi' that comes with giving your time and sharing your knowledge.

During this season of giving, we have opportunities for you to give the gift of mentorship. Jenkins in Google Summer of Code welcomes any Jenkins user interested in making a lifelong impact on contributors who are new to open source.

What does mentorship require?

You know enough about the Jenkins code to guide the GSoC contributor on coding.

As a mentor, you will:

  • Allocate 5-8 mentoring hours per week, for 10-22 weeks depending on the size of the project.

  • Provide Jenkins specific code reviews on pull-requests.

  • Review new contributors' proposals and presentations.

  • Complete two (2) evaluation documents from Google.

  • Collaborate with another mentor on the chosen project, ensuring that no one is working alone.

What will you work on?

  • You may choose to be a mentor/co-mentor for any of the listed project ideas. Please note, this list is still evolving.

  • You can also propose a project that is near and dear to your heart.

Additional info:

We look forward to welcoming you as mentors to this rewarding program! Reach out to us on the GSoC SIG Gitter chat.

Maintenance with downtime of JFrog Artifactory (repo.jenkins-ci.org) on December 18, 2022


Maintenance with downtime of JFrog Artifactory (repo.jenkins-ci.org) December 18, 2022

JFrog

On December 18, 2022, our sponsor JFrog will perform maintenance on our "jenkinsci" Artifactory instance.

Expect a complete downtime of about 6 hours due to the nature of this maintenance. The maintenance involves cloud migration from Google Cloud to Amazon Web Services.

Impacts

Only Jenkins contributors will be impacted, as no releases or builds on ci.jenkins.io or on developer machines will succeed during the downtime. Jenkins users won’t be impacted, as plugin downloads, update center, and websites such as https://jenkins.io and https://plugins.jenkins.io will continue working as expected.

We’ll update the Jenkins Infrastructure Status Page with the exact downtime window.

Artifactory Service repo.jenkins-ci.org: a backbone of the Jenkins project

Although it is not directly used by typical Jenkins users, the repo.jenkins-ci.org service is crucial for any contributor to the Jenkins project, as it is the source of truth hosting all development dependencies and released binaries.

The entire Jenkins Infrastructure uses this repository to build and publish Jenkins itself, and also all of its plugins and side projects.

It’s also heavily used by the Jenkins Security Team as part of their advisory publication.

This migration is an important requirement to ensure that JFrog is able to monitor the service and provide the best availability and infrastructure health. If you are curious or interested, you can find more details on the Jenkins Infrastructure issue tracker: Jenkins Infrastructure Helpdesk Issue #3288

Jenkins plugin development requires Java 11 or newer


🚀 Java Platform update

Jenkins has required Java 11 or newer since 2.361.1 LTS (released on September 7, 2022) and has supported Java 17 since 2.346.1 LTS (released on June 16, 2022). At the time, more users were running Jenkins on Java 8 than on Java 11, and a negligible number of users were running Jenkins on Java 17. In recent months, usage of Java 11 has surpassed usage of Java 8, and usage of Java 17 has entered a phase of rapid adoption:

JVMS by date as of November 2022

With all known production issues on Java 17 resolved, we continue to encourage users to upgrade to Java 17, particularly in light of the fact that many Java vendors plan to stop providing free public updates for Java 11 on or after October 2024. Report any issues running on Java 17 in JENKINS-67908. The Versions Node Monitors plugin, which helps keep track of Java versions across agents, has also been recently updated.

👷 Changes for plugin developers

With the migration to Java 11 now well underway for end users and Jenkins core alike, the focus has shifted to the plugin development toolchain. Since the developer documentation now recommends 2.361.4 as a baseline for plugins, the question arises of when to require Java 11 for plugin development.

Historically the Jenkins project has supported old baselines for a long time in the plugin build toolchain, with recent versions of the plugin build toolchain supporting baselines up to two (2) years old. This provides a high degree of flexibility for plugin maintainers and ultimately end users. For the reasons given on the developer mailing list, however, this level of compatibility could not be preserved throughout the transition to Java 11. We therefore took the decision to require a baseline of 2.361 or newer for plugins in recent releases of the plugin build toolchain, about a year ahead of our usual timeline. This unusually aggressive timeline does not represent a change in policy for the Jenkins project, but is rather an exception to the rule, made to cope with a number of breaking changes in the upstream Java community.

Such a transition constitutes a flag day, or breaking change. Below are answers to frequently asked questions, inspired by Uma Chingunde’s excellent framework for communicating organizational changes and building on the excellent cheat sheet started by Jean-Marc Meessen.

What is the change?

Beginning with 4.52, the plugin build toolchain requires Java 11 or newer and Jenkins 2.361 or newer.

When is the change effective?

The change is effective as of plugin parent POM 4.52, which was released on December 1, 2022.

Why is it happening?

Due to a large number of breaking changes in the upstream Java community, it became impractical to support both Java 8 and Java 11 in the same build toolchain. For more details, see the developer mailing list.

Who is affected by the change?

This change affects all plugin developers, particularly those who receive updates to the plugin build toolchain via Dependabot pull requests.

What action do I need to take?

At a high level, three actions need to be taken, the third of which depends on the first two:

  1. Adjust the Jenkinsfile to use Java 11 or newer, removing any Java 8 configuration(s).

  2. Update the baseline to 2.361 or newer.

  3. Update the plugin parent POM to 4.52 or newer.

We will elaborate on each of these three points below.

Adjust the Jenkinsfile to use Java 11 or newer

First, consult the matrix of build configurations in the plugin’s Jenkinsfile. The goal is to ensure the plugin is building on Java 11 and 17 and not on Java 8, as recommended in the latest version of the archetype:

buildPlugin(
    useContainerAgent: true, // Set to `false` if you need to use Docker for containerized tests
    configurations: [
        [platform: 'linux', jdk: 17],
        [platform: 'windows', jdk: 11],
    ])

Note the explicit configurations entry with two versions: Java 11 and 17. If there is an explicit configuration for Java 8, remove it — Java 8 is no longer supported as of plugin parent POM 4.52.

If you are not already using container agents, we recommend adding useContainerAgent: true (but this is not mandatory). Doing so results in ci.jenkins.io spawning a container agent for executing the build instead of a virtual machine, which is usually faster to start and reduces costs for the Jenkins project.

Some older Jenkinsfiles may not have an explicit list of configurations:

buildPlugin()

Such builds will use the defaults for buildPlugin(), which (at the time of this writing) are Java 8 on Linux and Windows. Since plugin parent POM 4.52 and newer require Java 11, such a configuration is incompatible, and it should be changed to an explicit configuration that includes Java 11 and 17 as shown above.

At some point in the future, the default may change from Java 8 to Java 11; however, such a change was considered premature at the time of this writing.

Note that changes to a plugin’s Jenkinsfile require write access to take effect in pull request builds. If you submit a pull request to a repository to which you do not have write access, any Jenkinsfile changes will be ignored with this message:

Jenkinsfile has been modified in an untrusted revision

Pending JENKINS-46795, the pull request will need to be refiled by someone with write access to ensure the Jenkinsfile changes take effect.

Update the baseline to 2.361 or newer

The process for updating the baseline is described in the developer documentation. To summarize, set the jenkins.version property in pom.xml to 2.361 or newer:

<properties>
  <jenkins.version>2.361.4</jenkins.version>
</properties>

The baseline is often encoded in one other place in pom.xml: the version of the plugin BOM. Check the <dependencyManagement> section for an entry with the group ID io.jenkins.tools.bom and an artifact ID that starts with bom-. If there is such an entry, and if it is using a line older than the baseline, then update it to match the baseline. For the latest version, see the list of BOM releases. At the time of this writing, the latest version is 1750.v0071fa_4c4a_e3:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.jenkins.tools.bom</groupId>
      <artifactId>bom-2.361.x</artifactId>
      <version>1750.v0071fa_4c4a_e3</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

For more information about the plugin BOM, see its README.

Update the plugin parent POM to 4.52 or newer

Having completed the above prerequisites, the plugin parent POM can be successfully upgraded to 4.52 or newer. For the latest version, see the list of plugin parent POM releases. At the time of this writing, the latest version is 4.53:

<parent>
  <groupId>org.jenkins-ci.plugins</groupId>
  <artifactId>plugin</artifactId>
  <version>4.53</version>
  <relativePath/>
</parent>

Java level

Some plugins may have a Jenkinsfile with an older javaLevel property, and some plugins may have a pom.xml file with a java.level property. These have been deprecated since plugin parent POM 4.40. If present, they should be deleted. At the time of this writing, their presence will log a warning.

At some point in the future, this warning will be changed to an error and will fail the build.
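
For illustration only, the deprecated property typically looks like the following in a plugin's pom.xml (the value 8 here is just an example); the whole property should simply be removed:

<properties>
  <!-- Deprecated since plugin parent POM 4.40: delete this property -->
  <java.level>8</java.level>
</properties>

The equivalent javaLevel argument, if present in the Jenkinsfile's buildPlugin() call, should be removed in the same way.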

Other flag days

When updating the plugin parent POM from a version older than 4.39, you may run into an error like the following:

[ERROR] Failed to execute goal org.jenkins-ci.tools:maven-hpi-plugin:3.38:hpi (default-hpi) on project azure-credentials: Missing target/classes/index.jelly. Delete any <description> from pom.xml and create src/main/resources/index.jelly

This was a flag day introduced in 4.39. See the release notes for more information.

Similarly, be on the lookout for warnings like these:

[WARNING] <connection>scm:git:git://github.com/${gitHubRepo}.git</connection> is invalid because git:// URLs are deprecated. Replace it with <connection>scm:git:https://github.com/${gitHubRepo}.git</connection>. In the future this warning will be changed to an error and will break the build.

Now is a good time to address them as suggested, though doing so is not mandatory.
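
As a non-authoritative sketch, the corrected <scm> section in pom.xml could look like the following, assuming the archetype's ${gitHubRepo} property is in use (adjust to your plugin's actual repository):

<scm>
  <!-- https:// replaces the deprecated git:// protocol -->
  <connection>scm:git:https://github.com/${gitHubRepo}.git</connection>
  <developerConnection>scm:git:git@github.com:${gitHubRepo}.git</developerConnection>
  <url>https://github.com/${gitHubRepo}</url>
</scm>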

Is there an example I can follow?

Yes! Consult jenkinsci/text-finder-plugin#138 for an example.

What happens if I fail to take action?

Nothing will happen in the immediate future if you do not cross this flag day. You can still build and release plugins with Java 8 and their current baseline. In the long term, however, an out-of-date plugin build toolchain creates the risk of plugin compatibility testing (PCT) failures and negatively impacts the Jenkins core development team.

If you neglect to update the baseline to 2.361 or newer, you will receive the following error:

This version of maven-hpi-plugin requires Jenkins 2.361 or later.

If you neglect to update the Jenkinsfile to remove any Java 8 configurations (or try to build locally with Java 8), you will receive a low-level class version error:

[ERROR] Failed to execute goal org.jenkins-ci.tools:maven-hpi-plugin:3.38:validate (default-validate) on project text-finder: Execution default-validate of goal org.jenkins-ci.tools:maven-hpi-plugin:3.38:validate failed: Unable to load the mojo validate in the plugin org.jenkins-ci.tools:maven-hpi-plugin:3.38 due to an API incompatibility: org.codehaus.plexus.component.repository.exception.ComponentLookupException: org/jenkinsci/maven/plugins/hpi/ValidateMojo has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
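
If you are unsure which JDK your local Maven build is actually using, a quick sanity check (assuming Maven is on your PATH; the JAVA_HOME path below is only an example) is:

# Prints the Maven version along with the Java version and runtime it runs on
mvn -version
# If it reports Java 8, point JAVA_HOME at a Java 11 or 17 installation before building, e.g.:
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64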

Whom should I contact for help?

If you have doubts or if the information in this post does not work for you, do not hesitate to discuss the matter on the developer mailing list.

What future work is planned?

We recognize that maintaining plugin builds can be onerous for many, especially when crossing flag days like this. Like linkers and loaders, Jenkins plugin build maintenance is a sub-specialty within a sub-specialty. In the long term, we aspire and hope to automate much of this build maintenance to allow the community to focus its attention on the delivery of features and bug fixes. In the meantime, we appreciate the community’s patience and support as we pass through these periods of transition.

Create a new Jenkins node, and run your Jenkins agent as a service


In this tutorial, we will review how to start a Jenkins agent as a Linux service with systemd. When I use Docker for my agents, passing the correct options on the command line makes the agents restart automatically. Sometimes, however, such as when you want to use the famous Dockerfile: true option, you need to start the agent manually with a java command rather than with Docker (for various security reasons). You then have to restart it manually whenever the machine reboots, or whenever you forget to start it in the background with nohup and then close the terminal.

Pre-requisites

Let’s say we’re starting with a fresh Ubuntu 22.04 Linux installation. To get an agent working, we’ll need to do some preparation. Java is necessary for this process, and Docker allows us to use Docker for our agents instead of installing everything directly on the machine.

Java

Currently, openjdk 11 is recommended, and openjdk 17 is supported. Let’s go with openjdk 17:

sudo apt-get update
sudo apt install -y --no-install-recommends openjdk-17-jdk-headless

Let’s now verify if java works for us:

java -version
openjdk version "17.0.3" 2022-04-19
OpenJDK Runtime Environment (build 17.0.3+7-Ubuntu-0ubuntu0.22.04.1)
OpenJDK 64-Bit Server VM (build 17.0.3+7-Ubuntu-0ubuntu0.22.04.1, mixed mode, sharing)

Jenkins user

While creating an agent, be sure to separate rights, permissions, and ownership for users. Let’s create a user for Jenkins:

sudo adduser --group --home /home/jenkins --shell /bin/bash jenkins

Docker

Now, to get a recent version of Docker, we should install the docker-ce package and a few others from Docker's own repository. First, let's install the dependencies needed to add that repository:

sudo apt-get install ca-certificates curl gnupg lsb-release

In my case, these packages were already installed and up to date. The next step is to add Docker’s official GPG key:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

Then, we can set up the repo:

echo\"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

The last thing to do is to update the list of available packages, and then install the latest version of Docker:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

If you’re - like me - running a recent version of Ubuntu or Debian, you won’t need to create the docker group, because it has been created with the installation of Docker. On the contrary, you can then issue a sudo groupadd docker command to create the docker group.

Now, let’s add our current user to the docker group:

sudo usermod -aG docker $USER

And if you’re not using the default user, but jenkins, you can do the same:

sudo usermod -aG docker jenkins
sudo usermod -aG sudo jenkins

Now log out and log back in so that your group membership is re-evaluated. If you still get an error, just reboot the machine; this sometimes happens. ¯\_(ツ)_/¯
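
If you prefer not to log out, one common alternative (a sketch assuming a standard interactive shell) is to start a subshell that picks up the new group membership:

# Re-evaluate docker group membership in the current terminal without logging out
newgrp docker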

Mandatory "Hello World!" Docker installation test:

docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:53f1bbee2f52c39e41682ee1d388285290c5c8a76cc92b42687eecf38e0af3f0
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Nice!

Create a new node in Jenkins

Quoting the official documentation,

Nodes are the "machines" on which build agents run.

and also:

Agents manage the task execution on behalf of the Jenkins controller by using executors. An agent is actually a small (170KB single jar) Java client process that connects to a Jenkins controller and is assumed to be unreliable. An agent can use any operating system that supports Java. Tools required for builds and tests are installed on the node where the agent runs; they can be installed directly or in a container (Docker or Kubernetes).

To conclude:

In practice, nodes and agents are essentially the same but it is good to remember that they are conceptually distinct.

We will now create a new node in Jenkins, using our Ubuntu machine as the node, and then launch an agent on this node.

Node creation in the UI

  • Go to your Jenkins dashboard

  • Go to Manage Jenkins option in the main menu

  • Go to Manage Nodes and clouds item

Jenkins UI

  • Go to New Node option in the side menu

  • Fill in the Node name (My New Ubuntu 22.04 Node with Java and Docker installed for me) and type (Permanent Agent for me)

Jenkins UI

  • Click on the Create button

  • In the Description field, you can optionally enter a human-readable description of the node (My New Ubuntu 22.04 Node with Java and Docker installed for me).

  • Leave 1 as the number of executors for the time being. A good value to start with would be the number of CPU cores on the machine (unfortunately for me, it’s 1).

  • As Remote root directory, enter the directory where you want to install the agent (/home/jenkins for me).

An agent should have a directory dedicated to Jenkins. It is best to use an absolute path, such as /var/jenkins or c:\jenkins. This should be a path local to the agent machine. There is no need for this path to be visible from the controller.

  • In the Labels field, enter the labels you want to assign to the node (ubuntu linux docker jdk17 for me, which makes four labels). Labels help you group multiple agents into one logical group.

  • For Usage, choose Use this node as much as possible for the time being; you will be able to restrict the kinds of jobs that can run on this node later on.

  • The last thing to set up now: choose Launch agent by connecting it to the controller. This means that you will have to launch the agent on the node itself, and that the agent will then connect to the controller. That’s pretty handy when you want to build Docker images, or when your process will use Docker images… You could also have the controller launch an agent directly via Docker remotely, but then you would have to use Docker in Docker, which is complicated and insecure.

Node configuration

The Save button will create the node within Jenkins, and lead you to the Manage nodes and clouds page. Your new node will appear brown in the list, and you can click on it to see its details. The details page displays your java command line to start the agent.

Jenkins UI

The command looks like this for me:

curl -sO http://my_ip:8080/jnlpJars/agent.jar
java -jar agent.jar -jnlpUrl http://my_ip:8080/computer/My%20New%20Ubuntu%2022%2E04%20Node%20with%20Java%20and%20Docker%20installed/jenkins-agent.jnlp -secret my_secret -workDir "/home/jenkins"

Terminal

You can now go back into Jenkins’ UI, select the Back to List menu item on the left side of the screen, and see that your new agent is running.

Jenkins UI

After this is running, there are a few more actions that need to be completed. Whenever you close the terminal you launched the agent with, the agent will stop. If you ever have to reboot the machine after a kernel update, you will have to restart the agent manually too. Therefore, you should keep the agent running by declaring it as a service.

Run your Jenkins agent as a service

Create a directory called jenkins or jenkins-service in your home directory or anywhere else where you have access, for example /usr/local/jenkins-service. If the new directory does not belong to the current user home, give it the right owner and group after creation. For me, it would look like the following:

sudo mkdir-p /usr/local/jenkins-service
sudo chown jenkins /usr/local/jenkins-service

Move the agent.jar file that you downloaded earlier with the curl command to this directory.

mv agent.jar /usr/local/jenkins-service

Now (in /usr/local/jenkins-service) create a start-agent.sh file with the Jenkins java command we’ve seen earlier as the file’s content.

#!/bin/bash
cd /usr/local/jenkins-service
# Just in case we would have upgraded the controller, we need to make sure that the agent is using the latest version of the agent.jar
curl -sO http://my_ip:8080/jnlpJars/agent.jar
java -jar agent.jar -jnlpUrl http://my_ip:8080/computer/My%20New%20Ubuntu%2022%2E04%20Node%20with%20Java%20and%20Docker%20installed/jenkins-agent.jnlp -secret my_secret -workDir "/home/jenkins"
exit 0

Make the script executable by executing chmod +x start-agent.sh in the directory.

Now create a /etc/systemd/system/jenkins-agent.service file with the following content:

[Unit]
Description=Jenkins Agent

[Service]
User=jenkins
WorkingDirectory=/home/jenkins
ExecStart=/bin/bash /usr/local/jenkins-service/start-agent.sh
Restart=always

[Install]
WantedBy=multi-user.target

We still have to enable the daemon with the following command:

sudo systemctl enable jenkins-agent.service

Let’s have a look at the system logs before starting the daemon:

journalctl -f &

Now start the daemon with the following command.

sudo systemctl start jenkins-agent.service

We can see some interesting logs in the journalctl output:

Aug 03 19:37:27 ubuntu-machine systemd[1]: Started Jenkins Agent.
Aug 03 19:37:27 ubuntu-machine sudo[8821]: pam_unix(sudo:session): session closed for user root
Aug 03 19:37:28 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:28 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
Aug 03 19:37:28 ubuntu-machine bash[8826]: INFO: Using /home/jenkins/remoting as a remoting work directory
Aug 03 19:37:28 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:28 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
Aug 03 19:37:28 ubuntu-machine bash[8826]: INFO: Both error and output logs will be printed to /home/jenkins/remoting
Aug 03 19:37:28 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:28 PM hudson.remoting.jnlp.Main createEngine
Aug 03 19:37:28 ubuntu-machine bash[8826]: INFO: Setting up agent: My New Ubuntu 22.04 Node with Java and Docker installed
Aug 03 19:37:28 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:28 PM hudson.remoting.Engine startEngine
Aug 03 19:37:28 ubuntu-machine bash[8826]: INFO: Using Remoting version: 3046.v38db_38a_b_7a_86
Aug 03 19:37:28 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:28 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
Aug 03 19:37:28 ubuntu-machine bash[8826]: INFO: Using /home/jenkins/remoting as a remoting work directory
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Locating server among [http://controller_ip:58080/]
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Agent discovery successful
Aug 03 19:37:29 ubuntu-machine bash[8826]:   Agent address: controller_ip
Aug 03 19:37:29 ubuntu-machine bash[8826]:   Agent port:    50000
Aug 03 19:37:29 ubuntu-machine bash[8826]:   Identity:      31:c4:f9:31:46:c3:eb:72:64:a3:c7:d6:c7:ea:32:2f
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Handshaking
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Connecting to controller_ip:50000
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Trying protocol: JNLP4-connect
Aug 03 19:37:29 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:29 PM org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader run
Aug 03 19:37:29 ubuntu-machine bash[8826]: INFO: Waiting for ProtocolStack to start.
Aug 03 19:37:30 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:30 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:30 ubuntu-machine bash[8826]: INFO: Remote identity confirmed: 31:c4:f9:31:46:c3:eb:72:64:a3:c7:d6:c7:ea:32:2f
Aug 03 19:37:30 ubuntu-machine bash[8826]: Aug 03, 2022 7:37:30 PM hudson.remoting.jnlp.Main$CuiListener status
Aug 03 19:37:30 ubuntu-machine bash[8826]: INFO: Connected

We can now check the status with the command below, and the output should be similar to what is shown here.

sudo systemctl status jenkins-agent.service
● jenkins-agent.service - Jenkins Agent
     Loaded: loaded (/etc/systemd/system/jenkins-agent.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-08-03 19:37:27 UTC; 4min 0s ago
   Main PID: 8825 (bash)
      Tasks: 22 (limit: 1080)
     Memory: 63.1M
        CPU: 9.502s
     CGroup: /system.slice/jenkins-agent.service
             ├─8825 /bin/bash /usr/local/jenkins-service/start-agent.sh
             └─8826 java -jar agent.jar -jnlpUrl http://controller_ip:8080/computer/My%20New%20Ubuntu%2022%2E04%20Node%20with%20Java%20and%20Docker%20installed/jenkins-agent.jnlp -secret my_secret>

Just for fun, we can now reboot the machine and see in the UI if the agent is still running once the boot is finished!
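
A minimal way to check this from the terminal once the machine is back up (assuming the service file created above) is:

sudo reboot
# ...after the machine has rebooted and you have logged back in:
sudo systemctl status jenkins-agent.service
# It should report "active (running)" again without any manual intervention.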

Jenkins 2022 Recap


Jenkins 2022 recap Newsletter

2022 was a fruitful year for Jenkins! Across the Jenkins project, we experienced growth and strong contributions. We want to share deep gratitude to the corporate sponsorships and individual contributions that made it possible to take Jenkins to the next level. We also look forward to welcoming new friends to collaborate with us in working together to make Jenkins even better. There’s still a lot to do within the project, and in 2023, we plan to integrate many more improvements, while sustaining a diversified, inclusive, and welcoming community.

Happy reading!

Got Inspiration? We would love to highlight your cool Jenkins innovations. Share your story, and you could be in the next Jenkins newsletter.

Highlights

Governance Update

Contributed by: Mark Waite

The Jenkins governance board thanks Ewelina Wilkosz and Gavin Mogan for their two years of service as board members. We’re so glad to have talented people serving on the Jenkins governance board.

December 2022 brought two new members to the Jenkins governance board.

We welcome Dr. Ullrich Hafner of the Department of Computer Science and Mathematics at the Munich University of Applied Sciences as a new member of the board. In his role as professor, he tries to win new Jenkins contributors by letting students develop new features and test cases in their student projects and theses. He has been an active contributor in the Jenkins project since 2007, mostly in the acceptance test harness and the static code analysis suite (which is now replaced by the Warnings Next Generation Plugin).


We also welcome Alexander Brandes as a new member of the board. Alexander is a Jenkins Core maintainer and the maintainer of the Job Configuration History and Ionicons API plugins.

He is a release team member and actively involved in Jenkins Long Term Support releases and weekly releases. He served as release lead for five of the twelve Jenkins LTS releases in 2022.

In addition to these two new governance board members, we welcome the new documentation officer, Kevin Martens. Kevin leads Jenkins documentation office hours, reviews and merges documentation pull requests, and is actively involved in Jenkins Special Interest Groups.


We’re grateful for the Jenkins officers that are continuing their service for another year, including:

Security Update

Contributed by: Wadeck Follonier

The Jenkins security team was incredibly active this year, not only in resolving security concerns, but also providing insight to users through security advisories. The information below provides overall statistics from the Security team in 2022.

Publication

  • 16 Security advisories (stable year over year).

    • Only 5 impacting Jenkins core (also stable).

  • Around 400 CVEs published, which was more than all previous years.

Day to day:

  • Approximately 580 SECURITY tickets handled.

    • Which is ~20% of the total number of tickets since 2009.

  • Around 70 hosting requests proactively audited (introduced in Q2 of 2022).

  • Around 80 UI related PRs audited in Jenkins core (introduced in Q3 of 2022).

We had to deal with even more CVEs with fancy names like Spring4Shell. We analyzed and understood them to confirm that they don’t affect Jenkins.

Delivery

  • General availability for the Jenkins custom rules in CodeQL (message).

  • Improved tooling for the SECURITY tickets handling.

Infrastructure Update

Contributed by: Damien Duportal

2022 was an eventful year for the Jenkins Infrastructure team as well, leading to various updates and improvements.

  • ci.jenkins.io now has:

    • General availability for Windows 2022 server use.

    • JDK19 availability for developers, providing new functionalities and edge testing options.

    • Kubernetes has been upgraded to version 1.23 to support Azure, AWS, and DigitalOcean.

  • JFrog sponsored the migration of repo.jenkins-ci.org to their new AWS platform, which provides improved performance for artifact downloads.

  • Download mirrors (get.jenkins.io):

    • A new download mirror for Jenkins was added in Asia. We want to thank Servana for providing the mirror!

    • The mirror mirror.gruenehoelle.nl in the Netherlands, which had been available previously, has been decommissioned. Thank you for the service!

  • The Infrastructure team was also able to review and clean up unused Azure resources, leading to $1,000 of monthly savings!

Platform Modernization Update

Contributed by: Bruno Verachten

Several upgrades were made to modernize the Jenkins platform. These include:

  • Java 11 is now required for Jenkins platform and plugin development.

    • Build toolchain changes arrived in parent pom 4.52.

    • Java 11 provides a better baseline to work from, ensuring that the benefits such as performance and memory improvements are felt across the platform.

    • There are now more Java 11 installations of Jenkins core than Java 8 installations!


  • Jenkins now fully supports Java 17.

    • Previously, Java 17 was only available in a preview mode, but with the LTS release of 2.361.1, Java 17 functionality is fully available in Jenkins.

  • Migration of Linux installation packages from System V init to systemd.

    • Users have requested this migration since 2017. The goals of the migration were achieved: a unified service management implementation and better integration between Jenkins core and the service management framework.

    • Thanks to Basil Crow for his work on the migration.

  • Staying on top of new backend and frontend dependency updates providing better testing, processing, and performance across Jenkins.

  • Container image updates:

    • Added new platform support, such as arm/v7 and aarch64.

    • Removed support for ppc64le.

    • Released the final, definitive version of the JDK8 containers.

    • Removed the deprecated install-plugins.sh script from Docker images.

    • There were also "Exit" and "Restart"lifecycle changes added to the Docker images. As a result, users must ensure they have a Container Restart Policy in their container.

  • The ANTLR 2 grammars and code in Jenkins core were upgraded to ANTLR 4, modernizing how the core parses expressions such as labels and cron syntax. This also allows Jenkins core to compile on newer JDKs!

    • Thanks to Alex Earl and Basil Crow for all of their hard work on completing this transition!

    • This was included in the 2.376 Jenkins weekly release.

  • Platform documentation

    • A short guide about web containers and servlet container support was created.

  • Jenkins releases are now guided by release leads thanks to our release officer, Tim Jacomb. 2022 release leads have included:

  • Platform Work In Progress:

    • For further development, experiments with RISC-V agents with JDK17/19/20 need to be performed.

    • Additional experiments with the Windows 2022 server need to be performed as well.
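
As a minimal sketch of such a container restart policy (only an example; adjust the image tag, ports, and volume to your setup), a controller run with the official jenkins/jenkins image could be started like this:

docker run -d --name jenkins \
  --restart on-failure \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts-jdk11

The --restart on-failure policy tells the Docker engine to bring the container back up whenever the Jenkins process exits with an error.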

Localization simplification Update

CrowdIn for plugin localization

Thanks to Alexander Brandes for helping get CrowdIn connected with Jenkins. This will make the plugin localization process easier, allowing for any user to contribute to localizing plugin documentation. This page shows the plugins that have localization work currently open. It also provides some insight as to how many changes have been made and how many people have been contributing to the project.

Jenkins Crowdin

UTF-8 encoding

The Jenkins project also updated how it reads Jelly files, making the transition to UTF-8. This was possible once the transition to Java 11 was completed. By using UTF-8, developers and users can build more reliably and have modern property files read correctly. It also makes Jenkins consistent in how it reads different types of property files, provided the encoding is the same.

User Experience Update

Contributed by: Mark Waite

Jenkins LTS and weekly releases in 2022 included significant user experience improvements thanks to the work of Jan Faracik, Tim Jacomb, Alex Brandes, Daniel Beck, and many others. Table layouts, menu entries, icons, themes, breadcrumbs, and more were updated to give Jenkins a new, fresh look in 2022.

jenkins modern look

Themes and icons

Jenkins now has much broader support for themes. The dark theme is now installed on over 6,000 Jenkins controllers worldwide. The material theme is also available.

The transition to scalable vector graphics (SVG) icons improved the appearance of Jenkins icons. The SVG icons are specifically selected to work well across a wide range of resolutions and across multiple themes.

The menus of configuration forms moved from the top of each configuration page to the side panel. The side panel locations are more familiar for users and make better use of screen space that was previously empty.

New look

jenkins modern look 2

The improvements to look and feel have made Jenkins more comfortable for users and easier to navigate.

What’s next?

Tim Jacomb and Jan Faracik shared their ideas for further improvements to the Jenkins UI. Watch their DevOps World 2022 talk, "Transformation of the Jenkins User Interface and Where It’s Going Next" (registration required to view the video).

Jenkins io improvements Update

Contributed by: Kevin Martens

This year, the Jenkins project saw documentation contributions from new and seasoned Jenkins users. These contributions included blog posts, documentation additions and updates, documentation migration, and other improvements. All of this has helped expand and empower the Jenkins community.

Over the year, the Jenkins project saw 48 blog posts, submitted by 23 different authors. There were 814 contributions throughout 2022. These contributions are a result of the community and collaboration with various projects throughout the year, such as She Code Africa Contributhon, Google Summer of Code, and Hacktoberfest. Our deepest gratitude and appreciation go out to all Jenkins contributors and the open-source community beyond.

Pipeline Steps Reference

Thanks to the work of Vihaan Thora, contributing via Google Summer of Code, the Pipeline Steps reference page provides simplified search for Pipeline steps. The reference page is invaluable for developers when working in Jenkins and utilizing plugins. The updates include search functionality, UI improvements, and faster page loading.


The Jenkins documentation site search has been updated to use the latest version of Algolia. We recognize and thank Gavin Mogan for his work on site search and on the plugins site. We thank Algolia for donating the search functionality. The site search now provides more relevant results and suggestions for users. A visual update was included as part of the upgrade, resulting in the new look and feel.


Outreach and advocacy Update

Contributed by: Alyssa Tong

In 2022, the Jenkins project was able to collaborate on and complete several projects. This included launching two new sites for community engagement and involvement:

  • community.jenkins.io now provides a space for community discourse and communication.

  • stories.jenkins.io is a site dedicated to sharing the experiences and stories of Jenkins users, developers, and contributors that Jenkins has impacted.

Throughout the year, the Jenkins project participated in:

We collaborated with new Jenkins users all over the globe, improved many areas of Jenkins, and celebrated the successes of the community!

The Jenkins project is also excited to share what’s to come in 2023:

Finally, we want to thank our partners and sponsors over the year, as so much of this is possible with the help of their contributions.

"On your marks, get set, ..."


In a couple of days, the Jenkins GSoC 2023 preparation will culminate with our official Organization Application. It is now time to shed some light on what is coming next.

Jenkins GSoC

Since December 2022, we have actively prepared the Jenkins project for the 2023 edition of Google Summer of Code (GSoC). This entailed collecting and discussing project ideas, setting up a mentoring team, and giving advice on how to build the necessary "GSoC Jenkins muscle".

Our application will be submitted shortly, and on February 22, we will know if we are part of the adventure again.

Although we are waiting for Google’s decision, we will start the next GSoC phase without delay. In this phase, the candidates will prepare their detailed GSoC application with the help of the Jenkins Community. The proposals are based on the 2023 project ideas. The phase will culminate with the selection of the students/contributors participating in this year’s GSoC. The deadline for the application is April 4.

Please carefully review the guidelines for a successful application on the Information for GSoC Contributors page.

In a nutshell, select a project idea and begin thinking about how you will build a proposal to convince the mentors that you are the best candidate to bring the project to a successful end.

We will help you through the currently available channels such as the Community forum and Gitter, and also with weekly online office hours. During these meetings you can ask for clarification, disambiguation, or additional detail on the project ideas or the GSoC program. These sessions will be recorded so that everyone can benefit from them.

Before setting up the weekly Office Hours, we will organize a poll to determine a time that suits the timezones of most of the candidates. Last year, we held sessions at 14h00 UTC for Europe and Asia, and 03h00 UTC for the US and the Pacific.

When the Jenkins participation in GSoC 2023 is confirmed, you can actively start working on your draft proposal. Make these drafts available to the Jenkins Community for review via the community forum. Previous years' experience shows that proposals not submitted to the public discussion and review process are not selected, because they end up being less convincing. This may be a new and uncomfortable way of approaching such candidatures for you, but it is the Open Source way. Further discussion and review is rewarding and gives you the best chance of being selected.

Collaborate as much as you can, as it is the best way to have a strong and successful proposal. Actively use the GSoC Community topic, the GSoC Gitter channel, or the Office Hour sessions.

Jenkins January 2023 Newsletter


Jenkins January Newsletter

Highlights

  • Jenkins GSoC planning is proceeding full steam ahead.

  • General availability of new development tools on ci.jenkins.io: Maven, JDK, Playwright.

  • 98 pull requests were merged from 38 different authors in January.

  • Jenkins 2.375.2 was released on January 11, 2023, and has received over 350 positive ratings.

  • A sandbox bypass vulnerability was corrected, among 37 other vulnerabilities. The security team recommends that users upgrade.

  • Debian 12 (“bookworm”) will not deliver OpenJDK 11

Outreach and advocacy Update

Contributed by: Alyssa Tong

Infrastructure Update

Contributed by: Damien Duportal

  • Bumped our controllers to LTS 2.375.2

  • General availability of new development tools on ci.jenkins.io:

    • Maven 3.8.7

    • JDK 8u362-b09

    • JDK 11.0.18+10

    • JDK 17.0.6+10

    • JDK 19.0.2+7

    • Playwright (headless web-browser testing)

  • Started to decrease download bandwidth from JFrog (repo.jenkins-ci.org) to our infrastructure by enabling artifact caching for plugin builds on ci.jenkins.io.

    • A lot of “under-the-hood” network and performance-related work.

  • Cleaned up deprecated plugins from all of our controllers (momentjs, jquery bundle and ace-editor bundle).

Jenkins io improvements Update

Contributed by: Kevin Martens

During the month of January, 98 pull requests were merged from 38 different authors. We are also preparing for the 2023 edition of Google Summer of Code, by encouraging folks to share ideas or join as mentors. We published one blog post, which was the 2022 recap and contained tons of highlights for the last year of the Jenkins project. Thanks to all of the new and returning contributors for their hard work already in the new year, and the continued efforts that are bound to happen.

Governance Update

Contributed by: Mark Waite

Jenkins 2.375.2 was released on January 11, 2023. It has received over 350 positive ratings, with only 1 issue reported that required a rollback for 4 users.

The nine most recent Jenkins weekly releases (2.381 - 2.389) have received positive ratings as well, with 40-50 positive ratings and only 1 reported rollback in each of those 9 weekly releases.

Thanks to everyone that has done such great work on Jenkins core and its recent releases.

Platform Modernization Update

Contributed by: Bruno Verachten

  • Docker images

    • New platforms:

      • UBI9 with JDK17, proposed and maintained by Oliver Gondza from Red Hat, who is also the maintainer of the UBI8 container image.

      • Zombie images fixed. Hurray!

      • Java updates were deployed to infra and container images (except for Windows containers):

        • 8u362

        • 11.0.18

        • 17.0.6

        • 19.0.2

  • Debian 12 (“bookworm”) will not deliver OpenJDK 11.

    • The end-of-life date for Debian’s OpenJDK 11 is not until 2026 or 2027.

    • There is no urgency to drop JDK 11 support in Jenkins.

    • We will nonetheless update the documentation when Debian 12 is released, so that it describes running Jenkins with OpenJDK 17.

  • JDK Support for Jenkins

    • The Jenkins Enhancement Proposal requiring JDK 11 is now final. Congratulations!

User Experience Update

Contributed by: Mark Waite

The Jenkins user experience continues to improve. Recent weekly releases have included hiding of potentially sensitive values in system properties and in environment variables, navigation improvements with more breadcrumb entries, changes to the plugin manager, and internal updates to various libraries.

Security Update

Contributed by: Wadeck Follonier

A sandbox bypass vulnerability was corrected and announced in the January security advisory, among 37 other vulnerabilities. The security team recommends that users upgrade.

The security team continues to improve the tooling and automation to increase the security of the project. We are pleased to have added support for plugin developers to suppress findings coming from our custom CodeQL rules. See the message in the mailing list.


Thoughts on FOSDEM 2023


What better way to kick off the new year, than by returning for an in-person event at one of the most popular open source gatherings of all, FOSDEM! On February 4 & 5, contributors from the Jenkins project and thousands of other open-source enthusiasts traveled from around the world to flock to FOSDEM. Located in Brussels, Belgium, FOSDEM is an opportunity to renew relationships, forge new bonds, and share the incredible work that is being advanced by the open-source ecosystem.

Brussels at night.

Shops in brussels.

There was a heightened sense of excitement and nostalgia surrounding FOSDEM, and we can clearly see why:

The Jenkins booth at FOSDEM 2023.

The Jenkins crowd at FOSDEM 2023.

Excited contributors at the Jenkins booth.

Jenkins participation banner at FOSDEM.

Jenkins stickers!

A self-contained Jenkins agent.

Self-contained Jenkins agent.

Full crowd shot of FOSDEM and Jenkins.

We asked our Jenkins contributors for their thoughts as they returned to FOSDEM, and this is what they had to say:


What an amazing experience! I met people I’ve interacted with for the first time in various open-source communities, and we decided on partnerships between our communities. One Oreboot member soldered an SPI chip on my RISC-V Jenkins agent (in a corridor, using a chair as a workbench) to free it from U-Boot.

There are two things I’d like to point out:

  • People love Jenkins, lots of them came to the booth to testify.

  • Open source is not just a GitHub punchcard, it’s way more about sharing knowledge and empowering people.

— Bruno Verachten


What I will retain from FOSDEM is the diversity of the stands and the public, and the impeccable organization of FOSDEM from a stand organizer’s point of view. Being able to meet in real life the people with whom we discuss and work every day on Jenkins (Oleg, Alexander, …) is a real pleasure. Hearing testimonials from Jenkins users about their love of Jenkins and the particular uses they have for it has also done us a lot of good.
— Stéphane Merle


I had a fantastic time at FOSDEM this year. I was happy to meet people from the Jenkins community, some of whom I had only interacted with online before. This was my first FOSDEM, and I was blown away by the number of people who were interested in Jenkins and wanted to learn more about it. I was able to hear about different stories and use cases of Jenkins, which really helped to broaden my understanding of the platform and how it is being used in the real world.
— Alexander Brandes


It was with great pleasure that I could attend this incredible event. Meeting contributors and members of the community in person was such a change after these years hiding from the pandemic. I particularly enjoyed the great conversations on so many subjects such as the Jenkins day to day experience, where the project is heading (or should head to), and particularly, my personal pet interests: GSoC or how to start contributing. Even after attending this conference since 2009, my amazement never fades for this incredible explosion of ideas, enthusiasm, diversity, dedication, and generosity for the Open Source movement.
— Jean-Marc Meessen

Many thanks to the FOSDEM organizers for their hard work and dedication to make this event possible for so many open-source communities. We can’t wait to do this again in 2024!

Brussels love for FOSDEM and Jenkins.

Jenkins Contributor Awards - Nominations Open


CDF Community Awards 2023

The Jenkins project is a part of the Continuous Delivery Foundation, which gives out awards to recognize all the work from this community and the progress made in the name of Continuous Delivery.

View the award guidelines, definitions, and previous winners: here.

2023 Award Categories

CDF Awards

Jenkins Awards

Tekton Awards

Last year, Tekton graduated, meaning they now get three awards!

  • Most Valuable Tekton Advocate

  • Most Valuable Tekton Contributor

  • Tekton Security MVP

Project Awards

Note: Each project can decide how to elect their Most Valuable Contributor.

Nomination Process

The nomination process takes place on GitHub to make the process open to all. If you don’t have a GitHub account, you can send us the nomination by email, and we will post it there on your behalf.

Awards Timeline 2023

  • Nominations close: Friday, March 3

  • Voting opens: Wednesday, March 8

  • Voting closes: Tuesday, March 28

  • Winners announced at cdCon + GitOpsCon: May 8 – 9, 2023

2022 Winners

Congratulations to the 2022 CD Foundation Community Award Winners announced at cdCon 2022 in Austin, Texas. Watch the ceremony on YouTube.

Google Summer of Code 2023… Here We Come!


Jenkins GSoC

We are thrilled to announce that Jenkins has been accepted to Google Summer of Code 2023! This will be Jenkins' seventh year as a mentoring organization.

As a mentoring organization for the past six years, Jenkins has guided 31 GSoC students with the help of 85+ different mentors, bringing together over 107 strangers around a common idea - Jenkins!

At the heart of it, GSoC is more than just a mentoring program. The intention is to welcome and engage with new contributors in open source. It is about giving a little of your day to make a lifetime of difference, not only for the GSoC contributors, but also for the many Jenkins users who will benefit from the improvements.

We are excited to welcome new GSoC contributors to the Jenkins family. We think you will enjoy this valuable experience while developing your technical skills, and you will gain insights into how the community works. The best part will be learning from people who are passionate about Jenkins and, more importantly, passionate about making a difference for another individual (the GSoC contributors) and for the betterment of the project as a whole.

For detailed information on Jenkins GSoC 2022, see the completed projects.

Here we come

What’s next?

GSoC has officially been announced, so expect more potential GSoC contributors to reach out to projects in our Gitter and Discourse channels. Many communications will also happen in SIG and sub-project channels. We will be working hard to help potential participants find interesting projects, explore the relevant domain(s), and prepare their project proposals before the deadline on April 4th (UTC). Then we will process the applications, select projects, and assign mentor teams.

All information about the Jenkins GSoC is available on its sub-project page.

How do I apply?

Refer to the Information for students page for full application guidelines.

We encourage interested participants to reach out to the Jenkins community early and to start exploring project ideas. We also encourage participants to join the weekly Jenkins GSoC office hours. These meetings are set up for participants to meet org admins and mentors, and to ask questions. Also, join our Gitter channel and our Discourse server to receive information about such incoming events in the project.

The application period starts on March 20th (UTC), but you should prepare now! Use the time before the application period to discuss and improve your project proposals. We also recommend that you become familiar with Jenkins and start exploring your proposal areas. Project ideas include quick-start guidelines and reference newbie-friendly issues, which may help with your initial study. If you do not see anything interesting, you can propose your own project idea.

I want to be a mentor. Is it too late?

It’s not! We are looking for more project ideas and for Jenkins contributors or users who are passionate about Jenkins and want to be a mentor. No hardcore experience is required, as mentors can study the project internals together with GSoC contributors and technical advisors.

You can either propose a new project idea or join an existing one. See the Call for mentors post and Information for mentors for details. If you want to propose a new project, please do so as soon as possible so that potential GSoC contributors have time to explore them and prepare their proposals.

This year, mentorship does NOT require strong expertise in Jenkins development. The objective is to guide participants and help them get involved in the Jenkins community. GSoC org admins will help to find advisors if special expertise is required.

Important dates for GSoC 2023

  • February 22 - March 19 - Potential GSoC contributors discuss application ideas with mentoring organizations. Show us your proposal!

  • March 20 - The GSoC contributor application period begins.

  • April 4 - GSoC contributor application deadline

  • May 4 - Accepted GSoC contributor projects are announced.

  • May 4 - May 28 - Community Bonding Period | GSoC contributors get to know mentors, read documentation, and get up to speed to begin working on their projects.

  • May 29 - Coding officially begins!

  • July 14 - Midterm evaluation

  • August 21 - August 28 - Final week: GSoC contributors submit their final work product.

  • September 5 - Initial results of Google Summer of Code 2023 are published.

  • November 6 - Final date for all GSoC contributors to submit their final work product.

Refer to the GSoC Timeline for more info.

How to build an unsigned Jenkins MSI on your Windows machine


Should you ever need to rebuild a Jenkins MSI on your Windows machine, here is a way to do it.

Pre-requisites

Jenkins WAR file

First, you should download the Jenkins war file that will be inside the MSI file. You can access it from the official Jenkins website or from the Jenkins update center.

Check the Jenkins download page to download the latest weekly version of Jenkins, for example. You can always use the direct link to get the latest weekly version, but then you won’t necessarily know which version number you are using. Just saying.

Git

There are quite a few ways to install Git on Windows, but the most straightforward way is to see what the official Git website recommends.
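If you already use winget, the package manager route is probably the simplest; the following one-liner should work, assuming the usual package ID from the winget community repository:

winget install --id Git.Git -e --source winget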

Install MSBuild

You can install MSBuild from Visual Studio or from the Build Tools for Visual Studio.

This command line tool is used to build the MSI file.
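Here again, winget can save a trip to the download page; a possible invocation, assuming the usual package ID for the Visual Studio 2022 Build Tools, is:

winget install --id Microsoft.VisualStudio.2022.BuildTools -e --source winget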

Install .NET Framework 3.5

You may already have it installed on your machine, but not activated. You can activate it from the Windows Features dialog box.

To access this dialog box, press ⊞ Win + R, enter the command appwiz.cpl, and press Enter. Then select

Turn Windows features on or off.

Tick the .NET Framework 3.5 entry and install it.

Now run Windows Update to check for security updates.

If the feature is not activated yet, enabling it from this Windows Features dialog is all it takes.
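If you prefer the command line, the same activation can usually be done with DISM from an elevated prompt (a sketch; Windows may need to download the feature payload from Windows Update):

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All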

Check if you have PowerShell

In recent versions of Windows, PowerShell is already installed and accessible through the terminal application. At the time of writing, the pre-installed version is 5.1.22621.963. You can also install the latest version from the Microsoft Store (7.3.2 at the time of writing).

You can also check the details of the latest PowerShell release published on GitHub by issuing the following command:

winget show "Microsoft.PowerShell" -s winget

This would give an output similar to:

Found PowerShell [Microsoft.PowerShell]
Version: 7.3.2.0
Publisher: Microsoft Corporation
Publisher Url: https://github.com/PowerShell/PowerShell/
Publisher Support Url: https://github.com/PowerShell/PowerShell/issues
Author: Microsoft Corporation
Moniker: pwsh
Description: PowerShell is a cross-platform (Windows, Linux, and macOS) automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (e.g. JSON, CSV, XML, etc.), REST APIs, and object models. It includes a command-line shell, an associated scripting language and a framework for processing cmdlets.
Homepage: https://microsoft.com/PowerShell
License: MIT
License Url: https://github.com/PowerShell/PowerShell/blob/master/LICENSE.txt
Copyright: Copyright (c) Microsoft Corporation
Copyright Url: https://github.com/PowerShell/PowerShell/blob/master/LICENSE.txt
Release Notes Url: https://github.com/PowerShell/PowerShell/releases/tag/v7.3.2
Tags:
  command-line
  cross-platform
  open-source
  powershell
  pwsh
  shell
Installer:
  Installer Type: wix
  Installer Url: https://github.com/PowerShell/PowerShell/releases/download/v7.3.2/PowerShell-7.3.2-win-x64.msi
  Installer SHA256: a4f7d081c5f74bc8d6c75f1dfee382b7fd9335361181748fee590ecdbc96cb26
  Release Date: 2023-01-24

You can see that the latest version is 7.3.2 and that the installer is a .msi file located on GitHub. Just follow the link provided with your browser and install PowerShell from this file once it has downloaded.

Build the MSI

Clone the Jenkins packaging repository

Choose your git tool and clone the Jenkins packaging repository on your machine.
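Assuming the repository lives at its usual GitHub location and that you want it under C:\jenkinsci\packaging, as used later in this post, that could be as simple as:

git clone https://github.com/jenkinsci/packaging.git C:\jenkinsci\packaging
cd C:\jenkinsci\packaging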

Prepare the build

Open a terminal window and go to the folder where you cloned the repository. For example C:\jenkinsci\packaging\. You now have to declare where you downloaded the Jenkins war file, so the build script can find it.

$env:War="$env:USERPROFILE\jenkins.war"

If you have previously moved it into your repository clone folder, you can use this command instead:

$env:War="C:\jenkinsci\packaging\msi\build\jenkins.war"

Build the MSI

Enter the subfolder msi\build and run the following command:

.\build.ps1

You should get an output similar to:

Extracting components
Jenkins Version = 2.392
Restoring packages before build
All packages listed in packages.config are already installed.
Building MSI
MSBuild version 17.4.0+18d5aef85 for .NET Framework
Build started 01/12/2022 20:53:30.
Project "C:\jenkinsci\packaging\msi\build\jenkins.wixproj" on node 1 (default targets).
SetConstants:
  EncodedVersion=2.255.3920
Compile:
  Skipping target "Compile" because all output files are up-to-date with respect to the input files.
AssignCultures:
  Culture: en-US
Link:
  C:\jenkinsci\packaging\msi\build\packages\WiX.3.11.1\build\..\tools\Light.exe -out C:\jenkinsci\packaging\msi\build\bin\Release\en-US\jenkins-2.392.msi -pdbout C:\jenkinsci\packaging\msi\build\bin\Release\en-US\jenkins-2.392.wixpdb -sw1076 -cultures:en-US -ext C:\Support\users\jenkinsci\packaging\packaging\msi\build\packages\WiX.3.11.1\build\..\tools\\WixUIExtension.dll -ext C:\jenkinsci\packaging\msi\build\packages\WiX.3.11.1\build\..\tools\\WixNetFxExtension.dll -ext C:\jenkinsci\packaging\msi\build\packages\WiX.3.11.1\build\..\tools\\WixUtilExtension.dll -ext .\msiext-1.5\WixExtensions\WixCommonUIExtension.dll -ext C:\jenkinsci\packaging\msi\build\packages\WiX.3.11.1\build\..\tools\\WixFirewallExtension.dll -fv -loc jenkins_en-US.wxl -spdb -contentsfile obj\Release\jenkins.wixproj.BindContentsFileListen-US.txt -outputsfile obj\Release\jenkins.wixproj.BindOutputsFileListen-US.txt -builtoutputsfile obj\Release\jenkins.wixproj.BindBuiltOutputsFileListen-US.txt -wixprojectfile C:\jenkinsci\packaging\msi\build\jenkins.wixproj obj\Release\jenkins.wixobj
  Windows Installer XML Toolset Linker version 3.11.1.2318
  Copyright (c) .NET Foundation and contributors. All rights reserved.
  jenkins -> C:\jenkinsci\packaging\msi\build\bin\Release\en-US\jenkins-2.392.msi
Done Building Project "C:\jenkinsci\packaging\msi\build\jenkins.wixproj" (default targets).

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:08.26

Locate the generated MSI file

The MSI file is located in the .\bin\Release\en-US\ folder. In this folder, you will find the generated MSI file and its sha256 file.

ls

    Directory: C:\jenkinsci\packaging\msi\build\bin\Release\en-US

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        01/12/2022    20:53      105107456 jenkins-2.392.msi
-a----        01/12/2022    20:53             84 jenkins-2.392.msi.sha256

miniJen is alive!


The Jenkins multi-architecture CPU instance

miniJen as a FOSDEM display on the booth

What is that contraption?

Nope, it’s not a robot of some sort, it won’t move by itself. It’s not Cerebro from Professor Xavier; no, it can’t fly either. What you’re looking at is a Jenkins instance. It is composed of a controller (the “brain” or conductor) and three agents (the workers or musicians if we continue a little further with the metaphor).

During FOSDEM, we displayed the aarch64 Jenkins controller dashboard on another computer screen connected to the same Wi-Fi network.

These boards are not microcontrollers, they are miniature computers running GNU/Linux, like the famous Raspberry Pi.

This Jenkins instance was featured in the Hackaday blog post about FOSDEM.

Hardware

The controller runs on a NanoPi R5S, sold as a router (hence the three RJ45 connectors).

NanoPi R5S pic from the manufacturer

It’s a 4-core aarch64 (armv8) board with 4GB of RAM, running friendlyCore, a distribution from the manufacturer (friendlyElec), on a 5.10.x kernel.

The smallest board is a 4-core arm32 agent with 512MB of RAM, also running Armbian with a 5.10.x kernel.

NanoPi Duo2 pic from the manufacturer

It’s also a board coming from the friendlyElec manufacturer, the NanoPi Duo2.

The pink board next to the arm32 board is a RISC-V board running Armbian with just 1 core, 1GB of RAM and a 6.1.x kernel.

MangoPi MQ-Pro pic from the manufacturer

It’s a MangoPi MQ-Pro from MangoPi, one of the first RISC-V boards available.

The latest board just next to the RISC-V board with a slightly different shade of pink is an aarch64 board also from MangoPi.

MangoPi MQ-Quad pic from a taobao store

It is a 4-core agent with 1GB of RAM running a fork of Armbian with a 5.16.x kernel. It’s a MangoPi MQ-Quad.

Don’t try to fool me, there are no cables between the boards!

The boards all have Wi-Fi, and they are all connected to the same Wi-Fi network, provided by a router or my phone, depending on the location. You can spot their small Wi-Fi antennas hanging in the first pic, except for the router which has no integrated Wi-Fi (it uses a USB Wi-Fi dongle you can see in the pic). One day, the R5S controller will also be a router for miniJen, but for now, it’s just a Jenkins controller. How come the controller can contact and control the agents? We’re not using IP addresses, but hostnames ending in .local, thanks to the Avahi daemon.
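For example, once Avahi is running on the boards, the controller can reach an agent through its mDNS name instead of an IP address; the hostname below is made up for illustration:

# From the controller: reach an agent through its Avahi (.local) name rather than its IP
ping -c 1 nanopi-duo2.local
ssh jenkins@nanopi-duo2.local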

What is that big box with cables?

PinePower powering very abstemious boards, picture courtesy of HackaDay author Arya Voronova (https://hackaday.com/author/aryavoronova/)

These boards are powered by a Pine64 PinePower supply. Most of the time, you can see they don’t draw much current.

3D printed parts

The 3D design on the 2nd of February 2023

The frame looks strange, I know. I wanted to use a torus because it’s a cool-looking shape, and tentacles because they’re even more cool-looking than a torus.
It has been designed with openSCAD, an open-source computer-aided design tool and language (yes, there is such a thing as 3D design as code), and printed at home on a printer running the open-source firmware Marlin.

Should you want to replicate this at home, you can find the source code on my GitHub.

Genesis and near future

I have made a few live streams during the build of miniJen, and should do some more for the upcoming modifications. I also have a few videos on the same channel about Jenkins and other boards, so don’t hesitate to have a look.

Jenkins Contributor Awards - Voting Open


CDF Community Awards 2023

The 2023 Jenkins award nomination period has ended and voting is now open! Voting will take place until March 28, when the voting period closes, and the winners will be announced at this year’s cdCon.

Voting

To vote for the Jenkins awards, please use this Google form. Please be aware that this form is only for Jenkins awards, as other projects are hosting their own voting process.

Awards timeline 2023

  • Nominations close: Friday, March 3

  • Voting opens: Wednesday, March 8

  • Voting closes: Tuesday, March 28

  • Winners announced at cdCon + GitOpsCon: May 8 - 9, 2023

2022 Winners

You can view the award guidelines, definitions, and previous winners on the CD Foundation awards page.

Congratulations to the 2022 CD Foundation Community Award Winners announced at cdCon 2022 in Austin, Texas! Watch the ceremony on YouTube.

Jenkins February 2023 Newsletter



Highlights

  • FOSDEM 2023 insights

  • Jenkins is a mentor organization for Google Summer of Code

  • Several container image updates

  • Jenkins Awards voting is now open

Outreach and advocacy Update

Contributed by: Alyssa Tong

FOSDEM 2023

Returning to FOSDEM for the first in-person event since COVID was both exciting and nostalgic for our Jenkins contributors. It was exciting to see the same crowd size and enthusiasm by attendees. Many thanks to the wonderful FOSDEM organizers for yet another fantastic event!



Jenkins in Google Summer of Code (GSoC)

We are thrilled to have been accepted to Google Summer of Code 2023!! This will be Jenkins' eighth (8th) year participating in the program. Weekly GSoC office hours have begun as well, held every Thursday at 16:00 UTC. Refer to the Event Calendar for additional details. If you missed the initial meetings, recordings are available.

Join in on all GSoC discussions in our gitter channel.


Jenkins Awards

Award season is here! Nominations are closed but voting is now open. Congratulations to all the nominees and thank you for your contributions! Check out our blog post about the Jenkins awards.


Infrastructure Update

Contributed by: Damien Duportal

Following FOSDEM, where most of the infrastructure team was present physically, February was a busy month for the Jenkins Infrastructure team:

  • In an effort to reduce bandwidth with JFrog (repo.jenkins-ci.org), Jenkins continuous integration jobs are now using significantly less internet bandwidth thanks to the artifact caching proxy implemented by the team. The artifact caching proxy is implemented on our three cloud providers, so that artifacts can be downloaded from a local cache rather than accessing the artifact repository.

  • Jenkins LTS 2.375.3 is now used on all of our controllers, less than 3 days after its release.

  • We have removed all Jenkins deprecated plugins on all of our controllers such as jquery, momentjs, and ace-editor.

  • We upgraded all six of our Kubernetes clusters from the 1.23 to 1.24 baseline in the three cloud providers.

  • All of the private and internal web services are now using TLS with certificates provided by Let’s Encrypt, along with Azure DNS challenge.

  • We contributed to Docker documentation after catching issues with the Docker CE 23.x fresh release and Docker BuildX on Ubuntu.

Documentation Update

Contributed by: Kevin Martens

February was a busy month for the Jenkins project. We want to highlight the several blog posts published by various authors throughout the month.

We’ve also received numerous pull requests from contributors that are getting started with Jenkins, as well as several excited participants of the Google Summer of Code. For all of the work and energy you’re putting into the Jenkins project, we extend our deepest gratitude.

Governance Update

Contributed by: Mark Waite

The Jenkins governance board met once in February, resolved several action items, and noted the progress on projects with sponsors like JFrog and Atlassian. We’re sincerely grateful for the sponsorships provided by those generous companies and many other companies.

Platform Modernization Update

Contributed by: Bruno Verachten

As part of our ongoing work, we are considering CentOS 7 and its eventual end of life. There is a proposal to deprecate the CentOS 7 Jenkins controller Docker images. When we decide to deprecate these images, we’ll publish an announcement and a JEP. Before it is fully deprecated, we’ll also release a merged version of the centos and centos7 images as the very last CentOS 7 Docker image.

In regards to our Docker images, there were several updates here as well:

  • The latest updates are now part of the agent images such as:

    • ssh-agent: Upgraded Git version on Windows to 2.39.2.windows.1 (#209) @github-actions

    • docker-agent: Upgraded Git version on Windows to 2.39.2.windows.1 (#376) @github-actions

    • Inbound agent:

      • Upgraded the parent image jenkins/agent version to 3107.v665000b_51092-4 (#331) @github-actions

      • Upgraded the parent image jenkins/agent version to 3107.v665000b_51092-3 (#330) @github-actions

      • Upgraded updatecli/updatecli-action from 2.19.0 to 2.20.1 (#329) @dependabot

      • The Windows controller image is not updated as often as the rest. It’s been more than one year without any updates, and we may choose to drop it.

  • With the release of Debian 12 (“bookworm”), Debian will no longer deliver OpenJDK 11.

    • Thankfully, the end of life date for Debian’s openJDK11 won’t happen until 2026 or 2027.

    • The Jenkins documentation will be updated when Debian 12 is released, so that it describes the use and installation of Jenkins with OpenJDK 17.

New platforms:

  • RISC-V support is far from official for Jenkins, but tests are progressing.

User Experience Update

Contributed by: Mark Waite

User experience improvements continued to arrive in February, thanks to contributions from Jan Faracik, Alexander Brandes, Tim Jacomb, Markus Winter, and others. Look for the improvements in recent weekly releases and in the new Jenkins 2.387.1 LTS release.

The pipeline graph viewer plugin continues to improve its user interface. Refer to the video highlights in the User Experience SIG recording. Additionally, build logs are now viewed from the main panel with easier navigation.

Security Update

Contributed by: Kevin Guerroudj

Two security advisories have been published during the month of February:

  • One regarding plugins: 5 affected plugins have been fixed, one of which was vulnerable to a sandbox bypass.

  • One regarding Docker images, fixing CVE-2022-23521 and CVE-2022-41903 in Git, which made remote code execution possible.

The security team recommends that users update as soon as possible.


Jira for the Jenkins project


The Jenkins project has used Jira Software for issue tracking since 2005. Jira has been our issue and enhancement tracking system for almost as long as Jenkins core has existed. Jenkins core and most Jenkins plugins track issues and enhancements through a Jira instance hosted for the Jenkins project by the Linux Foundation.

Atlassian sponsors Jenkins

We’re pleased to announce that Atlassian has agreed to continue sponsoring the Jenkins project with a license to use Jira. We’re deeply grateful to Atlassian for their support of open source software and especially for their support of the Jenkins project. We extend our thanks to the Linux Foundation as well for their continued hosting of Jira for the Jenkins project.

Upgrade schedule

As part of the sponsorship, the Linux Foundation will need to update Jira and restart it. The upgrade will happen beginning at midnight, Saturday March 11, 2023 UTC (4:00 PM Seattle time on Friday March 10, 2023). The system may be down for as much as 30 minutes.

Jenkins and Jira by the numbers

The Jenkins project Jira installation includes over 70,000 issues collected over the course of 18 years, illustrating the breadth and depth of work tracked for Jenkins core and its plugins.

We thank Atlassian for their donation to the Jenkins project.

miniJen and RISCV


miniJen logo

Short Introduction

What is miniJen? It’s the smallest Jenkins multi-CPU-architecture instance known to date.

miniJen as a FOSDEM display on the booth

It’s composed of a 4-core Arm Cortex-A55 RockChip controller (aarch64), a 4-core Arm Cortex-A7 AllWinner agent (armv7l), a 4-core Arm Cortex-A53 AllWinner agent (aarch64), and a single-core RV64GCV AllWinner agent (RISC-V).

A bit of personal history

I’ve been an arm fanboy for years; it all started in 2014 or so when I bought a Raspberry Pi. At that time, it reminded me of my younger days when I used to tinker with an HP-48SX calculator, using assembly language, discovering new methods, new instructions, and new backdoors every other day. Later on, when resin.io (now balena.io) ported Docker to the arm processor, I became obsessed with arm and Docker.

Docker on arm

I spent way too much time compiling FOSS for arm32 and aarch64, and building docker images around them.

It was fun, it was exploratory, it was a way to learn new things…​ and it was a way to contribute to the FOSS community. I made a lot of friends, and I gained a lot of knowledge. I sometimes had to recompile gcc with…​ gcc to be able to recompile ffmpeg for example, and one thing led to another. I had to recompile one library, then another, then a utility, then another library, then the kernel, then another library…​ Boy, that was fun! These were good times. I may sound nostalgic, and I think I am. It was hard, but there were immediate or delayed benefits because everybody was benefiting from the community work. For multiple reasons, such as energy saving, IoT, Edge Computing, server rooms, Cloud, or just for fun, arm was bound to be everywhere. It was the future.

Colleagues, who also happen to be friends, used to call me "mister WhatIf". Yes, I had way too many ideas, but if you want to find a good idea one of these days, you have to let tons of ideas, good or bad, make their way into the world. So yes, basically I was spending most of my free time asking myself (and friends) "What if…​?". Most of the time, these "What if…​?" questions led to an implementation on an arm SBC, due to how cheap and available they were at that time. Some of these experiments were successful, and some were not. Frankly, hosting a complete Gitlab server on a Raspberry Pi 3B was ambitious, but I learned a lot from those attempts.

Back to arm: when the future becomes the present, it’s not that exciting anymore. Arm is not as boring as X86, but most of the software now works on arm, from microcontrollers to the Cloud. Even laptops and MacBooks have seen the light of arm.

If you don’t own any arm hardware, you can still develop for this architecture thanks to QEMU and Docker.

You may come across sentiments such as:
It’s not that hard to compile the software for arm anymore.
It’s not that exciting anymore.
It’s not that fun anymore.
It’s not that exploratory anymore.
It’s not that rewarding anymore.
It’s not that challenging anymore.
It’s not that cool anymore.
It’s not that…​ well, you get the point.

I still love the arm ecosystem and all the people I’ve met, but it feels like the honeymoon time is gone and we’re in a more platonic relationship now. It is stable, deep, and true, (I love the arm community!) but the time has come to find another quest.

The RISC-V quest

I’ve been lurking in the RISC-V community, projects, SoCs, SBCs, and vendors for a while now, and following the RISC-V Foundation for quite some time.

Until recently, I didn’t have any RISC-V hardware to play with and I was not seeing myself buying a very expensive, but lame, RISC-V SBC without any project in mind. I was waiting for the right moment and the right project. I’ve been working with Jenkins since April 2022, and with my love of arm being what it is, my first contributions were about arm32 and aarch64 for the Jenkins project. During the summer of 2022, I spotted an interesting RISC-V board called the MQ-PRO from an unknown (to me) manufacturer called MangoPi. The price was right, and even though the specs were not that good, the board was available. At that time, the software support was not the best, but I was not afraid of that because of my personal history with arm. However, I did not buy it because I was not sure if I would have the time to work on it. At the beginning of September 2022, the amazing Michael Hurt organized a giveaway on his Twitter account.

Michael Hurt Giveaway

I won the board thanks to my proposal linked to Jenkins.

poddingue’s proposal

At that time, I had no clear idea if Java would run on RISC-V, and of course no clue if Jenkins would run on top of that. I also knew Docker was not yet officially available for RISC-V. That sounded way too fun not to try…​ especially since the board was basically free. I then felt the same level of excitement I used to feel when I was working on arm32 and aarch64. This meant there was once again new territories to explore, new challenges to face, new friends to make, and new knowledge to gain.

The RISC-V journey

Prerequisites and first steps

I read in the news that Ubuntu 22.04 was supplying a RISC-V image, designed for the AllWinner Nezha, that could work for this board. The Nezha was the first D1-based board made available to the public. The MangoPi MQ-Pro came after that, but shares more or less the same set of components. As strange as it may seem (a RISC-V build by an Armbian contributor), I also found an image built by a regular contributor of Armbian, balbes150.

I started by downloading Armbian_22.08.0-trunk_Nezha_jammy_current_6.1.0_xfce_desktop.img from December 06, 2022, burned it thanks to Balena Etcher, and was able to boot the board. bret.dk gave me an interesting pointer to James A. Chambers' blog post about the Ubuntu Preview for RISC-V. In the blog post from James A. Chambers, there is a paragraph about OpenJDK availability for RISC-V, and we can see that there is a wide range of OpenJDK versions, from 11 to 20, available here. That was unexpected because I thought I would have to compile everything from scratch, make changes to the build system, and so on.

MangoPi MQ-Pro pic from the manufacturer

As you can see, the board is very minimalistic. We only have two USB-C ports, with one being used for power, a microSD card slot, and a mini HDMI port. My goal was to get this board on the Wi-Fi network, but how would that be possible without an Ethernet port? Most of the time when I use Armbian, I just plug in an Ethernet cable, and I’m good to go, as the board uses DHCP by default. I just have to search for a new machine appearing on the router webpage, and issue an ssh command to connect to it.

This time, I was kind of stuck. I had no USB-C keyboard, no mini-HDMI cable, and no Ethernet plug to use. What was I to do? Once again, bret.dk came to the rescue. Bret does tons of reviews on his blog and I found one about an Ethernet/USB hat for the Raspberry Pi Zero W. I bought the same hat, a USB-C hub just in case, and a mini-HDMI cable. The hat never worked for me for some reason, but the USB-C hub did. It’s an almost-no-name generic hub, but it worked. I managed to get Ethernet on it so that my board got an IP address from my router.

Linux and Java installation

Linux

I could then log in thanks to ssh, create an admin user, and so on. I then removed packages linked to X11 that I didn’t need for my use case. Later on, I configured a Wi-Fi connection and created a jenkins user. The next logical step was to install the default OpenJDK 17 build provided by Ubuntu.
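On Ubuntu, that installation boils down to something like the following (package name assumed; the headless variant is enough for an agent):

sudo apt update
sudo apt install -y openjdk-17-jdk-headless
java -version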

Java

I now know the default OpenJDK 17 build is a Zero VM build, so I also installed a nightly build of Temurin’s OpenJDK 19 and OpenJDK 20. By the way, do you know what Temurin is?

Temurin is both a chemical similar to caffeine and an anagram of "runtime". Oh, and a cool new free-to-use Java runtime from the Eclipse Foundation! Enjoy.

Temurin is almost caffeine

Zero VM

You may wonder what a Zero VM build is, and why I want to use something else. Zero VM builds come with pros and cons:

  • Zero VM is a Java Virtual Machine implementation that is designed to execute Java applications on systems that use architectures other than the x86 architecture. It is specifically optimized for systems that use ARM, PowerPC, and other non-x86 architectures.

  • Zero VM is part of the OpenJDK project, which is an open-source implementation of the Java SE platform. Zero VM uses a technique called "interpreter-only" mode, which allows it to run on platforms that do not support just-in-time (JIT) compilation.

  • In interpreter-only mode, Zero VM executes Java bytecode directly, without compiling it to native code (it does not use any assembler). This approach typically results in slower performance compared to JIT-enabled VMs, but it has the advantage of being able to run on a wider range of platforms. That’s why the developers got a working OpenJDK to build this early for RISC-V.

So, as much as I’m grateful for the Zero VM build, I’m also curious to see how Temurin’s builds perform on this board. In other words, the board is already so slow that using a Zero VM will make it unusable. There, I said it. The default OpenJDK implementation is there just in case I need to use it for some reason, but I plan to only use Temurin’s builds.
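If you are curious which flavor a given binary is, the -version banner usually tells you; a rough check could look like this (exact wording varies between builds, and the Temurin path is hypothetical):

# The default Ubuntu RISC-V build identifies itself as a Zero VM in its version banner
java -version 2>&1 | grep -i zero

# A Temurin build installed alongside it reports a regular Server VM instead
/opt/temurin-19/bin/java -version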

OpenJDK 19

As you may already know, JDK19 is almost end of life (21st of March 2023), so I’m not going to use it for long, and Temurin does not provide steady RISC-V nightly builds. Speaking of end of life, I cannot recommend endoflife.date enough; it is an open-source project that aims to provide a simple way to find the end-of-life dates of software and operating systems. It even provides an API to query the data. Thanks a lot to Mark Waite for letting me know about this project.

Back to openJDK19: how did I find the last published RISC-V nightly build? While discussing with Stewart Addison on various GitHub issues related to Temurin on RISC-V (and aarch64), and later on through Temurin’s Slack channel, we hit it off. He mentioned that he had the same board, and gave me a link to the latest RISC-V build he could find. So, that’s the version I’m using for now. Please note that your libc should be at least 2.35 for this build to work.

The RISC-V Jenkins agent

Installation

I then added an ssh key on the RISC-V machine that would become an agent, created a new node within the Jenkins UI, and installed the agent on it.

Testing

The last thing to do before confirming that Jenkins works on RISC-V was to launch a simple RISC-V job. Spoiler alert, it did work!

Simplest RISC-V job ever

The next step was to install a Pipeline that downloads the latest nightly build of Temurin openJDK20, and installs it on the RISC-V machine, overriding the one I installed previously. This is done mostly thanks to the gh command line tool that can do wonders when it comes to interacting with GitHub on the command line.

gh is open-source, and it’s even available for RISC-V, but not directly in the gh GitHub releases. As far as I know, go is not yet officially available for RISC-V, and gh is written in go. So what’s the catch? Well, it’s open-source, and Ubuntu has a source package for it. Even if I can’t see the binary package for RISC-V on the Ubuntu package page, it magically appeared on my machine after an apt install gh.
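To give an idea of what the Pipeline does with gh, the gist is to query the latest release of an Adoptium binaries repository and download the RISC-V asset; the repository name and asset pattern below are assumptions, not the exact ones used:

# Find the tag of the most recent release in the (assumed) nightly binaries repository
TAG=$(gh api repos/adoptium/temurin20-binaries/releases --jq '.[0].tag_name')

# Download only the Linux RISC-V tarball for that tag (asset name pattern is a guess)
gh release download "$TAG" --repo adoptium/temurin20-binaries --pattern '*riscv64*linux*.tar.gz'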

The Pipeline uses openJDK19 to update openJDK20, and openJDK20 to update openJDK19. The main Jenkins process is still running on the Zero VM openJDK17, which is something I’ll have to address later on. That part worked, and I was pretty happy about the result.

OpenJDK RISC-V

But what about a smoke test?

I mean, I’m not going to use Jenkins on RISC-V if I can’t build a real-life project with it, right? I asked in the community, and Mark Waite, Basil Crow, and Damien Duportal all agreed that the best way to test Jenkins on RISC-V was to build a few Jenkins plugins with it. I started with an ambitious project, the git plugin itself. Well, it was quite big and not ready for openJDK19, so I switched to a smaller one, the git client plugin. Unfortunately, the results were similar and did not go well.

I then switched to a very basic one, the infrastructure test plugin, which is used to test the Jenkins infrastructure, as its name implies. Bad luck struck once again, as it was not ready for openJDK19 either. In desperation, I switched to the Platform Labeler plugin, which is ready for openJDK17, but it required way too much memory to be built. Bummer! I was stuck, and to this day, I haven’t found a Jenkins plugin that can be built with openJDK19 on RISC-V with very little memory. I have yet to find another kind of smoke test that would prove Jenkins works on RISC-V, and the other option is to wait until a plugin is ready for openJDK19.
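For reference, the smoke test itself is nothing exotic: clone a plugin and run its Maven build on the RISC-V agent, roughly as sketched below (the plugin and the memory cap are only illustrative):

# Clone and build a Jenkins plugin as a smoke test (illustrative choice and memory cap)
git clone https://github.com/jenkinsci/platformlabeler-plugin.git
cd platformlabeler-plugin
MAVEN_OPTS="-Xmx512m" mvn clean verify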

The RISC-V future for Jenkins

Back to the future

When it comes to Jenkins and the RISC-V ecosystem, I swear I thought I was some kind of pioneer, like in the good old days of arm. Guess what, I’m not! I’ve finally done my homework and found out that Jenkins has been running on RISC-V for a while now.

  • In a blog post from May 2021 (which has unfortunately disappeared), the RISC-V Foundation demonstrated Jenkins running on a RISC-V board with a Linux operating system. The demo used the OpenSBI bootloader and the OpenJDK RISC-V port to run Jenkins, and was able to successfully build and test a simple Java application. The post includes detailed instructions for setting up Jenkins on RISC-V and running a build job.

  • In a video of the presentation (which has unfortunately disappeared) given at the LFELC Spring 2021 Virtual Summit, we could see a demonstration of Jenkins running on RISC-V. The presentation was given by Anup Patel, who was at that time, a member of the RISC-V Technical Steering Committee.

  • There is another video (which has unfortunately disappeared) that shows Jenkins running on RISC-V, presented by Keith Packard at the RISC-V Workshop Taiwan 2021. The video shows Jenkins running on a HiFive Unmatched development board, which is based on the SiFive Freedom U740 RISC-V processor.

  • In a Reddit thread from January 2021 (which has unfortunately disappeared), a user reported running Jenkins on a HiFive Unmatched RISC-V board using Ubuntu 20.04 and OpenJDK 11. The user reported that Jenkins worked well on the RISC-V board and was able to run build jobs without any issues.

Why have these proofs of past experiments disappeared? Is that a coincidence, or am I acting undercover to remove any evidence of Jenkins running on RISC-V before I attempt to do the same? Just kidding, I have no idea, but if three years ago some people were able to run Jenkins on RISC-V, I should be able to do the same today.

The RISC-V board I’ve been using for this experiment is not the most powerful available on the market, so my success rate with Jenkins plugins was not very high. I have another board that is way more powerful, so I’ll try again with it soon. It’s the StarFive VisionFive 2 board which is based on a quad-core RISC-V processor (the StarFive JH7110 64 bit SoC with RV64GC). It also sports 8GB of LPDDR4, so I should be able to build a few RAM-hungry Jenkins plugins with it, and why not, even run a Jenkins controller on it.

I have another board on my radar; it’s the Vision Five 2’s twin from Pine64, the Star64. At the time of writing, it’s not available yet, but I’ll definitely get one as soon as it’s available.

When will RISC-V be a first-class citizen with Jenkins?

Remember, Jenkins is an open-source project, but above all, it’s a community project. Who am I to tell you when RISC-V will be a first-class citizen with Jenkins? I’m just a guy who’s trying to make it work. I think it’s up to the community to decide when RISC-V will be officially supported by Jenkins. My guess would be when two major conditions are met:

  • Temurin is officially available for RISC-V, meaning we’ll be able to download a binary package for RISC-V from the official AdoptOpenJDK website.

    Temurin supported architectures

  • Docker is officially available for RISC-V, which means we’ll be able to download a binary package for RISC-V from the official Docker website.

    Docker supported architectures

  • You may wonder, why do I need Temurin and Docker to be officially available for RISC-V before saying Jenkins supports RISC-V? As you know, the Java motto says:

    "Write once, run anywhere"

    It’s often abbreviated as "WORA". This motto reflects Java’s ability to be compiled into bytecode that can run on any platform with a Java Virtual Machine (JVM), without requiring recompilation for each specific platform. The Jenkins war runs on top of the JVM; it is then considered CPU-architecture agnostic, which means it can run on any CPU architecture (as long as openJDK11+ can run on the machine, but take it with a grain of salt). The Jenkins infrastructure owns, or borrows, machines of the supported CPU architectures and runs the war on them, so we can testify that Jenkins works on these architectures. Jenkins also supplies Docker images for the supported CPU architectures and tests them on those architectures. The Jenkins project does not own any RISC-V machine, as far as I know. We could provide a RISC-V docker image, as docker buildx allows us to build for various CPU architectures (a minimal buildx sketch follows this list), but…​ Wouldn’t it be kind of hasty? We wouldn’t be able to test on a Jenkins-owned, Jenkins-managed machine regularly. It is then urgent to…​ wait.
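For the record, here is the kind of multi-architecture build buildx makes possible; the image name is hypothetical, and riscv64 is listed only to illustrate the point, not because such an image is published today:

# Cross-build a (hypothetical) image for several CPU architectures, including riscv64
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/riscv64 \
  -t example/jenkins-agent-demo:latest \
  --push .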

Jenkins 2.397 and 2.387.2: New Linux Repository Signing Keys


Beginning March 28, 2023, the Jenkins weekly releases will use new repository signing keys for the Linux installation packages. The same change will be made in Jenkins LTS releases beginning April 5, 2023. Administrators of Linux systems must install the new signing keys on their Linux servers before installing Jenkins weekly 2.397 or Jenkins LTS 2.387.2.

Debian/Ubuntu

Update Debian compatible operating systems (Debian, Ubuntu, Linux Mint Debian Edition, etc.) with the command:

Debian/Ubuntu
# wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | apt-key add -

Red Hat/CentOS

Update Red Hat compatible operating systems (Red Hat Enterprise Linux, Alma Linux, CentOS, Fedora, Oracle Linux, Rocky Linux, Scientific Linux, etc.) with the command:

Red Hat/CentOS
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key

Frequently Asked Questions

What if I don’t update the repository signing key?

Updates may be blocked or interrupted by the operating system package manager (apt, yum, dnf) on operating systems that have not installed the new repository signing key. Sample messages from the operating system may look like:

Debian/Ubuntu
Reading package lists... Done
W: GPG error: https://pkg.jenkins.io/debian-stable binary/ Release:
    The following signatures couldn't be verified because the public key is not available:
        NO_PUBKEY FCEF32E745F2C3D5
E: The repository 'https://pkg.jenkins.io/debian-stable binary/ Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Red Hat/CentOS
Downloading packages:
warning: /var/cache/yum/x86_64/7/jenkins/packages/jenkins-2.397-1.1.noarch.rpm:
    Header V4 RSA/SHA512 Signature, key ID 45f2c3d5: NOKEY
Public key for jenkins-2.397-1.1.noarch.rpm is not installed

Why is the repository signing key being updated?

The repository signing key expires after 3 years so that it matches the expiration dates of the jar file signing certificate and the MSI signing certificate. The updated GPG repository signing key is used in both the weekly repositories and the stable repositories.
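If you want to check the expiration date of the new key before importing it, a reasonably recent GnuPG can inspect the downloaded file without importing it (a sketch, not part of the official instructions):

wget -q https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
gpg --show-keys jenkins.io-2023.key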

Which operating systems are affected?

Operating systems that use Debian package management (apt) and operating systems that use Red Hat package management (yum and dnf) need the new repository signing key.

Other operating systems like Windows, macOS, FreeBSD, OpenBSD, Solaris, and OpenIndiana are not affected.

Android and Jenkins: what is the limit?


jenkins hugging bugdroid

After reading the title, you may be thinking "Wait, what? Is Jenkins somehow limited in building Android apps?" You can relax, as I may have phrased it incorrectly. We’re not talking about building Android apps with Jenkins, which has no limitations as far as I know. We’re talking about building something with Jenkins, using an Android device as a Jenkins node, or potentially as a Jenkins controller. Does this sound appealing or strange enough to you? Continue reading to learn more about the relationship between Android and Jenkins!

Jenkins and aarch64

I joined the Jenkins project in April 2022. At that time, we could already find aarch64 docker images, for the agents or the controller, and regular installers for aarch64 Linux. The oldest image for a controller I found was from August 2021, and the oldest image for an agent was from February 2022. This is nothing new, as Jenkins works on aarch64 Linux and has been running on that CPU architecture for years.

Building Android applications is pretty easy when using Linux on an x86_64 machine, but it can be more difficult on an aarch64 machine. This is because the tools needed to build Android applications were not available until late 2021, with the release of Android Build Tools 31.0.0. Of course, you can use Rosetta to build your applications, and even combine it with Docker.

In my experience with Jenkins and aarch64, I have several aarch64 Jenkins agents and controllers. Some of them are using docker and some of them are installed directly on the Linux machine thanks to the standard instructions. Thankfully, there has been nothing outstanding to worry about, as Jenkins works fine for me with aarch64.

Android and aarch64

Until recently, it was difficult to build Android applications on an aarch64 machine. The main reason was that the Android build tools were not compatible with aarch64 machines. Before version 31.0.0, there was a bug in the Android Build Tools that caused the aapt2 tool to crash when building resources on aarch64 machines. This issue was resolved in version 31.0.0, which added native support for aarch64 and fixed the aapt2 crash on these machines. Thanks to this, Android Build Tools are now natively compatible with aarch64.

Isn’t that fantastic? This meant there was no need to use Rosetta to build Android apps on aarch64 anymore. Until I started writing this post, I was actually using build-tools 30.x. I didn’t need to build Android apps on aarch64 machines, so upgrading was unnecessary.

However, a friend of mine works on an M1 Mac, which happens to be an aarch64 machine, and wanted to build Android apps on his Mac. He was working with Docker, which translates x86-64 to aarch64, as long as you specify that you’re using an x86-64 image to begin with. I know it’s strange that x86-64 is called amd64 in Docker, but that’s not the point.

[...]
  android-agent:
    platform: linux/amd64
    build: ../
    restart: unless-stopped
    volumes:
      - android-agent-data:/home/jenkins:rw
      - ../adbkey.pub:/home/jenkins/.android/adbkey.pub:rw
      - ../adbkey.txt:/home/jenkins/.android/adbkey:rw
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpNqXQ4x7fPPUBbYPxKF77Zqq6d35iPCD2chg644OUD
      - STF_HOST_NAME
[...]

We could theoretically build Android apps on a Raspberry Pi 3B+ with Jenkins from now on.

In the introduction, I stated that I wanted to build something with Jenkins on an Android device, and not the other way around. The goal is to run parts of Jenkins (first an agent, then a controller, then a whole Jenkins instance thanks to Docker) on an Android device. So why am I telling you about building Android apps on aarch64 machines with Jenkins? Because it would be kind of ironic, and tons of fun, to build an Android app with Jenkins on an Android device! That would even allow me to introduce you to the Jenkinsception concept. More on that in a future post.

Get sshd working on an Android device

What does sshd have to do with installing Jenkins? Well, we have a few different ways to start and connect a Jenkins agent to a Jenkins controller, but the easiest one for me is to use an ssh agent. The first step is to get sshd working on an Android device. There are various ways to access an Android device through SSH. Here are some of them:

  • ADB: You can use the Android Debug Bridge (ADB) to connect to your Android device over SSH. This requires you to enable USB debugging on your device and have ADB installed on your computer. The following guide may not work any longer, but it’s a good starting point: How to enable ssh on Android.

  • SimpleSSHD: SimpleSSHD is a lightweight and easy-to-use SSH server app for Android. It supports key-based authentication, and you can configure it to run at startup.

  • Termux: Termux is a terminal emulator and Linux environment app that allows you to start an SSH server on your Android device. You can then use any SSH client to connect to your device over SSH. Unfortunately, updates for Termux are not available on the Google Play Store anymore, but you can still find it on GitHub or F-Droid.

It is important to note that SSH access to your device can pose security risks, so it is recommended to use caution and only enable SSH access when necessary.

Termux is my go-to choice when it comes to using some kind of Linux on Android. There are packages available that allow you to install new software and receive package updates, like a "real" Linux distribution. I almost feel at home when using it.

While reading the Termux documentation, I discovered that Termux has an SSH server (OpenSSH) built-in. It’s not enabled by default, but it’s easy enough to configure. The following instructions are available on the Termux wiki, and I’ve added some details to make it easier to follow.

Starting and stopping the OpenSSH server

Since Termux does not use an initialization system, services are started manually from the command line.

To start the OpenSSH server, you need to execute this command: sshd. If you need to stop sshd, just kill its process: pkill sshd.

SSH daemon logs to the Android system log, and you can view them by running logcat -s 'sshd:*'. This is possible from either Termux or ADB.

Setting up password authentication

Password authentication is enabled by default, making it easier to get started. Before proceeding, make sure that you understand that password authentication is less secure than a pubkey-based one.

Ensure that everything is up-to-date and the openssh package is installed:

 pkg upgrade
 pkg install openssh

Please note that $PREFIX is a variable that points to the Termux installation directory. It is usually /data/data/com.termux/files.

Password authentication is enabled by default in the configuration file. You can review the file at $PREFIX/etc/ssh/sshd_config, and it should contain this data:

 PrintMotd yes
 PasswordAuthentication yes
 Subsystem sftp /data/data/com.termux/files/usr/libexec/sftp-server

If your file does not look like this, you will have to edit the file. Note that vi is not installed by default, but nano is. You can use nano to edit the file.

Set a new password by executing the command passwd. While the program allows a minimal password length of one character, the recommended password length is at least eight to ten characters. Passwords are not printed to the console.

 $ passwd
 New password:
 Retype new password:
 New password was successfully set.

Setting up public key authentication

Public key authentication is the recommended way for logging in using SSH. You need to have a public/private key pair to use this type of authentication. For a successful login, the public key must exist in the authorized keys list on the remote machine, while the private key should be kept safe on your local host.

In the following example, it is assumed that you want to establish public key authentication between your PC (host) and your future Jenkins agent, which happens to be an Android device running Termux (remote). It is also assumed that you’re running a Linux distribution on your PC, WSL2, or even Cygwin. It would be better if both machines were using the same network, for example both are connected to the same Wi-Fi network. It is also assumed that you know your Android device’s IP address. If you have access to your router webpage, you should be able to see which IP has been assigned to your Android device. If you don’t have access to the router webpage, you can find your IP address on an Android device by following these steps:

  • Open the Settings app on your Android device.

  • Scroll down and tap on "About phone" or "About device".

  • Look for the "Status" or "Network" section and tap it.

  • Find the "IP address" or "Wi-Fi IP address" option, which will display your device’s IP address.

Alternatively, you can also find your IP address within Termux by typing the following command: ip addr show. Be aware that if the package is not installed yet, you will need to issue pkg install iproute2 first. Look for the inet line next to the wlan0 line that has your IP address given by your Wi-Fi router.

If you do not have ssh keys, you can generate them. In this example, we will generate an RSA key. On the PC, execute the command: ssh-keygen -t rsa -b 2048 -f id_rsa, replacing id_rsa with the name of your key. For me it would be ssh_key_for_jenkins_agent_2023-03-10. The command shown above generates a private RSA key with a 2048-bit key length and saves it to the file id_rsa. In the same directory, you can find a file named id_rsa.pub, and this is a public key.

For me, the command was:

 ssh-keygen -t rsa -b 2048 -f ssh_key_for_jenkins_agent_2023-03-10
 Generating public/private rsa key pair.
 Enter passphrase (empty for no passphrase):
 Enter same passphrase again:
 Your identification has been saved in ssh_key_for_jenkins_agent_2023-03-10
 Your public key has been saved in ssh_key_for_jenkins_agent_2023-03-10.pub
 The key fingerprint is:
 SHA256:yoykbWyCHuqrANFBkO41vuXMC7kLhsVfe8caLWQEUqk user@PC
 The key's randomart image is:
 +---[RSA 2048]----+
 |.+o ..o.         |
 |.. . ...         |
 |o .  .  .        |
 | + oE  .         |
 |o = o . S        |
 |o+ B.* = o       |
 |++oo& = + +      |
 |= o=o+ . =       |
 |=+.o... .        |
 +----[SHA256]-----+

The key was generated in the current directory, not in $HOME/.ssh. I tend to move the generated key into the $HOME/.ssh directory (mv ssh_key_for_jenkins_agent_2023-03-10* ~/.ssh for me). I then change the directory to $HOME/.ssh (cd ~/.ssh) and change the permissions of the key (chmod 600 ssh_key_for_jenkins_agent_2023-03-10).

2048 bits is the minimal key length that is considered safe. You can use higher values, but do not go higher than 4096, as the remote server may not support such a big key.

Copy the key to the remote machine (your Jenkins agent wannabe running Termux). Password authentication must be enabled to install a public key on the remote machine. Now execute: ssh-copy-id -p 8022 -i id_rsa IP_ADDRESS, making sure to replace id_rsa with the name of your key and IP_ADDRESS with the IP address of your Android machine.

Alternatively, you can manually copy the content inside id_rsa.pub (public key), which is already on the PC, and looks like ssh-rsa <A LOT OF RANDOM STRINGS> user@host. After copying, paste this content to the Termux file $HOME/.ssh/authorized_keys (remote machine). Before copying and pasting, you must connect through ssh user@IP_ADDRESS -p 8022, replacing IP_ADDRESS with the IP address of your Android machine. Doing so enables you to copy the content of the public key, using any text editor available on PC, and paste it inside an ssh session handled by Termux.

What looks strange to me is that user could be just about anything. I tried to log in without supplying a user, which means I was using my PC username, and it worked! I tried to log in with a different username and this also worked. When issuing the whoami command inside Termux, it shows the username of the Termux user, which is u0_a504 in my case.

If everything went fine, you will see a message like this one:

 Number of key(s) added: 1

If your system has an ssh-agent, you should now add your newly generated key to the agent. After adding the key, try logging into the machine with: ssh -p '8022' 'IP_ADDRESS'. Be sure to replace IP_ADDRESS with the IP address of your Android machine and check to make sure that only the key(s) you wanted were added. If you don’t have an agent running, you will have to use a slightly different command: ssh -i id_rsa -p '8022' 'IP_ADDRESS'. Here, you need to replace id_rsa with the name of your key and IP_ADDRESS with the IP address of your Android machine. That would display for me as:

 ssh -i ssh_key_for_jenkins_agent_2023-03-10 -p 8022 192.168.1.xx
 Welcome to Termux!

At this point, password authentication can be disabled. Using nano, edit the file $PREFIX/etc/ssh/sshd_config, and replace the line beginning PasswordAuthentication with PasswordAuthentication no. Back in the Termux app, execute the command pkill sshd && sshd to restart the sshd server with the updated configuration file. Of course, if you were to do that from your PC, you would be disconnected and the ssh server would not be restarted.

Now you can log in to the remote machine without a password. Just execute the command ssh -p '8022' 'IP_ADDRESS', replacing IP_ADDRESS with the IP address of your Android machine, or use the longer -i variant of the command if your machine does not run an ssh-agent.

Installing Java on Termux

We all know that Jenkins is written in Java. We also know Android apps are written in Java or Kotlin, so while we could hope that we magically skip this step, I’m afraid we can’t. The virtual machine that runs Android apps is not the same as the one that runs on your PC. Later on, we’ll detail the main differences between the two. The Android virtual machine (called dalvik) is available on Termux, but it is not capable of executing our agent.jar file, since the java command is not available yet.

$ dalvikvm -showversion
ART version 2.1.0 arm64
$ java --version
bash: /data/data/com.termux/files/usr/bin/java: No such file or directory

For the time being, let’s assume that we need to install Java on Termux. Let’s find out which java versions are available on Termux:

pkg update && pkg search openjdk
Checking availability of current mirror:
[*] https://packages-cf.termux.dev/apt/termux-main: ok
Sorting...
Done
Full Text Search...
Done
openjdk-17/stable 17.0-25 aarch64
  Java development kit and runtime
openjdk-17-source/stable 17.0-25 all
  Source files for openjdk-17
openjdk-17-x/stable 17.0-25 aarch64
  Portion of openjdk-17 requiring X11 functionality

Nice. Jenkins has supported Java 17 since the 2.355 weekly and 2.346.1 LTS releases, so let’s go with OpenJDK 17.

pkg install openjdk-17

Now the java command is available:

java --version
openjdk 17-internal 2021-09-14
OpenJDK Runtime Environment (build 17-internal+0-adhoc..src)
OpenJDK 64-Bit Server VM (build 17-internal+0-adhoc..src, mixed mode)

Creating a Jenkins ssh agent

You should now be able to connect via ssh to your Android device running Termux, provided you have issued the sshd command. Your ssh server also knows about the ssh key you generated on your PC. We will now create a Jenkins credential based on that key, which will later allow Jenkins to connect to your Android device running Termux.

Creating a Jenkins ssh credential

For this part, there is almost nothing specific to Android. You can follow the official documentation, which explains how to create a Jenkins credential.

Setting up a Jenkins ssh agent

It’s now time to set up your agent.

You can use Android as a label for your agent. Choose the Launch agent via SSH option. The hostname should be your phone’s IP address, which was named 'IP_ADDRESS' in the previous steps.

The credentials should be the ones you created in the previous steps. The remote root directory should be /data/data/com.termux/files/home, and the host key verification strategy should be Non-verifying Verification Strategy.

Don’t forget to select the Advanced option and change the port to 8022. You could also specify the path of the java executable you installed in the previous steps, which happens to be /data/data/com.termux/files/usr/bin/java. Since I have installed the 'Platform Labeller' plugin, I have also checked the 'Automatic Platform Labels' checkbox. We’ll see later on if it can cope with Android devices that don’t use the lsb_release command.
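
Before letting Jenkins launch anything, you can verify from any machine holding the private key that the host, port, and java path are all correct; the key name and IP address below are the placeholders used earlier in this post:

 ssh -i ~/.ssh/ssh_key_for_jenkins_agent_2023-03-10 -p 8022 192.168.1.xx /data/data/com.termux/files/usr/bin/java --version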

The very last thing to do is to select Save. You should now see the complete list of your defined agents. While the agent has been created, it may not have started yet. If that’s the case, select the name of your newly created agent ('Android Phone' for me) and select Launch to start the agent. After some time, you should see Agent successfully connected and online in the logs, which means you can now use this agent to run your builds.

Using a Jenkins ssh agent

Let’s create a new job and use our newly created agent to run it.

The simplest job that comes to mind is a Freestyle project that runs the uname -a command. That should give us some information about the Android device we are running on, while proving that the agent works. Once again, there is nothing specific to Android in this step, so you can follow the official documentation. The only changes I made compared to the documentation are:

  • I have used the Android label to make sure the job is run on the Android agent.

  • I have used the uname -a command instead of the echo $NODE_NAME command.

Started by user admin
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on Android Phone (aarch64 aarch64-unknown+check_lsb_release_installed aarch64-unknown+check_lsb_release_installed-unknown+check_lsb_release_installed android unknown+check_lsb_release_installed-unknown+check_lsb_release_installed unknown+check_lsb_release_installed) in workspace /data/data/com.termux/files/home/workspace/Android First Job
[Android First Job] $ /bin/sh -xe /data/data/com.termux/files/usr/tmp/jenkins13760213506108463207.sh
+ uname -a
Linux localhost 4.4.192-perf+ #1 SMP PREEMPT Fri Dec 10 13:53:37 WIB 2021 aarch64 Android
Finished: SUCCESS

We now have a working Jenkins agent running on Android, thanks to Termux. Now what? Of course, we will be limited to the commands and packages that are available on Termux. For example, I can’t see gcc in the list of available packages, which could be troublesome.

pkg search gcc
Checking availability of current mirror:
[*] https://termux.astra.in.ua/apt/termux-main: ok
Sorting...
Done
Full Text Search...
Done

No gcc? You’re right, there is no gcc in the official Termux repository. However, the Termux community comes to the rescue with additional repositories that provide extra packages, gcc included. After adding such a repository, the same search returns several results:

pkg search gcc
Checking availability of current mirror:
[*] https://termux.astra.in.ua/apt/termux-main: ok
Sorting...
Done
Full Text Search...
Done
gcc-6/termux 6.5.0-2 aarch64
  GNU C compiler
gcc-7/termux 7.4.0-2 aarch64
  GNU C compiler
gcc-8/termux 8.3.0-3 aarch64
  GNU C compiler
libgccjit-8-dev/termux 8.3.0-3 aarch64
  GCC just-in-time compilation
libgomp-7/termux 7.4.0-2 aarch64
  openmp library for gcc
libgomp-8/termux 8.3.0-3 aarch64
  openmp library for gcc-8

As you can see, we have a few gcc versions to try out.

What if we need gcc 10, for example? We would have to compile it ourselves like in the good old days. This solves the problem for gcc, but what about other packages? We are somewhat limited by the availability of packages on Termux.

What if we could work around that limitation, though? What about running Docker on Termux? Docker puts no limit on packages as long as we choose the right base image, right? So, we could run a Jenkins agent on Termux through a Docker image based on another distribution that happens to supply all the packages we need. The slight problem is that Docker is not easily installed on Termux, and once installed, it won’t work out of the box.

Android apps are running some kind of JVM, right? So why not use a Jenkins inbound agent?

Android apps are written in Java or Kotlin programming languages, and they run on one of two Java Virtual Machines (JVM):

  • Android Runtime, known as ART

  • Dalvik Virtual Machine, known as DVM.

It is possible to access the JVM from an ADB shell and run Java code using the dalvikvm command, a command-line tool that executes Java classes directly on the Dalvik VM.

Nevertheless, there are preliminary steps that you need to take before you can run Java code on an Android device:

  • Compile your Java code into a .class file

  • Transform it into the DEX format using the d8 tool

  • Push the resulting .dex file to your Android device

  • Run the Java class using the dalvikvm command.

It’s possible to automate these steps to some extent, but it’s not trivial.
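
For illustration, here is roughly what those four steps look like from a PC with adb and the Android SDK build tools installed, using a hypothetical Hello class with a main method:

 javac Hello.java                                          # compile to Hello.class
 d8 Hello.class --output .                                 # transform to DEX format, producing classes.dex
 adb push classes.dex /data/local/tmp/                     # push the DEX file to the device
 adb shell dalvikvm -cp /data/local/tmp/classes.dex Hello  # run the class on the Dalvik/ART VM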

The dalvikvm command is a low-level tool that may not be suitable for running complex Java apps, which may need additional dependencies to function properly. Even if that worked, it would be a very roundabout solution (which is fine with me), but where would we go from there? We only have a subset of the Linux commands available in the ADB shell, and we can’t install tools, packages, and so on. For example, how would we install gcc?

So what could our Jenkins agent do? Not much, I’m afraid. We could still use Termux since, as we’ve seen earlier, Termux relies on the same base shell that is available through ADB. If we could launch the inbound agent through Dalvik from within Termux, we would keep the best of both worlds (Android and a Linux-like environment).

Another solution would be to create a library from the agent.jar file and integrate it into an Android app. That part could work but then the resulting agent would be even more limited. There wouldn’t be a shell available, as the app is sandboxed. We would have an agent able to do almost nothing…​

I’d like to know more nonetheless, so I’ll write down my thoughts about that in another article, once I’ve done my homework.

Building Android apps with Jenkins: an introduction

profile of ci/cd users

Why is mobile CI/CD special?

In 2020, a surprising 33% of professional mobile app developers were not using Continuous Integration/Continuous Deployment (CI/CD) practices, which is 18% more than web developers. There are several reasons why this is the case:

  • Unique needs: Unlike web applications, mobile applications have different requirements, which means that mobile CI/CD requires a different approach and dedicated tools.

  • Tightly controlled ecosystems: Unlike the open ecosystem of the web, mobile app ecosystems are tightly controlled by the OS providers, such as Apple and Google. These providers have strict rules from development to deployment, to running the apps. As a result, traditional CI/CD approaches and best practices can’t be applied out of the box.

  • Specialized expertise is required: Mobile CI/CD requires specific expertise that may not be available in the typical DevOps team. In many cases, mobile developers must handle the mobile CI/CD pipeline themselves due to the unique requirements. However, it is still CI/CD and requires that specific mindset, along with mobility knowledge.

  • Mobile CI/CD requires a separate pipeline: Mobile CI/CD requires setting up a separate pipeline from web or backend stacks, as it is based on the deployment of a compiled binary, which has to be installed on a mobile device from scratch every time. End-user deployments for B2C apps are subject to app reviews and app release waiting periods, which is not prevalent in other stacks. As a result, errors should be detected at the earliest stage (the famous "shift-left") and mobile-app-specific code analyses and tests should be incorporated at every step of CI/CD to avoid issues being discovered by the end-user.

  • Code-signing is mandatory: Unlike most other stacks, code-signing is mandatory in mobile app development. This introduces another layer of complexity on top of the already complex mobile CI/CD processes.

In summary, mobile app development requires unique CI/CD practices, due to the specific needs of the mobile app ecosystem, the specialized expertise required, and the need for a separate pipeline for deployment. By understanding and addressing these challenges, mobile app developers can successfully adopt CI/CD practices and ensure the delivery of high-quality mobile apps.

How do we progress then?

Mobile app development requires dedicated tools and approaches for CI/CD management. However, in many cases, mobile developers themselves are tasked with managing CI/CD, which is not their core role. This can result in significant time and productivity losses.

To optimize mobile CI/CD management, it is essential to have a dedicated team member who is knowledgeable in both mobile development and CI/CD. Ideally, this person would have experience as a former mobile app developer who has transitioned into a CI/CD role. Alternatively, someone who knows CI/CD and wants to help mobile app developers onboard the CI/CD train can also be valuable.

In conclusion, by having dedicated team members who are knowledgeable in both mobile development and CI/CD, and by utilizing effective tools and approaches, mobile CI/CD management can be optimized to improve productivity and minimize time losses.

Flashback to my previous job

At my previous job, we evolved our mobile CI/CD system over time and ended up using Gitlab-ci, along with a set of specific Docker images. Our standard image contained the latest version of all the required tools, including maven, ant, gradle, android SDK, android NDK, flutter, firebase, linters, and dependency checkers. This image was cached on the runners, similar to Jenkins agents, so that jobs could start immediately.

We also created specific images with various versions of the tools or with specific tools added. Additionally, we linked an Android Device Farm to Gitlab-ci, so that developers could test their newly built app on real Android devices directly from their pipeline.

In 2022, I resigned from my previous job and started working at CloudBees. Soon after, my manager asked if I would be interested in replicating some of the work I had done with GitLab-CI, but this time using Jenkins. I was intrigued and asked two questions: first, whether Jenkins works well with Docker (to which I got an honest "sort of" answer), and second, whether I could start from scratch, without looking at what had previously been done for Android app development with Jenkins, relying instead on my old GitLab habits, even if that meant failure. To my surprise, my manager said, "Go for it," and I began my journey to experiment, learn, and share my findings.

Given my more than 8 years of experience with Mobile App CI/CD, I initially thought this would be a cakewalk. But it turned out to be quite the ride…​

Starting with Jenkins

I embarked on a new journey with Jenkins, despite having no prior experience with the tool. As a result, I made every mistake a Jenkins newbie could possibly make. To begin my learning process, I knew that starting with an empty Android application was the logical first step. However, as the saying goes, it’s difficult to teach an old monkey new tricks. Therefore, I decided to dive headfirst into rebuilding an Android Docker image, attempting to fit in all the tools I could think of, much like I had done previously with Gitlab.

In my naivete, I believed that once I pushed the image to DockerHub, Jenkins would somehow magically use it for my builds, and I would be done in a matter of days. Looking back, it’s almost endearing to see how naive I was. Simultaneously, I installed Jenkins through Docker on my Windows laptop and experimented with a few tutorials, starting with the simplest jobs possible that worked.

After successfully pushing my bulky Docker image, filled with all the necessary tools, I moved onto creating an empty Android application, with plans to connect the dots later on.

Android app

To start, I created a new repository on GitHub and cloned it onto my laptop. From there, I opened up Android Studio and created a brand new application from scratch within that folder. Once I had everything set up, I committed and pushed the app shell to GitHub.

To make sure the application could be built outside of my machine, I added a GitHub Actions workflow file. Thankfully, building a simple Android app is possible with GitHub Actions, as Java and the Android SDK are provided in the ubuntu-latest runner image.

With this setup, my empty app could be built on Android Studio and with GitHub Actions. As a result, I was able to obtain my APKs on both platforms. It was a satisfying milestone to achieve.

Following the official documentation, I managed to install both the Jenkins controller and agent. However, as a beginner, I found the process unnecessarily complex. Despite successfully running a Jenkins controller and agent on Docker images on my laptop, I encountered difficulties when trying to run my custom Android building Docker image on it.

Now I understand that there were other ways to approach the problem, but at the time, I was determined to stick with my old habits. I knew that creating a specific agent by starting with the SSH agent Docker image and adding the Android SDK was an option, but I was more comfortable using my custom Docker image and generic agents. As the saying goes, "when your only tool is a hammer, everything looks like a nail".

The Free Tier parenthesis

Unfortunately, I ran into some issues with running my custom Docker image under Windows. So, I decided to create two Jenkins agents on Oracle Cloud Free Tier machines instead. I installed Java and Docker on these machines, and then created a Jenkins agent that was handled by systemd. This allowed me to continue working on my project and explore different ways of using Jenkins.

One of the Free Tier machines on Oracle Cloud was set up with the Android SDK so that it could handle Android jobs, earning it the moniker "JenkinsDroid". Using this machine, I created a simple Android job on Jenkins that referenced my GitHub repository and initiated the build process.

As I gained confidence, I added more checks and bundle creation, and soon found myself with a long list of build steps in a FreeStyle project. However, I realized that if the Jenkins controller were to restart for any reason, my current builds would be lost. This was a major drawback, and I wanted to find a more robust solution.

After some research, I discovered that pipeline jobs are not affected by the controller restart issue. As a result, I decided to switch to pipeline jobs to ensure that my builds would be safe even if the controller restarted.

From FreeStyle to Pipeline

As a developer, I often try to find ways to make my work easier. Admittedly, I can be a bit lazy when it comes to certain tasks. That’s why I decided to use the Declarative Pipeline Migration Assistant to convert my FreeStyle project into a Pipeline project. However, my first attempt at using this converted pipeline failed due to incorrect syntax. It was back to the drawing board for me, and I had to learn the Declarative Pipeline syntax. Remember the old Apple ads from around 2009, where the answer to every need was "there’s an app for that"? In the same way, Jenkins has a solution for almost every need. One thing I appreciate about Jenkins is that it offers a lot of flexibility regardless of the version being used.

Jenkins is an incredibly powerful tool, with a vast community contributing to its plugins. With over 2,000 plugins available, it’s safe to say that if you have a need, there’s likely a plugin that can help you achieve it. However, with so many options available, it can sometimes be overwhelming to choose the right one. It’s important to note that some plugins may be outdated or incompatible with your Java or Jenkins version, so it’s always wise to double-check compatibility before installing. Despite these potential challenges, the sheer number of available plugins is a testament to the versatility and flexibility of Jenkins.

History of the number of plugins from 2008 to March 2023

I began with a small Pipeline description and gradually expanded it to incorporate more stages, additional tools, static analysis, compilation, unit testing, and ultimately the creation of the release, which we will explore in a few weeks. However, the worst possible thing happened: I lost everything.

As previously mentioned, my Jenkins controller instance was running in Docker on my Windows machine. One day, while trying to free up space for Android builds, I unintentionally entered a Docker command that removed all volumes, resulting in the loss of my jobs and their definitions.

Despite taking precautions, things can still go wrong. It was frustrating, but I learned from it and decided to store my Jenkinsfile in GitHub along with my other files, which felt familiar, since GitLab-ci uses a similar approach. With Jenkins, I could create a separate Pipeline for each branch, with different agents, different Docker images, and different tools, which was very convenient. However, it’s not perfect, since a branch’s latest commit/push is always used to start a job, and it’s impossible to explicitly build a specific branch.
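
As an illustration only (not the exact pipeline of this project), a minimal declarative Jenkinsfile committed next to the application code could look like this, assuming a Gradle-based Android project and the Android agent label defined earlier:

pipeline {
    agent { label 'Android' }                // run on an agent that has the Android SDK
    stages {
        stage('Static analysis') {
            steps { sh './gradlew lint' }    // illustrative Gradle tasks
        }
        stage('Build') {
            steps { sh './gradlew assembleDebug' }
        }
        stage('Unit tests') {
            steps { sh './gradlew test' }
        }
    }
}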

Using a simple Pipeline with multi branches

Status

Let’s face it, unexpected issues can occur during a build. While it is ideal to have everything reproducible at the click of a button, in the real world a machine serving dependencies can go down, a link can break momentarily, or a Docker image layer can go missing. When using dockerfile true (building the build image from a Dockerfile in the repository), the risks are even higher: you’re building the very tool you’ll be using for the build, and sometimes things can go out of control.

When a build fails due to missing dependencies on branch A, but the next build starts on branch B because it holds the latest commit/push, what can you do? A simple Pipeline project is not a good fit when working with multiple branches, which is why I later switched to a Multibranch Pipeline project.

At this point, I had several branches, each with a Jenkinsfile. I also had Free Tier machines struggling to keep up with the heavy load.

Let’s make things a bit more complex

As I was testing different tools and stages using different Jenkinsfiles on various branches, I realized that using the same Docker image on all branches was not efficient. I started exploring the idea of using a different Docker image per branch, based on the specific tools or tool versions required. This made sense because using a generic Android image would result in additional download time during the build process for non-bundled tool versions.

Developers prioritize fast pipelines, and a custom Docker image with the correct tool versions is a way to achieve this. However, this custom image may not always be present in the Docker cache, resulting in slower builds.

To tackle this issue, I decided to automate the Docker image building process and use GitHub Actions to build and push the images to my Docker registry.

Of course, achieving a "fast" pipeline (around 5 minutes) depends heavily on how dedicated the agent is. If it is attached to only one project, there is a good chance that, even with several versions of the Docker image in play, the Docker cache will be large enough for builds to fire up immediately.

To accomplish this, I had a potentially different Dockerfile per branch, and an image per branch, built using a GitHub Action and pushed to my Docker Hub repository. At that point, I had a working declarative pipeline for each branch, as well as a separate Docker image for each branch. Ultimately, this allowed me to generate an application binary that was ready to be deployed.
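
To give an idea of what that looks like in a branch’s Jenkinsfile, the agent section can simply point at the image built for that branch; the label and image name below are placeholders, not the ones actually used:

pipeline {
    agent {
        docker {
            label 'Android'                                  // a Docker-capable agent
            image 'mydockerhubuser/android-build:my-branch'  // hypothetical per-branch image on Docker Hub
        }
    }
    stages {
        stage('Build') {
            steps { sh './gradlew assembleRelease' }
        }
    }
}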

Ready? We’ll see that in the following blog post of this series.
