Channel: Jenkins Blog

GSoC CDF Meetup: Google Summer of Code Midterm Demos


Jenkins GSoC

Congratulations to all GSoC students who have made it through the first half of the GSoC 2021 coding phase!

This year, the Jenkins project has been participating in GSoC as part of the Continuous Delivery Foundation’s GSoC org. To celebrate our GSoC students and the fantastic work they have been doing, the CDF is hosting an online meetup where students will present their work. Students will be showcasing what they have learned and accomplished thus far in GSoC, demoing their work, and discussing their goals and plans for the second coding phase.

The CDF Google Summer of Code Midterm Demos will be held online on July 20th, 13:00 UTC - 15:00 UTC.

Sign up here: Meetup Event

CDF GSoC

Speakers

Akihiro Kiuchi - Jenkins Remoting Monitoring

Akihiro Kiuchi

Akihiro is a student in the Department of Information and Communication Engineering at the University of Tokyo. He is improving the monitoring experience of Jenkins Remoting during Google Summer of Code 2021.

  • Affiliation: The University of Tokyo and Jenkins project

  • GitHub: Aki-7

Title: Jenkins Remoting Monitoring with OpenTelemetry

In this talk, he will discuss the problems in maintaining Jenkins agents and how to support Jenkins admins in troubleshooting them. As one of the solutions, he will introduce the new Remoting monitoring with OpenTelemetry plugin, which collects Jenkins Remoting monitoring and troubleshooting data using OpenTelemetry. He will also demonstrate what kinds of data the plugin will collect and how that data can be visualized using available open-source monitoring tools.

Shruti Chaturvedi - CloudEvents Plugin for Jenkins

Shruti is an undergrad student of Computer Science at Kalamazoo College. She is developing a CloudEvents integration for Jenkins, allowing Jenkins and other CloudEvents-compliant CI/CD tools to communicate easily. Shruti is also the Founding Engineer of a California-based startup, MeetKlara, where she is building serverless solutions and advocating for developing CI/CD pipelines using open-source tools.

Title: CloudEvents Plugin for Jenkins: Moving Towards Interoperability

In this talk, we will look at interoperability as an essential element in building workloads across several services. We will also talk about how CloudEvents solves one of the biggest challenges in achieving interoperability between systems: lack of normalization/standardization. Without any standard definition, in order to achieve interoperability, services have to develop adapters specific to a particular system. That, however, is complex because services are always changing the way data/events are emitted. CloudEvents solves this problem by defining a standard format for events, which can be emitted/consumed agnostically, thereby achieving indirect interoperability. Shruti will demonstrate the workings of CloudEvents Plugin for Jenkins; she will walk us through how Jenkins can be configured as a source and a sink, emitting and consuming CloudEvents-compliant events in a platform-independent manner.

Daniel Ko - try.spinnaker.io

Daniel Ko

Daniel is studying computer science at the University of Wisconsin - Madison. He is developing a public Spinnaker sandbox environment for Google Summer of Code 2021.

  • Affiliation: University of Wisconsin - Madison and Spinnaker project

  • GitHub: ko28

Title: try.spinnaker.io: Explore Spinnaker in a Sandbox Environment!

The talk will go through a brief explanation of Spinnaker and the challenges that users face during the installation process. He will discuss the infrastructure of the project and how a public multi-tenant Spinnaker instance will be managed and installed. We will end with a demo of the site so far and the various features implemented, including GitHub authentication, K8s manifest deployment, the AWS Load Balancer Controller to expose deployments, a private ECR registry with blocking of all public images, and automatic resource cleanup.

Aditya Srivastava - Conventional Commits Plugin for Jenkins

Aditya Srivastava

Aditya is a curiosity-driven individual striving to find ingenious solutions to real-world problems. He is an open-source enthusiast and a lifelong learner. Aditya is also the Co-Founder and Maintainer of an open-source organization, Auto-DL, where he is leading the development of a Deep Learning Platform-as-a-Service application.

Title: Conventional Commits Plugin for Jenkins

In this talk, we’ll start with what conventional commits are and why they are needed. Then we’ll see what the Jenkins plugin "Conventional Commits" is and what goal it is trying to achieve. A demo of how the plugin can be used and integrated into the current workflow will be shown. Finally, we’ll talk about the next steps in plugin development, followed by a Q&A.

Harshit Chopra - Git credentials binding for sh, bat, and powershell

Harshit Chopra is a recent graduate and is currently working on a Jenkins project that brings authentication support for command line git commands to Pipeline jobs and Freestyle projects.

Title: Git credentials binding for sh, bat, and powershell

In this talk, he will give an overview of the project, explain the problems being faced and the workarounds used to tackle them, discuss what makes authentication support so important and why it is a feature rather than a standalone plugin, review the accomplishments and work done during coding phase 1, talk about the implementation of the feature, and demonstrate git authentication support over the HTTP protocol.

Pulkit Sharma - Security Validator for Jenkins Kubernetes Operator

Pulkit Sharma

Pulkit is a student at the Indian Institute of Technology (BHU), Varanasi. He is working on a GSoC project under Jenkins in which he aims to add a security validator to the Jenkins Kubernetes Operator.

  • Affiliation: Indian Institute of Technology, BHU and Jenkins Project.

  • GitHub: sharmapulkit04

Title: Security Validator for Jenkins Kubernetes Operator

In this talk, we will discuss why we need a security validator for the Jenkins Kubernetes Operator and how we are going to implement it via admission webhooks. We will look at how the validation webhook will be implemented, the validation logic being used, and the tools we are using to achieve it. Pulkit will showcase his progress and discuss his plans for phase 2 and beyond.


Git username / password credentials binding


Google Summer of Code 2021 is implementing git credentials binding for sh, bat, and powershell. Git credentials binding is one of the most requested features for Jenkins Pipeline (see JENKINS-28335).

The project involves extending the Credentials Binding Plugin to create custom bindings for two types of credentials essential for establishing a remote connection with a git repository:

  • Username/Password

  • SSH Private Key

Why use git credentials binding?

Many operations in a Jenkins Pipeline or Freestyle job can benefit from authenticated access to git repositories. Authenticated access to a git repository allows a Jenkins job to:

  • apply a tag and push the tag

  • merge a commit and push the merge

  • update submodules from private repositories

  • retrieve large files with git LFS
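Tasks like these can be scripted with the gitUsernamePassword binding covered later in this post. The following is a sketch only: the credential id, tool name, tag name, and remote are illustrative assumptions.

```groovy
// Sketch: apply an annotated tag and push it using the gitUsernamePassword
// binding. The credentialsId, gitToolName, tag name, and remote name are
// illustrative assumptions, not values from the original post.
withCredentials([gitUsernamePassword(credentialsId: 'github-token', gitToolName: 'Default')]) {
    sh '''
        git tag -a v1.2.3 -m "Release 1.2.3"
        git push origin v1.2.3
    '''
}
```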

The git credentials username / password binding included in git plugin 4.8.0 allows Pipeline and Freestyle jobs to use command line git from sh, bat, and powershell for authenticated access to git repositories.

How to use git credentials binding?

The binding is accessible using the withCredentials Pipeline step. It requires two parameters:

credentialsId

The reference ID created when a Username/Password credential is added to the Jenkins configuration. To learn how to configure credentials in a Jenkins environment, see Using Credentials.

gitToolName

Name of the git installation on the machine running the Jenkins instance (check the Global Tool Configuration section in the Jenkins UI).

Note: If a user does not know the git tool installation on the particular machine, the default git installation will be chosen.
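Following that note, a sketch that omits gitToolName so the default git installation is chosen; the credential id and branch name are illustrative assumptions.

```groovy
// Sketch: omitting gitToolName so the default git installation is used.
// The credentialsId and branch name are illustrative assumptions.
withCredentials([gitUsernamePassword(credentialsId: 'my-credentials-id')]) {
    sh 'git pull origin main'
}
```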

Examples

The withCredentials wrapper allows declarative and scripted Pipeline jobs to perform authenticated command line git operations with sh, bat, and powershell tasks.

Shell example
withCredentials([gitUsernamePassword(credentialsId: 'my-credentials-id', gitToolName: 'git-tool')]) {
  sh 'git fetch --all'
}
Batch example
withCredentials([gitUsernamePassword(credentialsId: 'my-credentials-id', gitToolName: 'git-tool')]) {
  bat 'git submodule update --init --recursive'
}
Powershell example
withCredentials([gitUsernamePassword(credentialsId: 'my-credentials-id', gitToolName: 'git-tool')]) {
  powershell 'git push'
}

The Pipeline Syntax Snippet Generator is a good way to explore the syntax of the withCredentials step and the git username / password credentials binding.

Limitations

The git credentials username / password binding has been tested on command line git versions 1.8.3 through 2.32.0. It has been tested on CentOS 7, CentOS 8, Debian 9, Debian 10, FreeBSD 12, OpenBSD 6.9, openSUSE 15.2, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 21.04, and Windows 10. Processor testing has included amd64, arm32, arm64, and s390x.

The binding does not support private key credentials. The binding is not supported on command line git versions prior to 1.8.3.

What’s next?

Private key credentials support is coming soon.

Introducing the Conventional Commits Plugin for Jenkins


GSoC

The conventional commits plugin is a Google Summer of Code project. Special thanks to the mentors Gareth Evans, Kristin Whetstone, Olivier Vernin and Allan Burdajewicz.

What are Conventional Commits

According to the official website, conventional commits are "a specification for adding human and machine readable meaning to commit messages."

Conventional commits are a lightweight convention on top of commit messages.

The following table shows the major structural elements offered by the Conventional Commits convention:

Structural Element    Example

Chore                 chore: improve logging
Fix                   fix: minor bug fix
Feat                  feat: add a new feature
Breaking Change       BREAKING CHANGE: reimplement

Why Conventional Commits

As the CI/CD world moves toward complete automation and minimal human interaction, the ability to fully automate a release is increasingly desirable. Conventional commits enable automated systems on top of commit messages, and those systems can "truly" automate a release with almost no human interaction.

The convention dovetails with semantic versioning. As an example, take a Maven project currently versioned at 1.2.0. The following table shows how conventional commits would bump the version depending on the type of the commit:

Commit Message                  Version Bump     SemVer Equivalent

chore: improve logging          1.2.0 -> 1.2.0   No version bump
fix: minor bug fix              1.2.0 -> 1.2.1   Increment in the patch version
feat: add a new feature         1.2.0 -> 1.3.0   Increment in the minor version
BREAKING CHANGE: reimplement    1.2.0 -> 2.0.0   Increment in the major version

The Conventional Commits Plugin

The Conventional Commits plugin is a Jenkins plugin that programmatically determines the next semantic version of a git repository using:

  • the last tagged version

  • the commit message log

  • the current version of the project

How does it work?

The plugin reads the commit messages from the latest tag, or from the current version of the project, up to the latest commit. Using this information, it determines the next semantic version for that particular project.

Supported Project Types

Currently the plugin can read the current version from the configuration files of the following project types:

Project Type    Configuration File(s) Read

Maven           pom.xml
Gradle          build.gradle
Make            Makefile
Python          setup.py, setup.cfg, pyproject.toml
Helm            Charts.yml
Node (NPM)      package.json

How to request support for a project type?

Please feel free to open an issue on the GitHub repository of the plugin.

How to use the plugin

The recommended way to use the plugin is to add the nextVersion() pipeline step to a Jenkins Pipeline project.

For example:

pipeline {
    agent any

    environment {
        NEXT_VERSION = nextVersion()
    }

    stages {
        stage('Hello') {
            steps {
                echo "next version = ${NEXT_VERSION}"
            }
        }
    }
}
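The same step can also be called from a scripted Pipeline. A minimal sketch, assuming the step returns the computed next semantic version as a string:

```groovy
// Sketch: calling the nextVersion() step from a scripted Pipeline.
// Assumes the step returns the next semantic version as a string.
node {
    def version = nextVersion()
    echo "next version = ${version}"
}
```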

Tip: The pipeline step can also be generated with the help of the Snippet Generator. Please select "nextVersion" in the Sample Step drop-down and then click on "Generate Pipeline Snippet".

The plugin is released on every new feature using JEP-229 (continuous delivery of Jenkins plugins).

The plugin is available to download from the plugins site.

Demo

You can watch the plugin in action in a demo presented at the GSoC Midterm Presentations.

Next Steps

  • Support for pre-release information. Example: 1.0.0-alpha, 1.0.0-beta, etc

  • Support for build metadata. Example: 1.0.0-beta+exp.sha.5114f85

  • Optionally writing the calculated "Next Version" into the project’s configuration file. Example: pom.xml for a Maven project, setup.py for Python.

Feedback

We would love to hear your feedback & suggestions for the plugin.

Please reach out on the plugin’s GitHub repository, the Gitter channel or start a discussion on community.jenkins.io.

Remoting Monitoring with OpenTelemetry - Coding Phase 1


Remoting Monitoring with OpenTelemetry

Goal

Goal of Remoting Monitoring with OpenTelemetry

The goal of this project:

  • collect telemetry data (metrics, traces, logs) of the remoting module with OpenTelemetry

  • send the telemetry data to an OpenTelemetry Protocol endpoint

Which OpenTelemetry endpoint to use and how to visualize the data are up to users.

OpenTelemetry

OpenTelemetry Logo

An observability framework for cloud-native software

OpenTelemetry is a collection of tools, APIs, and SDKs. You can use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis in order to understand your software’s performance and behavior.

Phase 1 summary

User survey

Our team conducted a user survey to understand the pain points regarding Jenkins remoting.

Fig 1. What agent type/plugins do you use?

End user survey result

Fig 1 shows what types of agents users use; 17 of 28 unique respondents use Docker for their agents. So I’m planning to publish a Docker image to demonstrate how a Docker image can be built with our monitoring feature.

This survey, together with an investigation of JIRA tickets from the past two years, also points to five common causes of agent unavailability:

  • Configuration mistakes

    • Jenkins agent settings, e.g. misuse of "tunnel connection through" option.

    • Platform settings, e.g. invalid port setting of Kubernetes' helm template.

    • Network settings, e.g. Load balancer misconfiguration.

  • Uncontrolled shutdown of nodes for downscaling.

  • Timeout during provisioning a new node.

  • Firewalls, antivirus software, or other network components killing the connection

  • Lack of hardware resources, e.g. memory, temp space, etc…​

We also heard valuable user voices in the survey:

What areas would you like to see better in Jenkins monitoring?

I have created a bunch of adhoc monitoring jobs to check on the agent’s health and send e-mail. Would be nice to have this consolidated.

Having archive of nodes with the access to their logs/events would have been nice.

I hope that implementing these features with OpenTelemetry, which is expected to become the industry standard for observability, will bring a great monitoring experience to the Jenkins community.

Proof of Concept

How to deliver the monitoring program to agents

1. Sending monitoring program to the agent over remoting

Sending monitoring program via remoting

In my first implementation, I prepared a Jenkins plugin and sent the monitoring program from the Jenkins controller. However, this approach has the following disadvantages:

  1. We cannot collect telemetry data before the initial connection. We are likely to encounter a problem while provisioning a new node, so it’s important to observe agents' telemetry data from the beginning.

  2. Some agent restarters (e.g. UnixSlaveRestarter) restart the agent completely when reconnecting. This means the agent loses the monitoring program every time the connection closes, so we cannot collect telemetry data after the connection is lost and before a new connection is established.

So we decided to take the next approach.

2. Install monitoring engine when provisioning a new agent

Installing monitoring engine when provisioning

In this approach, users download the monitoring program, called the monitoring engine (a JAR file), and place it on the agent node when provisioning.

This approach makes the agent launch command more complicated, and we have to overcome this problem.

How to instrument remoting to produce remoting traces

Add instrumentation extension point to remoting

Current State

Metrics

We currently support the following metrics, and plan to support more:

  • system.cpu.load (unit: 1): System CPU load. See com.sun.management.OperatingSystemMXBean.getSystemCpuLoad.

  • system.cpu.load.average.1m: System CPU load average over 1 minute. See java.lang.management.OperatingSystemMXBean.getSystemLoadAverage.

  • system.memory.usage (unit: bytes; label state: used, free): See com.sun.management.OperatingSystemMXBean.getTotalPhysicalMemorySize and com.sun.management.OperatingSystemMXBean.getFreePhysicalMemorySize.

  • system.memory.utilization (unit: 1): System memory utilization. See com.sun.management.OperatingSystemMXBean.getTotalPhysicalMemorySize and com.sun.management.OperatingSystemMXBean.getFreePhysicalMemorySize. Reports 0% if no physical memory is discovered by the JVM.

  • system.paging.usage (unit: bytes; label state: used, free): See com.sun.management.OperatingSystemMXBean.getFreeSwapSpaceSize and com.sun.management.OperatingSystemMXBean.getTotalSwapSpaceSize.

  • system.paging.utilization (unit: 1): See com.sun.management.OperatingSystemMXBean.getFreeSwapSpaceSize and com.sun.management.OperatingSystemMXBean.getTotalSwapSpaceSize. Reports 0% if no swap memory is discovered by the JVM.

  • process.cpu.load (unit: %): Process CPU load. See com.sun.management.OperatingSystemMXBean.getProcessCpuLoad.

  • process.cpu.time (unit: ns): Process CPU time. See com.sun.management.OperatingSystemMXBean.getProcessCpuTime.

  • runtime.jvm.memory.area (unit: bytes; labels type: used, committed, max; area: heap, non_heap): See MemoryUsage.

  • runtime.jvm.memory.pool (unit: bytes; labels type: used, committed, max; pool: PS Eden Space, G1 Old Gen, …): See MemoryUsage.

  • runtime.jvm.gc.time (unit: ms; label gc: G1 Young Generation, G1 Old Generation, …): See GarbageCollectorMXBean.

  • runtime.jvm.gc.count (unit: 1; label gc: G1 Young Generation, G1 Old Generation, …): See GarbageCollectorMXBean.

Traces

We tried several approaches to instrument the remoting module, but a good approach has not been established yet.

Here is a draft documentation of the spans to collect: Google Doc

Logs

Coming soon!

Metric and span demo visualization

Our team created a demo example with Docker Compose and visualized the metrics and spans.

Prometheus metric visualization | Jaeger span visualization

Google Summer of Code Midterm Demo

Our project demo starts at 8:20.

Next Step

  • Log support

  • Alpha release!

CloudEvents Plugin for Jenkins: Interoperability between Jenkins and other CI/CD Tools


CloudEvents Plugin for Jenkins

The What, Why and How of Interoperability

With workloads and teams becoming more diverse and complex, there is an increasing need to automate various tasks in the CI/CD ecosystem of an application as a way to decrease complexity that can come with CI/CD.

A more diverse team working across different aspects of the application requires a diverse suite of CI/CD tools too, to test and deliver to a wide range of users. More often than not, we need these tools to work together and exchange data to form an effective CI/CD pipeline. However, chaining multiple services together can very easily increase complexity.

How? Each of these services uses a different "language" to communicate and to represent an entity (an event) that occurred inside that service. In order for another service to understand this "language", it might need to develop customized clients and agents that specialize in understanding, traversing, and taking actions based on what was transmitted by the first service.

One can think of it as a translator who specializes in a language called ABC: each service that wants to communicate with the service using ABC has to employ this translator, or perhaps train another one. And there is no guarantee that this translator will also help communicate with other services speaking completely different languages.

We can see how easily that can grow in cost and maintenance. A preferred way is to have a common language that each of these services uses and understands to communicate with one another. This way, an event emitted in this common language is available to any interested receiver without that receiver needing a special agent. Communicating through a common, standard language also enables agnostic communication, where senders and receivers exchange data without creating a tight coupling between the two.

The CloudEvents specification enables that loosely-coupled, event-driven communication between services by enforcing a common language that defines how an event should be emitted and transferred between systems.

CloudEvents and Jenkins

CloudEvents Specification

A specification for describing event data in a common way
  • Consistency

    • Consistent across tools and services.

  • Accessibility

    • Common event format means common libraries, tooling, and infrastructure for delivering event data across environments can be used to develop with CloudEvents.

  • Portability

    • Easily port event-data across tools, truly leveraging event-driven architecture.

The CloudEvents plugin for Jenkins is developed as an effort to make interoperability between Jenkins and other CI/CD tools much easier. The plugin is a GSoC project; with help from an amazing team of mentors, it aims to enhance event-driven interoperability between cloud-native CI/CD tools, making it easier for developers to include Jenkins in their CI/CD pipelines.

With this plugin, Jenkins can send and receive CloudEvents-compliant events to and from a wide variety of CI/CD tools using CloudEvents as their event format. This plugin makes chaining Jenkins with multiple tools like Tekton, Keptn, Knative and more, very easy.

GSoC Phase 1 - CloudEvents Plugin

Using CloudEvents plugin for Jenkins

This plugin allows Jenkins to be configured as a source and sink, which can emit and consume CloudEvents from a range of tools simultaneously.

Jenkins as a Source

Configuring Jenkins as a Source enables Jenkins to send CloudEvents to a CloudEvents sink. For Phase 1 of this project there is support for HTTP sinks; however, CloudEvents supports various protocol bindings, and moving forward there will also be support for the other protocol bindings.

To use Jenkins as a Source, the following configuration is needed:

  1. Click on Manage Jenkins in the Root-Actions menu on the left.

  2. Inside the Manage Jenkins UI, search for Configure System under System Configuration.

  3. In the Configure System UI, scroll down to the CloudEvents plugin section, and this is where all the plugin configuration will be present. Here, you will have to enter the following information:

    • Sink Type (For now, HTTP Protocol Binding for CloudEvent and HTTP Sink is supported.)

    • Sink URL (URL of the Sink where you want the cloudevents sent.)

    • Events you want sent to the CloudEvents sink URL.

Step 1: Manage Jenkins

Manage Jenkins

Step 2: Configure System

Configure System

Step 3: Configure CloudEvents Sink

Configure Sink to Receive Events

With Jenkins as a Source configured, Jenkins will send a POST request to the configured sink right as the selected event occurs inside Jenkins. Each event has a different payload specific to the type of the event emitted.

Event Types, Payload and Metadata

CloudEvents emitted by Jenkins follow the binary structure supported by CloudEvents, where the CloudEvents metadata is present in the headers and the event data is serialized as JSON in the request body. This is the HTTP protocol binding for CloudEvents. Each protocol binding for CloudEvents follows a definition specific to the binding protocol.

For now, the following Jenkins events are supported in the CloudEvents plugin (Jenkins as a Source):

The following table shows the CloudEvents metadata for the queue entered-waiting event. All of these fields are present in the HTTP request headers, since the CloudEvents format used here is the binary structure:

Event Metadata Headers Key    Event Metadata Headers Value

ce-specversion                1.0
ce-type                       org.jenkinsci.queue.entered_waiting
ce-source                     job/test
ce-id                         123-456-789

Here is an example of the event payload for the queue-entered event:

{
  "ciUrl": "http://3.101.116.80/",
  "displayName": "test2",
  "entryTime": 1626611053609,
  "exitTime": null,
  "startedBy": "shruti chaturvedi",
  "jenkinsQueueId": 25,
  "status": "ENTERED_WAITING",
  "duration": 0,
  "queueCauses": [
    {
      "reasonForWaiting": "In the quiet period. Expires in 0 ms",
      "type": "entered_waiting"
    }
  ]
}

Try the Plugin

The plugin will soon be released as the CloudEvents Plugin under https://plugins.jenkins.io/!

Here’s the GitHub Repo of the Plugin: CloudEvents Plugin GitHub Repo

Demo

Here is a video of the CloudEvents plugin with SockEye, demoed at the CDF GSoC Midterm Demos. SockEye is an open-source tool designed as a way to visualize CloudEvents. In this demo, we take a look at how Jenkins, installed in a multi-node K8s environment, works with the CloudEvents plugin as a Source, sending events over HTTP to the SockEye sink.

Next Steps

  • Jenkins as a Sink to allow Jenkins to trigger various actions as cloudevents are received from other tools.

  • Enabling filtering on CloudEvents metadata to act only upon certain kinds of events received.

  • Support for other protocol bindings in CloudEvents.

Feedback

We would absolutely love to hear your suggestions and feedback. This will help us understand the various use-cases for the plugin, and iterate to support a variety of bindings and formats.

Feel free to log an issue at the CloudEvents Plugin GitHub repository. We are on CDF slack under gsoc-2021-jenkins-cloudevents-plugin. You can also start a discussion on community.jenkins.io. I also love emails! Drop me one on: shrutichaturvedi16.sc@gmail.com

Docker images use Java 11 by default



The Jenkins project provides Docker images for controllers, inbound agents, outbound agents, and more. Beginning with Jenkins 2.307 released August 17, 2021 and Jenkins 2.303.1 released August 25, 2021, the Docker images provided by the Jenkins project will use Java 11 instead of Java 8.

Controllers use Java 11 by default

If you are running one of the Jenkins Docker controller images that does not include a JDK version in its label, the Java runtime will switch from Java 8 to Java 11 with the upgrade.

For example:

  • Jenkins 2.306 running as jenkins/jenkins:latest uses Java 8. When Jenkins 2.307 or later is run with jenkins/jenkins:latest, it will use Java 11

  • Jenkins 2.289.3 running as jenkins/jenkins:lts uses Java 8. When Jenkins 2.303.1 or later is run with jenkins/jenkins:lts, it will use Java 11

The Docker image tags affected by this upgrade include:

  • alpine

  • centos7

  • latest

  • lts

  • slim

Users that need to remain with Java 8 may use a different Docker image tag to run with Java 8.

  • Jenkins 2.306 running as jenkins/jenkins:latest uses Java 8. When Jenkins 2.307 or later is run with jenkins/jenkins:latest-jdk8, it will use Java 8

  • Jenkins 2.289.3 running as jenkins/jenkins:lts uses Java 8. When Jenkins 2.303.1 or later is run with jenkins/jenkins:lts-jdk8, it will use Java 8

Agents use Java 11 by default

During the next 1-2 weeks (Aug 17, 2021 - Aug 31, 2021), the Jenkins agent images will be updated to use Java 11 instead of Java 8.

For example:

  • Running a Jenkins agent from the Docker image jenkins/inbound-agent:4.9-1 uses Java 8. Running a Jenkins agent from the Docker image jenkins/inbound-agent:4.10-1 will use Java 11.

  • Running a Jenkins agent from the Docker image jenkins/inbound-agent:latest uses Java 8 today. Running a Jenkins agent from jenkins/inbound-agent:latest after the agent change will use Java 11.

Users that need to remain with Java 8 may use a different Docker image tag to run with Java 8.

  • Running a Jenkins agent from the Docker image jenkins/inbound-agent:4.9-1 uses Java 8. Running a Jenkins agent from the Docker image jenkins/inbound-agent:4.10-1-jdk8 will also use Java 8.

Docker tag updates stopped

The Jenkins project will no longer update the Docker images that are based on CentOS 8. The CentOS project has changed direction to track just ahead of a Red Hat Enterprise Linux release rather than tracking after a release. They are no longer publishing updates for CentOS 8 Docker images.

Users running Jenkins 2.306 or earlier with the jenkins/jenkins:centos tag will need to switch to use a different tag. They may consider using:

  • jenkins/jenkins:almalinux

  • jenkins/jenkins:rhel-ubi8-jdk11

Users running Jenkins 2.289.3 or earlier with the jenkins/jenkins:centos tag will need to switch to a different tag. They may consider using:

  • jenkins/jenkins:lts-almalinux

  • jenkins/jenkins:lts-rhel-ubi8-jdk11

Windows 1809 Docker images stopped

The Windows Docker images have been published in versions based on both the 1809 feature release and the Windows Server long-term servicing channel ("LTSC"). The 1809-based images will no longer be published because Microsoft has ended mainstream support for them. Users should switch to the Jenkins images based on the "LTSC" channel.

Git Credentials Binding for sh, bat, powershell


Abstract

This project implemented two new credential bindings to perform authenticated operations using command line git in Jenkins pipeline and freestyle jobs.

The two credential bindings are gitSshPrivateKey and gitUsernamePassword.

Implementation

Type: Feature

Location: The gitUsernamePassword binding is implemented in Jenkins git plugin v4.8.0. The gitSshPrivateKey binding is implemented in a pull request to the Jenkins git plugin.

Dependencies
  1. Credentials Binding Plugin - used to bind git-specific environment variables to shell scripts/commands, performing git authentication on behalf of the user without any interaction on the command line.

  2. Bouncy Castle API Plugin - provides an API for common tasks such as PEM/PKCS#8 encoding/decoding, and ensures stability across Bouncy Castle API versions.

  3. SSH Server Plugin - provides an API to perform tasks such as OpenSSH private key encoding and decoding.

Phase 1: Git Username Password Binding (gitUsernamePassword)

Deliverables

  • Support git authentication over the HTTP protocol

    • Use the GIT_ASKPASS environment variable to provide user credentials to command line git

  • Support different

    • OS environments: CentOS 7, CentOS 8, Debian 9, Debian 10, FreeBSD 12, OpenBSD 6.9, openSUSE 15.2, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 21.04, and Windows 10.

    • Processors: amd64, arm32, arm64, and s390x.

  • Authentication support for command line git only, not JGit or JGit Apache.

    • Check for specific git versions

    • Setting git specific environment variables based on OS type

  • Automated test coverage of more than 90%
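
To illustrate the GIT_ASKPASS mechanism the binding relies on: git invokes the program named by that variable with a prompt string and reads the credential from its stdout. A minimal hand-rolled sketch (the script path and GIT_USERNAME/GIT_PASSWORD variable names are invented for illustration; the plugin generates its own temporary helper):

```shell
# Sketch of a GIT_ASKPASS helper (illustrative only; the binding
# generates an equivalent temporary script on behalf of the user).
cat > /tmp/git_askpass_demo.sh <<'EOF'
#!/bin/sh
# git calls this with the prompt text as $1 and expects the answer on stdout
case "$1" in
  Username*) printf '%s\n' "$GIT_USERNAME" ;;
  Password*) printf '%s\n' "$GIT_PASSWORD" ;;
esac
EOF
chmod +x /tmp/git_askpass_demo.sh

# In real use, git exports GIT_ASKPASS=/tmp/git_askpass_demo.sh and calls it
# itself; here we simulate the call git makes for an HTTP remote:
GIT_USERNAME=alice GIT_PASSWORD=s3cret \
  /tmp/git_askpass_demo.sh "Username for 'https://example.com':"
```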

Phase 2: Git SSH Private Key Binding (gitSshPrivateKey)

Deliverables

  • To support git authentication over the SSH protocol

  • Supports:

    • Private Key Formats

      • OpenSSH

      • PEM

      • PKCS#8

    • Encryption algorithms

      • RSA

      • DSA

      • ECDSA

      • ED25519

    • OS environments: CentOS 7, CentOS 8, Debian 9, Debian 10, FreeBSD 12, OpenBSD 6.9, openSUSE 15.3, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 21.04, and Windows 10.

    • Processors: amd64, arm32, arm64, and s390x.

  • Authentication support for command line git only, not JGit or JGit Apache.

  • Use git-specific environment variables depending upon the minimum git version

    • GIT_SSH_COMMAND - If the git version is 2.3 or newer, provides the ssh command, including the necessary options.

    • SSH_ASKPASS - If the git version is older than 2.3, an executable script is attached to the variable.

    • Setting variables based on the OS type
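
The version-dependent choice described above can be sketched in shell (a simplification with invented paths; the plugin performs this logic in Java, and GIT_SSH_COMMAND has been available since git 2.3):

```shell
# Sketch of choosing between GIT_SSH_COMMAND and SSH_ASKPASS by git version.
choose_ssh_env() {
  ver="$1"                        # e.g. parsed from `git --version`
  major=${ver%%.*}
  rest=${ver#*.}; minor=${rest%%.*}
  if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 3 ]; }; then
    # git 2.3+: pass the full ssh command, private key file included
    echo "GIT_SSH_COMMAND=ssh -i /path/to/private-key"
  else
    # older git: fall back to an askpass-style helper script
    echo "SSH_ASKPASS=/path/to/ssh-askpass.sh"
  fi
}

choose_ssh_env "2.30"   # prints the GIT_SSH_COMMAND line
choose_ssh_env "1.8"    # prints the SSH_ASKPASS line
```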

Achievements

  1. The git credential bindings, which are available through the git plugin, automate the git authentication process for the user.

  2. The gitUsernamePassword and gitSshPrivateKey bindings provide git authentication support for Pipeline and Freestyle project users in various OS environments on different processors.

  3. The gitUsernamePassword binding has been released and is readily available from git plugin v4.8.0 and above.

  4. The gitSshPrivateKey binding provides support for the OpenSSH private key format, which is the default for OpenSSH v7.8 and above.

Future Work

  • SSH private key binding pull request merge and release

Unexpected complications from the Jenkins class loader required extra effort and investigation, including an experiment shading a dependency into the git plugin. We intentionally chose to avoid the complication and risk of shading the dependency. If the SSH library use requires shading, then we may need to use Maven modules in the git plugin.

Security Validator for Jenkins Operator for Kubernetes


Background

Jenkins custom resources on a Kubernetes cluster are deployed using declarative YAML configuration files, and some of the plugins declared in these files may have published security warnings. The only way for users to find out is to check each plugin manually on the plugin site. This project aims to add an extra step of validation before a Jenkins custom resource is created or updated.

Deliverables

This project aims to add a validating admission webhook to the Jenkins Operator for Kubernetes to detect potential security vulnerabilities in the plugins before the object is created.

Dependencies

Webhooks communicate with the API server over HTTPS and use TLS. Thus, Jetstack/cert-manager is used to provision TLS certificates and establish the connection between the Kubernetes API server and the webhook.

Implementation

Operator SDK takes care of scaffolding a new webhook, registering it with the manager, and creating handlers. TLS certificates are managed using cert-manager.

  • Validation Logic:

    • Proposed implementation: Iterate through the list of plugins to be installed, fetch warnings for each plugin from the plugin center API, and check whether the installed version of that plugin is affected by any of those warnings.

    • Caveats: Webhooks add latency to an API request, so they should evaluate as quickly as possible; the maximum allowed timeout is 30 seconds. In the earlier approach I was fetching the security warnings from the plugin site API in the validator interface itself, and since network operations are slow, it caused a timeout when validating a larger number of plugins or when the Internet connection was poor.

    • Updated implementation: Instead of fetching information for each plugin, the information about all the plugins is downloaded and cached at the start of the operator and updated periodically, eliminating network calls during validation and finishing it in less than a second.
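
As a rough illustration of the cached approach (the data format, file path, and warning entries below are invented; the operator implements this in Go against plugin center API data):

```shell
# Sketch of validating plugins against a pre-downloaded warnings cache.
# In the real operator the cache is built from the plugin center API at
# startup and refreshed periodically; here it is a hand-written file.
cat > /tmp/plugin-warnings.txt <<'EOF'
credentials:2.1.18
subversion:2.12.1
EOF

validate_plugin() {
  # succeeds only if plugin:version is absent from the warnings cache
  ! grep -qx "$1:$2" /tmp/plugin-warnings.txt
}

validate_plugin git 4.8.0 && echo "git 4.8.0: no known warnings"
validate_plugin credentials 2.1.18 || echo "credentials 2.1.18: security warning, reject"
```

Because the lookup is a local file (or in-memory map) hit rather than a network call, the webhook stays well inside its timeout budget.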

Evaluation Phase 1:

  • Scaffolded a new validation webhook

  • Added manifests for ValidatingWebhookConfiguration, certificates and volumes, and updated Makefile

  • Implemented the validator interface

  • Updated helm charts

Evaluation Phase 2:

  • Reimplemented the validator interface.

  • Added unit tests for internal functions

  • Added e2e tests along with helm tests

  • Updated helm charts

User Guide

The webhook feature is completely optional for the user. It can be deployed using the Helm chart by setting webhook.enabled in values.yaml and in the Operator command line flag:

 webhook.enabled=true

To enable security validation in the Jenkins custom resource, set:

jenkins.ValidateSecurityWarnings=true
  • Note: The webhook takes some time to get up and running. Also, when Helm renders the template, the validating webhook configuration is applied last; hence users who want to deploy the Jenkins custom resource with validation turned on need to wait for some time. After the webhook is up and running, the Jenkins custom resource can be deployed using Helm or kubectl.
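
Putting both settings together, a values.yaml fragment would look roughly like the following (the key layout is an assumption based on the flags above; jenkins.ValidateSecurityWarnings may instead belong in the Jenkins custom resource spec):

```yaml
# Assumed values.yaml layout; verify against the chart's default values.
webhook:
  enabled: true
jenkins:
  ValidateSecurityWarnings: true
```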

Future work

  • Implementing a post-install hook in the helm charts that checks whether the webhook is up and running.

  • Adding validation for required core version of plugin and core version of Jenkins.

  • Migrating other validation logic from controller to the webhook.

  • Adding validation for the dependencies of the plugins.


Work report for the Conventional Commits Plugin for Jenkins


GSoC

This blog post is part 2 of the Introducing the Conventional Commits Plugin blog.

The goal of this blog is to showcase the work done during the Google Summer of Code 2021 coding phases.

Please refer to part 1 of the blog for a detailed description of the plugin.

Abstract

The project/plugin aims to fully automate a release process.

The plugin tries to achieve this goal by automatically determining the next semantic version based on commit messages.
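
As a rough sketch of that idea (simplified shell for illustration; the plugin itself is a Java implementation that follows the Conventional Commits specification more completely):

```shell
# Sketch of conventional-commit driven version bumping (simplified).
next_version() {
  ver="$1"; msg="$2"
  major=${ver%%.*}
  rest=${ver#*.}; minor=${rest%%.*}; patch=${rest#*.}
  case "$msg" in
    *BREAKING\ CHANGE*|*!:*) echo "$((major + 1)).0.0" ;;      # breaking change
    feat*)                   echo "$major.$((minor + 1)).0" ;; # new feature
    *)                       echo "$major.$minor.$((patch + 1))" ;; # fix/chore
  esac
}

next_version "1.2.3" "feat: support gradle projects"   # 1.3.0
next_version "1.2.3" "fix: handle missing tag"         # 1.2.4
next_version "1.2.3" "refactor!: drop java 8"          # 2.0.0
```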

There were two coding phases in GSoC 2021. I call the first phase "Read" and the second phase "Write"; let's see why.

Phase 1: Read

In this phase, the "read" aspect of the plugin was enhanced. The plugin supported multiple project types (Maven, Gradle, NPM, Helm, Python, Make) and was able to read current version information from the configuration files of the supported project types.

Deliverables

  • Support multiline comments

  • Support reading the current version from a maven pom.xml

  • Support reading the current version from a build.gradle

  • Support reading the current version from a Makefile

  • Support reading the current version from a package.json

  • Support reading the current version from a helm Chart.yaml
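
For example, the "read" step for an NPM project can be approximated like this (the file contents and the grep/sed extraction are illustrative only; the plugin uses a proper parser per project type):

```shell
# Sketch of reading the current version from a package.json (simplified).
cat > /tmp/package.json <<'EOF'
{
  "name": "demo-app",
  "version": "1.4.2"
}
EOF

# Extract the value of the "version" field
grep '"version"' /tmp/package.json | sed 's/.*"\([0-9][0-9.]*\)".*/\1/'
```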

Resources

Phase 2: Write

In this phase, some work was done to extend the "write" aspect of the plugin. A provision (an optional parameter) to write back the calculated next semantic version to the configuration files of projects was added. Along with that, the plugin can now append "Pre-Release" and "Build Metadata" information to the calculated semantic version.

Deliverables

  • Add prerelease information to the calculated/new version

  • Add build metadata to the calculated/new version

  • Write next version in pom.xml

  • Write next version in package.json

  • Handle version mismatch between config file and latest tag

Next Steps

  • Write back version for Python project.

  • Write back version for Gradle project.

  • Handle remote workspaces

Feedback

We would love to hear your feedback & suggestions for the plugin.

Please reach out on the plugin’s GitHub repository, the Gitter channel or start a discussion on community.jenkins.io.

Jenkins project Confluence instance attacked


Earlier this week the Jenkins infrastructure team identified a successful attack against our deprecated Confluence service. We responded immediately by taking the affected server offline while we investigated the potential impact. At this time we have no reason to believe that any Jenkins releases, plugins, or source code have been affected.

Thus far in our investigation, we have learned that the Confluence CVE-2021-26084 exploit was used to install what we believe was a Monero miner in the container running the service. From there an attacker would not be able to access much of our other infrastructure. Confluence did integrate with our integrated identity system which also powers Jira, Artifactory, and numerous other services.

The trust and security in Jenkins core and plugin releases is our highest priority. We do not have any indication that developer credentials were exfiltrated during the attack. At the moment we cannot assert otherwise and are therefore assuming the worst. We are taking actions to prevent releases at this time until we re-establish a chain of trust with our developer community. We have reset passwords for all accounts in our integrated identity system. We are improving the password reset system as part of this effort.

At this time, the Jenkins infrastructure team has permanently disabled the Confluence service, rotated privileged credentials, and taken proactive measures to further reduce the scope of access across our infrastructure. We are working closely with our colleagues at the Linux Foundation and the Continuous Delivery Foundation to ensure that infrastructure which is not directly managed by the Jenkins project is also scrutinized.

In October 2019 we made the Confluence server read-only effectively deprecating it for day-to-day use within the project. At that time, we began migrating documentation and changelogs from the wiki to GitHub repositories. That migration has been ongoing, with hundreds of plugins and many other documentation pages moved from the wiki to GitHub repositories.

We are grateful for those of you who followed our responsible disclosure procedure and reached out to us about this vulnerability affecting the Jenkins project.

We will continue to take proactive measures to improve the security of our infrastructure and encourage you to follow us on Twitter for further updates.

Jenkins Election 2021


Jenkins Needs You transparent

Dear all,

Time flies and the Jenkins elections period is here.

This year, two board seats and all officer positions are up for election. Thanks to Oleg Nenashev and Ullrich Hafner, who led the Jenkins project as board members for the last two years. Thanks to Tim Jacomb, Daniel Beck, and Mark Waite for your dedication as officers over the past year.

We already had two successful editions in a row. I want us to continue on that path. This is a tremendous opportunity for community members to influence the direction of the project for the next two years. To make this year’s election even better, we slightly modified the process by leveraging our new community platform aka community.jenkins.io.

To participate in the election, we ask every Jenkins community member to have an account on community.jenkins.io. You can either reuse your GitHub account or create a new Discourse account specific to community.jenkins.io. The second requirement is to be able to showcase at least one contribution made before the first of September 2021. As mentioned on jenkins.io/participate, there are many different ways to contribute to Jenkins, and many of them are very difficult to measure. Therefore we'll trust participants and will not require that they provide evidence of contribution as part of their voter registration. We reserve the right to ban a specific account from the election process if we identify abuse. The election works in three stages:

  1. Identify voters and nominees

  2. Voting period

  3. Announce results

Voters

To invite participants to vote, we need a list of email addresses to share with the Condorcet Internet Voting Service. Therefore we ask every community member who meets the requirements to join the election-voter group on community.jenkins.io. The group will be open for joining during the registration period; we'll close registration during the voting period. We'll use the email addresses of the "election-voter" group members.

Nominees

During the same period, we invite every community member to nominate candidates by sending a message to the election-committee group, mentioning the position and the motivation. The nomination period will end on the 31st of October. We'll notify all the nominees and get confirmation that they are interested in running as candidates. The list of candidates will be announced on the 7th of November.

Everybody can nominate candidates.

This year we are looking for nominees for the following positions:

More information about the different roles can be found on jenkins.io/project/team-leads.

Election

On the 7th of November, once voters and candidates are identified, we’ll invite everybody by email to vote using civs.cs.cornell.edu. At this stage of the election, nobody will be allowed to register. Voting deadline is the 30th of November.

Result

As soon as we have the election results, we’ll publish them. Elected members will begin their official roles on the 3rd of December 2021.

Key Dates

  • Sep 20: Nomination and voter registration begin

  • Oct 31: Nomination deadline

  • Nov 07: Candidates announced, Registration deadline, voting start

  • Nov 30: Voting deadline

  • Dec 03: Results announced

Key Information:

Cheers,


register button

Join Jenkins at DevOps World 2021


DevOps World has been the largest gathering for Jenkins for many years. In keeping with tradition, many Jenkins presentations and sessions are planned for this year’s event.

devops world 2021

Join us for DevOps World on September 28 - 30, 2021. The event is virtual, free to attend and will include the following Jenkins activities:

Jenkins workshops

  • Contributing to Open Source

  • Securing Jenkins Pipeline with CyberArk Conjur Secrets Manager

See the conference workshop list for more information about workshops.

Breakout sessions

See the full agenda for more Jenkins sessions.

Birds of a feather

These are 30-minute networking sessions on security and Pipeline topics. Discussions are led by Wadeck Follonier, Daniel Beck, Joost van der Griendt, and Mark Waite.

Jenkins contributor summit

A Jenkins Contributor Summit will be held October 2, 2021 at 7:00 AM UTC.

The planning discussion is taking place at community.jenkins.io. We welcome topic suggestions or you can volunteer to present a topic about which you are passionate. Join in on the discussion. New and veteran contributors are welcome!

Virtual expo hall

Be sure to stop by the Jenkins booth for project update content or to chat with one of the Jenkins maintainers.

Special thanks to review committee

Lastly, special THANKS to the DevOps World review committee. We are grateful for their contributions to review, score and provide feedback for paper submissions.

Looking forward to seeing you there!

New eBook: Fortune 500 Developers and Engineers Turn to Jenkins for Real-World Results


Jenkins User Success Stories in the Fortune 500

If you’ve been following JenkinsIsTheWay.io, you’ve read some fantastic stories from the Jenkins user community about the great stuff they are building with Jenkins. With over 200,000 installations to date, Jenkins remains the most widely used open-source automation server. And story after story, we hear what a critical role Jenkins plays in building robust, secure CI/CD pipelines.

So it comes as no surprise that in many of the back (and remote) offices of Fortune 500 companies, developers are turning to Jenkins to help make their lives easier with automation while also giving their engineers more time to innovate. With this in mind, I invite you to read the latest ebook focused on the behind-the-scenes development activities underway in large-scale enterprises.

Jenkins led

Learn how companies like IBM continue to innovate their software prowess while confidently relying on their custom-built CI/CD pipelines. You’ll also read how Jenkins became the ultimate collaboration tool for thousands of Apple developers. And we don’t just shine the spotlight on tech companies. In this ebook, Jenkins users span multiple industries across the enterprise.

You’ll read how Jenkins made it possible to standardize multidisciplinary team procedures so Roche engineers can create innovative healthcare applications with confidence. We also dive into how Telstra’s software team was able to automate the build cycle - across 100,000 microservices - to accelerate the creation of world-class communication tools. And how Sainsbury’s development team used Jenkins as the way to "bring a retail giant into the 21st century".

Results inspired

No time to dive into the full Fortune 500 ebook? It's worth highlighting the real-world results experienced by enterprise developers and engineers who have turned to awesome plugins to help build, deploy, and automate their software solutions. Here's what they had to say in their own words!

Software acceleration

  • Build times are much faster with the new node mechanism

  • Shared build pipelines are consistent and evolve faster

  • Faster deployment times - from 5 days to several minutes

  • Deploy multiple server patches 30x faster than normal processes

  • Reduce release time from 7 to 4 days

Simpler, smarter processes

  • One-stop-shop for building, deploying, monitoring, testing and even self-managing

  • Easy onboarding for new applications

  • Credential management has gotten much simpler and stricter

  • Made setting up CI/CD really easy

  • Provides the visibility needed to track the deployment process

Loved and relied on by developers

  • Ultimate collaboration tool for thousands of developers

  • 100% confidence in a consistent and repeatable pipeline

  • Keeps the DevOps team in the loop

  • Jenkins-as-code has freed teams up to experiment more

  • Teams are more self-empowered to provision and support their own builds

One thing to remember is that since Jenkins is a free open source solution, it also means savings across the enterprise. This is highlighted in several of our user stories in this ebook. You’ll discover how peers have "decreased build server costs" and have mentioned "cost-cutting" as a top Jenkins benefit.

Share your story

Whether working for a large corporation or an emerging tech startup, we relish hearing your experiences and the results you get with a Jenkins assist. When you’re ready to tell your story, we’re prepared to help you share it. Fill out our short questionnaire, and we’ll send you our Jenkins Is The Way T-shirt as a thank you once it’s published!

Jenkins Health Advisor by CloudBees Tool Makes Life Easier for Jenkins Administrators


Jenkins Health Advisor by CloudBees logo

There are many ways to set up your Jenkins environment, and depending on the configuration you choose, there are different best practices and options to optimize your environment. In this blog, I’m going to focus on Jenkins Health Advisor by CloudBees as a way to fine-tune your environment. It’s a free tool that can help administrators understand and manage their Jenkins controller. If you’re a CloudBees customer, the tool automatically comes with CloudBees CI. However, it’s also available as a free open source tool to anyone who uses Jenkins. The Jenkins Health Advisor tool can make your life easier, because it can arm you with the information you need to keep your Jenkins environment running smoothly.

A Bit of Background

The CloudBees support team originally created the tool as a way to help customers troubleshoot Jenkins issues. With the tool, our support engineers could gather information from a customer’s Jenkins controller about the configuration of the controller and agents, the operating system, stats from web requests, and the like. Our tech support teams used this data to help customers pinpoint problems, such as security vulnerabilities, performance problems, and plugin version issues. Very quickly, our support tooling team saw the bigger-picture benefits of the diagnostic tool as an opportunity for people to proactively manage their controller. With that bit of background, let’s dive into the Jenkins Health Advisor tool so you understand how it can help you.

The Problem — Lack of Visibility into Controller Environment

Because of the open-source nature of Jenkins and the vast array of plugins that you can deploy in your Jenkins installation, downstream code changes to plugins and corresponding dependencies can inadvertently impact your environment, which slows productivity, causes confusion, and impacts the quality of your software delivery. When problems arise, Jenkins administrators need to quickly pinpoint the root cause of an issue, and that can be quite a challenge.

The Solution — Jenkins Health Advisor by CloudBees

When an issue arises, you need razor sharp insight to help you quickly sift through lines upon lines of data about your Jenkins controller and your plugins to identify the problem. And, that’s exactly what Jenkins Health Advisor does. It gives you specific information about potential problems in your environment so you can resolve issues as quickly as possible.

When you first install the Jenkins Health Advisor tool in your environment, you’ll receive a report that lists everything it detects in your system. You’ll then receive report emails only when something changes that could impact your environment or when the tool identifies an issue that could be problematic. If there are no issues in your environment, you don’t receive an email. It’s really that simple.

The Power is in the Insight … and the CloudBees Support

Ok, so let's explain how Jenkins Health Advisor uses data to help you optimize your environment. The tool actually generates a support bundle every 24 hours. We don't want to bog down your email inbox with redundant report emails, so we don't send every report to you—only the reports that indicate an issue with your environment. However, CloudBees receives every support bundle from every user. We are constantly comparing all of this active Jenkins controller data with external Jenkins and open source data about known issues, plugin updates, security vulnerabilities, etc. If the Jenkins Health Advisor tool identifies an issue that could impact your system, you'll receive an email so you can proactively address the problem. At the same time, CloudBees engineering teams are continually working to enhance the detection capabilities of the Jenkins Health Advisor tool, so you can work as proactively as possible to manage your environment.

The emails you receive identify potential problems, and they also include supporting information to help you resolve issues, with links to solutions and recommended resolutions as well as articles to understand the problem.

Helpful Tips to Gain Value from Jenkins Health Advisor

  • Improve Troubleshooting

    If you need to troubleshoot a particular issue, you can manually generate a support bundle to give you point-in-time information on your environment. You can filter the support bundle data on different system parameters, like the build queue, dump agent export tables, and garbage collection logs, so you get the specific information you need.

  • Generate Anonymized System Data

    At any time, you can change the settings on your tool to anonymize the data in your support bundles. You may want to do this if you don’t want to share sensitive project data with CloudBees. Our support and engineering teams can still gain valuable insight from your generic system data. This is also a helpful feature if you need to share system data with a partner or vendor, and you don’t want to share project or personnel data.

  • Jenkins Health Advisor Tool Updates and Support

    As we mentioned above, the CloudBees support tooling team is constantly working to enhance the tool. To ensure it’s always as current as possible, we release updates to the tool every two weeks. Usually these updates won’t impact your Jenkins environment. However, if there are changes to the tool that might affect your system, you’ll receive a notification email so there are no surprises. And, if you need help with the Jenkins Health Advisor tool, our support engineers are available to answer your questions.

Start Using Jenkins Health Advisor Today

Given the vast Jenkins community, it’s impossible to know everything that may impact your environment, even if you look at your control panel every day and scour open source forums on a regular basis to stay on top of new issues and vulnerabilities. The Jenkins Health Advisor tool is designed to streamline the work effort of your daily routine by automatically telling you when there’s an issue that may impact your Jenkins environment. With the proactive Jenkins Health Advisor notifications, you can spend your time on more strategic tasks.

Congratulations to all Jenkins and CDF Google Summer of Code 2021 participants!


Jenkins GSoC

Congratulations to all Google Summer of Code (GSoC) 2021 students! On behalf of the Jenkins org team, we would like to thank all participants: students, mentors, applicants, and dozens of other contributors who participated in GSoC this year.

In 2021, the Jenkins project participated in GSoC as part of the Continuous Delivery Foundation’s GSoC mentor organisation. Within the CDF GSoC mentor organisation, we had six students working on projects: five projects focused on Jenkins and one project focused on Spinnaker. In GSoC, we focus on projects that solve problems important to end users and community members. This year’s GSoC projects delivered highly anticipated new features for Jenkins and Spinnaker.

Google Summer of Code has been a successful and positive experience for students due to the active participation of the Jenkins community and the wider Continuous Delivery Foundation community.

🎉 All of the CDF GSoC students have successfully completed their projects! 🎉

This is the second year in a row that all Jenkins GSoC students have reached the final evaluation and successfully passed! This has been an extremely challenging year, and the amount of work and dedication that the students and their mentoring teams have put into GSoC has been phenomenal. Jenkins, Spinnaker, and the CDF are incredibly grateful to everyone who has contributed to GSoC 2021!

CDF GSoC

☀️ GSoC Students and their Projects

Please see the individual project pages for more details on the projects and work undertaken. You can view student presentations during mid-term demos and final demos and students have written numerous blog posts about their work.

Shruti Chaturvedi - CloudEvents Plugin for Jenkins

Shruti Chaturvedi

Shruti is an undergrad student of Computer Science at Kalamazoo College. She is developing a CloudEvents integration for Jenkins, allowing other CloudEvents-compliant CI/CD tools to communicate easily. Shruti is also the Founding Engineer of a California-based startup, MeetKlara, where she is building serverless solutions and advocating for developing CI/CD pipelines using open-source tools.

Harshit Chopra - Git credentials binding for sh, bat, and powershell

Harshit Chopra

Harshit Chopra is a recent graduate and is currently working on a Jenkins project that brings authentication support for CLI git commands in Pipeline jobs and Freestyle projects.

Git credentials binding for sh, bat, and powershell

Akihiro Kiuchi - Jenkins Remoting Monitoring

Akihiro Kiuchi

Akihiro is a student in the Department of information and communication engineering at the University of Tokyo. He is improving the monitoring experience of Jenkins Remoting during Google Summer of Code 2021.

  • Affiliation: The University of Tokyo and Jenkins project

  • GitHub: Aki-7

Jenkins Remoting Monitoring with OpenTelemetry

Daniel Ko - try.spinnaker.io

Daniel Ko

Daniel is studying computer science at the University of Wisconsin - Madison. He is developing a public Spinnaker sandbox environment for Google Summer of Code 2021.

  • Affiliation: University of Wisconsin - Madison and Spinnaker project

  • GitHub: ko28

  • LinkedIn: Daniel Ko

try.spinnaker.io: Explore Spinnaker in a Sandbox Environment!

Pulkit Sharma - Security Validator for Jenkins Kubernetes Operator

Pulkit Sharma

Pulkit is a student at the Indian Institute of Technology, BHU, Varanasi. He is working on a GSoC project under Jenkins where he aims to add a security validator to the Jenkins Kubernetes Operator.

  • Affiliation: Indian Institute of Technology, BHU and Jenkins Project.

  • GitHub: sharmapulkit04

Security Validator for Jenkins Kubernetes Operator

Aditya Srivastava - Conventional Commits Plugin for Jenkins

Aditya Srivastava

Aditya is a curiosity driven individual striving to find ingenious solutions to real-world problems. He is an open-source enthusiast and a lifelong learner. Aditya is also the Co-Founder and Maintainer of an Open Source Organization - Auto-DL, where he’s leading the development of a Deep Learning Platform as a Service application.

Upcoming Events, September 28-30: DevOps World!

This year CloudBees, one of the Jenkins corporate sponsors, has invited all students to participate in the DevOps World virtual conference on September 28-30. GSoC students will present lightning talks about their projects, attend other conference talks, and join the Continuous Delivery Foundation booth, which represents CDF projects at the conference. We look forward to the GSoC students' lightning talks during DevOps World!

Swag

All Google Summer of Code students and mentors receive swag from Google. In addition, this year CloudBees has sponsored swag for the most active GSoC participants: all students, mentors, and many other contributors who participated and helped the projects to succeed. This is the fourth year that the Jenkins organization has sent extra GSoC swag. In previous years, swag logistics was one of the more challenging tasks for org admins during GSoC, and we highly appreciate that the Continuous Delivery Foundation will handle sending out the additional swag.


Jenkins Election 2021


Voter registration is now open for the 2021 Jenkins project elections. Two members of the governing board are up for election. Five officers are up for election.

How Do I Register to Vote?

register button

Click the "Register Here" button above or open https://community.jenkins.io/g/election-voter in your browser.

You will need to register with community.jenkins.io either by using an existing account (like a GitHub account) or by creating a new account dedicated to community.jenkins.io, and then "Join" the "election-voter" group.

2021 10 25 jenkins elections

Am I Eligible to Vote?

You’re eligible to vote if you have made at least one contribution to Jenkins before September 1, 2021. Contributions to Jenkins are recognized in many different forms, including:

  • Connecting with other users through mailing lists or chat channels

  • Meeting with other users in conferences or online meetups

  • Improving code for Jenkins core, Jenkins plugins, or Jenkins infrastructure

  • Helping other users through chat channels or mailing lists

  • Translating Jenkins software or other materials

  • Testing Jenkins software interactively or with automation

  • Documenting Jenkins core, Jenkins plugins, or Jenkins infrastructure

  • Designing the Jenkins user interface, Jenkins artwork, or other Jenkins materials

  • Reviewing changes to code or documentation that are submitted by others

  • Donating funds to help the project

If you’ve helped someone use Jenkins or a Jenkins derived product, you’re eligible to vote. If you’ve tested Jenkins interactively or with automation, you’re eligible to vote. If you’ve assisted at a Jenkins meetup or other Jenkins event, you’re eligible to vote.

Nominations

Nominations for candidates are being accepted by the Jenkins election committee. Nominations will close at the end of the day, October 31, 2021. Nominees will be notified and will be asked to confirm that they are willing to serve if elected. The list of candidates for each position will be announced November 7, 2021.

Who Nominates a Candidate?

Anyone may propose a nomination. Nominations are being accepted for the following positions:

Use Just Enough Pipeline


2021 10 26 just enough pipeline

Jenkins Pipeline (or simply Pipeline with a capital P) is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. This allows you to automate the process of getting software from version control through to your users and customers.

Pipeline code works beautifully for its intended role of automating build, test, deploy, and administration tasks. But, as it is pressed into more complex roles and unexpected uses, some users have run into snags. Using best practices – and avoiding common mistakes – can help you design a pipeline that is more robust, scalable, and high-performing.

We see a lot of users making basic mistakes that can sabotage their pipeline. (Yes, you can sabotage yourself when you’re creating a pipeline.) In fact, it’s easy to spot someone who is going down this dangerous path – and it’s usually because they don’t understand some key technical concepts about Pipeline. This invariably leads to scalability mistakes that you’ll pay dearly for down the line.

Don’t make this mistake!

Perhaps the biggest misstep people make is deciding that they need to write their entire pipeline in a programming language. After all, Pipeline is a domain specific language (DSL). However, that does not mean that it is a general-purpose programming language.

If you treat the DSL as a general-purpose programming language, you are making a serious architectural blunder by doing the wrong work in the wrong place. Remember that the core of Pipeline code runs on the controller. So, you should be mindful that everything you express in the Pipeline DSL will compete with every other Jenkins job running on the controller.

For example, it’s easy to include a lot of conditionals, flow control logic, and requests using scripted syntax in the pipeline job. Experience tells us this is not a good idea and can result in serious damage to pipeline performance. We’ve actually seen organizations with poorly written Pipeline jobs bring a controller to its knees, while only running a few concurrent builds.

Wait a minute, you might ask, “Isn’t handling code what the controller is there for?” Yes, the controller certainly is there to execute pipelines. But, it’s much better to assign individual steps of the pipeline to command line calls that execute on an agent. So, instead of running a lot of conditionals inside the pipeline DSL, it’s better to put those conditionals inside a shell script or batch file and call that script from the pipeline.
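As a minimal sketch of this advice (the script name and its contents are hypothetical), the conditionals and flow control move out of the DSL and into a script that executes entirely on an agent:

```groovy
// Instead of expressing branching logic in the DSL (which runs on the
// controller), call a single script on an agent and let it do the work.
node('linux') {
    checkout scm
    // build-and-test.sh is a hypothetical script that contains the
    // conditionals and flow control; it runs entirely on the agent,
    // leaving the controller free to schedule other jobs.
    sh './ci/build-and-test.sh'
}
```

The pipeline stays a thin orchestration layer; the logic lives in a script that can also be run and tested outside Jenkins.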

However, this raises another question: “What if I don’t have any agents connected to my controller?” If this is the case, then you’ve just made another bad mistake in scaling Jenkins pipelines. Why? Because the first rule of building an effective pipeline is to make sure you use agents. If you’re using a Jenkins controller and haven’t defined any agents, then your first step should be to define at least one agent and use that agent instead of executing on the controller.

For the sake of maintaining scalability in your pipeline, the general rule is to avoid processing any workload on your controller. If you’re running Jenkins jobs on the controller, you are sacrificing controller performance. So, try to avoid using Jenkins controller capacity for things that should be passed off to an agent. Then, as you grow and develop, all of your work should be running on agents. This is why we always recommend setting the number of executors on the controller to zero.
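A minimal declarative sketch of this rule (the 'linux' label is an assumption about your installation; use whatever labels your agents actually have):

```groovy
pipeline {
    // Target a labeled agent so no workload runs on the controller.
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps {
                // The build executes on the agent, not the controller.
                sh 'make build'
            }
        }
    }
}
```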

Use Just Enough Pipeline to Keep Your Pipeline Scalable

All of this serves to highlight our overarching theme of “using just enough pipeline.” Simply put, you want to use enough code to connect the pipeline steps and integrate tools – but no more than that. Limit the amount of complex logic embedded in the Pipeline itself (similarly to a shell script), and avoid treating it as a general-purpose programming language. This makes the pipeline easier to maintain, protects against bugs, and reduces the load on controllers.

Another best practice for keeping your pipeline lean, fast, and scalable is to use declarative syntax instead of scripted syntax for your Pipeline. Declarative naturally leads you away from the kinds of mistakes we’ve just described. It is a simpler expression of code and an easier way to define your job. It’s computed at the startup of the pipeline instead of executing continually during the pipeline.

Therefore, when creating a pipeline, start with declarative, and keep it simple for as long as possible. Anytime a script block shows up inside of a declarative pipeline, you should extract that block and put it in a shared library step. That keeps the declarative pipeline clean. By combining declarative with a shared library, that will take care of the vast majority of use cases you’ll encounter.
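For illustration (the step name deployApp and the script it calls are hypothetical), a script block extracted into a shared library step collapses to a single line in the declarative pipeline:

```groovy
// vars/deployApp.groovy in a shared library (hypothetical step name)
def call(String environment) {
    // The imperative logic lives here, out of the declarative Jenkinsfile.
    sh "./deploy.sh ${environment}"
}

// In the Jenkinsfile, the former script block becomes simply:
//   steps {
//       deployApp 'staging'
//   }
```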

That said, it’s not accurate to say that declarative plus a shared library will solve every problem. There are cases where scripted is the right solution. However, declarative is a great starting point until you discover that you absolutely must use scripted.

Just remember, at the end of the day, you’ll do well to follow the adage: “Use just enough pipeline and no more.”

Thanks to our Sponsor

Mark and Darin both work for CloudBees. CloudBees helps Fortune 1000 enterprises manage and scale Jenkins. Thanks to CloudBees for sponsoring the creation of this blog post.

Mark and Darin joined Hope Lynch and Joost van der Griendt to share additional topics in a CloudBees on-demand recording, "Optimizing Jenkins for the Enterprise". Register for the on-demand recording to receive more information on configuration as code, plugin management, and Pipelines.

Introducing external storage for JUnit test results


In common CI/CD use cases, a lot of storage space is consumed by test reports. This data is stored within JENKINS_HOME, and the current storage format imposes huge overheads when retrieving statistics and, especially, trends. In order to display trends, each report has to be loaded and then processed in-memory.

The main purpose of externalising test results is to optimize Jenkins performance and storage by querying the desired data from external storage.

I’m pleased to announce that the JUnit plugin’s external storage is now available for use.

Getting started

Install the plugin for your specific database vendor; you can use the Jenkins plugin site to search for it:

e.g. you could install the PostgreSQL Database plugin.

We currently support PostgreSQL and MySQL, but others can be supported; just create an issue or send a pull request.

From Jenkins UI

Navigate to: Manage Jenkins → Configure System → JUnit

In the dropdown select 'SQL Database'

JUnit SQL plugin configuration

Now configure your Database connection details.

Search for 'Global Database' on the same 'Configure System' page.

Select the database implementation you want to use and click 'Test Connection' to verify Jenkins can connect

JUnit SQL plugin database configuration

Click 'Save'

Configuration as code

If you want to configure the plugin via Configuration as Code, see the sample below:

unclassified:
  globalDatabaseConfiguration:
    database:
      postgreSQL:
        database: "jenkins"
        hostname: "${DB_HOST_NAME}"
        password: "${DB_PASSWORD}"
        username: "${DB_USERNAME}"
        validationQuery: "SELECT 1"
  junitTestResultStorage:
    storage: "database"

Using the plugin

Now run some builds. Here’s an example pipeline configuration to get you started if you’re just trying out the plugin:

node {
  writeFile file: 'x.xml', text: '''<testsuite name='sweet' time='200.0'><testcase classname='Klazz' name='test1' time='198.0'><error message='failure'/></testcase><testcase classname='Klazz' name='test2' time='2.0'/><testcase classname='other.Klazz' name='test3'><skipped message='Not actually run.'/></testcase></testsuite>'''
  junit 'x.xml'
}

You will see a test result trend like the one below appear on the job’s project page:

JUnit Trend

If you check on the controller’s file system you will see no junitResult.xml for new builds.

If you connect to your database and run:

SELECT * FROM caseresults;

You will see a number of test results in the database.

What happens to existing test results?

Existing test results will stay on disk but will not be loaded.

Currently there are no migration scripts or plugin functionality to do this; if you need it, please raise an issue.

How are test results cleaned up?

When a job or build is deleted the related test results are removed.

This is expected to be done as part of a 'Build Discarder'.

If you wish to keep your results longer, you can disable this feature by enabling 'Skip cleanup of test result on build deletion' on the system configuration page.

If you need more complex cleanup strategies built into the plugin then please raise an issue.

API

The API is defined by JunitTestResultStorage.

JunitTestResultStorage#load is passed a job name and build number, which can be used to construct an instance of the external storage implementation.

This implementation will then act on that job and build except for the optimised calls that act across all builds.

The API contains the basic methods like getFailCount, getSkipCount, but also APIs that are optimised for retrieving data for the trend graphs on the job page and the test result history page.

These allow single API calls to be made for what previously required a lot of lookup work by Jenkins.

Feedback

I would love to hear your feedback & suggestions for this feature.

Please create an issue at https://github.com/jenkinsci/junit-plugin or provide feedback on https://community.jenkins.io

Jenkins in Hacktoberfest 2021


2021 10 31 hacktoberfest results 2021

Hacktoberfest 2021 made great contributions to the Jenkins project. We thank all the Hacktoberfest contributors and the maintainers who reviewed the submitted pull requests.

We received contributions in artwork, translation, documentation, security, and general purpose improvements. The contributions included software improvements, documentation updates, and video tutorials.

Translations and Artwork

Duchess France provided significant improvements to the French localization of Jenkins. The changes included new translations of text, corrections of existing translations, and file encoding fixes. The translation changes included fifteen pull requests representing the work of six different contributors.

Translation pull requests were also received for Spanish language improvements. The translation pull requests included multiple plugins in addition to Jenkins core.

A contributor from "Duchess France [Women in tech]" created the first Jenkins logo highlighting a woman. Special thanks to tatoberres for the new artwork!

Jenkins logo highlighting a woman

Thea Mushambadze created the new Jenkins logo for Georgia.

georgia

Thanks very much!

Plugin Docs Migration to GitHub

The Jenkins project started the migration of plugin documentation to GitHub almost two years ago. Hacktoberfest 2021 identified 40 candidate plugins to migrate their documentation to GitHub. Dan Heath, Deepak Gupta, Avinash Upadhyaya K R, Rajan Kumar Singh, and Dheeraj Singh Jodha provided a total of 29 pull requests migrating plugin documentation to GitHub.

Special thanks to the contributors and to the plugin maintainers that merged and released documentation migration pull requests. Eight plugins have fully migrated their documentation to GitHub through Hacktoberfest contributions. An additional thirteen plugins have merged the documentation change and will complete the migration with their next plugin release.

The plugin documentation migration was coordinated through a worksheet. A tutorial video describing the documentation migration process is also available.

Implementing Content Security Policy

Wadeck Follonier provided a tutorial to prepare Jenkins for a broader implementation of Content Security Policy. Six contributors provided pull requests to Jenkins core to create separate JavaScript files from JavaScript that was previously implemented inside the Jenkins 'jelly' files. Moving JavaScript into separate files prepares Jenkins to implement Content Security Policy protections against JavaScript embedded in HTML.

Jenkins Architecture Diagrams

Angélique Jard provided architectural diagrams showing a dataflow view of Jenkins and a high level view of Jenkins. Those diagrams help others understand the internal structure of Jenkins and how it interacts with users and with other systems.

jenkins dataflow

Modernizing Plugins Video Series

Darin Pope created five videos illustrating some of the small steps that a new contributor can use to help modernize a plugin. Additional plugin modernization and adoption ideas are included in the "Contributing to Open Source" workshop from DevOps World 2021.

Modernizing Jenkins Plugins

Thanks to All

We offer our most sincere thanks to all Hacktoberfest contributors and to the many pull request reviewers.

Guava library upgrade (breaking changes!)


Guava Upgrade

Summary

Jenkins bundles Guava, a core Java library from Google. Beginning with Jenkins 2.320 (released on November 10, 2021), Jenkins has upgraded the Guava library from 11.0.1 (released on January 9, 2012) to 31.0.1 (released on September 27, 2021). Plugins have already been prepared to support the new version of Guava in JEP-233. Use the Plugin Manager to upgrade all plugins before and after upgrading to Jenkins 2.320.

Motivation

Many security-conscious organizations using, or planning to use, Jenkins run off-the-shelf security scanners to look for known vulnerabilities. These commonly flag the obsolete Guava library as susceptible to a serialization-related vulnerability (CVE-2018-10237) and recommend upgrading. While Jenkins uses JEP-200 to form an explicit list of allowed classes for deserialization, and the two Guava classes affected by CVE-2018-10237 are not and will never be added to the list, it is time-consuming for the security team to respond to purported security reports and for users to justify exemptions from policy to use Jenkins anyway.

Furthermore, the decade-old version of Guava has long been a maintenance burden for Jenkins developers. In a world where Dependabot offers upgrades to libraries released just hours before, it is unpleasant to be working with dependencies that are many years old.

For more information, see JEP-233.

Upgrading

The vast majority of plugins have already been prepared to support the new version of Guava in JEP-233. Jenkins users need only upgrade plugins to compatible versions as documented in the "Released As" field in Jira. It is critical to use the Plugin Manager to upgrade all plugins before and after upgrading to Jenkins 2.320. Failure to upgrade plugins to compatible versions may result in ClassNotFoundException, NoClassDefFoundError, or other low-level Java errors.

Reporting issues

If you find a regression in a plugin, please file a bug report in Jira.

When reporting an issue, include the following information:

  1. Use the JEP-233 label.

  2. Provide the complete list of installed plugins as suggested in the bug reporting guidelines.

  3. Provide the complete stack trace, if relevant.

  4. Provide steps to reproduce the issue from scratch on a minimal Jenkins installation; the scenario should fail when the steps are followed on Jenkins 2.320 or later and pass when the steps are followed on Jenkins 2.319 or earlier.

If you maintain a Jenkins plugin with an open JEP-233 issue, then please check if there is a pull request awaiting merge or release. If you use an unmaintained Jenkins plugin with an open JEP-233 issue, consider stepping up and adopting the plugin to release a compatible version.

Conclusion

We expect to see a bit of disruption from these changes but hope that in the long run they will save time for core and plugin developers and lead to a more secure and stable tool. Please reach out on the developers' list with any questions or suggestions.
