
Jenkins and Kubernetes - Secret Agents in the Clouds

This is a guest post by DevOps World | Jenkins World speaker Mandy Hubbard.
DevOps World | Jenkins World 2018

At long last, the way we build and deploy software is finally changing, and significantly so. The days of the persnickety, prima donna build machine where monolithic applications were built, tested, and deployed are numbered. And that is a "Good Thing (tm)" - a consequence of how we will meet the transformation goals of our businesses. Modern applications consist of distributed services, often with multiple microservices that are developed and deployed independently of other services. However, the only way to build these services with their own dependencies and schedules is to bake in continuous integration and delivery from the beginning. And as usual, your Jenkins platform is your friend.

But let’s take a moment and think about that in the context of microservices, especially if you’ve only used Jenkins for monolithic applications. You’ll be creating a greater number of individual Jenkins jobs that each run multiple times a day. This is a significant process change, and it’s important to acknowledge this and change our approach to managing Jenkins to accommodate these changes. It’s well within Jenkins’ capabilities, but you will need to think a little differently, and invest to close those last-mile deployment gaps.

Evolution of my Jenkins Environment

One of the biggest challenges I’ve faced as a DevOps practitioner is a long and evolving set of options to manage my Jenkins agent infrastructure. With only a few large jobs you don’t really need to worry too much about your agents. But when you’re orchestrating the CI/CD pipelines for dozens or even hundreds of services, optimizing efficiency and minimizing cost becomes important. And that journey has allowed me to consider and test many different Jenkins build agent architectures over the years. This journey may be familiar to you as well.

These are the types of Jenkins environments I’ve run over the years.

  1. Execute all the builds on the master. Concentrate all the moving parts on one instance. (I call this Hello Jenkins)

  2. Create a Jenkins EC2 agent with all the required tools for building every service, and then clone it if I need to “scale” Jenkins. (I call this the Monster Agent.)

  3. Create an individual Jenkins EC2 agent for each service I need to build. (I call this the Snowflake Agent.)

  4. Run build steps in containers. For example, launching agents in containers using the Docker Plugin or using multi-stage Dockerfiles to encapsulate all the logic for building, testing and packaging an application. They are both good first steps in container abstraction and allow you to easily copy artifacts from one container to another. Of course, access to a Docker engine is required for either approach, and I’ve managed my Docker host(s) for running Jenkins agents several different ways:

    1. Run the Docker engine inside my Jenkins master container - Docker in Docker (DinD)

    2. Mount the Docker socket of the host on which my Jenkins master container runs, allowing agents to run as sibling or sidecar containers - Docker outside of Docker (DooD)

    3. Configure a single external EC2 Docker host for the Jenkins master to use for launching builds in containers

    4. Dynamically launch agents using the EC2 plugin with an AMI that contains the Docker Engine and then run all the steps in a multi-stage Dockerfile

All these approaches were attempts to get out of the business of curating and managing Jenkins agents and infrastructure, each with their own benefits and drawbacks. But recently I began working in a new Jenkins environment - Jenkins on Kubernetes.

Once you’ve come to view Jenkins, build agents and jobs as containerized services, migrating platforms becomes much more straightforward. And total disclaimer here - I had never used Kubernetes in my life, not even for side projects - when I set out to do this. That said, it was surprisingly simple to create a Kubernetes cluster in Google Cloud Platform’s (GCP) GKE, launch a Jenkins master using a Helm chart and begin running build steps in Jenkins agents running in containers on my new Kubernetes cluster.

Launch agents in Kubernetes from your pipeline scripts

The focus of this post, and of my Jenkins World talk for 2018, is to show you how to configure Jenkins to launch agents in Kubernetes from your pipeline scripts. My examples assume you are launching your agents in the same Kubernetes cluster where your Jenkins master is running, but there are other options. You’ll begin by installing the Kubernetes plugin. As a bonus, when I installed Jenkins using the latest stable chart in the default Helm repository, the Kubernetes plugin was automatically installed for me.
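
For reference, installing Jenkins from that chart looked roughly like this at the time (a minimal sketch assuming the Helm 2 CLI and the then-current stable/jenkins chart; the release and namespace names are placeholders):

# Install the Jenkins chart into the default namespace under the release name "my-jenkins"
helm install --name my-jenkins --namespace default stable/jenkins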

Once you get the Jenkins master running on your Kubernetes cluster, there are only a few configuration steps required and then you can begin launching ephemeral build agents on Kubernetes.

Configure the Jenkins Master

You’ll first need to create a credentials set for the Jenkins master to access the Kubernetes cluster. To do this, perform the following steps:

  1. In the Jenkins UI, click the Credentials link in the left-hand navigation pane

  2. Click the arrow next to (global) in the Stores scoped to Jenkins table (you have to hover next to the link to see the arrow)

  3. Click Add Credentials

  4. Under Kind, specify Kubernetes Service Account

  5. Leave the scope set to Global

  6. Click OK.

That’s it! This configuration allows the Jenkins master to use a Kubernetes service account to access the Kubernetes API.
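
If you want to double-check which service account your master pod is actually running with before wiring up the credentials, a couple of standard kubectl commands will show you (the pod name below is a placeholder for whatever name your deployment produced):

# List the service accounts in the namespace where Jenkins runs
kubectl get serviceaccounts --namespace default
# Show which service account the Jenkins master pod was started with
kubectl get pod <jenkins-pod-name> --namespace default -o jsonpath='{.spec.serviceAccountName}'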

Create a Cloud Configuration on the Jenkins Master

The next step is to create a cloud configuration for your K8s cluster. (When I use K8s instead of Kubernetes it’s because it is quicker to type, not just for coolness.)

  1. In the Jenkins UI, go to Manage Jenkins → Configure System

  2. Scroll down until you see Cloud settings and click the Add a new cloud box and select kubernetes

  3. The following parameters must be set:

    • Name: <your choice> - This defaults to kubernetes

    • Kubernetes URL: https://kubernetes.default - This was automatically configured from the service account.

    • Kubernetes Namespace: default - Unless you are running your master in another namespace

    • Credentials: Select the Kubernetes Service Account credentials you created in the previous step

    • Jenkins URL: http://<your_jenkins_hostname>:8080

    • Jenkins tunnel: <your_jenkins_hostname>:5555 - This is the port that is used to communicate with an agent

Kubernetes Configuration

These were the only parameters I had to set to launch an agent in my K8s cluster. You can certainly modify other parameters to tweak your environment.

Now that you’ve configured your Jenkins master so that it can access your K8s cluster, it’s time to define some pods. A pod is the basic building block of Kubernetes and consists of one or more containers with shared network and storage. Each Jenkins agent is launched as a Kubernetes pod. It will always contain the default JNLP container, which runs the Jenkins agent jar, plus any other containers you specify in the pod definition. There are at least two ways to configure pod templates – in the Jenkins UI and in your pipeline script.

Configure a Pod Template in the Jenkins UI

  1. In the Jenkins UI, go to Manage Jenkins → Configure System

  2. Scroll down to the cloud settings you configured in the previous step

  3. Click the Add Pod Template button and select Kubernetes Pod Template

  4. Enter values for the following parameters:

    • Name: <your choice>

    • Namespace: default - unless you configured a different namespace in the previous step

    • Labels: <your choice> - this will be used to identify the agent pod from your Jenkinsfiles

    • Usage: Select "Use this node as much as possible" if you would like for this pod to be your default node when no node is specified. Select "Only build jobs with label expressions matching this node" to use this pod only when its label is specified in the pipeline script

    • The name of the pod template to inherit from - you can leave this blank. It will be useful once you gain experience with this configuration, but don’t worry about it for now.

    • Containers: The containers you want to launch inside this pod. This is described in detail below.

    • EnvVars: The environment variables you would like to inject into your pod at runtime. This is described in detail below.

    • Volumes: Any volumes you want to mount inside your pod. This is described further below.

Kubernetes Pod Template

Remember that a pod consists of one or more containers that live and die together. The pod must always include a JNLP container, which is configured by default if you installed the master using the Helm Chart. However, you will want to add containers with the tool chains required to build your application.

Add Your Own Container Template

  1. In the Jenkins UI, return to the pod template you created in the last step

  2. Click the Add Container button and select Container Template

  3. Enter values in the following fields:

    • Name: <your choice>

    • Docker image: any Docker image you’d like. For example, if you are building an application written in Go, you can enter 'golang:1.11-alpine3.8'

    • Label: Enter any label strings you’d like to use to refer to this container template in your pipeline scripts

    • Always pull image: Select this option if you want the plugin to pull the image each time a pod is created.

Container Template

You can leave the default values for the other parameters, but you can see that the plugin gives you fine-grained control over your pod and the individual containers that run within it. Any values you might set in your Kubernetes pod configuration can be set via this plugin as well. You can also inject your configuration data by entering raw YAML. I encourage you not to get distracted by the sheer number of options you can configure in this plugin. You only have to configure a small subset of them to get a working environment.
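
For example, if you prefer to define an extra container in raw YAML rather than through the form fields, a snippet along these lines could be supplied (a sketch only; the image matches the Go example above, and the container name is arbitrary):

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: golang
    image: golang:1.11-alpine3.8
    # Keep the container alive so that pipeline steps can be executed inside it
    command: ['cat']
    tty: true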

You can click the Add Environment Variable button in the container template to inject environment variables into a specific container. You can click the Add Environment Variable button in the pod template to inject environment variables into all containers in the pod. The following environment variables are automatically injected into the default JNLP container to allow it to connect automatically to the Jenkins master:

  • JENKINS_URL: Jenkins web interface URL

  • JENKINS_JNLP_URL: URL for the JNLP definition of the specific agent

  • JENKINS_SECRET: the secret key for authentication

  • JENKINS_NAME: the name of the Jenkins agent

If you click the Add Volume button in the pod template, you’ll see several options for adding volumes to your pod. I use the Host Path Volume option to mount the docker socket inside the pod. I can then run a container with the Docker client installed and use the host Docker socket to build and push Docker images.

At this point, we’ve created a cloud configuration for our Kubernetes cluster and defined a pod consisting of one or more containers. Now, how do we use this to run Jenkins jobs? We simply refer to the pod and containers by label in our Jenkins pipeline script. We use the label we gave to the pod in the node block and the label for the container we wish to use in the container block. The examples in this post use scripted pipeline, but you can achieve the same outcome using the declarative pipeline syntax:

node('test-pod') {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        container('go-agent') {
            // This is where we build our code.
        }
    }
}

Defining the Pod in the Jenkinsfile

Configuring a plugin through the UI is perfectly fine in a proof of concept. However, it does not result in a software-defined infrastructure that can be versioned and stored right alongside your source code. Luckily, you can create the entire pod definition directly in your Jenkinsfile. Is there anything you can’t do in a Jenkinsfile???

Any of the configuration parameters available in the UI or in the YAML definition can be added to the podTemplate and containerTemplate sections. In the example below, I’ve defined a pod with two container templates. The pod label is used in the node block to signify that we want to spin up an instance of this pod. Any steps defined directly inside the node block but not in a container block will be run in the default JNLP container.

The container block is used to signify that the steps inside the block should be run inside the container with the given label. I’ve defined a container template with the label 'golang', which I will use to build the Go executable that I will eventually package into a Docker image. In the volumes definition, I have indicated that I want to mount the Docker socket of the host, but I still need the Docker client to interact with it using the Docker API. Therefore, I’ve defined a container template with the label 'docker' which uses an image with the Docker client installed.

podTemplate(name: 'test-pod', label: 'test-pod',
    containers: [
        containerTemplate(name: 'golang', image: 'golang:1.9.4-alpine3.7'),
        containerTemplate(name: 'docker', image: 'trion/jenkins-docker-client'),
    ],
    volumes: [
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    ]) {
    // node = the pod label
    node('test-pod') {
        // container = the container label
        stage('Build') {
            container('golang') {
                // This is where we build our code.
            }
        }
        stage('Build Docker Image') {
            container('docker') {
                // This is where we build the Docker image.
            }
        }
    }
}

In my Docker-based pipeline scripts, I was building Docker images and pushing them to a Docker registry, and it was important to me to replicate that exactly with my new Kubernetes setup. Once I accomplished that, I was ready to build my image using gcloud, the Google Cloud SDK, and push that image to the Google Container Registry in anticipation of deploying to my K8s cluster.

To do this, I specified a container template using a gcloud image and changed my docker command to a gcloud command. It’s that simple!

podTemplate(name: 'test-pod', label: 'test-pod',
    containers: [
        containerTemplate(name: 'golang', image: 'golang:1.9.4-alpine3.7'),
        containerTemplate(name: 'gcloud', image: 'gcr.io/cloud-builders/gcloud'),
    ]) {
    // node = the pod label
    node('test-pod') {
        // container = the container label
        stage('Build') {
            container('golang') {
                // This is where we build our code.
            }
        }
        stage('Build Docker Image') {
            container('gcloud') {
                // This is where we build and push our Docker image.
            }
        }
    }
}
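
For context, the step inside the gcloud container might shell out to something like the following to build and push the image with Cloud Build (the project and image names are placeholders, and the exact command depends on your gcloud version):

# Build the image with Google Cloud Build and push it to the Container Registry
gcloud builds submit --tag gcr.io/my-project/my-app .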

Standing up a Jenkins master on Kubernetes, running ephemeral agents, and building and deploying a sample application only took me a couple of hours. I spent another weekend really digging in to better understand the platform. You can be up and running in a matter of days if you are a quick study. There is a wealth of resources available on running Jenkins on Kubernetes, and I hope this blog post helps to further that knowledge. Even better, come to my session at Jenkins World and let’s talk in person.

So, what else do you want to know? Hit me up on Twitter. I might even add your questions to my Jenkins World session. I suppose next up is Mesos?

Come meet Mandy and other Jenkins and Kubernetes experts at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.


Jenkins Artwork at the DevOps World | Jenkins World 2018 Community Booth


Hi all, this is my first blogpost on jenkins.io. My name is Kseniia Nenasheva, and I work as a Graphics Designer at CloudBees. I have been using Jenkins since 2012 as a QA engineer, and I am happy to contribute to the project. I have also submitted some patches to the core and plugins, and you have probably seen some Jenkins logos created by me; some of you may even have them on your laptops. By the way, Ron Burgundy is my favorite Jenkins logo.

This year I am going to DevOps World | Jenkins World in San Francisco. During the conference I will be working at the Jenkins community booth and creating exclusive pictures with conference visitors and one of the Jenkins heroes. So, if you come to our booth and share your Jenkins story, you can get a special picture.

If you are interested in getting a logo for your Jenkins Area Meetup or an open-source project (including Jenkins plugins, of course), please also stop by the booth and share your ideas. After the conference I will try to implement the most interesting proposals.

You can also meet me at the contributor summit on September 17.

example art

Come meet Kseniia and other Jenkins contributors at Jenkins World on September 16-19th in San Francisco and on October 22-25 in Nice. Register with the code JWFOSS for a 30% discount off your pass.

Continuously delivering an easy-to-use Jenkins with Evergreen


When I first wrote about Jenkins Evergreen, which was then referred to as "Jenkins Essentials", I mentioned a number of future developments which in the subsequent months have become reality. At this year’s DevOps World - Jenkins World in San Francisco, I will be sharing more details on the philosophy behind Jenkins Evergreen, show off how far we have come, and discuss where we’re going with this radical distribution of Jenkins.

Jenkins Evergreen

As discussed in my first blog post, and JEP-300, the first two pillars of Jenkins Evergreen have been the primary focus of our efforts.

Automatically Updated Distribution

Perhaps unsurprisingly, implementing the mechanisms necessary for safely and automatically updating a Jenkins distribution, which includes core and plugins, was and continues to be a sizable amount of work. In Baptiste’s talk he will be speaking about the details which make Evergreen "go" whereas I will be speaking about why an automatically updating distribution is important.

As continuous integration and continuous delivery have become more commonplace, and fundamental to modern software engineering, Jenkins tends to live two different lifestyles depending on the organization. In some organizations, Jenkins is managed and deployed methodically with automation tools like Chef, Puppet, etc. In many other organizations however, Jenkins is treated much more like an appliance, not unlike the office wireless router: once installed, so long as it continues to do its job, people won’t think about it too much.

Jenkins Evergreen’s distribution makes the "Jenkins as an Appliance" model much better for everybody by ensuring the latest feature updates, bug and security fixes are always installed in Jenkins.

Additionally, I believe Evergreen will serve another group we don’t adequately serve at the moment: those who want Jenkins to behave much more like a service. We typically don’t consider "versions" of GitHub.com, we receive incremental updates to the site and realize the benefits of GitHub’s on-going development without ever thinking about an "upgrade."

I believe Jenkins Evergreen can, and will provide that same experience.

Automatic Sane Defaults

The really powerful thing about Jenkins as a platform is the broad variety of patterns and practices different organizations may adopt. For newer users, or users with common use-cases, that significant amount of flexibility can result in a paradox of choice. With Jenkins Evergreen, much of the most common configuration is automatically configured out of the box.

Jenkins Pipeline and Blue Ocean are included by default. We also removed some legacy functionality from Jenkins while we were at it.

We are also utilizing some of the fantastic Configuration as Code work, which recently had its 1.0 release, to automatically set sane defaults in Jenkins Evergreen.

Status Quo

The effort has made significant strides thus far this year, and we’re really excited for people to start trying out Jenkins Evergreen. As of today, Jenkins Evergreen is ready for early adopters. We do not yet recommend using Jenkins Evergreen for a production environment.

If you’re at DevOps World - Jenkins World in San Francisco please come see Baptiste’s talk Wednesday at 3:45pm in Golden Gate Ballroom A, or my talk at 11:15am in Golden Gate Ballroom B.

If you can’t join us here in San Francisco, we hope to hear your feedback and thoughts in our Gitter channel!

Hacktoberfest 2018. Contribute to Jenkins!


Once again October has arrived. That means the regular Hacktoberfest event is back! This year it will be the fifth installment. During this one-month hackathon you can support open source and earn limited-edition swag.

On behalf of the Jenkins project, we invite you to participate in Hacktoberfest and to work on the project. We welcome all contributors, regardless of their background and Jenkins experience.

Hacktoberfest

Contributing to Jenkins

There are many ways to contribute to Jenkins during Hacktoberfest. Generally, any pull requests in GitHub may qualify. You can…

  • Code - Contribute to the code or automated tests

    • The Jenkins project codebase includes dozens of programming languages, mostly Java, Groovy, and JavaScript, plus Go in Jenkins X

    • You can also find components in Ruby/Kotlin, and even native components in C/C++

  • Document - Improve documentation

  • Blog - write blogposts about Jenkins

  • Localize - Localize Jenkins components

  • Design - artwork and UI improvements also count!

  • Organize - Organize a local meetup for Jenkins & Hacktoberfest (see below)

See the Contribute and Participate page for more information.

Projects

The Jenkins project is spread across several organizations on GitHub (jenkinsci, jenkins-x, jenkins-infra). You are welcome to contribute to any repository in any of those organizations; however, various components in Jenkins have differing review and delivery velocity. Here is a list of Jenkins subprojects with maintainers who have committed to delivering quick reviews to Hackathon participants.

Project/component: ideas and links

Jenkins Core

There is always something to improve in Jenkins core itself. You can address various issues, improve the codebase, and add new features there.

Contributing, newbie-friendly issues

Jenkins Website

Extend and improve Jenkins documentation, add your own blogpost.

Contributing guidelines

Jenkins X

Try out the project and create new demos, extend documentation, and create new builders for your toolchains.

Contributing guidelines, Quick start, creating custom builders, newbie-friendly issues

Jenkins Configuration-as-Code Plugin

Contribute to the fresh new plugin: improve the codebase, add demos and plugin integrations.

Contributing to JCasC

Jenkins Evergreen

Try and improve the recently released Evergreen project - an automatically updating rolling distribution system for Jenkins.

Quick start, newbie-friendly issues.

Docker Packaging

Add new features and improvements to Jenkins Docker packaging: Jenkins Master, Agents, and other components.

Chinese Localization SIG

Contribute to the new Website and the Simplified Chinese Localization plugin.

Jenkins Artwork

Create new images and logos for Jenkins area meetups, subprojects, and plugins. You can also contribute new graphics to plugins.

Note that this is not a full list, and the list will be extended depending on the interest from maintainers. You are welcome to contribute to existing Jenkins plugins… and even to create new ones.

Local events

Hacktoberfest is an online event, but there are many events being organized by open-source communities. You can join one of these events.

We also encourage Jenkins Area Meetup organizers to run Jenkins-specific events in October (workshops, hackergartens). If you are not a meetup organizer but want to host a meetup, you can reach out to the organizers via meetup.com resources (you can find a JAM here). Check out the Hacktoberfest Event Kit for more info.

FAQ

Q: How do I sign up?

  1. Sign up for Hacktoberfest on the event website.

  2. Join the Hacktoberfest channel in Gitter

  3. Everything is set, just start contributing!

Q: I am new to Jenkins, how do I start?

If you are new to Jenkins, you could start by fixing some small and well described issues. There are lists of such newbie-friendly issues, see the links in the table above. You can also submit your own issue and propose a fix.

Q: How do I find documentation?

The Jenkins project contains lots of material about contributing to the project.

Projects in the table above also have their own documentation to help newcomers.

Q: How do I label issues and pull requests?

Hacktoberfest guidelines require issues and/or pull requests to be labeled with the hacktoberfest label. You may not have permission to set labels on your own, but do not worry! In Jenkins GitHub organizations, just mention Hacktoberfest in the title, and we will set the labels for you.

Q: How do I get reviews?

All projects in the list above are monitored by their maintainers, and you will likely get a review within a few days. Reviews in other repositories and plugins may take longer. In the case of delays, ping us in the hacktoberfest-help channel in Gitter. Unmerged pull requests also count in Hacktoberfest, so merge delays won’t block you from getting prizes.

Q: I am stuck. How do I get help?

Q: Does the Jenkins project send special swag?

All participants will get swag from Hacktoberfest organizers if they create at least 5 pull requests. The Jenkins project may also distribute some swag to top contributors, depending on the budget and contributions.

Improving Jenkins Release Quality using Uplink Telemetry


One of the major strengths of Jenkins is its customizability and extensibility. With its plugin ecosystem and long list of (possibly hidden) options, Jenkins can be used for a wide range of use cases.

The downside of all this flexibility is that, not knowing how people use Jenkins, we mostly rely on issues filed in our bug tracker to know when things go wrong. And over the years, quite a few things have gone wrong. The worst of these have been security fixes that had unintended side effects. Unlike regular changes, it’s not really feasible to roll back security fixes, so users have sometimes had to choose between security and functionality. But even changes developed in the open, such as the introduction of JEP-200, haven’t gone as smoothly as we hoped. With big changes in the works, it’s more important than ever for us to have a better idea how Jenkins is used, so that we can deliver major changes safely.

Jenkins Evergreen solves this to some degree by being always connected to the Jenkins project and reporting back telemetry (mostly errors) allowing us to quickly react and provide fixes. But that project is still pretty new, and its goal of being a more standardized Jenkins does not represent the breadth of configurations of the general user base.

So we recently extended the existing, very limited anonymous usage statistics by adding a simple, extensible telemetry reporting client. We’re calling it Uplink telemetry, based on the name of the service it reports its data to. It made its debut in Jenkins 2.143.

Uplink telemetry is designed to collect data in trials, which are defined as:

  • a well-defined set of technical data with a specific purpose

  • a start and end date of the collection

Detailed information explaining the scope and purpose of currently active trials is available in the inline help for the usage statistics control in the global configuration.

Screenshot of detailed Uplink telemetry trial description in the Jenkins inline help

Of course, opting out of anonymous usage statistics there also disables the submission of Uplink telemetry. And while Uplink trials report a per-instance UUID to help with collation (e.g. removal of duplicate submissions), that UUID is exclusively used for this purpose, and independent of all other properties of an instance. This prevents us from correlating reported data with specific instances. These measures are in place to strike a balance between the need to understand how Jenkins is used and respecting users' privacy.

Improving Jenkins through real-world data

We’ve already created our first trial. Jenkins 2.143 includes a trial to gather information about how common it is for instances to use Java system properties to disable (parts of) security fixes. Every time we publish a security fix we’re not completely certain is safe to apply for everyone, we add another of these options — just in case. As you can imagine, quite a few of these hidden options exist. Until now, user feedback in our issue tracker was the only way we could estimate the need for any of these options. With Uplink, Jenkins will report that information to us.

The trial is scheduled to run for the next six weeks, enough to hopefully gather this information from a large number of users of both LTS and weekly releases. Our hope is that we will be able to remove some of these options entirely, as they might not be needed after all. For others, we might need to consider elevating them to supported features, or finding better solutions obviating the need for them.

In the future, I will publish some of what we have learned from the first trial running through Uplink telemetry. I look forward to Jenkins continuing to improve with real-world data informing our future decisions.

Important security updates for Jenkins


We just released security updates to Jenkins, versions 2.146 and 2.138.2, that fix multiple security vulnerabilities.

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes on upgrading Jenkins LTS, see our LTS upgrade guide.

Further improvements

In addition to the security fixes listed in the security advisory, we also applied multiple improvements that make future security vulnerabilities more difficult, or even impossible to exploit.

One such improvement concerns cross-site scripting vulnerabilities, and comes with a risk of regressions.

Jenkins uses a fork of Jelly for the vast majority of the views it renders. Since 2011, it has included a feature that lets view authors opt in or out of automatic escaping of variable values for rendering in HTML, and since 2016, the plugin build tooling requires that views explicitly specify whether to apply this automatic escaping. Details are available in the developer documentation.

Until now, if views did not declare whether to automatically escape, they were rendered without automatic escaping, and developers were expected to explicitly escape every variable reference that was not supposed to contain markup. This has resulted in a number of cross-site scripting (XSS) vulnerabilities, most recently SECURITY-1130 in Job Config History Plugin.

For that reason, we have decided to enable this automatic escaping by default if plugins do not specify a preference. This can result in problems with some plugins if they need their output to remain unescaped. We expect that those plugins will adapt pretty quickly to this change, as the fix is typically straightforward.

In the meantime, users can set the system property org.kohsuke.stapler.jelly.CustomJellyContext.escapeByDefault to false to disable this additional protection.
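
If you start Jenkins directly from the WAR file, one way to set that property is on the java command line (a sketch; installations managed by packages or service wrappers usually set it through their Java options configuration instead):

# Disable escape-by-default (not recommended unless a plugin you rely on breaks)
java -Dorg.kohsuke.stapler.jelly.CustomJellyContext.escapeByDefault=false -jar jenkins.war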

Jenkins Hackathons in October


Traditionally there are a lot of events happening in the Jenkins organization in Autumn. I would like to share some information about the upcoming hackathons.

Online Event: Hacktoberfest

As you probably know, there is an ongoing Hacktoberfest event. The Jenkins project is participating in it and everybody is welcome to contribute to Jenkins as a part of this event. The event lasts from October 01 to October 31, and you can join it at any time.

See this blogpost for more information about Hacktoberfest in the Jenkins project.

Onsite Hackathons

Hacktoberfest is not the only event happening in the Jenkins community this month, there are also a number of upcoming on-site events:

  • Oct 19 - Copenhagen, Denmark - Jenkins Configuration as Code hackathon at Day of Jenkins [as code]

    • Registration: all conference participants can attend

  • Oct 22 - Nice, France - Hackathon at DevOps World - Jenkins World Nice for those who arrive early for the conference and plan to also participate in the Jenkins Contributor Summit

  • Oct 30 - Neuchatel, Switzerland - Hacktoberfest: Jenkins & Friends event (Swiss Jenkins Area meetup)

    • Registration: RSVP to the meetup here

  • TBA - Beijing, China - Hacktoberfest

All contributions during these in-person events qualify as Hacktoberfest contributions as well. :) More events will also be announced later in the year, e.g. we traditionally do a hackfest in Brussels after FOSDEM in February. Follow our developer mailing lists and social media to receive announcements.

Jenkins in Google Summer of Code 2018. Results


Jenkins GSoC

It has been a while since the last blogpost about Google Summer of Code in Jenkins. GSoC 2018 officially finished on August 23, and we had a Jenkins Online Meetup where we had final presentations of the GSoC projects. It is never too late to provide more context, so I would like to summarize the results and provide updates on what has been happening in the Jenkins GSoC Special Interest Group over the last 2 months. In this blogpost you can find project status overviews and updates from the Jenkins GSoC SIG.

But first of all, I would like to thank all our students, their mentors, and all other contributors who proposed project ideas and participated in student selection, community bonding, and further reviews. Google Summer of Code is a major effort which would not be possible without the active participation of the Jenkins community.

Summary

This year we started preparing for Google Summer of Code in early December. 14 project ideas and 12 potential mentors were published on our website, and we got dozens of students reaching out to us during the application period. After processing applications, we selected 4 applications for GSoC. Unfortunately one project got cancelled due to student eligibility issues.

So, we had the following projects: Code Coverage API plugin, Remoting over Apache Kafka, and Simple Pull-Request Job Plugin (also known as Pipeline as YAML). All these projects have a significant value to the Jenkins community. They were focused on areas which have been discussed in the community for a long time, but which had no progress so far. Google Summer of Code allowed us to kick-start these projects, and to make significant progress there. All projects have been released and made available in the Jenkins community (common or experimental update centers).

In total there were 9 blogposts about GSoC projects on jenkins.io, and also 2 Jenkins Online Meetups. GSoC results have been also presented at DevOps World - Jenkins World conference and the contributor summit.

Code Coverage API Plugin

There are many code coverage plugins in Jenkins: Cobertura, JaCoCo, Emma, etc. The problem with these plugins is that each of them implements all code coverage features on their own, so you get different feature sets, UIs, CLI commands and REST APIs. The idea of this project was to unify the existing functionality and offer a new API plugin which other plugins could extend. It would help to simplify existing plugins and to create new plugins for coverage tools.

The project has started really well, and we had the first demo after a week of coding. Then Shenyu continued extending the plugin’s functionality over coding periods. Here is the list of the key features offered by the plugin:

  • Flexible data structure for defining and storing coverage metrics within Jenkins

  • Coverage charts and trends

  • Source code navigation

  • REST API for retrieving coverage stats and trends

  • Report aggregation for parallel steps

  • Extension points which allow integrating other plugins

In addition to the Code Coverage API Plugin, Shenyu added integration to the Cobertura Plugin and also created a new llvm-cov plugin which is expected to be released soon.

After GSoC Shenyu continued contributing to the Jenkins project. He works on the Code Coverage API plugin and also participates in the Chinese Localization SIG.

Simple Pull-Request Job Plugin

This project focused on introducing a way to easily define pull-request build job definitions in YAML. This project has been shaped a lot during the application period and community bonding, so that the project fit the existing Jenkins Ecosystem better. Finally it was decided to build the new plugin on the top of Pipeline: Multi-Branch Plugin. There was also an idea to offer extra syntax sugar, templating and automatic resolution for common flows, so that users need less time to define Pipelines for common use-cases.

The plugin allows defining Pipeline jobs as YAML stored in SCM. The original design presumed a new job type, but during community bonding and Phase 1 prototyping it was decided to build the plugin on top of the existing Pipeline ecosystem and extension points. Currently the plugin generates Declarative Pipeline code from YAML so that it gets a lot of Pipeline features out of the box. In addition to that, Simple Pull Request Job Plugin uses an engine provided by the Configuration as Code plugin to convert YAML snippets to Pipeline step definitions.

The plugin has been well described by Abhishek in his Pipeline as YAML blogpost in August. Currently it is available in the Experimental Update Center as an alpha version. Pham Vu Tuan, one of our GSoC students, has also joined the plugin team. At the DevOps World - Jenkins World hackfest we had discussions with the Jenkins Pipeline team, and we have a plan towards making this plugin available as an Incubated Pipeline project. The final implementation may change, but in any case the project gave us a working prototype and a lot of information about obstacles we need to resolve.

Remoting over Apache Kafka

Last but not least, Remoting over Kafka is another challenging project we had. To implement communication between masters and agents, Jenkins widely uses home-grown protocol implementations based on TCP (the JNLP 1-4 protocols). There are some performance and stability limitations, and there have been discussions about using an industry-standard message bus or queue. Pham Vu Tuan proposed to use Apache Kafka for it, and after some experiments during community bonding and the first coding phase we agreed to go forward with this implementation.

During his project Vu Tuan extended Jenkins Core and Remoting to allow implementing an agent communication channel in a plugin. Then he created a new Remoting over Kafka plugin which is now available in the main Jenkins Update Center. Once the plugin is installed, it is possible to connect to agents over Apache Kafka and execute all types of Jenkins jobs there. There are also official jenkins/remoting-kafka-agent images available on DockerHub.

Vu Tuan continued contributing to the Jenkins project after GSoC; currently he maintains the Remoting over Kafka plugin. He visited the DevOps World - Jenkins World US conference in September and presented his GSoC project at the Jenkins Contributor Summit. You can find his slides here. After the conference he also participated in the hackfest where he helped to migrate Jenkins' DNS services to Microsoft Azure.

What could we do better?

After the end of GSoC we had a Retrospective with GSoC students and mentors. We discussed the issues we encountered during the projects, and ways to improve the student and mentor experience.

Main takeaways for us:

  • GSoC projects should be aligned with Jenkins Special Interest Groups (SIGs) or subprojects in order to get a wider list of stakeholders. Projects should be aligned with SIG priorities when possible

  • In addition to GSoC SIG meetings and Jenkins Online Meetups during student evaluation, we should also run regular status updates within SIGs so that there are more contributors involved in the projects

  • We should invest more time into forming mentor teams before the application period starts. This year there were changes in mentor teams after the community bonding started, and it complicated the work

  • We should pay more attention to student eligibility. This year we started from 4 projects, but unfortunately one project (EDA plugins for Jenkins) got cancelled due to the visa limitations the student had.

  • We should do regular office hours for mentors/students so that it is possible to exchange information between GSoC projects within the organization. This year we cancelled them at the end of the phase and relied only on regular project meetings and mailing lists, but this was not enough.

For me personally the main takeaway is also to reduce my direct involvement in the projects as a mentor and technical advisor. Doing org administration, logistics, and mentorship is not good from a bus factor PoV, and I believe I was pushing my vision too hard in a few cases. I will do my best to prevent that next year.

If you want to share your feedback and ideas, please reach out to us using the GSoC mailing list.

What’s next?

In order to improve GSoC organization in Jenkins, we have created a GSoC Special Interest Group which will be running non-stop like other SIGs in Jenkins. The objective of the SIG is to organize GSoC, work with potential students/mentors, and to help students stay involved in the community after GSoC ends. In this SIG we will have monthly meetings to sync up on GSoC. If you are interested in contributing, please join the SIG.

According to the Retrospective, next year we plan to invest more into communication with mentors. We will also try to tie new project proposals to Jenkins Special Interest Groups so that the students become a part of ongoing coordinated efforts. This weekend Martin d’Anjou, Jeff Pearce, and I are participating in the GSoC Mentor Summit to share experiences and to learn from other GSoC organizations. On October 17 we will have a GSoC SIG meeting to discuss our experience and next steps.

In addition to that, Jenkins Google Summer of Code will be presented at DevOps World - Jenkins World Nice and at the contributor summit. If you plan to visit the conference and you are interested in participating in Google Summer of Code and other community activities, please join us at the contributor summit or stop by at the community booth.

And, the elephant in the room… GSoC 2019. Of course we are going to apply, stay tuned for new announcements. We have already started collecting project ideas for next year. If you are interested in participating as a student or mentor, please reach out to us using the GSoC SIG mailing list.


Build your own Jenkins! Introducing Custom WAR/Docker Packager


I would like to introduce Custom WAR Packager - a new tool for Jenkins administrators and developers. This tool allows packaging custom Jenkins distributions as WAR files, Docker images, and Jenkinsfile Runner bundles. It packages Jenkins, plugins, and configurations in a ready-to-fly distribution. Custom WAR Packager is a part of the Ephemeral Jenkins master toolchain which we presented in our A Cloud Native Jenkins blogpost. This toolchain is already used in Jenkins X to package serverless images.

In this blogpost I will show some common use-cases for Custom WAR Packager.

History

As with Jenkins itself, Custom WAR Packager started as a small development tool. For a long time it was a problem to run integration testing in Jenkins. We have 3 main frameworks for it: Jenkins Test Harness, Acceptance Test Harness, and Plugin Compatibility Tester. All these frameworks require a Jenkins WAR file to be passed to them to run tests. What if you want to run Jenkins tests in a custom environment like AWS? Or what if you want to reuse existing Jenkins Pipeline tests and to run them against Pluggable Storage to ensure there are no regressions?

And it was not just an idle question. There were major activities happening in the Jenkins project: Cloud-Native Jenkins, Jenkins Evergreen, and Jenkins X. All these activities required a lot of integration testing to enable Continuous Delivery flows. In order to do this in existing test frameworks, we needed to package a self-configuring WAR file so that it would be possible to run integration tests in existing frameworks. That is why Custom WAR Packager was created in April 2018. Later it got support for packaging Docker images, and in September 2018 it also got support for Jenkinsfile Runner which was created by Kohsuke Kawaguchi and then improved by Nicolas de Loof.

What’s inside?

Custom WAR Packager is a tool which is available as a CLI executable, a Maven plugin, or a Docker package. The tool takes input definitions and packages them as requested by the user. Everything is managed by a YAML configuration file:

Custom WAR Packager build flow

The tool supports various types of inputs. The list of plugins can be passed via YAML itself, pom.xml, or a BOM file from JEP-309. Custom WAR Packager supports not only released versions, but also builds deployed to the Incremental repository (CD flow for Jenkins core and plugins - JEP-305) and even direct builds by Git or directory path specifications. It allows building packages from any source, without waiting for official releases. The builds are also pretty fast, because the plugin does caching in the local Maven repository by using commit IDs.

Custom WAR Packager also supports self-configuration options such as Groovy hooks and Configuration as Code (see the groovyHooks and casc sections in the sample configuration below).

WAR Packaging

WAR packaging happens by default every time the repo is built. Generally Custom WAR Packager repackages all inputs into a single WAR file by following conventions defined in the Jenkins core and the JCasC plugin.

Sample configuration:

bundle:
  groupId: "io.jenkins.tools.war-packager.demo"
  artifactId: "blogpost-demo"
  vendor: "Jenkins project"
  description: "Just a demo for the blogpost"
war:
  groupId: "org.jenkins-ci.main"
  artifactId: "jenkins-war"
  source:
    version: 2.138.2
plugins:
  - groupId: "io.jenkins"
    artifactId: "configuration-as-code"
    source:
      # Common release
      version: 1.0-rc2
  - groupId: "io.jenkins"
    artifactId: "artifact-manager-s3"
    source:
      # Incrementals
      version: 1.2-rc259.c9d60bf2f88c
  - groupId: "org.jenkins-ci.plugins.workflow"
    artifactId: "workflow-job"
    source:
      # Git
      git: https://github.com/jglick/workflow-job-plugin.git
      commit: 18d78f305a4526af9cdf3a7b68eb9caf97c7cfbc
  # etc.
systemProperties:
  jenkins.model.Jenkins.slaveAgentPort: "9000"
  jenkins.model.Jenkins.slaveAgentPortEnforce: "true"
groovyHooks:
  - type: "init"
    id: "initScripts"
    source:
      dir: src/main/groovy
casc:
  - id: "jcasc"
    source:
      dir: casc.yml

Docker packaging

In order to do the Docker packaging, Custom WAR Packager uses the official jenkins/jenkins Docker images or other images using the same format. During the build the WAR file just gets replaced by the one built by the tool. It means that ALL image features are available for such custom builds: plugins.txt, Java options, Groovy hooks, etc.

## ...
## WAR configuration from above
## ...
buildSettings:
  docker:
    build: true
    # Base image
    base: "jenkins/jenkins:2.138.2"
    # Tag to set for the produced image
    tag: "jenkins/custom-war-packager-casc-demo"
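
Because the result follows the official image format, the freshly built image can be started just like jenkins/jenkins (the tag matches the example above; the port mappings are the usual defaults for the web UI and inbound agents):

docker run --rm -p 8080:8080 -p 50000:50000 jenkins/custom-war-packager-casc-demo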

For example, this demo shows packaging of a Docker image with External Build Logging to Elasticsearch. Although the implementations have been improved as a part of JEP-207 and JEP-210, you can check out this demo to see how the Docker image does self-configuration, connects to Elasticsearch, and then starts externally storing logs without changes in build log UIs. A Docker Compose file for running the entire cluster is included.

Jenkinsfile Runner packaging

This is probably the trickiest packaging mode. In March a new Jenkinsfile Runner project was announced on the developer mailing list. The main idea is to support running Jenkins Pipeline in a single-shot master mode, where the instance just executes a single run and prints the output to the console. Jenkinsfile Runner runs as a CLI or as a Docker image. Custom WAR Packager is able to produce both, though only the Docker run mode is recommended. With Jenkinsfile Runner you can run Pipelines simply as…

docker run --rm -v $PWD/Jenkinsfile:/workspace/Jenkinsfile acmeorg/jenkinsfile-runner

When we started working on Ephemeral (aka "single-shot") masters in the Cloud Native SIG, there was an idea to use Custom WAR Packager and other existing tools (Jenkinsfile Runner, Jenkins Configuration as Code, etc.) to implement it. It would be possible to just replace the Jenkins core JAR and add plugins to Jenkinsfile Runner, but that is not enough. To be efficient, Jenkinsfile Runner images should start up FAST, really fast. In the build flow implementation we used some experimental options available in Jenkins and Jenkinsfile Runner, including classloader precaching, plugin unarchiving, etc. With such patches Jenkins starts up in a few seconds with configuration-as-code and dozens of bundled plugins.

So, how do you build custom Jenkinsfile Runner images? Although there is no release so far, that is not something which can stop us, as you can see above.

##...
## WAR Configuration from above
##...
buildSettings:
  jenkinsfileRunner:
    source:
      groupId: "io.jenkins"
      artifactId: "jenkinsfile-runner"
    build:
      noCache: true
      source:
        git: https://github.com/jenkinsci/jenkinsfile-runner.git
        commit: 8ff9b1e9a097e629c5fbffca9a3d69750097ecc4
  docker:
    base: "jenkins/jenkins:2.138.2"
    tag: "onenashev/cwp-jenkinsfile-runner-demo"
    build: true

You can find a demo of Jenkinsfile Runner packaging with Custom WAR Packager here.

More info

There are many other features which are not described in this blogpost. For example, it is possible to alter Maven build settings or to add/replace libraries within the Jenkins core (e.g. Remoting). Please see the Custom WAR Packager documentation for more information. There are a number of demos available in the repository.

If you are interested in contributing to the repository, please create pull requests and CC @oleg-nenashev and Raul Arabaolaza, the second maintainer, who is now working on Jenkins test automation flows.

What’s next?

There are still many improvements that could be made to the tool to make it more efficient:

  • Add upper bounds checks for transitive plugin dependencies so that the conflicts are discovered during the build

  • Allow passing all kinds of system properties and Java options via configuration YAML

  • Improve Jenkinsfile Runner performance

  • Integrate the tool into Jenkins Integration test flows (see essentialsTest() in the Jenkins Pipeline library)

Many other tasks could be implemented in Custom WAR Packager, but even now it is available to all Jenkins users so that they can build their own Jenkins bundles with it.

Want to know more?

If you are going to DevOps World - Jenkins World in Nice on Oct 22-25, I will be presenting Custom WAR Packager at the Community Booth during the lunch demo sessions. We will also be repeating our A Cloud Native Jenkins talk together with Carlos Sanchez, where we will show how Ephemeral Jenkins works with Pluggable Storage. The Jenkins X team is also going to present their project using Custom WAR Packager.

Come meet Oleg and other Cloud Native SIG members at DevOps World - Jenkins World on October 22-25 in Nice. Register with the code JWFOSS for a 30% discount off your pass.

What to Expect at the Jenkins Contributor Summit

Contributor Summit - Morning
DevOps World | Jenkins World 2018

The Jenkins Contributor summit is where the current and future contributors of the Jenkins project get together. This summit will be on Tuesday, October 23rd 2018 in Nice, France just before Jenkins World. What should those planning on joining expect at the event? Earlier this year in September we had a contributor summit in San Francisco which gave us a pretty good outline of what to expect. First of all it was one of the biggest contributor summits ever with lots of first-time attendees.

Morning

There are plenty of exciting developments happening in the Jenkins community, which meant there was a packed program. One of the most anticipated updates was Kohsuke Kawaguchi speaking about Jenkins Shifting Gears.

There were also updates on the '5 Jenkins Superpower' projects in active development:

As ever Jenkins is a community driven by its members so it was also great to get an update on Google Summer of Code.

Birds-of-a-feather (BoF)

After a packed morning of updates, it was time for a break and some lunch. After lunch, attendees divided up into groups and gathered around tables for unconference-style discussions of specific areas. Each table ran differently: some had demos, some did presentations, some hacked on code and others brainstormed ideas. There was definitely a lot of energy in the room and a huge exchange of ideas.

Ignite Talks & Wrap-up

To finish off the session we had a set of ignite talks. Attendees were invited to volunteer on the day - no easy task given the pressure involved - and many did. Hats off to Liam Newman, Mandy Hubbard, Eric Smalling, Pui Chee Chan, Martin d’Anjou and Vishal Raina for getting out of their comfort zone and doing talks. There were two surprise ignite talks, one for James Strachan and one for Kohsuke Kawaguchi, which were highly entertaining and gave the audience lots of laughs. Someone even captured KK’s talk on video. The sound isn’t great but it was a truly visionary talk:

Finally the event finished with swag presentations and a fun Kahoot quiz to wrap things up.

Contributor Appreciation Event

After the summit, contributors were invited to join the after party at Spin, a unique venue in San Francisco where attendees could socialise and also play ping-pong! While some took it seriously, most enjoyed the relaxed way to get to know their fellow contributors.

contributor summit sf

See you in Nice

The event was a lot of fun, and the contributor summit in Nice will follow a very similar structure. All levels of contributor are welcome, there will be lots of opportunities for in-depth discussions, and you can even do an ignite talk! While we won’t be repeating the ping pong event, there will be something equally unique to follow on from the summit.

Attending is free, and no DevOps World | Jenkins World ticket is needed, but RSVP if you are going to attend to help us plan. See you there!

As long as you’re in Nice for the Contributor Summit, join Tracy, Kohsuke, and hundreds of other Jenkins users at DevOps World - Jenkins World on October 22-25. Register with the code JWFOSS for a 30% discount off your pass.

Validate your Jenkinsfile from within VS Code


In my daily work I often have to create or modify Jenkinsfiles, and more often than I would like, I make mistakes. It is a very tedious workflow when you make a change to your Jenkinsfile, create a commit, push the commit and wait for your Jenkins server to tell you that you have missed a bracket.

The Command-line Pipeline Linter (https://jenkins.io/doc/book/pipeline/development/) does a great job of reducing the turnaround times when writing a Jenkinsfile, but its usage has its own inconveniences. You need tools like curl or ssh to make a connection to your Jenkins Server and you need to remember the correct command to validate your Jenkinsfile. I still did not like the solution.
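
For reference, the documented curl variant looks roughly like this (a sketch that assumes JENKINS_URL is set and that CSRF protection is either disabled or handled with a crumb, as described in the linked documentation):

# Validate the Jenkinsfile in the current directory against the server-side linter
curl -X POST -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate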

As VS Code is my daily driver, I started to look at writing extensions for it and out of it came a little extension which makes validating Jenkinsfiles just a little bit more comfortable.

The 'Jenkins Pipeline Linter Connector' extension takes the file that you currently have open, pushes it to your Jenkins server, and displays the validation result in VS Code.

Jenkins Pipeline Linter Connector | Example 1
Jenkins Pipeline Linter Connector | Example 2

You can find the extension in the VS Code extension browser or at the following URL: https://marketplace.visualstudio.com/items?itemName=janjoerke.jenkins-pipeline-linter-connector

The extension adds four settings entries to VS Code which you have to use to configure the Jenkins Server you want to use for validation.

The Silence of the Lambs: Inspecting binaries with Jenkins


This is a guest post by Michael Hüttermann.

In a past blog post, Delivery Pipelines, with Jenkins 2, SonarQube, and Artifactory, we talked about pipelines which result in binaries for development versions, and in Delivery pipelines, with Jenkins 2: how to promote Java EE and Docker binaries toward production, we examined ways to consistently promote applications toward production. In this blog post, I build on both by discussing more details of security-related quality gates and bringing this together with the handling of Docker images.

Use case: Foster security on given, containerized business application

Security is an overloaded term with varying meanings in different contexts. For this contribution, I consider security to be the sum of rules regarding vulnerabilities (Common Vulnerabilities and Exposures, CVE) in binaries. In a past blog post, we already identified SonarQube as a very helpful tool to identify flaws in source code, particularly concerning reliability (bugs), vulnerabilities (security, e.g. CWE, that is the Common Weakness Enumeration, and OWASP, that is the Open Web Application Security Project), and maintainability (code smells). Now is a good time to add another tool to the chain, namely Twistlock, for inspecting binaries for security issues. Features of Twistlock include:

  • Compliance and vulnerability management, transitively

  • Runtime defense

  • Cloud-native CI/CD support

  • Broad coverage of supported artifact types and platforms

  • API, dashboards, and Jenkins integration, with strong configuration options

The underlying use case can be derived from several real-world security initiatives in enterprises, based on existing containerized applications. In practice, it is no surprise that after adding such new quality gates, you identify historically grown issues. However, there are many good reasons to do so. You don’t need any Word documents to check governance criteria manually; rather, execution and reporting are done automatically, and some of the resulting actions are taken automatically as well. And above all, of course, your application is quality assured regarding known vulnerability issues, aligned with the DevOps approach: development is interested in quick feedback on whether their change would introduce any vulnerabilities, and operations is interested in insights into whether and how running applications are affected if a new CVE is discovered.

The term DevSecOps was coined to explicitly add security concerns to DevOps. In my opinion, security is already an inherent part of DevOps. Thus, there is no strong reason to introduce a new word. Surely, new words are catchy. But they have limits. Or have you ever experienced NoDev, the variant of DevOps where features suddenly fall from the sky and are deployed to production automatically?

Conceptually, container inspection is now part of the delivery pipeline, and Twistlock processing is triggered once we have produced our Docker images (see below), in order to get fast feedback.

01

Software is staged across different environments by configuration, without rebuilding. All changes go through the entire staging process, although defined exception routines may be in place; for details see Michael Hüttermann, Agile ALM (Manning, 2012). The staged software consists of all artifacts which make up the release, consistently, including the business application, test cases, build scripts, Chef cookbooks, Dockerfiles, and the Jenkinsfiles to build all that in a self-contained way; for details see Michael Hüttermann, DevOps for Developers (Apress, 2012).

This blog post covers sample tools. Please note that there are also alternative tools available, and the best target architecture is aligned with concrete requirements and given basic conditions. Besides that, the sample toolchain is derived from a couple of real-world success stories, designed and implemented in the field. However, this blog post simplifies and abstracts them in order to stay focused while discussing the primitives of delivery units. For example, aggregating multiple Docker images with ASCII files does not change the underlying primitives and their handling. For more information on all parts of this blog post, please consult the respective documentation, good books, or attend fine conferences. Or go to the extremes: talk to your colleagues.

In our sample process, we produce a web application that is packaged in a Docker image. The produced Docker images are distributed only if the dedicated quality gate passes. A quality gate is a stage in the overall pipeline and a set of defined commitments, often called requirements, that the unit of work must pass. In our case, the quality gate comprises inspection of the produced binaries, and it fails if vulnerabilities of severity 'critical' are found. We can configure Twistlock according to our requirements. Have a look at how we’ve integrated it into our Jenkins pipeline, with a focus on detecting vulnerabilities.

Jenkinsfile (excerpt): Twistlock inspection triggered
stage('Twistlock: Analysis') { (1)
    String version = new File("${workspace}/version.properties").text.trim() (2)
    println "Scanning for version: ${version}"
    twistlockScan ca: '', cert: '', compliancePolicy: 'critical', \
        dockerAddress: 'unix:///var/run/docker.sock', \
        ignoreImageBuildTime: false, key: '', logLevel: 'true', \
        policy: 'critical', repository: 'huttermann-docker-local.jfrog.io/michaelhuettermann/alpine-tomcat7', \ (3)
        requirePackageUpdate: false, tag: "$version", timeout: 10
}

stage('Twistlock: Publish') { (4)
    String version = new File("${workspace}/version.properties").text.trim()
    println "Publishing scan results for version: ${version}"
    twistlockPublish ca: '', cert: '', \
        dockerAddress: 'unix:///var/run/docker.sock', key: '', \
        logLevel: 'true', repository: 'huttermann-docker-local.jfrog.io/michaelhuettermann/alpine-tomcat7', tag: "$version", \
        timeout: 10
}
(1) Twistlock inspection as part of the sequence of stages in Jenkinsfile
(2) Nailing down the version of the to be inspected image, dynamically
(3) Configuring analysis including vulnerability severity level
(4) Publishing the inspection results to Twistlock console, that is the dashboard

Now let’s start with the first phase of bringing our application back into shape, that is, gaining insight into the security-related flaws.

After we introduced the new quality gate, it failed, see the image above. As with other tool integrations, Jenkins is the automation engine and does provide helpful context information; however, it cannot replace the features and data that the dedicated, triggered tool offers. Thus, this is the moment to switch to the dedicated tool, that is Twistlock. Opening the dashboard, we can navigate to the Jenkins build jobs, that is the specific run of the build, and to the respective results of the Twistlock analysis. What we see now is a list of vulnerabilities, and we need to fix those of severity critical in order to pass the quality gate and get our changes moving toward production again. The list shows entries of type jar, that is findings in binaries that are part of the Docker image, in our case the WAR file we’ve deployed to a web container (Tomcat), and of type OS, that is issues in the underlying image itself, the operating system, either part of the base image or a package added/changed in our Dockerfile.

02

We can now easily zoom in and examine the vulnerabilities of the individual Docker layers. This really helps to structure the work and identify root causes. Since a Docker image typically extends a Docker base image, the findings in the base image are shown at the top, grouped by severity, see the next screenshot.

03

Other Docker layers were added on top of the base image, and those can add vulnerabilities too. In our case, the packaged WAR file obviously contains a vulnerability. The next image shows how we examine that finding, this time expanding the Twistlock wizard (that is, the plus sign) to directly see the list of found vulnerabilities.

04

Finding and visualizing the issues is a very good first step, and we’ve even made those findings actionable, so now we have to take action and address them.

Phase 2: Address the findings

To address the findings, we need to split our initiative into two parts:

  1. Fixing the critical vulnerabilities related to the Docker image (in our case largely the base image)

  2. Fixing the critical vulnerabilities related to the embedded deployment unit (in our case the WAR)

Let’s proceed bottom up, first dealing with the Docker base image.

This is an easy example covering multiple scenarios, particularly identifying and fixing vulnerabilities in transitive binaries, i.e. binaries contained in other binaries, e.g. a Docker image containing a WAR file that in turn contains libraries. To expand this vertical feasibility spike, you can easily add more units of each layer, or add more abstractions; however, the idea can always be nailed down to the primitives covered in this blog post.

Let’s now have a look at the Docker image in use by examining its Dockerfile.

Dockerfile: The Dockerfile based on Alpine, running OpenJDK 8
FROM openjdk:8-jre-alpine (1)
LABEL maintainer "michael@huettermann.net"

# Domain of your Artifactory. Any other storage and URI download link works, just change the ADD command, see below.
ARG ARTI
ARG VER

# Expose web port
EXPOSE 8080

# Tomcat Version
ENV TOMCAT_VERSION_MAJOR 9 (2)
ENV TOMCAT_VERSION_FULL  9.0.6

# Download, install, housekeeping
RUN apk add --update curl &&\  (3)
  apk add bash &&\
  #apk add -u libx11 &&\  (4)
  mkdir /opt &&\
  curl -LO ${ARTI}/list/generic-local/apache/org/tomcat/tomcat-${TOMCAT_VERSION_MAJOR}/v${TOMCAT_VERSION_FULL}/bin/apache-tomcat-${TOMCAT_VERSION_FULL}.tar.gz &&\
  gunzip -c apache-tomcat-${TOMCAT_VERSION_FULL}.tar.gz | tar -xf - -C /opt &&\
  rm -f apache-tomcat-${TOMCAT_VERSION_FULL}.tar.gz &&\
  ln -s /opt/apache-tomcat-${TOMCAT_VERSION_FULL} /opt/tomcat &&\
  rm -rf /opt/tomcat/webapps/examples /opt/tomcat/webapps/docs &&\
  apk del curl &&\
  rm -rf /var/cache/apk/*

# Download and deploy the Java EE WAR
ADD http://${ARTI}/list/libs-release-local/com/huettermann/web/${VER}/all-${VER}.war /opt/tomcat/webapps/all.war (5)

RUN chmod 755 /opt/tomcat/webapps/*.war

# Set environment
ENV CATALINA_HOME /opt/tomcat

# Start Tomcat on startup
CMD ${CATALINA_HOME}/bin/catalina.sh run
(1) Base image ships OpenJDK 8, on Alpine
(2) Defined version of web container
(3) Applying some defined steps to configure Alpine, according to requirements
(4) Updating package itself would address one vulnerability already
(5) Deploying the application

By checking the available versions of the official OpenJDK Alpine image, we see that there’s a newer version, 8u181, which we could use. We can zoom in and study the release notes and contents, or we can just pragmatically switch the base image to the more recent version. It is often a good idea to upgrade versions regularly, at defined intervals. This leads to the following change in the Dockerfile.

Dockerfile (excerpt): The Dockerfile based on Alpine, running OpenJDK 8u181
FROM openjdk:8u181-jre-alpine (1)
LABEL maintainer "michael@huettermann.net"
(1) Base image is now OpenJDK 8u181, on Alpine

There are more options available to fix the issues, but let’s proceed to the second part, the vulnerabilities in the deployment unit.

Before we push this change to GitHub, we also address the vulnerability issue in the deployment unit, that is, in jetty-io. Here we are a bit unsure why, in this specific use case, the library is used at all. To retrieve more information about the dependencies, we run the dependency:tree goal on our Maven-based project. We now see that jetty-io is transitively referenced by org.seleniumhq.selenium:htmlunit-driver. We can surely discuss why this is a compile dependency and why the libraries are shipped as part of the WAR, but let’s consider this to be given according to requirements; thus we now have to pay special attention to version 2.29.0 of that specific library.
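
If you want to reproduce this kind of analysis, a minimal sketch looks like this (the -Dincludes filter is optional; the coordinates are just the ones relevant to this example):

# Print the full dependency tree of the Maven project
mvn dependency:tree

# Narrow the output down to the library in question
mvn dependency:tree -Dincludes=org.eclipse.jetty:jetty-io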

05

Here too we can browse release notes and content (particularly how those libs are built themselves), and we come to the conclusion to switch from the version in use, that is 2.29.0, to a newer version of htmlunit-driver, that is 2.31.1.

pom.xml (excerpt): Build file
<dependencies> (1)
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>3.14.0</version>
    </dependency>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId> (2)
        <artifactId>htmlunit-driver</artifactId>
        <version>2.31.1</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.7</version>
    </dependency>
(1) Part of the underlying POM defining dependencies
(2) Definition of the dependency, causing the vulnerability finding; we use a newer version now

OK, now we are done. We push the changes to GitHub, and our GitHub webhook directly triggers the workflow. This time the quality gate passes, so it looks like our fixes addressed the root causes and eliminated the vulnerabilities at the configured severity threshold.

06

Finally, after running through our entire workflow, which is made up of different pipelines, our inspected and quality-assured container successfully runs in our production runtime environment, that is, on Oracle Cloud.

07

Crisp, isn’t it?

Summary

This concludes our quick walkthrough of how to inject security-related quality gates into a Jenkins-based delivery pipeline. We’ve discussed some concepts and what this can look like with sample tools. At the center of our efforts we used Jenkins, the Swiss Army knife of automation. We enriched our ecosystem by integrating a couple of platforms and tools, above all Twistlock. After this tasty appetizer you are ready to assess your own delivery pipelines, concepts and tools, and to possibly invest even more attention in security.

Google Summer of Code Mentor Summit 2018

Jenkins GSoC

This year, the Jenkins organization participated in the Google Summer of Code Mentor Summit at the Google office in Sunnyvale on Oct 12, 13 and 14, 2018. The GSoC Mentor Summit is where mentors from all organizations participating in the GSoC program are invited each year to learn and network with mentors from other organizations, and to make GSoC a better program. This is the second time Jenkins mentors have participated in the summit; the first time was in 2016.

Exceptionally, three Jenkins GSoC mentors were invited to the summit this year. Normally only two mentors are invited, but when there are cancellations, Google draws a name at random from the waiting list, and the Jenkins organization was lucky enough to send an extra mentor this year! The mentors participating this year were Oleg Nenashev, Jeff Pearce and Martin d’Anjou.

It is worth mentioning that the Mentor Summit is not a typical conference where you sit and listen to what speakers have to say, quite the contrary. The Mentor Summit is an unconference where participants are invited to fill empty time slots with their own topics of discussion.

Friday Oct 12

Pre-conference meeting

The mentors had a short pre-conference meeting to reflect on the Jenkins participation in the 2018 GSoC program, and to plan for 2019. We were joined at this meeting by Lloyd Chang, whom we had met at Jenkins World 2018. Thank you Lloyd for joining us! A few ideas we had for 2019 are:

  • Move project proposals to individual Google Documents

  • Create a template for projects proposed by potential mentors and by project champions

  • Create an Organization Administrator Guide for future Jenkins GSoC project admins

Other preparations we agreed to work on include a review of the 2018 feedback and the creation of an Epic capturing the action items in preparation for 2019. We are also planning on making progress on the GSoC Budget process described in JEP-8.

Summit Starts!

The summit started with a welcome dinner at the Google Cafeteria and an evening session where it was explained to us how the unconference would work. We proposed a few topics: dealing with CPT lost slots, motivating mentors, and Open Source Hardware ASIC/FPGA.

One thing to note is that everyone at the conference had heard of Jenkins or was already using Jenkins. Lots of people came to tell us about their Jenkins experience.

I noticed this too - made me feel proud to be part of the Jenkins project.

— Jeff Pearce
Jenkins GSoC mentor in 2018

Saturday Oct 13

Unconference sessions

The morning started with a couple of announcements from Google. The first one was that Google is thinking of creating a program called "Google Season of Docs" (GSoD for short), where technical writers would be paired with Open Source Organizations to help them write documentation such as:

  • High-impact tutorials

  • Set of How-To Guides

  • Contributor’s Guide

  • Documentation refactoring

  • Plain documentation

We have additional details regarding this in the GSoC Mentor Summit Notes, and we quickly concluded that if this program comes to life, Jenkins should be a participating organization.

The other announcement made by the GSoC administrators is that GSoC may take a different form in 2020. However, not much more information has been made available at this time. The program has been operating for 13 years, and by 2020 it will have been running for 15 years.

The announcements were followed by a series of morning lightning talks. This is where organizations showcase what their students accomplished during the program. This is when we had a bit of a surprise…​

Oleg, who had signed up for the evening lightning talks, was watching the talks while casually preparing slides for his evening presentation. But something unusual happened: many talks were shorter than the allotted 3 minutes, and suddenly we were ahead of schedule. That’s when Oleg was called to the stage. I had no idea whether his slides were ready or not, since he had just leaned over to me to say that he wanted to talk about all 3 projects we had this year. Not knowing how far he had gotten in refactoring the slides, this was going to be…​ interesting. Being an experienced presenter, Oleg pulled it off brilliantly. The slides were effectively ready (how he managed that I have no idea), and you can see the slides of his lightning talk here: Jenkins Remoting over Apache Kafka.

Then there were the unconference sessions. Some of the sessions we attended are:

  • Documentation

  • Attracting and retaining mentors (facilitated by Martin)

  • Organizing and motivating volunteers and mentors

  • Getting students from coding/boot camps involved in open source

  • Retaining students after GSoC

  • Open Event management System

  • GSoC Feedback

We have notes for all the sessions in the main document. Some sessions were captured in separate documents, which are linked from the main document or from this blog post.

There were lots of good ideas in those sessions, and we will do what we can next year to implement some of them.

Some organizations have said that the key for student retention is to give them responsibilities and tasks after the program is over. We have certainly seen that this year, with one of our students asking for more responsibilities and wanting to know how his plugin project could continue to grow within the Jenkins project (while at the same time help out on another GSoC plugin!).

In the evening came the second round of lightning talks. Jeff Pearce presented the Code Coverage API Plugin lightning talk (he was not caught by surprise).

Chocolate table at the GSoC 2018 Mentor Summit

After the lightning talks, we were invited to hang out at the cafeteria and on the patio, to exchange stickers, network with mentors of other organizations, and enjoy late evening snacks, music and of course the chocolate table!

Sunday Oct 14

On Sunday, the sessions continued. An interesting session was "Beyond GSoC, What can Google do?". One person got a big round of applause when he said: "Cloud credits". It turns out the GSoC program admins have been trying to get that for us for about 3 years. Google may be big and powerful, but some things are hard and remain hard in the corporate world.

An interesting suggestion was made by Oleg: a program with smaller, shorter-term commitments, something that would encourage more granular contributions but would not require a 4-month-long commitment. This was noted by the GSoC program admins.

Then we attended a number of other sessions.

Then the day came to an end with some last words by Google thanking all the mentors and volunteers who run this program in their organizations.

Return trip

I would now like to add a personal note. After the summit, like many others, I flew back home, so I spent the evening at the SFO international terminal waiting for my late-night flight. That is where I got to meet more mentors, as some of us were still wearing our badges and T-shirts, and we recognized each other from the conference. And funnily enough, there were so many geeks at that terminal that we may have recruited, from among the passengers, a mentor for another org for next year!

Want a GSoC student to work on your project in 2019?

We have already started the preparations for GSoC 2019. And we cannot do this without the participation of the Jenkins community. We are already looking for:

  • Mentors from the Jenkins Special Interest Groups

  • Mentors from any background and any provenance (being a Jenkins developer is NOT required)

  • Project proposals

  • Students and their proposals

Lots of people are afraid that mentoring a student will take a lot of their time. If you feel that way, you are not alone. It does take some time. In my case, I spent 5 to 8 hours per week on mentor tasks (more at the start, less at the end). To make it easier on mentors, who likely have full-time jobs and life commitments, we define different mentor roles:

  • Project champion co-mentor: this is the mentor who proposes the idea, but may not have all the Jenkins code expertise needed. This mentor works with the student to define the project and acts mostly as a "customer" of the project. This mentor usually knows enough about coding to comment on pull requests with regard to the overall quality, style and features of the code.

  • Technical co-mentor: this is the mentor who knows enough about the Jenkins code to guide the student on coding, and to provide Jenkins specific code reviews on pull-requests, but has limited involvement outside the coding activity of the student.

There is a third role which is:

  • Subject Matter Expert: these individuals are not mentors, but we reach out to them 3-4 times during the project for advice and guidance, and sometimes for help with complicated programming challenges.

If you have questions or are curious about the program, contact us on the GSoC Gitter SIG chat.

We would like to emphasize that project proposals are not limited to "big projects". For example, it is perfectly fine to have a proposal that is a collection of related Jira issues that aim to improve your project, or a list of tasks that need to be done for your project. Writing documentation is outside the scope of GSoC, but automating documentation generation, as long as it is mostly about writing code, is within the scope of GSoC.

We look forward to working with the Jenkins community on GSoC 2019!

Important security updates for Jenkins


We just released security updates to Jenkins, versions 2.154 and LTS 2.150.1, that fix multiple security vulnerabilities. Since 2.150.1 is the first release in the new LTS line, we also released 2.138.4, a security update for the previous LTS line. This allows administrators to install today’s security fixes without having to upgrade to the new LTS line immediately.

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes, see our LTS 2.138.4 upgrade guide.

In the Jenkins core security updates released in August and October, we also included security improvements that can be disabled by setting various system properties. Those changes are an essential part of the SECURITY-595 fix, so we strongly recommend not disabling them for any reason. Previously published documentation has been updated.

Official Jenkins image to use from Docker Hub


There are now three different Docker Hub repositories that are, or have been, used as the "official" Jenkins image. This article aims to clarify which one is the current official repository (as of December 2018 :-)).

The official one

docker pull jenkins/jenkins

i.e. https://hub.docker.com/r/jenkins/jenkins/ is the right repository to use.

Some time ago, I also documented on my blog the recommended way to run Jenkins using the official Docker image.
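
As a quick reminder, a minimal way to run the official image with persistent data looks roughly like this (the container name, ports and volume name are only examples; see the image documentation for the recommended setup):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts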

The deprecated ones

jenkins

This repository has been deprecated for a long time already. The short version of why we stopped using and updating this image is that we never had a way to get our images published without going through a manual process each time.

jenkinsci/jenkins

Also deprecated for a long time, but to ease the transition we had kept updating both jenkins/jenkins (the right one) and jenkinsci/jenkins together. We stopped updating jenkinsci/jenkins in early December 2018 (cf. INFRA-1934 for details if you are interested).

Thanks for reading!


Outreachy internships to add audit logging support to Jenkins


This year marks the first time the Jenkins project is participating in Outreachy. Outreachy is a program similar to Google Summer of Code (GSoC) where interns work on open source projects for a paid stipend. The key difference is that Outreachy reaches out to underrepresented groups and those who face systemic bias or discrimination in the technology industry in their home country. Once I learned about this program, I immediately volunteered to mentor, as the concept strongly aligns with my ideals of inclusiveness and community building. I’m happy to report that both the Jenkins project and my employer, CloudBees (https://www.cloudbees.com), have been very supportive of this program.

Expanding on our previous efforts to mentor students in GSoC, this year we’ve joined up with Outreachy to mentor two interns. Our interns for this season of Outreachy, Latha Gunasekar and David Olorundare, will be working with me on audit logging support for Jenkins. I am excited to welcome both David and Latha, and am looking forward to what they will learn about both professional software engineering and contributing to an open source community. Stay tuned for blog post entries introducing both people in the near future.

The audit logging support project forms a new connection between Jenkins and Apache Log4j which offers great opportunities for our interns to learn more about open source governance and meet new people. As a bonus, the project aims to provide the tooling necessary to support advanced observability concerns such as running anomaly detection on authentication events to detect potential intrusion attempts. We will also be authoring a JEP to detail the audit logging API provided by the plugin and how other plugins can define and log their own audit events besides the Jenkins Core ones that come with the plugin.

I’m looking forward to the great work we’ll be doing together, and I hope that we’ll be able to welcome more Outreachy interns in the future!

KubeCon + CloudNativeCon North America 2018 is Here!


KubeCon + CloudNativeCon North America 2018

The time has come - KubeCon + CloudNativeCon North America 2018 has arrived. The conference has completely sold out and the schedule is jam-packed with interesting talks.

If you’re among those with tickets, here are a couple of Jenkins-related events that might interest you:

I look forward to seeing you there!

Java 11 Support Preview is available in Jenkins 2.155+

This is a joint blog post prepared by the Java 11 Support Team. On Dec 18 (4PM UTC) we will also be presenting the Java 11 Preview Support at the Jenkins Online Meetup (link)

Jenkins Java

Jenkins, one of the leading open-source automation servers, still supports only Java 8. On September 25, OpenJDK 11 was released. This is a Long-Term Support (LTS) release which will stay around for years, and in the Jenkins project we are interested in offering full support for this version. Over the last year many contributors have been working towards enabling support for Java 11 in the project (Jenkins JEP-211). It was a thorny path, but now, on behalf of the Jenkins Platform SIG, we are happy to announce the preview availability of Java 11 support in Jenkins weekly releases!

Why do we need preview availability for Java 11? It offers Jenkins contributors and early adopters a way to try out the changes before the general availability release happens early next year. It should help us get more exploratory testing and, hopefully, resolve most of the issues before Java 11 is officially supported in Jenkins.

In this blog post we will describe how to run Jenkins with Java 11, and how to investigate and report compatibility issues.

Background

As you probably remember, in June 2018 we had an online hackathon targeting Java 10+ support in Jenkins. As a part of the hackathon, we provided experimental support for Java 11. This event was a big success for us, and we were able to get Jenkins running with Java 10 and 11-ea, including major features like Jenkins Pipeline, JobDSL, the Docker/Kubernetes plugins, Configuration as Code, BlueOcean, etc. It gave us confidence that we can provide Java 11 support in Jenkins without major breaking changes. After the hackathon, Oleg Nenashev created JEP-211: Java 10+ support in Jenkins (later adjusted to target Java 11 only). The Platform Special Interest Group was also founded to coordinate the Java 11 support work and other platform support efforts (packaging, operating system support, etc.).

A group of contributors continued working on Java 11 support, mostly focusing on upstreaming functional patches, enabling Java 11 support in development tools, and testing and addressing known compatibility issues. See the Platform SIG meeting notes for detailed status updates. Starting from Jenkins 2.148, Jenkins successfully runs with the latest OpenJDK 11 releases on various Linux and Windows platforms. We performed a LOT of automated and exploratory tests, and Jenkins plugins appear to work well, with some exceptions (see below). There is an ongoing test automation effort towards the GA releases, but we were already able to successfully run the Jenkins core tests, the full Acceptance Test Harness, and the Plugin Compat Tester for the recommended plugins. We also deployed a temporary Experimental Update Center for Java 11 which allows us to quickly deliver fixes to Java 11 early adopters. Jenkins 2.155+ defaults to this update center when running with Java 11, and that’s why we announce preview availability for this version.

On Nov 19, 2018 we presented the current Java 11 support status at the Platform SIG meeting (slides), and we agreed that we would like to proceed with the preview availability so that we can offer something for evaluation to Jenkins users. By the next meeting on Dec 04, all blockers had been addressed, and the Platform SIG signed off on the Java 11 preview availability.

Running Jenkins and Java 11 in Docker

Starting from Jenkins 2.155, we provide Docker images for the Jenkins master and agents. All these images are based on the official openjdk:11-jdk image maintained by the Docker community. There were discussions about migrating to other base images, but we decided to exclude that from the preview availability scope. Similarly, we do not provide Alpine images for now.

Jenkins master image

Java 11 support is now provided as a part of the official jenkins/jenkins image. You can run Jenkins with Java 11 simply as:

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:jdk11

The following tags are available:

  • jdk11 - Latest weekly release with Java 11 support

  • 2.155-jdk11 - Weekly releases packaged with Java 11

The image is fully compatible with the jenkins/jenkins documentation, e.g. you can use plugins.txt to install plugins, mount volumes, and pass extra options via environment variables.
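
For example, a run that persists the Jenkins home in a named volume and passes extra JVM options via JAVA_OPTS might look roughly like this (the volume name and the option are illustrative):

docker run -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -e JAVA_OPTS="-Dhudson.footerURL=https://example.com" \
  jenkins/jenkins:jdk11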

Agent images

If you use containerized agents via Docker or Kubernetes plugins, we have also released official Docker images for Jenkins agents:

All images use the latest-jdk11 image tag for JDK11 bundles. And sorry for the obsolete names!

Experimental Jenkins master images

In order to simplify testing, we also provide some experimental images on Docker Hub. We have set up a continuous delivery flow for them, so you can get patches without waiting for the Jenkins weekly releases.

  • jenkins4eval/blueocean-platform-support - Equivalent of jenkinsci/blueocean

    • Tag: latest-jdk11

    • The image bundles all Jenkins Pipeline and Blue Ocean patches required to run on Java 11

    • If you want to try Pipeline, use this image

  • jenkins/jenkins-experimental - Equivalent of jenkins/jenkins

    • Tag: latest-jdk11

    • The image is released from the java11-support feature branch in the Jenkins core

    • The branch may be slightly ahead of or behind the master branch; we may use the branch to quickly deliver patches to Java 11 users

Eventually we will move the experimental flow to the new jenkins4eval organization being created as a part of JEP-217.

Running jenkins.war with Java 11

Running without Docker is not that trivial, because Jenkins depends on some modules which have been removed from Java 11. We plan to address this somehow in the General Availability release (see JENKINS-52186), but for now some manual actions are required to run the Jenkins WAR with Java 11.

  1. Download Jenkins WAR for 2.155

  2. Download the following libraries to the same directory as jenkins.war: jaxb-api.jar, javax.activation.jar, jaxb-core.jar and jaxb-impl.jar (as referenced in the command below)

  3. Run the following command:

Run Jenkins with:
${JAVA11_HOME}/bin/java \
    -p jaxb-api.jar:javax.activation.jar --add-modules java.xml.bind,java.activation \
    -cp jaxb-core.jar:jaxb-impl.jar \
    -jar jenkins.war --enable-future-java --httpPort=8080 --prefix=/jenkins

Known compatibility issues

To help users to track down the compatibility issues, we have created a new Known Java 11 Compatibility Issues Wiki page.

Several important issues and obstacles:

  • Pipeline: Support Plugin has a known issue with context persistency when running with Java 11 (JENKINS-51998)

    • We have deployed a temporary fix to the Experimental Update Center for Java 11. Fix version: 3.0-java11-alpha-1

    • If you use Jenkins Pipeline, make sure you run with this fix. Otherwise the jobs will fail almost immediately

    • When updating instances to Java 11, make sure there are no running Pipelines

  • JENKINS-54305 - JDK Tool Plugin does not offer installers for JDK 11

  • JENKINS-52282 - Java Web Start is no longer available in Java 11, so it is no longer possible to start agents from Web UI. We do not plan to provide a replacement.

We also know about some minor incompatibilities in other plugins, but we do not consider them as blockers for preview availability.

Reporting compatibility issues

If you discover any Java 11 incompatibilities, please report issues in our bug tracker. Please set the java11-compatibility label on such issues so that they automatically appear on the Wiki page and get triaged.

For security issues, please use the standard vulnerability reporting process. Although we will be fixing Java 11 specific issues in public while the support is in preview, following the security process will help us investigate the impact on Java 8 users.

Java 11 Support Team

Once Java 11 support is released, we expect reports of regressions in plugins and Jenkins core. One of the concerns is exotic platforms with native libraries, and of course other Java versions. There is also a risk of third-party library incompatibilities with Java 11. To mitigate these risks, we have created a Java 11 Support Team. This team will focus on triaging the incoming issues, helping to review pull requests and, in some cases, delivering the fixes. The process for this team is documented in JEP-211.

We do not expect the Java 11 Support Team to be able to fix all discovered issues, and we will be working with Jenkins core and plugin maintainers to get the fixes delivered. If you are interested in joining the team, reach out to us in the Platform SIG Gitter channel.

Contributing

We will appreciate any kind of contribution to the Java 11 effort, including trying out Jenkins with Java 11, and reporting and fixing compatibility issues.

  • If you want to do exploratory testing, we recommend trying out Java 11 support on one of your test instances. Such testing will be much appreciated, especially if you use service integration plugins or exotic platforms. The issue reporting guidelines are provided above

  • If you are a plugin developer/maintainer, we would appreciate it if you could test your plugin with Java 11. To help with that, we have created a Wiki page with Java 11 Developer guidelines. This page explains how to build and test plugins with Java 11, and it also lists known issues in development tools; a minimal smoke test is sketched below
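
A minimal sketch of such a smoke test, assuming a locally installed JDK 11 and a standard Maven-based plugin (the JDK path is just an example; follow the developer guidelines for the full setup):

# Build and test the plugin with JDK 11 instead of your default JDK
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
mvn clean verify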

Whatever you do, please let us know about your experience by sending a message to the Platform SIG mailing list. Such information will help us a lot in tracking changes and contributions. Any other feedback about the migration complexity will also be appreciated!

What’s next?

On Dec 18 (4PM UTC) we will be presenting the Java 11 Preview Support at the Jenkins Online Meetup (link). At this meetup we will summarize the current Java 11 Preview support status. If you are a plugin developer, we will also organize separate sessions about testing plugins with Java 11 and about common best practices for fixing compatibility issues. Please follow the Platform SIG announcements if you are interested.

In the next weeks we will focus on addressing feedback from early adopters and fixing the discovered compatibility issues. We will also continue working on Java 11 support patches towards the general availability next year (JENKINS-51805). In addition to that, we will start working on Java 11 support in subprojects, including Jenkins X and Jenkins Evergreen.

2018 in Review: A year of innovation


The end of a year is a great time to step back from the daily grind to look at the big picture.

Year in review

Across the industry, the relentless march toward more automation continues. We are writing software faster than ever, but the demand for software seems to be going up even more, and I feel more and more businesses and executives are keenly aware that software and developers are king. At the ground level, every team I meet sees software delivery automation as a critical part of their "software factory," and it’s important for them to create and manage it with unconstrained flexibility and visibility.

Jenkins continues to play a major role in making this possible, 14+ years after its birth, and if anything the pace of growth seems to be accelerating. In this dog-year industry, that’s truly remarkable. Being a part of this achievement makes me truly proud.

Building Jenkins, a tool that everyone uses, comes with great responsibility. So within the Jenkins community, we’ve been hard at work. In fact, 2018 has been the single most innovative year in the history of the whole project, across the board and at multiple levels.

  • As we got bigger, we needed better ways to drive initiatives that cut across multiple people. This thinking led to JEPs and SIGs, and 2018 saw these formats getting great traction. After a year of operating them, I think we’ve learnt a lot, and I hope we will continue to improve them based on that learning.

  • These new formats gave rise to new collaborations. For example, the Chinese Localization SIG resulted in our WeChat presence and a localized website. The Platform SIG was instrumental in Java 11 support.

  • I’m also very happy to see a new batch of leaders. For fear of missing some people, I’m not going to list them individually, but we celebrated many of them as Jenkins Ambassadors this fall (and please nominate more for next year!). The people who lead key efforts are often new to those roles.

  • Some of the new leaders led other efforts that unlock new contributors. It’s about consciously thinking about which segments of our potential contributors we aren’t tapping today and understanding why, something any business does all the time. Ours resulted in Google Summer of Code and Outreachy participation.

  • Our security process and the pace of fixes have improved considerably this year again, reflecting how we are stepping up to the trust our users have placed in us. For example, this year we rolled out a telemetry system that informs us so we can develop better fixes more quickly.

Now, where these community improvements ultimately matter is in the impact we create on the software that you use. On that front, I think we did great in 2018, resulting in what I call "5 super powers":

The not-so-secret sauce of the Jenkins community, the thing that threads together all these improvements from user-visible changes to community improvements, is our ability to evolve. As I look forward to 2019, no doubt the things I mentioned will evolve, morph, merge, and split as we continue to learn and adapt.

So please follow @jenkinsci and @jenkinsxio on Twitter to get updates on how we evolve, and join our community to build together the software that rocks the world. How many open-source projects can say that?

Google Summer of Code 2019. Call for Project ideas and Mentors

Google Summer of Code is a program where students are paid a stipend by Google to work full-time on a free, open source project like Jenkins for four months (May to August). Mentors get actively involved with students starting at the end of February, when students start to apply (see the timeline).

Jenkins GSoC

We are looking for mentors and project ideas to participate in the 15th edition of the Google Summer of Code program! We have until February 6th, 2019 at 8pm UTC to submit the application on behalf of the Jenkins organization, but obviously we want to be ready before that.

The first step in the process is to have mentors and project ideas. Then we will apply to Google. We need Google to accept Jenkins' application to the program itself. And for this to happen, we need project proposals and mentors.

We currently have a list of project idea proposals, and we are looking for new project proposals, mentors, technical advisers, and subject matter experts. GSoC projects may be about anything around code: new features, plugins, test frameworks, infrastructure, etc., etc.

Making a project idea proposal is easy; you can read the instructions here. Quick start:

  1. Copy the project proposal template, add a short description of your project idea

  2. Open the document for public view and comments, reference communication channels there (if any)

  3. Let us know about the project idea via our Gitter channel or the mailing list.

  4. After getting initial feedback from org admins, share your idea with other contributors who might be interested (via the developer mailing list, chats, or special interest groups)

Potential mentors are invited to read the information for mentors for more information about the project. Note that being a GSoC mentor does not require expert knowledge of Jenkins. GSoC org admins will help to find technical advisors, so you can study together with your students.

Mentoring takes about 5 to 6 hours of work per week (more at the start, less at the end). In return, a student works on your project full-time for four months. Think about the projects that you’ve always wanted to do but could not find the time for…​ There are also many opportunities to engage with the Jenkins community (meetups, knowledge sharing, communications) and with other projects (e.g. going to the GSoC Mentor Summit). GSoC is a pretty good return on the investment!

For any question, you can find the GSoC admins, mentors and participants on the GSoC SIG Gitter chat.

The Jenkins GSoC Org Admin Team 2019
