
Configuring a Jenkins Pipeline using a YAML file

This guest post was originally published on Wolox’s Medium account here.

A few years ago our CTO wrote about building a Continuous Integration server for Ruby on Rails using Jenkins and Docker. That solution has been our CI pipeline for the past few years, until we recently decided it was time for an upgrade. Why?

  • Our Jenkins version was way out of date and it was getting difficult to upgrade

  • Wolox has grown significantly over the past years and we’ve been experiencing scaling issues

  • Very few people knew how to fix any issues with the server

  • Configuring jobs was not an easy task and that made our project kickoff process slower

  • Making changes to the commands that each job runs was not easy and not many people had permissions to do so. Wolox has a wide range of projects in a wide variety of languages, which made this problem even bigger.

Taking into account these problems, we started digging into the newest version of Jenkins to see how we could improve our CI. We needed to build a new CI that could, at least, address the following:

  • Projects must be built using Docker. Our projects depend on one or multiple docker images to run (app, database, redis, etc)

  • Easy to configure and replicate if necessary

  • Easy to add a new project

  • Easy to change the building steps. Everyone working on the project should be able to change whether they want to run npm install or yarn install.

Installing Jenkins and Docker

Installing Jenkins is straightforward. You can visit the Jenkins installation page and choose the option that best suits your needs.

Here are the steps we followed to install Jenkins in AWS:

sudo rpm --import https://pkg.jenkins.io/debian/jenkins.io.key
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins.io/redhat/jenkins.repo
sudo yum install java-1.8.0 -y
sudo yum remove java-1.7.0-openjdk -y
sudo yum install jenkins -y
sudo yum update -y
sudo yum install -y docker

Automatically adding projects from GitHub

Adding projects automatically from GitHub can be achieved using the GitHub Branch Source Plugin. It allows Jenkins to scan a GitHub organization for projects that match certain rules and add them to Jenkins automatically. The only constraint for a branch to be added is that it contains a Jenkinsfile that explains how to build the project.
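For reference, this is roughly the kind of minimal Jenkinsfile the organization scan looks for (the build command here is only an illustration):

// Minimal illustrative Jenkinsfile; any repository in the organization that
// contains a file like this is picked up automatically by the scan.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker-compose build'
            }
        }
    }
}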

Easy to change configuration

Not so easy to change configuration

One of the biggest pains we had with our previous Jenkins was the difficulty of changing the steps necessary to build the project. If you looked at a project’s build steps, you would find something like this:

#!/bin/bash +x
set -e

# Remove unnecessary files
echo -e "\033[34mRemoving unnecessary files...\033[0m"
rm -f log/*.log &> /dev/null || true &> /dev/null
rm -rf public/uploads/* &> /dev/null || true &> /dev/null

# Build Project
echo -e "\033[34mBuilding Project...\033[0m"
docker-compose --project-name=${JOB_NAME} build

# Prepare test database
COMMAND="bundle exec rake db:drop db:create db:migrate"
echo -e "\033[34mRunning: $COMMAND\033[0m"
docker-compose --project-name=${JOB_NAME} run  \
        -e RAILS_ENV=test web $COMMAND

# Run tests
COMMAND="bundle exec rspec spec"
echo -e "\033[34mRunning: $COMMAND\033[0m"
unbuffer docker-compose --project-name=${JOB_NAME} run web $COMMAND

# Run rubocop lint
COMMAND="bundle exec rubocop app spec -R --format simple"
echo -e "\033[34mRunning: $COMMAND\033[0m"
unbuffer docker-compose --project-name=${JOB_NAME} run -e RUBYOPT="-Ku" web $COMMAND

And some post-build steps that cleaned up Docker:

#!/bin/bash +x
docker-compose --project-name=${JOB_NAME} stop &> /dev/null || true &> /dev/null
docker-compose --project-name=${JOB_NAME} rm --force &> /dev/null || true &> /dev/null
docker stop `docker ps -a -q -f status=exited` &> /dev/null || true &> /dev/null
docker rm -v `docker ps -a -q -f status=exited` &> /dev/null || true &> /dev/null
docker rmi `docker images --filter 'dangling=true' -q --no-trunc` &> /dev/null || true &> /dev/null

Although these commands are not complex, changing any of them required someone with permissions to modify the job and an understanding of what needed to be done.

Jenkinsfile to the rescue… or not

With the current Jenkins version, we can take advantage of Jenkins Pipeline and model our build flow in a file. This file is checked into the repository and, therefore, anyone with access to it can change the build steps. Yay!

Jenkins Pipeline even has support for:

  • Docker, and multiple images can be used for a build!

  • Setting environment variables with withEnv and many other built-in functions that can be found here.

This makes a perfect case for Wolox. We can have our build configuration in a file that’s checked into the repository and can be changed by anyone with write access to it. However, a Jenkinsfile for a simple Rails project would look something like this:

// sample Jenkinsfile. Might not compile
node {
    checkout scm
    withEnv(['MYTOOL_HOME=/usr/local/mytool']) {
        docker.image("postgres:9.2").withRun() { db ->
            withEnv(['DB_USERNAME=postgres', 'DB_PASSWORD=', "DB_HOST=db", "DB_PORT=5432"]) {
                docker.image("redis:X").withRun() { redis ->
                    withEnv(["REDIS_URL=redis://redis"]) {
                        docker.build(imageName, "--file .woloxci/Dockerfile .").inside("--link ${db.id}:postgres --link ${redis.id}:redis") {
                            sh "rake db:create"
                            sh "rake db:migrate"
                            sh "bundle exec rspec spec"
                        }
                    }
                }
            }
        }
    }
}

This file is not only difficult to read, but also difficult to change. It’s quite easy to break things if you’re not familiar with Groovy and even easier if you know nothing about how Jenkins’ pipeline works. Changing or adding a new Docker image isn’t straightforward and might lead to confusion.

Configuring Jenkins Pipeline via YAML

Personally, I’ve always envied simple configuration files for CIs, and this time it was our chance to build a CI that could be configured using a YAML file. After some analysis we concluded that a YAML file like this one would suffice:

config:
  dockerfile: .woloxci/Dockerfile
  project_name: some-project-name

services:
  - postgresql
  - redis

steps:
  analysis:
    - bundle exec rubocop -R app spec --format simple
    - bundle exec rubycritic --path ./analysis --minimum-score 80 --no-browser
  setup_db:
    - bundle exec rails db:create
    - bundle exec rails db:schema:load
  test:
    - bundle exec rspec
  security:
    - bundle exec brakeman --exit-on-error
  audit:
    - bundle audit check --update

environment:
  RAILS_ENV: test
  GIT_COMMITTER_NAME: a
  GIT_COMMITTER_EMAIL: b
  LANG: C.UTF-8

It outlines some basic configuration for the project, environment variables that need to be present during the run, dependent services, and our build steps.

Jenkinsfile + Shared Libraries = WoloxCI

After investigating Jenkins and its pipeline for a while, we found that we could extend it with shared libraries. Shared libraries are written in Groovy and can be imported into the pipeline and executed when necessary.

If you look carefully at this Jenkinsfile, you can see that the code is a chain of method calls, each receiving a closure, inside which we call another method and pass a new closure to it.

// sample Jenkinsfile. Might not compile
node {
    checkout scm
    withEnv(['MYTOOL_HOME=/usr/local/mytool']) {
        docker.image("postgres:9.2").withRun() { db ->
            withEnv(['DB_USERNAME=postgres', 'DB_PASSWORD=', "DB_HOST=db", "DB_PORT=5432"]) {
                docker.image("redis:X").withRun() { redis ->
                    withEnv(["REDIS_URL=redis://redis"]) {
                        docker.build(imageName, "--file .woloxci/Dockerfile .").inside("--link ${db.id}:postgres --link ${redis.id}:redis") {
                            sh "rake db:create"
                            sh "rake db:migrate"
                            sh "bundle exec rspec spec"
                        }
                    }
                }
            }
        }
    }
}

Groovy is flexible enough to allow this same declarative code to be created at runtime, making our dream of using a YAML file to configure our job come true!
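As a rough sketch of the idea (hypothetical step name, not the actual wolox-ci implementation; it assumes the Pipeline Utility Steps plugin for readYaml and the Docker Pipeline plugin for docker.build), a library step can read the YAML and generate the stages at runtime:

// vars/yamlCi.groovy - illustrative sketch only
def call(String configPath) {
    def config = readYaml file: configPath
    // Build the project image from the Dockerfile named in the YAML
    def image = docker.build(config.config.project_name, "--file ${config.config.dockerfile} .")
    image.inside {
        // One Jenkins stage per named group of commands in the YAML "steps" section
        config.steps.each { name, commands ->
            stage(name) {
                commands.each { command -> sh command }
            }
        }
    }
}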

Introducing Wolox-CI

That’s how wolox-ci, our shared library for Jenkins, was born!

With wolox-ci, our Jenkinsfile is now reduced to:

@Library('wolox-ci') _

node {

  checkout scm

  woloxCi('.woloxci/config.yml');
}

Now it simply checks out the code and then calls wolox-ci. The library reads a YAML file like this one:

config:
  dockerfile: .woloxci/Dockerfile
  project_name: some-project-name

services:
  - postgresql
  - redis

steps:
  analysis:
    - bundle exec rubocop -R app spec --format simple
    - bundle exec rubycritic --path ./analysis --minimum-score 80 --no-browser
  setup_db:
    - bundle exec rails db:create
    - bundle exec rails db:schema:load
  test:
    - bundle exec rspec
  security:
    - bundle exec brakeman --exit-on-error
  audit:
    - bundle audit check --update

environment:
  RAILS_ENV: test
  GIT_COMMITTER_NAME: a
  GIT_COMMITTER_EMAIL: b
  LANG: C.UTF-8

and builds the pipeline on the fly to get your job running.

The nice part about having a shared library is that we can extend and fix our library in a centralized way. Once we add new code, the library is automatically updated in Jenkins, which makes the change available to all of our jobs.

Since we have projects in different languages we use Docker to build the testing environment. WoloxCI assumes there is a Dockerfile to build and will run all the specified commands inside the container.

WoloxCI config.yml

Config

The first part of the config.yml file specifies some basic configuration: the project’s name and the Dockerfile location. The Dockerfile is used to build the image where the commands will be run.

Services

This section describes which services will be exposed to the container. Out of the box, WoloxCI has support for postgresql, mssql and redis. You can also specify the Docker image version you want! Adding a new service is not hard: you just need to add the corresponding service file and modify how the services are parsed.

Steps

The listed commands in this section will run inside the Docker container. As a result, you’ll see each of the steps on the Jenkins UI.


Environment

If you need some environment variables during your build, you can specify them here. Whatever variable you set will be available inside the Docker container while the commands listed in the steps section above are run.
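For illustration, one way a library could turn that environment map into variables visible to the build commands (an assumed implementation detail, not necessarily how wolox-ci does it; readYaml comes from the Pipeline Utility Steps plugin):

// Illustrative only: convert the YAML "environment" map into KEY=value entries for withEnv
def config = readYaml file: '.woloxci/config.yml'
def envEntries = config.environment.collect { key, value -> "${key}=${value}" }
withEnv(envEntries) {
    sh 'printenv RAILS_ENV'   // the variables are now visible to every command in the steps
}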

Wrapping up

WoloxCI is still being tested with a not-so-small sample of our projects. The possibility of changing the build steps through a YAML file makes the build configuration accessible to everyone, and that is a great improvement in our CI workflow.

Docker gives us the possibility of easily changing the programming language without making any changes to our Jenkins installation, and Jenkins’ GitHub Organization feature automatically adds new projects when a new repository with a Jenkinsfile is detected.

All of these improvements have significantly reduced the time we spend maintaining Jenkins and let us scale easily without any extra configuration.

This library is working in our CI, but it can still be improved. If you would like to add features, feel free to contribute!


Jenkins Essentials: The days of versions are numbered


A couple of weeks ago, I wrote about the Jenkins Essentials effort, on which we’ve been making steady progress. Personally, the most exciting challenge of this project is defining the machinery to drive automatic updates of Jenkins Essentials, which, viewed from a high level, is a classic continuous delivery challenge.

In this post, I wanted to dive a bit into the gritty details of how we’re going to be delivering Jenkins Essentials with automatic updates, which has some really interesting requirements for the development of Jenkins itself.

Jenkins Essentials

The traditional Jenkins core and plugin development workflow involves a developer working on changes for some amount of time; when they’re ready, they "create a release", which typically involves publishing artifacts to our Artifactory, and then on a timer (typically hourly) the Update Center regenerates a file called update-center.json. Once the new Update Center has been generated, it is published and consumed by Jenkins installations within 24 hours. Of course, administrators can only install an update after they notice that one is available. All in all, it can take quite a long time from when a developer publishes a release to when it is successfully used by an end-user.

With our desire to make Jenkins Essentials updates seamless and automatic, the status quo clearly was not going to work. Our shift in thinking has required a couple of simultaneous efforts to make this more continuously delivered approach viable.

Developer Improvements

Starting from the developer’s workflow, Jesse Glick has been working on publishing "incremental builds" of artifacts into a special Maven repository in Artifactory. Much of his work is described in the very thorough Jenkins Enhancement Proposal 305. This support, which is now live on ci.jenkins.io, allows plugin developers to publish versioned changes from pull requests and branches to the incrementals repository. Not only does this make it much easier for Jenkins Essentials to deliver changes closer to the HEAD of master branches, it also unlocks lots of flexibility for Jenkins developers who coordinate changes across matrices of plugins and core, as is occasionally necessary for Jenkins Pipeline, Credentials, Blue Ocean, and a number of other foundational components of a modern Jenkins install.

In a follow-up blog post, Jesse is going to go into much more detail on some of the access control and tooling changes he had to solve to make this incrementals machinery work.

Of course, incremental builds are only a piece of the puzzle; Jenkins Essentials has to be able to do something useful with those artifacts!

Update Improvements

The number one requirement, from my perspective, for the automatically updated distribution is that it is safe. Safety means that a user doesn’t need to be involved in the update process, and if something goes wrong, the instance can recover without the user needing to do anything to remediate a "bad code deploy."

In my previous post on the subject, I mentioned Baptiste’s work on Jenkins Enhancement Proposal 302 which describes the "data safety" system for safely applying updates, and in case of failure, rolling back.

The next obvious question is "what’s failure?" which Baptiste spent some time exploring and implementing in two more designs:

On the server side, where there is substantial work for Jenkins Essentials, these concepts integrate with the concept of an Update Lifecycle between the server and client. In essence, the server side must be able to deliver the right updates, to the right client, and avoid delivering tainted updates, those with known problems, to clients. While this part of the work is still ongoing, tremendous progress has been made over the past couple of weeks in ensuring that updates can be safely, securely, and automatically delivered.

With the ability to identify "bad code deploys", and having a mechanism for safely rolling back, not only does Jenkins Essentials allow seamless updates, but it enables Jenkins developers to deliver features and bugfixes much more quickly than our current distribution model allows.


While Jenkins Essentials does not have a package ready for broad consumption yet, we’re rapidly closing in on the completion of our first milestone which ties all of these automatic update components together and builds the foundation for continuous delivery of all subsequent improvements.

You can follow our progress in the jenkins-infra/evergreen repository, or join us in our Gitter chat!

Using new core APIs with the Beta annotation


This sort of slid under the radar in the middle of some bigger changes for the JEP-202 reference implementation, so I wanted to call it out now. Arguably this could deserve a retroactive JEP, though I would rather fold it into a JEP for JENKINS-49651 (see below).

As of Jenkins 2.118, or plugin parent POM 3.7, you can mark any Java member (class, method, constructor, field, or I suppose also interface, enum, or annotation) with API visibility (protected or public) with an annotation:

@Restricted(Beta.class)
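Applied to a member, it looks roughly like this (a simplified, hypothetical class, not actual core source; the annotation classes come from the access-modifier-annotation library):

import org.kohsuke.accmod.Restricted;
import org.kohsuke.accmod.restrictions.Beta;

public class ArtifactBrowser {
    /** Experimental API: callers must opt in via the useBeta POM property. */
    @Restricted(Beta.class)
    public String toDownloadUrl(String artifactPath) {
        // Hypothetical member; the point is the annotation on a public API element.
        return "https://artifacts.example.com/" + artifactPath;
    }
}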

The idea is to announce to potential users of the member that the API may still be in flux and only code prepared to keep up should be using it. As an example, 2.118 added a VirtualFile.toExternalURL() method that is being implemented in artifact-manager-s3 and (pending some PR merges) called in copyartifact and workflow-basic-steps. We do not necessarily want this to be called yet by unknown parties out there in the Jenkins ecosystem. To enforce that, any attempt to call or implement toExternalURL will produce a build failure, unless you add this property to your plugin POM, as these plugins have done:

<useBeta>true</useBeta>

Why? Because there is a chance the design is wrong and it might need to be changed—perhaps some upcoming bug fix would demand a boolean parameter be added, for example.

Under the conventional notion of Jenkins API deprecation and compatibility policy, once an API like this makes it into a release version, that is it—we might mark it @Deprecated but we need to maintain compatibility indefinitely, and find some way to migrate existing implementations / call sites.

With the @Beta annotation, that promise is not being made. If it needs a boolean parameter for some reason, that will be added and those three plugins updated to match; we are not going to bother retaining the original overload and somehow delegating to the new one. This simplification of the developer workflow is important to the use cases of Essentials (JEP-3xx), and I would expect the useBeta mark to become widespread among plugins included in Essentials, for example where one team needs to feel comfortable freely refactoring code under its aegis, and the refactored result should be deliverable as a unit to production via the Evergreen distribution system.

So that leaves two important questions:

First, is the annotation permanent, and if not, when should it be removed? I do not think there is any hard policy, but the intention is that it should be removed once the API is in more or less widespread use and has held up. For this example, if people start using S3 artifacts, and especially if someone successfully writes an implementation of artifact storage in Azure that uses the API, the concept will have been reasonably proven. At that point we want the API to be used wherever it would make sense, and if there is some very belated realization that the design is not quite right, we accept the burden of deprecating the original and migrating callers compatibly.

Second, it is fine and well to say that someone changing the signature of a beta toExternalURL is on the hook to update the three plugins using it, but what if a Jenkins admin (not running Essentials, for shame) upgrades to (say) Jenkins 2.125 with the new signature but declines to accept the updates to those plugins (say, workflow-basic-steps 2.9) which adapt to the change? It is not enough to say that it is their fault for holding back on the updates arbitrarily; the plugin manager offers you updates but does nothing to tell you when they are required, so suddenly throwing NoSuchMethodError is not a helpful response.

The solution needs to be ironed out, but my expectation is to use JENKINS-49651 for this. For example, workflow-basic-steps 2.8, using toExternalURL(), would have declared itself compatible with Jenkins-Version: 2.118, and thus implicitly anything newer. The developer doing the refactoring would also amend some 2.125 (and newer) core metadata to say that it conflicts with anything older than the 2.9 release of the plugin. The plugin manager would therefore block the 2.8 plugin from even being loaded on the 2.125 core; the admin would need to update before using it. In the case of an incompatible change made to a plugin API, rather than a core API, the UX is a little smoother since the plugin manager could just refuse to let you update one without the other.


If you’re a plugin or core developer who is interested in using the @Beta annotation, or have questions about our motivations, please join the discussion on this mailing list thread.

Welcome Google Summer of Code 2018 students!


Jenkins GSoC

On behalf of the Jenkins GSoC team and mentors, I would like to welcome Shenyu Zheng, Udara De Silva, Pham Vu Tuan and Abhishek Gautam. They will be working on Google Summer of Code projects in the Jenkins organization, and they have already made some contributions.

This year we have the following projects:

During the next 4 weeks, project teams will be reaching out to potential stakeholders in order to establish connections and to get comments regarding their project designs. If you are interested in the projects, please join the discussions in the developer mailing lists and the project meetings once they get scheduled. Please also expect more detailed blog posts about the projects soon.

If you are interested in knowing more about GSoC in Jenkins, you can find information, the timeline, and communication channels here.

Jenkins X: Announcing CVE docker image analysis with Anchore


Anchore provides Docker image analysis against user-defined acceptance policies to allow automated image validation and acceptance.

As developers, we would like to know if a change we are proposing introduces a Common Vulnerabilities and Exposures (CVE) issue. As operators, we would like to know which running applications are affected if a new CVE is discovered.

Now, in Jenkins X pipelines, if we find an Anchore engine service running, we will add the preview and release images to be analyzed. This means we can look at any environment, including previews (created from pull requests), to see if your application contains a CVE.

Upgrade

Start by checking your current Jenkins X version:

jx version

If your Jenkins X platform is older than 0.0.903, then first you will need to upgrade to at least 0.0.922:

jx upgrade cli
jx upgrade platform

Install addon

You can install the Anchore engine addon when you are in your Jenkins X team home environment.

jx env dev
jx create addon anchore

This will install the engine in a separate anchore namespace and create a service link in the current team home environment so our pipeline builds can add Docker images to Anchore for analysis.

Create an application

You can now create a new quickstart:

jx create quickstart

List any CVEs

Once the build has run you will be able to check for CVEs in any environment, including previews created for pull requests.

jx get cve --environment staging

Demo

Here’s a 4 minute video that demonstrates the steps above:

Upgrading existing pipelines

If you have an existing application pipeline and want to enable image analysis, you can update your Jenkinsfile: in the preview stage, after the skaffold step, add these lines:

sh "jx step validate --min-jx-version 1.2.36"
sh "jx step post build --image \$JENKINS_X_DOCKER_REGISTRY_SERVICE_HOST:\$JENKINS_X_DOCKER_REGISTRY_SERVICE_PORT/$ORG/$APP_NAME:$PREVIEW_VERSION"

In the master stage, add these lines after the skaffold step:

sh "jx step validate --min-jx-version 1.2.36"
sh "jx step post build --image \$JENKINS_X_DOCKER_REGISTRY_SERVICE_HOST:\$JENKINS_X_DOCKER_REGISTRY_SERVICE_PORT/$ORG/$APP_NAME:\$(cat VERSION)"

For any questions please find us - we mainly hang out on Slack at #jenkins-x-dev - or see jenkins-x.io/community for other channels.

Security updates for Jenkins core and plugins


We just released security updates to Jenkins, versions 2.121 and 2.107.3, that fix multiple security vulnerabilities.

Additionally, we announce previously published security issues and corresponding fixes in these plugins:

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes on upgrading Jenkins LTS, see our LTS upgrade guide.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

Automatic deployment of “incremental” commits to Jenkins core and plugins


A couple of weeks ago, Tyler mentioned some developer improvements in Essentials that had recently been introduced: the ability for ci.jenkins.io builds to get deployed automatically to an “Incrementals” Maven repository, as described in JEP-305. As a plugin maintainer, you just need to turn on this support and you are ready both to deploy individual Git commits from your repository without the need to run heavyweight traditional Maven releases, and to depend directly on similar commits of Jenkins core or other plugins. This is a stepping stone toward continuous delivery, and ultimately deployment, of Jenkins itself.

Here I would like to peek behind the curtain a bit at how we did this, since the solution turns out to be very interesting for people thinking about security in Jenkins. I will gloss over the Maven arcana required to get the project version to look like 1.40-rc301.87ce0dd8909b (a real example from the Copy Artifact plugin) rather than the usual 1.40-SNAPSHOT, and why this format is even useful. Suffice it to say that if you had enough permissions, you could run

mvn -Dset.changelist -DskipTests clean deploy

from your laptop to publish your latest commit. Indeed, as mentioned in the JEP, the most straightforward server setup would be to run more or less that command from the buildPlugin function called from a typical Jenkinsfile, with some predefined credentials adequate to upload to the Maven repository.

Unfortunately, that simple solution did not look very secure. If you offer deployment credentials to a Jenkins job, you need to trust anyone who might configure that job (here, its Jenkinsfile) to use those credentials appropriately. (The withCredentials step will mask the password from the log file, to prevent accidental disclosures. It in no way blocks deliberate misuse or theft.) If your Jenkins service runs inside a protected network and works with private repositories, that is probably good enough.

For this project, we wanted to permit incremental deployments from any pull request. Jenkins will refuse to run Jenkinsfile modifications from people who would not normally be able to merge the pull request or push directly, and those people would be more or less trustworthy Jenkins developers, but that is of no help if a pull request changes pom.xml or other source files used by the build itself. If the server administrator exposes a secret to a job, and it is bound to an environment variable while running some open-ended command like a Maven build, there is no practical way to control what might happen.

The lesson here is that the unit of access control in Jenkins is the job. You can control who can configure a job, or who can edit files it uses, but you have no control over what the job does or how it might use any credentials. For JEP-305, therefore, we wanted a way to perform deployments from builds considered as black boxes. This means a division of responsibility: the build produces some artifacts, however it sees fit; and another process picks up those artifacts and deploys them.

This work was tracked in INFRA-1571. The idea was to create a “serverless function” in Azure that would retrieve artifacts from Jenkins at the end of a build, perform a set of validations to ensure that the artifacts follow an expected repository path pattern, and finally deploy them to Artifactory using a trusted token. I prototyped this in Java, Tyler rewrote it in JavaScript, and together we brought it into production.

The crucial bit here is what information (or misinformation!) the Jenkins build can send to the function. All we actually need to know is the build URL, so the call site from Jenkins is quite simple. When the function is called with this URL, it starts off by performing input validation: it knows what the Jenkins base URL is, and what a build URL from inside an organization folder is supposed to look like: https://ci.jenkins.io/job/Plugins/job/git-plugin/job/PR-582/17/, for example.
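The shape of that validation is roughly the following (an illustrative Groovy sketch, not the actual JavaScript function, and the job-name pattern is simplified):

// Illustrative sketch: accept only build URLs under the known Jenkins host and
// the organization-folder job structure (folder/job/repo/job/branch-or-PR/build-number).
def isValidBuildUrl(String url) {
    def pattern = '^https://ci\\.jenkins\\.io/job/[\\w.-]+/job/[\\w.-]+/job/[\\w.-]+/\\d+/$'
    return url ==~ pattern
}

assert isValidBuildUrl('https://ci.jenkins.io/job/Plugins/job/git-plugin/job/PR-582/17/')
assert !isValidBuildUrl('https://evil.example.com/job/Plugins/job/git-plugin/job/PR-582/17/')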

The next step is to call back to Jenkins and ask it for some metadata about that build. While we do not trust the build, we trust the server that ran it to be properly configured. An obstacle here was that the ci.jenkins.io server had been configured to disable the Jenkins REST API; with Tyler’s guidance I was able to amend this policy to permit API requests from registered users (or, in the case of the Incrementals publisher, a bot).

If you want to try this at home, get an API token, pick a build of an “incrementalified” plugin or Jenkins core, and run something like

curl -igu <login>:<token> 'https://ci.jenkins.io/job/Plugins/job/git-plugin/job/PR-582/17/api/json?pretty&tree=actions[revision[hash,pullHash]]'

You will see a hash or pullHash corresponding to the main commit of that build. (This information was added to the Jenkins REST API to support this use case in JENKINS-50777.) The main commit is selected when the build starts and always corresponds to the version of the Jenkinsfile in the repository for which the job is named. While a build might check out any number of repositories, checkout scm always picks “this” repository in “this” version. Therefore the deployment function knows for sure which commit the sources came from, and will refuse to deploy artifacts named for some other commit.

Next it looks up information about the Git repository at the folder level (again from JENKINS-50777):

curl -igu <login>:<token> 'https://ci.jenkins.io/job/Plugins/job/git-plugin/api/json?pretty&tree=sources[source[repoOwner,repository]]'

The Git repository now needs to be correlated to a list of Maven artifact paths that this component is expected to produce. The repository-permissions-updater (RPU) tool already had a list of artifact paths used to perform permission checks on regular release deployments to Artifactory; in INFRA-1598 I extended it to also record the GitHub repository name, as can be seen here. Now the function knows that the CI build in this example may legitimately create artifacts in the org/jenkins-ci/plugins/git/ namespace including 38c569094828 in their versions. The build is expected to have produced artifacts in the same structure as mvn install sends to the local repository, so the function downloads everything associated with that commit hash:

curl -sg 'https://ci.jenkins.io/job/Plugins/job/git-plugin/job/PR-582/17/artifact/**/*-rc*.38c569094828/*-rc*.38c569094828*/*zip*/archive.zip' | jar t

When all the artifacts are indeed inside the expected path(s), and at least one POM file is included (here org/jenkins-ci/plugins/git/3.9.0-rc1671.38c569094828/git-3.9.0-rc1671.38c569094828.pom), then the ZIP file looks good—ready to send to Artifactory.

One last check is whether the commit has already been deployed (perhaps this is a rebuild). If it has not, the function uses the Artifactory REST API to atomically upload the ZIP file and uses the GitHub Status API to associate a message with the commit so that you can see right in your pull request that it got deployed:

incrementals status

One more bit of caution was required. Just because we successfully published some bits from some PR does not mean they should be used! We also needed a tool which lets you select the newest published version of some artifact within a particular branch, usually master. This was tracked in JENKINS-50953 and is available to start with as a Maven command operating on a pom.xml:

mvn incrementals:update

This will check Artifactory for updates to relevant components. When each one is found, it will use the GitHub API to check whether the commit has been merged to the selected branch. Only matches are offered for update.

Putting all this together, we have a system for continuously delivering components from any of the hundreds of Jenkins Git repositories triggered by the simple act of filing a pull request. Securing that system was a lot of work but highlights how boundaries of trust interact with CI/CD.

When using tags in Jenkins Pipeline


One common pattern for automated releases I have seen and used relies on Git tags as the catalyst for a release process. The immutable nature of releases and the immutable nature of tags can definitely go hand in hand, but up until a few months ago Jenkins Pipeline was not able to trigger effectively off of Git tags.

In this post I want to briefly share how to use tags to drive behaviors in Jenkins Pipeline. Consider the following contrived Jenkinsfile, which contains the three basic stages of Build, Test, and Deploy:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make package'
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
        stage('Deploy') {
            when { tag "release-*" }
            steps {
                echo 'Deploying only because this commit is tagged...'
                sh 'make deploy'
            }
        }
    }
}

Of particular note is the when condition on the "Deploy" stage, which applies the tag criteria. This means the stage will only execute when the Pipeline has been triggered from a tag in Git matching the release-* Ant-style wildcard.

In practice, this means that all pull request and branch-based Pipeline runs result in the stage being skipped:

Not Deployed!

When I push a release-1.0 tag, the Pipeline will then be triggered and run the "Deploy" stage:

Deployed!

Out of the box, Pipelines won’t trigger off of the presence of tags, which means that a Multibranch Pipeline must have a configuration update to know that it must Discover Tags.

Configuring

From the configuration screen of a Multibranch Pipeline (or GitHub Organization Folder), Discovering tags can be enabled by adding the appropriate "Behavior" to the Branch Source configuration:

Configuring the Multibranch Pipeline

With these changes, the Jenkinsfile in the tagged versions of my source repository can now drive distinct deployment behavior which is not otherwise enabled in the Pipeline.


Introducing Tracy Miranda as the CloudBees Open Source Program Lead


I’m Tracy Miranda, and I’m really excited to have joined CloudBees this month leading the open source program. CloudBees’ contributions to Jenkins include developing Pipeline and Blue Ocean, staffing the infrastructure team, advocacy and events work, as well as security efforts. My focus is on making sure there is a great relationship between the Jenkins community and CloudBees, which means strong communication, helping get traction on things the community wants, and generally working to make Jenkins and the community thrive and stay awesome in an ever-changing tech landscape.

Here’s a little background on me: I come from an electronics/EDA background but switched to software early in my career when I first got involved with open source software. I’ve been part of the Eclipse community for around 15 years, definitely from before git was even a thing. I love being involved with all levels: project committer, conference chair, steering committee for working groups and more recently board of directors.

On a personal note, I …

  • Live in the UK with my husband and 2 young kids

  • Grew up in Kenya

  • Enjoy playing badminton, love good food & am always first at any buffets

I am looking forward to getting to know the Jenkins community well, and really getting a feel for your Jenkins stories, good and bad. Please feel free to let me know:

  • What you love about the Jenkins community & how you are using Jenkins

  • What you’re working on doing with Jenkins

  • What you don’t like and want improved

You can find me on the mailing lists or via:

Also I’ll be at the upcoming events: DevOps World - Jenkins World in San Francisco, California and Nice, France, so if you plan to attend do come and say hi. The Jenkins community is the real force behind Jenkins. And in turn Jenkins powers so much of the software out there. It is an honour to be joining this wonderful community.

Take the 2017 Jenkins Survey!

This is a guest post by Brian Dawson on behalf of CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins.

Once again it’s that time of year when CloudBees sponsors the Jenkins Community Survey to assist the community with gathering objective insights into how Jenkins is being used and what users would like to see in the Jenkins project.

Your personal information (name, email address and company) will NOT be used by CloudBees for sales or marketing.

As an added incentive to take the survey, CloudBees will enter participants into a drawing for a free pass to Jenkins World 2018 (1st prize) and a $100 Amazon Gift Card (2nd prize). The survey will close at the end of September, so click the link at the end of the blog post to get started!

All participants will be able to access reports summarizing survey results. If you’re curious about what insights your input will provide, see the results of last year’s 2016 survey:

Your feedback helps capture a bigger picture of community trends and needs. There are laws that govern prize giveaways and eligibility; CloudBees has compiled all those fancy terms and conditions here.

Please take the survey and let your voice be heard - it will take less than 10 minutes.

Closure on enumerators in Pipeline


While at Jenkins World, Kohsuke Kawaguchi presented two long-time Jenkins contributors with a "Small Matter of Programming" award: Andrew Bayer and Jesse Glick. "Small Matter of Programming" being:

a phrase used to ironically indicate that a suggested feature or design change would in fact require a great deal of effort; it often implies that the person proposing the feature underestimates its cost.

— Wikipedia

In this context the "Small Matter" relates to Jenkins Pipeline and a very simple snippet of Scripted Pipeline:

[1, 2, 3].each { println it }

For a long time in Scripted Pipeline, this simply did not work as users would expect. Originally filed as JENKINS-26481 in 2015, it became one of the most voted-for, and watched, tickets in the entire issue tracker until it was ultimately fixed earlier this year.

Photo by Kohsuke

At least some closures are executed only once inside of Groovy CPS DSL scripts managed by the workflow plugin.

— Original bug description by Daniel Tschan

At a high level, what has been confusing for many users is that Scripted Pipeline looks like Groovy, it quacks like Groovy, but it’s not exactly Groovy. Rather, there’s a custom Groovy interpreter (CPS) that executes the Scripted Pipeline in a manner which provides the durability/resumability that defines Jenkins Pipeline.
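For context, the classic workarounds at the time looked something like the following sketch (illustrative only): either mark a helper method @NonCPS so the closure-based iteration runs outside the CPS interpreter, or avoid closures entirely with a plain counter loop.

// Illustrative workarounds that predate the fix for JENKINS-26481
@NonCPS
def printAll(List items) {
    items.each { println it }   // runs as ordinary Groovy, outside the CPS interpreter
}

node {
    printAll([1, 2, 3])
    for (int i = 1; i <= 3; i++) {   // or: skip closures and use a plain loop
        echo "iteration ${i}"
    }
}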

Without diving into too much detail (refer to the pull requests linked to JENKINS-26481 for that), the original .each snippet was particularly challenging to rectify inside the Pipeline execution layer. As one of the chief architects of Jenkins Pipeline, Jesse made a number of changes around the problem in 2016, but it wasn’t until early 2017 that Andrew, working on Declarative Pipeline, started to identify a number of areas of improvement in CPS and provided multiple patches and test cases.

As luck would have it, combining two of the sharpest minds in the Jenkins project resulted in the "Small Matter of Programming" being finished, and released in May of this year with Pipeline: Groovy 2.33.

Please join me in congratulating, and thanking, Andrew and Jesse for their diligent and hard work smashing one of the most despised bugs in Jenkins history :).

Parallel stages with Declarative Pipeline 1.2


After a few months of work on its key features, I’m happy to announce the 1.2 release of Declarative Pipeline! On behalf of the contributors developing Pipeline, I thought it would be helpful to discuss three of the key changes.

A Pipeline with Parallel stages

Parallel Stages

First, we’ve added syntax support for parallel stages. In earlier versions of Declarative Pipeline, the only way to run chunks of Pipeline code in parallel was to use the parallel step inside the steps block for a stage, like this:

/* .. snip .. */
stage('run-parallel-branches') {
  steps {
    parallel(
      a: {
        echo "This is branch a"
      },
      b: {
        echo "This is branch b"
      }
    )
  }
}
/* .. snip .. */

While this works, it doesn’t integrate well with the rest of the Declarative Pipeline syntax. For example, to run each parallel branch on a different agent, you need to use a node step, and if you do that, the output of the parallel branch won’t be available for post directives (at a stage or pipeline level). Basically the old parallel step required you to use Scripted Pipeline within a Declarative Pipeline.

But now with Declarative Pipeline 1.2, we’ve introduced a true Declarative syntax for running stages in parallel:

Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Run Tests') {
            parallel {
                stage('Test On Windows') {
                    agent {
                        label "windows"
                    }
                    steps {
                        bat "run-tests.bat"
                    }
                    post {
                        always {
                            junit "**/TEST-*.xml"
                        }
                    }
                }
                stage('Test On Linux') {
                    agent {
                        label "linux"
                    }
                    steps {
                        sh "run-tests.sh"
                    }
                    post {
                        always {
                            junit "**/TEST-*.xml"
                        }
                    }
                }
            }
        }
    }
}

You can now specify either steps or parallel for a stage, and within parallel, you can specify a list of stage directives to run in parallel, with all the configuration you’re used to for a stage in Declarative Pipeline. We think this will be really useful for cross-platform builds and testing, as an example. Support for parallel stages will be in the soon-to-be-released Blue Ocean Pipeline Editor 1.3 as well.

You can find more documentation on parallel stages in the User Handbook.

Defining Declarative Pipelines in Shared Libraries

Until the 1.2 release, Declarative Pipelines did not officially support defining your pipeline blocks in a shared library. Some of you may have tried that out and found that it could work in some cases, but since it was never an officially supported feature, it was vulnerable to breaking due to necessary changes for the supported use cases of Declarative. But with 1.2, we’ve added official support for defining pipeline blocks in src/*.groovy files in your shared libraries. Within your src/*.groovy file’s call method, you can call pipeline { ... }, or possibly different pipeline { ... } blocks depending on if conditions and the like. Note that only one pipeline { ... } block can actually be executed per run - you’ll get an error if a second one tries to execute!
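A minimal sketch of what such a library-defined Pipeline might look like (file location and names are hypothetical; the if condition ensures only one pipeline block actually runs per build):

// Hypothetical shared library step; callers invoke it from their Jenkinsfile.
def call(boolean runIntegrationTests = false) {
    if (runIntegrationTests) {
        pipeline {
            agent any
            stages {
                stage('Verify') { steps { sh 'mvn -B verify' } }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Package') { steps { sh 'mvn -B package' } }
            }
        }
    }
}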

Major Improvements to Parsing and Environment Variables

Hopefully, you’ll never actually care about this change, but we’re very happy about it nonetheless. The original approach used for actually taking the pipeline { ... } block and executing its contents was designed almost two years ago, and wasn’t very well suited to how you all are actually using Declarative Pipelines. In our attempts to work around some of those limitations, we made the parsing logic even more complicated and fragile, resulting in an impressive number of bugs, mainly relating to inconsistencies and bad behavior with environment variables.

In Declarative 1.2, we’ve replaced the runtime parsing logic completely with a far more robust system, which also happens to fix most of those bugs at the same time! While not every issue has been resolved, you may find that you can use environment variables in more places, escaping is more consistent, Windows paths are no longer handled incorrectly, and a lot more. Again, we’re hoping you’ve never had the misfortune to run into any of these bugs, but if you have, well, they’re fixed now, and it’s going to be a lot easier for us to fix any future issues that may arise relating to environment variables, when expressions, and more. Also, the parsing at the very beginning of your build may be about 0.5 seconds faster. =)

More to Come!

While we don’t have any concrete plans for what will be going into Declarative Pipelines 1.3, rest assured that we’ve got some great new features in mind, as well as our continuing dedication to fixing the bugs you encounter and report. So please do keep opening tickets for issues and feature requests. Thanks!

Pipeline and Blue Ocean Demos from Jenkins World


At Jenkins World last month, we continued the tradition of "lunch-time demos" in the Jenkins project’s booth which we started in 2016. We invited a number of Jenkins contributors to present brief 10-15 minute demos on something they were working on, or considered themselves experts in. Continuing the post-Jenkins World tradition, we also just hosted a "Jenkins Online Meetup" featuring a selection of those lunch-time demos.

I would like to thank Alyssa Tong for organizing this online meetup, Liam Newman for acting as the host, and our speakers:

Below are some links from the sample projects demonstrated and the direct links to each session.

Developing Pipeline Libraries Locally

If you have ever tried developing Pipeline libraries, you may have noticed how long it takes to deploy a new version to the server only to discover yet another syntax error. I will show how to edit and test Pipeline libraries locally before committing to the repository (with Configuration-as-Code and Docker).

Delivery Pipelines with Jenkins

Showing off how to set up holistic Delivery Pipelines with the DevOps enabler tool Jenkins.

Pimp my Blue Ocean

How to customize Blue Ocean: I create a custom plugin and extend Blue Ocean with a custom theme and custom components.

Deliver Blue Ocean Components at the Speed of Light

Using storybook.js.org for the Blue Ocean frontend to speed up the delivery process and validate the UX with the PM and designer, showing how quickly you can develop your components.

Mozilla’s Declarative + Shared Libraries Setup

How Mozilla is using Declarative Pipelines and shared libraries together.

See also the #fx-test IRC channel on irc.mozilla.org

Git Tips and Tricks

Latest capabilities in the git plugin, like large file support and reference repositories, and some reminders of existing tips that can reduce server load, decrease job time, and decrease disk use.

Visual Pipeline Creation in Blue Ocean

We will show how to use Blue Ocean to build a real-world continuous delivery pipeline using the visual pipeline editor. We will coordinate multiple components of a web application across test and production environments, simulating a modern development and deployment workflow.

Jenkins Contributors Awarded Top Honors at Jenkins World 2017


This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc.

Awards

For the first time at Jenkins World, the Jenkins project honored the achievements of three Jenkins contributors in the areas of Most Valuable Contributor, Jenkins Security MVP, and Most Valuable Advocate. These three individuals have consistently demonstrated excellence and proven value to the project. With gratitude and congratulations, below are the well-deserved winners:

Alex Earl - Most Valuable Contributor

Alex is the current or previous maintainer of some of the most used Jenkins plugins and has been for years. He’s a regular contributor to project policy discussions, and helps to keep the project running by improving the Jenkins project infrastructure, moderating the mailing lists and processing requests for hosting new plugins.

Steve Marlowe - Jenkins Security MVP

Steve is one of the most prolific reporters of security vulnerabilities in Jenkins. His reports are well-written, clearly identify the problematic behavior, and provide references that help quickly resolve the reported issue. On top of that, Steve is always responsive when asked for clarification.

Tomonari Nakamura - Most Valuable Advocate

Ikikko

Tomonari leads the Jenkins User Group in Tokyo, which is one of the largest and most active, with a long history. The group has organized more than 10 meet-ups so far, and every meet-up fills up very quickly, with a regular turnout of 100-200 people. At one point the group, under his leadership, organized a fully volunteer-run "Jenkins User Conference" in Tokyo that drew 1000+ attendees.

Congratulations to our winners.

We can’t wait to recognize more contributors at Jenkins World 2018!

Share a standard Pipeline across multiple projects with Shared Libraries

This is a guest post by Philip Stroh, Software Architect at TimoCom.

When building multiple microservices - e.g. with Spring Boot - the integration and delivery pipelines of your services will most likely be very similar. Surely, you don’t want to copy-and-paste Pipeline code from one Jenkinsfile to another if you develop a new service or if there are adaptations in your delivery process. Instead you would like to define something like a pipeline "template" that can be applied easily to all of your services.

The requirement for a common pipeline that can be used in multiple projects does not only emerge in microservice architectures. It’s valid for all areas where applications are built on a similar technology stack or deployed in a standardized way (e.g. pre-packaged as containers).

In this blog post I’d like to outline how to create such a pipeline "template" using Jenkins Shared Libraries. If you’re not yet familiar with Shared Libraries, I’d recommend having a look at the documentation.

The following code shows a (simplified) integration and delivery Pipeline for a Spring Boot application in declarative syntax.

Jenkinsfile
pipeline {
    agent any
    environment {
        branch = 'master'
        scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
        serverPort = '8080'
        developmentServer = 'dev-myproject.mycompany.com'
        stagingServer = 'staging-myproject.mycompany.com'
        productionServer = 'production-myproject.mycompany.com'
    }
    stages {
        stage('checkout git') {
            steps {
                git branch: branch, credentialsId: 'GitCredentials', url: scmUrl
            }
        }

        stage('build') {
            steps {
                sh 'mvn clean package -DskipTests=true'
            }
        }

        stage ('test') {
            steps {
                parallel ("unit tests": { sh 'mvn test' },"integration tests": { sh 'mvn integration-test' }
                )
            }
        }

        stage('deploy development'){
            steps {
                deploy(developmentServer, serverPort)
            }
        }

        stage('deploy staging'){
            steps {
                deploy(stagingServer, serverPort)
            }
        }

        stage('deploy production'){
            steps {
                deploy(productionServer, serverPort)
            }
        }
    }
    post {
        failure {
            mail to: 'team@example.com', subject: 'Pipeline failed', body: "${env.BUILD_URL}"
        }
    }
}

This Pipeline builds the application, runs unit and integration tests, and deploys the application to several environments. It uses a global variable "deploy" that is provided within a Shared Library. The deploy method copies the JAR file to a remote server and starts the application. Through the handy REST endpoints of Spring Boot Actuator, a previous version of the application is stopped beforehand. Afterwards, the deployment is verified via the health status monitor of the application.

vars/deploy.groovy
def call(def server, def port) {
    httpRequest httpMode: 'POST', url: "http://${server}:${port}/shutdown", validResponseCodes: '200,408'
    sshagent(['RemoteCredentials']) {
        sh "scp target/*.jar root@${server}:/opt/jenkins-demo.jar"
        sh "ssh root@${server} nohup java -Dserver.port=${port} -jar /opt/jenkins-demo.jar &"
    }
    retry (3) {
        sleep 5
        httpRequest url:"http://${server}:${port}/health", validResponseCodes: '200', validResponseContent: '"status":"UP"'
    }
}

The common approach to reusing pipeline code is to put methods like "deploy" into a Shared Library. If we now start developing the next application in the same fashion, we can use this method for its deployments as well. But often there are even more similarities within the projects of one company. For example, applications are built, tested and deployed in the same way into the same environments (development, staging and production). In this case it is possible to define the whole Pipeline as a global variable within a Shared Library. The next code snippet defines a Pipeline "template" for all of our Spring Boot applications.

vars/myDeliveryPipeline.groovy
def call(Map pipelineParams) {

    pipeline {
        agent any
        stages {
            stage('checkout git') {
                steps {
                    git branch: pipelineParams.branch, credentialsId: 'GitCredentials', url: pipelineParams.scmUrl
                }
            }

            stage('build') {
                steps {
                    sh 'mvn clean package -DskipTests=true'
                }
            }

            stage ('test') {
                steps {
                    parallel ("unit tests": { sh 'mvn test' },"integration tests": { sh 'mvn integration-test' }
                    )
                }
            }

            stage('deploy developmentServer'){
                steps {
                    deploy(pipelineParams.developmentServer, pipelineParams.serverPort)
                }
            }

            stage('deploy staging'){
                steps {
                    deploy(pipelineParams.stagingServer, pipelineParams.serverPort)
                }
            }

            stage('deploy production'){
                steps {
                    deploy(pipelineParams.productionServer, pipelineParams.serverPort)
                }
            }
        }
        post {
            failure {
                mail to: pipelineParams.email, subject: 'Pipeline failed', body: "${env.BUILD_URL}"
            }
        }
    }
}

Now we can setup the Pipeline of one of our applications with the following method call:

Jenkinsfile
myDeliveryPipeline(
    branch: 'master',
    scmUrl: 'ssh://git@myScmServer.com/repos/myRepo.git',
    email: 'team@example.com',
    serverPort: '8080',
    developmentServer: 'dev-myproject.mycompany.com',
    stagingServer: 'staging-myproject.mycompany.com',
    productionServer: 'production-myproject.mycompany.com'
)

The Shared library documentation mentions the ability to encapsulate similarities between several Pipelines with a global variable. It shows how we can enhance our template approach and build a higher-level DSL step:

vars/myDeliveryPipeline.groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    pipeline {
        // our complete declarative pipeline can go in here
        ...
    }
}

Now we can even use our own DSL-step to set up the integration and deployment Pipeline of our project:

Jenkinsfile
myDeliveryPipeline {
    branch = 'master'
    scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
    email = 'team@example.com'
    serverPort = '8080'
    developmentServer = 'dev-myproject.mycompany.com'
    stagingServer = 'staging-myproject.mycompany.com'
    productionServer = 'production-myproject.mycompany.com'
}

This blog post showed how a common Pipeline template can be developed using the Shared Library functionality in Jenkins. The approach allows you to create a standard Pipeline that can be reused by applications that are built in a similar way.

It works for both Declarative and Scripted Pipelines. For Declarative Pipelines, the ability to define a pipeline block in a Shared Library has been officially supported since version 1.2 (see the recent blog post on Declarative Pipeline 1.2).


Hacktoberfest. Contribute to Jenkins!


Once again it’s October. That means the annual Hacktoberfest event is back! During this one-month hackathon you can support open source and earn a limited-edition T-shirt. The Jenkins project offers an opportunity to participate in the project and to get reviews and help from Jenkins contributors.

Hacktoberfest

How do I sign up?

  1. Sign-up to Hacktoberfest on the event website.

  2. Everything is set, just start coding!

What can I do?

There are lots of ways to contribute to Jenkins during Hacktoberfest. You can…​

  • Write code

  • Improve documentation, write blogposts

  • Automate Tests

  • Translate and internationalize components

  • Design - artwork and UI improvements also count!

See the Contribute and Participate page for more information.

Where can I contribute?

The project is spread across several organizations on GitHub. Core and plugins are located in the jenkinsci org, and infrastructure in jenkins-infra. You can contribute to any component within these organizations.

For example, you could contribute to the following components:

You can also create new Jenkins plugins and get them hosted in the organization.

Which issues can I work on?

Our issue tracker contains lots of issues you could work on. If you are new to Jenkins, you could start by fixing some easier issues. In the issue tracker we mark such issues with the newbie-friendly label (search query). You can also submit your own issue and propose a fix.

How do I label issues and pull requests?

The Hacktoberfest project requires issues and/or pull requests to be labeled with the hacktoberfest label. You may not have permissions to set labels on your own, but do not worry! Just mention @jenkinsci/hacktoberfest or @jenkins-infra/hacktoberfest in the repository, and we will set the labels for you.

How do I get reviews?

All of the components mentioned above are monitored by Jenkins contributors, and you will likely get a review within a few days. Reviews in other repositories and plugins may take longer. In the case of delays, ping @jenkinsci/code-reviewers in your pull request or send a message to the mailing list.

Where can I find info?

The Jenkins project has lots of material about contributing. Here are some entry points:

Need help?

You can reach out to us using IRC Channels and the Jenkins Developer Mailing List. In the case of mailing lists it is recommended to mention Hacktoberfest in the email subject.

Important security updates for Jenkins core and plugins


We just released security updates to Jenkins, versions 2.84 and 2.73.2, that fix several security vulnerabilities. Additionally, we published a new release of Swarm Plugin whose client contains a security fix, and Maven Plugin 3.0 was recently released to resolve a security issue. Users of Swarm Plugin and Maven Plugin should update these to their respective newest versions.

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes on upgrading Jenkins LTS, see our LTS upgrade guide.

We also published information about a vulnerability in Speaks! Plugin. There is no fix available and we recommend it be uninstalled. Its distribution has been suspended.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

Jenkins World 2017 Session Videos are Available


This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc.

Jenkins World 2017 keynotes and breakout session videos are now available HERE. Photos from the conference can be seen HERE.

Save the date for Jenkins World 2018:

  • Conference dates are September 16-19, 2018 in San Francisco.

  • Registration will open on October 16, 2017.

  • Call for Papers will open on December 1, 2017.

Security updates for multiple Jenkins plugins


Multiple Jenkins plugins received updates today that fix several security vulnerabilities.

Additionally, the Multijob Plugin also received a security update several weeks ago.

For an overview of these security fixes, see the security advisory.

Active Choices Plugin distribution had been suspended since April due to its mandatory dependency on the suspended Scriptler Plugin. That dependency has been made optional, so Active Choices can be used without having Scriptler installed. This means we are able to resume distribution of Active Choices Plugin again. It should be available on update sites later today.

We also announced a medium severity security vulnerability in SCP publisher plugin that does not have a fix at this time.

Subscribe to the jenkinsci-advisories mailing list to receive important future notifications related to Jenkins security.

Jenkins User Conference China


This is a guest post by Forest Jing, who runs the Shanghai Jenkins Area Meetup


I am excited to announce that the inaugural Jenkins User Conference China will be taking place on November 19, 2017 in Shanghai, China. The theme of JUC China is “Jenkins Driven CD and DevOps”. Much like in the US, CD and DevOps are big topics of interest in China. We are honored to have Kohsuke Kawaguchi join us as one of the keynote speakers at this inaugural Jenkins event. We will also have sessions from many big-name companies like Baidu, Tencent, Pinterest, Ctrip, Huawei, Microsoft, and more. Below are some highlights of the event.

Sunday Nov 19th Agenda

Morning keynote sessions

There will be 4 keynote speeches:

  1. Kohsuke Kawaguchi, creator of Jenkins, will introduce Jenkins Past, Present & Future.

  2. Le Zhang, a well-known DevOps and CD expert, will show pipeline-driven CD and DevOps.

  3. An Engineering Director from Huawei will show the CD and DevOps practice at Huawei.

  4. Xu Zheng from Pinterest will present running Jenkins infrastructure as a service in Kubernetes.

In the afternoon, we have set up 3 tracks:

  1. CD & DevOps user stories from Microsoft, Tencent, Ctrip and JinDong - all are big companies in China.

  2. Enterprise Jenkins experience: the use of Jenkins as an enterprise tool, not only for teams.

  3. A workshop to lead engineers through practicing CloudBees Jenkins and open source Jenkins features.


If you’re in the neighborhood, we sincerely invite you to join us at Jenkins User Conference China.

Follow us on Twitter @china_juc
