
Monthly JAM Recap - November 2016


As we near the end of the year, the number of November JAMs shows that the Jenkins community isn't slowing down for the holiday season. We had a number of excellent events hosted around the world this November, with plenty of great stories and presentations shared by members of the world-wide Jenkins community.

Melbourne, Australia JAM

Melbourne JAM leaders Raisa and Bhuva hosted Blue Ocean for the inaugural meeting. Attendees learned the values of Blue Ocean, a project that rethinks the user experience of Jenkins, modeling and presenting the process of software delivery by surfacing the information that is important to development teams with as few clicks as possible, while staying true to the extensibility that Jenkins has always had as a core value. Thank you James Dumay for stopping by to take part in the inauguration.

Melbourne JAM

Singapore, Singapore JAM

One of the members, who has several years of experience using Jenkins (since the Hudson days, in fact), presented some basics on Continuous Integration with GitHub. It was targeted at new members who are starting out with Jenkins: we cannot serve only advanced topics for experienced users and neglect the newcomers, so this session was designed to give new users an introduction to Jenkins. It went well, with about 15-20 attendees, and we hope to run some hands-on workshops in 2017. Some members were looking forward to freebies like stickers and T-shirts too!

Singapore JAM prep
Singapore JAM

Moscow, Russia JAM

Moscow JAM leaders Kirill Tolkachev and Oleg Nenashev led the inaugural meeting with a packed agenda. Oleg began the meeting with an update on Jenkins 2: what improvements users can expect and what enhancements are in the works within the Jenkins project. Following Oleg, Kirill shared how his team in Alfa Laboratory used Jenkins to improve CD/DevOps in their projects (with Jenkins Pipeline, Job DSL and Blue Ocean), the problems they experienced, and how they fixed them. Then Oleg talked about Jenkins Pipeline internals, its main features, and recent changes in the ecosystem. This was followed by a discussion of large-scale Jenkins instances at the after-party.

The recording of the event can be found on YouTube.

Milan, Italy JAM

The first meetup was a great opportunity for local Jenkins fans to meet, learn, and share Jenkins experiences at a local cafe.

San Francisco, California JAM

R. Tyler Croy performed a 30-minute live Pipeline coding demo for a relatively novice audience (though all had used Jenkins). There were a good number of questions from the audience, conveying an appetite for the content being presented. Ryan Wallner, a presenter from ClusterHQ, also gave a Pipeline-based demo covering ClusterHQ's "Fli" integration with a delivery pipeline.

ClusterHQ & Jenkins stickers
Tyler presenting

Washington, DC JAM

There was a fantastic 90% show-up rate at this month's meetup: 58 RSVPs and 52 in attendance was pretty impressive. All this may be due to Fannie Mae's story of how they successfully used Jenkins for CI/CD as part of their DevOps adoption. Afterwards, there was a lot of interest and further discussion. Next month's host will be Freddie Mac.

Seattle, Washington JAM

Long-time Jenkins community member and Seattle JAM leader Khai Do showed how OpenStack uses "Jenkins Job Builder" to manage and run thousands of Jenkins jobs per day in their multi-master CI/CD system. He also compared Jenkins Job Builder with other Jenkins "Infrastructure-as-code" technologies, Jenkins Pipeline and Jenkins Job DSL. It was followed by an in-depth Q&A and discussion session.

Dallas/Fort Worth, Texas (DFW) JAM

The November DFW JAM was the most strongly attended of the year! DFW JAM leader Eric Smalling discussed the benefits of dynamic build agents and demonstrated various ways to implement them, such as the EC2 and Docker plugins. There was a lot of interest and discussion, especially around Docker and the ability it provides to have ephemeral agents with very little provisioning time.

The recording can be downloaded from Google Drive.


Announcing the beta of Declarative Pipeline Syntax


Last week we released version 0.7.1 of the Pipeline Model Definition plugin and wanted to crown it as the official Beta version of the Declarative Pipeline syntax. Although it has been available in the update center since August, we continue to solidify the syntax. We feel this release is very close to the final version and should not change much before 1.0. However, it is still a Beta, so further tweaks are possible.

A release (0.8.0) is planned for early January 2017 which will finalize the syntax with the following changes: JENKINS-40524, JENKINS-40370, JENKINS-40462, JENKINS-40337.

What is Declarative Pipeline?

All the way back at Jenkins World in September, Andrew Bayer presented a sneak peek of a new syntax for constructing Pipelines. We are calling this new syntax Declarative Pipeline to differentiate it from the existing Scripted Pipeline syntax that has always been a part of Pipeline.

After listening to many Jenkins users over the last year, we felt that while Scripted Pipeline provides tremendous power, flexibility, and extensibility, its learning curve was steep for users new to either Jenkins or Pipeline. Beginning users who wanted to take advantage of all the features provided by Pipeline and Jenkinsfiles were required to learn Scripted Pipeline or remain limited to the functionality provided by Freestyle jobs.

Declarative Pipeline does not replace Scripted Pipeline but extends it with a pre-defined structure that lets users focus entirely on the steps required at each stage, without needing to script every aspect of the pipeline. Granular flow control is extremely powerful, and Scripted Pipeline syntax will always be part of Pipeline, but it's not for everyone.

Declarative Pipeline enables all users to connect simple, declarative blocks that define build agents (including Docker), post build actions, environment settings, credentials and all stages that make up the pipeline. Best of all, because this Declarative syntax is part of Pipeline, all build steps and build wrappers available in Plugins or loaded from Shared Libraries are also available as steps in Declarative.

Example

Below is an example of a pipeline in Declarative syntax. You can also switch the view to show the same pipeline in Scripted syntax. The Declarative syntax has a more straightforward structure that is easier to grok by users not versed in Groovy.

Jenkinsfile (Declarative Pipeline)
pipeline {
  agent label:'has-docker', dockerfile: true
  environment {
    GIT_COMMITTER_NAME = "jenkins"
    GIT_COMMITTER_EMAIL = "jenkins@jenkins.io"
  }
  stages {
    stage("Build") {
      steps {
        sh 'mvn clean install -Dmaven.test.failure.ignore=true'
      }
    }
    stage("Archive"){
      steps {
        archive "*/target/**/*"
        junit '*/target/surefire-reports/*.xml'
      }
    }
  }
  post {
    always {
      deleteDir()
    }
    success {
      mail to:"me@example.com", subject:"SUCCESS: ${currentBuild.fullDisplayName}", body: "Yay, we passed."
    }
    failure {
      mail to:"me@example.com", subject:"FAILURE: ${currentBuild.fullDisplayName}", body: "Boo, we failed."
    }
  }
}

How can you help?

  1. Install the latest version of the Pipeline Model Definition plugin.

  2. Read the documentation: Getting Started and Syntax overview. (These documents will be incorporated into the jenkins.io documentation.)

  3. Convert some of your existing Pipeline scripts into Declarative (a minimal before/after sketch follows this list).

  4. Log any issues or enhancements you have here for the syntax, the execution, or the documentation.

  5. Ask questions. You can send questions to the users mailing list or visit the #jenkins channel on IRC.
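
As a tiny illustration of such a conversion, here is a minimal sketch (not taken from the plugin documentation) of the same one-stage pipeline in both syntaxes:

Jenkinsfile (Scripted Pipeline)
node {
    stage('Build') {
        // the same steps work in both syntaxes
        sh 'make'
    }
}

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}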

How will this work with Blue Ocean?

Blue Ocean is all about Pipelines in Jenkins. Running, displaying, and soon, creating Pipelines. Blue Ocean will be able to run and display Pipelines written in this new syntax just like any other Pipeline works today. However, because Declarative Pipeline includes a pre-defined structure, or model, it is now possible to create and edit pipelines with a GUI editor.

Pipeline Editor

Although we plan to launch 1.0 of Declarative Pipeline before Blue Ocean 1.0 is officially available, we expect to have a working Beta of the Editor available to play with. The combination of a simple syntax and an intuitive editor should make creating Jenkins Pipelines a breeze.

Happy Holidays

I hope everyone has a great end of the year and a Happy New Year. With Declarative Pipeline and Blue Ocean, we expect great things for Jenkins in 2017!

Continuous Delivery with Jenkins and Puppet Enterprise


This is a guest post by Carl Caum, who works at Puppet and created the Puppet Enterprise Pipeline plugin.

During PuppetConf 2016, Brian Dawson of CloudBees and I announced the Puppet Enterprise plugin for Jenkins Pipeline. Let's take a look at how the plugin makes it trivial to use Puppet to perform some or all of the deployment tasks in continuous delivery pipelines.

Jenkins Pipeline introduced an amazing world where the definition for a pipeline is managed from the same version control repository as the code delivered by the pipeline. This is a powerful idea, and one I felt complemented Puppet's automation strengths. I wanted to make it trivial to control Puppet Enterprise's orchestration and infrastructure code management capabilities, as well as to set hierarchical configuration data and use Puppet's inventory data system as a source of truth, all from a Pipeline script. The result was the Puppet Enterprise plugin, which fully buys into the Pipeline ideals by providing methods to control the different capabilities in Puppet Enterprise. The methods provide ways to query PuppetDB, set Hiera key/value pairs, deploy Puppet code environments with Code Management, and kick off orchestrated Puppet runs with the Orchestrator.

The Puppet Enterprise for Jenkins Pipeline plugin

The Puppet Enterprise for Jenkins Pipeline plugin itself has zero system dependencies; you need only install the plugin from the update center. The plugin uses APIs available in Puppet Enterprise to do its work. Since the PuppetDB query, Code Management, and Orchestrator APIs are all backed by Puppet Enterprise's role-based access control (RBAC) system, it's easy to restrict what pipelines are allowed to control in Puppet Enterprise. To learn more about RBAC in Puppet Enterprise, read the docs here.

Configuring

Configuring the plugin is fairly straightforward. It takes three simple steps:

  1. Set the address of the Puppet server

  2. Create a Jenkins credential with a Puppet Enterprise RBAC authentication token

  3. Configure the Hiera backend

Set the Puppet Enterprise Server Address

Go to the Jenkins > Manage Jenkins > Puppet Enterprise page. Put the DNS address of the Puppet server in the Puppet Master Address text field. Click the Test Connection button to verify that the server is reachable, the Puppet CA certificate is retrievable, and HTTPS connections succeed. Once the test succeeds, click Save.

Create a Jenkins Credentials Entry

The plugin uses the Jenkins built-in credentials system (the plain-credentials plugin) to store the RBAC tokens used to authenticate and authorize against Puppet Enterprise. First, generate an RBAC token in Puppet Enterprise by following the instructions on the docs site. Next, create a new Jenkins Credentials item of Kind "Secret text", with the Secret value set to the Puppet Enterprise RBAC token. It's highly recommended to give the credential an ID value that's descriptive and identifiable; you'll use it in your Pipeline scripts.

In your Jenkinsfile, use the puppet.credentials method to set all future Puppet methods to use the RBAC token. For example:

puppet.credentials 'pe-team-token'

Configure the Hiera Backend

The plugin exposes an HTTP API for performing Hiera data lookups for key/value pairs managed by Pipeline jobs. To configure Hiera on the Puppet compile master(s) to query the Jenkins Hiera data store backend, use the hiera-http backend. On the Puppet Enterprise compile master(s), run the following commands:

/opt/puppetlabs/puppet/bin/gem install hiera-http
/opt/puppetlabs/bin/puppetserver gem install hiera-http

Now you can configure the /etc/puppetlabs/puppet/hiera.yaml file. The following configuration instructs Hiera to first look at the Hiera YAML files in the Puppet code's environment, then fall back to the http backend. The http backend first queries the Hiera data store API for the key in the scope with the same name as the node; if nothing is found, it looks for the key in the node's environment. You can use any Facter fact to match scope names.

:backends:
  - yaml
  - http

:http:
  :host: jenkins.example.com
  :port: 8080
  :output: json
  :use_auth: true
  :auth_user: <user>
  :auth_pass: <pass>
  :cache_timeout: 10
  :failure: graceful
  :paths:
    - /hiera/lookup?path=%{clientcert}&key=%{key}
    - /hiera/lookup?path=%{environment}&key=%{key}

Finally, restart the pe-puppetserver process to pick up the new configs:

/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=stopped
/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=running

Hiera HTTP Authentication

If Jenkins' Global Security is configured to allow unauthenticated read-only access, the use_auth, auth_pass, and auth_user parameters are unnecessary. Otherwise, create a local Jenkins user that has permissions to view the Hiera Data Lookup page and use that user’s credentials for the hiera.yaml configuration.
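
For instance, with anonymous read-only access enabled, the auth-related keys can simply be omitted from the configuration shown earlier. A sketch of the relevant part of hiera.yaml:

:http:
  :host: jenkins.example.com
  :port: 8080
  :output: json
  # :use_auth, :auth_user and :auth_pass omitted because anonymous
  # read-only access is enabled in Jenkins
  :cache_timeout: 10
  :failure: graceful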

Querying the infrastructure

PuppetDB is an extensive data store that holds every bit of information Puppet generates and collects across every system Puppet is installed on. PuppetDB provides a sweet query language called PQL. With PQL, you can ask complex questions of your infrastructure, such as "How many production Red Hat systems are there with the openssl package installed?" or "Which us-west-2c nodes with the MyApp role were created in the last 24 hours?"

This can be a powerful tool for parts of your pipeline where you need to perform specific operations on subsets of the infrastructure, such as draining a load balancer.

Here’s an example using the puppet.query method:

results = puppet.query '''
  inventory[certname] {
    facts.os.name = "RedHat" and
    facts.ec2_metadata.placement.availability-zone = "us-west-2c" and
    facts.uptime_hours < 24
  }'''

The query returns an array of matching items. The results can be iterated over, and even passed to a series of puppet.job calls. For example, the following code queries all nodes in production that experienced a failure on their last Puppet run.

results = puppet.query 'nodes { latest_report_status = "failed" and catalog_environment = "production"}'

Note that once you can use closures in Pipeline scripts, doing the above example will be much simpler.
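
Until then, a plain loop can do the job. Below is a sketch that feeds the query results into a targeted orchestrator run; it assumes each entry in results exposes a certname field, as returned by PuppetDB's nodes endpoint:

// Hypothetical follow-up: collect the certnames of the failed
// production nodes and re-run Puppet on just those nodes.
def failedNodes = []
for (int i = 0; i < results.size(); i++) {
    failedNodes << results[i].certname
}
puppet.job 'production', nodes: failedNodes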

Creating an orchestrator job

The orchestration service in Puppet Enterprise is a tool to perform orchestrated Puppet runs across as broad or as targeted an infrastructure as you need at different parts of a pipeline. You can use the orchestrator to update applications in an environment, or update a specific list of nodes, or update nodes across a set of nodes that match certain criteria. In each scenario, Puppet will always push distributed changes in the correct order by respecting the cross-node dependencies.

To create a job in the Puppet orchestrator from a Jenkins pipeline, use thepuppet.job method. The puppet.job method will create a new orchestrator job, monitor the job for completion, and determine if any Puppet runs failed. If there were failures, the pipeline will fail.

The following are just some examples of how to run Puppet orchestration jobs against the infrastructure you need to target.

Target an entire environment:

puppet.job 'production'

Target instances of an application in production:

puppet.job 'production', application: 'Myapp'

Target a specific list of nodes:

puppet.job 'production', nodes: ['db.example.com','appserver01.example.com','appserver02.example.com']

Target nodes matching a complex set of criteria:

puppet.job 'production', query: 'inventory[certname] { facts.os.name = "RedHat" and facts.ec2_metadata.placement.availability-zone = "us-west-2c" and uptime_hours < 24 }'

As you can see, the puppet.job command lets you be as broad or as targeted as you need to be for different parts of your pipeline. There are many other options you can add to the puppet.job method call, such as setting the Puppet runs to noop or giving the orchestrator a maximum concurrency limit. Learn more about the orchestrator here.
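
For instance, a no-op dry run against production might look something like the following. The parameter name is an assumption based on the options just described; check the plugin documentation for the exact names:

// Assumed parameter name for illustration: ask the orchestrator to
// run against production without enforcing any changes.
puppet.job 'production', noop: true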

Updating Puppet code

If you’re using Code Management in Puppet Enterprise (and you should), you can ensure that all the modules, site manifests, Hiera data, and roles and profiles are staged, synced, and ready across all your Puppet masters, direct from your Jenkins pipeline.

To update Puppet code across all Puppet masters, use the puppet.codeDeploy method:

puppet.codeDeploy 'staging'

Setting Hiera values

The plugin includes an experimental feature to set Hiera key/value pairs. There are many cases where you need to promote information through a pipeline, such as a build version or artifact location. Doing so is very difficult in Puppet, since data promotion almost always involves changing Hiera files and committing to version control.

The plugin exposes an HTTP API endpoint that Hiera can query using the hiera-http backend. With the backend configured on the Puppet master(s), key/value pairs can be set to scopes. A scope is arbitrary and can be anything you like, such as a Puppet environment, a node’s certname, or the name of a Facter fact like operatingsystem or domain.

To set a Hiera value from a pipeline, use the puppet.hiera method.

puppet.hiera scope: 'staging', key: 'build-version', value: env.BUILD_ID

Now you can set the same key with the same value to the production scope later in the pipeline, followed by a call to puppet.job to push the change out.
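
Sketched out with the method signatures shown above, that promotion might look like:

// Promote the previously staged value to production, then push the
// change out with an orchestrated Puppet run.
puppet.hiera scope: 'production', key: 'build-version', value: env.BUILD_ID
puppet.job 'production'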

Examples

Theplugin’s Github repository contains a set of example Pipeline scripts. Feel free to issue pull requests to add your own scripts!

What’s next

I’m pretty excited to see how this is going to help simplify continuous delivery pipelines. I encourage everyone to get started with continuous delivery today, even if it’s just a simple pipeline. As your practices evolve, you can begin to add automated tests, automate away manual checkpoints, start to incorporate InfoSec tests, and include phases for practices like patch management that require lots of manual approvals, verifications and rollouts. You’ll be glad you did.

Thank you for an amazing 2016


Happy New Year from Jenkins

I do not think it is an exaggeration to say: 2016 was the best year yet for the Jenkins project. Since the first commit in 2006, the project has reached a number of significant milestones in its ten years, but we have never experienced such a breadth of major milestones in so short a time, from Jenkins 2 and Blue Ocean to the Google Summer of Code and Jenkins World.

I wanted to take a moment to celebrate the myriad of accomplishments which couldn't have happened without the help of everybody who participates in the Jenkins project: the 1,300+ contributors to the jenkinsci GitHub organization, the 4,000+ members of the developers mailing list, the 8,000+ members of the users mailing list, and countless others who have reported issues, submitted pull requests, and presented at meetups and conferences.

Jenkins 2

Through the course of 2016, the Jenkins project published 16 LTS releases and 54 weekly releases. Of those 70 releases, the most notable may have been the Jenkins 2.0 release, which was published in April.

Jenkins 2 made Pipeline as Code front-and-center in the user experience, introduced a new "Getting Started" experience, and included a number of other small UI improvements, all while maintaining backwards compatibility with existing Jenkins environments.

Since April, we have released a number of LTS releases using Jenkins 2 as a baseline, meaning the Jenkins project no longer maintains any 1.x release lines.

The Pipeline efforts have continued to gain steam since April, covered on this blog in a number of posts tagged "pipeline". We closed out 2016 by announcing the beta of the Declarative Pipeline syntax, with 1.0 expected in early 2017.

Blue Ocean

Hot on the heels of the Jenkins 2 announcement, Blue Ocean, a new user experience for Jenkins, was open sourced in May. Blue Ocean rethinks the user experience of Jenkins: it is designed from the ground up for Jenkins Pipeline and compatible with Freestyle jobs. The goal of the project is to reduce clutter and increase clarity for every member of a team using Jenkins.

The Blue Ocean beta can be installed from the Update Center and can run in production Jenkins environments alongside the existing UI. It adds the new user experience under /blue in the environment but does not disturb the existing UI.

Blue Ocean is expected to reach "1.0" in the first half of 2017.

Blue Ocean

Azure

Also in May of 2016, the Jenkins project announced an exciting partnership with Microsoft to run our project infrastructure on Azure. While the migration of Jenkins project infrastructure into Azure is still on-going, there have been some notable milestones reached already:

  • End-to-end TLS encrypted delivery for Debian/openSUSE/Red Hat repositories which are configured to use https://pkg.jenkins.io by the end-user.

  • Major capacity improvements to ci.jenkins.io providing on-demand Ubuntu and Windows build/test infrastructure.

  • A full continuous delivery Pipeline for all Azure-based infrastructure using Terraform from Jenkins.

The migration to Azure is expected to complete in 2017.

Google Summer of Code

For the first time in the history of the project, Jenkins was accepted into Google Summer of Code 2016. Google Summer of Code (GSoC) is an annual, international program which encourages college-aged students to participate in open source projects during the summer break between classes. Students accepted into the program receive a stipend, paid by Google, to work on well-defined projects to improve or enhance the Jenkins project.

In exchange, numerous Jenkins community members volunteered as "mentors" for students to help integrate them into the open source community and succeed in completing their summer projects.

A lot was learned during the summer, which we look forward to applying to Google Summer of Code 2017.

Jenkins World

In September, over one thousand people attended Jenkins World in Santa Clara, California.

Demo Crowd

Following the event, Liam posted a series of blog posts highlighting some of the fantastic content shared by Jenkins users and contributors from around the world.

Jenkins World was the first global event of its kind for Jenkins. It brought users and contributors together to exchange ideas on the current state of the project, celebrate accomplishments of the past year, and look ahead at all the exciting enhancements coming down the pipe(line).

It was such a smashing success that Jenkins World 2017 is already scheduled for August 30-31st in San Francisco, California.

JAM

Finally, 2016 saw tremendous growth in the number of Jenkins Area Meetups (JAMs) hosted around the world. JAMs are local meetups intended to bring Jenkins users and contributors together for socializing and learning, organized by local Jenkins community members who have a passion for sharing new Jenkins concepts, patterns and tools.

Driven by the current Jenkins Events Officer, Alyssa Tong, and dozens of passionate organizers, JAMs have become a great way to meet other Jenkins users near you.

Jenkins Around the World Meetups

While we don’t yet have JAMs on each of the seven continents, you can always join the Jenkins Online Meetup. Though we’re hoping more groups will be founded near you in 2017!


I am personally grateful for the variety and volume of contributions made by thousands of people to the Jenkins project this year. I believe I can speak for project founder Kohsuke Kawaguchi in stating that the Jenkins community has grown beyond anything we could have imagined five years ago, let alone ten!

There are a number of ways to participate in the Jenkins project, so if you didn't have an opportunity to join in during 2016, we hope to see you next year!

Learning plugin development by improving the LIFX notifier plugin


This is a cross post by Veaceslav Gaidarji, open source developer and contributor to the Jenkins and Bitrise projects.

Some time ago I encountered LIFX smart bulbs. These are bulbs with a chip inside: 50% bulb, 50% chip. There are mobile applications for easy configuration and remote control of the bulb. Nothing special here; it simply works, and it is very convenient to have such bulbs in a dormitory.

Brilliant idea time

99% of ideas which come to our minds either were already implemented by someone else or they are useless.

— Veaceslav Gaidarji

And as it always happens, the developer inside me generated an idea which, as it always happens, had already been implemented by someone else.

The idea was to connect a LIFX bulb to a Jenkins server and update the color according to a job's state.

Before starting to work on such a Jenkins plugin, I searched for similar projects on Google, and the first links pointed me to the existing LIFX notifier plugin and a blog post from Michael Neale, who created the plugin. Michael's post describes exactly what I had in mind.

At this point I had 2 options:

  • forget about building something new and just use the plugin

  • improve the existing plugin

The first option is always easy and effortless, but the second one is more challenging.

Improving an existing plugin

The existing LIFX notifier plugin did its job really well, and I was able to connect my bulb to Jenkins and test it. But it wasn't complete: it had no configuration options at all, and therefore no way to change the colors.

First, I read the Jenkins contribution guidelines, which encourage developers to improve existing plugins (if any exist) rather than create other versions of plugins with similar functionality. Then I contacted the plugin author, Michael Neale, via email and kindly asked for contributor access to the existing plugin on GitHub. After a short discussion about my plans for the plugin, Michael added me as a contributor to the GitHub repo and wished me good luck. Thanks Michael!

I wanted to improve the LIFX notifier plugin to add the ability to customize the colors (in progress, build success, and build failure). This is not a hard task, actually. More than 1,000 plugins have been developed for Jenkins by hackers like me, which means I should have no problem doing it as well. Fortunately, I had already used some plugins with a UI similar to the one I planned to add to the LIFX notifier, such as:

  1. HockeyApp plugin

  2. Fabric Beta publisher plugin

  3. Different Build notifiers plugins

Reviewing the code for these plugins, plus the Jenkins plugin development documentation, and of course looking over Jelly components helped me to:

  • Better understand the Jenkins architecture.

  • Learn how Jenkins plugins work in general.

  • Learn how to create the UI components for a plugin.

  • Learn how to subscribe to Jenkins job state changes using appropriate extension points (sketched below).
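
As a minimal sketch of that last point, a RunListener extension can react to builds starting and completing. This is illustrative only; the class and messages below are hypothetical and not the actual LIFX notifier code:

import hudson.Extension
import hudson.model.Run
import hudson.model.TaskListener
import hudson.model.listeners.RunListener

// Hypothetical listener: a real notifier would switch the bulb color
// here instead of just logging.
@Extension
class BulbRunListener extends RunListener<Run> {
    @Override
    void onStarted(Run run, TaskListener listener) {
        listener.logger.println('Build started: show the "in progress" color')
    }

    @Override
    void onCompleted(Run run, TaskListener listener) {
        listener.logger.println("Build finished with result: ${run.result}")
    }
}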

Within a few weeks I finished my plugin modifications and added unit tests for the major parts. As a result, the plugin now has a UI configuration section in Post-build Actions which is self-descriptive:

plugin configuration

The last step was to prepare the new plugin version and publish it to the world! The Jenkins hosting documentation describes the step-by-step process of publishing a plugin, which includes many steps that should be followed very carefully.

Demo

What I’ve learned

It was my first experience with Jenkins plugin development. I should say that the learning curve is fairly steep, and sometimes it is really hard to find answers to the questions that come up. But in general it's all about Java, XML, and Maven, and developing Jenkins plugins is a lot of fun.

Check out the LIFX notifier page for more information about the latest releases!


Bonus: bitrise.io users, I've developed a LIFX notifier step for Bitrise as well.

Security warnings in Jenkins


Jenkins 2.40 was released earlier this week, and readers of the changelog will have noticed that it now includes the ability to show security warnings published by the configured update site. But what does that mean?

In the past, we’ve notified users about security issues in Jenkins and in plugins through various means: Emails to the jenkinsci-advisories mailing list (which I recommend you subscribe to), blog posts, and, recently, emails to the oss-security mailing list. But I still wanted to increase the reach of our notifications, to make sure Jenkins admins are informed quickly about possible security problems on their instances. The logical next step was to include these notifications in Jenkins itself, and that feature has been added in Jenkins 2.40.

Today we enabled the publication of warnings on our update sites: once Jenkins 2.40 (or newer) refreshes its cache of update site metadata, it may inform you that you're using a vulnerable plugin that should be updated or removed. Right now, these aren't previously unknown warnings; they reference security advisories for plugin vulnerabilities that have been published over the past few years.

We will of course continue to publish security advisories using the mailing list of the same name, as well as through other means.

Stay safe!

Jenkins World 2017 Call for Papers is Open



The largest Jenkins event, Jenkins World, is coming to San Francisco, California on August 28-31, 2017, at the Marriott Marquis. This conference will feature two days of hands-on training, workshops, and certification exams, followed by two more days with five tracks of technical sessions from Jenkins and DevOps experts from around the world.

Inspire your peers and colleagues by sharing your expertise and experience as one of the Jenkins World speakers. The Call for Papers is open; the last day to submit a proposal is March 5, 2017.

Compared to Jenkins World 2016, what's new for 2017? Two tracks are now dedicated to "show and tell". These sessions are technically advanced, with code sharing, heavy on demos, and only a few slides. If you are like most of us, driven to learn, share, and collaborate… we'd like to hear from you!

Looking forward to your amazing proposal(s)!

Declarative Pipeline Syntax Beta 2 release


This week, we released the second beta of the new Declarative Pipeline syntax, available in the Update Center now as version 0.8.1 of Pipeline: Model Definition. You can read more about Declarative Pipeline in the blog post introducing the first beta from December, but we wanted to update you all on the syntax changes in the second beta. These syntax changes are the last compatibility-breaking changes before the 1.0 release planned for February, so you can safely start using the 0.8.1 syntax now without needing to change it when 1.0 is released.

A full syntax reference is available on the wiki as well.

Syntax Changes

Changed "agent" configuration to block structure

In order to support more detailed and clear configuration of agents, as well as to make agent syntax more consistent with the rest of the Declarative Pipeline syntax, we've moved the agent configuration into blocks. The agent any and agent none configurations work the same as previously, but label, docker and dockerfile now look like the following:

Just specifying a label is simple.

Jenkinsfile (Declarative Pipeline)
agent {
    label "some-label"
}

If you’re just specifying a Docker image, you can use this simple syntax.

Jenkinsfile (Declarative Pipeline)
agent {
    docker "ubuntu:16.04"
}

When you are specifying a label or other arguments, docker looks like this:

Jenkinsfile (Declarative Pipeline)
agent {
    docker {
        image "ubuntu:16.04"
        label "docker-label"
        args "-v /tmp:/tmp -p 8000:8000"
    }
}

When you’re building an image from "Dockerfile" in your repository and don’t care what node is used or have additional arguments, you can again use a simple syntax.

Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile true
}

When you’re building an image from a different file, or have a label or other arguments, use the following syntax:

Jenkinsfile (Declarative Pipeline)
agent {
    dockerfile {
        filename "OtherDockerfile"
        label "docker-label"
        args "-v /tmp:/tmp -p 8000:8000"
    }
}

Improved "when" conditions

We introduced the when section a couple of releases ago, but have made some changes to its syntax in 0.8.1. We wanted to add simpler ways to specify common conditions, and that required us to rework the syntax accordingly.

Branch

One of the most common conditions is running a stage only if you’re on a specific branch. You can also use wildcards like "*/master".

Jenkinsfile (Declarative Pipeline)
when {
    branch "master"
}

Environment

Another built-in condition is the environment condition, which checks to see if a given environment variable is set to a given value.

Jenkinsfile (Declarative Pipeline)
when {
    environment name: "SOME_ENV_VAR", value: "SOME_VALUE"
}

Expression

Lastly, there’s the expression condition, which resolves an arbitrary Pipeline expression. If the return value of that expression isn’t false or null, the stage will execute.

Jenkinsfile (Declarative Pipeline)
when {
    expression {
        echo "Should I run?"return"foo" == "bar"
    }
}

"options" replaces "properties" and "wrappers"

We’ve renamed the properties section to options, due to needing to add new Declarative-specific options and to cut down on confusion. The options section is now where you’ll put general Pipeline options like buildDiscarder, Declarative-specific options like skipDefaultCheckout, and block-scoped steps that should wrap the execution of the entire build, like timeout ortimestamps.

Jenkinsfile (Declarative Pipeline)

options {
    buildDiscarder(logRotator(numToKeepStr:'1'))
    skipDefaultCheckout()
    timeout(time: 5, unit: 'MINUTES')
}

Heading towards 1.0!

While we may still add more functionality to the Declarative Pipeline syntax, we won’t be making any changes to existing syntax for the 1.0 release. This means that any pipelines you write against the 0.8.1 syntax will keep working for the foreseeable future without any changes. So if you’re already using Declarative Pipelines, make sure to update your `Jenkinsfile`s after upgrading to 0.8.1, and if you haven’t been using Declarative Pipelines yet, install the Pipeline: Model Definition plugin and give them a try!


Blue Ocean Dev Log: January Week #2


As we get closer to Blue Ocean 1.0, which is planned for the end of March, I figured it would be great to highlight some of the good stuff that has been going on. It's been a busy-as-usual week as everyone comes back from vacation. A couple of new betas went out this week. Of note:

  • input to Pipelines is now supported, a much-requested feature (see below)

  • A new French translation

  • Some optimisations (especially around reducing the number of HTTP calls). We have started using gtmetrix.com to measure changes on dogfood to get some numbers around optimisations on the web tier.

  • And a grab bag of other great bug fixes.

Using the input step in Blue Ocean

A bunch of work has also been done to support parametrized pipelines, as well as the creation of new multibranch pipelines (both much-requested features).

There is also now an "official" Docker image being published to Docker Hub. The Pipeline building the container runs weekly and will pick up newly tagged releases of Blue Ocean.

Running the latest can be as simple as:

docker run -p 8888:8080 jenkinsci/blueocean:latest
Jenkins yarrr

This is built on the incredibly popular official "jenkins" image (10M pulls can't all be wrong!). The container also has tags available (e.g. jenkinsci/blueocean:1.0.0-b16) for grabbing a specific released version.
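
For example, running that specific tagged version looks just like running the latest above:

docker run -p 8888:8080 jenkinsci/blueocean:1.0.0-b16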

Up next for Blue Ocean development as we march towards 1.0:

  • Support for parametrized jobs, for which a bunch of API work has already been done.

  • Creation of the new Pipeline GUI

  • Preview release of the Visual Editor for Declarative Pipeline.

  • The new header design will be applied

Enjoy!


If you’re interested in helping to make Blue Ocean a great user experience for Jenkins, please join the Blue Ocean development team on Gitter!

SCM API turns 2.0 and what that means for you


We are announcing the SCM API 2.0.x and Branch API 2.0.x release lines.

Downstream of this there are also some great improvements to a number of popular plugins including:

There are some gotcha’s that Jenkins administrators will need to be aware of.

Always take a backup of your JENKINS_HOME before upgrading any plugins.

We want to give you the whole story, but the take-home message is this:

When updating the SCM API and/or Branch API plugins to the 2.0.x release lines, if you have any of the GitHub Organization Folders, GitHub Branch Source, and/or BitBucket Branch Source plugins installed, then you must upgrade them all to 2.0.x at the same time or Bad Things™ will happen.

— A Jenkins Administrator

Do NOT upgrade some of these plugins but not others! Doing so may cause your jobs to fail to load.

If you don’t care about the hows and whys, you can just skip down to this section but if you are curious…​ here we go!

The back-story

Way back in September 2013, we announced the Literate plugin as an experimental new way of modeling branch development in Jenkins.

When you are performing an experiment, the recommendation is to do just enough work to let you perform the test. However, the culture in Jenkins is to always try and produce reusable components that others can use in ways you have not anticipated.

So when releasing the initial version of the Literate plugin, we also separated the Literate-specific bits from the SCM-specific and multi-branch concepts. These lower-level concepts were organized into the following plugins:

  • SCM API - which was intended to be a plugin to hold a next generation API for interacting with source control systems.

  • Branch API - which was intended to be a plugin to hold the multi-branch functionality that was abstracted from the usage by the Literate plugin.

In addition, we released updates to three of the more common SCM plugins which included implementations of the SCM API:

  • Git plugin

  • Subversion plugin

  • Mercurial plugin

While there was some interest in the Literate plugin, it did not gain much traction - there are only 39 Jenkins instances running the Literate plugin as of December 2016.

In terms of the reusable components, we had only made a minimal implementation with some limitations:

  • Very basic event support - events can only trigger a re-scan of the entire repository. This was acceptable at the time because the three existing implementations all use a local cache of the remote state, so re-scanning is quick.

  • No implementation of the SCMFileSystem API. As a result, it is not possible for plugins like Pipeline Multibranch to get the Jenkinsfile from the remote repository without checking out the repository into a workspace.

  • No documentation on how plugin developers are supposed to implement the SCM API

  • No documentation on how plugin developers are supposed to consume the SCM API (if they wanted to do something like Branch API but not the same way as Branch API)

  • No documentation on how plugin developers are supposed to implement the Branch API to create their own multi-branch project types

  • No documentation for users on how the Branch API based project types are expected to work.

Roll forward to November 2015, and Jenkins Pipeline got a release of the Pipeline Multibranch plugin. It seems that pairing Pipeline with Branch API style multi-branch is much more successful than Literate - there are close to 60,000 instances running the Pipeline Multibranch plugin as of December 2016.

There were also two new SCM plugins implementing the SCM API:

  • GitHub Branch Source Plugin

  • BitBucket Branch Source Plugin

Unlike the previous implementations of the SCM API, however, these plugins do not maintain a local cache of the repository state. Rather they make queries via the GitHub / BitBucket REST APIs on demand.

The above design decision exposed one of the initial MVP compromises of the SCM API plugin: very basic event support. Under the SCM API 1.x model, the only event that an SCMSource can signal is "something changed, go look at everything again". When you are accessing an API that only allows 5,000 API calls per hour, performing a full scan of the entire repository just to pick up a change in one branch does not make optimum usage of that rate limit.

So we decided that the SCM API and Branch API plugins had perhaps outgrown their minimum viable experiment state, and that the corresponding limitations should be addressed.

Enter SCM API 2.0.x and Branch API 2.0.x

So what has changed in the SCM API 2.0.x and Branch API 2.0.x release lines? These plugin releases address the limitations listed above, most notably by adding fine-grained event support.

In addition, we have upgraded the following plugins to include the new fine-grained event support:

  • Git Plugin

  • Mercurial Plugin

Ok, that was the good news. Here is the bad news.

We found out that the GitHub Branch Source and BitBucket Branch Source plugins had made invalid assumptions about how to implement the SCM API. To be clear, this was not the plugin developers' fault: at the time, there was no documentation on how to implement the SCM API.

But fixing the issues that we found means that you have to be careful about which specific combinations of plugin versions you have installed.

SCM API Plugin

Technically, the 2.0.x line of this plugin is both API and on-disk compatible with plugins compiled against older version lines.

However, the 1.x lines of both the GitHub Branch Source and BitBucket Branch Source plugins have hard-coded assumptions about the internal implementation of the SCM API that are no longer valid in the 2.0.x line.

If you upgrade to SCM API 2.0.x while either the GitHub Branch Source or the BitBucket Branch Source plugin is installed, and you do not upgrade those plugins to the 2.0.x line, then your Jenkins instance will fail to start up correctly.

The solution is just to upgrade the GitHub Branch Source or the BitBucket Branch Source plugin (as appropriate) to the 2.0.x line.

If you upgrade the SCM API plugin to the 2.0.x line and do not upgrade the Branch API plugin to the 2.0.x line then you will not get any of the benefits of the new version of the SCM API plugin.

Branch API Plugin

The 2.0.x line of this plugin makes on-disk file format changes that mean you will be unable to roll back to the 1.x line after an upgrade without restoring the old data files from a back-up. Technically, the API is compatible with plugins compiled against older version lines.

The 1.x lines of both the GitHub Branch Source and BitBucket Branch Source plugins implemented hacks that make assumptions about the internal implementation of the Branch API that are no longer valid in the 2.0.x line.

The Pipeline Multibranch plugin made a few minor invalid assumptions about how to implement a multibranch project type. For example, if you do not upgrade the Pipeline Multibranch plugin in tandem, then you will be unable to manually delete an orphaned item before the orphaned item retention strategy runs, which should happen significantly less frequently with the new event support.

If you upgrade to Branch API 2.0.x while either the GitHub Branch Source or the BitBucket Branch Source plugin is installed, and you do not upgrade those plugins to the 2.0.x line, then your Jenkins instance will fail to start up correctly.

The solution is just to upgrade the GitHub Branch Source or the BitBucket Branch Source plugin (as appropriate) to the 2.0.x line.

Git Plugin

The new releases of this plugin are both API and on-disk compatible with plugins compiled against the previous releases.

The 2.0.x lines of both the GitHub Branch Source and BitBucket Branch Source plugins require that you upgrade your Git Plugin to one of the versions that supports SCM API 2.0.x. In general, the required upgrade will be performed automatically when you upgrade your GitHub Branch Source and BitBucket Branch Source plugins.

Mercurial Plugin

The new release of this plugin is both API and on-disk compatible with plugins compiled against the previous releases.

The 2.0.x line of the BitBucket Branch Source plugin requires that you upgrade your Mercurial Plugin to the 2.0.x line. In general, the required upgrade will be performed automatically when you upgrade your BitBucket Branch Source plugin.

BitBucket Branch Source Plugin

The 2.0.x line of this plugin makes on-disk file format changes that mean you will be unable to roll back to the 1.x line after an upgrade without restoring the old data files from a back-up.

GitHub Branch Source Plugin

The 2.0.x line of this plugin makes on-disk file format changes that mean you will be unable to roll back to the 1.x line after an upgrade without restoring the old data files from a back-up.

If you upgrade to GitHub Branch Source 2.0.x and you have the GitHub Organization Folders plugin installed, you must upgrade that plugin to the tombstone release.

GitHub Organization Folders Plugin

The functionality of this plugin has been migrated to the GitHub Branch Source plugin. You will need to upgrade to the tombstone release in order to ensure all the data has been migrated to the classes in the GitHub Branch Source plugin.

Once you have upgraded to the tombstone version and all GitHub Organization Folders have had a full scan completed successfully, you can disable and uninstall the GitHub Organization Folders plugin. There will be no more releases of this plugin after the tombstone. The tombstone is only required for data migration.

Summary for busy Jenkins Administrators

Upgrading should make multi-branch projects much better. When you are ready to upgrade, you must ensure that you upgrade all the required plugins. If you miss some, just upgrade them and restart to fix the issue.

  • SCM API Plugin: 2.0.1 or newer

  • Branch API Plugin: 2.0.0 or newer

  • Git Plugin: either 2.6.2 or newer in the 2.6.x line, or 3.0.2 or newer

  • Mercurial Plugin: 2.0.0 or newer

  • GitHub Branch Source Plugin: 2.0.0 or newer

  • BitBucket Branch Source Plugin: 2.0.0 or newer

  • GitHub Organization Folders Plugin: 1.6

  • Pipeline Multibranch Plugin: 2.10 or newer

Other plugins that may require updating:

  • GitHub API Plugin: 1.84 or newer

  • GitHub Plugin: 1.25.0 or newer

  • Folders Plugin: 5.16 or newer

After an upgrade, you will see the data migration warning.

Summary for busy Jenkins users

SCM API 2.0.x adds fine-grained event support. This should significantly improve the responsiveness of multi-branch projects and significantly reduce your GitHub API rate limit usage.

If you are using theGitHub Branch Source orGitHub Organization Folders plugins then upgrading will significantly reduce the API calls made by Jenkins to GitHub.

If you are using any of the upgraded SCM plugins (e.g. Git, Mercurial, GitHub Branch Source, BitBucket Branch Source) then upgrading will significantly improve the responsiveness to push event notifications.

Summary for busy SCM plugin developers

You should read the new documentation on how plugin developers are supposed to implement the SCM API.

Where to now, dear Literate Plugin?

The persistent reader may be wondering what happens now to the Literate plugin.

For me, the logical heir of the Literate plugin is the Pipeline Model Definition plugin. This new plugin has the advantage of an easy-to-read pipeline syntax, together with the extra functionality whose absence, I suspect, was preventing people from adopting Literate.

The good news is that Pipeline Model Definition already has 5,000 installations as of December 2016, and I expect uptake to keep on growing.

Jenkins Upgrades To Java 8


In the next few months, Jenkins will require Java 8 as its runtime. We have been watching Java 8 adoption among Jenkins instances since last November, and those numbers informed this decision.

Timeline

Here is how we plan to roll out that baseline upgrade over the next few months.

  • Now: Announce the intention publicly.

  • April 2017: Drop support for Java 7 in Jenkins weekly releases. With the current rhythm, that means 2.52 will most likely be the first weekly release to require Java 8.

  • June 2017: The first LTS version requiring Java 8 is published. This should be something around 2.60.1.

If you are still running Java 7, you will not be able to upgrade to the latest LTS version after a cutoff date probably around May 2017.

Why Upgrade to Java 8

We made this decision by balancing those adoption numbers with many other criteria:

  • Java 7 has now been end-of-life for more than 18 months

  • People are already moving away from Java 7, as the numbers show:

    • 52.8% of instances were already running Java 8 last November, and that figure has reached 58% two months later.

    • If we only look at Jenkins 2.x instances, the figure reaches 72%.

  • The Java 8 runtime is known from field experience to be more stable

  • Many developers have been wanting to leverage the improvements that Java 8 brings to the language and platform (lambdas and the Date/Time API, just to name a few). Being also a developer community, we want Jenkins to be appealing to contributors.

If you have questions or feedback about this announcement, please feel free to post it to the Jenkins developers mailing list.

Converting Conditional Build Steps to Jenkins Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Introduction

With all the new developments in Jenkins Pipeline (and Declarative Pipeline on the horizon), it's easy to forget what we did to create "pipelines" before Pipeline. There are a number of plugins, some that have been around since the very beginning, that enable users to create "pipelines" in Jenkins. For example, basic job chaining worked well in many cases, and the Parameterized Trigger plugin made chaining more flexible. However, creating chained jobs with conditional behavior was still one of the harder things to do in Jenkins.

The Conditional BuildStep plugin is a powerful tool that has allowed Jenkins users to write Jenkins jobs with complex conditional logic. In this post, we'll take a look at how we might convert Freestyle jobs that include conditional build steps to Jenkins Pipeline. Unlike in Freestyle jobs, implementing conditional operations in Jenkins Pipeline is trivial, but matching the behavior of complex conditional build steps will require a bit more care.

Graphical Programming

The Conditional BuildStep plugin lets users add conditional logic to Freestyle jobs from within the Jenkins web UI. It does this by:

  • Adding two types of Conditional BuildStep ("Single" and "Multiple") - these build steps contain one or more other build steps to be run when the configured condition is met

  • Adding a set of Condition operations - these control whether the Conditional BuildStep executes the contained step(s)

  • Leveraging the Token Macro facility - these provide values to the Conditions for evaluation

In the example below, this project will run the shell script step when the value of the REQUESTED_ACTION token equals "greeting".

Freestyle Job Parameters
Freestyle Job Conditional BuildStep

Here’s the output when I run this project with REQUESTED_ACTION set to "greeting":

Run condition [Strings match] enabling prebuild for step [Execute shell]
Strings match run condition: string 1=[greeting], string 2=[greeting]
Run condition [Strings match] enabling perform for step [Execute shell]
[freestyle-conditional] $ /bin/sh -xe /var/folders/hp/f7yc_mwj2tq1hmbv_5n10v2c0000gn/T/hudson5963233933358491209.sh
+ echo 'Hello, bitwiseman!'
Hello, bitwiseman!
Finished: SUCCESS

And when I pass the value "silence":

Run condition [Strings match] enabling prebuild for step [Execute shell]
Strings match run condition: string 1=[silence], string 2=[greeting]
Run condition [Strings match] preventing perform for step [Execute shell]
Finished: SUCCESS

This is a simple example but the conditional step can contain any regular build step. When combined with other plugins, it can control whether to send notifications, gather data from other sources, wait for user feedback, or call other projects.

The Conditional BuildStep plugin does a great job of leveraging strengths of the Jenkins web UI, Freestyle jobs, and UI-based programming, but it is also hampered by their limitations. The Jenkins web UI can be clunky and confusing at times. Like the steps in any Freestyle job, these conditional steps are only stored and viewable in Jenkins. They are not versioned with other product or build code and can’t be code reviewed. Like any number of UI-based programming tools, it has to make trade-offs between clarity and flexibility: more options or clearer presentation. There’s only so much space on the screen.

Converting to Pipeline

Jenkins Pipeline, on the other hand, enables users to implement their pipeline as code. Pipeline code can be written directly in the Jenkins web UI or in any text editor. It is a full-featured programming language, which gives users access to a much broader set of conditional statements without the restrictions of UI-based programming.

So, taking the example above, the Pipeline equivalent is:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        choice(
            // choices are a string of newline separated values
            // https://issues.jenkins-ci.org/browse/JENKINS-41180
            choices: 'greeting\nsilence',
            description: '',
            name: 'REQUESTED_ACTION')
    }

    stages {
        stage ('Speak') {
            when {
                // Only say hello if a "greeting" is requested
                expression { params.REQUESTED_ACTION == 'greeting' }
            }
            steps {
                echo "Hello, bitwiseman!"
            }
        }
    }
}

When I run this project with REQUESTED_ACTION set to "greeting", here’s the output:

[Pipeline] node
Running on osx_mbp in /Users/bitwiseman/jenkins/agents/osx_mbp/workspace/pipeline-conditional
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Speak)
[Pipeline] echo
Hello, bitwiseman!
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

When I pass the value "silence", the only change is "Hello, bitwiseman!" is not printed.

Some might argue that the Pipeline code is a bit harder to understand on first reading. Others would say the UI is just as confusing if not more so. Either way, the Pipeline representation is considerably more compact than the Jenkins UI presentation. Pipeline also lets us add helpful comments, which we can’t do in the Freestyle UI. And we can easily put this Pipeline in a Jenkinsfile to be code-reviewed, checked-in, and versioned along with the rest of our code.

Conditions

The previous example showed the "Strings match" condition and its Pipeline equivalent. Let's look at a couple more interesting conditions and their Jenkins Pipeline equivalents.

Boolean condition

You might think that a boolean condition would be the simplest condition, but it isn't. Since it works with string values from tokens, the Conditional BuildStep plugin offers a number of ways to indicate true or false. Truth is a case-insensitive match of one of the following: 1 (the number one), Y, YES, T, TRUE, ON or RUN.

Pipeline can duplicate these, but depending on the scenario we might consider whether a simpler expression would suffice.

Jenkinsfile (Declarative Pipeline)
when {
    // case insensitive regular expression for truthy values
    expression { return token ==~ /(?i)(Y|YES|T|TRUE|ON|RUN)/ }
}
steps {
    /* step */
}
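
If the token's value is known to be normalized upstream, a simpler expression may suffice. Note that Groovy's toBoolean() only treats "true", "y", or "1" (ignoring case) as true, so this sketch covers just a subset of the plugin's truthy values:

Jenkinsfile (Declarative Pipeline)
when {
    // true only for "true", "y" or "1" (case-insensitive)
    expression { return token.toBoolean() }
}
steps {
    /* step */
}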

Logical "OR" of conditions

This condition wraps other conditions. It takes their results as inputs and performs a logical "or" of the results. The AND and NOT conditions do the same, performing their respective operations; their Pipeline equivalents are sketched after the example below.

Jenkinsfile (Declarative Pipeline)
when {
    // A or B
    expression { return A || B }
}
steps {
    /* step */
}
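
The AND and NOT equivalents follow the same pattern; a sketch:

Jenkinsfile (Declarative Pipeline)
when {
    // A and not B
    expression { return A && !B }
}
steps {
    /* step */
}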

Tokens

Tokens can be considerably more work than conditions. There are more of them, and they cover a much broader range of behaviors. The previous example showed one of the simpler cases, accessing a build parameter, where the token has a direct equivalent in Pipeline. However, many tokens don't have direct equivalents, some take parameters (adding to their complexity), and some provide information that is simply not exposed in Pipeline yet. So, determining how to migrate tokens needs to be done on a case-by-case basis.

Let’s look at a few examples.

"FILE" token

Expands to the contents of a file. The file path is relative to the build workspace root.

${FILE,path="PATH"}

This token maps directly to the readFile step. The only difference is that the file path for readFile is relative to the current working directory on the agent, but that is the workspace root by default. No problem.

Jenkinsfile (Declarative Pipeline)
when {
    expression { return readFile('pom.xml').contains('mycomponent') }
}
steps {
    /* step */
}

GIT_BRANCH

Expands to the name of the branch that was built.

Parameters (descriptions omitted): all, fullName.

This information may or may not be exposed in Pipeline. If you’re using the Pipeline Multibranch plugin, env.BRANCH_NAME will give similar basic information, but doesn’t offer the parameters. There are also several issues filed around GIT_* tokens in Pipeline. Until they are addressed fully, we can follow the pattern shown in pipeline-examples, executing a shell to get the information we need.

Pipeline
GIT_BRANCH = sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()

CHANGES_SINCE_LAST_SUCCESS

Displays the changes since the last successful build.

Parameters (descriptions omitted): reverse, format, changesFormat, showPaths, pathFormat, showDependencies, dateFormat, regex, replace, default.

Not only is the information provided by this token not exposed in Pipeline, the token has ten optional parameters, including format strings and regular expression searches. There are a number of ways we might get similar information in Pipeline. Each has its own particular limitations and ways it differs from the token output. Then we’d need to consider how each of the parameters changes the output. If nothing else, translating this token is clearly beyond the scope of this post.

Slightly More Complex Example

Let’s do one more example that shows some of these conditions and tokens. This time we’ll perform different build steps depending on what branch we’re building. We’ll take two build parameters: BRANCH_PATTERN and FORCE_FULL_BUILD. Based on BRANCH_PATTERN, we’ll checkout a repository. If we’re building on the master branch or the user checked FORCE_FULL_BUILD, we’ll call three other builds in parallel (full-build-linux, full-build-mac, and full-build-windows), wait for them to finish, and report the result. If we’re not building on the master branch and the user did not check FORCE_FULL_BUILD, we’ll print a message saying we skipped the full builds.

Freestyle

Here’s the configuration for the Freestyle version. (It’s pretty long. Feel free to skip down to the Pipeline version):

The Pipeline version of this job determines GIT_BRANCH by running a shell script that returns the current local branch name. This means that the Pipeline version must check out to a local branch (not a detached HEAD).

The Freestyle version of this job does not require a local branch; GIT_BRANCH is set automatically. However, to maintain functional parity, the Freestyle version of this job includes "Checkout to Specific Local Branch" as well.

Longer Freestyle Job

Pipeline

Here’s the equivalent Pipeline:

The Pipeline version of this job is not stored in source control.

In general, the Pipeline version of this job would be stored in source control, would checkout scm, and would run from that same repository. However, to maintain functional parity, the Pipeline version shown does a checkout from source control but is not stored in that repository.

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        string(defaultValue: '*', description: '',
            name: 'BRANCH_PATTERN')
        booleanParam(defaultValue: false, description: '',
            name: 'FORCE_FULL_BUILD')
    }

    stages {
        stage ('Prepare') {
            steps {
                checkout([$class: 'GitSCM',
                    branches: [[name: "origin/${BRANCH_PATTERN}"]],
                    doGenerateSubmoduleConfigurations: false,
                    extensions: [[$class: 'LocalBranch']],
                    submoduleCfg: [],
                    userRemoteConfigs: [[credentialsId: 'bitwiseman_github',
                        url: 'https://github.com/bitwiseman/hermann']]])
            }
        }

        stage ('Build') {
            when {
                expression {
                    GIT_BRANCH = 'origin/' + sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
                    return GIT_BRANCH == 'origin/master' || params.FORCE_FULL_BUILD
                }
            }
            steps {
                // Freestyle build trigger calls a list of jobs
                // Pipeline build() step only calls one job
                // To run all three jobs in parallel, we use the "parallel" step
                // https://jenkins.io/doc/pipeline/examples/#jobs-in-parallel
                parallel (
                    linux: {
                        build job: 'full-build-linux', parameters: [string(name: 'GIT_BRANCH_NAME', value: GIT_BRANCH)]
                    },
                    mac: {
                        build job: 'full-build-mac', parameters: [string(name: 'GIT_BRANCH_NAME', value: GIT_BRANCH)]
                    },
                    windows: {
                        build job: 'full-build-windows', parameters: [string(name: 'GIT_BRANCH_NAME', value: GIT_BRANCH)]
                    },
                    failFast: false
                )
            }
        }
        stage ('Build Skipped') {
            when {
                expression {
                    GIT_BRANCH = 'origin/' + sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
                    return !(GIT_BRANCH == 'origin/master' || params.FORCE_FULL_BUILD)
                }
            }
            steps {
                echo 'Skipped full build.'
            }
        }
    }
}

Conclusion

As I said before, the Conditional BuildStep plugin is great. It provides a clear, easy to understand way to add conditional logic to any Freestyle job. Before Pipeline, it was one of the few plugins to do this and it remains one of the most popular plugins. Now that we have Pipeline, we can implement conditional logic directly in code.

This blog post discussed how to approach converting conditional build steps to Pipeline and showed a couple of concrete examples. Overall, I’m pleased with the results so far. I found scenarios which could not easily be migrated to Pipeline, but even those are only more difficult, rather than impossible.

The next thing to do is add a section to the Jenkins Handbook documenting the Pipeline equivalent of all of the Conditions and the most commonly used Tokens. Look for it soon!

Blue Ocean Dev Log: January Week #3


As we get closer to Blue Ocean 1.0, which is planned for the end of March, I have started highlighting some of the good stuff that has been going on, and this week was a very busy week.

A new Blue Ocean beta (b18) was released with:

  • Parametrized pipelines are now supported!

  • i18n improvements

  • Better support for matrix and the evil (yet somehow still used) Maven project type (don’t use it!)

  • SSE fixes for IE and Edge browsers

  • An alpha release of the Visual Editor for Jenkinsfiles on top of Declarative Pipeline has snuck into the "experimental" update center. Andrew will be talking about Declarative Pipelines at FOSDEM next week.

Parametrized Pipelines

You would know this if you followed Thorsten’s Twitter account.

Starting with parameters from @thorscherler

That twitter account is mostly pics of Thorsten in running gear, but occasionally he announces new features as they land.

When you run a Pipeline that requires parameters, it will pop up a dialog like this no matter where you run it from. Most input types are supported (similar to input), with a planned extension point for custom input types.
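
For example, running a Pipeline whose parameters block looks like this minimal sketch will bring up the dialog; the parameter names, defaults, and descriptions here are illustrative only:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    parameters {
        // Hypothetical parameters, purely to trigger the Blue Ocean dialog
        string(name: 'TARGET_ENV', defaultValue: 'staging',
            description: 'Environment to deploy to')
        booleanParam(name: 'DRY_RUN', defaultValue: true,
            description: 'Skip the actual deployment')
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying to ${params.TARGET_ENV} (dry run: ${params.DRY_RUN})"
            }
        }
    }
}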

Editor

Pipeline Editor plugin

A very, very early version of the Blue Ocean Pipeline Editor plugin, which will set your hair on fire, is in the experimental update center.

Declarative Pipeline is still not at version 1.0 status, but will be shortly. This editor allows you to round-trip Jenkinsfiles written in this way, so they can be edited as text, or visually. The steps available are discovered from the installed plugins. One to watch.

So, what’s next?

  • Creation of Git Pipelines, and likely GitHub too.

  • Show parallel branches that aren’t in a stage visually

  • Show stderr/out in test reports

  • Show more information when Jenkins is "busy", such as when agents are coming online, in the Pipeline view

Enjoy!


If you’re interested in helping to make Blue Ocean a great user experience for Jenkins, please join the Blue Ocean development team on Gitter!

Blue Ocean Dev Log: January Week #4


As we get closer to Blue Ocean 1.0, which is planned for the end of March, I have started highlighting some of the good stuff that has been going on. This week was 10 steps forward, and about 1.5 backwards…​

There were two releases this week, b19 and b20. Unfortunately, b20 had to be released shortly after b19 hit the Update Center as an incompatible API change in a 3rd party plugin was discovered.

Regardless, the latest b20 has a lot of important improvements, and some very nice new features.

Creating a Pipeline in Blue Ocean

  • A first cut of the "Create Pipeline" UX, seen above, allowing you to create Git based Multibranch Pipelines like you have never seen before.

  • Handling network disconnections from the browser to the server (e.g. server restart, network outage) gracefully, with a nice UI.

  • More precise time information for steps and running Pipelines.

  • More information when a Pipeline is blocked on infrastructure, such as when the Pipeline is waiting for an agent to become available.

  • Fixed a really embarrassing typo (a prize if you spot it).

  • Test reports now include stdout and stderr

  • Better support for parallel visualisation, such as when a parallel step exists outside of a stage.

  • The Visual Editor also had another release, with the "sheets" visual component and better validation.

Creation

Currently this is hidden behind a feature toggle; to access it, append ?blueCreate to the URL in your browser, and then press the "New Pipeline" button. It lets you quickly create a Pipeline from Git, add credentials, etc., in a very nice UX. More SCM types are being added to support this.

Reconnect/disconnect

Lost connection

As Blue Ocean is a very "live" style of UX, if your network becomes unavailable, or the server is restarted, it is good to know in case you were staring at the screen waiting for something to happen (don’t you have anything better to do??). When this happens, now you get a polite message, and then when the connection is restored, even if you are waiting for a Pipeline run to finish, it will then notice this, and refresh things for you:

Reconnected

Note the opacity changes to make it clear even if you don’t see the little message. A very nice addition for those of us who work on a train far too often.

Up next

What is up next:

  • SCM Api changes should land, making things much better for users of GitHub, Bitbucket, and many more.

  • Creating Pipelines from GitHub (including automatic discovery).

  • Lots of fixes and enhancements in the Pipeline from all over the place

  • More ATH [1] coverage against regressions

  • More Visual Editor releases as Declarative Pipeline reaches version 1.0

  • Improvements to i18n

There were also a couple of "alternative beta" releases in the "Experimental Update Center" to help test the new SCM API improvements for better use of the GitHub APIs (based on this branch). I do not recommend trying this branch unless you know what you are doing, as it will migrate some data, but help testing it would be appreciated!

Enjoy!


If you’re interested in helping to make Blue Ocean a great user experience for Jenkins, please join the Blue Ocean development team on Gitter!


1. Acceptance Test Harness

Best Practices for Scalable Pipeline Code


This is a guest post by Sam Van Oort, Software Engineer at CloudBees and contributor to the Jenkins project.

Today I’m going to show you best practices to write scalable and robust Jenkins Pipelines. This is drawn from a combination of work with the internals of Pipeline and observations with large-scale users.

Pipeline code works beautifully for its intended role of automating build/test/deploy/administer tasks. As it is pressed into more complex roles and unanticipated uses, some users hit issues. In these cases, applying the best practices can make the difference between:

  • A single master running hundreds of concurrent builds on low-end hardware (4 CPU cores and 4 GB of heap)

  • Running a couple dozen builds and bringing a master to its knees or crashing it…​even with 16+ CPU cores and 20+ GB of heap!

This has been seen in the wild.

Fundamentals

To understand Pipeline behavior you must understand a few points about how it executes.

  1. Except for the steps themselves, all of the Pipeline logic (the Groovy conditionals, loops, etc.) executes on the master. Whether simple or complex! Even inside a node block!

  2. Steps may use executors to do work where appropriate, but each step has a small on-master overhead too.

  3. Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS).

  4. This transformation provides valuable safety and durability guarantees for Pipelines, but it comes with trade-offs:

    1. Steps can invoke Java and execute fast and efficiently, but CPS-transformed Groovy runs much more slowly than normal Groovy.

    2. Groovy logic requires far more memory, because an object-based syntax/block tree is kept in memory.

  5. Pipelines persist the program and its state frequently to be able to survive failure of the master.

From these we arrive at a set of best practices to make pipelines more effective.

Best Practices For Pipeline Code

  1. Think of Pipeline code as glue: just enough Groovy code to connect together the Pipeline steps and integrate tools, and no more.

    1. This makes code easier to maintain, more robust against bugs, and reduces load on masters.

  2. Keep it simple: limit the amount of complex logic embedded in the Pipeline itself (similarly to a shell script) and avoid treating it as a general-purpose programming language.

    1. Pipeline restricts all variables to Serializable types, so keeping Pipeline logic simple helps avoid a NotSerializableException - see appendix at the bottom.

  3. Use trusted global libraries or @NonCPS-annotated functions for slightly more complex work. This means more involved processing, logic, and transformations. This lets you leverage additional Groovy & functional features for more powerful, concise, and performant code. (A sketch illustrating this and the next item follows this list.)

    1. This still runs on masters so be mindful of complexity, but is much faster than native Pipeline code because it doesn’t provide durability and uses a faster execution model. Still, be mindful of the CPU cost and offload to executors for complex work (see below).

    2. @NonCPS functions can use a much broader subset of the Groovy language, such as iterators and functional features, which makes them more terse and fast to write.

    3. @NonCPS functions should not use Pipeline steps internally; however, you can store the result of a Pipeline step to a variable and use it as the input to a @NonCPS function.

      1. Gotcha: It’s not guaranteed that use of a step will generate an error (there is an open RFE to implement that), but you should not rely on that behavior. You may see improper handling of exceptions, in particular.

    4. While normal Pipeline is restricted to serializable local variables (see appendix at bottom), @NonCPS functions can use more complex, nonserializable types internally (for example regex matchers, etc). Parameters and return types should still be Serializable, however.

      1. Gotcha: improper usages are not guaranteed to raise an error with normal Pipeline (optimizations may mask the issue), but it is unsafe to rely on this behavior.

  4. Prefer external scripts/tools for complex or CPU-expensive processing rather than Groovy language features. This offloads work from the master to external executors, allowing for easy scale-out of hardware resources. It is also generally easier to test because these components can be tested in isolation without the full on-master execution environment.

    1. Many software vendors will provide easy command-line clients for their tools in various programming languages. These are often robust, performant, and easy to use. Plugins offer another option (see below).

    2. Shell or batch steps are often the easiest way to integrate these tools, which can be written in any language. For example: sh "java -jar client.jar $endPointUrl $inputData" for a Java client, or sh "python jiraClient.py $issueId $someParam" for a Python client.

    3. Gotcha: especially avoid Pipeline XML or JSON parsing using Groovy’s XmlSlurper and JsonSlurper! Strongly prefer command-line tools or scripts.

      1. The Groovy implementations are complex and as a result more brittle in Pipeline use.

      2. XmlSlurper and JsonSlurper can carry a high memory and CPU cost in pipelines

      3. xmllint and xmlstarlet are command-line tools offering XML extraction via XPath

      4. jq offers the same functionality for JSON

      5. These extraction tools may be coupled to curl or wget for fetching information from an HTTP API

    4. Examples of other places to use command-line tools:

      1. Templating large files

      2. Nontrivial integration with external APIs (for bigger vendors, consider a Jenkins plugin if a quality offering exists)

      3. Simulations/complex calculations

      4. Business logic

  5. Consider existing plugins for external integrations. Jenkins has a wealth of plugins, especially for source control, artifact management, deployment systems, and systems automation. These can greatly reduce the amount of Pipeline code to maintain. Well-written plugins may be faster and more robust than Pipeline equivalents.

    1. Consider both plugins and command-line clients (above) — one may be easier than the other.

    2. Plugins may be of widely varying quality. Look at the number of installations and how frequently and recently updates appear in the changelog. Poorly-maintained plugins with limited installations may actually be worse than writing a little custom Pipeline code.

    3. As a last resort, if there is a good-quality plugin that is not Pipeline-enabled, it is fairly easy to write a Pipeline wrapper to integrate it or write a custom step that will invoke it.

  6. Assume things will go wrong: don’t rely on workspaces being clean of the remnants from previous executions, clean explicitly where needed. Make use of timeouts and retry steps (that’s what they’re there for).

    1. Within a git repository, git clean -fdx is a good way to accomplish this and reduces the amount of SCM cloning

  7. DO use parameterized Pipelines and variables to make your Pipeline scripts more reusable. Passing in parameters is especially helpful for handling different environments and should be preferred to applying conditional lookup logic; however, try to limit parameterized pipelines invoking each other.

  8. Try to limit business logic embedded in Pipelines. To some extent this is inevitable, but try to focus on tasks to complete instead, because this yields more maintainable, reusable, and often more performant Pipeline code.

    1. One code smell that points to a problem is many hard-coded constants. Consider taking advantage of the options above to refactor code for better composability.

    2. For complex cases, consider using Jenkins integration options (plugins, Jenkins API calls, invoking input steps externally) to offload implementation of more complex business rules to an external system if they fit more naturally there.
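
As a concrete, minimal sketch of items 3 and 4 above, here is a Scripted Pipeline that feeds the result of a step into a @NonCPS helper and offloads JSON extraction to the command-line tool jq; the git commands, file name, and jq filter are illustrative only, not a prescribed implementation:

Jenkinsfile (Scripted Pipeline)
// Summarizing text with Groovy iterators and closures is safe here
// because the function is @NonCPS, but it must not call Pipeline steps.
@NonCPS
def summarizeBranches(String gitOutput) {
    return gitOutput.readLines()
        .collect { it.trim() }
        .findAll { it && !it.startsWith('*') }
        .join(', ')
}

node {
    // Run the step first, then pass its Serializable String result in.
    def branchText = sh(returnStdout: true, script: 'git branch --list')
    echo "Other branches: ${summarizeBranches(branchText)}"

    // Offload JSON extraction to jq rather than Groovy's JsonSlurper.
    // 'build-info.json' and the '.version' filter are hypothetical.
    def version = sh(returnStdout: true,
        script: "jq -r '.version' build-info.json").trim()
    echo "Building version ${version}"
}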

Please, think of these as guidelines, not strict rules – Jenkins Pipeline provides a great deal of power and flexibility, and it’s there to be used.

Breaking enough of these rules at scale can cause masters to fail by placing an unsustainable load on them.

For additional guidance, I also recommend this Jenkins World talk on how to engineer Pipelines for speed and performance.

Appendix: Serializable vs. Non-Serializable Types

To assist with Pipeline development, here are common serializable and non-serializable types, to help you decide whether your logic can run as normal CPS-transformed Pipeline code or should go in a @NonCPS function to avoid issues.

Common Serializable Types (safe everywhere):

  1. All primitive types and their object wrappers: byte, boolean, int, double, short, char

  2. Strings

  3. enums

  4. Arrays of serializable types

  5. ArrayLists and normal Groovy Lists

  6. Sets: HashSet

  7. Maps: normal Groovy Map, HashMap, TreeMap

  8. Exceptions

  9. URLs

  10. Dates

  11. Regex Patterns (compiled patterns)

Common non-Serializable Types (only safe in @NonCPS functions):

  1. Iterators: this is a common problem. You need to use a C-style loop instead, i.e. for (int i = 0; i < max; i++) { ... } (see the sketch after this list)

  2. Regex Matchers (you can use the built-in functions in String, etc, just not the Matcher itself)

  3. Important: JsonObject, JsonSlurper, etc. in Groovy 2+ (used in some 2.x+ versions of Jenkins).

    1. This is due to an internal implementation change — earlier versions may serialize.
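
To make the iterator gotcha concrete, here is a minimal sketch (the list contents are illustrative):

Pipeline
def items = ['a', 'b', 'c']

// Risky in CPS-transformed Pipeline code: "for (item in items) { ... }"
// keeps a non-serializable Iterator as loop state, which can fail with
// a NotSerializableException if the Pipeline persists mid-loop.

// Safe: a C-style loop keeps only a serializable int as its state.
for (int i = 0; i < items.size(); i++) {
    echo items[i]
}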


Security updates for Jenkins core


We just released security updates to Jenkins, versions 2.44 and 2.32.2, that fix a high severity and several medium and low severity issues.

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes on upgrading Jenkins LTS, see our LTS upgrade guide. I strongly recommend you read these documents, as there are a few possible side effects of these fixes.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

Blue Ocean Dev Log: February Week #1


With only a couple of months left before Blue Ocean 1.0, which is planned for the end of March, I have been highlighting some of the good work being finished up by the developers hacking on Blue Ocean.

This week was a grab bag of important behind-the-scenes features and finalising the preview of the editor. The merge of the SCM API changes also made it in. The editor has the new sheets style of editing (there will be blogs and more on this in the next few weeks):

Blue Ocean Editor

Some highlights:

  • Fix to async loading of resources like translations, so screens don’t "flash" when they are loaded (i18n improvement)

  • Links in notifications can be configured to point to classic or Blue Ocean screens

  • Time reporting works better when browser clock is out of sync with server

  • SECURITY-380 was backported into a small fix for those that aren’t running the latest LTS (but you should ideally be running it)

  • SCM API changes finally landed - this will be in beta 22 which should hit the update centers soon. This should make things work better with GitHub rate limits.

  • Beta 21 was released

  • The editor reached "preview" release state ready for use with the newly announced Declarative Pipeline stuff.

Serenity

Also, a reference to Australian pop culture had to be removed, sadly.

Up Next:

  • Some cosmetic changes around headers to make it much nicer and clearer

  • Favorite improvements

  • GitHub Org-based Pipeline creation

  • Editor available in the general update center

  • Beta 22 with SCM improvements and no more GitHub rate limit hassles

  • Many fixes

  • Improvements to the Acceptance Test Harness to reduce the number of false-positives.

Enjoy!

Declarative Pipeline Syntax 1.0 is now available


This is a guest post by Patrick Wolf, Director of Product Management at CloudBees and contributor to the Jenkins project.

I am very excited to announce the addition of Declarative Pipeline syntax 1.0 to Jenkins Pipeline. We think this new syntax will enable everyone involved in DevOps, regardless of expertise, to participate in the continuous delivery process. Whether creating, editing or reviewing a pipeline, having a straightforward structure helps to understand and predict the flow of the pipeline and provides a common foundation across all pipelines.

Pipeline as Code

Pipeline as Code was one of the pillars of the Jenkins 2.0 release and an essential part of implementing continuous delivery (CD). Defining all of the stages of an application’s CD pipeline within a Jenkinsfile and checking it into the repository with the application code provides all of the benefits inherent in source control management (SCM):

  • Retain history of all changes to Pipeline

  • Rollback to a previous Pipeline version

  • View diffs and merge changes to the Pipeline

  • Test new Pipeline steps in branches

  • Run the same Pipeline on a different Jenkins server

Getting Started with Declarative Pipeline

We recommend people begin using it for all their Pipeline definitions in Jenkins. The plugin has been available for use and testing starting with the 0.1 release that debuted at Jenkins World in September. Since then, it has already been installed in over 5,000 Jenkins environments.

If you haven’t tried Pipeline or have considered Pipeline in the past, I believe this new syntax is much more approachable with an easier adoption curve to quickly realize all of the benefits of Pipeline as Code. In addition, the pre-defined structure of Declarative makes it possible to create and edit Pipelines with a graphical user interface (GUI). The Blue Ocean team is actively working on a Visual Pipeline Editor which will be included in an upcoming release.

If you have already begun using Pipelines in Jenkins, I believe that this new alternative syntax can help expand that usage.

The original syntax for defining Pipelines in Jenkins is a Groovy DSL that allows most of the features of full imperative programming.

This syntax is still fully supported and is now referred to as "Scripted Pipeline Syntax" to distinguish it from "Declarative Pipeline Syntax." Both use the same underlying execution engine in Jenkins and both will generate the same results in Pipeline Stage View or Blue Ocean visualizations. All existing Pipeline steps, Global Variables, and Shared Libraries can be used in either. You can now create more cookie-cutter Pipelines and extend the power of Pipeline to all users regardless of Groovy expertise.

Declarative Pipeline Features

  • Syntax Checking

    • Immediate runtime syntax checking with explicit error messages.

    • API endpoint for linting a Jenkinsfile.

    • CLI command to lint a Jenkinsfile.

  • Docker Pipeline integration

    • Run all stages in a single container.

    • Run each stage in a different container.

  • Easy configuration

    • Quickly define parameters for your Pipeline.

    • Quickly define environment variables and credentials for your Pipeline.

    • Quickly define options (such as timeout, retry, build discarding) for your Pipeline.

    • Round trip editing with the Visual Pipeline Editor (watch for preview release shortly).

  • Conditional actions

    • Send notifications or take actions depending upon success or failure.

    • Skip stages based on branch, environment, or other Boolean expressions (see the sketch after this list).
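
Here is a minimal sketch touching several of these features; the Docker image, shell commands, and branch name are illustrative only:

Jenkinsfile (Declarative Pipeline)
pipeline {
    agent { docker 'maven:3-alpine' }   // run all stages in one container
    options { timeout(time: 30, unit: 'MINUTES') }
    environment { DEPLOY_ENV = 'staging' }
    stages {
        stage('Build') {
            steps { sh 'mvn -B verify' }
        }
        stage('Deploy') {
            // Conditional action: skip this stage on other branches
            when { branch 'master' }
            steps { sh "./deploy.sh ${DEPLOY_ENV}" }
        }
    }
    post {
        failure { echo 'Send a notification here on failure.' }
    }
}

For syntax checking outside a running build, a Jenkinsfile like this can also be linted through the CLI’s declarative-linter command or the linting API endpoint mentioned above.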

Where Can I Learn More?

Be on the lookout for future blog posts detailing specific examples of scenarios or features in Declarative Pipeline. Andrew Bayer, one of the primary developers behind Declarative Pipeline, will be presenting at FOSDEM in Brussels, Belgium this weekend. We have also scheduled an online Jenkins Meetup (JAM) later this month to demo the features of Declarative Pipeline and give a sneak peek at the upcoming Blue Ocean Pipeline Editor.

In the meantime, all the Pipeline documentation has been updated to incorporate a Guided Tour and a Syntax Reference with numerous examples to help you get on your way to using Pipeline. Simply upgrade to the latest version (2.5 or later) of Pipeline in Jenkins to enable all of these great features.

SCM API 2.0 Release Take 2


In January we announced the release of SCM API 2.0. After the original release was published we identified four new high-impact issues. We decided to remove the new versions of the plugins from the update center until those issues could be resolved. The issues have now been resolved and the plugins are now available from the update center.

Summary for busy Jenkins Administrators

Upgrading should make multi-branch projects much better. When you are ready to upgrade you must ensure that you upgrade all the required plugins. If you miss some, just upgrade them and restart to fix the issue. And of course, it’s always a good idea to take a backup of your JENKINS_HOME before upgrading any plugins.

In the list below, version numbers in bold indicate a change from the version given in the original announcement.

Folders Plugin

5.17 or newer

SCM API Plugin

2.0.2 or newer

Branch API Plugin

2.0.2 or newer

Git Plugin

This depends on the exact release line of the Git plugin that you are using.

  • Following the 2.6.x release line: 2.6.4 or newer

  • Following the 3.0.x release line (recommended): 3.0.4 or newer

Mercurial Plugin

1.58 or newer

GitHub Branch Source Plugin

2.0.1 or newer

BitBucket Branch Source Plugin

2.0.2 or newer

GitHub Organization Folders Plugin

1.6

Pipeline Multibranch Plugin

2.12 or newer

If you are using the Blue Ocean plugin

Blue Ocean Plugin

1.0.0-b22 or newer

Other plugins that may require updating:

GitHub API Plugin

1.84 or newer

GitHub Plugin

1.25.0 or newer

If you upgrade to Branch API 2.0.x and you have either the GitHub Branch Source or the BitBucket Branch Source plugin, and you do not upgrade those plugins to the 2.0.x line, then your Jenkins instance will fail to start up correctly.

The solution is just to upgrade the GitHub Branch Source or the BitBucket Branch Source plugin (as appropriate) to the 2.0.x line.

After an upgrade you will see the data migration warning (see the screenshot in JENKINS-41608 for an example); this is normal and expected. The unreadable data will be removed by the next scan / index, or can be removed manually using the Discard Unreadable Data button. The warning will disappear on the next restart after the unreadable data has been removed.

Please update to the versions listed above. If you want to know more about the issues and how they were resolved, see the next section.


Analysis of the issues

The issues described below are resolved with these plugin releases:

  • Folders Plugin: 5.17

  • SCM API Plugin: 2.0.2

  • Branch API Plugin: 2.0.2

  • Git Plugin: Either 2.6.4 or 3.0.4

  • GitHub Branch Source Plugin: 2.0.1

  • BitBucket Branch Source Plugin: 2.0.2

  • Pipeline Multibranch Plugin: 2.12

Migration of GitHub branches from 1.x to 2.x resulted in a change of the implementation class used to identify branches. Some other bugs in Branch API had been fixed, and the combined effect resulted in a rebuild of all GitHub branches (not PRs) after an upgrade to GitHub Branch Source Plugin 2.0.0. This rebuild was referred to as a "build storm".

Resolution:

  • The SCM API plugin was enhanced to add an extension point that allows for a second round of data migration when upgrading.

  • The second round of data migration allows plugins implementing the SCM API contract to fix implementation class issues in context.

  • The Branch API plugin was enhanced to use this new extension point.

  • The GitHub Branch Source plugin was enhanced to provide an implementation of this extension point.

The GitHub Branch Source and BitBucket Branch Source plugins in 1.x were not assigning consistent IDs to multi-branch projects discovered in an Organization Folder. Both plugins were fixed in 2.0.0 to assign consistent IDs, as a change of ID would result in a rebuild of all projects. What was missed is that the very first scan of an Organization Folder after an upgrade will change the random IDs assigned by the 1.x plugins into the consistent IDs assigned by the 2.0.0 plugins, and consequently trigger a rebuild of all branches. This rebuild was referred to as a "build storm".

Resolution:

The Branch API plugin was enhanced to detect the case where a branch source has been changed but the change is only changing the ID. When such changes are identified, the downstream references of the ID are all updated which will prevent a build storm.

The BitBucket Branch Source 1.x did not store all the information about PRs that is required by the SCM API 2.0.x model. This could well have resulted in subtle effects when manually triggering a rebuild of a merge PR if the PR’s target branch has been modified after the PR branch was first detected by Jenkins. Consequently, as the information is required, BitBucket Branch Source plugin 2.0.0 populated the information with dummy values which would force the correct information to be retrieved. The side-effect is that all PR branches would be rebuilt.

Resolution:

  • The changes in SCM API 2.0.2 introduced to resolve JENKINS-41121 provided a path to resolve this issue without causing a rebuild of all PR branches.

  • The BitBucket Branch Source plugin was enhanced to provide an implementation of the new SCM API extension point that connects to BitBucket and retrieves the missing information.

During initial testing of the Branch API 2.0.0 release an issue was identified with how Organization Folders handled unusual names. None of the existing implementations of the SCMNavigator API could generate such unusual names, due to form validation on GitHub / BitBucket replacing unusual characters with "-" when creating a repository.

It would be irresponsible to rely on external services sanitizing their input data for the correct operation of Organization Folders. Consequently, in Branch API 2.0.0 the names were all transformed into URL safe names, with the original URLs still resolving to the original projects so that any existing saved links would remain functional.

Quite a number of people objected to this change of URL scheme.

Resolution:

  • There has been a convention in Jenkins that the on-disk storage structure for jobs mirrors the URL structure. This is only a convention and there is nothing specific in the code that mandates following the convention.

  • The Folders Plugin was enhanced to allow for computed folders (where the item names are provided by an external source) to provide a strategy to use when generating the on-disk storage names as well as the URL component names for the folder’s child items.

  • The Branch API plugin was enhanced to use this new strategy for name transformation.

  • The net effect of this change is that the URLs remain the same as for 1.x but the on-disk storage uses transformed names that are future proofed against any new SCMNavigator implementations where the backing service allows names that are problematic to use as filesystem directory names.

Side-effect:

  • The Branch API 2.0.0 approach handled the transformation of names by renaming the items using the Jenkins Item rename API.

  • The Branch API 2.0.2 approach does not rename the child items as it is only the on-disk storage location that is moved.

This means that the Jenkins Item rename API cannot be used.

At this time, the only known side-effect is in the Job Configuration History plugin. The configuration history of each child item will still be tracked going forward after the upgrade. The pre-upgrade configuration history is also retained. Because the Jenkins Item rename API cannot be used to flag the configuration file location change, there is no association between the pre-upgrade history chain and the post-upgrade history chain.

Google Summer Of Code 2017: Call for mentors


On behalf of the GSoC Org Admin team I am happy to announce that we are going to apply to Google Summer of Code (GSoC) again this year. In GSoC, high-profile students work on open-source projects for several months under the mentorship of organization members.

We are looking for mentors and project ideas. So yes, we are looking for you :)

Conditions

As a mentor, you will be asked to:

  • lead a project in the area of your interest

  • actively participate in the project during student selection, community bonding and coding phases (March - August)

  • work in teams of 2+ mentors per student

  • dedicate a consistent and significant amount of time, especially during the coding phase (~5 hours per week in the team of two mentors)

Mentorship does not require strong expertise in Jenkins plugin development. The main objective is to guide students and to get them involved in the Jenkins community. If your mentor team requires any specific expertise, the GSoC org admins will do their best to find advisors.

What do you get?

  • A student who works full-time within the area of your interest for several months

  • Joint projects with Jenkins experts, lots of fun and ability to study something together

  • Limited-edition swag from Google and the Jenkins project

  • Maybe: Participation in GSoC Mentor Summit in California with expense coverage (depends on project results and per-project quotas)

Requirements

You are:

  • passionate about Jenkins

  • interested in being a mentor or advisor

  • ready to dedicate time && have no major unavailability periods planned for this summer

    • We expect mentors to be available by email during 75% of working days in the May-August timeframe

Your project idea is:

  • about code (though it may and likely should include some documentation and testing work)

  • about Jenkins (plugins, core, infrastructure, etc.)

  • potentially doable by a student in 3-4 months

How to apply

If you are interested, drop an email to the Jenkins developer mailing list with the GSoC2017 prefix.

  • Briefly describe your project idea (a couple of sentences) and required qualifications from students. Examples: GSoC2016, GSoC2017 - current project ideas

  • If you already have a co-mentor(s), please mention them

  • Having several project ideas is fine. Having no specific ideas is also fine.

Disclaimer: We cannot guarantee that all projects happen, it depends on student application results and the number of project slots.
