
Sending Notifications in Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Rather than sitting and watching Jenkins for job status, I want Jenkins to send notifications when events occur. There are Jenkins plugins for Slack, HipChat, or even email, among others.

Note: Something is happening!

I think we can all agree getting notified when events occur is preferable to having to constantly monitor them just in case. I’m going to continue from where I left off in my previous post with the hermann project. I added a Jenkins Pipeline with an HTML publisher for code coverage. This week, I’d like to make Jenkins notify me when builds start and when they succeed or fail.

Setup and Configuration

First, I select targets for my notifications. For this blog post, I’ll use sample targets that I control. I’ve created Slack and HipChat organizations called "bitwiseman", each with one member - me. And for email, I’m running a Ruby SMTP server called mailcatcher, which is perfect for local testing such as this. Aside from these concessions, configuration would be much the same in a non-demo situation.

Next, I install and add server-wide configuration for the Slack, HipChat, and Email-ext plugins. Slack and HipChat use API tokens - both products have integration points on their side that generate tokens which I copy into my Jenkins configuration. Mailcatcher SMTP runs locally. I just point Jenkins at it.

Here’s what the Jenkins configuration section for each of these looks like:

Slack Configuration
HipChat Configuration
Email Configuration

Original Pipeline

Now I can start adding notification steps. The same as last week, I’ll use the Jenkins Pipeline Snippet Generator to explore the step syntax for the notification plugins.

Here’s the base pipeline before I start making changes:

stage 'Build'

node {
  // Checkout
  checkout scm

  // install required bundles
  sh 'bundle install'

  // build and run tests with coverage
  sh 'bundle exec rake build spec'

  // Archive the built artifacts
  archive (includes: 'pkg/*.gem')

  // publish html
  // snippet generator doesn't include "target:"
  // https://issues.jenkins-ci.org/browse/JENKINS-29711
  publishHTML (target: [
      allowMissing: false,
      alwaysLinkToLastBuild: false,
      keepAll: true,
      reportDir: 'coverage',
      reportFiles: 'index.html',
      reportName: "RCov Report"
    ])
}

This pipeline expects to be run from a Jenkinsfile in SCM. To copy and paste it directly into a Jenkins Pipeline job, replace the checkout scm step with git 'https://github.com/reiseburo/hermann.git'.

Job Started Notification

For the first change, I decide to add a "Job Started" notification. Using the snippet generator and then reformatting the output makes this straightforward:

node {

  notifyStarted()

  /* ... existing build steps ... */
}

def notifyStarted() {
  // send to Slack
  slackSend (color: '#FFFF00', message: "STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")

  // send to HipChat
  hipchatSend (color: 'YELLOW', notify: true,
      message: "STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
    )

  // send to email
  emailext (
      subject: "STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
      body: """<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
        <p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&QUOT;</p>""",
      recipientProviders: [[$class: 'DevelopersRecipientProvider']]
    )
}

Since Pipeline is a Groovy-based DSL, I can use string interpolation and variables to add exactly the details I want in my notification messages. When I run this I get the following notifications:

Started Notifications
Started Email Notification

Job Successful Notification

The next logical choice is to get notifications when a job succeeds. I’ll copy and paste from the notifyStarted method for now and do some refactoring later.

node {

  notifyStarted()

  /* ... existing build steps ... */

  notifySuccessful()
}

def notifyStarted() { /* .. */ }

def notifySuccessful() {
  slackSend (color: '#00FF00', message: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")

  hipchatSend (color: 'GREEN', notify: true,
      message: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
    )

  emailext (
      subject: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
      body: """<p>SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
        <p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&QUOT;</p>""",
      recipientProviders: [[$class: 'DevelopersRecipientProvider']]
    )
}

Again, I get notifications as expected. This build is fast enough that some of them are even on screen at the same time:

Multiple Notifications

Job Failed Notification

Next I want to add failure notification. Here’s where we really start to see the power and expressiveness of Jenkins Pipeline. A Pipeline is a Groovy script, so as we’d expect in any Groovy script, we can handle errors using try-catch blocks.

node {
  try {
    notifyStarted()

    /* ... existing build steps ... */

    notifySuccessful()
  } catch (e) {
    currentBuild.result = "FAILED"
    notifyFailed()
    throw e
  }
}

def notifyStarted() { /* .. */ }

def notifySuccessful() { /* .. */ }

def notifyFailed() {
  slackSend (color: '#FF0000', message: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")

  hipchatSend (color: 'RED', notify: true,
      message: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
    )

  emailext (
      subject: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
      body: """<p>FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
        <p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&QUOT;</p>""",
      recipientProviders: [[$class: 'DevelopersRecipientProvider']]
    )
}
Failed Notifications

Code Cleanup

Lastly, now that I have it all working, I’ll do some refactoring. I’ll unify all the notifications in one method and move the final success/failure notification into a finally block.

stage 'Build'

node {
  try {
    notifyBuild('STARTED')

    /* ... existing build steps ... */

  } catch (e) {
    // If there was an exception thrown, the build failed
    currentBuild.result = "FAILED"
    throw e
  } finally {
    // Success or failure, always send notifications
    notifyBuild(currentBuild.result)
  }
}

def notifyBuild(String buildStatus = 'STARTED') {
  // build status of null means successful
  buildStatus = buildStatus ?: 'SUCCESSFUL'

  // Default values
  def colorName = 'RED'
  def colorCode = '#FF0000'
  def subject = "${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"
  def summary = "${subject} (${env.BUILD_URL})"
  def details = """<p>${buildStatus}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
    <p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME} [${env.BUILD_NUMBER}]</a>&QUOT;</p>"""

  // Override default values based on build status
  if (buildStatus == 'STARTED') {
    colorName = 'YELLOW'
    colorCode = '#FFFF00'
  } else if (buildStatus == 'SUCCESSFUL') {
    colorName = 'GREEN'
    colorCode = '#00FF00'
  } else {
    colorName = 'RED'
    colorCode = '#FF0000'
  }

  // Send notifications
  slackSend (color: colorCode, message: summary)

  hipchatSend (color: colorName, notify: true, message: summary)

  emailext (
      subject: subject,
      body: details,
      recipientProviders: [[$class: 'DevelopersRecipientProvider']]
    )
}

You have been notified!

I now get notified twice per build on three different channels. I’m not sure I need to get notified this much for such a short build. However, for a longer or complex CD pipeline, I might want exactly that. If needed, I could even improve this to handle other status strings and call it as needed throughout my pipeline.
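
For example, a longer pipeline could reuse the same helper at intermediate points. The sketch below is a hypothetical extension, not part of the pipeline above; handling 'UNSTABLE' would require adding one more branch to notifyBuild():

node {
  notifyBuild('STARTED')

  /* ... build and test steps ... */

  // Hypothetical: flag an unstable test run before moving on to deployment
  if (currentBuild.result == 'UNSTABLE') {
    notifyBuild('UNSTABLE')
  }

  /* ... deploy steps ... */
}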

Final View of Notifications

Blue Ocean July development update


The team has been hard at work moving the needle forward on the Blue Ocean 1.0 features. Many of the features we have been working on have come a long way in the past few months, but here are a few highlights:

Goodbye page refreshes, Hello Real Time updates!

Building upon Tom's great work on Server Sent Events (SSE), both Cliff and Tom worked on making all the screens in Blue Ocean update without manual refreshes.

SSE is a great technology choice for new web apps as it only pushes out events to the client when things have changed on the server. That means there’s a lot less traffic going between your browser and the Jenkins server when compared to the continuous AJAX polling method that has been typical of Jenkins in the past.

New Test Reporting UI

Keith has been working with Vivek to drive out a new set of extension points that allow us to build a new test reporting UI in Blue Ocean. Today this works for JUnit test reports but can be easily extended to work with other kinds of reports.

Pipeline logs are split into steps and update live

Thorsten and Josh have been hard at work breaking down the log into steps and making the live log tailing follow the pipeline execution - which we’ve lovingly nicknamed “karaoke mode”.

Pipelines can be triggered from the UI

Tom has been working on allowing users to trigger jobs from Blue Ocean, which is one less reason to go back to the Classic UI :)

Blue Ocean has been released to the experimental update center

Many of you have asked us questions about how you can try Blue Ocean today and have resorted to building the plugin yourself or running our Docker image.

We wanted to make it easy to try Blue Ocean in its unfinished state, so we have published the plugin to the experimental update center - it’s available today!

So what is the Experimental Update Center? It is a mechanism for the Jenkins developer community to share early previews of new plugins with the broader user community. Plugins in this update center are experimental, and we strongly advise against running them on production systems or any Jenkins instance that you rely on for your work.

That means any plugin in this update center could eat your Jenkins data, cause slowdowns, degrade security or have its behavior change with no notice.

Stay tuned for more updates!

Join me for Jenkins World 2016


Jenkins World, September 13-15 at the Santa Clara Convention Center (SCCC), takes our 6th annual community user conference to a whole new level. It will be one big party for everything Jenkins, from users to developers, from the community to vendors. There will be more of what people have always loved in past user conferences, such as technical sessions from users and developers, the Ask the Experts booth and the plugin development workshop, and even more has been added, such as pre-conference Jenkins training, workshops and the opportunity to get certified for free. Jenkins World is not to be missed.

Jenkins World 125x125

For me, the best part of Jenkins World is the opportunity to meet other Jenkins users and developers face-to-face. We all interact on IRC, Google Groups or GitHub, but when you have a chance to meet in person, the person behind the GitHub ID or IRC name, whose plugin you use every day, becomes a real person. Your motivation might be a little different from mine, but we have the breadth in the agenda to cover everyone from new users to senior plugin developers.

This year, you’ll have more opportunities than ever before to learn about Jenkins and continuous delivery/DevOps practices, and explore what Jenkins has to offer.

  • If you are travelling from somewhere, you might as well get a two-day Jenkins training course to be held onsite, starting Monday.

  • On Tuesday, you can attend your choice of workshops, which gives you more hands-on time to go deeper, including:

    • The DevOps Toolkit 2.0 Workshop

    • Let’s Build a Jenkins Pipeline

    • Preparing for Jenkins Certification

    • Intro to Plugin Development

    • CD and DevOps Maturity for Managers

  • On Wednesday, the formal conference kicks off. Throughout Wednesday and Thursday, you can choose from sessions spread across five tracks and covering a diverse range of topics like infrastructure as code, security, containers, pipeline automation, best practices, scaling Jenkins and new community development initiatives.

At Jenkins World, you’ll be exposed to projects going on in the community such as Blue Ocean, a new Jenkins UX project. You can learn more about Jenkins 2 - a major release for the project, and based on the huge number of downloads we saw in the weeks following its introduction at the end of April, it was a big +1. At Jenkins World, you will be immersed in Jenkins and community, and leave knowing that you are part of a meaningful open source project that, with your involvement, can do anything!

This year there will only be one Jenkins World conference, so that everyone involved in Jenkins can get together in one place at one time and actually see each other. I understand that it might be a bit more difficult for Jenkins users outside of the US to make it to Jenkins World, but hopefully we have made the event worth the trip. As a final nudge, CloudBees has created a special international program for those who are coming from outside the United States. You’ll have time to talk with all of the other Jenkins users who have made the journey from across the globe, you’ll be able to attend exclusive networking events and more.

I hope to see you September 13th through 15th at Jenkins World in Santa Clara!

St. Petersburg Jenkins Meetup #3 and #4 Reports


I would like to report on the two most recent Jenkins meetups in Saint Petersburg, Russia.

stpetersburg butler 0

Meetup #3. Jenkins Administration (May 20, 2016)

In May we had a meetup about Jenkins administration techniques. At this meetup we were talking about common Jenkins ecosystem components like custom update centers, tool repositories and generic jobs.

Talks:

Meetup #4. IT Global Meetup (July 23, 2016)

In Saint Petersburg there is a regular gathering of local IT communities. This IT Global Meetup is a full-day event, which provides an opportunity for dozens of communities and hundreds of visitors to meet in a single place.

On July 23rd our local Jenkins community participated in the eighth Global Meetup. We presented two talks in the main tracks and also held a round table in the evening.

Talks:

  • Oleg Nenashev, CloudBees, "About Jenkins 2 and future plans"

    • Oleg provided a top-level overview of the changes in Jenkins 2, shared insights about upgrading to the new Jenkins 2.7.1 LTS and talked about the project’s future plans

    • Presentation (rus)

  • Aleksandr Tarasov, Alfa-Laboratory, "Continuous Delivery with Jenkins: Lessons learned"

After the talks we had a roundtable about Jenkins (~10 Jenkins experts). Oleg provided an overview of Docker and Configuration-as-Code features available in Jenkins, and then we talked about common use-cases in Jenkins installations. We hope to finally organize a "Jenkins & Docker" meetup at some point.

Q&A

If you have any questions, all speakers can be contacted viaJenkins RU Gitter Chat.

Acknowledgments

The events have been organized with help from CloudBees, EMC and organizers of the St. Petersburg IT Global Meetup.

Don't install software, define your environment with Docker and Pipeline


This is a guest post by Michael Neale, long time open source developer and contributor to the Blue Ocean project.

If you are running parts of your pipeline on Linux, possibly the easiest way to get a clean, reusable environment is to use the CloudBees Docker Pipeline plugin.

In this short post I wanted to show how you can avoid installing stuff on the agents, and have per-project, or even per-branch, customized build environments. Your environment, as well as your pipeline, is defined and versioned alongside your code.

I wanted to use the Blue Ocean project as an example of a project that uses the CloudBees Docker Pipeline plugin.

Environment and Pipeline for JavaScript components

The Blue Ocean project has a few moving parts, one of which is called the "Jenkins Design Language". This is a grab bag of re-usable CSS, HTML, style rules, icons and JavaScript components (using React.js) that provide the look and feel for Blue Ocean.

JavaScript and Web Development being what it is in 2016, many utilities are needed to assemble a web app. This includes npm and all that it needs, less.js to convert Less to CSS, Babel to "transpile" versions of JavaScript to other types of JavaScript (don’t ask) and more.

We could spend time installing Node.js/npm on the agents, but why not just use the official off-the-shelf Docker image from Docker Hub?

The only thing that has to be installed and run on the build agents is the Jenkins agent and a Docker daemon.

A simple pipeline using this approach would be:

node {
        stage "Prepare environment"
          checkout scm
          docker.image('node').inside {
            stage "Checkout and build deps"
                sh "npm install"

            stage "Test and validate"
                sh "npm install gulp-cli && ./node_modules/.bin/gulp"
          }
}

This uses the stock "official" Node.js image from the Docker Hub, but doesn’t let us customize much about the environment.

Customising the environment, without installing bits on the agent

Being the forward looking and lazy person that I am, I didn’t want to have to go and fish around for a Docker image every time a developer wanted something special installed.

Instead, I put a Dockerfile in the root of the repo, alongside the Jenkinsfile:

Environment

The contents of the Dockerfile can then define the exact environment needed to build the project. Sure enough, shortly after this, someone came along saying they wanted to use Flow from Facebook (a typechecker for JavaScript). This required an additional native component to work (via apt-get install).

This was achieved via a pull request to both the Jenkinsfile and the Dockerfile at the same time.

So now our environment is defined by a Dockerfile with the following contents:

# Let's not just use any old version but pick one
FROM node:5.11.1

# This is needed for flow, and the weirdos that built it in ocaml:
RUN apt-get update && apt-get install -y libelf1

RUN useradd jenkins --shell /bin/bash --create-home
USER jenkins

The Jenkinsfile pipeline now has the following contents:

node {
    stage "Prepare environment"
        checkout scm
        def environment = docker.build 'cloudbees-node'

        environment.inside {
            stage "Checkout and build deps"
                sh "npm install"

            stage "Validate types"
                sh "./node_modules/.bin/flow"

            stage "Test and validate"
                sh "npm install gulp-cli && ./node_modules/.bin/gulp"
                junit 'reports/**/*.xml'
        }

    stage "Cleanup"
        deleteDir()
}
Even hip JavaScript tools can emit that weird XML format that test reporters can use, e.g. the junit result archiver.

The main change is that we have docker.build being called to produce the environment which is then used. Running docker build is essentially a "no-op" if the image has already been built on the agent before.
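
One small variation worth noting: docker.build also accepts a name with a tag, so the environment image could be tagged per branch. The snippet below is a hypothetical tweak, not something the Blue Ocean Jenkinsfile does; env.BRANCH_NAME assumes a multibranch Pipeline job.

// Hypothetical: tag the environment image per branch so branches with
// different Dockerfiles don't overwrite each other's image on the agent.
def environment = docker.build "cloudbees-node:${env.BRANCH_NAME}"

environment.inside {
    sh "npm install"
}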

What’s it like to drive?

Well, using Blue Ocean to build Blue Ocean yields a pipeline that visually looks like this (a recent run I screen-captured):

Pipeline

This creates a pipeline that developers can tweak on a pull-request basis, along with any changes to the environment needed to support it, without having to install any packages on the agent.

Why not use docker commands directly?

You could of course just use shell commands to drive Docker directly; however, Jenkins Pipeline keeps track of the Docker images a build uses via the "Docker Fingerprints" link (which is good, should an image need to change due to a security patch).
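
For comparison, here is a rough sketch of what the shell-only approach might look like; it works, but Jenkins would no longer record which image each build used, and the Pipeline DSL's automatic container cleanup is lost. The paths and image name are illustrative only.

node {
    checkout scm

    // Build the image and run each build step in a throwaway container "by hand"
    sh 'docker build -t cloudbees-node .'
    sh 'docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app cloudbees-node npm install'
    sh 'docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app cloudbees-node ./node_modules/.bin/gulp'
}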

GSoC: External Workspace Manager for Pipeline. Beta release is available


This blog post is a continuation of the External Workspace Manager Plugin related posts, starting with the introductory blog post, and followed by the alpha version release announcement.

As the title suggests, the beta version of the External Workspace Manager Plugin was launched! This means that it’s available only in the Experimental Plugins Update Center.

Take care when installing plugins from the Experimental Update Center, since they may change in backward-incompatible ways. It’s advisable not to use it for Jenkins production environments.

The plugin’s repository is on GitHub. The complete plugin documentation can be accessed here.

What’s new

Below is a summary of the features added since the alpha version.

Multiple upstream run selection strategies

It has support for the Run Selector Plugin (which is still in beta), so you can provide different run selection strategies when allocating a disk from the upstream job.

Let’s suppose that we have an upstream job that clones the repository and builds the project:

def extWorkspace = exwsAllocate 'diskpool1'

node ('linux') {
    exws (extWorkspace) {
        checkout scm
        sh 'mvn clean install -DskipTests'
    }
}

In the downstream job, we run the tests on a different node, but we reuse the same workspace as the previous job:

def run = selectRun 'upstream'
def extWorkspace = exwsAllocate selectedRun: run

node ('test') {
    exws (extWorkspace) {
        sh 'mvn test'
    }
}

The selectRun in this example selects the last stable build from the upstream job. But, we can be more explicit, and select a specific build number from the upstream job.

def run = selectRun 'upstream',
    selector: [$class: 'SpecificRunSelector', buildNumber: UPSTREAM_BUILD_NUMBER]
def extWorkspace = exwsAllocate selectedRun: run
// ...

When the selectedRun parameter is given to the exwsAllocate step, it will allocate the same workspace that was used by that run.

The Run Selector Plugin has several run selection strategies that are briefly explained here.

Automatic workspace cleanup

The plugin provides automatic workspace cleanup by integrating the Workspace Cleanup Plugin. For example, if we need to delete the workspace only if the build has failed, we can do the following:

def extWorkspace = exwsAllocate diskPoolId: 'diskpool1'

node ('linux') {
    exws (extWorkspace) {
        try {
            checkout scm
            sh 'mvn clean install'
        } catch (e) {
            currentBuild.result = 'FAILURE'
            throw e
        } finally {
            step ([$class: 'WsCleanup', cleanWhenFailure: false])
        }
    }
}

More workspace cleanup examples can be found at this link.

Custom workspace path

Allows the user to specify a custom workspace path to be used when allocating workspace on the disk. The plugin offers two alternatives for doing this:

  • by defining a global workspace template for each Disk Pool

This can be defined in the Jenkins global config, External Workspace Definitions section.

global custom workspace path

  • by defining a custom workspace path in the Pipeline script

We can use the Pipeline DSL to compute the workspace path. Then we pass this path as an input parameter to the exwsAllocate step.

def customPath = "${env.JOB_NAME}/${PULL_REQUEST_NUMBER}/${env.BUILD_NUMBER}"
def extWorkspace = exwsAllocate diskPoolId: 'diskpool1', path: customPath
// ...

For more details see the related documentation page.

Disk Pool restrictions

The plugin comes with Disk Pool restriction strategies. It does this by using the restriction capabilities provided by the Job Restrictions Plugin.

For example, we can restrict a Disk Pool to be allocated only if the Jenkins job in which it’s allocated was triggered by a specific user:

restriction by user

Or, we can restrict the Disk Pool to be allocated only for those jobs whose name matches a well defined pattern:

restriction by job name

What’s next

Currently there is ongoing work on providing flexible disk allocation strategies. The user will be able to define a default disk allocation strategy in the Jenkins global config. For example, we might want to select the disk with the most usable space as the default allocation strategy:

global disk allocation strategy

If needed, this allocation strategy may be overridden in the Pipeline code. Let’s suppose that for a specific job, we want to allocate the disk with the highest read speed.

def extWorkspace = exwsAllocate diskPoolId: 'diskpool1', strategy: fastestRead()
// ...

When this feature is completed, the plugin will enter a final testing phase. If all goes to plan, a stable version should be released in about two weeks.

If you have any issues in setting up or using the plugin, please feel free to ask me on the plugin’s Gitter chat. Any feedback is welcome, and you may provide it either on the Gitter chat, or on Jira by using the external-workspace-manager-plugin component.

Continuous Security for Rails apps with Pipeline and Brakeman


This is a guest post by R. Tyler Croy, who is a long-time contributor to Jenkins and the primary contact for Jenkins project infrastructure. He is also a Jenkins Evangelist at CloudBees, Inc.

When the Ruby on Rails framework debuted, it changed the industry in two noteworthy ways: it created a trend of opinionated web application frameworks (Django, Play, Grails) and it also strongly encouraged thousands of developers to embrace test-driven development along with many other modern best practices (source control, dependency management, etc.). Because Ruby, the language underneath Rails, is interpreted instead of compiled, there isn’t a "build" per se but rather tens, if not hundreds, of tests, linters and scans which are run to ensure the application’s quality. With the rise in popularity of Rails, the popularity of application hosting services with easy-to-use deployment tools like Heroku or Engine Yard rose too.

This combination of good test coverage and easily automated deployments makes Rails easy to continuously deliver with Jenkins. In this post we’ll cover testing non-trivial Rails applications with Jenkins Pipeline and, as an added bonus, we will add security scanning via Brakeman and the Brakeman plugin.

cfpapp stage view

Topics

For this demonstration, I used Ruby Central's cfp-app:

A Ruby on Rails application that lets you manage your conference’s call for proposal (CFP), program and schedule. It was written by Ruby Central to run the CFPs for RailsConf and RubyConf.

I chose this Rails app not only because it’s a sizable application with lots of tests, but because it’s actually the application we used to collect talk proposals for the "Community Tracks" at this year’s Jenkins World. For the most part, cfp-app is a standard Rails application. It uses PostgreSQL for its database, RSpec for its tests and Ruby 2.3.x as its runtime.

If you prefer to just look at the code, skip straight to the Jenkinsfile.

Preparing the app

For most Rails applications there are few, if any, changes needed to enable continuous delivery with Jenkins. In the case of cfp-app, I added two gems to get optimal integration with Jenkins:

  1. ci_reporter, for test report integration

  2. brakeman, for security scanning.

Adding these was simple; I just needed to update the Gemfile and the Rakefile in the root of the repository to contain:

Gemfile
# .. snip ..
group :test do
  # RSpec, etc
  gem 'ci_reporter'
  gem 'ci_reporter_rspec'
  gem "brakeman", :require => false
end
Rakefile
# .. snip ..
require 'ci/reporter/rake/rspec'

# Make sure we setup ci_reporter before executing our RSpec examples
task :spec => 'ci:setup:rspec'

Preparing Jenkins

With the cfp-app project set up, next on the list is to ensure that Jenkins itself is ready. Generally I suggest running the latest LTS of Jenkins; for this demonstration I used Jenkins 2.7.1 with the following plugins:

I also used the GitHub Organization Folder plugin to automatically create pipeline items in my Jenkins instance; that isn’t required for the demo, but it’s pretty cool to see repositories and branches with a Jenkinsfile automatically show up in Jenkins, so I recommend it!

In addition to the plugins listed above, I also needed at least one Jenkins agent with the Docker daemon installed and running on it. I label these agents with "docker" to make it easier to assign Docker-based workloads to them in the future.

Any Linux-based machine with Docker installed will work; in my case I was provisioning on-demand agents with the Azure plugin which, like the EC2 plugin, helps keep my test costs down.

If you’re using Amazon Web Services, you might also be interested in this blog post from earlier this year unveiling the EC2 Fleet plugin for working with EC2 Spot Fleets.

Writing the Pipeline

To make sense of the various things that the Jenkinsfile needs to do, I find it easier to start by simply defining the stages of my pipeline. This helps me think of, in broad terms, what order of operations my pipeline should have. For example:

/* Assign our work to an agent labelled 'docker' */
node('docker') {
    stage 'Prepare Container'
    stage 'Install Gems'
    stage 'Prepare Database'
    stage 'Invoke Rake'
    stage 'Security scan'
    stage 'Deploy'
}

As mentioned previously, this Jenkinsfile is going to rely heavily on the CloudBees Docker Pipeline plugin. The plugin provides two very important features:

  1. Ability to execute steps inside of a running Docker container

  2. Ability to run a container in the "background."

Like most Rails applications, one can effectively test the application with two commands: bundle install followed by bundle exec rake. I already had some Docker images prepared with RVM and Ruby 2.3.0 installed, which ensures a common and consistent starting point:

node('docker') {
    // .. 'stage' steps removed
    docker.image('rtyler/rvm:2.3.0').inside { (1)
        rvm 'bundle install' (2)
        rvm 'bundle exec rake'
    } (3)
}
(1) Run the named container. The inside method can take optional additional flags for the docker run command.
(2) Execute our shell commands using our tiny sh step wrapper rvm (a sketch of such a wrapper appears below). This ensures that the shell code is executed in the correct RVM environment.
(3) When the closure completes, the container will be destroyed.
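
The rvm wrapper itself isn't shown in this post. A hypothetical version, assuming RVM lives under /usr/local/rvm inside the rtyler/rvm image, might look something like this; the real Jenkinsfile may differ:

// Hypothetical helper: run a shell command inside the container's RVM environment
def rvm(String commands) {
    sh "bash -c 'source /usr/local/rvm/scripts/rvm && rvm use 2.3.0 && ${commands}'"
}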

Unfortunately, with this application, the bundle exec rake command will fail if PostgreSQL isn’t available when the process starts. This is where the second important feature of the CloudBees Docker Pipeline plugin comes into effect: the ability to run a container in the "background."

node('docker') {
    // .. 'stage' steps removed

    /* Pull the latest `postgres` container and run it in the background */
    docker.image('postgres').withRun { container -> (1)
        echo "PostgreSQL running in container ${container.id}" (2)
    } (3)
}
(1) Run the container, effectively docker run postgres
(2) Any number of steps can go inside the closure
(3) When the closure completes, the container will be destroyed.

Running the tests

Combining these two snippets of Jenkins Pipeline is, in my opinion, where the power of the DSL shines:

node('docker') {
    docker.image('postgres').withRun { container ->
        docker.image('rtyler/rvm:2.3.0').inside("--link=${container.id}:postgres") { (1)
            stage 'Install Gems'
            rvm "bundle install"

            stage 'Invoke Rake'
            withEnv(['DATABASE_URL=postgres://postgres@postgres:5432/']) { (2)
                rvm "bundle exec rake"
            }
            junit 'spec/reports/*.xml' (3)
        }
    }
}
(1) By passing the --link argument, the Docker daemon will allow the RVM container to talk to the PostgreSQL container under the host name postgres.
(2) Use the withEnv step to set environment variables for everything that is in the closure. In this case, the cfp-app DB scaffolding will look for the DATABASE_URL variable to override the DB host/user/dbname defaults.
(3) Archive the test reports generated by ci_reporter so that Jenkins can display test reports and trend analysis.
cfpapp tests

With this done, the basics are in place to consistently run the tests for cfp-app in fresh Docker containers for each execution of the pipeline.

Security scanning

Using Brakeman, the security scanner for Ruby on Rails, is almost trivially easy inside of Jenkins Pipeline, thanks to the Brakeman plugin which implements the publishBrakeman step.

Building off our example above, we can implement the "Security scan" stage:

node('docker') {
    /* --8<--8<-- snipsnip --8<--8<-- */
    stage 'Security scan'
    rvm 'brakeman -o brakeman-output.tabs --no-progress --separate-models' (1)
    publishBrakeman 'brakeman-output.tabs' (2)

    /* --8<--8<-- snipsnip --8<--8<-- */
}
(1) Run the Brakeman security scanner for Rails and store the output for later in brakeman-output.tabs
(2) Archive the reports generated by Brakeman so that Jenkins can display detailed reports with trend analysis.
cfpapp brakeman

As of this writing, there is work in progress (JENKINS-31202) to render trend graphs from plugins like Brakeman on a pipeline project’s main page.

Deploying the good stuff

Once the tests and security scanning are all working properly, we can start to set up the deployment stage. Jenkins Pipeline provides the variable currentBuild which we can use to determine whether our pipeline has been successful thus far or not. This allows us to add the logic to only deploy when everything is passing, as we would expect:

node('docker') {
    /* --8<--8<-- snipsnip --8<--8<-- */
    stage 'Deploy'

    if (currentBuild.result == 'SUCCESS') { (1)
        sh './deploy.sh' (2)
    }
    else {
        mail subject: "Something is wrong with ${env.JOB_NAME} ${env.BUILD_ID}",
             to: 'nobody@example.com',
             body: 'You should fix it'
    }
    /* --8<--8<-- snipsnip --8<--8<-- */
}
(1) currentBuild has the result property which would be 'SUCCESS', 'FAILED', 'UNSTABLE', 'ABORTED'
(2) Only if currentBuild.result is successful should we bother invoking our deployment script (e.g. git push heroku master)

Wrap up

I have gratuitously commented the full Jenkinsfile which I hope is a useful summation of the work outlined above. Having worked on a number of Rails applications in the past, the consistency provided by Docker and Jenkins Pipeline above would have definitely improved those projects' delivery times. There is still room for improvement, however, which is left as an exercise for the reader: preparing new containers with all their dependencies built in instead of installing them at run time, or utilizing the parallel step for executing RSpec across multiple Jenkins agents simultaneously.

The beautiful thing about defining your continuous delivery, and continuous security, pipeline in code is that you can continue to iterate on it!

cfpapp stage view

Using Jenkins for Disparate Feedback on GitHub

This is a guest post by Ben Patterson, Engineering Manager at edX.

Picking a pear from a basket is straightforward when you can hold it in your hand, feel its weight, perhaps give a gentle squeeze, observe its color and look more closely at any bruises. If the only information we had was a photograph from one angle, we’d have to do some educated guessing.

1pear

As developers, we don’t get a photograph; we get a green checkmark or a red x. We use that to decide whether or not we need to switch gears and go back to a pull request we submitted recently. At edX, we take advantage of some Jenkins features that could give us more granularity on GitHub pull requests, and make that decision less of a guessing game.

5pears

Multiple contexts reporting back when they’re available

Pull requests on our platform are evaluated from several angles: static code analysis including linting and security audits, JavaScript unit tests, Python unit tests, acceptance tests and accessibility tests. Using an elixir of plugins, including the GitHub Pull Request Builder Plugin, we put more direct feedback into the hands of the contributor so s/he can quickly decide how much digging is going to be needed.

For example, if I made adjustments to my branch and know more requirements are coming, then I may not be as worried about passing the linter; however, if my unit tests have failed, I likely have a problem I need to address regardless of when the new requirements arrive. Timing is important as well. Splitting out the contexts means we can run tests in parallel and report results faster.

Developers can re-run specific contexts

jenkins run python

Occasionally the feedback mechanism fails. It is oftentimes a flaky condition in a test or in test setup. (Solving flakiness is a different discussion I’m sidestepping. Accept the fact that the system fails for purposes of this blog entry.) Engineers are armed with the power of re-running specific contexts, also available through the PR plugin. A developer can say “jenkins run bokchoy” to re-run the acceptance tests, for example. A developer can also re-run everything with “jenkins run all”. These phrases are set through the GitHub Pull Request Builder configuration.

More granular data is easier to find for our Tools team

Splitting the contexts has also given us important data points for our Tools team to help in highlighting things like flaky tests, time to feedback and other metrics that help the org prioritize what’s important. We use this with a log aggregator (in our case, Splunk) to produce valuable reports such as this one.

95th percentile

I could go on! The short answer here is we have an intuitive way of divvying up our tests, not only for optimizing the overall amount of time it takes to get build results, but also to make the experience more user-friendly to developers.

I’ll be presenting more of this concept and expanding on the edX configuration details at Jenkins World in September.


Continuously Delivering Continuous Delivery Pipelines

This is a guest post by Jenkins World speaker Neil Hunt, Senior DevOps Architect at Aquilent.

In smaller companies with a handful of apps and fewer silos, implementing CD pipelines to support these apps is fairly straightforward using one of the many delivery orchestration tools available today. There is likely a constrained tool set to support - not an abundance of flavors of applications and security practices - and generally fewer cooks in the kitchen. But in a larger organization, I have found that in the past, there were seemingly endless unique requirements and mountains to climb to reach this level of automation on each new project.

Enter the Jenkins Pipeline plugin. My recently departed former company, a large financial services organization with a 600+ person IT organization and 150+ application portfolio, set out to implement continuous delivery enterprise-wide. After considering several pipeline orchestration tools, we determined the Pipeline plugin (at the time called Workflow) to be the superior solution for our company. Pipeline has continued Jenkins' legacy of presenting an extensible platform with just the right set of features to allow organizations to scale its capabilities as they see fit, and do so rapidly. As early adopters of Pipeline with a protracted set of requirements, we used it both to accelerate the pace of onboarding new projects and to reduce the ongoing feature delivery time of our applications.

In my presentation at Jenkins World, I will demonstrate the methods we used to enable this. A few examples:

  • We leveraged the Pipeline Remote File Loader plugin to write shared common code and sought and received community enhancements to these functions.

jw speaker blog aquilent 1 1

Jenkinsfile, loading a shared AWS utilities function library

jw speaker blog aquilent 2

awsUtils.groovy, snippets of some AWS functions

  • We migrated from EC2 agents to Docker-based agents running on Amazon’s Elastic Container Service, allowing us to spin up new executors in seconds and for teams to own their own executor definitions.

jw speaker blog aquilent 3

Pipeline run #1 using standard EC2 executors, spinning up EC2 instance for each node; Pipeline run #2 using shared ECS cluster with near-instant instantiation of a Docker slave in the cluster for each node.

  • We also created a Pipeline Library of common pipelines, enabling projects that fit certain models to use ready-made end-to-end pipelines. Some examples (a minimal sketch of the first appears after this list):

    • Maven JAR Pipeline: Pipeline that clones git repository, builds JAR file from pom.xml, deploys to Artifactory, and runs maven release plugin to increment next version

    • Angular.js Pipeline: Pipeline that executes a grunt and bower build, then runs an S3 sync to Amazon S3 buckets in Dev, then Stage, then Prod.

    • Pentaho Reports Pipeline: Pipeline that clones git repository, constructs zip file, and executes Pentaho Business Intelligence Platform CLI to import new set of reports in Dev, Stage, then Prod servers.
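
To give a feel for what one of these library pipelines contains, here is a minimal, hypothetical skeleton of the Maven JAR flavor; the repository URL, deployment target and exact release steps are placeholders rather than Aquilent's actual shared code:

node {
    stage 'Checkout'
    checkout scm

    stage 'Build'
    sh 'mvn -B clean package'

    stage 'Publish'
    // Placeholder deployment repository; a real shared library would read this from configuration
    sh 'mvn -B deploy -DaltDeploymentRepository=artifactory::default::https://artifactory.example.com/libs-release-local'

    stage 'Release'
    sh 'mvn -B release:prepare release:perform'
}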

Perhaps most critically, a shout-out to the saving grace of this quest for our security and ops teams: the manual input step! While the ambition of continuous delivery is to have as few of these as possible, this was the single-most pivotal feature in convincing others of Pipeline’s viability, since now any step of the delivery process could be gate-checked by an LDAP-enabled permission group. Were it not for the availability of this step, we may still be living in the world of: "This seems like a great tool for development, but we will have a segregated process for production deployments." Instead, we had a pipeline full of many input steps at first, and then used the data we collected around the longest delays to bring management focus to them and unite everyone around the goal of strategically removing them, one by one.
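
As a small illustration of such a gate, the input step can require approval from a specific group before the pipeline continues; the group ID below is a made-up example, not one of the actual permission groups mentioned above:

stage 'Approve production deploy'

// Pause until a member of the authorized group approves (or the build is aborted)
input message: "Deploy ${env.JOB_NAME} #${env.BUILD_NUMBER} to production?",
      submitter: 'prod-approvers'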

jw speaker blog aquilent 4

Going forward, having recently joined Aquilent’s Cloud Solutions Architecture team, I’ll be working with our project teams here to further mature the use of these Pipeline plugin features as we move towards continuous delivery. Already, we have migrated several components of our healthcare.gov project to Pipeline. The team has been able to consolidate several Jenkins jobs into a single, visible delivery pipeline, to maintain the lifecycle of the pipeline with our application code base in our SCM, and to more easily integrate with our external tools.

Jenkins World 125x125

Due to functional shortcomings in the early adoption stages of the Pipeline plugin and the ever-present political challenges of shifting organizational policy, this has been and continues to be far from a bruise-free journey. But we plodded through many of these issues to bring this to fruition and ultimately reduced the number of manual steps in some pipelines from 12 down to 1 and brought our 20+ minute Jenkins pipelines down to only six minutes after months of iteration. I hope you’ll join this session at Jenkins World and learn about our challenges and successes in achieving the promise of continuous delivery at enterprise scale.

Neil will be presenting more of this concept at Jenkins World in September. Register with the code JWHINMAN for 20% off your full conference pass.

GSoC: External Workspace Manager for Pipeline is released


This blog post is the last one from the series of Google Summer of Code 2016 posts about the External Workspace Manager Plugin project. The previous posts are:

In this post I would like to announce the release of External Workspace Manager Plugin version 1.0.0 to the main update center.

Here’s a highlight of the available features:

  • Workspace share and reuse across multiple jobs, running on different nodes

  • Automatic workspace cleanup

  • Provide custom workspace path on the disk

  • Disk Pool restrictions

  • Flexible Disk allocation strategies

All the above are detailed, with usage examples, on the plugin’s documentation page.
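
For readers who haven't followed the earlier posts, a minimal sketch of the share-and-reuse idea looks roughly like this; the disk pool ID and node labels are placeholders:

def extWorkspace = exwsAllocate 'diskpool1'

// Build on one node...
node('linux') {
    exws(extWorkspace) {
        checkout scm
        sh 'mvn clean install -DskipTests'
    }
}

// ...then run the tests on a different node, reusing the same workspace.
node('test') {
    exws(extWorkspace) {
        sh 'mvn test'
    }
}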

Future work

Currently, there is work in progress on the workspace browsing feature (see pull request #37). Afterwards, I’m planning to integrate fingerprints, so that the user can trace in which other jobs a specific workspace was used. A particular feature that would be nice to have is to integrate the plugin with at least one disk provider (e.g. Amazon EBS, Google Cloud Storage).

Many other features and improvements are still to come; they are grouped in the phase 3 epic: JENKINS-37543. The plugin’s repository is on GitHub. If you’d like to come up with new features or ideas, contributions are very welcome.

Closing

This was a Google Summer of Code 2016 project. A summary of the contributions that I’ve made to the Jenkins project during this time may be found here. It was a great experience, from which I learned a lot, and I wish I could repeat it every year.

I’d like to thank my mentors, Oleg Nenashev and Martin d’Anjou, for all their support, good advice and help. Also, thanks to the Jenkins contributors who interacted with me and helped me during this period.

If you have any issues in setting up or using the plugin, please feel free to ask me on the plugin’s Gitter chat. Any feedback is welcome, and you may provide it either on the Gitter chat, or on Jira by using the external-workspace-manager-plugin component.

Jenkins World 2016 Festivities


Jenkins World 125x125

At Jenkins World 2016 on September 14-15, stop by the "Open Source Hub", located in the Partner Expo hall at the Santa Clara Convention Center in Santa Clara, CA. The Open Source Hub will have many Jenkins contributors, committers, JAM leaders, and officers from the governance board under one roof, so there will be plenty of knowledge and talent on hand to share. We hope you’ll join in on the festivities.

Ask the Experts

The setup that is waiting for you: whiteboards, monitors and lots of brain power to help answer those Jenkins questions that have been keeping you up at night. Jenkins experts can help with everything from beginner questions to more advanced ones. All you need to do is bring your laptop and your questions; the experts will help answer them!

Live Demos

Sometimes seeing is believing; there will be plenty of demos in the "Open Source Hub" during the lunch hours on Wednesday September 14th and Thursday September 15th in the expo hall. Jenkins experts will be showcasing their favorite Jenkins features, plugins and projects. Grab your lunch, take a seat in the open source theater to learn about:

  • Pipelines for Building and Deploying Android Apps by Android Emulator plugin maintainer Chris Orr

  • Git Plugin - Large Repos, Submodule Authentication, and more by Git plugin maintainer Mark Waite

  • Docker and Pipeline by Jenkins infrastructure contributor R. Tyler Croy

  • Extending Pipeline with Libraries by Pipeline plugin maintainer Jesse Glick

  • Blue Ocean in Action by Blue Ocean contributor Keith Zantow

  • External Workspace Manager plugin for Pipeline by Google Summer of Code student Alexandru Somai

  • And many more

Jenkins Mural

Jenkins World participants will take part in the realization of a giant collaborative mural painting with the CommitStrip team. Thomas, the writer, and Etienne, the cartoonist, teamed up with a few Jenkins contributors to design a 5m x 2m mural which will be drawn live! Brushes and colors will be available for all attendees who wish to help paint this one-of-a-kind piece of Jenkins art.

Sticker Swap

Jenkins World attendees will have a chance to swap stickers. There will be a table where attendees are welcome to place/take stickers. Bring your cool stickers to share with others and take stickers that interest you.

logo 128, angry jenkins 128, ninja 128, jenkins master, blue ocean girl

After Dark Reception Sponsored by CloudBees

cloudbees

The After Dark reception will be from 6-8pm on Wednesday, September 14 in the Partner Expo. Enjoy cocktails and appetizers, mingle, and dance to a live band. A big THANK YOU goes out to CloudBees for their generous contributions! See you at After Dark!

Contributor Summit - Tuesday, September 13

If Blue Ocean, Pipeline and Storage Pluggability sound interesting to you, join the interactive discussions surrounding these topics. The Jenkins project is also looking to hear use cases, war stories, and pain points. The objective of the summit is to work towards improving the Jenkins project. Seats are limited.


Don’t forget to register; I look forward to seeing you at the conference!

Acknowledgements

Special thanks to CloudBees as the premier sponsor and BlazeMeter, Microsoft, Red Hat and all the other sponsors who have made this event possible.

Ask the Experts at Jenkins World 2016


Jenkins World 125x125

Our events officer Alyssa has been working for the past several weeks to organize the "Open Source Hub" at Jenkins World 2016. The Hub is a location on the expo floor where contributors to the Jenkins project can hang out, share demos and help Jenkins users via the "Ask the Experts" program. Thus far we have a great list of experts who have volunteered to help staff the booth, which includes many frequent contributors, JAM organizers and board members.

A few of the friendly folks you will see at Jenkins World are:

I hope that this list isn’t exhaustive! If you are an active member of the Jenkins community and/or a contributor, consider taking part in the "Ask the Experts" program. It’s a great opportunity to bond with other contributors and talk with fellow users at Jenkins World.

You will be able to find us in the expo hall under the "Open Source Hub" sign; please stop by at Jenkins World to say hello, pick up stickers and to ask questions!

Register for Jenkins World in September with the code JWTCROY for a 20% discount off your pass.

Enforcing Jenkins Best Practices

This is a guest post by Jenkins World speaker David Hinske, Release Engineer at Goodgame Studios.

Jenkins World

Hey there, my name is David Hinske and I work at Goodgame Studios (GGS), a game development company in Hamburg, Germany. As Release Engineer in a company with several development teams, it comes in handy to use several Jenkins instances. While this approach works fine in our company and gives the developers a lot of freedom, we came across some long-term problems concerning maintenance and standards. These problems were mostly caused by misconfiguration or non-use of plugins. With “configuration as code” in mind, I took the approach of applying static code analysis, with the help of SonarQube, a platform to manage code quality, to all of our Jenkins job configurations.

As a small centralized team, we were looking for an easy way to control the health of our growing Jenkins infrastructure. With “configuration as code” in mind, I developed a simple extension of SonarQube to manage the quality and usage of all spawned Jenkins instances. The given SonarQube features (like customized rules/metrics, quality profiles and dashboards) allow us and the development teams to analyze and measure the quality of all created jobs in our company. Even though Jenkins configuration analysis cannot cover all of SonarQube’s axes of code quality, I think there is still potential for conventions/standards, duplications, complexity, potential bugs (misconfiguration) and design and architecture.

The results of this analysis can be used by all people working with Jenkins. To achieve this, I developed a simple extension of SonarQube, containing everything which is needed to hook up our SonarQube with our Jenkins environment. The implementation contains a new basic-language “Jenkins“ and an initial set of rules.

Of course the needs depend strongly on the way Jenkins is being used, so not every rule implemented might be useful for every team, but this applies to all types of code analysis. The main inspirations for the rules were developer feedback and some articles found in the web. The different ways Jenkins can be configured provides the potential for many more rules. With this new approach of quality analysis, we can enforce best practices like:

  • Polling must die (better to trigger builds from pushes than to poll the repository every x minutes).

  • Use Log Rotator (Not using log-rotator can result in disk space problems on the master).

  • Use slaves/labels (Jobs should be defined where to run).

  • Don’t build on the master (In larger systems, don’t build on the master).

  • Enforce plugin usage (For example: Timestamp, Mask-Passwords).

  • Naming sanity (Limit project names to a sane (e.g. alphanumeric) character set).

  • Analyze Groovy Scripts (For example: Prevent System.exit(0) in System Groovy Scripts).

jenkins1

Besides taking control of all configuration of any Jenkins instance we want, there is also room for additional metrics, like measuring the number and different types of jobs (Freestyle, Maven, etc.) to get an overview of the general load of the Jenkins instance. A more sophisticated idea is to measure the complexity of jobs and even pipelines. Like code, job configuration gets harder to understand the more steps are involved. On the one hand, scripts, conditions and many parameters can negatively influence readability, especially if you have external dependencies (like scripts) in different locations. On the other hand, pipelines can also grow very complex when many jobs are involved and chained for execution. It will be very interesting for us to see where and why overly complex pipelines are being created.

For visualization we rely on the data and its interpretation in SonarQube, which offers a wide range of widgets. Everybody can use and customize the dashboards. Our centralized team, for example, has a separate dashboard where we can get a quick overview over all instances.

jenkins2

The problem of "growing" Jenkins with maintenance problems is not new. Especially when you have many developers involved, including ones with access to create jobs and pipelines themselves, an analysis like the one this SonarQube plugin provides can be useful for anyone who wants to keep their Jenkins in shape. Customization and standards play a big role in this scenario. This blog post surely is not an advertisement for the plugin I developed; it is more about the crazy idea of using static code analysis for Jenkins job configuration. I haven’t seen anything like it so far and I feel that there might be some potential behind this idea.

Join me at my Enforcing Jenkins Best Practices session at the 2016 Jenkins World to hear more!

David will be presenting more of this concept at Jenkins World in September. Register with the code JWFOSS for 20% off your full conference pass.

Browser-testing with Sauce OnDemand and Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Testing web applications across multiple browsers on different platforms can be challenging even for smaller applications. With Jenkins and the Sauce OnDemand Plugin, you can wrangle that complexity by defining your Pipeline as Code.

Pipeline ♥ UI Testing, Too

I recently started looking for a way to do browser UI testing for an open-source JavaScript project to which I contribute. The project is targeted primarily at Node.js but we’re committed to maintaining browser-client compatibility as well. That means we should run tests on a matrix of browsers. Sauce Labs has an "open-sauce" program that provides free test instances to open-source projects. I decided to try using the Sauce OnDemand Plugin and Nightwatch.js to run Selenium tests on a sample project first, before trying a full-blown suite of tests.

Starting from Framework

I started off by following Sauce Labs' instructions on "Setting up Sauce Labs with Jenkins" as far as I could. I installed the JUnit and Sauce OnDemand plugins, created an account with Sauce Labs, and added my Sauce Labs credentials to Jenkins. From there I started to get a little lost. I’m new to Selenium and I had trouble understanding how to translate the instructions to my situation. I needed a working example that I could play with.

Happily, there’s a whole range of sample projects in "saucelabs-sample-test-frameworks" on GitHub, which show how to integrate Sauce Labs with various test frameworks, including Nightwatch.js. I forked the Nightwatch.js sample to bitwiseman/JS-Nightwatch.js and set about writing my Jenkinsfile. Between the sample and the Sauce Labs instructions, I was able to write a pipeline that ran five tests on one browser via Sauce Connect:

node {
    stage "Build"
    checkout scm

    sh 'npm install' (1)

    stage "Test"
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') { (2)
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) { (3)
            sh './node_modules/.bin/nightwatch -e chrome --test tests/guineaPig.js || true' (4)
            junit 'reports/**' (5)
            step([$class: 'SauceOnDemandTestPublisher']) (6)
        }
    }
}
(1) Install dependencies
(2) Use my previously added Sauce credentials
(3) Start up the Sauce Connect tunnel to Sauce Labs
(4) Run Nightwatch.js
(5) Use JUnit to track results and show a trend graph
(6) Link result details from Sauce Labs

This pipeline expects to be run from a Jenkinsfile in SCM. To copy and paste it directly into a Jenkins Pipeline job, replace the checkout scm step with git url: 'https://github.com/bitwiseman/JS-Nightwatch.js', branch: 'sauce-pipeline'.

I ran this job a few times to get the JUnit report to show a trend graph.

Pipeline Report

This sample app generates the SauceOnDemandSessionID for each test, enabling the Jenkins Sauce OnDemand Plugin’s result publisher to link results to details Sauce Labs captured during the run.

Sauce Test Results
Sauce Test Result Details

Adding Platforms

Next I wanted to add a few more platforms to my matrix. This would require changing both the test framework configuration and the pipeline. I’d need to add new named combinations of platform, browser, and browser version (called "environments") to the Nightwatch.js configuration file, and modify the pipeline to run tests in those new environments.

This is a perfect example of the power of pipeline as code. If I were working with a separately configured pipeline, I’d have to make the change to the test framework, then change the pipeline manually. With my pipeline checked in as code, I could change both in one commit, preventing errors that result from the pipeline configuration going out of sync with the rest of the project.

I added three new environments to nightwatch.json:

"test_settings" : {"default": { /*----8<----8<----8<----*/ },"chrome": { /*----8<----8<----8<----*/ },"firefox": {"desiredCapabilities": {"platform": "linux","browserName": "firefox","version": "latest"
    }
  },"ie": {"desiredCapabilities": {"platform": "Windows 10","browserName": "internet explorer","version": "latest"
    }
  },"edge": {"desiredCapabilities": {"platform": "Windows 10","browserName": "MicrosoftEdge","version": "latest"
    }
  }
}

And I modified my Jenkinsfile to call them:

//----8<----8<----8<----8<----8<----8<----
sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {
    def configs = [ (1)
        'chrome',
        'firefox',
        'ie',
        'edge'
    ].join(',')

    // Run selenium tests using Nightwatch.js
    sh "./node_modules/.bin/nightwatch -e ${configs} --test tests/guineaPig.js" (2)
}
//----8<----8<----8<----8<----8<----8<----
(1) Using an array improves readability and makes it easy to add more platforms later.
(2) Changed from a single-quoted string to a double-quoted one to support variable substitution (see the example just below).
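
To make that distinction concrete, here is a tiny Groovy example (an illustration only, not part of the project’s Jenkinsfile) showing how Pipeline treats the two kinds of strings:

def configs = ['chrome', 'firefox'].join(',')
echo 'nightwatch -e ${configs}'   // single quotes: ${configs} is kept as literal text
echo "nightwatch -e ${configs}"   // double quotes: interpolated to "nightwatch -e chrome,firefox"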

Test frameworks have bugs too. Nightwatch.js (v0.9.8) generates incomplete JUnit files, reporting results without enough information to distinguish between platforms. I implemented a fix for it and submitted a PR to Nightwatch.js. This blog post shows output with that fix applied locally.

As expected, Jenkins picked up the new pipeline and ran Nightwatch.js on four platforms. Sauce Labs of course recorded the results and correctly linked them to this build. Nightwatch.js was already configured to use multiple worker threads to run tests against those platforms in parallel, and my Sauce Labs account supported running them all at the same time, letting me cover four configurations in less than twice the time; the added time was mostly due to individual new environments taking longer to complete. When I move to the actual project, this will let me run broad acceptance passes quickly.

Sauce Labs Results List
JUnit Report Showing Added Platforms

Conclusion: To Awesome and Beyond

Considering the complexity of the system, I was impressed with how easy it was to integrate Jenkins with Sauce OnDemand to start testing on multiple browsers. The plugin worked flawlessly with Jenkins Pipeline. I went ahead and ran some additional tests to show that failure reporting also behaved as expected.

//----8<----8<----8<----8<----8<----8<----
    sh "./node_modules/.bin/nightwatch -e ${configs}" (1)
//----8<----8<----8<----8<----8<----8<----
(1) Removed the --test filter to run all tests
Tests

Epilogue: Pipeline vs. Freestyle

Just for comparison, here’s the final state of this job in the Freestyle UI versus fully commented pipeline code:

This includes the AnsiColor Plugin to support Nightwatch.js' default ANSI color output.

Freestyle

Freestyle Job Configuration - SCM
Freestyle Job Configuration - Wrappers and Sauce
Freestyle Job Configuration - Build and Publish

Pipeline

node {
    stage "Build"
    checkout scm

    // Install dependencies
    sh 'npm install'

    stage "Test"

    // Add sauce credentials
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') {
        // Start sauce connect
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {
            // List of browser configs we'll be testing against.
            def platform_configs = [
                'chrome',
                'firefox',
                'ie',
                'edge'
            ].join(',')

            // Nightwatch.js supports color output, so wrap this step for ANSI color
            wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
                // Run selenium tests using Nightwatch.js
                // Ignore error codes. The junit publisher will cover setting build status.
                sh "./node_modules/.bin/nightwatch -e ${platform_configs} || true"
            }

            junit 'reports/**'

            step([$class: 'SauceOnDemandTestPublisher'])
        }
    }
}

This pipeline expects to be run from a Jenkinsfile in SCM. To copy and paste it directly into a Jenkins Pipeline job, replace the checkout scm step with git url: 'https://github.com/bitwiseman/JS-Nightwatch.js', branch: 'sauce-pipeline'.

Not only is the pipeline as code more compact, it also allows for comments to further clarify what is being done. And as I noted earlier, changes to this pipeline code are committed the same way as changes to the rest of the project, keeping everything synchronized, reviewable, and testable at any commit. In fact, you can view the full set of commits for this blog post in the sauce-pipeline branch of the bitwiseman/JS-Nightwatch.js repository.

Demos at Jenkins World 2016


Jenkins World

At this year’s Jenkins World, our events officer Alyssa has been working to organize various activities in the "Open Source Hub" on the expo floor. On both days of the conference (Sept. 14th and 15th), during the lunch break, there will be 15-minute demos by many of the experts helping to staff the Open Source Hub.

Demo Schedule

Wednesday, September 14th

Time | Session | Details | Presenter

12:15 - 12:30

Blue Ocean in Action

Showcase of Blue Ocean and how it will make Jenkins a pleasure to use.

Keith Zantow

12:30 - 12:45

Notifications with Jenkins Pipeline

Sending information to Slack, HipChat, email and more from your Pipeline

Liam Newman

12:45 - 13:00

Docker and Pipeline

Learn how to use Docker inside of Pipeline for clean, repeatable testing environments

R Tyler Croy

13:00 - 13:15

Git plugin - large repos, submodule authentication and more

Techniques for managing large Git repositories, Submodule authentication, Pipelines and Git

Mark Waite

13:15 - 13:30

Freestyle to Pipeline

Overview of how easy it is to migrate from a confusing series of Freestyle Jobs to Jenkins Pipeline

R Tyler Croy

13:30 - 13:45

package.json and Jenkins

Using package.json to control your build; running tests, coverage and generating documentation in Jenkins

Casey Vega

13:45 - 14:00

Extending Pipeline with Libraries

When you have many jobs using similar configuration, it is natural to factor out the common parts into libraries. See some ways Pipeline lets you do this.

Jesse Glick

Thursday, September 15th

Time | Session | Details | Presenter

12:15 - 12:30

A simpler way to define Jenkins Pipelines

Get to know a new way to define your Pipelines in a more configuration-like way!

Andrew Bayer

12:30 - 12:45

Multibranch Pipelines + Git symbolic-ref

Pipeline Multibranch Plugin is amazing, but is even better when used with Git symbolic references. The combination of the two gives users a way to create individual Jenkins jobs for each of their build/test configurations, instead of using a single parameterized job. I’ll show how to use these tools together to home in on problematic tests, systems under test, or both.

Jon Hermansen

12:45 - 13:00

External Workspace Manager plugin for Jenkins Pipeline

Meet the External Workspace Manager plugin, which supports managing workspaces across multiple Jenkins jobs running on different nodes and more!

Alex Somai

13:00 - 13:15

Ownership plugin for Jenkins

The presentation will introduce the Ownership engine for Jenkins jobs, folders and nodes. It will cover the plugin’s WebUI features, Ownership-based security, and integration with Jenkins Pipeline.

Oleg Nenashev

13:15 - 13:30

Pipelines for building and deploying Android apps

Using the various Android-related plugins for Jenkins, we will demonstrate pipelines to automatically build, test, and securely deploy Android apps.

Christopher Orr

As you can see there is a lot to see in the Open Source Hub at Jenkins World. To my knowledge these demos are not going to be recorded, so your only opportunities to see them might be at Jenkins World or your local Jenkins Area Meetup!

Register for Jenkins World in September with the code JWFOSS for a 20% discount off your pass.


Scaling Jenkins at Jenkins World 2016


This is a guest post by R. Tyler Croy, who is a long-time contributor to Jenkins and the primary contact for Jenkins project infrastructure. He is also a Jenkins Evangelist at CloudBees, Inc.

Jenkins World

I find the topic of "scaling Jenkins" to be incredibly interesting because, more often than not, scaling Jenkins isn’t just about scaling a single instance but rather scaling an organization and its continuous delivery processes. In many cases when people talk about "scaling Jenkins" they’re talking about "Jenkins as a Service" or "Continuous Delivery as a Service" which introduces a much broader scope, and also more organization-specific requirements, to the problem.

One of my favorite parts of a big conference like Jenkins World is getting to see how other people are solving similar problems at different organizations, in essence: "how the sausage is made." This year’s Jenkins World will be no different, with a number of sessions by developers and engineers from companies leading the way in scaling continuous delivery and Jenkins.

Register for Jenkins World in September with the code JWFOSS for a 20% discount off your pass.

In the realm of "scaling Jenkins," the following sessions stand out to me as "must-attend" for those interested in the space:

September 14th 4:15 PM - 5:00 PM, Exhibit Hall A-1

NPR’s Digital Media team uses Jenkins to build, test and deploy code to various staging and production environments. As the complexity of the software components, environments and tests has grown - both generally and due to our quest to achieve continuous deployment - management of Jenkins has become a challenge. In this talk, we share information about our “JenkinsOps” effort, which has allowed us to automate many of the administrative tasks necessary to manage feature code branches, handle deployments, run tests and configure our environments properly.

— Paul Miles and Grant Dickie of NPR

September 15th 1:30 PM - 2:15 PM, Exhibit Hall C

At Riot Games, we build a lot of software. Come learn how we built an integrated Docker solution using Jenkins that accepts Docker images submitted as build environments by engineers around the company. Our containerized farm now creates over 10,000 containers per week and handles nearly 1,000 jobs at a rate of about 100 jobs per hour. All this is done with readily available, open source Jenkins plugins. We’ll explore lessons learned, best practices and how to scale and build your own system, as well as why we chose to solve the problem this way…and whether or not we succeeded!

— Maxfield F Stewart of Riot Games

September 15th 2:30 PM - 3:15 PM, Great America J

In this talk, we’ll show how to use Jenkins Pipeline together with Docker and Kubernetes to implement a complete end-to-end continuous delivery and continuous improvement system for microservices and monolithic applications using open source software. We’ll demonstrate how to easily create new microservices projects or import existing projects, have them automatically built, system and integration tested, staged and then deployed. Once deployed, we will also see how to manage and update applications using continuous delivery practices along with integrated ChatOps - all completely automated!

— James Strachan of Red Hat

September 15th 2:30 PM - 3:15 PM, Exhibit Hall C

The Jenkins platform can be dynamically scaled by using several Docker cluster and orchestration platforms, using containers to run agents and jobs and also isolating job execution. But which cluster technology should be used? Docker Swarm? Apache Mesos? Kubernetes? How do they compare? All of them can be used to dynamically run jobs inside containers. This talk will cover these main container clusters, outlining the pros and cons of each, the current state of the art of the technologies and Jenkins support. I believe people will be very interested in learning about the multiple options available.

— Carlos Sanchez of CloudBees

September 15th 3:45 PM - 4:30 PM, Exhibit Hall C

How can we do it? We start with some real world results realized by Jenkins users who have built large clusters and review how they got there. Next, we will do experiments scaling some individual sub-components of Jenkins in isolation and see what challenges we will face when integrated. The famous large, distributed systems undoubtedly faced problems scaling - and we can learn from them, too. The result will be recipes for building Jenkins clusters with different scaling capabilities. After all of this, you can build the biggest Jenkins cluster in the world…or maybe just make your own Jenkins cluster more efficient.

— Stephen Connolly of CloudBees

September 15th 3:45 PM - 4:30 PM, Exhibit Hall A-1

This session will highlight how Splunk uses Jenkins to provide an end-to-end solution in the development CI system. Attendees will see how test results are delivered to a Splunk indexer, where they can be analyzed and presented in a variety of ways. This session will also include a live demonstration.

— Bill Houston of Splunk

September 15th 4:45 PM - 5:30 PM, Exhibit Hall C

Last year, we presented our initial investigations and stress testing as we prepared to deploy a large-scale Jenkins installation at Google. Now, with a year of real-world use under our belts, we’ll discuss how our expectations held up, what new issues we encountered and how we have addressed them.

— David Hoover of Google

In addition to these, we will also be hosting a Jenkins World Contributor Summit where scaling-relevant topics such as "Storage Pluggability" will be discussed.

The Jenkins World agenda is packed with even more sessions, so it should be a very informational event for everybody; hope to see you there!

Jenkins World Contributor Summit


Jenkins World

At previous Jenkins User Conferences we have hosted "Contributor Summits" to gather developers and power-users in one room to discuss specific areas of Jenkins, such as Scalability, Pipeline, etc. As part of this year’s Jenkins World we’re hosting another Contributor Summit, to discuss: Blue Ocean, Pipeline and Storage Pluggability.

Contributors to these three areas of the Jenkins ecosystem will be in attendance to present details of their design, requirements, and tentative roadmaps. After the presentations, the afternoon will be "unconference style" which is much more fluid to allow discussions, feedback, and brain-storming around the three focus areas.

The program for the Jenkins World Contributor Summit includes:

  • Updates from the various project officers.

  • A discussion of the Blue Ocean technology stack, overall architecture, and how to develop plugins that integrate with Blue Ocean. Led by Keith Zantow.

  • Presentation on the current status of Pipeline, lessons learned, new changes and the future. Led by Jesse Glick.

  • Overview of "Storage Pluggability", a new scalability-oriented project to revamp the underlying storage mechanisms in Jenkins. Led by Kohsuke Kawaguchi.

I cannot recommend participating in the Contributor Summit enough. I have found previous Summits to be immensely useful for sharing my own thoughts, as well as for hearing new perspectives from the others in attendance.

Our space is limited, however! I encourage you to join us, so please RSVP soon!

Register for Jenkins World in September with the code JWFOSS for a 20% discount off your pass.

Security fixes in Script Security Plugin and Extra Columns Plugin


The Script Security Plugin and the Extra Columns Plugin were updated today to fix medium-severity security vulnerabilities. For detailed information about the security content of these updates, see the security advisory.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

Run Your API Tests Continuously with Jenkins and DHC

This is a guest post by Guillaume Laforge. Well known for his contribution to the Apache Groovy project, Guillaume is also the "Product Ninja and Advocate" of Restlet, a company focusing on Web APIs: with DHC (an API testing client), Restlet Studio (an API designer), APISpark (an API platform in the cloud), and the Restlet Framework open source project for developing APIs.

Modern mobile apps, single-page web sites and applications rely more and more on Web APIs as the nexus of interaction between the frontend and the backend services. Web APIs are also central to third-party integration, when you want to share your services with others, or when you need to consume existing APIs to build your own solution on top of them.

With APIs being a key element of your architecture and the big picture, it’s obviously important to verify that your API functions the way it should, through proper testing. Your framework of choice, regardless of the technology stack or programming language used, will hopefully offer some facilities for testing your code, whether in the form of unit tests or, ideally, integration tests.

Coding Web API tests

From a code perspective, as I said, most languages and frameworks provide approaches to testing APIs built with them. There’s one I want to highlight in particular: AccuREST, which was developed with a DSL (Domain-Specific Language) approach using the Apache Groovy programming language.

To get started, you can have a look at the introduction and the usage guide. If you use the contract DSL, you’ll be able to write highly readable examples of the requests you want to issue against your API, and the assertions that you expect to hold when getting the response from that call. Here’s a concrete example from the documentation:

GroovyDsl.make {
    request {
        method 'POST'
        urlPath('/users') {
            queryParameters {
                parameter 'limit': 100
                parameter 'offset': containing("1")
                parameter 'filter': "email"
            }
        }
        headers {
            header 'Content-Type': 'application/json'
        }
        body '''{ "login" : "john", "name": "John The Contract" }'''
    }
    response {
        status 200
        headers {
            header 'Location': '/users/john'
        }
    }
}

Notice that the response is expected to return a status code 200 OK, and a Location header pointing at /users/john. Indeed, a very readable way to express the requests and responses!

Tooling to test your APIs

From a tooling perspective, there are some interesting tools that can be used to test Web APIs, like Paw (on Macs), Advanced REST client, Postman or Insomnia.

But in this article, I’ll offer a quick look at DHC, a handy visual tool that you can use manually to craft your tests and assertions, and whose test scenarios you can export and integrate into your build and continuous integration pipeline, thanks to Maven and Jenkins.

At the end of this post, you should be able to see the following report in your Jenkins dashboard when visualising the resulting API test execution:

Export a scenario

Introducing DHC

DHC is a Chrome extension that you can install from the Chrome Web Store in your Chrome browser. There’s also an online service available, with some limitations. For the purpose of this article, we’ll use the Chrome extension.

DHC interface

In the main area, you can create your request, define the URL to call, specify the various request headers or parameters, choose the method you want to use, and then click the send button to issue the request.

The left pane is where you’ll be able to see your request history, create and save your projects in the cloud, and set context variables.

The latter is important when testing your Web API, as you’ll be able to insert variables such as {localhost} for testing locally on your machine, or {staging} and {prod} to run your tests in different environments.

In the bottom pane, you have access to the actual raw HTTP exchange, as well as the assertions pane.

Again, a very important pane to look at! With assertions, you’ll be able to ensure that your Web API works as expected. For instance, you can check the status code of the call, or check that the payload contains a certain element, using JSON Path or XPath to go through JSON or XML payloads respectively.

DHC interface

Beyond assertions, what’s also interesting is that you can chain requests together. A request can depend on the outcome of a previous one! For example, in a new request, you could pass a query parameter whose value is taken from some element of the JSON payload of a previously executed request. And by combining assertions, linked requests and context variables, you can create full-blown test scenarios that you can save in the cloud, but also export as a JSON file.

Running a test scenario

To export that test scenario, you can click the little export icon in the bottom left-hand corner, and you’ll be able to select exactly what you want to export:

Export a scenario

Running your Web API tests with Maven

Now things become even more interesting, as we’ll proceed to using Maven and Jenkins! As the saying goes, there’s a Maven plugin for that - for running those Web API tests in your build! Even if your Web API is developed in a technology other than Java, you can still create a small Maven build just for your Web API tests. And the icing on the cake: since the plugin outputs JUnit-friendly test reports, when you configure Jenkins to run this build you’ll be able to see the details of your successful and failed tests, just as you would JUnit’s!

Let’s sketch your Maven POM:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>my-first-api-test</artifactId>
    <version>1.2.3</version>
    <build>
        <plugins>
            <plugin>
                <groupId>com.restlet.dhc</groupId>
                <artifactId>dhc-maven-plugin</artifactId>
                <version>1.1</version>
                <executions>
                    <execution>
                        <phase>test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <file>companies-scenario.json</file>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <pluginRepositories>
        <pluginRepository>
            <id>restlet-maven</id>
            <name>Restlet public Maven repository Release Repository</name>
            <url>http://maven.restlet.com</url>
        </pluginRepository>
    </pluginRepositories>
</project>

Visualizing Web API test executions in Jenkins

Once you’ve configured your Jenkins server to launch the test goal of this Maven project, you’ll be able to see nice test reports for your Web API scenarios, like the screenshot in the introduction of this article!

Next, you can easily run your Web API tests when developers commit changes to the API, or schedule regular builds with Jenkins to monitor an online Web API.
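
To wire this into a Pipeline job, a minimal sketch might look like the following. This is an illustration under a few assumptions: the repository URL is hypothetical, Maven is assumed to be available on the agent’s PATH, and the report path may need adjusting to wherever the plugin actually writes its results.

node {
    // Fetch the Maven project containing the exported DHC scenario (hypothetical URL)
    git 'https://github.com/example/my-first-api-test.git'

    // Run the dhc-maven-plugin, bound to the test phase in the POM above
    sh 'mvn -B test'

    // Publish the JUnit-style reports so Jenkins shows test trends and failure details
    junit 'target/surefire-reports/*.xml'
}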

For more information, be sure to read the tutorial on testing Web APIs with DHC. There are also some more resources like a screencast, as well as the user guide, if you want to learn more. And above all, happy testing!

Introducing a New Way to Define Jenkins Pipelines

This is a guest post by Jenkins World speaker Andrew Bayer, Jenkins developer at CloudBees.

Jenkins World

Over the last couple of years, Pipeline as code has very much become the future of Jenkins - in fact, at this point, I’d say it’s pretty well established as the present of Jenkins. But that doesn’t mean it’s done, let alone that it’s perfect. While many developers enjoy the power and control they get from writing Pipelines using scripting, not everyone feels the same way. A lot of developers want to specify their build as configuration and get on with building software.

Pipeline scripts haven’t been a good way to do that…until now.

With new changes to Jenkins Pipeline, you are now able to define your Pipeline from configuration in your Jenkinsfile by installing the new Pipeline Model Definition plugin. It’s available today for you to try via the update center. Be sure to check the documentation for examples on how to get started for a variety of languages and platforms.

Here’s a quick example based on the plugin’s own Jenkinsfile:

pipeline {
    // Make sure that the tools we need are installed and on the path.
    tools {
        maven "Maven 3.3.9"
        jdk "Oracle JDK 8u40"
    }

    // Run on any executor.
    agent label:""

    // The order that sections are specified doesn't matter - this will still be run
    // after the stages, even though it's specified before the stages.
    postBuild {
        // No matter what the build status is, run these steps. There are other conditions
        // available as well, such as "success", "failed", "unstable", and "changed".
        always {
            archive "target/**/*"
            junit 'target/surefire-reports/*.xml'
        }
    }

    stages {
        // While there's only one stage here, you can specify as many stages as you like!
        stage("build") {
            sh 'mvn clean install -Dmaven.test.failure.ignore=true'
        }
    }
}

It’s still early days for this feature, with a lot of further functionality planned to make it easier and more intuitive to define your Pipelines. All of that functionality lives on top of Pipeline scripting, so we’ll also keep improving Pipeline steps and syntax outside of the model! And perhaps most exciting, the Pipeline model will be used by an in-the-works visual editor that will be part of the Blue Ocean project - while the editor isn’t ready yet, the Pipeline Model Definition plugin will be bundled with the Blue Ocean beta for you to try out.

I’ll be going into all of this and more in my talk on Thursday, September 15th, at 3:45pm at Jenkins World, and showing it off the same day at the lunchtime demo theater. I can’t wait to see you all there and hear what you think of all this!

Andrew will be presenting more of this concept at Jenkins World in September. Register with the code JWFOSS for 20% off your full conference pass.
