
Jenkins 2.0 beta released


We released the Jenkins 2.0 beta earlier today. Download it here and try it!

Besides a number of bug fixes and minor improvements, the following changes are new since the last alpha preview release:

Redesigned "New Item" page

We redesigned the "New Item" page. Item types now have icons, making them more visually distinctive.

Additionally, item types can now define a category they belong to (such as "Project" or "Folder"). Once the complexity of the "New Item" page reaches a certain threshold, the item types will be grouped into these categories, making them easier to find. For now, however, you are unlikely to see these categories, as support for this mechanism still needs to be added in plugins. This is a new API in core, and we invite plugin developers to support it to make Jenkins easier to use for users with a large number of item types. It doesn't even require raising the minimum supported Jenkins version.

Separate configuration page for tools

Once a few dozen plugins are installed, the length and complexity of the Configure Jenkins page makes it unnecessarily difficult to use. To improve this, we are moving the tools configuration (Git, Maven, Gradle, Ant, etc.) out of that page and into the new Global Tools Configuration.

Upgrade notice and plugin installer

The Pipeline plugin suite is a big part of Jenkins 2. Over the past few weeks, open-source plugins adding support for visualization (Pipeline Stage View), automatic GitHub project creation (GitHub Branch Source Plugin) and Bitbucket project creation (Bitbucket Branch Source Plugin) have been released. However, when upgrading from Jenkins 1.x, users weren't given any information about these features.

To address this, users upgrading from Jenkins 1.x will now be shown a banner when they first log into Jenkins as an administrator, offering to install the suite of Pipeline plugins.


Important notice regarding usage statistics


A bug was introduced in Jenkins versions 1.645 and 1.642.2 which caused Jenkins to send anonymous usage statistics, even if the administrator opted out of reporting usage data in the Jenkins web UI.

If you are running one of the affected versions, the best/easiest solution is to upgrade. The bug does not affect Jenkins 1.653 or newer, or Jenkins LTS 1.642.4 or newer.

If you cannot upgrade, it is possible to immediately disable submission of usage statistics by running the following script in "Manage Jenkins » Script Console":

    hudson.model.UsageStatistics.DISABLED = true

This will immediately disable usage data submission until you restart Jenkins. To make this permanent, change your Jenkins startup script so it passes a system property to the java process:

    java -Dhudson.model.UsageStatistics.disabled=true -jar …/jenkins.war

For information on how to do this when using one of the installers/packages, see the installer/package documentation here.

To verify that usage stats submission is disabled, run the following script in "Manage Jenkins » Script Console" and confirm the result is true:

    println hudson.model.UsageStatistics.DISABLED

We have much more information about the issue and our usage statistics process in our wiki.

While we do not consider this a security advisory, if you are a Jenkins administrator we highly recommend subscribing to our jenkinsci-advisories@ mailing list.

March 2016 St. Petersburg Jenkins Meetup Report


On March 10th we held the second Jenkins meetup in Saint Petersburg, Russia. The meetup topic was "Jenkins and Continuous Delivery". We had three talks addressing various aspects of Jenkins usage in this area.


Talks

  1. Introduction slides [ru]

  2. Jenkins 2.0 and Pipeline-as-Code

  3. Continuous Delivery for Documentation

  4. Continuous Delivery with Jenkins at ZeroTurnaround

We also had a long Jenkins afterparty. Starting with the next meetup, we hope to make this part more official.

Acknowledgments

The event was organized with help from Yandex and CloudBees.

More Jenkins meetups

If you want to organize a Jenkins meetup in St. Petersburg or to be a speaker there, please contact us via the Meetup discussions page.

For other areas, check out where Jenkins Area Meetups (JAMs) are located around the world.

Don’t see a JAM in your area? Why not start your own? Find out how.

Jenkins 2.0 Release Candidate available!


Those who fervently watch the jenkinsci-dev@ list, as I do, may have caught Daniel Beck's email today which quietly referenced a significant milestone on the road to 2.0: the first 2.0 release candidate is here!

The release candidate process, in short, is the final stabilization and testing period before the final release of Jenkins 2.0. If you have the cycles to help test, please download the release candidate and give us your feedback as soon as possible!

The release candidate process also means that changes targeting releases after 2.0 can start landing in the master branch, laying the groundwork for 2.1 and beyond.

I pushed the merge to master. So anything targeting 2.1+ can be now proposed in pull requests to that branch.

Anything happening on 2.0 branch will be limited to critical fixes for the 2.0 release specifically.

— Daniel Beck

Compared to the 2.0 beta release, the first release candidate has a number of fixes for issues discovered in the alpha and beta process. Most notable perhaps is the stabilization of a system property which configuration management tools, like Puppet/Chef/Ansible/etc, can use to suppress the user-friendly Getting Started wizard. Since users of those tools have alternative means of ensuring security and correctness of their Jenkins installations, the out-of-the-box experience can be skipped.

Based on our rough timeline, this gives us a couple of weeks to test the release candidates and get ready for a big, exciting release of 2.0 at the end of April!

Automating test runs on hardware with Pipeline as Code


In addition to Jenkins development, for the last 8 years I have been involved in continuous integration for hardware and embedded projects. At JUC2015/London I gave a talk about common automation challenges in this area.

In this blog post I would like to concentrate on Pipeline (formerly known as Workflow), which is a new ecosystem in Jenkins that allows implementing jobs in a domain-specific language. It is in the suggested plugins list in the upcoming Jenkins 2.0 release.

The first time I tried Pipeline, two and a half years ago, it unfortunately did not work for my use-cases at all. I was very disappointed but tried it again a year later. This time, the plugin had become much more stable and useful. It had also attracted more contributors and started evolving more rapidly with the development of plugins extending the Pipeline ecosystem.

Currently, Pipeline is a powerful tool available for Jenkins users to implement a variety of software delivery pipelines in code. I would like to highlight several Pipeline features which may be interesting to Jenkins users working specifically with embedded and hardware projects.

Introduction

In embedded projects it is frequently necessary to run tests on specific hardware peripherals: development boards, prototypes, etc. This may be required for both the software and hardware sides, and especially for products involving both worlds. CI and CD methodologies require continuous integration and system testing, and Jenkins helps here: it is an automation framework which can be adjusted to work reliably with hardware attached to its nodes.

Area challenges

Generally, any peripheral hardware device can be attached to a Jenkins node. Since Jenkins nodes require Java only, almost every development machine can be attached. Below you can find a common connection scheme:

Connecting the external device

After connecting the device, Jenkins jobs can invoke common EDA tools via their command-line interfaces. This can easily be done with Execute shell build steps in free-style projects. Such a testing scheme is commonly affected by the following issues:

  • Nodes with peripherals are shared across several projects. Jenkins must ensure correct access (e.g. by throttling it).

    • In a single free-style project, builds occupy the node for a long period. If you synthesize the design before the run, much of the peripheral utilization time may be wasted.

    • The issue can be solved by one of the concurrency management plugins: Throttle Concurrent Builds, Lockable Resources or Exclusions (a minimal Pipeline sketch follows this list).

  • Test parallelization across multiple nodes requires using multiple projects or Matrix configurations, which causes job chaining again.

  • Hardware infrastructure is usually flaky. If a build fails for any reason, it is hard to diagnose the issue and re-run the project when the failure comes from the hardware.

    • Build Failure Analyzer helps identify the root cause of a build failure (e.g. by parsing build logs).

    • The Conditional Build Step and Flexible Publish plugins allow altering the build flow according to the analysis results.

    • Combining the plugins above is possible, but it makes job configurations extremely large.

  • Tests on hardware peripherals may take a long time. If the infrastructure fails, we may have to restart the run from scratch, so builds should be robust against infrastructure issues, including network failures and Jenkins master restarts.

  • Tests on hardware should be reproducible, so the environment and input parameters should be controlled well.

    • Jenkins supports cleaning workspaces, so it can get rid of temporary files generated by previous runs.

    • Jenkins supports slaves connected via containers (e.g. Docker) or VMs, which allows creating a clean environment for every new run. This is important for third-party tools, which may modify files outside the workspace: the user home directory, temporary files, etc.

    • These environments still need to be connected to hardware peripherals, which may be a serious obstacle for Jenkins admins.
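
One possible way to guard a shared board between pipelines is sketched below. This is a minimal sketch, not from the original post: it assumes the Lockable Resources plugin's lock() Pipeline step is available in your installation, and the resource name is hypothetical (it has to be configured by an administrator).

// Wait for the shared board resource before allocating a node, so no executor is held while waiting
lock('ml509-board') {
  node("linux && ml509") {
    git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
    sh "make all"   // only one build at a time can use the shared board
  }
}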

The classic automation approaches in Jenkins are based on the Free-style and Multi-configuration project types. Links to various articles on this topic are collected on the HW/Embedded solution page on the Jenkins website. Test automation on hardware peripherals has been covered in several publications by Robert Martin, Steve Harris, JL Gray, Gordon McGregor, Martin d’Anjou, and Sarah Woodall. There is also a top-level overview of the classic approaches I gave at JUC2015/London (a bit outdated now).

On the other hand, there are no previous publications addressing Pipeline usage in the embedded area. In this post I want to address this use-case.

Pipeline as Code for test runs on hardware

Pipeline as Code is an approach for describing complex automation flows in software lifecycles: build, delivery, deployment, etc. It is promoted by the Continuous Delivery and DevOps methodologies.

In Jenkins the two most popular plugins are Pipeline and Job DSL. The Job DSL plugin internally generates common freestyle jobs from a script, so its functionality is similar to the classic approaches. Pipeline is fundamentally different, because it provides a new engine that controls flows independently of particular nodes and workspaces, offering a higher level of job description than was previously available in Jenkins.
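
For contrast, here is a minimal Job DSL sketch (not from the original post; the job name is arbitrary) that generates a classic freestyle job from code:

// Job DSL script: generates a freestyle job that clones the repository and runs the tests
job('fpga-tests') {
  scm {
    git('http://github.com/oleg-nenashev/pipeline_hw_samples')
  }
  steps {
    shell('make all')
  }
}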

Below you can find an example of a Pipeline script which runs tests on an FPGA board. The ID of this board comes from the build parameters (fpgaId). In this script we also presume that all nodes have the required tools pre-installed (Xilinx ISE in this case).

// Run on node having my_fpga label
node("linux && ml509") {
  git url:"http://github.com/oleg-nenashev/pipeline_hw_samples"
  sh "make all"
}

But such a scenario could also be implemented in a Free-style project. What do we gain from the Pipeline plugin?

Getting added-value from Pipeline as code

Pipeline provides many added-value features for hardware-based tests. I would like to highlight the following advantages:

  • Robustness against restarts of Jenkins master.

  • Robustness against network disconnects. sh() steps are based on the Durable Task plugin, so Jenkins can safely continue the execution flow once the node reconnects to the master.

  • It’s possible to run tasks on multiple nodes without creating complex flows based on job triggers, copy-artifact steps, etc. This can be achieved via a combination of the parallel() and node() steps (a sketch follows below).

  • Ability to store the shared logic in standalone Pipeline libraries

  • etc.

The first two advantages improve the robustness of builds against infrastructure failures, which is critical for long-running tests on hardware.

The last two advantages address the flexibility of Pipeline flows. There are also plugins for freestyle projects in these areas, but they are not flexible enough.
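
As an illustration of running tasks on multiple nodes, here is a minimal sketch (not from the original post) that runs the same tests on two boards in parallel; the kc705 label is hypothetical:

// Run the same test target on two differently labeled nodes at the same time
parallel(
  "ml509": {
    node("linux && ml509") {
      git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
      sh "make FPGA_TYPE=ml509 tests"
    }
  },
  "kc705": {
    node("linux && kc705") {
      git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
      sh "make FPGA_TYPE=kc705 tests"
    }
  }
)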

Utilizing Pipeline features

The sample Pipeline script above is very simple. We would like to get some added value from Jenkins.

General improvements

Let’s enhance the script using several features provided by Pipeline in order to get stage visualization, report publishing and build notifications.

We also want to minimize the time spent on the node with the attached FPGA board, so we will split the bitfile generation and the subsequent runs across two different nodes: a general-purpose Linux node, and the node with the hardware attached.

You can find the resulting Pipeline script below:

// Synthesize on any node
def imageId = ""
node("linux") {
  stage "Prepare environment"
  git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
  // Construct the bitfile image ID from the commit ID
  sh 'git rev-parse HEAD > GIT_COMMIT'
  imageId = "myprj-${fpgaId}-" + readFile('GIT_COMMIT').take(6)

  stage "Synthesize project"
  sh "make FPGA_TYPE=$fpgaId synthesize_for_fpga"
  /* We archive the bitfile before running the test,
     so it won't be lost if something happens during the FPGA run stage. */
  archive "target/image_${fpgaId}.bit"
  stash includes: "target/image_${fpgaId}.bit", name: 'bitfile'
}

/* Run on a node with the matching FPGA label.
   In this example it means that the Jenkins node has the attached FPGA of this type. */
node("linux && $fpgaId") {
  stage "Blast bitfile"
  git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
  def artifact = 'target/image_' + fpgaId + '.bit'
  echo "Using ${artifact}"
  unstash 'bitfile'
  sh "make FPGA_TYPE=$fpgaId impact"

  /* We run the automatic tests,
     then report test results from the generated JUnit report. */
  stage "Auto Tests"
  sh "make FPGA_TYPE=$fpgaId tests"
  sh "perl scripts/convertToJunit.pl --from=target/test-results/* --to=target/report_${fpgaId}.xml --classPrefix=\"myprj-${fpgaId}.\""
  step([$class: "JUnitResultArchiver", testResults: "target/report_${fpgaId}.xml"])

  stage "Finalization"
  sh "make FPGA_TYPE=$fpgaId flush_fpga"
  hipchatSend("${imageId} testing has been completed")
}

As you can see, the Pipeline script mostly consists of various calls to command-line tools via the sh() step. All EDA tools provide good CLIs, so we do not need special plugins in order to invoke common operations from Jenkins.

The Makefile above is a sample for demo purposes. It implements a set of unrelated routines merged into a single file without dependency declarations. Never write such Makefiles.

It is possible to continue expanding the pipeline in this way. Pipeline Examples contains examples for common cases: build parallelization, code sharing between pipelines, error handling, etc.

Lessons learned

Over the last 2 years I have been using Pipeline for hardware test automation several times. The first attempts were not very successful, but the ecosystem has been evolving rapidly. I feel Pipeline has become a really powerful tool, but several features are still missing. I would like to mention the following ones:

  1. Shared resource management across different pipelines.

  2. Better support of CLI tools.

    • EDA tools frequently need a complex environment, which should be deployed on nodes somehow.

    • Integration with the Custom Tools Plugin seems to be the best option, especially in the case of multiple tool versions (JENKINS-30680).

  3. Pipeline package manager (JENKINS-34186)

    • Since there are almost no plugins for EDA tools in Jenkins, developers need to implement similar tasks in multiple jobs.

    • A common approach is to keep the shared "functions" in libraries (a minimal sketch follows this list).

    • Pipeline Global Library and Pipeline Remote Loader can be used, but they do not provide features like dependency management.

  4. Pipeline debugger (JENKINS-34185)

    • Hardware test runs are very slow, so it is difficult to troubleshoot and fix issues in the Pipeline code if you have to run every build from scratch.

    • There are several features in Pipeline that simplify development, but we still need an IDE-like experience for complex scripts.
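
As a minimal sketch of the library approach mentioned in item 3 (not from the original post): shared functions can be kept in a Groovy file in the repository and brought in with the core load() step. The helper file and its blastBitfile() function are hypothetical, and the loaded script must end with "return this" so that its methods can be called.

node("linux") {
  git url: "http://github.com/oleg-nenashev/pipeline_hw_samples"
  def fpga = load "scripts/fpga_helpers.groovy"  // hypothetical helper script ending with 'return this'
  fpga.blastBitfile("ml509")                     // hypothetical function defined in the helper
}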

Conclusions

Jenkins is a powerful automation framework, which can be used in many areas. Even though Jenkins has no dedicated plugins for test runs on hardware, it provides many general-purpose "building blocks", which allow implementing almost any flow. That’s why Jenkins is so popular in the hardware and embedded areas.

Pipeline as code can greatly simplify the implementation of complex flows in Jenkins. It continues to evolve and extend its support for new use-cases. If you’re developing embedded projects, consider Pipeline as a durable, extensible and versatile means of implementing your automation.

What’s next?

The Jenkins automation server dominates the HW/embedded area, but unfortunately there is not much experience sharing for these use-cases. The Jenkins community encourages everybody to share their experience in this area by writing docs and articles for the Jenkins website and other resources.

This is just the first blog post on this topic. I am planning to provide more examples of Pipeline usage for embedded and hardware tests in future posts. The next post will be about concurrency and shared resource management in Pipelines.

I am also going to talk about running tests on hardware at the upcoming Automotive event in Stuttgart on April 26th. This event is being held by CloudBees, but there will be several talks addressing open-source Jenkins as well.

If you want to share your experience with Jenkins in the hardware/embedded areas, consider submitting a talk for the Jenkins World conference, or join/organize a Jenkins Area Meetup in your city. There is also a Jenkins Online Meetup.

Jenkins Community Survey Results

This is a guest post by Brian Dawson at CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins.


Last fall CloudBees asked attendees at the Jenkins User Conference – US West (JUC), and others in the Jenkins community, to take a survey. Almost 250 people did – and thanks to their input, we have results that provide interesting insights into how Jenkins is being used.

Back in 2012, at the time of the last community survey, 83% of respondents felt that Jenkins was mission-critical. By 2015, the percentage saying that Jenkins was mission-critical was 92%. Additionally, echoing the importance of Jenkins, 89% of respondents said their use of Jenkins had increased over the last year, while 11% said it had stayed the same. 0% said that it had decreased.

The trend in the industry over the last couple of years has been to adopt continuous delivery (CD), thus pushing automation further down the pipeline – from development all the way into production. Jenkins, being an automation engine applicable to any phase of the software delivery lifecycle, readily supports this trend. Jenkins' extensible architecture and unparalleled plugin ecosystem enable integration with, and orchestration of, practically any tool in any phase of software delivery.

The trend towards adoption of CD is clearly reflected amongst the community: 59% of respondents are using Jenkins for continuous integration (CI), but an additional 30% have extended CI into CD and are manually deploying code to production. Finally, 11% are practicing continuous deployment – they have extended CI to CD and are deploying code automatically into production.

Another trend tied to the adoption of CD and DevOps is the frequent deployment of incremental releases to production. 26% of those respondents using continuous delivery practices are deploying code at least once per day. Another 37% are deploying code at least once per week.

In keeping with the move to CD, 30% of survey takers are already using the relatively new Pipeline plugin to automate their software delivery pipelines. Of those not using the Pipeline plugin, 79% plan to adopt it in the next 12 months.

Survey respondents are also using Jenkins for many different activities. 97% of survey takers use it for "build" – no surprise, since that is where Jenkins got its start - but 58% now also use it for their deployment.


When the 2012 community survey was conducted, container technology was not as well understood as it is today, and many didn’t know what a “Docker” was. A short four years later, 96% of survey respondents who use Linux containers are using Docker. Container technology has seen impressive adoption and arguably is revolutionizing the way application infrastructure is delivered. When coupled with Jenkins as an automation engine, containers help accelerate software delivery by providing rapid access to lightweight environments. The Jenkins community has recognized and embraced the power of containers by providing plugins for Docker and Kubernetes.

The Jenkins improvements which survey respondents desired the most were quality/timely bug fixes, a better UI and more documentation/examples. Interestingly, Jenkins 2.0, which is just about to officially launch, provides UI improvements, and the new Jenkins.io website provides improved, centralized documentation.


Finally, the respondents' favorite Star Wars character was R2-D2, followed by Obi-Wan and Darth Vader. Yoda and Han Solo also got a fair number of votes. The votes for Jar-Jar Binks and Jabba the Hutt left us puzzled. Notably, BB-8 got a write-in vote despite the fact that the new Star Wars movie hadn’t been released yet.

As to where the community is headed, our prediction is that by the next Jenkins Community Survey:

  • More Jenkins users will have transitioned from just continuous integration to continuous delivery, with some even practicing continuous deployment

  • Pipeline plugin adoption and improvements will continue, leading to pipeline-as-code becoming an essential solution for automating the software (and infrastructure) delivery process

  • There will be a significant increase in use of the Docker plugin to support elastic Jenkins infrastructure and continuous delivery of containers using software development best practices

  • BB-8 will be the next favorite Star Wars character! <3


See you at Jenkins World, September 13-15, in Santa Clara, California! Register now for the largest Jenkins event on the planet in 2016 – and get the Early Bird discount. The Call for Papers is still open – so submit a talk and share your knowledge about Jenkins with the community.

Google Summer of Code. Call for Mentors


As you probably know, the Jenkins project has been accepted to Google Summer of Code 2016.

Over the last month we have been working with students to discuss their project ideas and review their application drafts. Thanks again to all students and mentors for your hard work during about ten office hours and dozens of other calls and chats!

Current status

  • We have successfully handled the student application period

  • We have received a bunch of good project proposals (mentors cannot disclose the number)

  • We have done the preliminary filtering of applications

  • GSoC mentors and organization admins have prepared the project slot application draft

Currently we are looking for mentors. We have the minimum required number for the current project slot application plan, but additional mentors would allow us to share the load and provide more expertise to students.

If you want to be a mentor:

  1. Check out mentor requirements here.

  2. Check out the project ideas here.

    • Student application period is finished, so it is too late to propose project ideas for this year

    • You can join the mentor team for one of the mentioned projects

    • Hot areas: UI improvements, Fingerprints, External Workspace Manager

  3. Contact Google GSoC admins via jenkinsci-gsoc-org@googlegroups.com

Run Your API Tests Continuously with Jenkins and DHC

This is a guest post by Guillaume Laforge. Well known for his contribution to the Apache Groovy project, Guillaume is also the "Product Ninja and Advocate" of Restlet, a company focusing on Web APIs: with DHC (an API testing client), Restlet Studio (an API designer), APISpark (an API platform in the cloud), and the Restlet Framework open source project for developing APIs.

Modern mobile apps, single-page web sites and applications rely more and more on Web APIs as the nexus of interaction between the frontend and the backend services. Web APIs are also central to third-party integration, when you want to share your services with others, or when you need to consume existing APIs to build your own solution on top of them.

With APIs being a key element of your architecture and the big picture, it’s obviously important to verify that the API functions the way it should, thanks to proper testing. Your framework of choice, regardless of the technology stack or programming language used, will hopefully offer some facilities for testing your code, whether in the form of unit tests or, ideally, integration tests.

Coding Web API tests

From a code perspective, as I said, most languages and frameworks provide approaches to testing APIs built with them. There’s one I want to highlight in particular, developed with a DSL (Domain-Specific Language) approach using the Apache Groovy programming language: AccuREST.

To get started, you can have a look at the introduction, and the usage guide. If you use the contract DSL, you’ll be able to write highly readable examples of requests you want to issue against your API, and the assertions that you expect to be true when getting the response from that call. Here’s a concrete example from the documentation:

GroovyDsl.make {
    request {
        method 'POST'
        urlPath('/users') {
            queryParameters {
                parameter 'limit': 100
                parameter 'offset': containing("1")
                parameter 'filter': "email"
            }
        }
        headers {
            header 'Content-Type': 'application/json'
        }
        body '''{ "login" : "john", "name": "John The Contract" }'''
    }
    response {
        status 200
        headers {
            header 'Location': '/users/john'
        }
    }
}

Notice that the response is expected to return a status code 200 OK, and a Location header pointing at /users/john. Indeed, a very readable way to express the requests and responses!

Tooling to test your APIs

From a tooling perspective, there are some interesting tools that can be used to test Web APIs, like Paw (on Macs), Advanced REST client, Postman or Insomnia.

But in this article, I’ll offer a quick look at DHC, a handy visual tool that you can use manually to craft your tests and assertions, and whose test scenarios you can export and integrate into your build and continuous integration pipeline, thanks to Maven and Jenkins.

At the end of this post, you should be able to see the following reporting in your Jenkins dashboard, when visualising the resulting API test execution:

Export a scenario

Introducing DHC

DHC is a Chrome extension that you can install from the Chrome Web Store in your Chrome browser. There’s also an online service available, with some limitations. For the purposes of this article, we’ll use the Chrome extension.

DHC interface

In the main area, you can create your request, define the URL to call, specify the various request headers or parameters, choose the method you want to use, and then click the Send button to issue the request.

The left pane is where you’ll be able to see your request history, create and save your project in the cloud, or set context variables.

The latter are important when testing your Web API, as you’ll be able to insert variables such as {localhost} for testing locally on your machine, or {staging} and {prod} to run your tests in different environments.

In the bottom pane, you have access to the actual raw HTTP exchange, as well as the assertions pane.

Again, a very important pane to look at! With assertions, you’ll be able to ensure that your Web API works as expected. For instance, you can check the status code of the call, or check that the payload contains a certain element, using JSON Path or XPath to traverse the JSON or XML payload respectively.

DHC interface

Beyond assertions, what’s also interesting is that you can chain requests together: a request can depend on the outcome of a previous request. For example, in a new request, you could pass a query parameter whose value is taken from an element of the JSON payload of a previously executed request. By combining assertions, linked requests and context variables, you can create full-blown test scenarios, which you can save in the cloud or export as a JSON file.

Running a test scenario

To export that test scenario, you can click on the little export icon in the bottom left hand corner, and you’ll be able to select exactly what you want to export:

Export a scenario

Running your Web API tests with Maven

Now things get even more interesting, as we move on to Maven and Jenkins! As the saying goes, there’s a Maven plugin for that, in this case for running those Web API tests as part of your build. Even if your Web API is developed in a technology other than Java, you can still create a small Maven build just for your Web API tests. And the icing on the cake: when you configure Jenkins to run this build, since the plugin outputs JUnit-friendly test reports, you’ll be able to see the details of your successful and failed tests just as you would JUnit’s!

Let’s sketch your Maven POM:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-first-api-test</artifactId>
  <version>1.2.3</version>
  <build>
    <plugins>
      <plugin>
        <groupId>com.restlet.dhc</groupId>
        <artifactId>dhc-maven-plugin</artifactId>
        <version>1.1</version>
        <executions>
          <execution>
            <phase>test</phase>
            <goals>
              <goal>test</goal>
            </goals>
            <configuration>
              <file>companies-scenario.json</file>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <pluginRepositories>
    <pluginRepository>
      <id>restlet-maven</id>
      <name>Restlet public Maven repository Release Repository</name>
      <url>http://maven.restlet.com</url>
    </pluginRepository>
  </pluginRepositories>
</project>

Visualizing Web API test executions in Jenkins

Once you’ve configured your Jenkins server to launch the test goal of this Maven project, you’ll be able to see nice test reports for your Web API scenarios, like the screenshot in the introduction of this article!
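
If you drive the job with a Pipeline script instead of a freestyle Maven build, a minimal sketch could look like the following. This is not from the article: it assumes a Maven installation named M3 is configured in Jenkins, uses a hypothetical repository URL, and guesses at where the plugin writes its JUnit-style reports.

node {
    git url: 'https://github.com/example/my-first-api-test.git'  // hypothetical repository holding the POM above
    def mvnHome = tool 'M3'                                      // assumes a Maven tool named 'M3' is configured
    sh "${mvnHome}/bin/mvn -B test"
    // The report location is an assumption; adjust it to wherever the DHC Maven plugin writes its XML reports
    step([$class: 'JUnitResultArchiver', testResults: '**/target/**/*.xml'])
}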

Next, you can easily run your Web API tests when developers commit changes to the API, or schedule regular builds with Jenkins to monitor an online Web API.

For more information, be sure to read the tutorial on testing Web APIs with DHC. There are also some more resources, like a screencast and the user guide, if you want to learn more. And above all, happy testing!


Security fixes in Script Security Plugin and Extra Columns Plugin


The Script Security Plugin and the Extra Columns Plugin were updated today to fix medium-severity security vulnerabilities. For detailed information about the security content of these updates, see the security advisory.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

Replay a Pipeline with script edits


This is a cross-post of an article authored by Pipeline plugin maintainer Jesse Glick on the CloudBees blog.

For those of you not checking your Updates tab obsessively, Pipeline 1.14 [up to 2.1 now] was released a couple of weeks ago and I wanted to highlight the major feature in this release: JENKINS-32727, or replay. Some folks writing "Jenkinsfiles" in the field had grumbled that it was awkward to develop the script incrementally, especially compared to jobs using inline scripts stored in the Jenkins job configuration: to try a change to the script, you had to edit the Jenkinsfile in SCM, commit it (perhaps to a branch), and then go back to Jenkins to follow the output. Now this is a little easier. If you have a Pipeline build which did not proceed exactly as you expected, for reasons having to do with Jenkins itself (say, inability to find & publish test results, as opposed to test failures you could reproduce locally), try clicking the Replay link in the build’s sidebar. The quickest way to try this for yourself is to run the stock CD demo in its latest release:

$ docker run --rm -p 2222:2222 -p 8080:8080 -p 8081:8081 -p 9418:9418 -ti jenkinsci/workflow-demo:1.14-3

When you see the page Replay #1, you are shown two (Groovy) editor boxes: one for the main Jenkinsfile, one for a library script it loaded (servers.groovy, introduced to help demonstrate this feature). You can make edits to either or both. For example, the original demo allocates a temporary web application with a random name like 9c89e9aa-6ca2-431c-a04a-6599e81827ac for the duration of the functional tests. Perhaps you wish to prefix the application name with tmp- to make it obvious to anyone encountering the Jetty index page that these URLs are transient. So in the second text area, find the line

def id = UUID.randomUUID().toString()

and change it to read

def id = "tmp-${UUID.randomUUID()}"

then click Run. In the new build’s log you will now see

Replayed #1

and later something like

… test -Durl=http://localhost:8081/tmp-812725bb-74c6-41dc-859e-7d9896b938c3/ …

with the improved URL format. Like the result? You will want to make it permanent, so jump to the second build’s index page (http://localhost:8080/job/cd/branch/master/2/), where you will see a note that this build "Replayed #1 (diff)". If you click on diff you will see:

--- old/Script1
+++ new/Script1
@@ -8,7 +8,7 @@
 }
 
 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "tmp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id

so you know exactly what you changed from the last-saved version. In fact, if you replay #2 and change tmp to temp in the loaded script, the diff view for #3 will show the diff from the first build, i.e. the aggregate diff:

--- old/Script1
+++ new/Script1
@@ -8,7 +8,7 @@
 }
 
 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "temp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id

At this point you could touch up the patch to refer to servers.groovy (JENKINS-31838), git apply it to a clone of your repository, and commit. But why go to the trouble of editing Groovy in the Jenkins web UI and then manually copying changes back to your IDE, when you could stay in your preferred development environment from the start?

$ git clone git://localhost/repo
Cloning into 'repo'...
remote: Counting objects: 23, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 23 (delta 1), reused 0 (delta 0)
Receiving objects: 100% (23/23), done.
Resolving deltas: 100% (1/1), done.
Checking connectivity... done.
$ cd repo
$ $EDITOR servers.groovy
# make the same edit as previously described
$ git diff
diff --git a/servers.groovy b/servers.groovy
index 562d92e..63ea8d6 100644
--- a/servers.groovy
+++ b/servers.groovy
@@ -8,7 +8,7 @@ def undeploy(id) {
 }

 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "tmp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id
$ ssh -p 2222 -o StrictHostKeyChecking=no localhost replay-pipeline cd/master -s Script1 < servers.groovy
Warning: Permanently added '[localhost]:2222' (RSA) to the list of known hosts.
# follow progress in Jenkins (see JENKINS-33438)
$ git checkout -b webapp-naming
M                                                                              servers.groovy
Switched to a new branch 'webapp-naming'
$ git commit -a -m 'Adjusted transient webapp name.'
[webapp-naming …] Adjusted transient webapp name.
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git push origin webapp-naming
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 330 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To git://localhost/repo
 * [new branch]      webapp-naming -> webapp-naming

Using the replay-pipeline CLI command (in this example via SSH) you can prepare, test, and commit changes to your Pipeline script code without copying anything to or from a browser. That is all for now. Enjoy!

Registration is Open for Jenkins World 2016!

This is a guest post by Alyssa Tong. Alyssa works for CloudBees, helping to organize Jenkins community events around the world.


Jenkins World 2016 will be the largest gathering of Jenkins users in the world. This event will bring together Jenkins experts, continuous delivery thought leaders and the ecosystem offering complementary technologies for Jenkins. Join us September 13-15, 2016 in Santa Clara, California to learn and explore, network face-to-face and help shape the next evolution of Jenkins development and solutions for DevOps.

Registration for Jenkins World 2016 is now live. Take advantage of the Super Early Bird rate of $399 (available until July 1st).

And don’t forget, the Call for Papers will be ending on May 1st. That’s 2.5 short weeks left to get your proposal(s) in. We anxiously await your amazing stories.

The Need For Jenkins Pipeline


This is a cross-post of an article authored by Viktor Farcic on the CloudBees blog. Viktor is also the author of The DevOps 2.0 Toolkit, which explores Jenkins, the Pipeline plugin, and the ecosystem around it in much more detail.


Over the years, Jenkins has become the undisputed ruler among continuous integration (CI), delivery and deployment (CD) tools. It, in a way, defined the CI/CD processes we use today. As a result of its leadership, many other products have tried to overthrow it from its position. Among others, we got Bamboo and Team City attempting to get a piece of the market. At the same time, new products emerged with a service approach (as opposed to on-premise). Some of them are Travis, CircleCI and Shippable. Be that as it may, none managed to get even close to Jenkins' adoption. Today, depending on the source we use, Jenkins holds between 50-70% of the whole CI/CD tools market. The reason behind such a high percentage is its dedication to open source principles set from the very beginning by Kohsuke Kawaguchi. Those same principles were the reason he forked Jenkins from Hudson. The community behind the project, as well as commercial entities behind enterprise versions, are continuously improving the way it works and adding new features and capabilities. They are redefining not only the way Jenkins behaves but also the CI/CD practices in a much broader sense. One of those new features is the Jenkins Pipeline plugin. Before we dive into it, let us take a step back and discuss the reasons that led us to initiate the move away from Freestyle jobs and towards the Pipeline.

The Need for Change

Over time, Jenkins, like most other self-hosted CI/CD tools, tends to accumulate a vast number of jobs. Having a lot of them causes quite an increase in maintenance cost. Maintaining ten jobs is easy. It becomes a bit harder (but still bearable) to manage a hundred. When the number of jobs increases to hundreds or even thousands, managing them becomes very tedious and time demanding.

If you are not proficient with Jenkins (or other CI/CD tools) or you do not work for a big project, you might think that hundreds of jobs is excessive. The truth is that such a number is reached over a relatively short period when teams are practicing continuous delivery or deployment. Let’s say that an average CD flow has the following set of tasks that should be run on each commit: building, pre-deployment testing, deployment to a staging environment, post-deployment testing and deployment to production. That’s five groups of tasks that are often divided into, at least, five separate Jenkins jobs. In reality, there are often more than five jobs for a single CD flow, but let us keep it an optimistic estimate. How many different CD flows does a medium sized company have? With twenty, we are already reaching a three digits number. That’s quite a lot of jobs to cope with even though the estimates we used are too optimistic for all but the smallest entities.

Now, imagine that we need to change all those jobs from, let’s say, Maven to Gradle. We can choose to start modifying them through the Jenkins UI, but that takes too much time. We can apply changes directly to Jenkins XML files that represent those jobs but that is too complicated and error prone. Besides, unless we write a script that will do the modifications for us, we would probably not save much time with this approach. There are quite a few plugins that can help us to apply changes to multiple jobs at once, but none of them is truly successful (at least among free plugins). They all suffer from one deficiency or another. The problem is not whether we have the tools to perform massive changes to our jobs, but whether jobs are defined in a way that they can be easily maintained.

Besides the sheer number of Jenkins jobs, another critical Jenkins' pain point is centralization. While having everything in one location provides a lot of benefits (visibility, reporting and so on), it also poses quite a few difficulties. Since the emergence of agile methodologies, there’s been a huge movement towards self-sufficient teams. Instead of horizontal organization with separate development, testing, infrastructure, operations and other groups, more and more companies are moving (or already moved) towards self-sufficient teams organized vertically. As a result, having one centralized place that defines all the CD flows becomes a liability and often impedes us from splitting teams vertically based on projects. Members of a team should be able to collaborate effectively without too much reliance on other teams or departments. Translated to CD needs, that means that each team should be able to define the deployment flow of the application they are developing.

Finally, Jenkins, like many other tools, relies heavily on its UI. While that is welcome and needed as a way to get a visual overview through dashboards and reports, it is suboptimal as a way to define the delivery and deployment flows. Jenkins originated in an era when it was fashionable to use UIs for everything. If you worked in this industry long enough you probably saw the swarm of tools that rely completely on UIs, drag & drop operations and a lot of forms that should be filled. As a result, we got tools that produce artifacts that cannot be easily stored in a code repository and are hard to reason with when anything but simple operations are to be performed. Things changed since then, and now we know that many things (deployment flow being one of them) are much easier to express through code. That can be observed when, for example, we try to define a complex flow through many Jenkins jobs. When deployment complexity requires conditional executions and some kind of a simple intelligence that depends on results of different steps, chained jobs are truly complicated and often impossible to create.

All things considered, the major pain points Jenkins had until recently are as follows.

  • Tendency to create a vast number of jobs

  • Relatively hard and costly maintenance

  • Centralization of everything

  • Lack of powerful and easy ways to specify deployment flow through code

This list is, by no means, unique to Jenkins. Other CI/CD tools have at least one of the same problems or suffer from deficiencies that Jenkins solved a long time ago. Since the focus of this article is Jenkins, I won’t dive into a comparison between the CI/CD tools.

Luckily, all those, and many other deficiencies are now a thing of the past. With the emergence of the Pipeline plugin and many others that were created on top of it, Jenkins entered a new era and proved itself as a dominant player in the CI/CD market. A whole new ecosystem was born, and the door was opened for very exciting possibilities in the future.

Before we dive into the Jenkins Pipeline and the toolset that surrounds it, let us quickly go through the needs of a modern CD flow.

Continuous Delivery or Deployment Flow with Jenkins

When embarking on the CD journey for the first time, newcomers tend to think that the tasks that constitute the flow are straightforward and linear. While that might be true with small projects, in most cases things are much more complicated than that. You might think that the flow consists of building, testing and deployment, and that the approach is linear and follows the all-or-nothing rule. Build invokes testing and testing invokes deployment. If one of them fails, the developer gets a notification, fixes the problem and commits the code that will initiate the repetition of the process.

[Diagram: a simple, linear CD flow]

In most instances, the process is far more complex. There are many tasks to run, and each of them might produce a failure. In some cases, a failure should only stop the process. However, more often than not, some additional logic should be executed as part of the after-failure cleanup. For example, what happens if post-deployment tests fail after a new release was deployed to production? We cannot just stop the flow and declare the build a failure. We might need to revert to the previous release, rollback the proxy, de-register the service and so on. I won’t go into many examples of situations that require complex flow with many tasks, conditionals that depend on results, parallel execution and so on. Instead, I’ll share a diagram of one of the flows I worked on.

[Diagram: a complex CD flow with parallel tasks and failure handling]

Some tasks are run in one of the testing servers (yellow) while others are run on the production cluster (blue). While any task might produce an error, in some cases such an outcome triggers a separate set of tasks. Some parts of the flow are not linear and depend on task results. Some tasks should be executed in parallel to improve the overall time required to run them. The list goes on and on. Please note that this discussion is not about the best way to execute the deployment flow but only a demonstration that the complexity can be, often, very high and cannot be solved by a simple chaining of Freestyle jobs. Even in cases when such chaining is possible, the maintenance cost tends to be very high.

One of the CD objectives we are unable to solve through chained jobs, or which has proved difficult to implement, is conditional logic. In many cases, it is not enough to simply chain jobs in a linear fashion. Often, we do not want only to create a job A that, once it’s finished running, executes job B, which, in turn, invokes job C. In real-world situations, things are more complicated than that. We want to run some tasks (let’s call them job A), and, depending on the result, invoke jobs B1 or B2, then run C1, C2 and C3 in parallel, and, finally, execute job D only when all C jobs have finished successfully. If this were a program or a script, we would have no problem accomplishing something like that, since all modern programming languages allow us to employ conditional logic in a simple and efficient way. Chained Jenkins jobs, created through the UI, make it difficult to create even simple conditional logic. Truth be told, some plugins can help us with conditional logic. We have Conditional Build Steps, Parameterised Trigger, Promotions and others. However, one of the major issues with these plugins is configuration. It tends to be scattered across multiple locations, hard to maintain and with little visibility.
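
To make the contrast concrete, here is a minimal Pipeline sketch (not from the original article) of the flow described above: job A, then B1 or B2 depending on its result, then C1, C2 and C3 in parallel, and finally D. The echo steps stand in for the real work of each job.

node {
    def okA = true
    try {
        echo 'running job A'              // stand-in for the real tasks of job A
    } catch (err) {
        okA = false
    }
    if (okA) {
        echo 'running job B1'             // runs when A succeeded
    } else {
        echo 'running job B2'             // runs when A failed
    }
    parallel(
        C1: { echo 'running job C1' },
        C2: { echo 'running job C2' },
        C3: { echo 'running job C3' }
    )
    echo 'running job D'                  // reached only when all C branches succeed
}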

Resource allocation needs careful thought and is often more complicated than a simple decision to run a job on a predefined slave. There are cases when the slave should be chosen dynamically, the workspace should be defined at runtime, and cleanup depends on the result of some action.

While a continuous deployment process means that the whole pipeline ends with deployment to production, many businesses are not ready for such a goal or have use-cases when it is not appropriate. Any other process with a smaller scope, be it continuous delivery or continuous integration, often requires some human interaction. A step in the pipeline might need someone’s confirmation, a failed process might require a manual input about reasons for the failure, and so on. The requirement for human interaction should be an integral part of the pipeline and should allow us to pause, inspect and resume the flow. At least, until we reach the true continuous deployment stage.
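
Pipeline addresses this need with the core input step; here is a minimal sketch (not from the original article):

node {
    echo 'build, test and deploy to staging'   // stand-in for the earlier pipeline stages
}
// Ask for confirmation outside the node block so no executor is held while waiting for a human
input message: 'Deploy this build to production?'
node {
    echo 'deploy to production'                // stand-in for the production deployment steps
}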

The industry is, slowly, moving towards microservices architectures. However, the transformation process might take a long time to be adopted, and even more to be implemented. Until then, we are stuck with monolithic applications that often require a long time for deployment pipelines to be fully executed. It is not uncommon for them to run for a couple of hours, or even days. In such cases, failure of the process, or the whole node the process is running on, should not mean that everything needs to be repeated. We should have a mechanism to continue the flow from defined checkpoints, thus avoiding costly repetition, potential delays and additional costs. That is not to say that long-running deployment flows are appropriate or recommended. A well-designed CD process should run within minutes, if not seconds. However, such a process requires not only the flow to be designed well, but also the architecture of our applications to be changed. Since, in many cases, that does not seem to be a viable option, resumable points of the flow are a time saver.

All those needs, and many others, needed to be addressed in Jenkins if it was to continue being a dominant CI/CD tool. Fortunately, the developers behind the project understood those needs and, as a result, we got the Jenkins Pipeline plugin. The future of Jenkins lies in a transition from Freestyle chained jobs to a single pipeline expressed as code. Modern delivery flows cannot be expressed and easily maintained through UI drag 'n drop features, nor through chained jobs. Nor can they be defined through the YAML definitions proposed by some of the newer tools (which I’m not going to name). We need to go back to code as a primary way to define not only the applications and services we are developing but almost everything else. Many other types of tools adopted that approach, and it was time for us to get that option for CI/CD processes as well.

Making your own DSL with plugins, written in Pipeline script


In this post I will show how you can make your own DSL extensions and distribute them as a plugin, using Pipeline Script.

A quick refresher

Pipeline has a well-kept secret: the ability to add your own DSL elements. Pipeline is itself a DSL, but you can extend it.

There are two main reasons I can think of why you may want to do this:

  1. You want to reduce boilerplate by encapsulating common snippets/things you do in one DSL statement.

  2. You want to provide a DSL that prescribes the way your builds work, uniform across your organisation's Jenkinsfiles.

A DSL could look as simple as

acmeBuild {
    script = "./bin/ci"
    environment = "nginx"
    team = "evil-devs"
    deployBranch = "production"
}

This could be the entirety of your Jenkinsfile!

In this "simple" example, it could actually be doing a multi-stage build with retries, in a specified Docker container, that deploys only from the production branch. Detailed notifications are sent to the right team on important events (as defined by your org).

Traditionally this is done via the global library. You take a snippet of Pipeline code you want to make into a DSL, and drop it in the git repo that is baked into Jenkins.

A great trivial example is this:

jenkinsPlugin {
    name = 'git'
}

This is enabled by pushing the following into vars/jenkinsPlugin.groovy in that repository. The name of the file is the name of the DSL expression you use in the Jenkinsfile:

def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    // This is where the magic happens - put your pipeline snippets in here, get variables from config.
    node {
        git url: "https://github.com/jenkinsci/${config.name}-plugin.git"
        sh "mvn install"
        mail to: "...", subject: "${config.name} plugin build", body: "..."
    }
}

You can imagine many more pipelines, or even archetypes/templates of pipelines you could do in this way, providing a really easy Jenkinsfile syntax for your users.

Making it a plugin

Using the global DSL library is handy if you have a single Jenkins, or want to keep the DSLs local to a Jenkins instance. But what if you want to distribute it around your org, or perhaps it is general-purpose enough that you want to share it with the world?

Well this is possible, by wrapping it in a plugin. You use the same pipeline snippet tricks you use in the global lib, but put it in the dsl directory of a plugin.

My simple build plugin shows how it is done. To make your own plugin:

  1. Create a new plugin project, either fork the simple build one, or add a dependency to it in your pom.xml / build.gradle file

  2. Put your DSL in the resources directory in a similar fashion to this (note the "package dsl" declaration at the top)

  3. Create the equivalent extension that just points to the DSL by name, like this. This is mostly boilerplate, but it tells Jenkins there is a GlobalVariable extension available when Pipelines run.

  4. Deploy it to a Jenkins Update Center to share it with your org, or with everyone!

The advantage of delivering this DSL as a plugin is that it has a version (you can also put tests in there), and it is distributable just like any other plugin.

For the more advanced, Andrew Bayer has a Simple Travis Runner plugin that interprets and runs travis.yml files, which is also implemented in Pipeline script.

So, approximately, you can build plugins for Pipeline that extend Pipeline, in Pipeline script (with a teeny bit of boilerplate).

Enjoy!

Pipeline 2.x plugins


Those of you who routinely apply all plugin updates may already have noticed that the version numbers of the plugins in the Pipeline suite have switched to a 2.x scheme. Besides aligning better with the upcoming Jenkins 2.0 core release, the plugins are now being released with independent lifecycles.

“Pipeline 1.15” (the last in the 1.x line) included simultaneous releases of a dozen or so plugins with the 1.15 version number (and 1.15+ dependencies on each other). All these plugins were built out of a single workflow-plugin repository. While that was convenient in the early days for prototyping wide-ranging changes, it has become an encumbrance now that the Pipeline code is fairly mature, and more people are experimenting with additions and patches.

As of 2.0, all the plugins in the system live in their own repositories on GitHub—named to match the plugin code name, which in most cases uses the historical workflow term, so for example workflow-job-plugin. Some complex steps were moved into their own plugins, such as pipeline-build-step-plugin. The 1.x changelog is closed; now each plugin keeps a changelog in its own wiki, for example here for the Pipeline Job plugin.

Among other benefits, this change makes it easier to cut new plugin releases for even minor bug fixes or enhancements, or for developers to experiment with patches to certain plugins. It also opens the door for the “aggregator” plugin (called simply Pipeline) to pull in dependencies on other plugins that seem broadly valuable, like the stage view.

The original repository has been renamed pipeline-plugin and for now still holds some documentation, which might later be moved to jenkins.io.

You need not do anything special to “move” to the 2.x line; 1.642.x and later users can just accept all Pipeline-related plugin updates. Note that if you update Pipeline Supporting APIs you must update Pipeline, or at least install/update some related plugins as noted in the wiki.

Possible Jenkins Project Infrastructure Compromise


Last week, the infrastructure team identified the potential compromise of a key infrastructure machine. The potential compromise could have stemmed from what could be categorized as an attempt to target contributors with elevated access. Unfortunately, when facing the uncertainty of a potential compromise, the safest option is to treat it as if it were an actual incident and react accordingly. The machine in question had access to binaries published to our primary and secondary mirrors, and to contributor account information.

Since this machine is not the source of truth for Jenkins binaries, we verified that the files distributed to Jenkins users (plugins, packages, etc.) were not tampered with. We cannot, however, verify that contributor account information was not accessed or tampered with, and as a proactive measure we are issuing a password reset for all contributor accounts. We have also spent significant effort migrating all key services off of the potentially compromised machine to (virtual) hardware so the machine can be re-imaged or decommissioned entirely.

What you should do now

If you have ever filed an issue in JIRA, edited a wiki page, released a plugin or otherwise created an account via the Jenkins website, you have a Jenkins community account. You should be receiving a password reset email shortly, but if you have re-used your Jenkins account password with other services we strongly encourage you to update your passwords with those other services. If you’re not already using one, we also encourage the use of a password manager for generating and managing service-specific passwords.

The generated password sent out is temporary and will expire if you do not use it to update your account. Once it expires, you will need to recover your account with the password reset in the accounts app.

This does not apply to your own Jenkins installation, or any account that you may use to log into it. If you do not have a Jenkins community account, there is no action you need to take.

What we’re doing to prevent events like this in the future

As stated above, the potentially compromised machine is being removed from our infrastructure. That helps address the immediate problem but doesn’t put guarantees in place for the future. To help prevent potential issues in the future we’re taking the following actions:

  1. Incorporating more security policy enforcement into our Puppet-driven infrastructure. Without a configuration management tool enforcing a given state for some legacy services, user error and manual misconfigurations can adversely affect project security. As of right now, all key services are managed by Puppet.

  2. Balkanizing our machine and permissions model more. The machine affected was literally the first independent (outside of Sun) piece of project infrastructure and like many legacy systems, it grew to host a multitude of services. We are rapidly evolving away from that model with increasing levels of user and host separation for project services.

  3. In a similar vein, we have also introduced a trusted zone in our infrastructure which is not routable on the public internet, where sensitive operations, such as generating update center information, can be managed and secured more effectively.

  4. We are performing an infrastructure permissions audit. Some portions of our infrastructure are 6+ years old and have had contributors come and go. Any inactive users with unnecessarily elevated permissions in the project infrastructure will have those permissions revoked.

I would like to extend thanks, on behalf of the Jenkins project, to CloudBees for their help in funding and migrating this infrastructure.

If you have further questions about the Jenkins project infrastructure, you can join us in the #jenkins-infra channel on Freenode or in an Infrastructure Q&A session I’ve scheduled for next Wednesday (April 27) at 20:00 UTC (12:00 PST).


Jenkins 2.0 is here!


Over the past 10 years, Jenkins has really grown into a de-facto standard tool that millions of people use to handle automation in software development and beyond. It is quite remarkable for a project that originally started as a hobby project under a different name. I’m very proud.

Around this time last year, we celebrated 10 years, 1000 plugins, and 100K installations. That was a good time to retrospect, and we started thinking about the next 10 years of Jenkins and what’s necessary to meet that challenge. This project has long been on a weekly "train" release model, so it was useful to step back and think about the big picture.

That is where the three pillars of Jenkins 2.0 emerged.

First, one of the challenges our users face today is that the automation that happens between a commit and production has grown significantly in scope. Because of this, the clothing that used to fit (aka the "freestyle project", which was the workhorse of Jenkins) no longer fits. We now need something that better fits today’s use cases, such as a "continuous delivery pipeline." This is why in 2.0 we’ve added the Pipeline capability. This two-year-old effort allows you to describe your chain of automation in textual form, so you can version control it, put it alongside your source tree, and so on. It is also a domain-specific language (DSL) built on Groovy, so when your pipeline grows in complexity and sophistication, you can manage that complexity and keep it understandable far more easily.
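
To give a flavor of what that looks like, here is a minimal, hypothetical Jenkinsfile; the commands are placeholders you would replace with your own build and deployment steps:

    // Jenkinsfile checked into the root of the repository (illustrative sketch)
    node {
        stage 'Checkout'
        checkout scm                  // check out the same revision that contains this Jenkinsfile

        stage 'Build'
        sh 'mvn -B clean verify'      // placeholder build command

        stage 'Deploy'
        sh './deploy.sh staging'      // placeholder deployment script
    }

Because the whole pipeline lives in the repository, changes to the delivery process are reviewed and versioned just like changes to the code.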

Second, over time, Jenkins has developed an "assembly required before initial use" feeling. As the project has grown, the frontier of interesting development has shifted to plugins, which is how it should be, but we have left it up to users to discover and use them. As a result, the default installation became very thin and minimal, and every user had to find several plugins before Jenkins became really functional. This created a paradox of choice and unnecessarily hurt the user experience. In 2.0, we reset this thinking and tried to create a more sensible out-of-the-box experience that solves 80% of use cases for 80% of people. You get something useful out of the box, and you can get considerable mileage out of it before you start feeling the need for plugins. This also allows us to focus our development and QA effort around this base functionality. The focus on the out-of-the-box experience doesn’t stop at functionality, either: the initial security setup of Jenkins is improved, too, to prevent unprotected Jenkins instances from being abused by botnets and attacks.

Third, we were fortunate to have a number of developers with a UX background spend some quality time on Jenkins, and they have made a big dent in improving various parts of the Jenkins web UI. The setup wizard that implements the out-of-the-box experience improvement is one of them, and the work also covers other parts of Jenkins that you use all the time, such as the job configuration pages and new item pages. This brings much needed attention to the web UI.

As you can see, 2.0 brings a lot of exciting features to the table, but this is an evolutionary release, built on top of the same foundation, so your existing installations can upgrade smoothly. After this initial release, we’ll get back to our usual weekly release cadence, and improvements to these pillars and others will continue in the coming months and years. If you’d like to get a more in-depth look at Jenkins 2.0, please join us in our virtual Jenkins meetup 2.0 launch event.

Thank you very much to everyone who made Jenkins 2.0 possible. There are too many of you to thank individually, but you know who you are. I want to thank CloudBees in particular for sponsoring the time of many of those people. Ten years ago, all I could utilize was my own night and weekend time. Now I’ve got a team of smart people working with me to carry this torch forward, and a big effort like 2.0 wouldn’t have been possible without such an organized effort.

Jenkins 2.0 Online JAM Wrap-up


Last week we hosted our first-ever Online JAM with the debut topic of Jenkins 2.0. Alyssa, our Events officer, and I pulled together a series of sessions focusing on some of the most notable aspects of Jenkins 2, with:

  • A Jenkins 2.0 keynote from project founder Kohsuke Kawaguchi

  • An overview of "Pipeline as Code" from Patrick Wolf

  • A deep-dive into Pipeline and related plugins like Multibranch, etc., from Jesse Glick and Kishore Bhatia

  • An overview of new user experience changes in 2.0 from Keith Zantow

  • A quick lightning talk about documentation by yours truly

  • Wrapping up the sessions was Kohsuke again, talking about the road beyond Jenkins 2.0 and what big projects he sees on the horizon.

The event was really interesting for me, and I hope informative for those who participated in the live stream and Q&A session. I look forward to hosting more Virtual JAM events in the future, and I hope you will join us!

Questions and Answers

Below is a collection of questions and answers that were posed during the Virtual JAM. Many of these were answered during the course of the sessions, but for posterity all are included below.

Pipeline

What kind of DSL is used behind Pipeline as Code? Is it Groovy, or are users free to use different languages as they prefer?

Pipeline uses a Groovy-based domain specific language.

How do you test your very own pipeline DSL?

Replay helps in testing/debugging while creating pipelines and at the branch level. There are some ideas which Jesse Glick has proposed for testing Jenkinsfile and Pipeline libraries captured in JENKINS-33925.

Isn’t "Survive Jenkins restart" exclusive to [CloudBees] Jenkins Enterprise?

No, this feature does not need CloudBees Jenkins Enterprise. All features shown during the virtual JAM are free and open source. CloudBees' Jenkins Enterprise product does support restarting from a specified stage however, and that is not open source.

How well does Jenkins 2.0 integrate with GitHub for tracking job definitions?

Using the GitHub Organization Folder plugin, Jenkins can automatically detect a Jenkinsfile in source repositories to create Pipeline projects.

Please make the ability for re-run failed stages Open Source too :)

This has been passed on to our friends at CloudBees for consideration :)

If Jenkinsfile is in the repo, co-located with code, does this mean Jenkins can auto-detect new jobs for different branches?

This is possible using the Pipeline Multibranch plugin.

What documentation sources are there for Pipeline?

Our documentation section contains a number of pages around Pipeline. There is also additional documentation and examples in the plugin’s git repository and the jenkinsci/pipeline-examples repository. (Contributions welcome!)

Where can we find the DSL method documentation?

There is generated documentation on jenkins.io which includes steps from all public plugins. Inside of a running Jenkins instance, you can also navigate to JENKINS_URL/workflow-cps-snippetizer/dslReference to see the documentation for the plugins which are installed in that instance.

Pipeline does not support some plugins (there are a lot, actually); I needed SonarQube Runner but unfortunately it’s not supported yet. In the Job DSL plugin I can use a "Configure Block" to cover any plugin via XML; how can I achieve the same with a Pipeline?

Not at this time.

Is there a possibility to create custom tooltips, e.g. with a quick reference or a link to internal project documentation? This might be useful, for example, for junior team members who need to refer to external docs.

Not generally. Though in the case of Pipeline global libraries, you can create descriptions of vars/functions like standardBuild in the demo, and these will appear in Snippet Generator under Global Variables.

Oh, Pipeline supports joining jobs? That’s really good, but I cannot find documentation at https://jenkins.io/doc/ — could you tell me where it is?

There is a build step, but the Pipeline system is optimized for single-job pipelines.

We have multiple projects that we would like to follow the same pipeline. How would I write a common pipeline that can be shared across multiple projects?

You may want to look at implementing some additional steps using the Pipeline Global Library feature. This would allow you to define organization-specific extensions to the Pipeline DSL to abstract away common patterns between projects.
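
For example, each project’s Jenkinsfile could then shrink to a single call into a step defined in the global library; the step name and fields below are hypothetical:

    // Jenkinsfile in each participating project (illustrative sketch)
    buildAndPublish {
        name = 'my-service'                  // project-specific values
        emailRecipients = 'team@example.com'
    }

The shared buildAndPublish step would hold the common node/checkout/build/notify logic, so process changes are made in one place.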

How much flexibility is there for creating context, setting environment variables, or changing/modifying build tool options when calling a webhook/API to parameterize pipelines, for example to target deployments to different environments using the same pipeline?

Various environment variables are exposed under the env variable in the Groovy DSL which would allow you to construct logic as simple or as complex as necessary to achieve your goal.
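
As a small, hypothetical sketch, an environment variable could select the deployment target; DEPLOY_ENV and the deployment script are placeholder names:

    // Illustrative sketch only
    node {
        def target = env.DEPLOY_ENV ?: 'staging'   // fall back to a default target
        sh "./deploy.sh ${target}"                 // placeholder deployment script
    }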

When you set up the job for the first time, does it build every branch in git, or is there a way to stop it from building old branches?

Not at this time, the best way to prevent older branches from being built is to remove the Jenkinsfile in those branches. Alternatively, you could use the "include" or "exclude" patterns when setting up the SCM configuration of your multibranch Pipeline. See also JENKINS-32396.

Similar to GitHub organizations, will BitBucket "projects" (ways of organizing collections of repos) be supported?

Yes, these are supported via the Bitbucket Branch Source plugin.

How do you handle build secrets with the pipeline plugin? Using unique credentials stored in the credentials plugin per project and/or branch?

This can be accomplished by using the Credentials Binding plugin.
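
A minimal sketch, assuming a username/password credentials entry with the ID deploy-creds already exists in Jenkins:

    // Illustrative sketch using the Credentials Binding plugin
    node {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
                          credentialsId: 'deploy-creds',
                          usernameVariable: 'DEPLOY_USER',
                          passwordVariable: 'DEPLOY_PASS']]) {
            sh './deploy.sh'   // the script reads DEPLOY_USER and DEPLOY_PASS from the environment
        }
    }

The bound secret values are masked in the build log while the block runs.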

Similar to GitHub Orgs, are GitLab projects supported in the same way?

GitLab projects are not explicitly supported at this time, but the extension points which the GitHub Organization Folder plugin uses could be extended in a similar manner for GitLab. See also JENKINS-34396

Is Perforce scm supported by the Pipeline plugin?

As an SCM source for discovering a Jenkinsfile, not at this time. The P4 plugin does, however, provide some p4 steps which can be used in a Pipeline script; see here for documentation.

Is Mercurial supported with multibranch?

Yes, it is.

Can Jenkinsfile detect when it’s running against a pull request vs an approved commit, so that it can perform a different type of build?

Yes, via the env variables provided in the DSL scope. Using an if statement, one could guard specific behaviors with:

if (env.CHANGE_ID != null) {
    /* do things! */
}

Let’s say I’m building RPMs with Jenkins and use build number as an RPM version/release number. Is there a way to maintain build numbers and leverage versioning of Jenkinsfile?

Through the env variable, it’s possible to utilize env.BUILD_NUMBER or the SCM commit ID, etc.
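
As a rough, hypothetical sketch (the spec file name and macro usage are placeholders):

    // Illustrative sketch: feed the Jenkins build number into an RPM build
    node {
        checkout scm
        sh "rpmbuild -ba mypackage.spec --define 'release ${env.BUILD_NUMBER}'"
    }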

Love the snippet generator! Any chance of separating it out from the pipeline into a separate page on its own, available in the left nav?

Yes, this is tracked in JENKINS-31831.

Any tips on pre-creating the admin user credential and selecting plugins to automate the Jenkins install?

There are various configuration management modules which provide parts of this functionality.

I’m looking at the Pipeline syntax (in Jenkins 2.0); how do I detect that a step([...]) has failed and create a notification inside the Jenkinsfile?

This can be done by wrapping a step invocation with a Groovy try/catch block. See also JENKINS-28119
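
A minimal sketch of that pattern; the recipient address and the failing step are placeholders:

    // Illustrative sketch: notify on failure, then re-throw so the build is still marked failed
    node {
        try {
            sh 'make test'
        } catch (err) {
            mail to: 'team@example.com',
                 subject: "Failure in ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "The build failed: ${err}"
            throw err
        }
    }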

User Interface/Experience

Is the user experience the same as before when we replace the jenkins.war (1.x to 2.x) in an existing installation (with security in place)?

You will get the new UI features like redesigned configuration forms, but the initial setup wizard will be skipped. In its stead, Jenkins will offer to install Pipeline-related functionality.

Is it possible to use custom-defined syntax highlighting?

Within the Pipeline script editor itself, no. It is using the ACE editor system, so it may be possible for a plugin to change the color scheme used.

Can you elaborate on what the Blue Ocean UI is? Is there a link or more information on it?

Blue Ocean is the name of a user experience and design project; unfortunately, at this point in time there is not more information available on it.

General

How well does this integrate with cloud environments?

The Jenkins master and agents can run easily in any public cloud environment that supports running Java applications. Through the EC2, JClouds, Azure, or any other plugins which extend the cloud extension point, it is possible to dynamically provision new build agents on a configured cloud provider.

Are help texts and other labels and messages updated for other localizations / languages as well?

Practically every string in Jenkins core is localizable. The extent to which those strings have been translated depends on contributions to the project by speakers of those languages. If you want to contribute translations, this wiki page should get you started.

Any additional WinRM/Windows remoting functionality in 2.0?

No

Is there a CLI to find all the jobs created by a specific user?

No, out-of-the-box Jenkins does not keep track of which user created which jobs. The functionality provided by the Ownership plugin may be of interest though.

Please consider replacing terms like "master" and "slave" with "primary" and "secondary".

"slave" has been replaced with "agent" in Jenkins 2.0

We’ve been making tutorial videos on Jenkins for a while (mostly geared toward passing the upcoming CCJPE). Because of that we’re using 1.625.2 (since that is what is listed on the exam), but should we instead base the videos on 2.0?

As of right now, all of the Jenkins Certification work done by CloudBees is focused on Jenkins LTS 1.625.x.

Security updates for Jenkins core


We just released security updates to Jenkins that fix a number of low and medium severity issues. For an overview of what was fixed, see the security advisory.

One of the fixes may well break some of your use cases in Jenkins, at least until plugins have been adapted: SECURITY-170. This change removes parameters that are not defined on a job from the build environment. Until now, plugins were able to pass arbitrary parameters to a build, even if the job was not parameterized at all. Since build parameters are added to the environment variables of scripts run during a build, parameters such as PATH or DYLD_LIBRARY_PATH could be defined, on jobs which don't even expect those as build parameters, to change the behavior of builds.

A number of plugins define additional parameters for builds. For example, GitHub Pull Request Builder passes a number of additional parameters describing the pull request. The Release Plugin also allows adding several additional parameters to a build; as part of this security fix, these are not considered to be defined in the job.

Please see this wiki page for a list of plugins known to be affected by this change.

Until these plugins have been adapted to work with the new restriction (and advice on that is available further down), you can define the following system properties to work around this limitation, at least for a time:

  • Set hudson.model.ParametersAction.keepUndefinedParameters to true, e.g. java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war to revert to the old behavior of allowing any build parameters. Depending on your environment, this may be unsafe, as it opens you up to attacks as described above.
  • Set hudson.model.ParametersAction.safeParameters to a comma-separated list of safe parameter names, e.g. java -Dhudson.model.ParametersAction.safeParameters=FOO,BAR_baz,quX -jar jenkins.war.

I realize this change, among a few others that improve the security of Jenkins, may be difficult to adapt to for some, but given the valuable secrets typically stored in Jenkins, I'm certain that this is the correct approach. We made sure to release this fix with the options described above, so that this change doesn't block updates for those who rely on this behavior.

Developers have several options to adapt to this change:

  • ParametersAction actually stores all parameters, but getParameters() only returns those that are defined on the job. The new method getAllParameters() returns all of them. This can be used, for example by EnvironmentContributor extensions, to add known safe parameters to build environments (a rough sketch follows this list).
  • Don't pass extra arguments, but define a QueueAction for your metadata instead. Those can still be made available to the build environment as needed.
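
As an example of the first option, here is a rough, hypothetical sketch of an EnvironmentContributor that re-exposes one known-safe string parameter via getAllParameters(); the class and parameter names are placeholders, and real plugin code would typically be written in Java rather than Groovy:

    // Illustrative sketch only; not the actual implementation of any existing plugin
    import hudson.EnvVars
    import hudson.Extension
    import hudson.model.EnvironmentContributor
    import hudson.model.ParametersAction
    import hudson.model.Run
    import hudson.model.StringParameterValue
    import hudson.model.TaskListener

    @Extension
    class MySafeParamContributor extends EnvironmentContributor {
        @Override
        void buildEnvironmentFor(Run run, EnvVars envs, TaskListener listener) {
            def action = run.getAction(ParametersAction)
            action?.getAllParameters()?.each { p ->
                // MY_SAFE_PARAM is a placeholder for a parameter this plugin knows is harmless
                if (p instanceof StringParameterValue && p.name == 'MY_SAFE_PARAM') {
                    envs.put(p.name, p.value)
                }
            }
        }
    }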

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

SF JAM Report: Scaling Jenkins for Continuous Delivery with Azure


A few weeks ago, my colleague Brian Dawson and I were invited to present on Scaling Jenkins for Continuous Delivery with Microsoft Azure in Microsoft’s Reactor space. Azure is Microsoft’s public cloud offering and one of the many tools available to Jenkins users for adding elastic compute capacity, among other things, to their build/test/deploy infrastructure. While our presentations are applicable to practically any cloud-based Jenkins environment, Thiago Almeida and Oguz Pastirmaci from Microsoft were also on-hand and presented some interesting Azure-specific offerings like Azure Container Service with Jenkins.

While we do not have video from the meetup, Brian and I did record a session with Thiago and Oguz for Channel9 which covers much of the same content:

To kick-off the meetup we asked attendees a few polling questions and received very telling responses:

  • How big is your Development/IT organization?

  • What is your role?

  • By show of hands do you practice CI/CD/DevOps/etc?

  • At what scale (tooling and practice)?

The responses indicated that the majority of attendees were from small to medium organizations where they practiced Continuous Delivery across multiple teams. A notable 25% or more of attendees considered themselves "fullstack", participating in all of the roles of Developer, QA, and Operations. That is interesting when paired with the high number (~80%) of those who practice CD, and is likely because modern teams with mature CD practices tend to blur the traditional lines of Developer, QA and Operations. However, in my experience, while this is often the case for small to medium companies, in large organizations team members tend to fall into the traditional roles, with CD providing the practice and platform to unify teams across roles.

— Brian Dawson

After gauging the audience, Thiago and Brian reviewed Continuous Delivery (CD) and implementing it at scale. They highlighted the fact that CD is being rapidly adopted across teams and organizations, providing the ability to deliver a demonstrably higher quality product, to ship more rapidly than before, and to keep team members happier.

However, when organizations fail to properly support CD as they scale, they run into issues such as developers acting as administrators at the cost of productivity, potential lack of security and/or exposure of IP, and difficulty in sharing best practices across teams.

Thiago then highlighted that properly scaling CD practices in the organization along with the infrastructure itself can alleviate these issues, and discussed the benefits of scaling CD onto cloud platforms to provide "CD-as-a-Service."

Overall I found the "theory" discussion to be on point: continuous delivery is neither purely a technology problem nor purely a people problem. Successful organizations scale their processes and tooling together.

The slides from our respective presentations are linked below:

I hope you join us at future San Francisco JAMs!

The State of Jenkins Area Meetups (JAM)


Recently, the Jenkins project announced the release of Jenkins 2.0, the first major release after 10 years and 655 weekly releases. This has been a major milestone for Jenkins and its growing community of developers, testers, designers and other users in the software delivery process.

With its rising popularity and wide adoption, the Jenkins community continues to grow and evolve, now counting users in the millions. Jenkins community meetup activity has risen to an all-time high since the first Jenkins meetup was established on August 23, 2010, in San Francisco.

Over the last six months, the number of Jenkins Area Meetup (JAM) groups has grown from 5 to 30, with coverage in Asia, North America, South America and Europe. That’s an average growth of 4 new JAMs per month.

JAM map

As of today, there are over 4,100 Jenkins fans within the Jenkins meetup community. This is the result of contributions from community JAM leaders who have volunteered their time to provide a platform for learning, sharing and networking all things Jenkins within their local communities.

For anyone who has not organized a meetup before, there are many moving parts that have to come together at a specific location, date and time, and the process takes significant effort to methodically plan out: from arranging food and beverages to securing speaker(s), a venue, audio/visual setup, and technical logistics, and of course promoting the meetup. It takes a level of passion and effort to make it all happen.

Many THANKS to the 55 JAM leaders, who share this passion - they have successfully organized over 41 meetups within the past six months in North America, South America and Europe. That’s about 6 meetups a month!

JAMs over time

There are still plenty of opportunities to be a JAM organizer. If there is not a JAM near you, we’d love to hear from you! Here’s how you can get started.

Toulouse JAM 1

Toulouse JAM 2

Seville JAM

Peru JAM

Barcelona JAM
