
Replay a Pipeline with script edits


This is a cross-post of an article authored by Pipeline plugin maintainer Jesse Glick on the CloudBees blog.

For those of you not checking your Updates tab obsessively, Pipeline 1.14 [up to 2.1 now] was released a couple of weeks ago and I wanted to highlight the major feature in this release: JENKINS-32727, or replay. Some folks writing "Jenkinsfiles" in the field had grumbled that it was awkward to develop the script incrementally, especially compared to jobs using inline scripts stored in the Jenkins job configuration: to try a change to the script, you had to edit the Jenkinsfile in SCM, commit it (perhaps to a branch), and then go back to Jenkins to follow the output. Now this is a little easier. If you have a Pipeline build which did not proceed exactly as you expected, for reasons having to do with Jenkins itself (say, inability to find & publish test results, as opposed to test failures you could reproduce locally), try clicking the Replay link in the build’s sidebar. The quickest way to try this for yourself is to run the stock CD demo in its latest release:

$ docker run --rm -p 2222:2222 -p 8080:8080 -p 8081:8081 -p 9418:9418 -ti jenkinsci/workflow-demo:1.14-3

When you see the page Replay #1, you are shown two (Groovy) editor boxes: one for the main Jenkinsfile, one for a library script it loaded (servers.groovy, introduced to help demonstrate this feature). You can make edits to either or both. For example, the original demo allocates a temporary web application with a random name like 9c89e9aa-6ca2-431c-a04a-6599e81827ac for the duration of the functional tests. Perhaps you wished to prefix the application name with tmp- to make it obvious to anyone encountering the Jetty index page that these URLs are transient. So in the second text area, find the line

def id = UUID.randomUUID().toString()

and change it to read

def id = "tmp-${UUID.randomUUID()}"

then click Run. In the new build’s log you will now see

Replayed #1

and later something like

… test -Durl=http://localhost:8081/tmp-812725bb-74c6-41dc-859e-7d9896b938c3/ …

with the improved URL format. Like the result? You will want to make it permanent. So jump to the second build’s index page (http://localhost:8080/job/cd/branch/master/2/) where you will see a note that this build Replayed #1 (diff). If you click on diff you will see:

--- old/Script1
+++ new/Script1
@@ -8,7 +8,7 @@
 }

 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "tmp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id

so you can know exactly what you changed from the last-saved version. In fact if you replay #2 and change tmp to temp in the loaded script, in the diff view for #3 you will see the diff from the first build, the aggregate diff:

--- old/Script1
+++ new/Script1
@@ -8,7 +8,7 @@
 }

 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "temp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id

At this point you could touch up the patch to refer to servers.groovy (JENKINS-31838), git apply it to a clone of your repository, and commit. But why go to the trouble of editing Groovy in the Jenkins web UI and then manually copying changes back to your IDE, when you could stay in your preferred development environment from the start?

$ git clone git://localhost/repo
Cloning into 'repo'...
remote: Counting objects: 23, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 23 (delta 1), reused 0 (delta 0)
Receiving objects: 100% (23/23), done.
Resolving deltas: 100% (1/1), done.
Checking connectivity... done.
$ cd repo
$ $EDITOR servers.groovy
# make the same edit as previously described
$ git diff
diff --git a/servers.groovy b/servers.groovy
index 562d92e..63ea8d6 100644
--- a/servers.groovy
+++ b/servers.groovy
@@ -8,7 +8,7 @@ def undeploy(id) {
 }

 def runWithServer(body) {
-    def id = UUID.randomUUID().toString()
+    def id = "tmp-${UUID.randomUUID()}"
     deploy id
     try {
         body.call id
$ ssh -p 2222 -o StrictHostKeyChecking=no localhost replay-pipeline cd/master -s Script1 < servers.groovy
Warning: Permanently added '[localhost]:2222' (RSA) to the list of known hosts.
# follow progress in Jenkins (see JENKINS-33438)
$ git checkout -b webapp-naming
M                                                                              servers.groovy
Switched to a new branch 'webapp-naming'
$ git commit -a -m 'Adjusted transient webapp name.'
[webapp-naming …] Adjusted transient webapp name.
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git push origin webapp-naming
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 330 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To git://localhost/repo
 * [new branch]      webapp-naming -> webapp-naming

Using the replay-pipeline CLI command (in this example via SSH) you can prepare, test, and commit changes to your Pipeline script code without copying anything to or from a browser. That is all for now. Enjoy!


Registration is Open for Jenkins World 2016!

This is a guest post by Alyssa Tong. Alyssa works for CloudBees, helping to organize Jenkins community events around the world.

[Image: Jenkins World 2016 banner]

Jenkins World 2016 will be the largest gathering of Jenkins users in the world. This event will bring together Jenkins experts, continuous delivery thought leaders and the ecosystem offering complementary technologies for Jenkins. Join us September 13-15, 2016 in Santa Clara, California to learn and explore, network face-to-face and help shape the next evolution of Jenkins development and solutions for DevOps.

Registration for Jenkins World 2016 is now live. Take advantage of the Super Early Bird rate of $399 (available until July 1st).

And don’t forget, the Call for Papers will be ending on May 1st. That’s 2.5 short weeks left to get your proposal(s) in. We anxiously await your amazing stories.

The Need For Jenkins Pipeline


This is a cross-post of an article authored by Viktor Farcic on the CloudBees blog. Viktor is also the author of The DevOps 2.0 Toolkit, which explores Jenkins, the Pipeline plugin, and the ecosystem around it in much more detail.

[Image: Angry Jenkins]

Over the years, Jenkins has become the undisputed ruler among continuous integration (CI), delivery and deployment (CD) tools. It, in a way, defined the CI/CD processes we use today. As a result of its leadership, many other products have tried to overthrow it from its position. Among others, we got Bamboo and TeamCity attempting to get a piece of the market. At the same time, new products emerged with a service approach (as opposed to on-premise). Some of them are Travis, CircleCI and Shippable. Be that as it may, none managed to get even close to Jenkins' adoption. Today, depending on the source we use, Jenkins holds between 50% and 70% of the whole CI/CD tools market. The reason behind such a high percentage is its dedication to the open source principles set from the very beginning by Kohsuke Kawaguchi. Those same principles were the reason he forked Jenkins from Hudson. The community behind the project, as well as the commercial entities behind enterprise versions, are continuously improving the way it works and adding new features and capabilities. They are redefining not only the way Jenkins behaves but also CI/CD practices in a much broader sense.

One of those new features is the Jenkins Pipeline plugin. Before we dive into it, let us take a step back and discuss the reasons that led us to initiate the move away from Freestyle jobs and towards the Pipeline.

The Need for Change

Over time, Jenkins, like most other self-hosted CI/CD tools, tends to accumulate a vast number of jobs. Having a lot of them causes quite an increase in maintenance cost. Maintaining ten jobs is easy. It becomes a bit harder (but still bearable) to manage a hundred. When the number of jobs increases to hundreds or even thousands, managing them becomes very tedious and time-consuming.

If you are not proficient with Jenkins (or other CI/CD tools) or you do not work on a big project, you might think that hundreds of jobs is excessive. The truth is that such a number is reached over a relatively short period when teams are practicing continuous delivery or deployment. Let’s say that an average CD flow has the following set of tasks that should be run on each commit: building, pre-deployment testing, deployment to a staging environment, post-deployment testing and deployment to production. That’s five groups of tasks that are often divided into, at least, five separate Jenkins jobs. In reality, there are often more than five jobs for a single CD flow, but let us keep it an optimistic estimate. How many different CD flows does a medium-sized company have? With twenty, we are already reaching a three-digit number. That’s quite a lot of jobs to cope with, even though the estimates we used are too optimistic for all but the smallest entities.

Now, imagine that we need to change all those jobs from, let’s say, Maven to Gradle. We can choose to start modifying them through the Jenkins UI, but that takes too much time. We can apply changes directly to the Jenkins XML files that represent those jobs, but that is too complicated and error-prone. Besides, unless we write a script that will do the modifications for us, we would probably not save much time with this approach. There are quite a few plugins that can help us apply changes to multiple jobs at once, but none of them is truly successful (at least among free plugins). They all suffer from one deficiency or another. The problem is not whether we have the tools to perform massive changes to our jobs, but whether jobs are defined in a way that they can be easily maintained.

Besides the sheer number of Jenkins jobs, another critical Jenkins pain point is centralization. While having everything in one location provides a lot of benefits (visibility, reporting and so on), it also poses quite a few difficulties. Since the emergence of agile methodologies, there’s been a huge movement towards self-sufficient teams. Instead of a horizontal organization with separate development, testing, infrastructure, operations and other groups, more and more companies are moving (or have already moved) towards self-sufficient teams organized vertically. As a result, having one centralized place that defines all the CD flows becomes a liability and often impedes us from splitting teams vertically based on projects. Members of a team should be able to collaborate effectively without too much reliance on other teams or departments. Translated to CD needs, that means that each team should be able to define the deployment flow of the application they are developing.

Finally, Jenkins, like many other tools, relies heavily on its UI. While that is welcome and needed as a way to get a visual overview through dashboards and reports, it is suboptimal as a way to define delivery and deployment flows. Jenkins originated in an era when it was fashionable to use UIs for everything. If you have worked in this industry long enough, you probably saw the swarm of tools that rely completely on UIs, drag & drop operations and a lot of forms that should be filled in. As a result, we got tools that produce artifacts that cannot be easily stored in a code repository and are hard to reason about when anything but simple operations are to be performed. Things have changed since then, and now we know that many things (deployment flow being one of them) are much easier to express through code. That can be observed when, for example, we try to define a complex flow through many Jenkins jobs. When deployment complexity requires conditional executions and some kind of simple intelligence that depends on the results of different steps, chained jobs are truly complicated and often impossible to create.

All things considered, the major pain points Jenkins had until recently are as follows.

  • Tendency to create a vast number of jobs

  • Relatively hard and costly maintenance

  • Centralization of everything

  • Lack of powerful and easy ways to specify deployment flow through code

This list is, by no means, unique to Jenkins. Other CI/CD tools have at least one of the same problems or suffer from deficiencies that Jenkins solved a long time ago. Since the focus of this article is Jenkins, I won’t dive into a comparison between the CI/CD tools.

Luckily, all those, and many other, deficiencies are now a thing of the past. With the emergence of the Pipeline plugin and many others that were created on top of it, Jenkins entered a new era and proved itself as a dominant player in the CI/CD market. A whole new ecosystem was born, and the door was opened for very exciting possibilities in the future.

Before we dive into the Jenkins Pipeline and the toolset that surrounds it, let us quickly go through the needs of a modern CD flow.

Continuous Delivery or Deployment Flow with Jenkins

When embarking on the CD journey for the first time, newcomers tend to think that the tasks that constitute the flow are straightforward and linear. While that might be true with small projects, in most cases things are much more complicated than that. You might think that the flow consists of building, testing and deployment, and that the approach is linear and follows the all-or-nothing rule. Build invokes testing and testing invokes deployment. If one of them fails, the developer gets a notification, fixes the problem and commits the code that will initiate the repetition of the process.

[Diagram: a simple, linear CD flow]

In most instances, the process is far more complex. There are many tasks to run, and each of them might produce a failure. In some cases, a failure should only stop the process. However, more often than not, some additional logic should be executed as part of the after-failure cleanup. For example, what happens if post-deployment tests fail after a new release was deployed to production? We cannot just stop the flow and declare the build a failure. We might need to revert to the previous release, roll back the proxy, de-register the service and so on. I won’t go into many examples of situations that require complex flow with many tasks, conditionals that depend on results, parallel execution and so on. Instead, I’ll share a diagram of one of the flows I worked on.

[Diagram: a complex CD flow with conditional and parallel tasks]

Some tasks are run on one of the testing servers (yellow) while others are run on the production cluster (blue). While any task might produce an error, in some cases such an outcome triggers a separate set of tasks. Some parts of the flow are not linear and depend on task results. Some tasks should be executed in parallel to improve the overall time required to run them. The list goes on and on. Please note that this discussion is not about the best way to execute the deployment flow but only a demonstration that the complexity can often be very high and cannot be solved by simple chaining of Freestyle jobs. Even in cases when such chaining is possible, the maintenance cost tends to be very high.

One of the CD objectives we are unable to solve through chained jobs, or that proves difficult to implement, is conditional logic. In many cases, it is not enough to simply chain jobs in a linear fashion. Often, we do not only want to create a job A that, once it’s finished running, executes job B, which, in turn, invokes job C. In real-world situations, things are more complicated than that. We want to run some tasks (let’s call them job A), and, depending on the result, invoke jobs B1 or B2, then run in parallel C1, C2 and C3, and, finally, execute job D only when all C jobs have finished successfully. If this were a program or a script, we would have no problem accomplishing something like that, since all modern programming languages allow us to employ conditional logic in a simple and efficient way. Chained Jenkins jobs, created through the UI, make it difficult to create even simple conditional logic. Truth be told, some plugins can help us with conditional logic. We have Conditional Build Steps, Parameterised Trigger, Promotions and others. However, one of the major issues with these plugins is configuration. It tends to be scattered across multiple locations, hard to maintain and with little visibility.
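
To make the contrast concrete, here is a sketch of how such a flow reads as Pipeline script. This is my illustration rather than an example from the original article, and the shell commands are placeholders:

node {
    stage 'A'
    def buildOk = true
    try {
        sh './build.sh'                 // the "job A" tasks
    } catch (err) {
        buildOk = false
    }

    if (buildOk) {
        stage 'B1'
        sh './package.sh'               // runs only when A succeeded
    } else {
        stage 'B2'
        sh './collect-diagnostics.sh'   // runs only when A failed
    }

    stage 'C'
    parallel(
        C1: { sh './test-unit.sh' },
        C2: { sh './test-integration.sh' },
        C3: { sh './test-functional.sh' }
    )

    // D is reached only if every C branch finished successfully,
    // because a failing parallel branch fails the parallel step itself.
    stage 'D'
    sh './deploy.sh'
}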

Resource allocation needs careful thought and is often more complicated than a simple decision to run a job on a predefined slave. There are cases when the slave should be decided dynamically, the workspace should be defined at runtime, and cleanup depends on the result of some action.

While a continuous deployment process means that the whole pipeline ends with deployment to production, many businesses are not ready for such a goal or have use cases where it is not appropriate. Any other process with a smaller scope, be it continuous delivery or continuous integration, often requires some human interaction. A step in the pipeline might need someone’s confirmation, a failed process might require manual input about the reasons for the failure, and so on. The requirement for human interaction should be an integral part of the pipeline and should allow us to pause, inspect and resume the flow. At least, until we reach the true continuous deployment stage.
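
As an illustration (my sketch, not from the article), Pipeline’s input step is one way to model such a pause; the message and submitter values below are invented:

stage 'Approval'
// Pauses the build until someone confirms or aborts it in the Jenkins UI.
input message: 'Deploy this release to production?', submitter: 'release-managers'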

The industry is, slowly, moving towards microservices architectures. However, the transformation process might take a long time to be adopted, and even more to be implemented. Until then, we are stuck with monolithic applications that often require a long time for deployment pipelines to be fully executed. It is not uncommon for them to run for a couple of hours, or even days. In such cases, failure of the process, or the whole node the process is running on, should not mean that everything needs to be repeated. We should have a mechanism to continue the flow from defined checkpoints, thus avoiding costly repetition, potential delays and additional costs. That is not to say that long-running deployment flows are appropriate or recommended. A well-designed CD process should run within minutes, if not seconds. However, such a process requires not only the flow to be designed well, but also the architecture of our applications to be changed. Since, in many cases, that does not seem to be a viable option, resumable points of the flow are a time saver.

All those needs, and many others, needed to be addressed in Jenkins if it was to continue being a dominant CI/CD tool. Fortunately, the developers behind the project understood those needs and, as a result, we got the Jenkins Pipeline plugin. The future of Jenkins lies in a transition from Freestyle chained jobs to a single pipeline expressed as code. Modern delivery flows cannot be expressed and easily maintained through UI drag 'n drop features, nor through chained jobs. Nor can they be defined through the YAML definitions proposed by some of the newer tools (which I’m not going to name). We need to go back to code as the primary way to define not only the applications and services we are developing but almost everything else. Many other types of tools have adopted that approach, and it was time for us to get that option for CI/CD processes as well.

Making your own DSL with plugins, written in Pipeline script


In this post I will show how you can make your own DSL extensions and distribute them as a plugin, using Pipeline Script.

A quick refresher

Pipeline has a well-kept secret: the ability to add your own DSL elements. Pipeline is itself a DSL, but you can extend it.

There are two main reasons I can think of that you may want to do this:

  1. You want to reduce boilerplate by encapsulating common snippets/things you do in one DSL statement.

  2. You want to provide a DSL that prescribes the way your builds work - uniform across your organisation’s Jenkinsfiles.

A DSL could look as simple as

acmeBuild {
    script = "./bin/ci"
    environment = "nginx"
    team = "evil-devs"
    deployBranch = "production"
}

This could be the entirety of your Jenkinsfile!

In this "simple" example, it could actually be doing a multi stage build with retries, in a specified docker container, that deploys only from the production branch. Detailed notifications are sent to the right team on important events (as defined by your org).

Traditionally this is done via the global library. You take a snippet of pipeline script that you want to make into a DSL statement, and drop it in the git repo that is baked into Jenkins.

A great trivial example is this:

jenkinsPlugin {
    name = 'git'
}

Which is enabled by git pushing the following into vars/jenkinsPlugin.groovy

// The name of the file is the name of the DSL expression you use in the Jenkinsfile
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    // This is where the magic happens - put your pipeline snippets in here, get variables from config.
    node {
        git url: "https://github.com/jenkinsci/${config.name}-plugin.git"
        sh "mvn install"
        mail to: "...", subject: "${config.name} plugin build", body: "..."
    }
}

You can imagine many more pipelines, or even archetypes/templates of pipelines you could do in this way, providing a really easy Jenkinsfile syntax for your users.
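
To connect this back to the acmeBuild example at the top of the post, a vars/acmeBuild.groovy could look roughly like the following. This is a hedged sketch of one possible implementation, not actual plugin code: the retry count, notification address format and deployment command are invented, it assumes a multibranch job (so env.BRANCH_NAME and checkout scm are available), and it assumes the Docker Pipeline plugin for docker.image(...).inside.

def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    node {
        checkout scm

        stage 'Build & test'
        // Run the team's CI script inside the requested container, retrying flaky runs.
        docker.image(config.environment).inside {
            retry(2) {
                sh config.script
            }
        }

        // Deploy only from the branch nominated in the DSL block.
        if (env.BRANCH_NAME == config.deployBranch) {
            stage 'Deploy'
            sh "${config.script} deploy"
        }

        // Notify the owning team; the address format here is made up.
        mail to: "${config.team}@example.com",
             subject: "acmeBuild finished for ${env.JOB_NAME}",
             body: "See ${env.BUILD_URL}"
    }
}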

Making it a plugin

Using the global DSL library is a handy thing if you have a single Jenkins, or want to keep the DSLs local to a Jenkins instance. But what if you want to distribute it around your org, or, perhaps it is general purpose enough you want to share it with the world?

Well this is possible, by wrapping it in a plugin. You use the same pipeline snippet tricks you use in the global lib, but put it in the dsl directory of a plugin.

My simple build plugin shows how it is done. To make your own plugin:

  1. Create a new plugin project, either fork the simple build one, or add a dependency to it in your pom.xml / build.gradle file

  2. Put your dsl in the resources directory in a similar fashion to this (note the "package dsl" declaration at the top)

  3. Create the equivalent extension that just points to the DSL by name like this. This is mostly "boilerplate", but it tells Jenkins there is a GlobalVariable extension available when Pipelines run.

  4. Deploy it to a Jenkins Update Center to share with your org, or everyone!

The advantage of delivering this DSL as a plugin is that it has a version (you can also put tests in there), and is distributable just like any other plugin.

For the more advanced, Andrew Bayer has a Simple Travis Runner plugin that interprets and runs travis.yml files, which is also implemented in Pipeline script.

So, approximately, you can build plugins for Pipeline that extend Pipeline, in Pipeline script (with a teeny bit of boilerplate).

Enjoy!

Pipeline 2.x plugins


Those of you who routinely apply all plugin updates may already have noticed that the version numbers of the plugins in the Pipeline suite have switched to a 2.x scheme. Besides aligning better with the upcoming Jenkins 2.0 core release, the plugins are now being released with independent lifecycles.

“Pipeline 1.15” (the last in the 1.x line) included simultaneous releases of a dozen or so plugins with the 1.15 version number (and 1.15+ dependencies on each other). All these plugins were built out of a single workflow-plugin repository. While that was convenient in the early days for prototyping wide-ranging changes, it has become an encumbrance now that the Pipeline code is fairly mature, and more people are experimenting with additions and patches.

As of 2.0, all the plugins in the system live in their own repositories on GitHub—named to match the plugin code name, which in most cases uses the historical workflow term, so for example workflow-job-plugin. Some complex steps were moved into their own plugins, such as pipeline-build-step-plugin. The 1.x changelog is closed; now each plugin keeps a changelog in its own wiki, for example here for the Pipeline Job plugin.

Among other benefits, this change makes it easier to cut new plugin releases for even minor bug fixes or enhancements, or for developers to experiment with patches to certain plugins. It also opens the door for the “aggregator” plugin (called simply Pipeline) to pull in dependencies on other plugins that seem broadly valuable, like the stage view.

The original repository has been renamed pipeline-plugin and for now still holds some documentation, which might later be moved to jenkins.io.

You need not do anything special to “move” to the 2.x line; 1.642.x and later users can just accept all Pipeline-related plugin updates. Note that if you update Pipeline Supporting APIs you must update Pipeline, or at least install/update some related plugins as noted in the wiki.

Possible Jenkins Project Infrastructure Compromise


Last week, the infrastructure team identified the potential compromise of a key infrastructure machine. This compromise could be categorized as an attempt to target contributors with elevated access. Unfortunately, when facing the uncertainty of a potential compromise, the safest option is to treat it as if it were an actual incident, and react accordingly. The machine in question had access to binaries published to our primary and secondary mirrors, and to contributor account information.

Since this machine is not the source of truth for Jenkins binaries, we verified that the files distributed to Jenkins users: plugins, packages, etc, were not tampered with. We cannot, however, verify that contributor account information was not accessed or tampered with and, as a proactive measure, we are issuing a password reset for all contributor accounts. We have also spent significant effort migrating all key services off of the potentially compromised machine to (virtual) hardware so the machine can be re-imaged or decommissioned entirely.

What you should do now

If you have ever filed an issue in JIRA, edited a wiki page, released a plugin or otherwise created an account via the Jenkins website, you have a Jenkins community account. You should be receiving a password reset email shortly, but if you have re-used your Jenkins account password with other services we strongly encourage you to update your passwords with those other services. If you’re not already using one, we also encourage the use of a password manager for generating and managing service-specific passwords.

This does not apply to your own Jenkins installation, or any account that you may use to log into it. If you do not have a Jenkins community account, there is no action you need to take.

What we’re doing to prevent events like this in the future

As stated above, the potentially compromised machine is being removed from our infrastructure. That helps address the immediate problem but doesn’t put guarantees in place for the future. To help prevent potential issues in the future we’re taking the following actions:

  1. Incorporating more security policy enforcement into our Puppet-driven infrastructure. Without a configuration management tool enforcing a given state for some legacy services, user error and manual mis-configurations can adversely affect project security. As of right now, all key services are managed by Puppet.

  2. Balkanizing our machine and permissions model more. The machine affected was literally the first independent (outside of Sun) piece of project infrastructure and like many legacy systems, it grew to host a multitude of services. We are rapidly evolving away from that model with increasing levels of user and host separation for project services.

  3. In a similar vein, we have also introduced a trusted zone in our infrastructure which is not routable on the public internet, where sensitive operations, such as generating update center information, can be managed and secured more effectively.

  4. We are performing an infrastructure permissions audit. Some portions of our infrastructure are 6+ years old and have had contributors come and go. Any inactive users with unnecessarily elevated permissions in the project infrastructure will have those permissions revoked.

I would like to extend thanks, on behalf of the Jenkins project, to CloudBees for their help in funding and migrating this infrastructure.

If you have further questions about the Jenkins project infrastructure, you can join us in the #jenkins-infra channel on Freenode or in an Infrastructure Q&A session I’ve scheduled for next Wednesday (April 27) at 20:00 UTC (12:00 PST).

Jenkins 2.0 is here!


Over the past 10 years, Jenkins has really grown into a de-facto standard tool that millions of people use to handle automation in software development and beyond. It is quite remarkable for a project that originally started as a hobby project under a different name. I’m very proud.

Around this time last year, we celebrated 10 years, 1000 plugins, and 100K installations. That was a good time to retrospect, and we started thinking about the next 10 years of Jenkins and what’s necessary to meet that challenge. This project has long been on a weekly "train" release model, so it was useful to step back and think about the big picture.

That is where the three pillars of Jenkins 2.0 emerged from.

First, one of the challenges our users are facing today is that the automation that happens between a commit and production has grown significantly in scope. Because of this, the clothing that used to fit (aka the "freestyle project", which was the workhorse of Jenkins) no longer fits. We now need something that better fits today’s use cases like a "continuous delivery pipeline." This is why in 2.0 we’ve added the pipeline capability. This two-year-old effort allows you to describe your chain of automation in a textual form. This allows you to version control it, put it alongside your source tree, etc. It is also actually a domain-specific language (DSL) built on Groovy, so when your pipeline grows in complexity/sophistication, you can manage its complexity and keep it understandable far more easily.
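
For readers who have not yet seen one, a minimal Jenkinsfile might look something like this. It is only a sketch: the repository URL and shell commands are placeholders.

node {
    stage 'Checkout'
    git url: 'https://example.com/your/repo.git'   // placeholder repository

    stage 'Build'
    sh 'mvn -B clean package'

    stage 'Test'
    sh 'mvn -B verify'
}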

Second, over time, Jenkins has developed the "assembly required before initial use" feeling. As the project has grown, the frontier of interesting development has shifted to plugins, which is how it should be, but we have left it up to users to discover & use them. As a result, the default installation became very thin and minimal, and every user has to find several plugins before Jenkins becomes really functional. This created a paradox of choice and unnecessarily hurt the user experience. In 2.0, we reset this thinking and tried to create a more sensible out-of-the-box experience that solves 80% of use cases for 80% of people. You get something useful out of the box, and you can get some considerable mileage out of it before you start feeling the need for plugins. This allows us to focus our development & QA effort around this base functionality, too. By the way, the focus on the out-of-the-box experience doesn’t stop at functionality, either. The initial security setup of Jenkins is improved, too, to prevent unprotected Jenkins instances from getting abused by botnets and attacks.

Third, we were fortunate to have a number of developers with UX backgrounds spend some quality time on Jenkins, and they have made a big dent in improving various parts of the Jenkins web UI. The setup wizard that implements the out-of-the-box experience improvement is one of them, and the work also covers other parts of Jenkins that you use all the time, such as job configuration pages and new item pages. This brings much-needed attention to the web UI.

As you can see, 2.0 brings a lot of exciting features to the table, but this is an evolutionary release, built on top of the same foundation, so that your existing installations can upgrade smoothly. After this initial release, we’ll get back to our usual weekly release march. Improvements will be made to those pillars and others continuously in the coming months and years. If you’d like to get a more in-depth look at Jenkins 2.0, please join us in our virtual Jenkins meetup 2.0 launch event.

Thank you very much to everyone who made Jenkins 2.0 possible. There are too many of you to thank individually, but you know who you are. I wanted to thank CloudBees in particular for sponsoring the time of many of those people. Ten years ago, all I could utilize was my own night & weekend time. Now I’ve got a team of smart people working with me to carry this torch forward, and a big effort like 2.0 wouldn’t have been possible without such organized effort.

Jenkins 2.0 Online JAM Wrap-up


Last week we hosted our first ever Online JAM with the debut topic of Jenkins 2.0. Alyssa, our Events officer, and I pulled together a series of sessions focusing on some of the most notable aspects of Jenkins 2 with:

  • A Jenkins 2.0 keynote from project founder Kohsuke Kawaguchi

  • An overview of "Pipeline as Code" from Patrick Wolf

  • A deep-dive into Pipeline and related plugins like Multibranch, etc. from Jesse Glick and Kishore Bhatia

  • An overview of new user experience changes in 2.0 from Keith Zantow

  • A quick lightning talk about documentation by yours truly

  • Wrapping up the sessions was Kohsuke again, talking about the road beyond Jenkins 2.0 and what big projects he sees on the horizon.

The event was really interesting for me, and I hope informative for those who participated in the live stream and Q&A session. I look forward to hosting more Virtual JAM events in the future, and I hope you will join us!

Questions and Answers

Below is a collection of questions and answers that were posed during the Virtual JAM. Many of these were answered during the course of the sessions, but for posterity all are included below.

Pipeline

What kind of DSL is used behind pipeline as code? Groovy, or can users freely use different languages as they prefer?

Pipeline uses a Groovy-based domain specific language.

How do you test your very own pipeline DSL?

Replay helps in testing/debugging while creating pipelines and at the branch level. There are some ideas which Jesse Glick has proposed for testing Jenkinsfile and Pipeline libraries captured in JENKINS-33925.

Isn’t "Survive Jenkins restart" exclusive to [CloudBees] Jenkins Enterprise?

No, this feature does not need CloudBees Jenkins Enterprise. All features shown during the virtual JAM are free and open source. CloudBees' Jenkins Enterprise product does support restarting from a specified stage however, and that is not open source.

How well does Jenkins 2.0 integrate with GitHub for tracking job definitions?

Using the GitHub Organization Folder plugin, Jenkins can automatically detect a Jenkinsfile in source repositories to create Pipeline projects.

Please make the ability to re-run failed stages Open Source too :)

This has been passed on to our friends at CloudBees for consideration :)

If Jenkinsfile is in the repo, co-located with code, does this mean Jenkins can auto-detect new jobs for different branches?

This is possible using the Pipeline Multibranch plugin.

What documentation sources are there for Pipeline?

Our documentation section contains a number of pages around Pipeline. There is also additional documentation and examples in the plugin’s git repository and the jenkinsci/pipeline-examples repository (contributions welcome!).

Where we can find the DSL method documentation?

There is generated documentation on jenkins.io which includes steps from all public plugins. Inside of a running Jenkins instance, you can also navigate to JENKINS_URL/workflow-cps-snippetizer/dslReference to see the documentation for the plugins which are installed in that instance.

Pipeline does not support some plugins (there are a lot, actually). I needed the SonarQube Runner but unfortunately it’s not supported yet. In the Job DSL plugin I can use a "Configure Block" and cover any plugin via XML; how can I achieve the same with a Pipeline?

Not at this time.

Is there a possibility to create custom tooltips, e.g. with a quick reference or a link to internal project documentation? Might be useful e.g. for junior team members who need to refer to external docs.

Not generally. Though in the case of Pipeline global libraries, you can create descriptions of vars/functions like standardBuild in the demo, and these will appear in Snippet Generator under Global Variables.
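
For example (a sketch; standardBuild is the variable from the demo, everything else here is assumed), the variable and its description live side by side in the library:

// vars/standardBuild.groovy -- the variable exposed to Jenkinsfiles
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    node {
        sh config.buildCommand ?: 'make'   // buildCommand is an invented option
    }
}

// vars/standardBuild.txt, checked in next to it, holds the plain-text (or HTML)
// description that Snippet Generator shows under "Global Variables".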

Oh, Pipeline supports joining jobs? It’s really good, but I cannot find the documentation at https://jenkins.io/doc/ - could you tell me where it is?

There is a build step, but the Pipeline system is optimized for single-job pipelines.

We have multiple projects that we would like to follow the same pipeline. How would I write a common pipeline that can be shared across multiple projects?

You may want to look at implementing some additional steps using the Pipeline Global Library feature. This would allow you to define organization-specific extensions to the Pipeline DSL to abstract away common patterns between projects.

How much flexibility is there with creating context / setting environment variables, or changing / modifying build tool options, when calling a webhook / API to parameterize pipelines - for example, to target deployments to different environments using the same pipeline?

Various environment variables are exposed under the env variable in the Groovy DSL which would allow you to construct logic as simple or as complex as necessary to achieve your goal.
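
As a small illustration (my sketch; DEPLOY_ENV is a made-up variable name), logic in the Jenkinsfile can branch on whatever the triggering hook passed in:

node {
    // Fall back to 'staging' when the hook did not pass a target environment.
    def target = env.DEPLOY_ENV ?: 'staging'
    sh "./deploy.sh --target ${target}"
}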

When you set up the job for the first time, does it build every branch in git, or is there a way to stop it from building old branches?

Not at this time; the best way to prevent older branches from being built is to remove the Jenkinsfile in those branches. Alternatively, you could use the "include" or "exclude" patterns when setting up the SCM configuration of your multibranch Pipeline. See also JENKINS-32396.

Similar to GitHub organizations, will BitBucket "projects" (ways of organizing collections of repos) be supported?

Yes, these are supported via the Bitbucket Branch Source plugin.

How do you handle build secrets with the pipeline plugin? Using unique credentials stored in the credentials plugin per project and/or branch?

This can be accomplished by using the Credentials Binding plugin.

Similar to GitHub Orgs, are Gitlab projects supported in the same way?

GitLab projects are not explicitly supported at this time, but the extension points which the GitHub Organization Folder plugin uses could be extended in a similar manner for GitLab. See also JENKINS-34396

Is Perforce scm supported by the Pipeline plugin?

As a SCM source for discovering a Jenkinsfile, not at this time. The P4 plugin does provide some p4 steps which can be used in a Pipeline script however; see here for documentation.

Is Mercurial supported with multibranch?

Yes, it is.

Can Jenkinsfile detect when it’s running against a pull request vs an approved commit, so that it can perform a different type of build?

Yes, via the env variables provided in the DSL scope. Using an if statement, one could guard specific behaviors with:

if (env.CHANGE_ID != null) {
    /* do things! */
}

Let’s say I’m building RPMs with Jenkins and use build number as an RPM version/release number. Is there a way to maintain build numbers and leverage versioning of Jenkinsfile?

Through the env variable, it’s possible to utilize env.BUILD_NUMBER or the SCM commit ID, etc.
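
For instance (a sketch with an invented spec file, which is assumed to reference %{release}), the build number can be passed straight to rpmbuild:

node {
    // Use the Jenkins build number as the RPM release field.
    sh "rpmbuild -ba --define 'release ${env.BUILD_NUMBER}' mypackage.spec"
}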

Love the snippet generator! Any chance of separating it out from the pipeline into a separate page on its own, available in the left nav?

Yes, this is tracked in JENKINS-31831

Any tips on pre-creating the admin user credential and selecting plugins to automate the Jenkins install?

There are various configuration management modules which provide parts of this functionality.

I’m looking at the pipeline syntax (in Jenkins 2.0); how do I detect that a step([...]) has failed and create a notification inside the Jenkinsfile?

This can be done by wrapping a step invocation with a Groovy try/catch block. See also JENKINS-28119
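
A sketch of that pattern (the recipient address and test-report path are placeholders):

node {
    try {
        step([$class: 'JUnitResultArchiver', testResults: '**/target/surefire-reports/*.xml'])
    } catch (err) {
        // Send a notification, then rethrow so the build is still marked as failed.
        mail to: 'team@example.com',
             subject: "Step failed in ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "Failure: ${err}"
        throw err
    }
}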

User Interface/Experience

Is the user experience the same as before when we replace the jenkins.war (1.x to 2.x) in an existing installation (with security in place)?

You will get the new UI features like redesigned configuration forms, but the initial setup wizard will be skipped. In its stead, Jenkins will offer to install Pipeline-related functionality.

Is it possible to use custom-defined syntax highlighting?

Within the Pipeline script editor itself, no. It is using the ACE editor system, so it may be possible for a plugin to change the color scheme used.

Can you elaborate on what the Blue Ocean UI is? Is there a link or more information on it?

Blue Ocean is the name of a user experience and design project; unfortunately, at this point in time there is not more information available on it.

General

How well does this integrate with cloud environments?

The Jenkins master and agents can run easily in any public cloud environment that supports running Java applications. Through the EC2, JClouds, Azure, or any other plugins which extend the cloud extension point, it is possible to dynamically provision new build agents on a configured cloud provider.

Are help texts and other labels and messages updated for other localizations / languages as well?

Practically every string in Jenkins core is localizable. The extent to which those strings have been translated depends on contributions by speakers of those languages to the project. If you want to contribute translations, this wiki page should get you started.

Any additional WinRM/Windows remoting functionality in 2.0?

No

Is there a CLI to find all the jobs created by a specific user?

No, out-of-the-box Jenkins does not keep track of which user created which jobs. The functionality provided by the Ownership plugin may be of interest though.

Please consider replacing terms like "master" and "slave" with "primary" and "secondary".

"slave" has been replaced with "agent" in Jenkins 2.0

We’ve been making tutorial videos on Jenkins for a while (mostly geared toward passing the upcoming CCJPE). Because of that we’re using 1.625.2 (since that is what is listed on the exam), but should we instead base the videos on 2.0?

As of right now, all of the Jenkins Certification work done by CloudBees is focused around Jenkins LTS 1.625.x.


Security updates for Jenkins core


We just released security updates to Jenkins that fix a number of low and medium severity issues. For an overview of what was fixed, see the security advisory.

One of the fixes may well break some of your use cases in Jenkins, at least until plugins have been adapted: SECURITY-170. This change removes parameters that are not defined on a job from the build environment. Until now, a job could even be unparameterized, and plugins were able to pass parameters anyway. Since build parameters are added to the environment variables of scripts run during a build, parameters such as PATH or DYLD_LIBRARY_PATH could be defined – on jobs which don't even expect those as build parameters – to change the behavior of builds.

A number of plugins define additional parameters for builds. For example, GitHub Pull Request Builder passes a number of additional parameters describing the pull request. The Release Plugin also allows adding several additional parameters to a build; as part of this security fix, these are not considered to be defined in the job.

Please see this wiki page for a list of plugins known to be affected by this change.

Until these plugins have been adapted to work with the new restriction (and advice on that is available further down), you can define the following system properties to work around this limitation, at least for a time:

  • Set hudson.model.ParametersAction.keepUndefinedParameters to true, e.g. java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war to revert to the old behavior of allowing any build parameters. Depending on your environment, this may be unsafe, as it opens you up to attacks as described above.
  • Set hudson.model.ParametersAction.safeParameters to a comma-separated list of safe parameter names, e.g. java -Dhudson.model.ParametersAction.safeParameters=FOO,BAR_baz,quX -jar jenkins.war.

I realize this change, among a few others that improve the security of Jenkins, may be difficult for some to adapt to, but given the valuable secrets typically stored in Jenkins, I'm certain that this is the correct approach. We made sure to release this fix with the options described above, so that this change doesn't block those that rely on this behavior from updating.

Developers have several options to adapt to this change:

  • ParametersAction actually stores all parameters, but getParameters() only returns those that are defined on the job. The new method getAllParameters() returns all of them. This can be used, for example by EnvironmentContributor extensions, to add known safe parameters to build environments (a rough sketch follows this list).
  • Don't pass extra arguments, but define a QueueAction for your metadata instead. Those can still be made available to the build environment as needed.
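
As a rough illustration of the first option, an EnvironmentContributor could look like the following Groovy sketch. The class name and the MY_PLUGIN_PARAM parameter are invented, and the wiring is simplified; treat it as an outline rather than working plugin code.

import hudson.EnvVars
import hudson.Extension
import hudson.model.EnvironmentContributor
import hudson.model.ParametersAction
import hudson.model.Run
import hudson.model.TaskListener

@Extension
class MyPluginParamContributor extends EnvironmentContributor {
    @Override
    void buildEnvironmentFor(Run r, EnvVars envs, TaskListener listener) {
        def action = r.getAction(ParametersAction)
        if (action == null) {
            return
        }
        // getAllParameters() still includes parameters that are not defined on the job,
        // unlike getParameters() after this fix.
        action.allParameters.each { p ->
            if (p.name == 'MY_PLUGIN_PARAM') {   // only re-expose a parameter known to be safe
                envs.put(p.name, String.valueOf(p.value))
            }
        }
    }
}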

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

SF JAM Report: Scaling Jenkins for Continuous Delivery with Azure


A few weeks ago, my colleague Brian Dawson and I were invited to present on Scaling Jenkins for Continuous Delivery with Microsoft Azure in Microsoft’s Reactor space. Azure is Microsoft’s public cloud offering and one of the many tools available to Jenkins users for adding elastic compute capacity, among other things, to their build/test/deploy infrastructure. While our presentations are applicable to practically any cloud-based Jenkins environment, Thiago Almeida and Oguz Pastirmaci from Microsoft were also on hand and presented some interesting Azure-specific offerings like Azure Container Service with Jenkins.

While we do not have video from the meetup, Brian and I did record a session with Thiago and Oguz for Channel9 which covers much of the same content:

To kick off the meetup, we asked attendees a few polling questions and received very telling responses:

  • How big is your Development/IT organization?

  • What is your role?

  • By show of hands do you practice CI/CD/DevOps/etc?

  • At what scale (tooling and practice)?

The responses indicated that the majority of attendees were from small to medium organizations where they practiced Continuous Delivery across multiple teams. A notable 25% or more of attendees considered themselves "fullstack", participating in all of the roles of Developer, QA, and Operations. That is interesting when paired with the high number (~80%) of those who practice CD, and is likely because modern teams with mature CD practices tend to blur the traditional lines of Developer, QA and Operations. However, in my experience, while this is often the case for small to medium companies, in large organizations team members tend to fall into the traditional roles, with CD providing the practice and platform to unify teams across roles.

— Brian Dawson

After gauging the audience, Thiago and Brian reviewed Continuous Delivery (CD) and implementing it at scale. They highlighted the fact that CD is being rapidly adopted across teams and organizations, providing the ability to deliver a demonstrably higher quality product, to ship more rapidly than before and to keep team members happier.

However, when organizations fail to properly support CD as they scale, they run into issues such as developers acting as administrators at the cost of productivity, potential lack of security and/or exposure of IP, and difficulty in sharing best practices across teams.

Thiago then highlighted that properly scaling CD practices in the organization along with the infrastructure itself can alleviate these issues, and discussed the benefits of scaling CD onto cloud platforms to provide "CD-as-a-Service."

Overall I found the "theory" discussion to be on point: continuous delivery is neither just a technology problem nor just a people problem. Successful organizations scale their processes and tooling together.

The slides from our respective presentations are linked below:

I hope you join us at future San Francisco JAMs!

The State of Jenkins Area Meetups (JAM)


Recently, the Jenkins project announced the release of Jenkins 2.0, the first major release after 10 years and 655 weekly releases. This has been a major milestone for Jenkins and its growing community of developers, testers, designers and other users in the software delivery process.

With its rising popularity and wide adoption, the Jenkins community continues to grow and evolve, now numbering in the millions. Jenkins community meetup activity has risen to an all-time high since the first Jenkins meetup was established on August 23, 2010, in San Francisco.

Over the last six months the number of Jenkins Area Meetup (JAM) groups has grown from 5 to 30, with coverage in Asia, North America, South America and Europe. That’s an average growth of 4 new JAMs per month.

[Map: Jenkins Area Meetup groups around the world]

As of today, there are over 4,100 Jenkins fans within the Jenkins meetup community. This is the result of contributions from community JAM leaders who have volunteered their time to provide a platform for learning, sharing and networking all things Jenkins within their local communities.

For anyone who has not organized a meetup before, there are many moving parts that have to come together at a specific location, date and time. This process takes significant effort to methodically plan out, from planning the food and beverages to securing speaker(s), a venue, audio/visual setup and technical logistics, and of course promoting the meetup. It does take a level of passion and effort to make it all happen.

Many THANKS to the 55 JAM leaders, who share this passion - they have successfully organized over 41 meetups within the past six months in North America, South America and Europe. That’s about 6 meetups a month!

[Chart: Jenkins Area Meetups organized over time]

There are still plenty of opportunities to be a JAM organizer. If there is not a JAM near you, we’d love to hear from you! Here’s how you can get started.

[Photos: Toulouse, Seville, Peru and Barcelona JAMs]

Partnering with Microsoft to run Jenkins infrastructure on Azure


I am pleased to announce that we have partnered with Microsoft to migrate and power the Jenkins project’s infrastructure with Microsoft Azure. The partnership comes at an important time: after the recent launch of Jenkins 2.0, Jenkins users are adopting Pipeline as Code and many other plugins at an increasing rate, elevating the importance of Jenkins infrastructure to the overall success of the project. That strong and continued growth has brought new demands to our infrastructure’s design and implementation, requiring the next step in its evolution. This partnership helps us grow with the rest of the project by unifying our existing infrastructure under one comprehensive, modern and scalable platform.

In March we discussed the potential partnership in our regularly scheduled project meeting, highlighting some of the infrastructure challenges that we face:

  • Currently we have infrastructure in four different locations, with four different infrastructure providers, each with their own APIs and tools for managing resources, each with varying capabilities and capacities.

  • Project infrastructure is managed by a team of volunteers, operating more than 15 different services and managing a number of additional external services.

  • Our current download/mirror network, while geographically distributed, is relatively primitive and its implementation prevents us from using more modern distribution best practices.

In essence, five years of tremendous growth for Jenkins has outpaced our organically grown, unnecessarily complex, project infrastructure. Migrating to Azure simplifies and improves our infrastructure in a dramatic way that would not be possible without a comprehensive platform consisting of: compute, CDN, storage and data-store services. Our partnership covers, at minimum, the next three years of the project’s infrastructure needs, giving us a great home for the future.

Azure also enables a couple of projects that I have long been dreaming of providing to Jenkins users and contributors:

  • End-to-end TLS encrypted distribution of Jenkins packages, plugins and metadata via the Azure CDN.

  • More complete build/test/release support and capacity on ci.jenkins.io for plugin developers using Azure Container Service and generic VMs.

The Jenkins infrastructure is all open source which means all of our Docker containers, Puppet code and many of our tools are all available on GitHub. Not only can you watch the migration process to Azure as it happens, but I also invite you to participate in making our project’s infrastructure better (join us in the #jenkins-infra channel on Freenode or our mailing list).

Suffice it to say, I’m very excited about the bright [blue] future for the Jenkins project and the infrastructure that powers it!

GSoC Project Intro: External Workspace Manager Plugin


About myself

My name is Alexandru Somai. I’m pursuing a major in Software Engineering at the Babes-Bolyai University of Cluj-Napoca, Romania. I have more than two years of hands-on experience working in software development.

I enjoy writing code in Java, Groovy and JavaScript. The technologies and frameworks that I’m most familiar with are: Spring Framework, Spring Security, Hibernate, JMS, Web Services, JUnit, TestNG and Mockito. For build tooling and continuous integration, I use Maven and Jenkins. I’m a passionate software developer who is always learning and always looking for new challenges. I want to start contributing to the open source community, and Google Summer of Code is a starting point for me.

Project summary

Currently, Jenkins’ build workspace may become very large in size due to the fact that some compilers generate very large volumes of data. The existing plugins that share the workspace across builds are able to do this by copying the files from one workspace to another, a process which is inefficient. A solution is to have a Jenkins plugin that is able to manage and reuse the same workspace between multiple builds.

As part of the Google Summer of Code 2016 I will be working on the External Workspace Manager plugin. My mentors for this project are Oleg Nenashev and Martin d’Anjou. This plugin aims to provide an external workspace management system. It should facilitate workspace sharing and reuse across multiple Jenkins jobs. It should eliminate the need to copy, archive or move files. The plugin will be written for Pipeline jobs.

Usage

Prerequisites

  1. Multiple physical disks accessible from Master.

  2. The same physical disks must be accessible from Jenkins Nodes (renamed to Agents in Jenkins 2.0).

  3. In the Jenkins global configuration, define a disk pool (or many) that will contain the physical disks.

  4. In each Node configuration, define the mounting point from the current node to each physical disk.

The following diagram gives you an overview of what an External Workspace Manager configuration may look like:

[Diagram: example External Workspace Manager configuration]

Example one

Let’s assume that we have one Jenkins job. In this job, we want to use the same workspace on multiple Jenkins nodes. Our pipeline code may look like this:

stage ('Stage 1. Allocate workspace')
def extWorkspace = exwsAllocate id: 'diskpool1'

node ('linux') {
    exws (extWorkspace) {
        stage('Stage 2. Build on the build server')
        git url: '...'
        sh 'mvn clean install'
    }
}

node ('test') {
    exws (extWorkspace) {
        stage('Stage 3. Run tests on a test machine')
        sh 'mvn test'
    }
}

Note: The stage() steps are optional from the External Workspace Manager plugin perspective.

Stage 1. Allocate workspace

The exwsAllocate step selects a disk from diskpool1 (default behavior: the disk with the most available size). On that disk, let’s say disk1, it allocates a directory. The computed directory path is: /physicalPathOnDisk/$JOB_NAME/$BUILD_NUMBER.

For example, let’s assume that the $JOB_NAME is integration and the $BUILD_NUMBER is 14. Then, the resulting path is: /jenkins-project/disk1/integration/14.

Stage 2. Build on the build server

All the nodes labeled linux must have access to the disks defined in the disk pool. In the Jenkins Node configurations we have defined the local paths that are the mounting points to each disk.

The exws step concatenates the node’s local path with the path returned by the exwsAllocate step. In our case, the node labeled linux has its local path to disk1 defined as: /linux-node/disk1/. So, the complete workspace path is: /linux-node/disk1/jenkins-project/disk1/integration/14.

Stage 3. Run tests on a test machine

Further, we want to run our tests on a different node, but we want to reuse the previously created workspace.

In the node labeled test we have defined the local path to disk1 as: /test-node/disk1/. By applying the exws step, our tests will be able to run in the same workspace as the build. Therefore, the path is: /test-node/disk1/jenkins-project/disk1/integration/14.

Example two

Let’s assume that we have two Jenkins jobs, one called upstream and the other one called downstream. In the upstream job, we clone the repository and build the project; in the downstream job we run the tests. In the downstream job we don’t want to clone and re-build the project; we need to use the same workspace created in the upstream job, and we have to be able to do so without copying the workspace content from one location to another.

The pipeline code in the upstream job is the following:

stage ('Stage 1. Allocate workspace in the upstream job')
def extWorkspace = exwsAllocate id: 'diskpool1'

node ('linux') {
    exws (extWorkspace) {
        stage('Stage 2. Build in the upstream job')
        git url: '...'
        sh 'mvn clean install'
    }
}

And the downstream job’s Pipeline code is:

stage ('Stage 3. Allocate workspace in the downstream job')
def extWorkspace = exwsAllocate id: 'diskpool1', upstream: 'upstream'

node ('test') {
    exws (extWorkspace) {
        stage('Stage 4. Run tests in the downstream job')
        sh 'mvn test'
    }
}

Stage 1. Allocate workspace in the upstream job

The functionality is the same as in example one - stage 1. In our case, the allocated directory on the physical disk is: /jenkins-project/disk1/upstream/14.

Stage 2. Build in the upstream job

Same functionality as example one - stage 2. The final workspace path is: /linux-node/disk1/jenkins-project/disk1/upstream/14.

Stage 3. Allocate workspace in the downstream job

By passing the upstream parameter to the exwsAllocate step, it selects the most recent stable upstream workspace (default behavior). The workspace path pattern is: /physicalPathOnDisk/$UPSTREAM_NAME/$MOST_RECENT_STABLE_BUILD. Let’s assume that the last stable build number is 12; then the resulting path is: /jenkins-project/disk1/upstream/12.

Stage 4. Run tests in the downstream job

The exws step concatenates the node’s local path with the path returned by the exwsAllocate step in stage 3. In this scenario, the complete path for running tests is: /test-node/disk1/jenkins-project/disk1/upstream/12. It will reuse the workspace defined in the upstream job.

Additional details

You may find the complete project proposal, along with the design details, features, more examples and use cases, implementation ideas and milestones in the design document. The plugin repository will be available on GitHub.

A prototype version of the plugin should be available in late June and the releasable version in late August. I will be holding plugin functionality demos within the community.

I do appreciate any feedback. You may add comments in the design document. If you are interested in having a verbal conversation, feel free to join our regular meetings on Mondays at 12:00 PM UTC on the Jenkins hangout. I will be posting updates from time to time about the plugin status on the Jenkins developers mailing list.

Refactoring a Jenkins plugin for compatibility with Pipeline jobs

This is a guest post by Chris Price. Chris is a software engineer at Puppet, and has been spending some time lately on automating performance testing using the latest Jenkins features.

In this blog post, I’m going to attempt to provide some step-by-step notes on how to refactor an existing Jenkins plugin to make it compatible with the new Jenkins Pipeline jobs. Before we get to the fun stuff, though, a little background.

How’d I end up here?

Recently, I started working on a project to automate some performance tests for my company’s products. We use the awesome Gatling load testing tool for these tests, but we’ve largely been handling the testing very manually to date, due to a lack of bandwidth to get them automated in a clean, maintainable, extensible way. We have a years-old Jenkins server where we use the gatling jenkins plugin to track the history of certain tests over time, but the setup of the Jenkins instance was very delicate and not easy to reproduce, so it had fallen into a state of disrepair.

Over the last few days I’ve been putting some effort into getting things more automated and repeatable so that we can really maximize the value that we’re getting out of the performance tests. With some encouragement from the fine folks in the #jenkins IRC channel, I ended up exploring the JobDSL plugin and the new Pipeline jobs. Combining those two things with some Puppet code to provision a Jenkins server via the jenkins puppet module gave me a really nice way to completely automate my Jenkins setup and get a seed job in place that would create my perf testing jobs. And the Pipeline job format is just an awesome fit for what I wanted to do in terms of being able to easily monitor the stages of my performance tests, and to make the job definitions modular so that it would be really easy to create new performance testing jobs with slight variations.

So everything’s going GREAT up to this point. I’m really happy with how it’s all shaping up. But then…​ (you knew there was a "but" coming, right?) I started trying to figure out how to add the Gatling Jenkins plugin to the Pipeline jobs, and kind of ran into a wall.

As best as I could tell from my Googling, the plugin was probably going to require some modifications in order to be able to be used with Pipeline jobs. However, I wasn’t able to find any really cohesive documentation that definitively confirmed that or explained how everything fits together.

Eventually, I got it all sorted out. So, in hopes of saving the next person a little time, and encouraging plugin authors to invest the time to get their plugins working with Pipeline, here are some notes about what I learned.

Spoiler: if you’re just interested in looking at the individual git commits that I made on my way to getting the plugin working with Pipeline, have a look at this GitHub branch.

Creating a pipeline step

The main task that the Gatling plugin performs is to archive Gatling reports after a run. I figured that the end game for this exercise was that I was going to end up with a Pipeline "step" that I could include in my Pipeline scripts, to trigger the archiving of the reports. So my first thought was to look for an existing plugin / Pipeline "step" that was doing something roughly similar, so that I could use it as a model. The Pipeline "Snippet Generator" feature (create a pipeline job, scroll down to the "Definition" section of its configuration, and check the "Snippet Generator" checkbox) is really helpful for figuring out stuff like this; it is automatically populated with all of the steps that are valid on your server (based on which plugins you have installed), so you can use it to verify whether or not your custom "step" is recognized, and also to look at examples of existing steps.

Looking through the list of existing steps, I figured that the archive step was pretty likely to be similar to what I needed for the gatling plugin:

archive snippet

So, I started poking around to see what magic it was that made that archive step show up there. There are some mentions of this in the pipeline-plugin DEVGUIDE.md and the workflow-step-api-plugin README.md, but the real breakthrough for me was finding the definition of the archive step in the workflow-basic-steps-plugin source code.

With that as an example, I was able to start poking at getting a gatlingArchive step to show up in the Snippet Generator. The first thing that I needed to do was to update the gatling-plugin project’s pom.xml to depend on a recent enough version of Jenkins, as well as specify dependencies on the appropriate Pipeline plugins.

Once that was out of the way, I noticed that the archive step had some tests written for it, using what looks to be a pretty awesome test API for pipeline jobs and plugins. Based on those archive tests, I added a skeleton for a test for the gatlingArchive step that I was about to write.
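
For reference, a minimal sketch of what such a skeleton might look like, assuming the standard JenkinsRule test harness (the class and job names here are illustrative, not the plugin’s actual test code):

import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition;
import org.jenkinsci.plugins.workflow.job.WorkflowJob;
import org.junit.Rule;
import org.junit.Test;
import org.jvnet.hudson.test.JenkinsRule;

public class GatlingArchiverStepTest {
    @Rule public JenkinsRule j = new JenkinsRule();

    @Test
    public void gatlingArchiveStepIsRecognized() throws Exception {
        // A trivial Pipeline that calls the new step inside a node block.
        WorkflowJob job = j.jenkins.createProject(WorkflowJob.class, "gatling-archiver-test");
        job.setDefinition(new CpsFlowDefinition("node { gatlingArchive() }", true));
        // For a skeleton, asserting that the build succeeds is enough;
        // assertions about the archived reports come later.
        j.assertBuildStatusSuccess(job.scheduleBuild2(0));
    }
}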

Then, I moved on to actually creating the step. The meat of the code was this:

public class GatlingArchiverStep extends AbstractStepImpl {
    @DataBoundConstructor
    public GatlingArchiverStep() {}

    @Extension
    public static class DescriptorImpl extends AbstractStepDescriptorImpl {
        public DescriptorImpl() { super(GatlingArchiverStepExecution.class); }

        @Override
        public String getFunctionName() { return "gatlingArchive"; }

        @Nonnull @Override
        public String getDisplayName() { return "Archive Gatling reports"; }
    }
}

Note that in that commit I also added a config.jelly file. This is how you define the UI for your step, which will show up in the Snippet Generator. In the case of this Gatling step there’s really not much to configure, so my config.jelly is basically empty.

With that (and the rest of the code from that commit) in place, I was able to fire up the development Jenkins server (via mvn hpi:run, and note that you need to go into the "Manage Plugins" screen on your development server and install the Pipeline plugin once before any of this will work) and visit the Snippet Generator to see if my step showed up in the dropdown:

gatlingArchive snippet

GREAT SUCCESS!

This step doesn’t actually do anything yet, but it’s recognized by Jenkins and can be included in your pipeline scripts at that point, so, we’re on our way!

The step metastep

The step that we created above is a first-class DSL addition that can be used in Pipeline scripts. There’s another way to make your plugin usable from a Pipeline job, without making it a first-class build step. This is by use of the step "metastep", mentioned in the pipeline-plugin DEVGUIDE. When using this approach, you simply refactor your Builder or Publisher to extend SimpleBuildStep, and then you can reference the build step from the Pipeline DSL using the step method.

In the Jenkins GUI, go to the config screen for a Pipeline job and click on the Snippet Generator checkbox. Select step: General Build Step from the dropdown, and then have a look at the options that appear in the Build Step dropdown. To compare with our previous work, let’s see what "Archive the artifacts" looks like:

archive metastep plugin

From the snippet generator we can see that it’s possible to trigger an Archive action with syntax like:

step([$class: 'ArtifactArchiver', artifacts: 'foo*', excludes: null])

This is the "metastep". It’s a way to trigger any build action that implements SimpleBuildStep, without having to actually implement a real "step" that extends the Pipeline DSL like we did above. In many cases, it might only make sense to do one or the other in your plugin; you probably don’t really need both.

For the purposes of this tutorial, we’re going to do both. For a couple of reasons:

  1. Why the heck not? :) It’s a good demonstration of how the metastep stuff works.

  2. Because implementing the "for realz" step will be a lot easier if the Gatling action that we’re trying to call from our gatlingArchive() syntax is using the newer Jenkins APIs that are required for subclasses of SimpleBuildStep.

GatlingPublisher is the main build action that we’re interested in using in Pipeline jobs. So, with all of that in mind, here’s our next goal: get step([$class: 'GatlingPublisher', ...) showing up in the Snippet Generator.

The javadocs for the SimpleBuildStep class have some notes on what you need to do when porting an existing Builder or Publisher over to implement the SimpleBuildStep interface. In all likelihood, most of what you’re going to end up doing is to replace occurrences of AbstractBuild with references to the Run class, and replace occurrences of AbstractProject with references to the Job class. The APIs are pretty similar, so it’s not too hard to do once you understand that that’s the game. There is some discussion of this in the pipeline-plugin DEVGUIDE.
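
As a rough sketch of the shape this takes (illustrative only, not the actual Gatling plugin code; the class name is made up), a ported publisher ends up with a Run-based perform method:

import hudson.FilePath;
import hudson.Launcher;
import hudson.model.Run;
import hudson.model.TaskListener;
import hudson.tasks.BuildStepMonitor;
import hudson.tasks.Recorder;
import jenkins.tasks.SimpleBuildStep;
import java.io.IOException;

public class MyReportPublisher extends Recorder implements SimpleBuildStep {

    // SimpleBuildStep replaces perform(AbstractBuild, Launcher, BuildListener)
    // with a Run-based signature that works for both Freestyle and Pipeline builds.
    @Override
    public void perform(Run<?, ?> run, FilePath workspace, Launcher launcher, TaskListener listener)
            throws InterruptedException, IOException {
        // Where the old code used build.getProject(), use run.getParent();
        // run.addAction(...), run.getRootDir(), etc. work as before.
        listener.getLogger().println("Archiving reports for " + run.getFullDisplayName());
    }

    @Override
    public BuildStepMonitor getRequiredMonitorService() {
        return BuildStepMonitor.NONE; // SimpleBuildStep implementations should not require build synchronization
    }

    // A real publisher also needs its @DataBoundConstructor and Descriptor, omitted here for brevity.
}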

For the Gatling plugin, my initial efforts to port the GatlingPublisher over to implement SimpleBuildStep only required the AbstractBuild to Run refactor.

After making these changes, I fired up the development Jenkins server, and, voila!

gatling metastep snippet

So, now, we can add a line like this to a Pipeline build script:

step([$class: 'GatlingPublisher', enabled: true])

And it’ll effectively be the same as if we’d added the Gatling "Post-Build Action" to an old-school Freestyle project.

Well…​ mostly.

Build Actions vs. Project Actions

At this point our modified Gatling plugin should work the same way as it always did in a Freestyle build, but in a Pipeline build, it only partially works. Specifically, the Gatling plugin implements two different "Actions" to surface things in the Jenkins GUI: a "Build" action, which adds the Gatling icon to the left sidebar in the GUI when you’re viewing an individual build in the build history of a job, and a "Project" action, which adds that same icon to the left sidebar of the GUI of the main page for a job. The "Project" action also adds a "floating panel" on the main job page, which shows a graph of the historical data for the Gatling runs.

In a Pipeline job, though, assuming we’ve added a call to the metastep, we’re only seeing the "Build" actions. Part of this is because, in the last round of changes that I linked, we only modified the "Build" action, and not the "Project" action. Running the metastep in a Pipeline job has no visible effect at all on the project/job page at this point. So that’s what we’ll tackle next.

The key thing to know about getting "Project" actions working in a Pipeline job is that, with a Pipeline job, there is no way for Jenkins to know up front what steps or actions are going to be involved in a job. It’s only after the job runs once that Jenkins has a chance to introspect what all the steps were. As such, there’s no list of Builders or Publishers that it knows about up front to call getProjectAction on, like it would with a Freestyle job.

This is where SimpleBuildStep.LastBuildAction comes into play. This is an interface that you can add to your Build actions, which gives them their own getProjectActions method that Jenkins recognizes and will call when rendering the project page after the job has been run at least once.

The build action class now constructs an instance of the Project action and makes it accessible via getProjectActions (which comes from the LastBuildAction interface):

public class GatlingBuildAction implements Action, SimpleBuildStep.LastBuildAction {
    public GatlingBuildAction(Run<?, ?> build, List<BuildSimulation> sims) {
        this.build = build;
        this.simulations = sims;

        List<GatlingProjectAction> projectActions = new ArrayList<>();
        projectActions.add(new GatlingProjectAction(build.getParent()));
        this.projectActions = projectActions;
    }

    @Override
    public Collection<? extends Action> getProjectActions() {
        return this.projectActions;
    }
}

After making these changes, if we run the development Jenkins server, we can see that after the first successful run of the Pipeline job that calls the GatlingPublisher metastep, the Gatling icon indeed shows up in the sidebar on the main project page, and the floating box with the graph shows up as well:

gatling project page

Making our DSL step do something

So at this point we’ve got the metastep syntax working from end-to-end, and we’ve got a valid Pipeline DSL step (gatlingArchive()) that we can use in our Pipeline scripts without breaking anything…​ but our custom step doesn’t actually do anything. Here’s the part where we tie it all together…​ and it’s pretty easy! All we need to do is to make our step "Execution" class instantiate a Publisher and call perform on it.

As per the notes in the pipeline-plugin DEVGUIDE, we can use the @StepContextParameter annotation to inject the objects that we need to pass to the Publisher’s perform method:

public class GatlingArchiverStepExecution extends AbstractSynchronousNonBlockingStepExecution<Void> {
    @StepContextParameter private transient TaskListener listener;
    @StepContextParameter private transient FilePath ws;
    @StepContextParameter private transient Run build;
    @StepContextParameter private transient Launcher launcher;

    @Override
    protected Void run() throws Exception {
        listener.getLogger().println("Running Gatling archiver step.");

        GatlingPublisher publisher = new GatlingPublisher(true);
        publisher.perform(build, ws, launcher, listener);

        return null;
    }
}

After these changes, we can fire up the development Jenkins server, and hack up our Pipeline script to call gatlingArchive() instead of the metastep step([$class: 'GatlingPublisher', enabled: true]) syntax. One of these is nicer to type and read than the other, but I’ll leave that as an exercise for the reader.

Fin

With that, our plugin now works just as well in the brave new Pipeline world as it did in the olden days of Freestyle builds. I hope these notes save someone else a little bit of time and googling on their way to writing (or porting) an awesome plugin for Jenkins Pipeline jobs!

Introducing Blue Ocean: a new user experience for Jenkins


In recent years developers have become rapidly attracted to tools that are not only functional but are designed to fit into their workflow seamlessly and are a joy to use. This shift represents a higher standard of design and user experience that Jenkins needs to rise to meet.

We are excited to share and invite the community to join us on a project we’ve been thinking about over the last few months called Blue Ocean.

Blue Ocean is a project that rethinks the user experience of Jenkins, modelling and presenting the process of software delivery by surfacing information that’s important to development teams with as few clicks as possible, while still staying true to the extensibility that is core to Jenkins.

Pipeline execution

While this project is in the alpha stage of development, the intent is that Jenkins users can install Blue Ocean side-by-side with the Jenkins Classic UI via a plugin.

Not all the features listed on this blog are complete but we will be hard at work over the next few months preparing Blue Ocean for general use. We intend to provide regular updates on this blog as progress is made.

Blue Ocean is open source today and we invite you to give us feedback and to contribute to the project.

Blue Ocean will provide development teams with:

New modern user experience

The UI aims to improve clarity, reduce clutter and navigational depth, and make the user experience very concise. A modern visual design gives developers much-needed relief throughout their daily usage, and screens respond instantly to changes on the server, making manual page refreshes a thing of the past.

Project dashboard

Advanced Pipeline visualisations with built-in failure diagnosis

Pipelines are visualised on screen along with the steps and logs to allow simplified comprehension of the continuous delivery pipeline – from the simple to the most sophisticated scenarios.

Scrolling through 10,000 line log files is a thing of the past. Blue Ocean breaks down your log per step and calls out where your build failed.

Failing Pipeline

Branch and Pull Request awareness

Modern pipelines make use of multiple Git branches, and Blue Ocean is designed with this in mind. Drop a Jenkinsfile that defines your pipeline into your Git repository, and Jenkins will automatically discover and start building any branches and validating any pull requests.

Jenkins will report the status of your pipeline right inside GitHub or Bitbucket on all your commits, branches or pull requests.

Pull request view

Personalised View

Favourite any pipelines, branches or pull requests and see them appear on your personalised dashboard. Intelligence is being built into the dashboard: jobs that need your attention, say a Pipeline waiting for approval or a failing job that you have recently changed, appear at the top of the dashboard.

Personalized dashboard

You can read more about Blue Ocean and its goals on the project page, and developers should watch the developers list for more information.


For Jenkins developers and plugin authors:

Jenkins Design “Language”

The Jenkins Design Language (JDL) is a set of standardised React components and a style guide that help developers create plugins that retain the look and feel of Blue Ocean in an effortless way. We will be publishing more on the JDL, including the style guide and developer documentation, over the next few weeks.

Modern JavaScript toolchain

The Jenkins plugin toolchain has been extended so that developers can use ES6, React and NPM in their plugins without endless yak-shaving. Jenkins js-modules are already in use in Jenkins today, and Blue Ocean builds on this, using the same tooling.

Client side Extension points

Client Side plugins use Jenkins plugin infrastructure. The Blue Ocean libraries built on ES6 and React.js provide an extensible client side component model that looks familiar to developers who have built Jenkins plugins before. Client side extension points can help isolate failure, so one bad plugin doesn’t take a whole page down.

Server Sent Events

Server Sent Events (SSE) allow plugin developers to tap into changes of state on the server and make their UI update in real time (watch this for a demo).


To make Blue Ocean a success, we’re asking for help and support from Jenkins developers and plugin authors. Please join in our Blue Ocean discussions on the Jenkins Developer mailing list and the #jenkins-ux IRC channel on Freenode!


GSoC Project Intro: Improving Job Creation/Configuration


About me

My name is Samat Davletshin and I am from HSE University in Moscow, Russia. I interned at Intel and Yandex, and cofounded a startup project where I personally developed the front-end and back-end of the website.

I am excited to participate in GSoC with Jenkins this summer as a chance to make a positive change for thousands of users as well as to learn from great mentors.

Abstract

Although powerful, Jenkins’ new job creation and configuration process may be non-obvious and time-consuming. This can be improved by making the UI more intuitive, concise, and functional. I plan to achieve this by simplifying the new job creation and configuration process to focus on essential elements, and by embedding new functionality.

Deliverables

New job creation

New job name validation

Initially, job validation was unresponsive, job creation was still allowed with an invalid name, and some allowed characters even crashed Jenkins. Happily, two of these problems were fixed in recent improvements, and I plan to add only a real-time name check for invalid characters.

Popup window

Popup window

Jenkins has a lot of page reloads that may be time-consuming. The creation of a new job is a simple process requiring only a job name and a job type, so the UI may be improved by reducing page reloads and putting the new job creation interface in a dialog window. Implementing such a popup would likely consist of three steps: rendering a dialog window, receiving JSON with the job types, and sending a POST request to create the job.

Configuration page

Changing help information

Changing help information

As reported by some users, it would be useful to have the ability to change help information. Installation administrators would be able to change the help info and choose editing rights for other users. That would likely require creating extension points and a plugin using them. I would also like to include the ability to style the help information using Markdown, as shown above.

[Optional] The functionality is extended to the creation of crowd-sourced "wiki-like" documentation

As in the localization plugin, the changes are gathered and applied beyond a particular user’s installation.

More intuitive configuration page

Pursuing a solution to this issue

Although there are a lot of improvements in the new configuration page, there is always room for more. An advanced job still has a very complicated and hard-to-read configuration page. It is still open to discussion, but I may approach it by better dividing the configuration into parts, such as with accordion-based navigation.

Home page

[Optional] Removing "My Views" page

Removing My Views

The "My Views" page may unnecessarily complicate essential side-panel navigation. Since it contains very little functionality, its functions may be moved to the home page and the whole page may be removed. That may be implemented by adding icons to the "My Views" tabs. Additionally, the standard view creation page can create either of the types.

[Optional] Reducing the number of UI elements

The home page may contain some UI elements that are not essential and are rarely used. Elements such as "enable auto refresh", "edit description", "icon sizes", "legend" and "RSS" may be removed from the home page and placed under "Manage Jenkins" or an upper menu. It is also possible to create new extension points to support new UI elements through plugins.

Credentials store page

[Optional] Grouping credentials and their domains

Grouping credentials

The credentials page has too many reloads and requires many clicks to get to a particular credential’s page. That may be improved by removing the last page and showing credentials grouped under their domains.

Current progress

By May 25th I had learned about the structure and tools of Jenkins and started working on the first project:

  • I started with new job name validation first. Luckily, recent updates (the changes by recena) implemented all of the changes I proposed except a real-time check of name validity. Here I proposed a change which fixes it by sending a GET request on the keyup event in addition to blur.

  • I also made a new job popup using the existing interface.

I used the Remodal library for the popup and put the existing new job container inside it. Surprisingly, it was fully functional right away. In the GIF you can see that the popup receives all job types and then successfully submits the POST form, creating a new job. I think that could be a good first step. Next I can start changing the window itself.

New display of Pipeline’s "snippet generator"


Those of you updating the Pipeline Groovy plugin to 2.3 or later will notice a change to the appearance of the configuration form. The Snippet Generator tool is no longer a checkbox enabled inside the configuration page. Rather, there is a link Pipeline Syntax which opens a separate page with several options. (The link appears in the project’s sidebar; Jenkins 2 users will not see the sidebar from the configuration screen, so as of 2.4 there is also a link beneath the Pipeline definition.)

Snippet Generator

Snippet Generator continues to be available for learning the available Pipeline steps and creating sample calls given various configuration options. The new page also offers clearer links to static reference documentation, online Pipeline documentation resources, and an IntelliJ IDEA code completion file (Eclipse support is unfinished).

One motivation for this change (JENKINS-31831) was to give these resources more visual space and more prominence. But another consideration was that people using multibranch projects or organization folders should be able to use Snippet Generator when setting up the project, before any code is committed.

Those using the Pipeline Multibranch plugin or organization folder plugins should upgrade to 2.4 or later to see these improvements as well.

GSOC Project Intro: Automatic Plugin Documentation


About me

I am Cynthia Anyango from Nairobi, Kenya. I am a second year student at Maseno University. I am currently specializing in Ruby on Rails and trying to learn Python. I recently started contributing to open source projects. My major contribution was at Mozilla, where I worked with the QA team for Cloud Services. I did manual and automated tests for various cloud services, and I wrote documentation too. Above that, I am competent and always passionate about what I get my hands on.

Project summary

Currently, Jenkins plugin documentation is stored in Confluence. Sometimes the documentation is scattered and outdated. In order to improve the situation we would like to follow the documentation-as-code approach: put docs in plugin repositories and then publish them on the project website using the awestruct engine. The project aims to implement a continuous deployment flow for documentation, powered by Jenkins and the Pipeline plugin.

The idea is to automatically pull in the README and other docs from GitHub, and show changelogs with versions and release dates. I will be designing file templates that will contain most of the documentation information required from plugin developers. Initially the files will be written in AsciiDoc. Plugin developers will get a chance to review the templates, and the templates will be prototyped by various plugin developers.

The docs will be automatically pulled from GitHub and published on Jenkins.io under the Documentation section.

My mentors are R. Tyler and Baptiste Mathus.

I hope to achieve this by 25th June, when we will have our mid-term evaluations.

I will update more on the progress.

Save up to 90% of CI cost on AWS with Jenkins and EC2 Spot Fleet

This is a guest post by Aleksei Besogonov, Senior Software Developer at Amazon Web Services.

Earlier this year, we published a case study on how Lyft has used Amazon EC2 Spot instances to save 75% on their continuous delivery infrastructure costs by simply changing four lines of code. Several other EC2 customers like Mozilla have also reduced the costs of their continuous integration, deployment and testing pipelines by up to 90% with Spot instances. You can view the current savings of Spot instances over EC2 On-demand instances using the Spot Bid Advisor:

bidadvisor

AWS Spot instances are spare EC2 instances that you can bid on. While your Spot instances may be terminated when EC2’s spare capacity declines, you can automatically replenish these instances and maintain your target capacity using EC2 Spot fleets. As each instance type and Availability Zone provides an alternative capacity pool, you can select multiple such pools to launch the lowest priced instances currently available by launching a Spot fleet on the Amazon EC2 Spot Requests console or using the AWS CLI/SDK tools.

In this walkthrough, we’ll show you how to configure Jenkins to automatically scale a fleet of Spot instances up or down depending on the number of jobs to be completed.

Request an Amazon EC2 Spot fleet

To get started, log in to the Amazon EC2 console, and click on Spot Requests in the left-hand navigation pane. Alternatively, you can log in directly to the Amazon EC2 Spot Requests console. Then click on the Request Spot Instances button at the top of the dashboard.

In the Spot instance launch wizard, select the Request & Maintain option to request a Spot fleet that automatically provisions the most cost-effective EC2 Spot instances, and replenishes them if interrupted. Enter an initial target capacity, choose an AMI, and select multiple instance types to automatically provision the lowest priced instances available.

Spot instance launch wizard

On the next page, ensure that you have selected a key pair, complete the launch wizard, and note the Spot fleet request ID.

Amazon EC2 Spot fleet automates finding the lowest priced instances for you, and enables your Jenkins cluster to maintain the required capacity; so, you don’t need any bidding algorithms to provision the optimal Spot instances over time.

Configure Jenkins

Install the Plugin

From the Jenkins dashboard, select Manage Jenkins, and then click Manage Plugins. On the Available tab, search for and select the EC2 Fleet Jenkins Plugin. Then click the Install button.

EC2 Fleet Jenkins Plugin installation

After the plugin installation is completed, select Manage Jenkins from the Jenkins dashboard, and click Configure System. In the Cloud section, select Amazon Spot Fleet to add a new Cloud.

Amazon Spot Fleet cloud configuration

Configure AWS Credentials

Next, we will configure the AWS and slave node credentials. Click the Add button next to AWS Credentials, select Jenkins, and enter your AWS Access Key, secret, and ID.

AWS credentials dialog

Next, click the Add button in the Spot fleet launcher to configure your slave agents with an SSH key. Select Jenkins, and enter the username and private key (from the key pair you configured in your Spot fleet request) as shown below.

Spot fleet launcher SSH credentials

Confirm that the AWS and SSH credentials you just added are selected. Then choose the region, and the Spot fleet request ID from the drop-down. You can also enter the maximum idle time before your cluster automatically scales down, and the maximum cluster size that it can scale up to.

Spot fleet region, request ID and scaling settings

Submit Jobs and View Status

After you have finished the previous step, you can view the EC2 Fleet Status in the left hand navigation pane on the Jenkins dashboard. Now, as you submit more jobs, Jenkins will automatically scale your Spot fleet to add more nodes. You can view these new nodes executing jobs under the Build Executor Status. After the jobs are done, if the nodes remain free for the specified idle time (configured in the previous step), then Jenkins releases the nodes, automatically scaling down your Spot fleet nodes.

EC2 Fleet Status on the Jenkins dashboard

Build faster and cheaper

If you have a story to share about your team or product, or have a question to ask, do leave a comment for us; we’d love to connect with you!

GSoC Project Intro: Usage Statistics Analysis


About myself

Hello, my name is Payal Priyadarshini. I am pursuing my major in Computer Science & Engineering at the Indian Institute of Technology Kharagpur, India. I am proficient in writing code in Python, C++ and Java, and I am currently getting familiar with (and hopefully good at) Groovy too.

I have internship experience at renowned companies like Google and VMware, where I worked with some exciting technologies, for example Knowledge Graphs, BigTable, SPARQL and RDF at Google. I am a passionate computer science student who is always interested in learning and looking for new challenges and technologies. That’s how I came across Google Summer of Code, where I am working on some exciting data mining problems which you will encounter below in this post.

Project Overview

Jenkins has collected anonymous usage information from more than 100,000 installations, which includes the set of plugins and their versions, as well as release history information for upgrades. This data can be used for various data mining experiments. The main goal of this project is to perform various analyses and studies over the available dataset to discover usage trends. This project will help us learn more about Jenkins usage by solving various problems, such as:

  • Plugin version installation trends, which will tell us about the version installation behaviour of a given plugin.

  • Spotting downgrades, which will warn us that something is wrong with the version from which downgrading was performed.

  • Correlating what users are saying (community rating) with what users are doing (upgrades/downgrades).

  • Distribution of cluster size, where clusters represent job and node counts, which approximate the size of an installation.

  • Finding sets of plugins which are likely to be used together, which will lay the groundwork for a plugin recommendation system.

As a part of Google Summer of Code 2016, I will be working on the above-mentioned problems. My mentors for the project are Kohsuke Kawaguchi and Daniel Beck. Some analyses have already been done over this data, but they are outdated, and the charts could be clearer and more interactive. This project aims to improve the existing statistics and generate the new ones discussed above.

Use Cases

This project covers a wide range of use cases derived from the problems mentioned above.

Use Case 1: Upgrade/Downgrade Analysis

Understanding the trend in upgrades and downgrades has many uses, some of which have already been explained earlier: measuring popularity, spotting downgrades, quickly warning about bad versions, etc.

Here we are analysing the trend in the installations of different versions of a given plugin. This use case will help us to know about:

  • Trend in the upgrade to the latest version released for a given plugin.

  • Trend in the popularity decline of previous versions after a new version is released.

  • Find the most popular plugin version at any given point of time.

Use Case 1.2: Spotting downgrades

Here we are interested to know how many installations were downgraded from a given version to a previously used version. The far-reaching goal of this analysis is to give a warning when something goes wrong with a new version release, which can be sensed from downgrades performed by users. This analysis can be accomplished by studying the monotonicity of the version number vs. timestamp graph for a given plugin.
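
As an illustration of the idea (a sketch, not project code, with a deliberately simplistic version comparison), downgrades can be spotted by walking an installation’s reported versions of a plugin in timestamp order and flagging any point where the version decreases:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DowngradeSpotter {

    /** Very simplified comparison of dotted numeric versions such as "1.10.2". */
    static int compareVersions(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int na = i < pa.length ? numericPrefix(pa[i]) : 0;
            int nb = i < pb.length ? numericPrefix(pb[i]) : 0;
            if (na != nb) {
                return Integer.compare(na, nb);
            }
        }
        return 0;
    }

    /** "10" -> 10, "2-beta" -> 2, "beta" -> 0. */
    static int numericPrefix(String s) {
        String digits = s.replaceAll("^(\\d*).*$", "$1");
        return digits.isEmpty() ? 0 : Integer.parseInt(digits);
    }

    /** Given one installation's versions of a plugin in timestamp order, return the versions it backed away from. */
    static List<String> downgradedFrom(List<String> versionsInTimeOrder) {
        List<String> result = new ArrayList<>();
        for (int i = 1; i < versionsInTimeOrder.size(); i++) {
            if (compareVersions(versionsInTimeOrder.get(i), versionsInTimeOrder.get(i - 1)) < 0) {
                result.add(versionsInTimeOrder.get(i - 1));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Monthly snapshots from a single (hypothetical) installation.
        System.out.println(downgradedFrom(Arrays.asList("1.9", "1.10", "1.11", "1.10", "1.12")));
        // Prints [1.11]: the installation moved back off 1.11, a hint that 1.11 may have a problem.
    }
}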

Use Case 1.3: Correlation with the perceived quality of a Jenkins release

To correlate what users are saying with what users are doing, we have community ratings, which tell us about the ratings and reviews of the releases and have the following parameters:

  • Used the release on production site w/o major issues.

  • Don’t recommend to other.

  • Tried but rolled it back to the previous version.

The first parameter can be calculated from the Jenkins usage data, and the third parameter is basically spotting downgrades (use case 1.2). But the second parameter is an expression of opinion which is not possible to calculate from the data. This analysis is just to get a subjective idea about the correlation.

Use Case 2: Plugin Recommendation System

This section involves laying the groundwork for the plugin recommendation system. The idea is to find out the sets of plugins which are most likely to be used together. Here we will follow both a content-based filtering and a collaborative filtering approach.

Collaborative Filtering

This approach is based upon analysing a large amount of information on installations’ behaviours and activities. We have an implicit form of data about the plugins: for every install ID, we know the set of plugins installed. We can use this information to construct a plugin usage graph where the nodes are plugins and the weight of the edge between two plugins is the number of installations in which both are installed together.
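
To make that concrete, here is a small illustrative sketch (the install IDs and plugin names are made up, not the real usage dataset) that computes such co-installation edge weights from a map of install IDs to plugin sets:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PluginCoInstallGraph {
    public static void main(String[] args) {
        // Hypothetical anonymized usage data: install ID -> set of installed plugins.
        Map<String, Set<String>> installs = new HashMap<>();
        installs.put("inst-1", new HashSet<>(Arrays.asList("git", "junit", "workflow-aggregator")));
        installs.put("inst-2", new HashSet<>(Arrays.asList("git", "junit")));
        installs.put("inst-3", new HashSet<>(Arrays.asList("git", "workflow-aggregator")));

        // Edge weight = number of installations where both plugins of the pair are present.
        Map<String, Integer> edgeWeights = new HashMap<>();
        for (Set<String> plugins : installs.values()) {
            List<String> sorted = new ArrayList<>(plugins);
            Collections.sort(sorted); // canonical order so "a -- b" and "b -- a" count as the same edge
            for (int i = 0; i < sorted.size(); i++) {
                for (int j = i + 1; j < sorted.size(); j++) {
                    edgeWeights.merge(sorted.get(i) + " -- " + sorted.get(j), 1, Integer::sum);
                }
            }
        }

        // Pairs with the highest weights are the plugins most likely to be used together.
        edgeWeights.forEach((pair, weight) -> System.out.println(pair + " : " + weight));
    }
}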

Content-based Filtering

This method is based on the properties or content of an item, for example recommending items that are similar to those a user liked in the past. Here, we are utilizing the Jenkins plugin dependency graph to learn about the properties of a plugin. This graph tells us which plugins depend on a given plugin, as well as which plugins it depends on. Here is an example of how this graph can be used for content-based filtering: if a user is using the "CloudBees Cloud Connector" plugin, then we can recommend the "CloudBees Registration Plugin", as both plugins depend on the "CloudBees Credentials Plugin".

Additional Details

You may find the complete project proposal, along with the detailed design of the use cases and their implementation details, in the design document.

A complete version of use case 1 (upgrade & downgrade analysis) should be available in late June, and a basic version of the plugin recommendation system will be available in late July.

I do appreciate any kind of feedback and suggestions. You may add comments in the design doc. I will be posting updates about the statistics generation status on the jenkins-dev mailing list and the jenkins-infra mailing list.
