Channel: Jenkins Blog

Pipeline at Jenkins World 2016


This is a guest post by R. Tyler Croy, who is a long-time contributor to Jenkins and the primary contact for Jenkins project infrastructure. He is also a Jenkins Evangelist at CloudBees, Inc.

Jenkins World

I have been heavily using Jenkins Pipeline for just about every Jenkins-related project I have contributed to over the past year. Whether I am building and publishing Docker containers, testing infrastructure code or publishing this very web site, I have been adding a Jenkinsfile to nearly every Git repository I touch.
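If you have not seen one before, a Jenkinsfile is just a short Groovy script checked into the repository alongside the code it builds. A minimal scripted example might look something like this (the 'linux' agent label and the make targets are placeholders, not from any real repository):

```groovy
// Jenkinsfile (scripted Pipeline) -- a minimal sketch
node('linux') {
    // Check out the repository that contains this Jenkinsfile
    checkout scm

    stage 'Build'
    sh 'make'

    stage 'Test'
    sh 'make check'
}
```

With a file like that in place, Jenkins can build every branch of the repository the same way, with the pipeline definition versioned right next to the code.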

Implementing Pipeline has been rewarding, but has not been without its own challenges. That’s why I’m excited to see lots of different Jenkins Pipeline related content in the agenda at Jenkins World.

I don’t think it’s possible for a single person to attend all of the Pipeline talks, or the Pipeline-related demos in the "Open Source Hub", but fortunately CloudBees will be recording the sessions. If you have Pipeline-related questions unanswered by all these presentations, feel free to join us at the "Open Source Hub" in the expo hall and Ask the Experts.


On the first day of Jenkins World (September 13th), Isaac Cohen is hosting a workshop titled Let’s Build a Jenkins Pipeline, which may be interesting to you if you haven’t yet worked with Pipeline.


September 14th 2:00 PM - 2:45 PM, Exhibit Hall A-1

Automated workflow is a proven method for removing process variability. DevOps pipelines are the next step in the evolution of CI/CD/DevOps. This talk covers Jenkins pipelines, both with and without AWS integration, and explains how Jenkins can be used to create, execute and manage pipelines.

— Jimmy Ray of nextSource

September 14th 5:00 PM - 6:00 PM, Exhibit Hall C

Considering a mono repo that can manage all your source code, binaries and other assets? Join us at the Perforce Birds of a Feather Session for updates and discussions around the Helix Plugin for Jenkins (or ‘P4 plugin’).

Perforce

This session will look at the latest DSL Pipeline support in the ‘P4 plugin’ for Jenkins and will include a live demo. We will show you how to map your Branches and Streams into a Jenkins Workspace, publish assets back into Helix, and more. You may even get a sneak preview of the latest ‘P4 plugin’ for Jenkins that allows you the freedom to query and run commands from within Jenkins directly against your Helix connection.

— Paul Allen of Perforce

September 14th 3:00 PM - 3:45 PM, Exhibit Hall A-3

Many of us have already experimented with Docker - for example, by running one of the pre-built images from Docker Hub. It is possible that your team might have recognized the benefits that Docker provides in building microservices and the advantages the technology could bring to development, testing, integration and, ultimately, production. However, you must create a comprehensive build pipeline before deploying any containers into a live environment. Integrating containers into a CD pipeline is far from easy. Along with the benefits Docker brings, there are challenges both technically and process-related. This presentation attempts to outline the steps you need to take for a fully-automated Jenkins pipeline that continuously builds, tests and deploys microservices into a Docker Swarm cluster.

— Viktor Farcic

September 15th 10:30 AM - 11:15 AM, Exhibit Hall A-1

Pipeline is as powerful as a loaded gun, but with skill can be as delicate as a surgeon’s knife. This talk will give an overview of health and safety so that you can avoid shooting yourself in the head and walk the path to medical school. It will cover not only what not to do, but also why, and share some solutions so you are not left high and dry. Both James and Bobby have bullet wounds from “Champagning” pipeline to automate the test and release of several of the CloudBees products and can occasionally still be seen walking with a limp from shooting for the moon and hitting their feet.

— Bobby Sandell and James T. Nord of CloudBees

September 15th 11:30 AM - 12:15 PM, Exhibit Hall A-2

While Docker has enabled an unprecedented velocity of software production, it is all too easy to spin out of control. A promotion-based model is required to control and track the flow of Docker images as much as it is required for a traditional software development lifecycle. We will demonstrate how to go from development to containerization to distribution utilizing binary management promotion in a framework implemented on Jenkins, using the Pipeline functionality.

— Mark Galpin

September 15th 11:30 AM - 12:15 PM, Exhibit Hall A-1

The Pipeline feature has matured and is now included in Jenkins 2.0. During the time since its release, copious user feedback has been received about missing features and pain points. Come hear about some things we know should be worked on - or are already in progress - and bring your suggestions.

— Jesse Glick of CloudBees

September 15th 2:30 PM - 3:15 PM, Great America J

In this talk, we’ll show how to use Jenkins Pipeline together with Docker and Kubernetes to implement a complete end-to-end continuous delivery and continuous improvement system for microservices and monolithic applications using open source software. We’ll demonstrate how to easily create new microservices projects or import existing projects, have them automatically built, system and integration tested, staged and then deployed. Once deployed, we will also see how to manage and update applications using continuous delivery practices along with integrated ChatOps - all completely automated!

— James Strachan of Red Hat

September 15th 3:45 PM - 4:30 PM, Great America J

Pipeline is quickly establishing itself as the direction that Jenkins jobs are going, enabling the definition of a complete CD pipeline in a single job; Pipeline as Code via the “Jenkinsfile”; job durability across master restarts; and more. I’ll be talking here about the next evolution for Pipeline: a simple, declarative model to define your Pipelines with no need to write scripts. This configuration syntax for Pipeline allows you to automatically configure all stages of your pipeline, the complete build environment, post-build actions, notifications and more. All while providing syntactic and semantic validation before the build actually gets going.

— Andrew Bayer of CloudBees

September 15th 4:45 PM - 5:30 PM, Exhibit Hall A-1

Response time is paramount for a CI/CD system. In this session, you will see how a few best practices in constructing pipelines can yield faster turnaround times and reduced resource use. We’ll also run through plugins and tools to analyze and visualize performance, including the Pipeline Stage View plugin. If time permits, we may briefly discuss some of the computer science theory behind different aspects of performance.

— Sam Van Oort of CloudBees

September 15th 4:45 PM - 5:30 PM, Exhibit Hall J

Our 600-person IT organization has committed to implementing continuous delivery practices enterprise-wide. This isn’t a single momentous event put in place overnight. Rather, it’s a strategic journey towards a common goal, and through which each application will take its own unique path. A seminal component of our CD journey is the Pipeline plugin and it has become our standard for CD pipeline orchestration. We will discuss a few of the diverse paths taken by the application teams at our company and show how the use of the Pipeline plugin has uniquely enabled continuous delivery for us in a way that no competing tool can.

— Neil Hunt of Aquilent

September 15th 4:45 PM - 5:30 PM, Exhibit Hall J

Pipeline Multibranch projects come as a natural evolution of pipeline as code: define your CD pipeline in your source code repository and Jenkins will create isolated branch and pull request jobs for it. This talk is about the integration of the Pipeline Multibranch plugin with GitHub and Bitbucket as branch sources.

— Antonio Muñiz of CloudBees

Register for Jenkins World in September with the code JWFOSS for a 20% discount off your pass.


Continuous Delivery of Infrastructure with Jenkins

This is a guest post by Jenkins World speaker R. Tyler Croy, infrastructure maintainer for the Jenkins project.

Jenkins World

I don’t think I have ever met a tools, infrastructure, or operations team that did not have a ton of work to do. The Jenkins project’s infrastructure "team" is no different; too much work, not enough time. In lieu of hiring more people, which isn’t always an option, I have found heavy automation and continuous delivery pipelines to be two solutions within reach of the over-worked infrastructure team.

As a big believer in the concept of "Infrastructure as Code", I have been, slowly but surely, moving the project’s infrastructure from manual tasks to code, whether implemented in our Puppet code-base, Docker containers, or even as machine specifications with Packer. The more of our infrastructure that is code, the more we can apply continuous delivery practices to consistently and reliably build, test and deliver our infrastructure.

This approach integrates nicely with Jenkins Pipeline, allowing us to also define our continuous delivery pipelines themselves as code. For example, by sanity-checking our BIND zone files:

Jenkinsfile
node('docker') {
    def dockerImage = 'rtyler/jenkins-infra-builder'

    checkout scm
    docker.image(dockerImage).inside {
        sh "/usr/sbin/named-checkzone jenkins-ci.org dist/profile/files/bind/jenkins-ci.org.zone"
        sh "/usr/sbin/named-checkzone jenkins.io dist/profile/files/bind/jenkins.io.zone"
    }
}

Or delivering our Docker containers automatically to Docker Hub, with a Jenkinsfile such as:

Jenkinsfile
node('docker') {
    checkout scm

    /* Get our abbreviated SHA-1 to uniquely identify this build */
    def shortCommit = sh(script: 'git rev-parse HEAD', returnStdout: true).take(6)

    stage 'Build ircbot'
    withEnv(["JAVA_HOME=${tool 'jdk8'}", "PATH+MVN=${tool 'mvn'}/bin"]) {
        sh 'make bot'
    }

    def whale
    stage 'Build container'
    whale = docker.build("jenkinsciinfra/ircbot:build${shortCommit}")

    stage 'Deploy container'
    /* Push to Docker Hub */
    whale.push()
}

In my talk at Jenkins World (September 14th, 3:00 - 3:45pm in Exhibit Hall A-1) I will discuss these Jenkinsfiles along with some of the strategies, patterns and code used with the Jenkins project’s open source infrastructure to get the most out of the team’s limited time.

R. Tyler will be presenting more about continuous delivery of infrastructure at Jenkins World in September. Register with the code JWFOSS for 20% off your full conference pass.

Take the 2016 Jenkins Survey!

This is a guest post by Brian Dawson on behalf of CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins.

Once again it’s that time of year when CloudBees sponsors the Jenkins Community Survey to assist the community with gathering objective insights into how Jenkins is being used and what users would like to see in the Jenkins project.

As an added incentive to take the survey, CloudBees will enter participants into a drawing for a free pass to Jenkins World 2017 (1st prize) and a $100 Amazon Gift Card (2nd prize). The survey will close at the end of September, so click the link at the end of the blog post to get started!

All participants will be able to access reports summarizing survey results. If you’re curious about what insights your input will provide, see the results of last year’s 2015 survey.

As always, the personal information you provide is not used by CloudBees for purposes beyond the survey. Your feedback helps capture a bigger picture of community trends and needs. There are laws that govern prize giveaways and eligibility; CloudBees has compiled all those fancy terms and conditions here.

Please take the survey and let your voice be heard - it will take less than 10 minutes.

Announcing the Blue Ocean beta, Declarative Pipeline and Pipeline Editor


At Jenkins World on Wednesday 14th of September, the Jenkins project was happy to introduce the beta release of Blue Ocean. Blue Ocean is the new user experience for Jenkins, built from the ground up to take advantage of Jenkins Pipeline. It is an entire rethink of the way that modern developers will use Jenkins.

Blue Ocean is available today via the Jenkins Update Center for Jenkins users running 2.7.1 and above.

Get the beta

Just search for Blue Ocean beta in the Update Center, install it, browse to the dashboard, and then click the Try Blue Ocean UI button.

What’s included?

Back in April we open sourced Blue Ocean and shared our vision with the community. We’re very happy that all the things we showed you then have shipped in the beta (software projects run on time?!).

For a refresher on Blue Ocean, watch this short video:

Declarative Pipeline

We have heard from the community about the usability of Jenkins Pipeline. Much of the feedback we received expressed a desire to configure Pipelines rather than script them, and to make it easy for beginners to get started with their first Pipeline.

This is how Declarative Pipeline was born. We’ve introduced a new method whereby you declare how you want your Pipeline to look rather than using Pipeline Script - it’s configuration rather than code.

Here’s a small example of a Declarative Pipeline for nodejs that runs the whole Pipeline inside a Docker container:

pipeline {
  agent docker:'node:6.3'
  stages {
    stage('build') {
      sh 'npm --version'
      sh 'npm install'
    }
    stage ('test') {
      sh 'npm test'
    }
  }
}

Docker support in Declarative Pipeline allows you to version your application code, Jenkins Pipeline configuration, and the environment where your pipeline will run, all in a single repository. It’s a crazy powerful combination.
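As a sketch, that single repository for the nodejs example above might look like this (file names are illustrative, not from a real project):

```
my-app/
├── Jenkinsfile     <- the Declarative Pipeline configuration
├── package.json    <- application code and dependencies
└── src/
```

Because the Jenkinsfile pins the node:6.3 Docker image, checking out any commit reproduces not just the code and the pipeline, but the exact environment the pipeline ran in.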

Declarative Pipeline introduces the postBuild section that makes it easy to run things conditionally at the end of your Pipeline without the complexity of the try... catch of Pipeline script.

postBuild {
  always {
    sh 'echo "This will always run"'
  }
  success {
    sh 'echo "This will run only if successful"'
  }
  failure {
    sh 'echo "This will run only if failed"'
  }
  unstable {
    sh 'echo "This will run only if the run was marked as unstable"'
  }
  changed {
    sh 'echo "This will run only if the state of the Pipeline has changed"'
    sh 'echo "For example, the Pipeline was previously failing but is now successful"'
    sh 'echo "... or the other way around :)"'
  }
}

And there is so much more!

If you have the Blue Ocean beta installed, you already have Declarative Pipeline. While Declarative Pipeline is still alpha at the moment, we do encourage you to follow our getting started guide, give us feedback on the Jenkins Users mailing list, or file bugs against the pipeline-model-definition component in JIRA.

Introducing the Pipeline Editor

The Pipeline Editor is a graphical user interface that gives Jenkins users the simplest way yet to get started with creating Pipelines in Jenkins. It will also save a lot of time for intermediate and advanced Jenkins users as a way to author Pipelines.

When you build your Pipeline in the Editor and click the save button, the editor will commit a new Jenkinsfile back to your repository in the form of the new Declarative Pipeline. When you want to edit again, Jenkins will read it from your repository exactly how you saw it previously.

The Pipeline Editor is a work in progress and should arrive in a beta release soon.

Personalized dashboard

Thank you

Thanks for reading our news from Jenkins World and be sure to check the blog for regular updates!

I’d also like to thank our amazing community for their feedback and support as we change the way software teams around the world use Jenkins. We couldn’t do this without you.

Jenkins Online Meetup report. Plugin Development - WebUI


On September 6th we had a Jenkins Online Meetup, the second event in the series of Plugin Development meetups. At this meetup we talked about Jenkins Web UI development.


Talks

1) Classic Jenkins UI framework - Daniel Beck

In the first part of his talk, Daniel presented how Stapler, the web framework used in Jenkins, works, and how you can add to the set of URLs handled by Jenkins. In the second part he talked about creating new views using Jelly and Groovy, and how to add new content to existing views.

Keywords: Stapler, Jelly, Groovy-defined UIs

2) Developing modern Jenkins UIs with JavaScript - Tom Fennelly

Feel that the Jenkins UI is a bit old? You are not alone. In addition to the old stack, Jenkins offers a framework for writing UI components in JavaScript with the help of Node.js. Tom presented this new engine, which is being used in new Jenkins Web UI components like the Jenkins installation wizard. He also provided several examples from the Blue Ocean project he is working on.

Want to conduct a meetup?

We are looking for speakers who would be interested in sharing their experience with Jenkins best practices, war stories and plugin development.

If you are interested in giving a presentation, please contact the meetup organizers using meetup.com’s “contact organizers” feature or via the events@lists.jenkins-ci.org mailing list.

Jenkins World 2016 Wrap-up - Introduction

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Jenkins World 2016

That’s a Wrap!

Any way you look at it, last week’s Jenkins World Conference 2016 was a huge success.

In 2011, a few hundred users gathered in San Francisco for the first "Jenkins User Conference". Over successive years, this grew into several yearly regional Jenkins user conferences. This year, over 1,300 people came from around the world to "Jenkins World 2016", the first global event for the Jenkins community.

Kohsuke Kawaguchi Keynote

This year’s Jenkins World conference included:

Stickers!

  • Keynote presentation by Jenkins creator, Kohsuke Kawaguchi, announcing a number of great new Jenkins project features, such as "Blue Ocean".

  • More than 50 sessions on everything from the new "Blue Ocean" UI to "Continuous Security" to "Dockerizing Jenkins".

  • Jenkins Open-source Hub, with "Ask the Experts" and demos by 20+ Jenkins contributors.

  • Booths from 30+ sponsors.

  • Stickers!

Over the next week, I’ll be posting highlights from the event, including slides, videos, and links to other useful resources. Stay tuned!

Jenkins World 2016 Wrap-up - Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Jenkins World 2016

As someone who has managed Jenkins for years and manually managed jobs, I think Pipeline is fantastic. I spent much of the conference manning the Ask the Experts desk of the "Open Source Hub" and was glad to find I was not alone in that sentiment. The questions were not "Why should I use Pipeline?", but "How do I do this in Pipeline?"

Everyone was interested in showing what they have been able to accomplish, learning about best practices, and seeing what new features were on the horizon. The sessions and demos on Pipeline that I saw were all well attended, but no one could have seen all of them.

CloudBees recorded all the sessions and will eventually post them (editing takes time). I’ll post an update when they become available. In the meanwhile, here’s a recap of some of the sessions on Pipeline, with links to slides:


CloudBees

Jesse Glick discussed the past, present, and future of Jenkins Pipeline in Directions for Pipeline. He reviewed a broad range of improvements made to Pipeline over the last year, including syntax, documentation, plugin support, and stability, as well as the changes currently underway. He also pointed out that many of the improvements have been driven by user feedback and invited everyone to continue to participate in making Pipeline even better.


nextSource

In Pipelining DevOps with Jenkins and AWS, Jimmy Ray of nextSource showed how Pipeline can be used to automate CI/CD build processes, and how to integrate Jenkins and Pipeline with AWS. He also discussed some admin-level considerations, such as how to install Jenkins on EC2 and the merits of "LTS" and "latest build".


Continuous Build and Delivery Pipelines for Android

Christopher Orr examined how to create "Continuous Build and Delivery Pipelines for Android" applications. He showed how to set up Android-capable build agents, ensure traceable application releases, report warnings, run various types of tests, and deploy an app to Google Play. This included live demonstrations and discussion of best practices.


A New Way to Define Jenkins Pipelines

Andrew Bayer presented A New Way to Define Jenkins Pipelines. He showed the next evolution for Pipeline, based on a simpler declarative model. This declarative syntax for Pipeline still supports the creation of complex pipelines, including complete build environments, post-build actions, and notifications, while also being easier to understand. This declarative syntax also makes it easier to implement other interesting scenarios, such as early validation of pipelines and a visual pipeline editor.


Perforce

In Perfecting Your Development Tools: Updates to the Helix Plugin for Jenkins, Paul Allen of Perforce walked through using Perforce’s "Monorepo" model with Jenkins Pipeline. He explained in detail how to work with the Perforce P4 plugin in Jenkins, including credential passing and workspace management. Of particular interest was his side-by-side comparison of the various actions done with the Jenkins UI vs. Pipeline.


Building Pipelines To Be Faster

Sam Van Oort demonstrated strategies for faster pipelines in The Need For Speed: Building Pipelines To Be Faster. He discussed various elements that contribute to making pipelines faster or slower, such as the number of resources and latency. He then showed several best practices for constructing pipelines that have lower turnaround times and reduced resource use. He also reviewed plugins and tools that can help analyze and visualize pipeline performance, including the Pipeline Stage View plugin and Blue Ocean.
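One recurring best practice from talks like this is to run independent work concurrently rather than serially. In scripted Pipeline that is a single parallel step; here is a sketch (the make targets are hypothetical):

```groovy
node {
    checkout scm

    stage 'Tests'
    // Run independent suites concurrently; overall turnaround
    // drops to roughly the duration of the slowest branch
    parallel(
        'unit': {
            sh 'make unit-test'
        },
        'integration': {
            sh 'make integration-test'
        }
    )
}
```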


CloudBees

Bobby Sandell and James T. Nord talked about what not to do with Pipeline in No, You Shouldn’t Do That! Lessons from Using Pipeline. They told the story of their own experiences as early adopters of Jenkins Pipeline at CloudBees. They described a number of key scenarios they attempted to address, detailed various mistakes and false starts, and finally shared what they learned in each case.


Google Summer of Code

Alexandru Somai gave a lightning talk on his Google Summer of Code (GSoC) 2016 project, "External Workspace Manager Plugin for Jenkins Pipeline". The build workspace for Jenkins projects may become very large. Alex showed how the External Workspace Manager plugin addresses this issue, adding support for managing and reusing the same workspace between multiple pipeline builds.

A recording of his presentation for GSoC is available here.


Red Hat

How to Do Continuous Delivery with Jenkins Pipeline, Docker and Kubernetes, presented by James Strachan of Red Hat, showed how to use Jenkins Pipeline with Docker and Kubernetes to implement a complete end-to-end continuous delivery and continuous improvement system using open source software for both microservices and monolithic applications. He demonstrated how to create or import projects, and have them automatically build, run system and integration tests, stage, and finally deploy. He also showed how to manage and update those deployed applications using continuous delivery practices.

Jenkins World 2016 Wrap-up - Scaling

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Jenkins World 2016

One of the great features of Jenkins is how far it can scale, not only from a software perspective but also from an organizational one. From a single Jenkins master with one or two agents to multiple masters with thousands of agents, and from a team of only a few people to a whole company with multiple disparate departments and organizations, you’ll find Jenkins in use at every scale.

As with any software or organization, there are common challenges in scaling Jenkins and some common best practices, but there are also some unique solutions. A big conference like Jenkins World brings users from all scales together to see how people in other organizations at similar or greater scale are solving similar problems.

As I noted before, CloudBees recorded all the sessions and will eventually post them (editing takes time). I’ll post an update when they become available. In the meanwhile, here’s a recap of some of the sessions related to scaling Jenkins, with links to slides:


NPR

Paul Miles and Grant Dickie of NPR talked about JenkinsOps: An Initiative to Streamline and Automate Jenkins. They shared ways their team has used Jenkins to automate many of the administrative tasks related to managing feature code branches, handling deployments, running tests, and properly configuring their environments. They also showed code samples and talked about future challenges in their quest to achieve continuous deployment.


Riot Games

Maxfield F Stewart of Riot Games showed how they built an integrated Docker solution using Jenkins in Thinking Inside the Container: A Continuous Delivery Story. He showed how their system allows engineers around the company to submit Docker images as build environments. Their containerized farm now creates over 10,000 containers per week and handles nearly 1,000 jobs at a rate of about 100 jobs per hour, all using readily available, open source Jenkins plugins. He also talked about how they settled on this design, lessons learned, best practices, and how to build and scale other similar systems.


Red Hat

How to Do Continuous Delivery with Jenkins Pipeline, Docker and Kubernetes, presented by James Strachan of Red Hat, showed how to use Jenkins Pipeline with Docker and Kubernetes to implement a complete end-to-end continuous delivery and continuous improvement system using open source software for both microservices and monolithic applications. He demonstrated how to create or import projects, and have them automatically build, run system and integration tests, stage, and finally deploy. He also showed how to manage and update those deployed applications using continuous delivery practices.


CloudBees

Carlos Sanchez of CloudBees discussed Scaling Jenkins with Docker: Swarm, Kubernetes or Mesos? He compared Docker Swarm, Apache Mesos, and Kubernetes in terms of their ability to dynamically scale Jenkins by running jobs inside containers. He also discussed the pros and cons, best practices, and the level of Jenkins support for each of these technologies.


Stephen Connolly of CloudBees asked "So, You Want to Build the World’s Biggest Jenkins Cluster?" and explained how to do so. He started with real-world results realized by Jenkins users who have built large clusters. Next, he showed experiments around scaling individual sub-components of Jenkins in isolation to see what challenges arise when they are integrated. Finally, he arrived at recipes for building Jenkins clusters with different scaling capabilities and for making existing Jenkins clusters more efficient.


Splunk

Bill Houston and Ali Raza of Splunk gave a talk in two parts, Jenkins at Splunk and Splunking Jenkins. In the first part, Bill showed how Splunk uses Jenkins to implement their end-to-end CI system. They discussed features and design goals, challenges they encountered, and how they addressed those challenges. In the second part, Ali showed how to use the Jenkins Splunk plugin. Using the plugin, he gathered test results and Jenkins environment data and delivered them to a Splunk indexer for analysis and presentation.


Google

David Hoover of Google talked about Jenkins inside Google. Last year, they presented their initial investigations and stress testing as they prepared to deploy a large-scale Jenkins installation at Google. Now, with a year of real-world use under their belts, they returned to present how their expectations held up, what new issues they encountered, how they have addressed those issues, and the challenges and opportunities they see ahead.


Jenkins World 2016 Wrap-up - Ask the Experts & Demos

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Jenkins World 2016

As I mentioned in my previous post, Jenkins World brought together Jenkins users from organizations of all sizes. It also brought together Jenkins users of all skill levels, from beginners to experts (including JAM organizers, board members, and long-time contributors). A number of those experts also volunteered to staff the Open Source Hub’s "Ask the Experts" desk throughout the conference to answer Jenkins questions. This included, but was not limited to: Paul Allen, R. Tyler Croy, James Dumay, Jesse Glick, Eddú Meléndez Gonzales, Jon Hermansen, Owen Mehegan, Oleg Nenashev, Liam Newman, Christopher Orr, Casey Vega, Mark Waite, Dean Yu, and Keith Zantow.

Ask the Experts

I actually chose to spend the majority of my time at the booth. It was fantastic to hear all the different ways people are using Jenkins and wanting to use Jenkins to do even more. I answered dozens of questions on both days of the conference, often learning new things in the process of answering them. And with such a breadth of expertise on hand, very few questions were beyond our combined abilities.

Ask the Experts

While "Ask the Experts" saw a lot of traffic, the Open Source Hub’s lunch-time demos drew really big crowds. They covered a wide range of subjects in quick succession and offered people a chance to be introduced to new areas in Jenkins without spending a whole session on them. Some demos were only presented at lunch, while others were abbreviated versions of longer talks presented at other times during the conference. Here’s the full list with related links:

Demo Crowd

Ask the Experts

Thank you to everyone who staffed the booth and gave demos.

Also, thanks to everyone who attended the demos and came by to ask questions. If you have more questions, you don’t have to wait until next year’s Jenkins World. Join the jenkinsci-users mailing list or the#jenkins IRC channel to get help from experts around the world.

And finally, a special thanks to the Jenkins Events officer, Alyssa Tong, for getting the entire booth designed, prepared, and keeping everything on track before, during, and after the conference.

Ask the Experts

Jenkins World 2016, That's a Wrap!

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Jenkins World 2016

This year’s Jenkins World conference was a huge milestone for the Jenkins project - the first global event for the Jenkins community. It brought users and contributors together to exchange ideas on the current state of the project, celebrate accomplishments of the past year, and look ahead at all the exciting enhancements coming down the pipe(line).

Contributor Summit

To kick off Jenkins World, we had a full day "Contributor Summit". Jenkins is a distributed project with contributors from all over the globe. Conferences like this are the perfect time to get contributors together face-to-face to talk through current issues and upcoming plans for the project. Some key topics discussed during this summit were:

  • Infrastructure - In the past year, the Jenkins project has moved to a new domain name and a statically generated website, and has entered a partnership with Microsoft to host infrastructure on Azure.

  • Events - A year ago, there were five Jenkins Area Meetups; today there are 37 around the world, with ~7000 members.

  • Security - Daniel Beck has done a great job as "Security Officer" for the project over the last year. Jenkins 2 includes tighter security out of the box, 9 security alerts have been addressed, and the Security Team is continuing to evaluate threats as they are reported.

  • Pipeline - Pipeline has been a success and there are many improvements on the way, including better Pipeline Library support, a UI-based Pipeline Editor, and Declarative Pipeline syntax.

  • Blue Ocean - Blue Ocean announced their "1.0 Beta" release and discussed their roadmap.

  • Storage Pluggability - One of the big upcoming goals is reducing Jenkins' dependence on local file system storage on the server (job configuration, build logs, etc.). There was extensive discussion of how to accomplish this goal.

Contributors Summit

Keynote: The State of Jenkins 2016

The next day, Kohsuke gave a great keynote, showing how far the project has come this year and where it is headed. You can get the slides here or see the full video below.

What’s Next?

Overall, Jenkins World was a very enjoyable event. I’m sure everyone came away having learned a lot and made many new connections. I know I’m excited to see what the coming year brings for Jenkins and the Jenkins community.

Don’t forget that there are many ways to continue to build connections to the rest of the Jenkins community throughout the year, such as the Jenkins Online Meetup which hosts online events year-round. Or, see if there is aJenkins Area Meetup (JAM) near you. If there isn’t, take a look at the Jenkins Area Meetup page to see about starting one.

Thanks, and I hope to see you all at Jenkins World 2017!

CommitStrip Mural

Controlling the Flow with Stage, Lock, and Milestone

This is a guest post by Patrick Wolf, Director of Product Management at CloudBees.

Recently the Pipeline team began making several changes to improve the stage step and increase control of concurrent builds in Pipeline. Until now the stage step has been the catch-all for functionality related to the flow of builds through the Pipeline: grouping build steps into visualized stages, limiting concurrent builds, and discarding stale builds.

In order to improve upon each of these areas independently we decided to break this functionality into discrete steps rather than push more and more features into an already packed stage step.

  • stage - the stage step remains but is now focused on grouping steps and providing boundaries for Pipeline segments.

  • lock - the lock step throttles the number of concurrent builds in a defined section of the Pipeline.

  • milestone - the milestone step automatically discards builds that will finish out of order and become stale.

Separating these concerns into explicit, independent steps allows for much greater control of Pipelines and broadens the set of possible use cases.

Stage

The stage step is a primary building block in Pipeline, dividing the steps of a Pipeline into explicit units and helping to visualize the progress using the "Stage View" plugin or Blue Ocean. Beginning with version 2.2 of "Pipeline Stage Step" plugin, the stage step now requires a block argument, wrapping all steps within the defined stage. This makes the boundaries of where each stage begins and ends obvious and predictable. In addition, the concurrency argument of stage has now been removed to make this step more concise; responsibility for concurrency control has been delegated to the lock step.

stage('Build') {
  doSomething()
  sh "echo $PATH"
}

Omitting the block from stage and using the concurrency argument are now deprecated in Pipeline. Pipelines using this syntax will continue to function but will produce a warning in the console log:

Using the 'stage' step without a block argument is deprecated

This message is only a reminder to update your Pipeline scripts; none of your Pipelines will stop working. If we reach a point where the old syntax is to be removed we will make an announcement prior to the change. We do, however, recommend that you update your existing Pipelines to utilize the new syntax.

note: Stage View and Blue Ocean will both work with either the old stage syntax or the new.
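For comparison, here is a sketch of the deprecated syntax next to its modern equivalent. The stage name, the `doSomething()` call, and the lock resource name are placeholders, not steps from any particular plugin:

```groovy
// Deprecated: block-less stage with a concurrency argument
stage name: 'Build', concurrency: 1
doSomething()

// Current: block-scoped stage; concurrency is delegated to the lock step
lock('build-resource') {
    stage('Build') {
        doSomething()
    }
}
```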

Lock

Rather than attempt to limit the number of concurrent builds of a job using the stage step, we now rely on the "Lockable Resources" plugin and the lock step to control this. The lock step limits concurrency to a single build at a time and provides much greater flexibility in designating where the concurrency is limited.

  • lock can be used to constrain an entire stage or just a segment:

stage('Build') {
  doSomething()
  lock('myResource') {
    echo "locked build"
  }
}
  • lock can be also used to wrap multiple stages into a single concurrency unit:

lock('myResource') {
  stage('Build') {
    echo "Building"
  }
  stage('Test') {
    echo "Testing"
  }
}

Milestone

The milestone step is the last piece of the puzzle to replace functionality originally intended for stage and adds even more control for handling concurrent builds of a job. The lock step limits the number of builds running concurrently in a section of your Pipeline while the milestone step ensures that older builds of a job will not overwrite a newer build.

Concurrent builds of the same job do not always run at the same rate. Depending on the network, the node used, compilation times, test times, etc. it is always possible for a newer build to complete faster than an older build. For example:

  • Build 1 is triggered

  • Build 2 is triggered

  • Build 2 builds faster than Build 1 and enters the Test stage sooner.

Rather than allowing Build 1 to continue and possibly overwrite the newer artifact produced in Build 2, you can use the milestone step to abort Build 1:

stage('Build') {
  milestone()
  echo "Building"
}
stage('Test') {
  milestone()
  echo "Testing"
}

When using the input step or the lock step a backlog of concurrent builds can easily stack up, either waiting for user input or waiting for a resource to become free. The milestone step will automatically prune all older jobs that are waiting at these junctions.

milestone()
input message: "Proceed?"
milestone()

Bookending an input step like this allows you to select a specific build to proceed and automatically abort all antecedent builds.

milestone()
lock(resource: 'myResource', inversePrecedence: true) {
  echo "locked step"
  milestone()
}

Similarly, a pair of milestone steps used with a lock will discard all old builds waiting for a shared resource. In this example, inversePrecedence: true instructs the lock to start the most recent waiting build first, ensuring that the most recent code takes precedence.

Putting it all together

Each of these steps can be used independently of the others to control one aspect of a Pipeline or they can be combined to provide powerful, fine-grained control of every aspect of multiple concurrent builds flowing through a Pipeline. Here is a very simple example utilizing all three:

// The first milestone step starts tracking concurrent build order
stage('Build') {
  milestone()
  node {
    echo "Building"
  }
}

// This locked resource contains both Test stages as a single concurrency unit.
// Only 1 concurrent build is allowed to utilize the test resources at a time.
// Newer builds are pulled off the queue first. When a build reaches the
// milestone at the end of the lock, all jobs started prior to the current
// build that are still waiting for the lock will be aborted
lock(resource: 'myResource', inversePrecedence: true) {
  node('test') {
    stage('Unit Tests') {
      echo "Unit Tests"
    }
    stage('System Tests') {
      echo "System Tests"
    }
  }
  milestone()
}

// The Deploy stage does not limit concurrency but requires manual input
// from a user. Several builds might reach this step waiting for input.
// When a user promotes a specific build all preceding builds are aborted,
// ensuring that the latest code is always deployed.
stage('Deploy') {
  input "Deploy?"
  milestone()
  node {
    echo "Deploying"
  }
}

For a more complete and complex example utilizing all these steps in a Pipeline check out the Jenkinsfile provided with the Docker image for demonstrating Pipeline. This is a working demo that can be quickly set up and run.

Jenkins World 2016 Session Videos

xUnit and Pipeline

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

The JUnit plugin is the go-to test result reporter for many Jenkins projects, but it is not the only one available. The xUnit plugin is a viable alternative that supports JUnit and many other test result file formats.

Introduction

No matter the project, you need to gather and report test results. JUnit is one of the most widely supported formats for recording test results. For scenarios where your tests are stable and your framework can produce JUnit output, this makes the JUnit plugin ideal for reporting results in Jenkins. It will consume results from a specified file or path, create a report, and if it finds test failures it will set the job state to "unstable" or "failed".
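In a Pipeline script that boils down to a single step call; a minimal sketch, where the test command and report path are placeholders for whatever your project uses:

```groovy
node {
    // Run the test suite; '|| true' keeps a failing exit code from failing
    // the build so that the report publisher can set the final status
    sh './run-tests.sh || true'

    // Publish JUnit-format XML results; any failures mark the build unstable
    junit 'build/test-results/**/*.xml'
}
```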

Test reporting with JUnit

There are also plenty of scenarios where the JUnit plugin is not enough. If your project has some failing tests that will take some time to fix, or if there are some flaky tests, the JUnit plugin’s simplistic view of test failures may be difficult to work with.

No problem: the Jenkins plugin model lets us replace the JUnit plugin functionality with similar functionality from another plugin, and Jenkins Pipeline lets us do this in a safe, stepwise fashion where we can test and debug each of our changes.

In this article, I will show you how to replace the JUnit plugin with the xUnit plugin in Pipeline code to address a few common test reporting scenarios.

Initial Setup

I’m going to use the "JS-Nightwatch.js" sample project from my previous post to demonstrate a couple common scenarios that the xUnit plugin handles better. I already have the latest JUnit plugin and xUnit plugin installed on my Jenkins server.

I’ll be keeping my changes in my fork of the "JS-Nightwatch.js" sample project on GitHub, under the "blog/xunit" branch.

Here’s what the Jenkinsfile looked like at the end of that previous post and what the report page looks like after a few runs:

Jenkinsfile
node {
    stage "Build"
    checkout scm

    // Install dependencies
    sh 'npm install'

    stage "Test"
    // Add sauce credentials
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') {
        // Start sauce connect
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {
            // List of browser configs we'll be testing against.
            def platform_configs = [
                'chrome',
                'firefox',
                'ie',
                'edge'
            ].join(',')

            // Nightwatch.js supports color output, so wrap this step for ansi color
            wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
                // Run selenium tests using Nightwatch.js
                // Ignore error codes. The junit publisher will cover setting build status.
                sh "./node_modules/.bin/nightwatch -e ${platform_configs} || true"
            }

            junit 'reports/**'

            step([$class: 'SauceOnDemandTestPublisher'])
        }
    }
}
JUnit plugin console output

Switching from JUnit to xUnit

I’ll start by replacing JUnit with xUnit in my pipeline. I use the Snippet Generator to create the step with the right parameters. The main downside of using the xUnit plugin is that while it is Pipeline compatible, it still uses the more verbose step() syntax and has some very rough edges around that, too. I’ve filed JENKINS-37611, but in the meantime we’ll work with what we have.

// Original JUnit step
junit 'reports/**'

// Equivalent xUnit step - generated (reformatted)
step([$class: 'XUnitBuilder', testTimeMargin: '3000', thresholdMode: 1,
    thresholds: [
        [$class: 'FailedThreshold', failureNewThreshold: '', failureThreshold: '', unstableNewThreshold: '', unstableThreshold: '1'],
        [$class: 'SkippedThreshold', failureNewThreshold: '', failureThreshold: '', unstableNewThreshold: '', unstableThreshold: '']],
    tools: [
        [$class: 'JUnitType', deleteOutputFiles: false, failIfNotNew: false, pattern: 'reports/**', skipNoTestFiles: false, stopProcessingIfError: true]]
    ])

// Equivalent xUnit step - cleaned
step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])

If I replace the junit step in my Jenkinsfile with that last step above, it produces a report and job result identical to the JUnit plugin but using the xUnit plugin. Easy!

node {
    stage "Build"
    // ... snip ...

    stage "Test"
    // Add sauce credentials
    sauce('f0a6b8ad-ce30-4cba-bf9a-95afbc470a8a') {
        // Start sauce connect
        sauceconnect(options: '', useGeneratedTunnelIdentifier: false, verboseLogging: false) {
            // ... snip ...

            // junit 'reports/**'
            step([$class: 'XUnitBuilder',
                thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
                tools: [[$class: 'JUnitType', pattern: 'reports/**']]])

            // ... snip ...
        }
    }
}
Test reporting with xUnit
xUnit plugin console output

Accept a Baseline

Most projects don’t start off with automated tests passing or even running. They start with people hacking and prototyping, and eventually they start to write tests. As new tests are written, having tests checked in, running, and failing can be valuable information. With the xUnit plugin we can accept a baseline of failed cases and drive that number down over time.

I’ll start by changing the Jenkinsfile to fail jobs only if the number of failures is greater than an expected baseline, in this case four failures. When I run the job with this change, the reported numbers remain the same, but the job passes.

Jenkinsfile
// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', failureThreshold: '4']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])
Accept a baseline of failing tests.

Next, I can also check that the plugin reports the job as failed if more failures occur. Since this is sample code, I’ll do this by adding another failing test and checking the job reports as failed.

tests/guineaPig.js
// ... snip ...
'Guinea Pig Assert Title 0 - D': function(client) { /* ... */ },
'Guinea Pig Assert Title 0 - E': function(client) {
    client
        .url('https://saucelabs.com/test/guinea-pig')
        .waitForElementVisible('body', 1000)
        //.assert.title('I am a page title - Sauce Labs');
        .assert.title('I am a page title - Sauce Labs - Cause a Failure');
},
afterEach: function(client, done) { /* ... */ }
// ... snip ...
All tests pass!

In a real project, we’d make fixes over a number of commits bringing the number of failures down and adjusting our baseline. Since this is a sample, I’ll just make all tests pass and set the job failure threshold for failed and skipped cases to zero.

Jenkinsfile
// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [
        [$class: 'SkippedThreshold', failureThreshold: '0'],
        [$class: 'FailedThreshold', failureThreshold: '0']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])
tests/guineaPig.js
// ... snip ...
'Guinea Pig Assert Title 0 - D': function(client) { /* ... */ },
'Guinea Pig Assert Title 0 - E': function(client) {
    client
        .url('https://saucelabs.com/test/guinea-pig')
        .waitForElementVisible('body', 1000)
        .assert.title('I am a page title - Sauce Labs');
},
afterEach: function(client, done) { /* ... */ }
// ... snip ...
tests/guineaPig_1.js
// ... snip ...
'Guinea Pig Assert Title 1 - A': function(client) {
    client
        .url('https://saucelabs.com/test/guinea-pig')
        .waitForElementVisible('body', 1000)
        .assert.title('I am a page title - Sauce Labs');
},
// ... snip ...
All tests pass!

Allow for Flakiness

We’ve all known the frustration of having one flaky test that fails once every ten jobs. You want to keep it active so you can work on isolating the source of the problem, but you also don’t want to destabilize your CI pipeline or reject commits that are actually okay. You could move the test to a separate job that runs the "flaky" tests, but in my experience that just leads to a job that is always in a failed state and a pile of flaky tests no one looks at.

With the xUnit plugin, we can keep this flaky test in the main test suite but still allow our job to pass.

I’ll start by adding a sample flaky test. After a few runs, we can see the test fails intermittently and causes the job to fail too.

tests/guineaPigFlaky.js
// New test file: tests/guineaPigFlaky.jsvar https = require('https');var SauceLabs = require("saucelabs");

module.exports = {

    '@tags': ['guineaPig'],'Guinea Pig Flaky Assert Title 0': function(client) {var expectedTitle = 'I am a page title - Sauce Labs';// Fail every fifth minuteif (Math.floor(Date.now() / (1000 * 60)) % 5 === 0) {
            expectedTitle += " - Cause failure";
        }

        client
            .url('https://saucelabs.com/test/guinea-pig')
            .waitForElementVisible('body', 1000)
            .assert.title(expectedTitle);
    }afterEach: function(client, done) {
        client.customSauceEnd();

        setTimeout(function() {
            done();
        }, 1000);

    }

};
The pain of flaky tests failing the build

I can almost hear my teammates screaming in frustration just looking at this report. To allow specific tests to be unstable but not others, I’m going to add a guard "suite completed" test to the suites that should be stable, and keep the flaky test on its own. Then I’ll tell xUnit to allow for a number of failed tests, but no skipped ones. If any test fails other than the ones I allow to be flaky, it will also result in one or more skipped tests and will fail the build.

// The rest of the Jenkinsfile is unchanged.
// Only the xUnit step() call is modified.
step([$class: 'XUnitBuilder',
    thresholds: [
        [$class: 'SkippedThreshold', failureThreshold: '0'],
        // Allow for a significant number of failures.
        // Keeping this threshold so that overwhelming failures are guaranteed
        //     to still fail the build
        [$class: 'FailedThreshold', failureThreshold: '10']],
    tools: [[$class: 'JUnitType', pattern: 'reports/**']]])
tests/guineaPig.js
// ... snip ...
'Guinea Pig Assert Title 0 - E': function(client) { /* ... */ },
'Guinea Pig Assert Title 0 - Suite Completed': function(client) {
    // No assertion needed
},
afterEach: function(client, done) { /* ... */ }
// ... snip ...
tests/guineaPig_1.js
// ... snip ...
'Guinea Pig Assert Title 1 - E': function(client) { /* ... */ },
'Guinea Pig Assert Title 1 - Suite Completed': function(client) {
    // No assertion needed
},
afterEach: function(client, done) { /* ... */ }
// ... snip ...

After a few more runs, you can see the flaky test is still being flaky, but it is no longer failing the build. Meanwhile, if another test fails, it will cause the "suite completed" test to be skipped, failing the job. If this were a real project, the test owner could instrument and eventually fix the test. When they were confident they had stabilized the test, they could add a "suite completed" test after it to enforce it passing without changes to other tests or framework.

Flaky tests don't have to fail the build
Results from flaky test

Conclusion

This post has shown how to migrate from the JUnit plugin to the xUnit plugin on an existing project in Jenkins pipeline. It also covered how to use the features of xUnit plugin to get more meaningful and effective Jenkins reporting behavior.

What I didn’t show was how many other formats xUnit supports - from CppUnit to MSTest. You can also write your own XSL for result formats not on the known/supported list.

Monthly JAM Recap - October 2016


October has proven to be a busy month within the Jenkins Area Meetup groups. Below is a recap of topics discussed at various JAMs in the month of October.

Dallas-Fort Worth, Texas (DFW) JAM

James Dumay took time out of his vacation to present Blue Ocean, a project that rethinks the user experience of Jenkins, modeling and presenting the process of software delivery by surfacing information that is important to development teams with as few clicks as possible, while still staying true to the extensibility that Jenkins always has had as a core value.

See recording HERE.

San Francisco, CA JAM

Andrey Falko from Salesforce shared how he and his Diagnostics team used Jenkins to deliver software securely and reliably to production within Salesforce.

See videos HERE and HERE.

Salesforce

Boulder, CO JAM

This was a meetup with CA Technologies and included Mark Waite, maintainer of the Jenkins git plugin and a director at CA Technologies in Fort Collins. Tyler did a great presentation about Jenkins Pipeline and Blue Ocean and showed off how the community is using Blue Ocean to build Jenkins.

Barcelona, Spain JAM

At this meetup, there were plenty of engaging discussions surrounding the Jenkins Certification and DevOps 2.1 Toolkit: Continuous Deployment with Jenkins and Docker Swarm. Guillem Sola shared his Jenkins certification experience HERE, while Viktor Farcic presented his thoughts on the aspects of building, testing, deploying, and monitoring services inside Docker Swarm clusters and Jenkins: https://www.youtube.com/watch?v=fs1ED_y5mUc.

Viktor Farcic

Guillem Sola

Barcelona JAM

Lima, Peru JAM

Lima JAM

October’s meetup was a joint effort with collaboration from Perú JUG and Docker Lima. The first talk was an Introduction to Docker Ecosystem, the second was Building and Testing Apps with Docker and Arquillian Cube, and the last one was CI/CD using Docker and Jenkins Pipelines.

We had a full house at the meetup. Now, everyone in the room has a Mr. Jenkins branded on their laptop :)

Special thanks to Mario Inga and Héctor Paz for their collaborations during the last meetups.

Addressing recently disclosed vulnerabilities in the Jenkins CLI


The Jenkins security team has been made aware of a new attack vector for a remote code execution vulnerability in the Jenkins CLI, according to this advisory by Daniel Beck:

We have received a report of a possible unauthenticated remote code execution vulnerability in Jenkins (all versions).

We strongly advise anyone running a Jenkins instance on a public network disable the CLI for now.

As this uses the same attack vector as SECURITY-218, you can reuse the script and instructions published in this repository: https://github.com/jenkinsci-cert/SECURITY-218

We have since been able to confirm the vulnerability and strongly recommend that everyone follow the instructions in the linked repository.

As Daniel mentions in the security advisory, the advised mitigation strategy is to disable the CLI subsystem via this Groovy script. If you are a Jenkins administrator, navigate to the Manage Jenkins page and click on the Script Console, which will allow you to run the Groovy script to immediately disable the CLI.

In order to persist this change across restarts of your Jenkins master, place the Groovy script in $JENKINS_HOME/init.groovy.d/cli-shutdown.groovy so that Jenkins executes the script on each boot.

We are expecting to have a fix implemented, tested and included in an updated weekly and LTS release this upcoming Wednesday, November 16th.

For users who are operating Jenkins on public, or otherwise hostile, networks, we suggest hosting Jenkins behind reverse proxies such as Apache or Nginx. These can help provide an additional layer of security, when used appropriately, to cordon off certain URLs such as /cli.
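As an illustration, with Nginx a single location block can deny the CLI endpoint outright. This is a sketch, not the project's recommended configuration; the backend address is an assumption for a Jenkins master on the same host:

```nginx
# Block the Jenkins CLI endpoint at the reverse proxy
location /cli {
    deny all;                          # return 403 Forbidden for all clients
}

# Pass everything else through to Jenkins
location / {
    proxy_pass http://127.0.0.1:8080;  # assumed Jenkins backend address
}
```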

Additionally, we strongly recommend that all Jenkins administrators subscribe to the jenkinsci-advisories@googlegroups.com mailing list to receive future advisories.


The Jenkins project has a responsible disclosure policy, which we strongly encourage anybody who believes they have discovered a potential vulnerability to follow. You can learn more about this policy and our processes on our security page.


Upcoming November Jenkins Events


Guadalajara JAM

November is packed full of meetups and events. If you are in any of the areas below please stop by to say "Hi" and talk Jenkins over beer.

North America

Europe

Australia

Asia

Security updates addressing zero day vulnerability


A zero-day vulnerability in Jenkins was published on Friday, November 11. Last week we provided an immediate mitigation and today we are releasing updates to Jenkins which fix the vulnerability. We strongly recommend you update Jenkins to 2.32 (main line) or 2.19.3 (LTS) as soon as possible.

Today’s security advisory contains more information on the exploit, affected versions, and fixed versions, but in short:

An unauthenticated remote code execution vulnerability allowed attackers to transfer a serialized Java object to the Jenkins CLI, making Jenkins connect to an attacker-controlled LDAP server, which in turn can send a serialized payload leading to code execution, bypassing existing protection mechanisms.

Moving forward, the Jenkins security team is revisiting the design of the Jenkins CLI over the coming weeks to prevent this class of vulnerability in the future. If you are interested in participating in that discussion, please join in on the jenkinsci-dev@ mailing list.


The Jenkins project encourages administrators to subscribe to the jenkinsci-advisories@ mailing list to receive future Jenkins security notifications.

Tuning Jenkins GC For Responsiveness and Stability with Large Instances


This is a cross post by Sam Van Oort, Software Engineer at CloudBees and contributor to the Jenkins project.

Today I’m going to show you how easy it is to tune Jenkins Java settings to make your masters more responsive and stable, especially with large heap sizes.

The Magic Settings:

  • Basics: -server -XX:+AlwaysPreTouch

  • GC Logging: -Xloggc:$JENKINS_HOME/gc-%t.log -XX:NumberOfGCLogFiles=5 -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCCause -XX:+PrintTenuringDistribution -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy

  • G1 GC settings: -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1

  • Heap settings: set your minimum heap size (-Xms) to at least 1/2 of your maximum size (-Xmx).
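On a typical Linux install, these flags end up in the Jenkins service configuration. Here is a sketch assuming a Debian-style /etc/default/jenkins; file locations, variable names, and the heap sizes shown are assumptions that vary by distribution and instance:

```sh
# /etc/default/jenkins (sketch; adjust heap sizes to your instance)
JAVA_ARGS="-server -XX:+AlwaysPreTouch \
  -Xms8g -Xmx16g \
  -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled \
  -XX:+UseStringDeduplication \
  -Xloggc:$JENKINS_HOME/gc-%t.log -XX:NumberOfGCLogFiles=5 \
  -XX:+UseGCLogFileRotation -XX:GCLogFileSize=20m"
```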

Now, let’s look at where those came from! We’re going to focus on garbage collection (GC) here and dig fast and deep to strike for gold; if you’re not familiar with GC fundamentals take a look at this source.

Because performance tuning is data driven, I’m going to use real-world data selected from three very large Jenkins instances that I help support.

What we’re not going to do: Jenkins basics, or play with max heap. See the section "what should I do before tuning." This is for cases where we really do need a big heap and can’t easily split our Jenkins masters into smaller ones.

The Problem: Hangups

Symptom: Users report that the Jenkins instance periodically hangs, taking several seconds to handle normally fast requests. We may even see lockups or timeouts from systems communicating with the Jenkins master (build agents, etc). In long periods of heavy load, users may report Jenkins running slowly. Application monitoring shows that during lockups all or most of the CPU cores are fully loaded, but there’s not enough activity to justify it. Process and JStack dumps will reveal that the most active Java threads are doing garbage collection.

With Instance A, they had this problem. Their Jenkins Java arguments are very close to the default, aside from sizing the heap:

  • 24 GB max heap, 4 GB initial, default GC settings (ParallelGC)

  • A few flags set (some coming in as defaults): -XX:-BytecodeVerificationLocal -XX:-BytecodeVerificationRemote -XX:+ReduceSignalUsage -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation

After enabling garbage collection (GC) logging we see the following rough stats:

HeapStats Instance A System Red CPU use-parallelGC.

Diving deeper, we get this chart of GC pause durations:

Instance A Jenkins Red GC duration use-parallelGC

Key stats:

  • Throughput: 99.64% (percent of time spent executing application code, not doing garbage collection)

  • Average GC time: 348 ms (ugh!)

  • GC cycles over 2 seconds: 36 (2.7%)

  • Minor/Full GC average time: 263 ms / 2.803 sec

  • Object creation & promotion rate: 42.4 MB/s & 1.99 MB/s

Explanations:

As you can see, young GC cycles very quickly clear away freshly-created garbage, but the deeper old-gen GC cycles run very slowly: 2-4 seconds. This is where our problems happen. The default Java garbage collection algorithm (ParallelGC) pauses everything when it has to collect garbage (often called a "stop the world pause"). During that period, Jenkins is fully halted: normally (with small heaps) these pauses are too brief to be an issue. With heaps of 4 GB or larger, the time required becomes long enough to be a problem: several seconds over short windows, and over a longer interval you occasionally see much longer pauses (tens of seconds, or minutes.)

This is where the user-visible hangs and lock-ups happen. It also adds significant latency to those build/deploy tasks. In periods of heavy load, the system was even experiencing hangs of 30+ seconds for a single full GC cycle. This was long enough to trigger network timeouts (or internal Jenkins thread timeouts) and cause even larger problems.

Fortunately there’s a solution: the concurrent low-pause garbage collection algorithms, Concurrent Mark Sweep (CMS) and Garbage First (G1). These attempt to do much of the garbage collection concurrently with application threads, resulting in much shorter pauses (at a slight cost in extra CPU use). We’re going to focus on G1, because it is slated to become the default in Java 9 and is the official recommendation for large heap sizes.

Let’s see what happens when someone uses G1 on a similarly-sized Jenkins master with Instance B (17 GB heap):

Their settings:

  • 16 GB max heap, 0.5 GB initial size

  • Java flags (mostly defaults, except for G1): -XX:+UseG1GC -XX:+UseCompressedClassPointers -XX:+UseCompressedOops

And the GC log analysis:

Instance B Jenkins G1 duration

Key stats:

  • Throughput: 98.76% (not great, but still only slowing things down a bit)

  • Average GC time: 128 ms

  • GC cycles over 2 seconds: 11, 0.27%

  • Minor/Full GC average time: 122 ms / 1 sec 232 ms

  • Object creation & promotion rate: 132.53 MB/s & 522 KB/s

Okay, much better: some improvement may be expected from a 30% smaller heap, but not as much as we’ve seen. Most of the GC pauses are well under 2 seconds, but we have 11 outliers - long Full GC pauses of 2-12 seconds. Those are troubling; we’ll take a deeper dive into their causes in a second. First, let’s look at the big picture and at how Jenkins behaves with G1 GC for a second instance.

G1 Garbage Collection with Instance C (24 GB heap):

Their settings:

  • 24 GB max heap, 24 GB initial heap, 2 GB max metaspace

  • Some custom flags: -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+UseStringDeduplication -XX:+UseCompressedClassPointers -XX:+UseCompressedOops

Clearly they’ve done some garbage collection tuning and optimization. The AlwaysPreTouch pre-zeros allocated heap pages, rather than waiting until they’re first used. This is suggested especially for large heap sizes, because it trades slightly slower startup times for improved runtime performance. Note also that they pre-allocated the whole heap. This is a common optimization.

They also enabled StringDeduplication, a G1 option introduced in Java 8 Update 20 that transparently replaces identical character arrays with pointers to the original, reducing memory use (and improving cache performance). Think of it like String.intern() but it silently happens during garbage collection. This is a concurrent operation added on to normal GC cycles, so it doesn’t pause the application. We’ll look at its impacts later.

Looking at the basics:

Instance C G1 duration

Similar picture to Instance B, but it’s hidden by the sheer number of points (this is a longer period here, 1 month). Those same occasional Full GC outliers are present!

Key stats:

  • Throughput: 99.93%

  • Average GC time: 127 ms

  • GC cycles over 2 seconds: 235 (1.56%)

  • Minor/Full GC average time: 56 ms / 3.97 sec

  • Object creation & promotion rate: 34.06 MB/s & 286 kB/s

Overall fairly similar to Instance B: ~100 ms GC cycles, and all the minor GC cycles are very fast. Object promotion rates are similar as well.

Remember those random long pauses?

Let’s find out what caused them and how to get rid of them. Instance B had 11 super-long pause outliers. Let’s get some more detail by opening the GC logs in GCViewer. This tool gives a tremendous amount of information; too much, in fact, so I prefer GCEasy.io except where the extra detail is needed. Since GC logs do not contain sensitive information (unlike heap dumps or some stack traces), web-based analyzers are a safe and convenient choice.

Instance B Jenkins G1 causes

What we care about are the Full GC times in the middle (highlighted). See how much longer they are vs. the young and concurrent GC cycles up top (2 seconds or less)?

Now, I lied a bit earlier - sorry! For concurrent garbage collectors, there are actually 3 modes: young GC, concurrent GC, and full GC. Concurrent GC replaces Parallel GC’s Full GC mode with a faster operation that runs alongside the application. But in a few cases we are forced to fall back to a non-concurrent Full GC operation, which uses the serial (single-threaded) garbage collector. That means that even if we have 30+ CPU cores, only one is working. This is what is happening here, and on a large-heap, multicore system it is S L O W. How slow? 280 MB/s vs. 12487 MB/s for Instance B (for Instance C, the difference is also about 50:1).
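A quick sanity check of the Instance B gap quoted above, using just the two throughput figures from the GC logs:

```shell
# 12487 MB/s (concurrent/parallel collection) vs. 280 MB/s (serial Full GC).
RATIO=$(awk 'BEGIN { printf "%.0f", 12487 / 280 }')
echo "Serial Full GC is roughly ${RATIO}x slower"
```

Roughly a 45:1 gap, consistent with the ~50:1 figure cited for Instance C.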

What triggers a full GC instead of concurrent:

  • Explicit calls to System.gc() (the most common culprit, often tricky to track down)

  • Metadata GC Threshold: Metaspace (used mostly for Class data) has hit the size that forces a garbage collection or a resize. Documentation for this is terrible; Stack Overflow will be your friend.

  • Concurrent mode failure: concurrent GC can’t complete fast enough to keep up with objects the application is creating (there are JVM arguments to trigger concurrent GC earlier)

How do we fix this?

For explicit GC:

  • -XX:+DisableExplicitGC will turn off Full GC triggered by System.gc(). Often set in production, but the below option is safer.

  • We can trigger a concurrent GC in place of a full one with -XX:+ExplicitGCInvokesConcurrent - this will take the explicit call as a hint to do deeper cleanup, but with less performance cost.

Gotcha for people who’ve used CMS: if you have used CMS in the past, you may have used the option -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses — which does what it says. This option will silently fail in G1, meaning you still see the very long pauses from Full GC cycles as if it wasn’t set (no warning is generated). I have logged a JVM bug for this issue.

For the Metadata GC threshold:

  • Increase your initial metaspace to the final amount to avoid resizing. For example: -XX:MetaspaceSize=500M
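Both fixes can be expressed as a small flag set (500M matches the example above; your metaspace footprint may differ, so check your own GC logs first):

```shell
# Sketch: flags addressing both Full GC triggers discussed above.
# ExplicitGCInvokesConcurrent turns System.gc() into a concurrent cycle;
# MetaspaceSize pre-sizes metaspace so it isn't resized under load.
FULL_GC_FIXES="-XX:+ExplicitGCInvokesConcurrent -XX:MetaspaceSize=500M"
echo "$FULL_GC_FIXES"
```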

Instance C suffered from the same explicit GC calls: almost all of its outliers (230 of 235) were slow, non-concurrent Full GC cycles, all from explicit System.gc() calls, since metaspace was already tuned:

Instance C Jenkins G1 GC causes

Here’s what GC pause durations look like if we remove the log entries for the explicit System.gc() calls, assuming that they’ll blend in with the other concurrent GC pauses (not 100% accurate, but a good approximation):

Instance B:

Instance B Jenkins GC duration - G1 - no explicit pauses

The few long Full GC cycles at the start are from metaspace expansion — they can be removed by increasing initial Metaspace size, as noted above. The spikes? That’s when we’re about to resize the Java heap, and memory pressure is high. You can avoid this by setting the minimum/initial heap to at least half of the maximum, to limit resizing.

Stats:

  • Throughput: 98.93%

  • Average GC time: 111 ms

  • GC cycles over 2 seconds: 3

  • Minor & Full or concurrent GC average time: 122 ms / 25 ms (yes, faster than minor!)

  • Object creation & promotion rate: 132.07 MB/s & 522 kB/s

Instance C:

Instance C Jenkins G1 - no explicit pauses

Stats:

  • Throughput: 99.97%

  • Average GC time: 56 ms

  • GC cycles over 2 seconds: 0 (!!!)

  • Minor & Full or concurrent GC average time: 56 ms & 10 ms (yes, faster than minor!)

  • Object creation & promotion rate: 33.31 MB/s & 286 kB/s

  • Side point: GCViewer is claiming GC performance of 128 GB/s (not unreasonable, we clear ~10 GB of young generation in under 100 ms usually)

Okay, so we’ve tamed the long worst-case pauses!

But What About Those Long Minor GC Pauses We Saw?

Okay, now we’re in the home stretch! We’ve tamed the old-generation GC pauses with concurrent collection, but what about those longer young-generation pauses? Let’s look at stats for the different phases and causes again in GCViewer.

Instance C Jenkins G1 causes -no explicit pauses

Highlighted in yellow we see the culprit: the remark phase of G1 garbage collection. This stop-the-world phase ensures we’ve identified all live objects, and processes references (more info).

Let’s look at a sample execution to get more info:

2016-09-07T15:28:33.104+0000: 26230.652: [GC remark 26230.652: [GC ref-proc, 1.7204585 secs], 1.7440552 secs]

 [Times: user=1.78 sys=0.03, real=1.75 secs]

This turns out to be typical for this GC log: the longest pauses are spent in reference processing. That is not surprising, because Jenkins internally uses references heavily for caching, especially weak references, and the default reference processing algorithm is single-threaded. Note that user (CPU) time matches real time; if multiple cores were doing the work, user time would exceed real time.

So, we add the GC flag -XX:+ParallelRefProcEnabled which enables us to use the multiple cores more effectively.

Tuning young-generation GC further based on Instance C:

Back to GCViewer we go, to see what’s time-consuming in the GC for Instance C.

Instance C Jenkins G1 causes -no explicit pauses

That’s good, because most of the time is just sweeping out the trash (evacuation pause). But the 1.8 second pause looks odd. Let’s look at the raw GC log for the longest pause:

2016-09-24T16:31:27.738-0700: 106414.347: [GC pause (G1 Evacuation Pause) (young), 1.8203527 secs]
[Parallel Time: 1796.4 ms, GC Workers: 8]
 [GC Worker Start (ms): Min: 106414348.2, Avg: 106414348.3, Max: 106414348.6, Diff: 0.4]
[Ext Root Scanning (ms): Min: 0.3, Avg: 1.7, Max: 5.7, Diff: 5.4, Sum: 14.0]
  [Update RS (ms): Min: 0.0, Avg: 7.0, Max: 19.6, Diff: 19.6, Sum: 55.9]
    [Processed Buffers: Min: 0, Avg: 45.1, Max: 146, Diff: 146, Sum: 361]
 [Scan RS (ms): Min: 0.2, Avg: 0.4, Max: 0.7, Diff: 0.6, Sum: 3.5]
 [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.2]
 [Object Copy (ms): Min: 1767.1, Avg: 1784.4, Max: 1792.6, Diff: 25.5, Sum: 14275.2]
 [Termination (ms): Min: 0.3, Avg: 2.4, Max: 3.5, Diff: 3.2, Sum: 19.3]
    [Termination Attempts: Min: 11, Avg: 142.5, Max: 294, Diff: 283, Sum: 1140]
 [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.4, Diff: 0.3, Sum: 0.8]
 [GC Worker Total (ms): Min: 1795.9, Avg: 1796.1, Max: 1796.2, Diff: 0.3, Sum: 14368.9]
 [GC Worker End (ms): Min: 106416144.4, Avg: 106416144.5, Max: 106416144.5, Diff: 0.1]

…​oh, well dang. Almost the entire time (1.792 s out of 1.820) is walking through the live objects and copying them. And wait, what about this line, showing the summary statistics:

[Eden: 13.0G(13.0G)->0.0B(288.0M) Survivors: 1000.0M->936.0M Heap: 20.6G(24.0G)->7965.2M(24.0G)]

Good grief, we flushed out 13 GB (!!!) of freshly-allocated garbage in one swoop and compacted the leftovers! No wonder it was so slow. I wonder how we accumulated so much…​

Instance C Jenkins G1-ExplictGC removed

Oh, right…​ we set up for 24 GB of heap initially, and each minor GC clears most of the young generation. Okay, so we’ve set aside tons of space for trash to collect, which means longer but less frequent GC periods. This also gets the best performance from Jenkins memory caches which are using WeakReferences (survives until collected by GC) and SoftReferences (more long-lived). Those caches boost performance a lot.

We could take action to prevent those rare longer pauses. The best options are to limit total heap size or to reduce -XX:MaxGCPauseMillis from its default of 200. A more advanced approach (if those don’t help enough) is to explicitly cap the young generation at a smaller size (say -XX:G1MaxNewSizePercent=45 instead of the default 60). We could also throw more CPUs at the problem.
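As a sketch, those young-generation options might be combined like this (150 ms is an illustrative reduction from the 200 ms default, not a recommendation; G1MaxNewSizePercent is experimental on Java 8 and needs the unlock flag):

```shell
# Hypothetical tuning to shrink worst-case young-GC pauses.
YOUNG_GEN_OPTS="-XX:MaxGCPauseMillis=150 \
-XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=45"
echo "$YOUNG_GEN_OPTS"
```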

But if we look up, most pauses are around 100 ms (200 ms is the default value for MaxGCPauseMillis). For Jenkins on this hardware, this appears to work just fine, and a rare longer pause is OK as long as they don’t get too big. Also remember: if this happens often, G1 GC will try to self-tune for lower pauses and more predictable performance.

A Few Final Settings

We mentioned that StringDeduplication was on for Instance C; what is the impact? It only triggers on Strings that have survived a few GC cycles (most of our garbage does not), has limits on the CPU time it can use, and replaces duplicate references with pointers to their immutable backing character arrays. For more info, look here. So, we should be trading a little CPU time for improved memory efficiency (similar to string interning).

At the beginning, this has a huge impact:

[GC concurrent-string-deduplication, 375.3K->222.5K(152.8K), avg 63.0%, 0.0024966 secs]
[GC concurrent-string-deduplication, 4178.8K->965.5K(3213.2K), avg 65.3%, 0.0272168 secs]
[GC concurrent-string-deduplication, 36.1M->9702.6K(26.6M), avg 70.3%, 0.0965196 secs]
[GC concurrent-string-deduplication, 4895.2K->394.9K(4500.3K), avg 71.9%, 0.0114704 secs]

This peaks at an average of about 90%.

After running for a month, there is less of an impact - many of the strings that can be deduplicated already have been:

[GC concurrent-string-deduplication, 138.7K->39.3K(99.4K), avg 68.2%, 0.0007080 secs]
[GC concurrent-string-deduplication, 27.3M->21.5M(5945.1K), avg 68.1%, 0.0554714 secs]
[GC concurrent-string-deduplication, 304.0K->48.5K(255.5K), avg 68.1%, 0.0021169 secs]
[GC concurrent-string-deduplication, 748.9K->407.3K(341.7K), avg 68.1%, 0.0026401 secs]
[GC concurrent-string-deduplication, 3756.7K->663.1K(3093.6K), avg 68.1%, 0.0270676 secs]
[GC concurrent-string-deduplication, 974.3K->17.0K(957.3K), avg 68.1%, 0.0121952 secs]

However it’s cheap to use: on average, each dedup cycle takes 8.8 ms and removes 2.4 kB of duplicates; the median cycle takes 1.33 ms and removes 17.66 kB from the old generation. A small change per cycle, but in aggregate it adds up: in periods of heavy load, this can save hundreds of megabytes. Still, that’s small relative to multi-GB heaps.
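As a quick worked check, take the 3756.7K->663.1K line from the month-old log above (the avg figure the JVM prints is cumulative across cycles, which is why it differs from any single pass):

```shell
# Per-cycle reduction for that one dedup pass (sizes in KB from the log line).
PCT=$(awk 'BEGIN { printf "%.1f", (3756.7 - 663.1) / 3756.7 * 100 }')
echo "That cycle removed ${PCT}% of the scanned string data"
```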

Conclusion: turn string deduplication on. It is fairly cheap to use and reduces the steady-state memory needed for Jenkins. That frees up more room for the young generation, and should reduce overall GC time by removing duplicate objects.

Soft reference flushing: Jenkins uses soft references for caching build records and in pipeline FlowNodes. The only guarantee for these is that they will be removed instead of causing an OutOfMemoryError…​ however Java applications can slow to a crawl from memory pressure long before that happens. There’s an option that provides a hint to the JVM based on time & free memory, controlled by -XX:SoftRefLRUPolicyMSPerMB (default 1000). The SoftReferences become eligible for garbage collection after this many milliseconds have elapsed since last touch…​ per MB of unused heap (vs the maximum). The referenced objects don’t count towards that target. So, with 10 GB of heap free and the default 1000 ms setting, soft references stick around for ~2.8 hours (!).
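The ~2.8 hour figure can be reproduced directly from the formula above (1000 ms per MB of free heap, with the 10 GB of free heap assumed in the text):

```shell
# 1000 ms/MB default * 10240 MB free heap, converted to hours.
HOURS=$(awk 'BEGIN { printf "%.2f", 1000 * 10240 / 1000 / 3600 }')
echo "SoftReferences linger for ~${HOURS} hours"
```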

If the system is continuously allocating more soft references, it may trigger heavy GC activity, rather than clearing out soft references. See the open bugJDK-6912889 for more details.

If Jenkins consumes excessive old generation memory, it may help to make soft references easier to flush by reducing -XX:SoftRefLRUPolicyMSPerMB from its default (1000) to something smaller (say 10-200). The catch is that SoftReferences are often used for objects that are relatively expensive to load, such as lazy-loaded build records and pipeline FlowNode data.

Caveats

G1 vs. CMS:

G1 was available in later releases of JRE 7, but it was unstable and slow there. If you use G1 you absolutely must be on JRE 8, and the later the release the better (it has received a lot of patches). Googling around will turn up horrible G1 vs. CMS benchmarks from around 2014; these are best ignored, since the G1 implementation was still immature then. There is probably still a niche for CMS, especially on midsized heaps (1-3 GB) or where settings are already tuned. With appropriate tuning it can still perform well for Jenkins (which mostly generates short-lived garbage), but CMS eventually suffers from heap fragmentation and needs a slow, non-concurrent Full GC to clear it. It also needs considerably more tuning than G1.

General GC tuning caveats:

No single setting is perfect for everybody. We avoid tweaking settings we don’t have strong evidence for here, but there are of course many more settings to tweak. One shouldn’t change them without evidence, though, because doing so can cause unexpected side effects. The GC logs we enabled earlier will collect that evidence. The one setting that jumps out as a likely candidate for further tuning is G1 region size (too small, and there are many humongous object allocations, which hurt performance). Running on smaller systems, I’ve seen evidence that regions shouldn’t be smaller than 4 MB, because 1-2 MB objects are allocated somewhat regularly; but that’s not enough to make solid guidance without more data.

What Should I Do Before Tuning Jenkins GC:

If you’ve seen Stephen Connolly’s excellent Jenkins World talk, you know that most Jenkins instances can and should get by with 4 GB or less of allocated heap, even at very large instance sizes. You will want to turn on GC logging (suggested above) and look at stats over a few weeks (remember GCeasy.io). If you’re not seeing periodic longer pause times, you’re probably okay.

For this post we assume we’ve already done the basic performance work for Jenkins:

  1. Jenkins is running on fast, SSD-backed storage.

  2. We’ve set up build rotation for our Jobs, to delete old builds so they don’t pile up.

  3. The weather column is already disabled for folders.

  4. All builds/deploys are running on build agents (formerly slaves), not on the master. If the master has executors allocated, they are exclusively used for backup tasks.

  5. We’ve verified that Jenkins really does need the large heap size and can’t easily be split into separate masters.

If not, we need to do that FIRST before looking at GC tuning, because those will have larger impacts.

Conclusions

We’ve gone from:

  • Average 350 ms pauses (bad user experience), including less frequent 2+ second Full GC pauses

  • To an average pause of ~50 ms, with almost all under 250 ms

  • Reduced total memory footprint from String deduplication

How:

  1. Use Garbage First (G1) garbage collection, which performs generally very well for Jenkins. Usually there’s enough spare CPU time to enable concurrent running.

  2. Ensure explicit System.gc() calls and metaspace resizing do not trigger a Full GC, which can cause a very long pause

  3. Turn on parallel reference processing for Jenkins to use all CPU cores fully.

  4. Use String deduplication, which generates a tidy win for Jenkins

  5. Enable GC logging, which can then be used for the next level of tuning and diagnostics, if needed.

There’s still a little unpredictability, but using appropriate settings gives a much more stable, responsive CI/CD server…​ even up to 20 GB heap sizes!

One additional thing

I’ve added -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 to our options above. This covers a complex and usually infrequent case where G1 self-tuning can trigger bad performance for Jenkins — but that’s material for another post…​
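Pulling the recommendations together, a hedged consolidated flag set based on this post might look like the following (values such as MetaspaceSize=500M are the examples used above, not universal prescriptions; size -Xms/-Xmx for your own instance):

```shell
# Sketch of a combined starting point from this post's findings.
JENKINS_GC_OPTS="-XX:+UseG1GC \
-XX:+ExplicitGCInvokesConcurrent \
-XX:+ParallelRefProcEnabled \
-XX:+UseStringDeduplication \
-XX:MetaspaceSize=500M \
-XX:+AlwaysPreTouch \
-XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20"
echo "$JENKINS_GC_OPTS"
```

Verify the effect on your own instance with GC logging before treating any of these as final.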

What JVM versions are running Jenkins? 2016 Update!

What follows contains some opinions or statements that may not be seen as purely factual or neutral. This in no way represents the general position of the Jenkins governance board; it is solely my opinion as a contributor, based on the data I gathered and the feedback I hear from the community at large.

Java 8 now the most used version, and growing

If we look at the global numbers, Java 8 runtimes now represent 52.8% of the running Jenkins instances that have not opted out of anonymous usage statistics.

2016 jvm stats all

And if you look at the trend, Java 8 is clearly growing fast.

Zooming into the Jenkins 2.x instances subset

Now, that picture, while already interesting and showing a clear trend toward Java 8 runtime adoption, might be argued to be too generous to older JREs. The reasoning: instances running (very) old Jenkins versions are probably not the ones to consider when planning the future of an open source project. They are generally not going to upgrade anyway, and when they do, upgrading the JRE will be a small task compared to everything else that must be tested across such a gap.

So, if we only keep the instances running Jenkins 2.x, then the proportion of Java 8 goes to almost 70% compared to Java 7 (Jenkins 2.x requires Java 7)[1]:

2016 jvm stats only 2.x

Conclusion

Java 8 adoption keeps growing, while every other JRE is declining.

If you are still using JRE 7 to run Jenkins, it is seriously time to think about upgrading to 8. Knowing that this is definitely not a bleeding-edge path might help you make the move, especially if you generally dislike upgrades. As a reminder, the most-used JDK, Oracle JDK 7, reached end of life more than 18 months ago.

Unlike attempts in previous years, the discussion on the Jenkins development mailing list did not trigger strong rebuttals.

Perhaps it’s finally time for Mr. Jenkins to upgrade to Java 8!

All numbers shown above are derived from the new jvms.json file, now generated automatically every month after the two related pull requests 1 and 2 were merged.[2]

1. 69% for October, 67% in September
2. You are more than welcome to review those Pull-Requests and shout if you see something wrong in the calculations.

Upcoming December Jenkins Events


Happy Holidays! A special shout out to all JAM leaders who continue to keep local activities going in December.

North America

Australia
