
Jenkins Javadoc: Service Improvements


Jenkins infrastructure is continuously improving. The latest service to get some attention and major improvement is the Jenkins javadoc.

There were a number of issues affecting that service:

  • Irregular updates - developers couldn't find the latest Java documentation because of the inadequate update frequency.

  • Broken HTTPS support - when users went to the Javadoc site, they got an unsafe-site warning followed by an incorrect redirection.

  • Obsolete content - the Javadoc was not cleaned up correctly, and plenty of obsolete pages remained, which confused end users.

As Jenkins services migrate to Azure infrastructure, the javadoc service needed to be moved there as a standalone service. I took the same approach as jenkins.io: putting the data on Azure File Storage, using an nginx proxy in front of it, and running it on Kubernetes. This approach brings multiple benefits:

  1. We store static files on Azure File Storage, which brings data reliability, redundancy, etc.

  2. We use a Kubernetes Ingress to configure the HTTP/HTTPS endpoint.

  3. We use a Kubernetes Service to provide load balancing.

  4. We use a Kubernetes Deployment to deploy default nginx containers with an Azure File Storage volume.

HTTP/HTTPS workflow
  +----------------------+     goes on     +------------------------------+
  |  Jenkins Developer   |---------------->+  https://javadoc.jenkins.io  |
  +----------------------+                 +------------------------------+
                                                                      |
  +-------------------------------------------------------------------|---------+
  | Kubernetes Cluster:                                               |         |
  |                                                                   |         |
  | +---------------------+     +-------------------+     +-----------v------+  |
  | | Deployment: Javadoc |     | Service: javadoc  <-----| Ingress: javadoc |  |
  | +---------------------+     +-------------------+     +------------------+  |
  |                                           |                                 |
  |                          -----------------+                                 |
  |                          |                |                                 |
  |                          |                |                                 |
  | +------------------------v--+    +--------v------------------+              |
  | | Pod: javadoc              |    | Pod: javadoc              |              |
  | | container: "nginx:alpine" |    | container: "nginx:alpine" |              |
  | | +-----------------------+ |    | +-----------------------+ |              |
  | | | Volume:               | |    | | Volume:               | |              |
  | | | /usr/share/nginx/html | |    | | /usr/share/nginx/html | |              |
  | | +-------------------+---+ |    | +----+------------------+ |              |
  | +---------------------|-----+    +------|--------------------+              |
  |                       |                 |                                   |
  +-----------------------|-----------------|-----------------------------------+
                          |                 |
                          |                 |
                       +--+-----------------+-----------+
                       |   Azure File Storage: javadoc  |
                       +--------------------------------+

The javadoc static files are now generated regularly by a Jenkins job and then published from a trusted Jenkins instance. We only update what has changed and remove obsolete documents. More information can be found here.

The next step in this continuous improvement is to look at the user experience of the javadoc site, making it easier to discover javadoc for other components or versions. (Help Needed)

These changes all go towards improving the developer experience for those using javadocs and making life easier for core and plugin developers. See the new and improved javadoc service here: Jenkins Javadoc.


Join us at the Jenkins Contributor Summit San Francisco, Monday 17 September 2018


The Jenkins Contributor summit is where the current and future contributors of the Jenkins project get together. This summit will be on Monday, September 17th 2018 in San Francisco, just before DevOps World | Jenkins World. The summit brings together community members to learn, meet and help shape the future of Jenkins. In the Jenkins community we value all types and sizes of contributions and love to welcome new participants. Register here.

Topics

There are plenty of exciting developments happening in the Jenkins community. The summit will feature a 'State of the Project' update including updates from the Jenkins officers. We will also have updates on the 'Big 5' projects in active development:

  • Jenkins Essentials

  • Jenkins X

  • Configuration as Code

  • Jenkins Pipeline

  • Cloud Native Jenkins Architecture

Plus we will feature a Google Summer of Code update, and more!

Agenda

The agenda is shaping up well and here is the outline so far.

  • 9:00am Kickoff & Welcome with coffee/pastries

  • 10:00am Project Updates

  • 12:00pm Lunch

  • 1:00pm BoF/Unconference

  • 3:00pm Break

  • 3:30pm Ignite Talks

  • 5:00pm Wrap-up

  • 6:00pm Contributor Dinner

The BoF (birds-of-a-feather) session will be an opportunity for in depth discussions, hacking or learning more about any of the big 5. Bring your laptop, come prepared with questions and ideas, and be ready for some hacking too if you want. Join in, hear the latest and get involved in any project during the BoF sessions. If you want to share anything there will be an opportunity to do a 5-min ignite talk at the end. Attending is free, and no DevOps World | Jenkins World ticket is needed, but RSVP if you are going to attend to help us plan. See you there!

Introducing Jenkins Cloud Native SIG


On large-scale Jenkins instances, master disk and network I/O can become bottlenecks in particular cases. Build logging and artifact storage are among the most intensive I/O consumers, hence it would be great to somehow redirect them to external storage. Back in 2016 there were active discussions about such Pluggable Storage for Jenkins. At that point we created several prototypes, but then other work took precedence. There was still high demand for Pluggable Storage on large-scale instances, and these stories also became a major obstacle for cloud-native Jenkins setups.

I am happy to say that the Pluggable Storage discussions are back online. You may have seen changes in the Core for Artifact Storage (JEP-202) and a new Artifact Manager for S3 plugin. We have also created a number of JEPs for External Logging and created a new Cloud Native Special Interest Group (SIG) to offer a venue for discussing changes and to keep them as open as possible.

Tomorrow, Jesse Glick and I will be presenting the current External Logging designs at the Cloud Native SIG online meeting; you can find more info about the meeting here. I decided that this is a good time to write about the new SIG. In this blog post I will try to present my vision of the SIG and its purpose, and to summarize the current status of the activities in the group.

What are Special Interest Groups?

If you follow the developer mailing list, you may have seen the discussion about introducing SIGs in the Jenkins project. The SIG model was proposed by R. Tyler Croy, and it largely follows the successful Kubernetes SIG model. The objective of these SIGs is to make the community more transparent to contributors and to offer venues for specific discussions. The idea of SIGs and how to create them is documented in JEP-4. JEP-4 is still in Draft state, but a few SIGs have already been created using that process: Platform SIG, GSoC SIG and, finally, Cloud Native SIG.

SIGs are a big opportunity for the Jenkins project, offering a new way to onboard contributors who are interested only in particular aspects of Jenkins. With SIGs they can subscribe to particular topics without following the entire developer mailing list, which can get pretty busy nowadays. It also offers company contributors a clear way to join the community and participate in specific areas. This is great for larger projects which cannot be done by a single contributor. Like JEPs, SIGs help focus and coordinate efforts.

And, back to major efforts: lack of resources among core contributors was one of the reasons why we did not deliver on the Pluggable Storage stories back in 2016. I believe that SIGs can help fix that in Jenkins by making it easier to find groups with the same interests and reach out to them to organize activity. Regular meetings are also helpful to keep such efforts moving.

The points above are the main reasons why I joined the Cloud Native SIG. Similarly, that’s why I decided to create a Platform SIG to deliver on major efforts like Java 10+ support in Jenkins. I hope that more SIGs get created soon so that contributors can focus on the areas of their interest.

Cloud Native SIG

In the original proposal, Carlos Sanchez, the Cloud Native SIG chair, described the purpose of the SIG well. There has been great progress this year in cloud-native-minded projects like Jenkins X and Jenkins Essentials, but the current Jenkins architecture does not offer some features which could be utilized there: Pluggable Storage, High Availability, etc. There are ways to achieve these using Jenkins plugins and some infrastructure tweaks, but it is far from an out-of-the-box experience. This complicates Jenkins management and slows down development of new cloud-native solutions for Jenkins.

So, what do I expect from the SIG?

  • Define a roadmap towards a Cloud Native Jenkins architecture which will help the project stay relevant for cloud-native installations

  • Provide a venue for discussion of critical Jenkins architecture changes

  • Act as a steering committee for Jenkins Enhancement Proposals in the area of Cloud-Native solutions

  • Finally, coordinate efforts between contributors and get new contributors onboard

What’s next in the SIG?

The SIG agenda is largely defined by the SIG participants. If you are interested in discussing particular topics, just propose them on the SIG mailing list. As the current SIG page describes, several areas are defined as initial topics: Artifact Storage, Log Storage, and Configuration Storage.

All these topics are related to the Pluggable Storage area, and the end goal for them is to ensure that all data is externalized so that replication becomes possible. In addition to the data types mentioned above, which were discussed at the Jenkins World 2016 summit, we will need to externalize other data types: Item and Run storage, Fingerprints, Test and coverage results, etc. There is some foundation work being done for that. For example, Shenyu Zheng is working on a Code Coverage API plugin which will help unify the code coverage storage formats in Jenkins.

Once the Pluggable Storage stories are done the next steps are true High Availability, rolling or canary upgrades and zero downtime. At that point other foundation stories like Remoting over Kafka by Pham Vu Tuan might be integrated into the Cloud Native architecture to make Jenkins more robust against outages within the cluster. It will take some time to get to this state, but it can be done incrementally.

Let me briefly summarize the current state of the three focus areas listed in the Cloud Native SIG.

Artifact Storage

There are many existing plugins that allow uploading and downloading artifacts from external storage (e.g. S3, Artifactory, Publish over SFTP, etc.), but there are no plugins which can do it transparently, without new steps. In many cases the artifacts also get uploaded through the master, which increases the load on the system. It would be great if there were a layer which allowed storing artifacts externally when using common steps like Archive Artifacts.

Artifact storage work was started this spring by Jesse Glick, Carlos Sanchez and Ivan Fernandez Calvo before the Cloud Native SIG was actually founded. Current state:

  • JEP-202 "External Artifact Storage" has been proposed in the Jenkins community. This JEP defines the API changes in the Jenkins core which are needed to support external artifact managers.

  • Jenkins Pipeline has been updated to support external artifact storage for archive/unarchive and stash/unstash.

  • The new Artifact Manager for S3 plugin is a reference implementation of the new API. The plugin is available in the main Jenkins update centers.

  • A number of plugins have been updated in order to support external artifact storage.

The Artifact Manager API is available in Jenkins LTS starting from 2.121.1, so it is possible to create new implementations using the provided API and the existing implementations as references. This new feature is fully backward compatible with the default filesystem-based storage, but there are known issues for plugins that explicitly rely on artifact locations in JENKINS_HOME (you can find a list of such plugins here). It will take a while to get all plugins supported, but the new API in the core should allow migrating them.
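As a rough illustration (a sketch only; the build commands and file patterns below are hypothetical), once an external artifact manager such as the S3 one is configured globally, the ordinary archiving and stashing steps route data to the external store without any change to the Pipeline script:

node {
    stage('Build') {
        // hypothetical build producing some artifacts
        sh './gradlew build'
        // with an external artifact manager configured, these standard steps
        // store data in the external storage instead of JENKINS_HOME
        stash name: 'binaries', includes: 'build/libs/**'
        archiveArtifacts artifacts: 'build/libs/*.jar'
    }
    stage('Test') {
        // retrieved transparently from the same external store
        unstash 'binaries'
        sh './gradlew integrationTest'
    }
}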

I hope we will revisit External Artifact Storage at the SIG meetings at some point. It would be a good opportunity to do a retrospective and to understand how to improve the process in the SIG.

Log storage

Log storage is a separate big story. Back in 2016, external logging was one of the key Pluggable Storage stories we defined at the contributor summit. We created an epic for the story (JENKINS-38313) and after that created a number of prototypes together with Xing Yan and Jesse Glick. One of these prototypes for Pipeline has recently been updated and published here.

Jesse Glick and Carlos Sanchez are returning to this story and plan to discuss it within the Cloud Native SIG. There are a number of Jenkins Enhancement proposals which have been submitted recently:

  • JEP-207 - External Build Logging support in the Jenkins Core

  • JEP-210 - External log storage for Pipeline

  • Draft JEP - External Logging API Plugin

  • JEP-206 - Use UTF-8 for Pipeline build logs

In the linked documents you can find references to current reference implementations. So far we have a working prototype for the new design. There are still many bits to fix before the final release, but the designs are ready for review and feedback.

This Tuesday (Jul 31) we are going to have a SIG meeting in order to present the current state and to discuss the proposed designs and JEPs. The meeting will happen at 3PM UTC. You can watch the broadcast using this link. The participant link will be posted in the SIG's Gitter channel 10 minutes before the meeting.

Configuration storage

This is one of the future stories we would like to consider. Although configurations are not big, externalizing them is a critical task for getting highly available or disposable Jenkins masters. There are many ways to store configurations in Jenkins, but 95% of cases are covered by the XmlFile layer, which serializes objects to disk and reads them back using the XStream library. Externalizing these XmlFiles would be a great step forward.
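As a quick illustration of that layer (a script console sketch, not part of any proposal), the global configuration itself is one of those XStream-serialized files under JENKINS_HOME:

import hudson.XmlFile
import jenkins.model.Jenkins

// Script console sketch: Jenkins' global configuration is persisted through
// the same XmlFile/XStream layer discussed above, as XML under JENKINS_HOME.
def configFile = new XmlFile(new File(Jenkins.get().rootDir, 'config.xml'))
println configFile.asString()   // dump the serialized global configuration

Intercepting this single layer would therefore cover the vast majority of configuration data.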

There are several prototypes for externalizing configurations, e.g. in DotCI. There are also other implementations which could be upstreamed to the Jenkins core:

  • Alex Nordlund has recently proposed a pull request to the Jenkins core, which should make the XML storage pluggable

  • James Strachan has implemented a similar engine for Kubernetes in the kubeify prototype

  • I also did some experiments with externalizing XML storage back in 2016

The next step for this story would be to aggregate the implementations into a single JEP. I have it in my queue, and I hope to write up a design once we get more clarity on the External Logging stories.

Conclusions

Special Interest Groups are a new format for collaboration and discussion in the Jenkins community. Although we have had some work groups before (Infrastructure, Configuration-as-Code, etc.), the introduction of SIGs sets a new bar in terms of project transparency and consistency. Major architecture changes in Jenkins are needed to ensure its future in new environments, and SIGs will help boost visibility and participation around these changes.

If you want to know more about the Cloud Native SIG, all resources are listed on the SIG’s page on jenkins.io. If you want to participate in the SIG’s activities, just do the following:

  1. Subscribe to the mailing list

  2. Join our Gitter channel

  3. Join our public meetings

I am also working on organizing a face-to-face Cloud Native SIG meeting at the Jenkins Contributor Summit, which will happen on September 17 during DevOps World | Jenkins World in San Francisco. If you come to DevOps World | Jenkins World, please feel free to join us at the contributor summit or to meet us at the community booth. Together with Jesse and Carlos, we are also going to present some bits of our work at the "A Cloud Native Jenkins" talk.

Stay tuned for more updates and demos on the Cloud-Native Jenkins fronts!

Building a Serverless CI/CD Pipeline with Jenkins

serverless pipeline

As with any newly hyped technology, the term 'serverless' is often overloaded with different meanings. Sometimes serverless is oversimplified to mean function-as-a-service (FaaS), but there is more to it than that. Also, not many people are talking about doing CI/CD with serverless, even though where there is code there is still a need for continuous integration and continuous delivery. So I was excited to hear about this talk by Anubhav Mishra on Building a CI/CD Pipeline for Serverless Applications.

In the talk Anubhav proposes a new definition for serverless:

Serverless is a technology pattern that provides services and concepts to minimize operational overhead that comes with managing servers. It is a powerful abstraction when used can result in an increased focus on business value.

— Anubhav Mishra, OSCON 2018 Portland

The talk then goes on to demo Jenkins on AWS Fargate (a platform for running containers without managing servers or clusters). The main focus is on increased elasticity/scaling.

The advantages of this approach are:

  • No nodes/servers to manage

  • Launch 10,000+ builds/containers in seconds

  • No cost for idle time

The real headline is the cost saving, which is 2 orders of magnitude better with serverless. A cost comparison is done based on 1 vCPU & 2GB memory:

  • With Jenkins on Fargate: 100 builds * 5 mins = $0.633/month

  • With Jenkins on EC2 Instances: ~$50/month

This huge potential cost saving is one of the things that makes serverless incredibly compelling. Not to mention you don’t have to think much upfront about scaling the system.

But there are drawbacks with this approach, noted as:

  • Cold starts - slower boot times for clients

  • Large container images (~1G)

  • No root access

  • Ephemeral storage (default)

This is an area where Jenkins can continue to evolve to make the most of serverless architectures. I highly recommend you check out the slides for yourself. The best part is that, in the true spirit of open source, Anubhav shared the code here. So you can give it a try yourself and build your own serverless CI/CD pipeline with Jenkins.

alpha-3 release Pipeline as YAML (Simple pull request plugin)


About me

I am Abhishek Gautam, a 3rd year student at Visvesvaraya National Institute of Technology, Nagpur, India. I was a member of the ACM Chapter and the Google student developer club of my college. I am passionate about automation.

Project Summary

This is a GSoC 2018 project.

This project aims to develop a pull request job plugin. Users should be able to configure the job type using a YAML file placed in the root directory of the Git repository that is the subject of the pull request. The plugin should interact with various platforms like Bitbucket, GitHub, GitLab, etc. whenever a pull request is created or updated.

The plugin detects the presence of certain types of reports at conventional locations and publishes them automatically. If a report is not present at its conventional location, its location can be configured in the YAML file.

Project Repository

Code changes

All the pull requests made can be found here

List of major pull requests.

Phase 1
  • PR-5: Git wrappers added: clone, pull, checkout, pullChangesOfPullrequest, deleteBranch and merge.

  • PR-6: Yaml to Declarative Pipeline code generation.

Phase 2
  • PR-11: Implemented StepConfigurator using Jenkins configuration as code plugin.

  • PR-19: Unit tests created for agent and yaml to pipeline generation.

Phase 3
  • PR-25: Declarative Pipeline code generator moved to extension points for extensibility and support of custom sections.

Jenkinsfile.yaml example

Documentation of Jenkinsfile.yaml and the YAML format can be found here

Tasks completed in Coding Phase 3

  1. Add unit tests, JenkinsRule tests JENKINS-52495

  2. Refactor snippet generator to extensions (JENKINS-52491)

  3. Plugin overview (Present in README.md)

Future tasks

  1. Release 1.0 (JENKINS-52519)

  2. Support the “when” Declarative Pipeline directive (JENKINS-52520)

  3. Nice2have: Support hierarchical report types (JENKINS-52521)

  4. Acceptance Test Harness tests JENKINS-52496

  5. Automatic Workspace Cleanup when PR is closed (JENKINS-51897)

  6. Test Multi-Branch Pipeline features support:

    1. Support for webhooks (JENKINS-51941)

    2. Check if trusted people have approved a pull request and start build accordingly (JENKINS-52517)

  7. Finalize documentation (JENKINS-52518)

  8. Test the integration with various platforms: Bitbucket, GitLab, GitHub.

Phase 3 evaluation presentation video

Phase 3 evaluation presentation slides

My GSoC experience

Student applications started on March 12, 16:00 UTC and ended on March 27, 16:00 UTC. The application period allowed me to explore many new technologies and platforms that are making people's lives easier.

Before the start of the application period I did not know anything about Jenkins. I found the Jenkins organisation on the GSoC organisations page and came to know that it is a CI/CD platform that is used to automate various things related to software development. I studied Jenkins online and went through the problem statements provided by some mentors.

I decided to work on the Simple Pull-Request Job Plugin project. I then wrote a draft proposal for it and received many comments from the mentors on how to refactor the proposal and enhance its quality, and finally I submitted my final proposal to Google.

I was able to complete most of the tasks planned for Phases 1 and 2. After Phase 2 I was not able to give as much time to the project because of the placement season at my college. I modified the code so that other plugin developers can contribute to it through Jenkins extension points.

All the mentors made themselves available for most of the weekly calls and provided many valuable suggestions during the entire period of GSoC. Sometimes I was not able to communicate effectively. As communication is key while working remotely, the mentors suggested communicating more through Gitter chat.

My overall experience of GSoC was good, and all the mentors helped me as much as they could at all times. This project allowed me to explore Jenkins and the services it offers. I am able to keep working on the project after GSoC ends (which is a good thing).

Jenkins 2.121.3 and 2.138 security updates


We just released security updates to Jenkins, versions 2.138 and 2.121.3, that fix multiple security vulnerabilities.

For an overview of what was fixed, see the security advisory. For an overview on the possible impact of these changes on upgrading Jenkins LTS, see our LTS upgrade guide.

Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security.

DevOps World | Jenkins World 2018 is Almost Here


DevOps World | Jenkins World 2018 in San Francisco is only a month away. It is shaping up to be a great event including the Contributor Summit, the "Ask the Experts" desk at the Jenkins booth, several days of training and certifications, and tons of informative presentations and demos.

To give you a taste of what you’ll see this year at DevOps World | Jenkins World 2018, we’ve lined up a series of guest blog posts by a number of this year’s speakers, starting in the next week with posts from Tracy Miranda, Brent Laster, and Nicolas de Loof. For now, let’s take a look at last year’s keynote from Kohsuke Kawaguchi.

Stay tuned!

Join the Jenkins project at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Code Coverage API plugin: 1.0 Release


I am happy to announce the availability of the Code Coverage API plugins. They have recently been released as 1.0, and they are now available in the Jenkins Update Center. In this blog post I will introduce the features and project structure of the Code Coverage API plugin.

My name is Shenyu Zheng, and I am an undergraduate student in Computer Science and Technology at Henan University in China.

Overview

The Code Coverage API plugin is one of the GSoC 2018 Jenkins projects.

There are a lot of plugins which currently implement code coverage; however, they all use similar config, charts, and content. So it would be much better if we could have an API plugin which does the most repeated work for those plugins and offers a unified API which can be consumed by other plugins and external tools.

Supported Coverage Formats

Features

  • Modernized coverage chart

  • Coverage trend

  • Source code navigation

  • Parallel pipeline support

  • Reports combining

  • REST API

  • Failed conditions and flexible threshold setting

  • Other small features

Modernized Coverage Chart

In the summary chart, we can see the coverage summary for the current coverage metrics.

In the child summary chart, we can see the coverage summary of each child. We can also use the range handler to filter the items we want to see and reduce the chart size. If we want to see the coverage details of a child, we can click the child's name for more information.

Coverage Trend

We also support a coverage trend chart that shows how coverage metrics change between builds.

Source Code Navigation

You can enable source code navigation by setting the Source File Storing Level to save the last build's source files (enabling source file navigation in the current and last build) or to save all builds' source files (enabling source file navigation in all builds).
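In a Pipeline job, the same option can be passed to publishCoverage via the sourceFileResolver parameter (a sketch, assuming the option the plugin exposes for this; verify the exact names against your installed version):

node {
    // store the last build's source files so the file-level coverage page
    // can show annotated sources for the current and previous build
    publishCoverage adapters: [jacoco('target/site/jacoco/jacoco.xml')],
                    sourceFileResolver: sourceFiles('STORE_LAST_BUILD')
}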

You can see the source file with coverage information on the file-level coverage page.

Parallel Pipeline Support

We support parallel Pipelines. You can call the Code Coverage API plugin in different branches like this:

node {
    parallel firstBranch: {
        publishCoverage adapters: [jacoco('target/site/jacoco/jacoco.xml')]
    }, secondBranch: {
        publishCoverage adapters: [jacoco('jacoco.xml')]
    }
}

Reports Combining

You can add a tag on publishCoverage, and the Code Coverage API plugin will combine reports that have the same tag:

node {
    parallel firstBranch: {
        publishCoverage adapters: [jacoco('target/site/jacoco/jacoco.xml')], tag: 't'
    }, secondBranch: {
        publishCoverage adapters: [jacoco('jacoco.xml')], tag: 't'
    }
}

REST API

We provide a REST API to retrieve coverage data:

  • Coverage result: …​/{buildNumber}/coverage/…​/result/api/\{json|xml\}

  • Trend result: …​/{buildNumber}/coverage/…​/trend/api/\{json|xml\}

  • Coverage result of last build: …​/{buildNumber}/coverage/…​/last/result/api/\{json|xml\}

  • Trend result of last build: …​/{buildNumber}/coverage/…​/last/trend/api/\{json|xml\}

Failed Conditions and Flexible Threshold Setting

You can set different failure conditions and thresholds to control the build result.

If the thresholds are not met and the failure conditions are satisfied, the build will fail.
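The same can be configured from a Pipeline via the globalThresholds parameter (a sketch; the threshold values are only examples, and the exact parameter names should be checked against the plugin documentation):

node {
    // mark the build unstable below 70% line coverage, and fail it when the
    // "unhealthy" threshold of 50% is not met (illustrative values only)
    publishCoverage adapters: [jacoco('target/site/jacoco/jacoco.xml')],
                    globalThresholds: [[thresholdTarget: 'Line',
                                        unstableThreshold: 70.0,
                                        unhealthyThreshold: 50.0,
                                        failUnhealthy: true]]
}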

Other Small Features

We also have other small features like auto detecting reports, coverage filters, etc. You can find more information about these features in the plugin documentation.

Architecture

This API plugin will mainly do these things:

  • Find coverage reports according to the user’s config.

  • Use adapters to convert reports into our standard format.

  • Parse standard format reports, and aggregate them.

  • Show parsed result in a chart.

So, we can implement code coverage publishing by simply writing an adapter, and such an adapter only needs to do one thing - convert a coverage report into the standard format. The implementation is based on extension points, so new adapters can be created in separate plugins. In order to simplify conversion for XML reports, there is also an abstraction layer which allows creating XSLT-based converters.

The diagram below shows the architecture of the Code Coverage API plugin.

architecture

Implementing a New Coverage Plugin

We can implement a coverage plugin by implementing the CoverageReportAdapter extension point. For example, by using the provided abstraction layer, we can implement a JaCoCo adapter simply like this:

public final class JacocoReportAdapter extends JavaXMLCoverageReportAdapter {

    @DataBoundConstructor
    public JacocoReportAdapter(String path) {
        super(path);
    }

    /** {@inheritDoc} */
    @Override
    public String getXSL() {
        return "jacoco-to-standard.xsl";
    }

    /** {@inheritDoc} */
    @Override
    public String getXSD() {
        return null;
    }

    @Symbol("jacoco")
    @Extension
    public static final class JacocoReportAdapterDescriptor extends JavaCoverageReportAdapterDescriptor {

        public JacocoReportAdapterDescriptor() {
            super(JacocoReportAdapter.class);
        }

        @Nonnull
        @Override
        public String getDisplayName() {
            return Messages.JacocoReportAdapter_displayName();
        }
    }
}

All we need to do is extend the abstraction layer for XML-based Java reports and provide an XSL file to convert the report into our standard format. There are also other extension points which are under development.

If you want to implement a new coverage format for which we do not provide an abstraction layer, you need to register `CoverageElement`s and implement a simple parser. See the llvm-cov plugin for more details.

Future Tasks

Phase 3 Presentation Slides

Phase 3 Presentation Video


Using the Docker Global Variable in Your Jenkins Pipeline

This is a guest post by Brent Laster, DevOps World | Jenkins World 2018 speaker and author of "Jenkins 2 – Up and Running: Evolve Your Pipeline for Next-Generation Automation".

DevOps World | Jenkins World 2018

More and more today, continuous delivery (CD) pipelines are making use of containers. In many implementations, the primary workflow/orchestration tool for CD pipelines is Jenkins. And the primary container orchestration tool is Docker. Together these two applications provide a powerful, yet simple to understand and use, model for leveraging containers in your CD pipeline.

When creating a pipeline script in Jenkins, there are multiple ways to incorporate Docker into your CD pipeline. They include:

  • Manually running a predefined Docker image as a separate Jenkins agent

  • Automatically provisioning a Docker image, when needed, as a part of a “cloud” configuration

  • Referencing a “docker” global variable that can be invoked via the Jenkins DSL

  • Calling the Docker executable directly via a shell call in the Jenkins DSL

For this article, we’ll focus on the third item in this list given that it provides the most flexibility and convenience for Docker use in the pipeline. More details on the other three can be found in the upcoming “Continuous Delivery and Containerization” workshop at Jenkins World/DevOps World 2018.

First, we’ll provide some background on a couple of terms for those who may not be familiar with Jenkins 2. If you already are familiar with it, feel free to skip ahead to the Global Variables section.

Background

When we talk about Jenkins here, we’re referring to “Jenkins 2” - a name we use to generally refer to the 2.0 and beyond versions of Jenkins. Jenkins 2 offers a powerful evolution of Jenkins over prior versions. In particular, it provides full integration for “pipeline-as-code” (PAC). PAC refers to being able to write your pipeline in a scripting language, much like source code for any program. The code you write becomes the program that defines your pipeline. It is also the code that gets executed when your pipeline is initiated. Listing 1 shows a simple example pipeline. Notice that this is very different from the classic way of creating pipelines in Jenkins. Here you are writing code - rather than the more traditional approaches, such as filling in web forms to configure a Freestyle job.

Jenkinsfile (Scripted Pipeline)
node('worker') {
    stage('Source') { // Get code
        // Get code from our git repository
        git 'git@diyvb2:/home/git/repositories/workshop.git'
    }
    stage('Compile') { // Compile and do unit testing
        // Run gradle to execute compile and unit testing
        sh "gradle clean compileJava test"
    }
}

Listing 1: Example Jenkins 2 pipeline

The language that we write the Jenkins pipeline code in is a Domain-Specific Language (DSL). You can think of it as the “programming language” for Jenkins pipelines. There are two variants of it. The style we saw in listing 1 is called “scripted syntax”. It is a mixture of elements from the Groovy programming language and special Jenkins “steps”. The Jenkins steps are provided by the plugins that are installed in the current system. A built-in tool called the Snippet Generator provides a wizard interface to allow users to pick the step and options they want. Then, the user can click on a button to have Jenkins automatically generate the correct DSL code in the large text box (figure 1). The DSL code can be copied from there and pasted into the pipeline script.

The Snippet Generator
Figure 1. The Snippet Generator

A second type of syntax is called “declarative syntax.” We won’t go into detail on it here. But it is a much more structured syntax that focuses on having users declare what they want in a pipeline, rather than writing the logic to make it happen.

Global Variables

In addition to the steps that are provided by plugins, additional functionality for pipelines can be provided by global variables. The simplest way to think of a global variable is as an object with methods that can be invoked on it. Several of these are built in to Jenkins, such as the Docker global variable. Others can be created by users as part of the structure of a shared source code repository called a “shared pipeline library.”
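For example, here is a minimal sketch of a user-defined global variable in a shared pipeline library (the library and step names below are hypothetical):

// vars/buildGradle.groovy in a shared pipeline library (hypothetical name).
// Once the library is loaded, pipelines can call buildGradle() like any
// built-in step or global variable.
def call(String task = 'build') {
    sh "gradle clean ${task}"
}

A pipeline that loads the library, for instance with @Library('my-shared-library') _, can then simply call buildGradle('test').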

To get a list of the global variables that are currently available to your Jenkins instance, you can go to the Snippet Generator screen. Immediately below the box for the generated pipeline script is a section titled Global Variables. There, within the small print, is a link to get to the actual section (figure 2).

Link to Global Variables Reference section
Figure 2. Link to Global Variables Reference section.

Clicking on that link takes us to a list of currently available Global Variables. If you have the Docker Pipeline Plugin installed, you will see one at the top for Docker. (Figure 3).

Docker global variable specifics
Figure 3. Docker global variable specifics.

Broadly, the docker global variable includes methods that can be applied to the Docker application, Docker images, and Docker containers.

We’ll focus first on a couple of the Docker image methods as shown in figure 4.

Key methods for getting a Docker image
Figure 4. Key methods for getting a Docker image.

There are multiple ways you can use these methods to create a new image. Listing 2 shows a basic example of assigning and pulling an image using the image method.

myImage = docker.image("bclaster/jenkins-node:1.0")
myImage.pull()

Listing 2: Assigning an image to a variable and pulling it down.

This can also be done in a single statement as shown in listing 3.

docker.image("bclaster/jenkins-node:1.0").pull()

Listing 3: Shorthand version of previous call.

You can also download a Dockerfile and build an image based on it. (See listing 4.)

node() {
    def myImg
    stage ("Build image") {
        // download the dockerfile to build from
        git 'git@diyvb:repos/dockerResources.git'
        // build our docker image
        myImg = docker.build 'my-image:snapshot'
    }
}

Listing 4: Pipeline code to download a Dockerfile and build an image from it.

Figure 5 shows the actual output from running that “Build image” stage. Note that the docker.build step was translated into an actual Docker build command.

Actual Docker output from running the download and build
Figure 5. Actual Docker output from running the download and build

The Inside Command

Another powerful method available for the Docker global variable is the inside method. When executed, this method will do the following:

  1. Get an agent and a workspace to execute on

  2. If the Docker image is not already present, pull it down

  3. Start the container with that image

  4. Mount the workspace from Jenkins

  5. Execute the build steps

Mounting the workspace means that the Jenkins workspace will appear as a volume inside the container. And it will have the same file path. So, things running in the container will have direct access to the same location. However, this can only be done if the container is running on the same underlying system - such that it can directly access the path.

In terms of executing the build steps, the inside method acts as a scoping method. This means that the environment it sets up is in effect for any statement that happens within its scope (within the block under it bounded by {}). The practical application here is that any pipeline “sh” steps (a call to the shell to execute something) are automatically run in the container. Behind the scenes, this is done by wrapping the calls with “docker exec”.

When executed, the calls with the global variable are translated (by Jenkins) into actual Docker call invocations. Listing 5 shows an example of using this in a script, along with the output from the first invocation of the “inside” method. You can see in the output the docker commands that are generated from the inside method call.

    stage ("Get Source") {
        // run a command to get the source code download
        myImg.inside('-v /home/git/repos:/home/git/repos') {
            sh "rm -rf gradle-greetings"
            sh "git clone --branch test /home/git/repos/gradle-greetings.git"
        }
    }
    stage ("Run Build") {
        myImg.inside() {
            sh "cd gradle-greetings && gradle -g /tmp clean build -x test"
        }
    }

Listing 5: Example inside method usage.

Example inside method Docker command output
Figure 6. Example inside method Docker command output.

Once completed, the inside step will stop the container, get rid of the storage, and create a record that this image was used for the build. That record facilitates image traceability, updates, etc.

As you can see, the combination of the Docker “global variable” and its “inside” method provides a simple and powerful way to spin up and work with containers in your pipeline. In addition, since you do not have to make the Docker calls directly, you can invoke steps like sh within the scope of the inside method and have them executed by Docker transparently.

As we mentioned, this is only one of several ways you can interact with Docker in your pipeline code. To learn about the other methods and get hands-on practice, join me at DevOps World/Jenkins World in San Francisco or Nice for the workshop "Creating a Deployment Pipeline with Jenkins 2". Hope to see you there!

Join the Jenkins project at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Join us at the Jenkins Contributor Summit Nice, Tuesday 23 October 2018

devops world cs Nice

The Jenkins Contributor summit is where the current and future contributors of the Jenkins project get together. This summit will be on Tuesday, October 23rd 2018 in Nice, France, just before Jenkins World. The summit brings together community members to learn, meet and help shape the future of Jenkins. In the Jenkins community we value all types and sizes of contributions and love to welcome new participants. It is free to join, just register here.

Topics

There are plenty of exciting developments happening in the Jenkins community. The summit will feature a 'State of the Project' update including updates from the Jenkins officers. We will also have updates on the 'Big 5' projects in active development:

Plus we will feature a Google Summer of Code update, Special Interest Group updates and more!

Agenda

The agenda is shaping up well and here is the outline so far.

  • 9:00am Kickoff & Welcome with coffee/pastries

  • 10:00am Project Updates

  • 12:00pm Lunch

  • 1:00pm BoF/Unconference

  • 3:00pm Break

  • 3:30pm Ignite Talks

  • 5:00pm Wrap-up

  • 6:00pm Contributor Dinner

The BoF (birds-of-a-feather) session will be an opportunity for in depth discussions, hacking or learning more about any of the big 5. Bring your laptop, come prepared with questions and ideas, and be ready for some hacking too if you want. Join in, hear the latest and get involved in any project during the BoF sessions. If you want to share anything there will be an opportunity to do a 5-min ignite talk at the end. Attending is free, and no DevOps World | Jenkins World ticket is needed, but RSVP if you are going to attend to help us plan. See you there!

Jenkins Configuration-as-Code: Look ma, no hands

This blog post is part 1 of a Configuration-as-Code series

Jenkins is highly flexible and is today the de facto standard for implementing CI/CD, with an active community to maintain plugins for almost any combination of tools and use-cases. But flexibility has a cost: in addition to Jenkins core, many plugins require some system-level configuration to be set so they can do their job.

In some circumstances, "Jenkins Administrator" is a full time position. One person is responsible for both maintaining the infrastructure and pampering a huge Jenkins master with hundreds of installed plugins and thousands of hosted jobs. Maintaining up-to-date plugin versions is a challenge and failover is a nightmare.

This is like years ago when system administrators had to manage dedicated machines per service. In 2018, everything is managed as code using infrastructure automation tools and virtualization. Need a fresh new application server as staging environment for your application? Just deploy a Docker container. Infrastructure is missing resources? Apply a Terraform recipe to allocate more on your favourite Cloud.

What about the Jenkins administrator role in this context? Should they still spend hours in the web UI, clicking checkboxes on web forms? Maybe they already adopted some automation, relying on Groovy script voodoo, or some home-made XML templating?

Early this year we announced the first alpha release of “Jenkins Configuration-as-Code” (JCasC), a fresh new approach to Jenkins configuration management, based on YAML configuration files and automatic model discovery. “JCasC” has been promoted to a top-level Jenkins project, and the corresponding Jenkins Enhancement Proposal has been accepted.

What can JCasC do for our Jenkins Administrator?

JCasC allows us to apply a set of YAML files to our Jenkins master at startup or on-demand via the web UI. Those configuration files are very concise and human readable compared to the verbose XML files Jenkins uses to actually store configuration. The files also have user-friendly naming conventions, making it easy for administrators to configure all Jenkins components.

Here’s an example:

jenkins:
  systemMessage: "Jenkins managed by Configuration as Code"
  securityRealm:
    ldap:
      configurations:
        - server: ldap.acme.com
          rootDN: dc=acme,dc=fr
          managerPasswordSecret: ${LDAP_PASSWORD}
      cache:
        size: 100
        ttl: 10
      userIdStrategy: CaseInsensitive
      groupIdStrategy: CaseSensitive

As you can see, you don’t need a long explanation to understand how this YAML file will set up your Jenkins master.

Benefits

The most immediate benefit of JCasC is reproducibility. An administrator can now bootstrap a new Jenkins master with the exact same configuration with a trivial setup. This allows them to create a test instance and check the impact of plugin upgrades in a sandboxed environment. This also lets them be more confident with failover and disaster recovery scenarios.

Further benefits come when administrators start managing their Jenkins’ YAML configuration files in source control, like they do with Terraform configuration. Doing so gives them auditing and reversibility of their Jenkins master configuration. They can establish a sane configuration change workflow that runs a test Jenkins instance and ensures configuration is healthy before actually applying any change to their production Jenkins master.

Last but not least, with the ability to quickly set up Jenkins masters and control them from a set of shared YAML configuration files, administrators can now offer per-team Jenkins instances, with more flexibility on installed plugins. A master becomes more or less a transient piece of infrastructure for your team, as long as build definitions are also managed with Jenkinsfiles.

With Configuration-as-Code we can stop treating our Jenkins master like a pet we need to pamper, and start managing Jenkins masters as cattle we can replace without effort or impact. Welcome to the “as-code” world.

Cattle not pets
Figure 1. They are still cute though, right?

Ok, so what’s next?

You can read more about the Jenkins Configuration-as-Code plugin on the project’s GitHub repository. To chat with the community and contributors, join our Gitter channel, or come see us in person at Jenkins World to discuss the JCasC project and its future!

Also don’t miss the next post in the Configuration-as-Code series, where we’ll look at how JCasC works with sensitive data like passwords and other credentials.

Come meet the Configuration as Code contributors, Nicolas de Loof and Ewelina Wilkosz, at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Day of Jenkins, and other chances to meet JCasC


The Jenkins Configuration as Code plugin is reaching a stage where it is almost ready to be used in a production environment. As a matter of fact, I know some living-on-the-edge users are already doing that. The first release candidates are out and the official 1.0 is just around the corner.

I’d like to use this chance to invite you to meet us and contribute to the plugin. There will be plenty of opportunities this autumn.

Jenkins Configuration as Code (also called "JCasC") is a Jenkins plugin that allows you to store and maintain all your Jenkins configuration in a YAML file. It’s like Pipeline or Job DSL, but for managing Jenkins.

In one of my blog posts, Jenkins Configuration as Code - Automating an automation server, I provide a longer explanation of the plugin and answer questions like “why did we decide to develop it?” and “why might you want to use it?”. I recommend reading that one if you’re not familiar with the project yet.

The plugin has been presented at a number of meetups - by me but also by other contributors. This is the first open source project that I’ve actively participated in, and I’m quite shocked - positively - by how many people decided to join the effort and actively develop the plugin with us. Now it’s time to take it to a bigger stage and a broader audience. So together with Nicolas de Loof, I’m going to present the plugin at DevOps World | Jenkins World in San Francisco (19th of September) and in Nice (24th of October) - yes, Jenkins World is coming to Europe.

But that’s not all! Praqma - the company I work for - has organised a number of “Day of Jenkins” events around Scandinavia in past years. This October they have decided to bring the events back with a theme: the 2018 Day of Jenkins is Day of Jenkins [as code]. It’s a two-track, one-day event - presentations and hands-on sessions for users, and a hackathon for contributors - in this specific case, Configuration as Code plugin contributors.

The detailed agenda is available on the event page - Jenkins X, Jenkins Evergreen, Jenkins Configuration as Code and more are waiting for you!

I really can’t wait to hear what Kohsuke has to say, and to introduce you to the plugin during the hands-on session I’ll run.

Hope to see you at least at one of those events!

Come meet the Configuration as Code contributors, Nicolas de Loof and Ewelina Wilkosz, at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Effectively using Kubernetes plugin with Jenkins

This is a guest blog by Niklas Tanskanen, consultant at Eficode.

Kubernetes, the container orchestration platform, is rapidly becoming popular. There are more and more workloads that you can run on top of Kubernetes. It’s becoming an enabling layer of your hyper-converged infrastructure.

If you set up Kubernetes as a cloud provider in Jenkins, you get a very powerful combination for running your workloads. To do that, you can simply install the Kubernetes plugin. Kubernetes is able to run your Jenkins workloads as long as they run in containers. And containers are a great fit if your workload is a build, because you can pack all your application and OS dependencies in a container and then run it anywhere!

Let’s imagine that you have been running a Kubernetes cluster in your organisation for a while now. First it was all about proof of concept, but now it’s becoming more popular among your developers and you have to think about scaling and orchestration. Resource quotas are a part of that, and every responsible operator should set them up in both development and production clusters. Otherwise people will be lazy and just reserve all the resources of your cluster without actually using them for anything. By introducing quotas into your cluster, you can control how many resources each namespace should have.

Quotas are already a mature feature of Kubernetes. You can create very fine-grained quotas for different hardware resources, whether it’s fast disk, GPUs or CPU time. You can also specify multiple quota scopes per namespace. For example, you can have a quota for workloads that run indefinitely, like web servers or databases, and another quota for workloads that are short-lived, like builds or test automation runs.

Table 1. Different scopes of Kubernetes quota

  Scope           Description
  Terminating     Match pods where .spec.activeDeadlineSeconds >= 0
  NotTerminating  Match pods where .spec.activeDeadlineSeconds is nil
  BestEffort      Match pods that have best effort quality of service.
  NotBestEffort   Match pods that do not have best effort quality of service.

Since Jenkins is all about running short workloads, you should aim for the Terminating scope of quota. But how do you specify workloads in Jenkins so that the correct scope is used?

If you were to do this directly in Kubernetes, you would have to specify .spec.activeDeadlineSeconds. The same field can also be set by the Kubernetes plugin when you are specifying a Pod Template.

time deadline
Figure 1. Specifying .spec.activeDeadlineSeconds in the Kubernetes plugin

The same configuration is available in the Jenkinsfile as well, if you don’t like static configurations:

podTemplate(label: 'maven', activeDeadlineSeconds: 180, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.5.4-jdk-10-slim')
  ]) {
    // maven magic
}
DevOps World | Jenkins World 2018

This was just a small sample of the features of the Kubernetes plugin in Jenkins. For more, be sure to check out our talk, where we share more of how you can utilise Kubernetes with Jenkins!

Come see Niklas Tanskanen and many other Jenkins experts and contributors at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Jenkins: Shifting Gears


Kohsuke here. This is a message for my fellow Jenkins developers.

Jenkins has been on an amazing run, but I believe we are trapped in a local optimum, and losing appeal to people who fall outside of our traditional sweet spot. We need to take on new efforts to solve this. One is “cloud native Jenkins” that creates a flavor of Jenkins that runs well on Kubernetes. The other is “gear shift”, where we take an evolutionary line from the current Jenkins 2, but with breaking changes in order to gain higher development speed.

I say it’s time we tackle these problems head on. I’ve been talking to various folks, and I think we need to take on two initiatives. One is what I call "Cloud Native Jenkins," and the other is to insert a jolt in Jenkins.

Some of you have already seen the presentation I posted on the Jenkins YouTube channel. In this post, I’ll expand on that with some additional details.

Jenkins: Shifting Gears Presentation (Slides)

Come hear more in Kohsuke’s keynote at Jenkins World on September 16-19th, register with the code JWFOSS for a 30% discount off your pass.

Our Amazing Success

Our project has been an amazing success over the past 10+ years, thanks to you all. What started as my hobby project became a huge community that boasts thousands of contributors and millions of users. When I think about what enabled this amazing journey, I can think of several magic sauces:

  • Extensible: the ability to take the system, or a portion of the system, then build on top of it to achieve what you need, without anyone else’s permission. Here, I’m not talking about the specific technical mechanism of Guice, extension point, etc, but rather I’m talking more broadly about the governance, culture, distribution mechanism, and so on.

  • General purpose: At the base level, Jenkins can be used for any kind of automation around the area of software development. This matched the reality of the software engineering world well. Combined with extensibility, this general purpose system that is Jenkins can specialize into any domain, much like Linux and JetBrains IDEs.

  • Community: Together we created a community where different people push envelopes in different directions and share the fruits with others. This meant everyone can benefit from somebody else’s work, and great ideas and best practices spread more quickly.

DevOps World | Jenkins World 2018

Our Challenges

The way we set up our community meant that collectively we were able to work toward solving certain kinds of problems locally and organically, such as Android application development, new UX, a more expressive pipeline description language, and so on.

But at the same time, the incremental, autonomous nature of our community made us demonstrably unable to solve certain kinds of problems. And after 10+ years, these unsolved problems are getting more pronounced, and they are taking a toll — segments of users correctly feel that the community doesn’t get them, because we have shown an inability to address some of their greatest difficulties in using Jenkins. And I know some of those problems, such as service instability, matter to all of us.

In a way, we are stuck in a local optimum, and that is a dangerous place to be when there is growing competition from all sides. So we must solve these problems to ensure our continued relevance and popularity in the space.

Solving those problems starts with correctly understanding them, so let’s look at those.

Service Instability

CI/CD service was once a novelty and a nice-to-have. Today, it is very much a mission critical service, in no small part because of us! Increasingly, people are running bigger and bigger workloads, loading up more and more plugins, and expect higher and higher availability.

Admins today are unable to meet that heightened expectation using Jenkins easily enough. A Jenkins instance, especially a large one, requires too much overhead just to keep it running. It’s not unheard of that somebody restarts Jenkins every day.

Admins expect errors to be contained and not impact the entire service. They expect Jenkins to defend itself better from issues such as pipeline execution problems, runaway processes, and excessive resource consumption, so that they don’t have to constantly babysit the service.

Every restart implies degraded service for the software delivery teams where they have to wait longer for their builds to start or complete.

Brittle Configuration

Every Jenkins admin must have been burnt at least once in the past by making changes that have caused unintended side effects. By “changes,” I’m talking about installing/upgrading plugins, tweaking job settings, etc.

As a result, too many admins today aren’t confident that they can make changes safely. They fear that their changes might cause issues for their software delivery teams, that those teams will notice regressions before they do, and that they may not be able to back out some changes easily. It feels like touching a Jenga tower for them, even when a change is small.

Upgrading Jenkins and plugins is an important sub-case of this, where admins often do not understand the impact. This decreases the willingness to upgrade, which in turn makes it difficult for the project to move forward more rapidly, and instead we get trapped with the long tail of compatibility burden.

Assembly Required

I’ve often described Jenkins as a bucket full of LEGO blocks — you can build any car you want, but everyone first has to assemble their own car in order to drive one.

As CI/CD has gone mainstream, this is no longer OK. People want something that works out of the box, something that gets people to productivity within 5 clicks in 5 minutes. Too many choices are confusing users, and we are not helping them toward “the lit path.” Everyone feels uncertain if they are doing the right thing, contributors are spread thin, and the whole thing feels a bit like a Frankenstein.

This is yet another problem we can’t solve by “writing more plugins.”

Reduced Development Velocity

This one is a little different from the others that our users face, but it is nonetheless a very important one, because it impacts our ability to expand and sustain the developer community, and influences how fast we can solve the challenges that our users face.

Some of these problems are not structural but rather just a matter of doing the work (for example, the Java 11 upgrade), but some of the problems here are structural.

I think the following ones are the key ones:

  • As a contributor, making a change that spans multiple plugins is difficult. Tooling gets in the way, users might not upgrade a group of related changes together, and reviewing such changes is hard.

  • As a contributor, the tests that we have do not give me enough confidence to ship code. Not enough of them run automatically, coverage is shallow, and there just isn’t anything like the production workload of real users/customers.

These core problems create other downstream problems, for example:

  • As a non-regular contributor, what I think of as a small and reasonable change takes forever and a hundred comments of back-and-forth to get in. I get discouraged from ever doing it again.

  • As a regular contributor, I feel people are throwing crap over the wall, and if they cause problems after a release, I’m on the hook to clean up that mess.

  • As a user, I get a half-baked change that wreaks havoc, which results in a loss of confidence in Jenkins, an even slower pace of change, etc. This is a vicious cycle, as it makes us even more conservative and slows down the development velocity.

Path Forward

In the past, my frustration and regret was that we couldn’t take on an effort of this magnitude. But that is NO MORE! As CTO of CloudBees, I’m excited that these challenges are now important enough to CloudBees that we want to tackle them within the Jenkins project.

I’ve been talking to many of you, and there are a number of existing efforts going on that touch this space already. From there, the vision that emerged is that we organize around two key efforts:

  • Cloud Native Jenkins: a general purpose CI/CD engine that runs on Kubernetes, and embraces a fundamentally different architecture and extensibility mechanism.

  • Jolt in Jenkins: continue the incremental trajectory of Jenkins 2 today, but with a renegotiated “contract” with users to gain what we really need, such as a faster pace of development and better stability.

Cloud Native Jenkins

In order to solve these problems that we can’t solve incrementally, I’m proposing the “Cloud Native Jenkins” sub-project in the context of the Cloud Native SIG with Carlos, who is the leader of this SIG.

We don’t have all the answers; that’s something we’ll discuss and figure out collectively. But based on numerous conversations with various folks, I think many pieces of the puzzle are already clear.

Kubernetes as the Runtime

Just like Java was the winning server application platform in the early 2000s, today, Kubernetes is the dominant, winning platform. Cloud Native Jenkins should embrace the paradigm this new platform encourages. For example,

  • Serverless / function-as-a-service build execution (à la Jenkinsfile Runner) that is isolated.

  • Various pieces of functionality deployed as separate microservices.

  • Services interacting through Kubernetes CRDs in order to promote better reuse and composability.

These are the design principles that enable highly desirable properties like infinite scalability, a pay-as-you-go cost model, immutability, zero-downtime operability, etc.

New Extensibility Mechanism

We need to introduce a new mechanism of extensibility in order to retain the magic sauces, and continue our incredible ecosystem.

For example, microservice- or container-based extensibility avoids the service instability problem (à la the Knative builder and the userspace-scm work). Pipeline shared libraries are another example that concretely shows how an extensibility mechanism can go beyond plugins, though they haven’t fully flourished as one just yet.
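
As a small hedged sketch of what a shared library looks like in practice (the library name and the buildAndPublish step below are hypothetical), a team can publish a reusable Pipeline step without shipping a plugin:

// Hypothetical shared library named 'my-shared-lib', configured under Manage Jenkins;
// buildAndPublish is an example step such a library could define in vars/buildAndPublish.groovy.
@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                buildAndPublish()
            }
        }
    }
}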

Data on Cloud Managed Data Services

The long-term data storage must be moved from the file system to data services backed by cloud managed services, in order to achieve high availability and horizontal scalability, without burdening admins with additional operational responsibilities.

Configuration as Code

Jenkins Configuration as Code has been incredibly well received, in part because it helps to solve some of the brittle configuration problems. In Cloud Native Jenkins, JCasC must play a more central role, which in turn also helps us reduce the surface area for Blue Ocean to cover by eliminating many configuration screens.

Evergreen

Jenkins Evergreen is another well received effort that’s already underway, which aims to solve the brittleness problem and developer velocity problem. This is a key piece of the puzzle that allows us to move faster without throwing users under the bus.

Secure by Default Design

Over the past years, we’ve learned that several different areas of the Jenkins codebase, such as Remoting, are inherently prone to security vulnerabilities because of their design. Cloud Native Jenkins must address those problems by flipping those areas to “secure by design.”

Following Footsteps of Jenkins X

Jenkins X has been pioneering the use of Jenkins on Kubernetes for a while now, and it has been very well received, too. So naturally, part of the aim of Cloud Native Jenkins is to grow and morph Jenkins into a shape that really works well for Jenkins X. Cloud Native Jenkins will be the general purpose CI/CD engine that runs on Kubernetes, which Jenkins X uses to create an opinionated CD experience for developing cloud native apps.

All The Same Good Things, with New Foundation

And then on top of these foundations, we need to rebuild or transplant all the good things that people love about Jenkins today, and all the good things people expect, such as:

  • Great “batteries included” onboarding experience for new users, where we are present in all the marketplaces, with 5 clicks to get going and easy integration with key services.

  • Modern lovable UX in the direction of front-end web apps that Blue Ocean pioneered.

  • General purpose software that is useful for all sorts of software development.

Cloud Native Jenkins MVP

As I wrote, a number of good efforts are already ongoing today. Thus in order to get this effort off the ground, I believe the first MVP that we aim toward is pretty clear, which is to build a function-as-a-service style Jenkins build engine that can be used underneath Jenkins X.

The Cloud Native Jenkins MVP combines the spirit of Jenkins Pipeline, Jenkins Evergreen, Jenkinsfile Runner, and Jenkins Configuration as Code. It consists of:

  • Webhook receiver: a service that receives webhooks from GitHub and triggers a build engine.

  • Build Engine: take Jenkinsfile Runner and evolve it so that it can run as a “function” that carries out a pipeline execution, with some CasC sprinkled in to control the Jenkins configuration and the plugins used. This way, a Jenkinsfile works as-is for the most part (see the sketch after this list).

  • Continuously delivered through Evergreen: this allows us to solve the combinatorial version explosion problem, to develop changes that span multiple plugins faster, and to develop changes more confidently. Of all the projects out there, ours should be the community that believes in the value of Continuous Delivery, and Evergreen is how we bring continuous delivery to the development of Cloud Native Jenkins itself.
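
A minimal sketch of the kind of Jenkinsfile that should run as-is on such a build engine (the registry URL and image name below are placeholders):

// Placeholder Jenkinsfile: the only durable outputs of the build are the pushed image
// and the commit status, matching the MVP's lack of persistent build records.
pipeline {
    agent any
    stages {
        stage('Build and Push') {
            steps {
                sh 'docker build -t registry.example.com/myapp:latest .'
                sh 'docker push registry.example.com/myapp:latest'
            }
        }
    }
}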

This solves some of the key challenges listed above that are really hard to achieve today, so it’s already incredibly useful.

The catch is that this MVP has no GUI. There’s no Blue Ocean UI to look at. No parsing of test reports, no build history. It uses no persistent volumes and keeps no record of builds. The only thing permanent at the end of a build is whatever data is pushed out from Jenkins Pipeline, such as images pushed to a Docker registry, email notifications, and GitHub commit status updates. Loads of other features in Jenkins will not be available here.

This is not that far from how some sophisticated users are deploying Jenkins today. All in all, I think this is the right trade off for the first MVP. As you can see, we have most of the pieces already.

From here, the build engine will get continuously more polished and more cloud native, other services will get added to regain features that were lost, new extensibility will get introduced to reduce the role of current in-VM plugins, and so on.

Jolt in Jenkins

Cloud Native Jenkins is a major effort, and initially it is not usable for everyone; it only targets a subset of Jenkins functionality, and it requires a platform whose adoption is still limited today. So in parallel, we need to continue the incremental evolution of Jenkins 2, but at an accelerated speed. Said differently, we need to continue to serve the majority of production workloads on Jenkins 2 today, but we are willing to break some stuff to gain what we really need, such as a faster pace of development and better stability, in ways that were previously not possible. This requires us to inject a jolt into Jenkins.

Release Model Change

The kind of jolts that we need will almost certainly mean we need to renegotiate the expectations around new releases with our users. My source of inspiration is what happened to the development of Java SE: it changed its release model and started moving faster, shedding pieces more quickly, in ways that hadn’t been done before. Again, Jenkins Evergreen is the key piece that achieves this without throwing users under a bus, for the reasons I described in the Cloud Native MVP above.

Compatibility

This jolt aims to put us on a different footing, one where our current “forever compatibility” expectation does not hold. If that requires us to use a new major version number, such as Jenkins 3, or a new major version number every N months, I’m open to that.

Of course, whatever move we do has to make sense to users. The accelerated pace of value delivery needs to justify any inconvenience we put on users, such as migration, breaking changes, and so on.

In practice, what that means is that we need to be largely compatible. We have to protect users’ investment in their existing job definitions as much as possible. We continue to run freestyle jobs, and so on.

Ingredients

Other proposals CloudBees is putting forward with the intent to staff the effort are:

  • Configuration as Code: accelerate that and make it a more central part of Jenkins.

  • Developer experience improvements through buildpack style auto-detection of project types.

  • Continued evolution of Jenkins Pipeline

    • There’s an effort going on to remove CPS execution of Pipeline and isolate any failures during pipeline execution.

    • Continue to evolve Jenkins Pipeline toward the sweet spot that works well with the Cloud Native Jenkins effort.

    • Continued tactical bug-by-bug improvements of Pipeline.

  • Evergreen: I already talked about this above.

  • Plugin spring cleaning: let’s actively guide users more toward the sweet spot of Jenkins and reduce our feature surface area, so that we can focus our contributors’ effort on the important parts of Jenkins. I expect this to be a combination of governance and technical efforts.

  • Table-stakes service integration: let’s look at what kind of table-stakes tool/service integrations today’s users need, and see if we are meeting/exceeding the competition. Where we fall short, let’s add/reimplement what is needed.

UI Effort

The Web UI will likely be done differently in Cloud Native Jenkins, as its own app and not a plugin in Jenkins. JCasC will also play a bigger role in Cloud Native Jenkins, reducing the UI surface area of Jenkins.

Given that, CloudBees will reconsider where to spend its effort in Blue Ocean. The current work where parts of Blue Ocean are made reusable as NPM modules is one example that aligns well with this new vision.

Conclusion

This document lays out the key directions and approaches in a broad stroke, which I discussed with a number of you in the past. Hopefully, this gives you the big picture of how I envision where to move Jenkins forward, not just as the creator of Jenkins but as the CTO of CloudBees, who employs a number of key contributors to the Jenkins project.

Come meet Kohsuke and chat with him about the direction of Jenkins at Jenkins World on September 16-19th; register with the code JWFOSS for a 30% discount off your pass.

Scaling Network Connections from the Jenkins Master

DevOps World | Jenkins World 2018

Oleg Nenashev and I will be speaking at DevOps World | Jenkins World in San Francisco this year about Scaling Network Connections from the Jenkins Master. Over the years there have been many efforts to analyze, optimize, and fortify the “Remoting channel” that allows a master to orchestrate agent activity and receive build results. Techniques such as tuning the agent launcher can improve service, but qualitative change can only come from fundamentally reworking what gets transmitted and how.

In March, JENKINS-27035 introduced a framework for inspecting the traffic on a Remoting channel at a high level. Previously, developers could only use generic low-level tools such as Wireshark, which cannot identify the precise piece of Jenkins code responsible for traffic.

Over the past few months, the Cloud Native SIG has been making progress in addressing root causes. The Artifact Manager on S3 plugin has been released and integrated with Jenkins Evergreen, allowing upload and download of large artifacts to happen entirely between the agent and Amazon servers. Prototype plugins allow all build log content generated by an agent (such as in sh steps) to be streamed directly to external storage services such as AWS CloudWatch Logs. Work has also begun on uploading JUnit-format test results, which can sometimes get big, directly from an agent to database storage. All these efforts can reduce the load on the Jenkins master and local network without requiring developers to touch their Pipeline scripts.
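
For illustration, a minimal scripted pipeline like the hedged sketch below (the agent label and paths are placeholders) keeps its usual archiveArtifacts call once the Artifact Manager on S3 plugin is configured; the upload simply bypasses the master:

// Nothing S3-specific appears in the script itself: archiveArtifacts behaves as usual,
// while the artifact bytes flow from the agent to the configured remote store.
node('linux') {
    checkout scm
    sh 'mvn -B -DskipTests package'
    archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
}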

Other approaches are on the horizon. While “one-shot” agents running in fresh VMs or containers greatly improve reproducibility, they suffer from the need to transmit megabytes of Java code for every build, so Jenkins features will need to be built to precache most or all of it. Work is underway to use Apache Kafka to make channels more robust against network failures. Most dramatically, the proposed Cloud Native Jenkins MVP would eliminate the bottleneck of a single Jenkins master service handling hundreds of builds.

Come meet Jesse, Oleg, and other Cloud Native SIG members at Jenkins World on September 16-19th; register with the code JWFOSS for a 30% discount off your pass.


Warnings Plugin 5.0 (White Mountain) Public Beta

This is a guest post by Ullrich Hafner, professor for Software Engineering at the University of Applied Sciences Munich and Jenkins contributor. He will be presenting Static Analysis Plugins - White Mountain Release for Pipelines at DevOps World | Jenkins World 2018.
DevOps World | Jenkins World 2018

Jenkins' Warnings plugin collects compiler warnings or issues reported by static analysis tools and visualizes the results. The plugin (and the associated static analysis plugin suite) has been part of the Jenkins plugin ecosystem for more than ten years now. In order to optimize the user experience and support Pipeline, a major rewrite of the whole set of plugins was necessary. This new version (code name White Mountain) is now available as a public beta. Please download and install this new version and help us to identify problems before the API is sealed.

The new release is available in the experimental update center. It has built-in support for almost one hundred static analysis tools (including several compilers); see the list of supported report formats.

Features overview

The Warnings plugin provides the following features when added as a post build action (or step) to a job:

  1. The plugin scans the console log of a Jenkins build or files in the workspace of your job for any kind of issue. There are almost one hundred report formats supported. Among the problems it can detect:

    • errors from your compiler (C, C#, Java, etc.)

    • warnings from a static analysis tool (CheckStyle, StyleCop, SpotBugs, etc.)

    • duplications from a copy-and-paste detector (CPD, Simian, etc.)

    • vulnerabilities

    • open tasks in comments of your source files

  2. The plugin publishes a report of the issues found in your build, so you can navigate to a summary report from the main build page. From there you can also dive into the details:

    • distribution of new, fixed and outstanding issues

    • distribution of the issues by severity, category, type, module, or package

    • list of all issues including helpful comments from the reporting tool

    • annotated source code of the affected files

    • trend charts of the issues

In the next sections, I’ll show the new and enhanced features in more detail.

One plugin for all tools

Previously, the warnings plugin was part of the static analysis suite that provided the same set of features through several plugins (CheckStyle, PMD, Static Analysis Utilities, Analysis Collector, etc.). In order to simplify the user experience and the development process, these plugins and the core functionality have been merged into the warnings plugin. The other plugins are no longer required and will not be supported in the future. If you currently use one of these plugins, you should migrate to the new recorders and steps as soon as possible. I will still maintain the old code for a while, but the main development effort will be spent on the new code base.

The following plugins have been integrated into the beta version of the warnings plugin:

  • Android-Lint Plugin

  • CheckStyle Plugin

  • CCM Plugin

  • Dry Plugin

  • PMD Plugin

  • FindBugs Plugin

All other plugins still need to be integrated or need to be refactored to use the new API.

New pipeline support

Requirements for using the Warnings plugin in Jenkins Pipeline can be complex and sometimes controversial. In order to be as flexible as possible, I decided to split the main step into two individual parts, which can then be used independently of each other.

Simple pipeline configuration

The simple pipeline configuration is provided by the step recordIssues. This step is automatically derived from the FreeStyle job recorder: it scans for issues in a given set of files (or in the console log) and reports these issues in your build. You can use the snippet generator to create a working snippet that calls this step. A typical invocation is shown in the following example:

recordIssues enabledForFailure: true, tools: [[pattern: '*.log', tool: [$class: 'Java']]], filters: [includeFile('MyFile.*.java'), excludeCategory('WHITESPACE')]

In this example, files matching '*.log' are scanned for Java issues. Only issues with a file name matching the pattern 'MyFile.*.java' are included; issues with the category 'WHITESPACE' are excluded. The step will be executed even if the build failed. The recorded report of warnings will be published under the fixed URL 'https://[your-jenkins]/job/[your-job]/java'. The URL or name of the report can be changed if required.
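
In a declarative pipeline, the same recorder typically goes into a post section so that issues are recorded even for failed builds. The following is a minimal sketch assuming a Maven build, reusing the parameters from the example above:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B compile'
            }
        }
    }
    post {
        always {
            // Same recorder as above, run regardless of the build result
            recordIssues enabledForFailure: true, tools: [[pattern: '*.log', tool: [$class: 'Java']]]
        }
    }
}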

Advanced Pipeline Configuration

Sometimes publishing and reporting issues using a single step is not sufficient. For instance, if you build your product using several parallel steps and want to combine the issues from all of these steps into a single result, then you need to split scanning and aggregation. For this, the plugin provides the following two steps, which are combined by using an intermediate result object:

  • scanForIssues: this step scans a report file or the console log with a particular parser and creates an intermediate report object that contains the report.

  • publishIssues: this step publishes a new report in your build that contains the aggregated results of one or several scanForIssues steps.

You can see the usage of these two steps in the following example:

def java = scanForIssues tool: [$class: 'Java']
def javadoc = scanForIssues tool: [$class: 'JavaDoc']
publishIssues issues: [java, javadoc], filters: [includePackage('io.jenkins.plugins.analysis.*')]

def checkstyle = scanForIssues tool: [$class: 'CheckStyle'], pattern: '**/target/checkstyle-result.xml'
publishIssues issues: [checkstyle]

def pmd = scanForIssues tool: [$class: 'Pmd'], pattern: '**/target/pmd.xml'
publishIssues issues: [pmd]

publishIssues id: 'analysis', name: 'White Mountains Issues', issues: [checkstyle, pmd], filters: [includePackage('io.jenkins.plugins.analysis.*')]

Filtering issues

The created report of issues can be filtered afterwards. You can specify an arbitrary number of include or exclude filters. Currently, there is support for filtering issues by module name, package or namespace name, file name, category or type.

Filtering

An example pipeline that uses such a filter is shown in the following snippet:

recordIssues tools: [[pattern: '*.log', tool: [$class: 'Java']]], filters: [includeFile('MyFile.*.java'), excludeCategory('WHITESPACE')]

Quality gate configuration

You can define several quality gates that will be checked after the issues have been reported. These quality gates let you modify Jenkins' build status so that you immediately see if the desired quality of your product is met. A build can be set to unstable or failed for each of these quality gates. All quality gates use a simple metric: the maximum number of issues that can be found and still pass a given quality gate.

Quality Gate

An example pipeline that enables a quality gate for 10 warnings in total or 1 new warning is shown in the following snippet:

recordIssues tools: [[pattern: '*.log', tool: [$class: 'Java']]], unstableTotalHigh: 10, unstableNewAll: 1

Issues history: new, fixed, and outstanding issues

One highlight of the plugin is the ability to categorize issues of subsequent builds as new, fixed and outstanding.

History

Using this feature makes it a lot easier to keep the quality of your project under control: you can focus only on those warnings that have been introduced recently.

Note: the detection of new warnings is based on a complex algorithm that tries to track the same warning in two different versions of the source code. Depending on the extent of the modifications to the source code, it might produce some false positives, i.e., you might still get some new and fixed warnings even if there should be none. The accuracy of this algorithm is still ongoing research and will be refined in the next couple of months.

Severities

The plugin shows the distribution of the severities of the issues in a chart. It defines the following default severities, but additional ones might be added by plugins that extend the warnings plugin.

  • Error: Indicates an error that typically fails the build

  • Warning (High, Normal, Low): Indicates a warning of the given priority. Mapping to the priorities is up to the individual parsers.

Note that not every parser is capable of producing warnings with different severities. Some of the parsers simply use the same severity for all issues.

Severities

Build Trend

In order to see the trend of the analysis results, a chart showing the number of issues per build is also shown. This chart is used on the details page as well as in the job overview. Currently, the type and configuration of the chart are fixed. This will be enhanced in future versions of the plugin.

Trend Chart

Issues Overview

You can get a fast and efficient overview of the reported set of issues in several aggregation views. Depending on the number or type of issues, you will see the distribution of issues by

  • Static Analysis Tool

  • Module

  • Package or Namespace

  • Severity

  • Category

  • Type

Each of these detail views is interactive, i.e., you can navigate into a subset of the categorized issues.

Packages Overview

Issues Details

The set of reported issues is shown in a modern and responsive table. The table is loaded on demand using an Ajax call. It provides the following features:

  • Pagination: the number of issues is subdivided into several pages which can be selected by using the provided page links. Note that currently the pagination is done on the client side, i.e. it may take some time to obtain the whole table of issues from the server.

  • Sorting: the table content can be sorted by clicking on any of the table columns.

  • Filtering, Searching: you can filter the shown issues by entering some text in the search box.

  • Content Aware: columns are only shown if there is something useful to display. For example, if a tool does not report issue categories, then the category column will be automatically hidden.

  • Responsive: the layout should adapt to the actual screen size.

  • Details: the details message for an issue (if provided by the corresponding static analysis tool) is shown as child row within the table.

Details

Remote API

The plugin provides two REST API endpoints.

Summary of the analysis result

You can obtain a summary of a particular analysis report by using the URL [tool-id]/api/xml (or [tool-id]/api/json). The summary contains the number of issues, the quality gate status, and all info and error messages.
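
For example, a small Groovy sketch (the host, job name, and authentication handling are placeholders) could fetch the JSON summary of the report with id 'java' like this:

// Prints the raw JSON summary (issue counts, quality gate status, info/error messages)
// of the 'java' report of the last build; a real setup needs credentials and error handling.
def url = 'https://jenkins.example.com/job/my-job/lastBuild/java/api/json'
println new URL(url).text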

Details of the analysis result

The reported issues are also available as REST API. You can either query all issues or only the new, fixed, or outstanding issues. The corresponding URLs are:

  1. [tool-id]/all/api/xml: lists all issues

  2. [tool-id]/fixed/api/xml: lists all fixed issues

  3. [tool-id]/new/api/xml: lists all new issues

  4. [tool-id]/outstanding/api/xml: lists all outstanding issues

How You Can Help

I hope these new features are useful for everyone! Please download or install this new release and test it in your jobs:

  • Convert some of your jobs to the new API and test the new (and old) features (based on your requirements).

  • Read all labels carefully; I’m not a native speaker, so some descriptions might be misleading or incorrect.

  • Check the new URLs and names of the parsers; see the list of supported report formats. These can’t be changed after the beta testing.

If you find a problem, incorrect phrase, typo, etc. please report a bug in Jira (or even better: file a PR in GitHub).

This has been a brief overview of the new features of the Warnings plugin in Jenkins. For more, be sure to check out my talk at "DevOps World | Jenkins World" where I show more details of the Warnings plugin!

Come see Ullrich Hafner and many other Jenkins experts and contributors at DevOps World | Jenkins World on September 16-19th; register with the code JWFOSS for a 30% discount off your pass.

Speaker blogpost: A Cloud Native Jenkins

DevOps World | Jenkins World 2018

A few months ago I published a blog post about the Cloud Native Special Interest Group (SIG) and ongoing projects related to Cloud Native Jenkins. Next week we will be presenting at DevOps World | Jenkins World together with Carlos Sanchez and Jesse Glick, so I would like to provide a heads-up for our talk: “A Cloud Native Jenkins”.

In our talk, we will focus on the following topics: Pluggable Storage, our ephemeral Jenkins masters experiments, and tools which may be used to implement single-shot masters.

Pluggable Storage

Pluggable storage is one of the major areas we have been working on over the last few months. There are a number of parallel stories which are summarized on this page. There has been significant progress in the areas of artifact storage, build logging and configuration storage. A number of Jenkins Enhancement Proposals were submitted and accepted, and there are plugin releases and prototypes for these stories.

During our talk we will discuss the current status of these stories and future plans. In particular, we will cover the following areas and reference implementations:

  • Storing all your artifacts transparently, e.g. in a cloud service blob store like AWS S3.

  • Providing credentials from an external location.

  • Sending and retrieving the build logs from a cloud service.

  • Storing configuration data in external storage like Kubernetes Resources and SQL database

  • Storing test results externally, e.g. in an SQL database or a specialized Test Management System

There are existing plugins for the areas above, but there is a difference in the approach we have taken. Instead of creating new custom steps, we extend the Jenkins architecture so that the storage becomes transparent to users. For example, with the Artifact Manager on S3 plugin, common Archive Artifacts steps work transparently with remote storage, as do Jenkins Pipeline’s stash()/unstash() steps.
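
As a hedged sketch (the agent labels and paths below are placeholders), the pipeline script itself does not change: stash and unstash keep their usual form while the payload lives in the remote store:

node('builder') {
    checkout scm
    sh 'mvn -B package'
    stash name: 'binaries', includes: 'target/**'   // stored remotely, not on the master
}
node('tester') {
    unstash 'binaries'                              // fetched back from the remote store
    sh 'mvn -B verify'
}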

The reference implementations intentionally use different technologies so that we cover more scenarios. We regularly discuss the implementations in the Cloud Native SIG, and we would appreciate your feedback.

Ephemeral Jenkins masters research

Want something new? Several days ago Kohsuke Kawaguchi, the creator of Jenkins, posted the Jenkins: Shifting Gears article to summarize the plan for Jenkins evolution. Cloud Native Jenkins is a critical part of this plan, and it is not “just Jenkins X”. There are various architectural changes in Jenkins required to make this vision happen, and we plan to work on these changes in the Cloud Native SIG.

In our presentation, we will talk about our experiment with ephemeral Jenkins and single-shot masters. In this story we are creating a headless single-shot master which starts in a container, executes a Pipeline build and pushes all the results to remote storage, so that the container can just be deleted after completion. Such a master bundles plugins and self-configuration logic using “Configuration as Code”, so that it can start executing Pipelines in just a few seconds. Once packaged, it can be invoked from the CLI as simply as…

docker run --rm -v $PWD/demo/Jenkinsfile:/workspace/Jenkinsfile onenashev/cwp-jenkinsfile-runner-demo

or, in Kubernetes:

kubectl create configmap jenkinsfile --from-file=demo/Jenkinsfile
kubectl create -f demo/kubernetes.yaml

Such a single-shot master could also be made a part of a Cloud Native Jenkins system. Standard event handlers like Prow can invoke the builds on webhooks and report results back, so that the single-shot master can be used to build pull requests or to run Continuous Delivery flows. Extra agents could also be connected to the master on-demand, using the Kubernetes plugin or sidecar containers.
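
As a rough sketch of connecting such on-demand agents with the Kubernetes plugin (the pod label, image, and commands below are placeholders):

// Provisions a throwaway pod for the duration of the node block and runs the build inside it.
podTemplate(label: 'maven-pod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('maven-pod') {
        container('maven') {
            checkout scm
            sh 'mvn -B verify'
        }
    }
}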

Single-shot master concept

Tools

In order to make this experiment possible, we used a toolchain based on Docker, Jenkinsfile Runner, the Configuration as Code Plugin (JCasC), and the Custom WAR Packager tool (https://github.com/jenkinsci/custom-war-packager), which glues everything together.

Custom WAR Packager is a new tool which takes various configurations (a YAML specification defining the core version, the list of plugins, system properties, Groovy Hooks, JCasC YAMLs, …) and then bundles everything as a ready-to-fly WAR file or Docker image. Starting from version 1.2, Custom WAR Packager also supports packaging Jenkinsfile Runner images as an experimental feature. I will do a separate blog post about this new tool later, but there is already some documentation and a number of demos in the project’s repo.

Our demo

Yes, we will have a demo! We will show a single-shot master running with Pluggable storage implementations for AWS environments (Amazon S3, AWS CloudWatch, EKS, etc.), which executes Jenkins Pipelines for Maven projects and provisions agents in Kubernetes on-demand.

The demo has yet to be published, but you can already find a simpler Jenkinsfile Runner demo here.

Want to know more?

The upcoming DevOps World | Jenkins World conferences are heavily packed with talks related to Cloud Native Jenkins, including war stories and presentations on projects like Jenkins X and Jenkins Evergreen. It is a great chance to get more information about using Jenkins in cloud environments.

If you are a Jenkins contributor, or just want to become one, also join the Contributor Summit (Sep 17 in the US and Oct 23 in Nice) or visit the Jenkins community booth in the Exhibition hall. At the Contributor Summit on Sep 17 we will also have a face-to-face Cloud Native SIG meeting. Feel free to contribute to the agenda here.

Come meet Carlos, Jesse, Oleg, and other Cloud Native SIG members at Jenkins World on September 16-19th in San Francisco and on October 22-25 in Nice. Register with the code JWFOSS for a 30% discount off your pass.

2018 DevOps|Jenkins Community Survey Now Open

This is a guest post by Brian Dawson on behalf of CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins.

Take the 5th Annual DevOps and Jenkins Community Survey

With DevOps World | Jenkins World San Francisco right around the corner, CloudBees is excited to sponsor the 2018 DevOps and Jenkins Community Survey. We want to capture the details of your DevOps experience in order to provide valuable insights to the Jenkins Community and beyond. Our community is stronger together - and this look at our collective experience will reveal the big picture and shine a light on key trends. This year, as the Jenkins project continues to evolve with Jenkins X, Configuration as Code and more, your input is more critical than ever.

Let’s look at what we learned in 2016 and 2017:

In 2016 we found that:

  • Jenkins continued to hold the position as a company standard orchestration solution.

    • 29% of respondents’ companies use Jenkins on more than 50 projects

  • In regard to SCM tools, Git continued its march to dominance:

    • Git usage increased to 85%

    • Subversion usage decreased to 35%

  • When it comes to practices, Agile and CI seemed to be the standard, and CD adoption still had a ways to go:

    • 85% practiced Agile

    • 82% practiced CI

    • 61% practiced DevOps

    • 46% practiced CD

Agile, CD and DevOps practices

In 2017 respondents reported that:

  • Jenkins Pipeline gained widespread adoption, with 89% of survey takers using Pipeline or planning to use it within 6 months.

  • Container technology was cemented as a key part of the CD/DevOps ecosystem, yet Kubernetes usage was just starting to gain momentum at 20.15%:

Container technology usage
  • Jenkins, CD, and DevOps are getting more attention from Architects, with 39% of respondents identifying as Architects, nearly double the previous year

  • Git was the clear SCM of choice at 90%, increasing nearly 5% over the previous year

What SCM do you use?

What will you and the community tell us this year? Are more people practicing DevOps? Is Kubernetes the leader in container orchestration? Is pipeline the standard for creating workflows? Take the survey and let’s find out!

As always, your personal information (name, email address and company) will NOT be used by CloudBees for sales or marketing and the survey results will be made publicly available to the Jenkins Community. We will also be publishing a blog series analyzing trends over the last 5 years and offering predictions on the evolution of DevOps. If you’re curious about what insights your input will provide, see the results of last year’s 2017 survey.

As an added incentive to participate, CloudBees will enter participants into a drawing for a free pass to DevOps World | Jenkins World 2019 (1st prize, $1,199.00 value) or a $100 Amazon gift card (2nd prize)!

The survey will close at the end of October, so grab a cup of coffee and get started. We promise the survey will be done before your latte is.

There are laws that govern prize giveaways and eligibility; CloudBees has compiled all those fancy terms and conditions here.

Speaker blogpost: Jenkins Evergreen At DevOps World | Jenkins World 2018

DevOps World | Jenkins World 2018

Evergreen is a distribution of Jenkins we are working on that provides an easy-to-use, automatically upgrading experience. This year at the conference, there will be not just one, but two talks presenting Evergreen to the Jenkins community:

Tyler will present the overall Jenkins Evergreen architecture, its inception and how this aims at making it much simpler for people to just use Jenkins to build their projects, without having to become Jenkins admins.

On the last conference day, during my own talk I will focus on the improved developer experience, and zoom into how we implemented some important features.

We will dig together into the Error Telemetry system put in place, which allows us to actually fix errors and warnings people see in production environments: how instances automatically report errors to the Evergreen backend, and how we then centralize and analyze them using Sentry. We will explain how the Incrementals system gives developers a very short roundtrip between a merged pull request and a release we can push out to all instances. We will see concrete examples of issues we fixed and released to Evergreen instances just a few days after we opened an alpha version to the world.

I will show you how an instance starts up and gets upgraded by communicating with the backend it’s constantly connected to, how the backend knows what it should instruct an instance to download and install, and how we trigger an automated data snapshot.

You will obviously see a demo of all this, showing in particular how Evergreen can already run on a Docker host or on AWS (more environments to come!), using some of the so-called flavors of Jenkins Evergreen.

Come meet us at DevOps World | Jenkins World 2018 on September 16-19th in San Francisco. We will be hanging out around the OSS space, eager to answer more questions.

Register with the code JWFOSS for a 30% discount off your pass.

Want to know how Jenkins builds Jenkins? Catch this session at DevOps World | Jenkins World next week in San Francisco!

DevOps World | Jenkins World 2018

Next week Olivier Vernin from CloudBees and Brian Benz from Microsoft will be presenting a session at DevOps World | Jenkins World about how Microsoft has been working with Jenkins to build Jenkins plugins and produce Jenkins on Microsoft Azure. These plugins run Jenkins on Azure Linux and Windows VMs, Kubernetes, and Azure App Service, and also deploy artifacts to those Azure platforms and more. All are open source and available on GitHub.

Here’s our session, where we’ll be sharing successes and challenges of getting the infrastructure up and running:

Tuesday, September 18

In this session, we’ll discuss the real-life implementation of Jenkins' development and delivery infrastructure in the cloud as it has evolved from a mix of platforms to Microsoft Azure. Expect a frank discussion of how issues that were encountered along the way were overcome, how the architecture has evolved, and what’s on the roadmap. We’ll share important tips and tricks for implementing your own Jenkins infrastructure on any cloud, based on Jenkins' own experience with their implementation.

See you in San Francisco!

Come meet us at DevOps World | Jenkins World 2018 on September 16-19th in San Francisco. We will be hanging out around the OSS space, eager to answer more questions.

Register with the code JWFOSS for a 30% discount off your pass.
