Channel: Jenkins Blog

Indispensable, Disposable Jenkins

This is a guest post by Mandy Hubbard, Software Engineer/QA Architect at Care.com.

Imagine this: It’s 4:30pm on a Friday, you have a major release on Monday, and your Jenkins server goes down. It doesn’t matter if it experienced a hardware failure, fell victim to a catastrophic fat-finger error, or just got hit by a meteor - your Jenkins server is toast. How long did it take to perfect your Pipeline, all your Continuous Delivery jobs, plugins, and credentials? Hopefully you at least have a recent backup of your Jenkins home directory, but you’re still going to have to work over the weekend with IT to procure a new server, install it, and do full regression testing to be up and running by Monday morning. Go ahead and take a moment, go to your car and just scream. It will help … a little.

But what if you could have a Jenkins environment that is completely disposable, one that could be easily rebuilt at any time? Using Docker and Joyent’s ContainerPilot, the team at Care.com HomePay has created a production Jenkins environment that is completely software-defined. Everything required to set up a new Jenkins environment is stored in source control, versioned, and released just like any other software. At Jenkins World, I’ll do a developer deep-dive into this approach during my technical session, Indispensable, Disposable Jenkins, including a demo of bringing up a fully configured Jenkins server in a Docker container. For now, let me give you a basic outline of what we’ve done.

Mandy will be presenting more on this topic at Jenkins World in August. Register with the code JWFOSS for a 30% discount off your pass.

First, we add ContainerPilot to our Jenkins image by including it in the Dockerfile.

Dockerfile
## ContainerPilot

ENV CONTAINERPILOT_VERSION 2.7.0
ENV CONTAINERPILOT_SHA256 3cf91aabd3d3651613942d65359be9af0f6a25a1df9ec9bd9ea94d980724ee13
ENV CONTAINERPILOT file:///etc/containerpilot/containerpilot.json

RUN curl -Lso /tmp/containerpilot.tar.gz https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz && \
    echo "${CONTAINERPILOT_SHA256}  /tmp/containerpilot.tar.gz" | sha256sum -c && \
    tar zxf /tmp/containerpilot.tar.gz -C /bin && \
rm /tmp/containerpilot.tar.gz

Then we specify containerpilot as the Docker command in the docker-compose.yml and pass the Jenkins startup script as an argument. This allows ContainerPilot to perform our preStart business before starting the Jenkins server.

docker-compose.yml
jenkins:
  image: devmandy/auto-jenkins:latest
  restart: always
  mem_limit: 8g
  dns:
    - 8.8.8.8
    - 127.0.0.1
  env_file: _env
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - CONSUL=consul
  links:
    - consul:consul
  ports:
    - "8080:80"
    - "2222:22"
  command: >
    containerpilot
    /usr/local/bin/jenkins.sh
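ContainerPilot itself reads its configuration from the file named by the CONTAINERPILOT environment variable set in the Dockerfile. As a rough sketch only (the preStart script path and the health check below are illustrative assumptions, not our exact setup), a ContainerPilot 2.x containerpilot.json that runs a preStart hook before Jenkins comes up might look like this:

```json
{
  "consul": "consul:8500",
  "preStart": "/usr/local/bin/preStart.sh",
  "services": [
    {
      "name": "jenkins",
      "port": 8080,
      "health": "curl -sf http://localhost:8080/login",
      "poll": 10,
      "ttl": 25
    }
  ]
}
```

The preStart command runs to completion before the main process (our jenkins.sh) starts, which is what gives us the window to rewrite configuration files from environment variables.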

Configuration data is read from a Docker Compose _env file, as specified in the docker-compose.yml file, and stored in environment variables inside the container. This is an example of our _env file:

_env
GITHUB_TOKEN=<my Github user token>
GITHUB_USERNAME=DevMandy
GITHUB_ORGANIZATION=DevMandy
DOCKERHUB_ORGANIZATION=DevMandy
DOCKERHUB_USERNAME=DevMandy
DOCKERHUB_PASSWORD=<my Dockerhub password>
DOCKER_HOST=<my Docker host, or localhost>
SLACK_TEAM_DOMAIN=DevMandy
SLACK_CHANNEL=jenkinsbuilds
SLACK_TOKEN=<my Slack token>
BASIC_AUTH=<my basic auth token>
AD_NAME=<my AD domain>
AD_SERVER=<my AD server>
PRIVATE_KEY=<my ssh private key, munged by a setup script>

Jenkins stores its credentials and plugin information in various XML files. The preStart script modifies the relevant files, substituting the environment variables as appropriate, using a command-line utility called xmlstarlet. Here is an example method from our preStart script that configures Github credentials:

github_credentials_setup() {
    ## Setting Up Github username in credentials.xml file
    echo
    echo -e "Adding Github username to credentials.xml file for SSH key"
    xmlstarlet \
        ed \
        --inplace \
        -u '//com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey[id="github"]/username' \
        -v ${GITHUB_USERNAME} \
        ${JENKINS_HOME}/credentials.xml

    echo -e "Adding Github username to credentials.xml file for Github token"
    xmlstarlet \
        ed \
        --inplace \
        -u '//com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl[id="github_token"]/username' \
        -v ${GITHUB_USERNAME} \
        ${JENKINS_HOME}/credentials.xml

    PASSWORD=${GITHUB_TOKEN}
    echo -e "Adding Github token to credentials.xml"
    xmlstarlet \
        ed \
        --inplace \
        -u '//com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl[id="github_token"]/password' \
        -v ${PASSWORD} \
        ${JENKINS_HOME}/credentials.xml
}

This approach can be used to automate all things Jenkins. These are just a few of the things I’ll show you in my Jenkins World session, which you can build on to automate anything else your Jenkins environment needs.

With software-defined Jenkins, pipeline infrastructure gains the same flexibility and resiliency as the rest of the development pipeline. If we decide to change our Jenkins configuration in any way – for example installing a new plugin or upgrading an existing one, adding a new global library, or adding new Docker images for build slaves – we simply edit our preStart script to include these changes, build a new Docker image, and the Jenkins environment is automatically reconfigured when we start a new container. Because the entire configuration specification lives in a Github repository, changes are merged to the "master" branch using pull requests, and our Jenkins Docker image is tagged using semantic versioning just like any other component. Jenkins can be both indispensable and completely disposable at the same time.


Scaling Jenkins with Kubernetes on Google Container Engine

This is a guest post by Guillaume Laforge, Developer Advocate for Google Cloud

Last week, I had the pleasure to speak at the Jenkins Community Day conference in Paris, organized by my friends from JFrog, provider of awesome tools for software management and distribution. I covered how to scale Jenkins with Kubernetes on Google Container Engine.

For the impatient, here are the slides of the presentation I’ve given:

Scaling Jenkins with Kubernetes on Google Container Engine

But let’s step back a little. In this article, I’d like to share with you why you would want to run Jenkins in the cloud, as well as give you some pointers to interesting resources on the topic.

Why run Jenkins in the cloud?

So why run Jenkins in the cloud? First of all, imagine your small team, working on a single project. You have your own little server, running under a desk somewhere, happily building your application on each commit, a few times a day. So far so good: your build machine running Jenkins isn’t too busy, and stays idle most of the day.

Let’s do some back-of-the-napkin calculations. Let’s say you have a team of 3 developers, committing roughly 4 times a day, on one single project, and the build takes roughly 10 minutes to run.

3 developers * 4 commits / day / developer * 10 minutes build time * 1 project = 2 hours

So far so good, your server indeed stays idle most of the day. Usually, at most, your developers will wait just 10 minutes to see the result of their work.

But then your team grows to 10 people. The team is still just as productive, but the project has become bigger, so the build time goes up to 15 minutes:

10 developers * 4 commits / day / developer * 15 minutes build time * 1 project = 10 hours

You’re already at 10 hours of build time, so your server is busy the whole day, and at times you might have several builds going on at the same time, using several CPU cores in parallel. And instead of building in 15 minutes, the build might sometimes take longer, or your build might be queued. So in theory it might be 15 minutes, but in practice it could be half an hour because of the length of the queue or the longer time to build projects in parallel.

Now, the company is successful and has two projects instead of one (think a backend and a mobile app). Your teams grow further, up to 20 developers per project. The developers are a little less productive because of the size of the codebase and project, so they only commit 3 times a day. The build takes more time too, at 20 minutes (in ideal time). Let’s do some math again:

20 developers * 3 commits / day / developer * 20 minutes build time * 2 projects = 40 hours

Whoa, that’s already 40 hours of total build time if all the builds run serially. Fortunately, our server is multi-core, so perhaps 2, 3 or even 4 builds can run in parallel, but many builds are certainly still enqueued. And as the build queue grows further, the real effective build time is certainly longer than 30 minutes. At times, developers won’t see the result of their work before at least an hour, if not more.

One last calculation? With team sizes of 30 developers, a decreased productivity of 2 commits per day, 25-minute build times, and 3 projects? You get 75 hours of total build time. You may start creating a little build farm, with a master and several build agents, but that also increases the burden of server management. Also, if you move towards a full Continuous Delivery or Continuous Deployment approach, you may further increase your build times to go all the way up to deployment, make more but smaller commits, etc. You could think of running builds less often, or even on a nightly basis, to cope with the demand, but then your company is less agile, the time-to-market for fixes or new features increases, and your developers may become more frustrated because they are developing in the dark, not knowing before the next day whether their work was successful or not.
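The napkin formula behind these scenarios is simply developers × commits per day × build minutes × number of projects. A throwaway sketch of the calculation:

```python
def daily_build_hours(developers, commits_per_dev, build_minutes, projects):
    """Total serial build time accumulated per day, in hours."""
    return developers * commits_per_dev * build_minutes * projects / 60

# The growing-team scenarios from this article:
for args in [(10, 4, 15, 1), (20, 3, 20, 2), (30, 2, 25, 3)]:
    print(args, "->", daily_build_hours(*args), "hours")
# -> 10.0, 40.0, and 75.0 hours respectively
```

The serial total is the point: once it exceeds the hours in a working day, either builds queue up or you add build capacity.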

With my calculations, you might think that it makes more sense for big companies, with tons of projects and developers. This is quite true, but when you’re a startup, you also want to avoid taking care of local server management, provisioning, etc. You want to be agile, and use only compute resources you need for the time you need them. So even if you’re a small startup, a small team, it might still make sense to take advantage of the cloud. You pay only for the actual time taken by your builds as the build agent containers are automatically provisioned and decommissioned. The builds can scale up via Kubernetes, as you need more (or less) CPU time for building everything.

And this is why I was happy to dive into scaling Jenkins in the cloud. For that purpose, I decided to build with containers on Kubernetes, as my app was containerized as well. Google Cloud offers Container Engine, which is basically Kubernetes in the cloud.

Useful pointers

I based my presentation and demo on some great solutions that are published on the Google Cloud documentation portal. Let me give you some pointers.

The latter one is the tutorial I actually followed for the demo that I presented during the conference. It’s a simple Go application, with a frontend and backend. It’s continuously built on each commit (well, every minute Jenkins checks if there’s a new commit), and deployed automatically in different environments: dev, canary, production. The sources of the project are stored in Cloud Source Repository (it can be mirrored from Github, for example). The containers are stored in Cloud Container Registry. And both the Jenkins master and agents, as well as the application, are running inside Kubernetes clusters in Container Engine.

Summary and perspective

Don’t bother with managing servers: on your own hardware you’ll quickly run out of CPU cycles. Scale your builds in the cloud instead, and you’ll have happier developers with builds that are super snappy!

And for the record, at Google, dev teams are also running Jenkins! There was a presentation (video and slides available) given last year by David Hoover at Jenkins World talking about how developers inside Google are running hundreds of build agents to build projects on various platforms.

Jenkins World 2017 Agenda is Live!


This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc.

I am excited to announce the agenda for Jenkins World 2017. This year’s event promises to have something for everyone - whether you are a novice, intermediate, or advanced user…you are covered. Jenkins World 2017 consists of 6 tracks, 60+ Jenkins and DevOps sessions, 40+ industry speakers, and 16+ trainings and workshops.


Here is a sneak peek at Jenkins World 2017:

Show 'n Tell

It’s all about that demo. These sessions are technically advanced with some code sharing, heavy on demos and just a tad bit of slides.

  • Plugin Development for Pipeline

  • Extending Blue Ocean

  • How to Use Jenkins Less: How and Why You Can Minimize Your Jenkins Footprint

  • Jenkins Pipeline on your Local Box to Reduce Cycle Time


War Stories

These are first-hand Jenkins experiences and lessons learned. These stories will inspire your innovative solutions.

  • Pipelines At Scale: How Big, How Fast, How Many?

  • JenkinsPipelineUnit: Test Your Continuous Delivery Pipeline

  • Codifying the Build and Release Process with a Jenkins Pipeline Shared Library

  • Jumping on the Continuous Delivery Bandwagon: From 100+ FreeStyle Jobs to Pipeline(s) - Tactics, Pitfalls and Woes


Trainings and Workshops

(additional fees apply to certain trainings/workshops)

  • Introduction to Jenkins

  • Introduction to Plugin Development

  • Let’s Build a Jenkins Pipeline!

  • Fundamentals of Jenkins and Docker

The Jenkins World agenda is packed with even more sessions; it will be a very informative event.


Convince your Boss

We know that attending Jenkins World needs little convincing but just in case you need a little help to justify your attendance, we’ve created a Justify your Trip document to help speed up the process.

Register for Jenkins World 2017 with the code JWATONG for a 20% discount off your pass.

Hope to see you there!

Getting Started with the Blue Ocean Dashboard

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

Blue Ocean is a new user experience for Jenkins, and version 1.0 is now live! Blue Ocean makes Jenkins, and continuous delivery, approachable to all team members. In my previous post, I used the Blue Ocean Activity View to track the state of branches and Pull Requests in one project. In this video, I’ll use the Blue Ocean Dashboard to get a personalized view of the areas of my project that are most important to me, and also to monitor multiple projects. Please enjoy!


Microsoft PowerShell Support for Pipeline


I am pleased to announce Microsoft PowerShell support for Jenkins Pipeline! As of Durable Task 1.14 and Pipeline Nodes and Processes Plugin 2.12, you will now be able to run Microsoft PowerShell scripts directly in your Jenkins Pipeline projects. This blog post covers the basics of getting started with Microsoft PowerShell in Pipeline and provides some basic examples.

Introduction to Microsoft PowerShell

PowerShell is Microsoft’s open source and cross platform command line shell, as well as an automation and configuration tool/framework which has a broad user base. PowerShell can be used to perform common system administration tasks in Windows, macOS, and Linux environments. It can also be used as a general purpose scripting language. Now that Jenkins Pipeline supports PowerShell, you can enjoy the rich set of features in PowerShell for your daily DevOps work.

Before diving into using PowerShell in your Pipeline, I recommend reading the Windows PowerShell Reference as well as the PowerShell Team Blog for an introduction to PowerShell features, utilities, and as a quick look into the PowerShell language. Microsoft also has an active PowerShell community on GitHub, which I highly recommend visiting to submit feature requests and bug reports as you see fit. Jenkins Pipeline currently supports Microsoft PowerShell 3.0 or higher, so also be sure to check which version of PowerShell is installed on your system in order to take advantage of PowerShell in your Pipeline. Please note that we recommend that you upgrade to the latest stable version of PowerShell available, which as of this writing is version 5.1.14393.

The powershell step

node {
    powershell 'Write-Output "Hello, World!"'
}

Using Microsoft PowerShell in Pipeline

Writing PowerShell code as part of your pipeline is incredibly simple. The step that you will use is simply powershell, and it includes the same optional parameters as the Windows Batch (bat) step, including:

  • returnStdout: Returns the standard output stream with a default encoding of UTF-8 (alternative encoding is optional)

  • returnStatus: Returns the exit status (integer) of the PowerShell script

Examples

Capture exit status of a PowerShell script

node {
    def status = powershell(returnStatus: true, script: 'ipconfig')
    if (status == 0) {
        // Success!
    }
}

Capture and print the output of a PowerShell script

node {
    def msg = powershell(returnStdout: true, script: 'Write-Output "PowerShell is mighty!"')
    println msg
}

Which streams get returned when I use returnStdout?

Until the release of PowerShell 5, there were five distinct output streams. PowerShell 5 introduced a sixth stream for pushing "informational" content, with the added benefit of being able to capture messages sent to Write-Host. Each row of the following table describes a PowerShell stream along with the corresponding Cmdlet used for writing to the stream for that particular row. Please keep in mind that stream 6 and associated cmdlets either do not exist or exhibit alternate behavior in versions of PowerShell earlier than version 5.

Stream | Description                  | Cmdlet
------ | ---------------------------- | ------
1      | Output stream (e.g. stdout)  | Write-Output
2      | Error stream (e.g. stderr)   | Write-Error
3      | Warning stream               | Write-Warning
4      | Verbose stream               | Write-Verbose
5      | Debug stream                 | Write-Debug
6      | Information stream           | Write-Information (or Write-Host with caveats)

If you are using the returnStdout option of the powershell Pipeline step then only stream 1 will be returned, while streams 2-6 will be redirected to the console output. For example:

Write to all available streams and return the standard output

node {
    def stdout = powershell(returnStdout: true, script: '''
        # Enable streams 3-6
        $WarningPreference = 'Continue'
        $VerbosePreference = 'Continue'
        $DebugPreference = 'Continue'
        $InformationPreference = 'Continue'

        Write-Output 'Hello, World!'
        Write-Error 'Something terrible has happened!'
        Write-Warning 'Warning! There is nothing wrong with your television set'
        Write-Verbose 'Do not attempt to adjust the picture'
        Write-Debug 'We will control the horizontal.  We will control the vertical'
        Write-Information 'We can change the focus to a soft blur or sharpen it to crystal clarity.'
    ''')
    println stdout
}
Console output:
[Pipeline] {
[Pipeline] powershell
[TestStreams] Running PowerShell script
<Jenkins Home>\workspace\TestStreams@tmp\durable-4d924c2d\powershellScript.ps1 : Something terrible has
happened!
At <Jenkins Home>\workspace\TestStreams@tmp\durable-4d924c2d\powershellMain.ps1:2 char:1
+ & '<Jenkins Home>\workspace\TestStreams@tmp\durable-4d924c ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,powershellScript.ps1

Warning! There is nothing wrong with your television set
Do not attempt to adjust the picture
We will control the horizontal.  We will control the vertical
We can change the focus to a soft blur or sharpen it to crystal clarity.
Hello, World!
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

Note that "Hello, World!" gets printed last even though it is the first output statement in my script. Another interesting aspect of this example is that the powershell step failed, which ultimately caused the job to fail. The failure in this example is due to the PowerShell error stream being non-empty, which therefore caused the step to result in a non-zero exit status. However, as you will soon discover, there are a variety of causes for a failing powershell step.

What causes a failing exit status?

When you execute a powershell step, it may produce a non-zero exit code and fail your pipeline build. This is very similar to other shell steps with some interesting caveats. Your powershell step may produce a failing exit status in the following instances:

  1. Something in your PowerShell script has thrown an exception

  2. Your PowerShell script explicitly calls exit with a non-zero exit code

  3. Your PowerShell script calls a native application that produces a non-zero $LastExitCode

  4. Your PowerShell script results in a non-empty error stream (with or without throwing an exception)

Overriding the exit status behavior of your powershell step can be achieved by explicitly exiting from your script as long as the failure was not caused by an unhandled exception. For example:

Unavoidable failure caused by an unhandled exception

node {
    powershell '''
        throw 'Error! Problem Exists Between Keyboard And Chair'
        exit 0  # Unreachable code'''
}

Failed step caused by a non-empty error stream

node {
    powershell '''
        Write-Error 'Error! Problem Exists Between Keyboard And Chair''''
}

Failure prevented by an explicit exit

node {
    powershell '''
        Write-Error 'Error! Problem Exists Between Keyboard And Chair'
        exit 0'''
}

Scripts vs. Cmdlets

A Cmdlet is a small, lightweight utility that is either written in C# and compiled, or written directly in PowerShell. Depending on your goal, your pipeline can use Cmdlets directly in pipeline code, call a self-contained PowerShell script, or use some mixture of the two. If your strategy is to keep each powershell step as short and succinct as possible, then it may make sense for you to write a library of Cmdlets, but if you have monolithic scripts then it may make sense to call those scripts directly from your pipeline. The choice is entirely up to you, as both scenarios are supported.

Thanks for reading, and have fun!

I sincerely hope that this post has encouraged you to try using PowerShell in your Jenkins Pipeline. Please do not hesitate to file an issue against the durable-task plugin in JIRA if you discover any problem that you suspect is related to the powershell step. For general PowerShell-related issues or inquiries, please route your questions to the PowerShell community.


Codifying the Build and Release Process with a Pipeline Shared Library

This is a guest post by Alvin Huang, DevOps Engineer at FireEye.

As a security company, FireEye relentlessly protects our customers from cyber attacks. To act quickly on intelligence and expertise learned, the feedback loop from the front lines to features and capabilities in software must be small. Jenkins helps us achieve this by allowing us to build, test, and deploy to our hardware and software platforms faster, so we can stop the bad guys before they reach our customers.

More capabilities and functionalities in our product offerings means more applications and systems, which means more software builds and jobs in Jenkins. Within the FaaS (FireEye as a Service) organization, the tens of Jenkins jobs that were manageable manually in the web GUI quickly grew to hundreds of jobs that required more automation. Along the way, we outgrew our old legacy datacenter and were tasked with migrating 150+ Freestyle jobs on an old 1.x Jenkins instance to a newer 2.x instance in the new datacenter in 60 days.

Copying Freestyle job XML configuration files to the new server would leave technical debt. Using Freestyle job templates would be better, but for complicated jobs that require multiple templates, this would still create large dependency chains that would be hard to trace in the log output. Finally, developers were not excited about having to replicate global changes, such as adding an email recipient when a new member joins the team, across tens of jobs manually or using the Configuration Slicer. We needed a way to migrate the jobs in a timely fashion while getting rid of as much technical debt as possible.

Jenkins Pipeline to the rescue! In 2.0, Jenkins added the capability to create pipelines as first-class entities. At FireEye, we leveraged many of the features available in Pipeline to aid in the migration process, including the ability to:

  • create Pipeline as Code in a Jenkinsfile stored in SCM

  • create Jenkins projects automatically when new branches or repos get added with a Jenkinsfile

  • continue jobs after the Jenkins master or build agent crashes

  • and most importantly, build a Pipeline Shared Library that keeps projects DRY and allows new applications to be onboarded into Jenkins within seconds

However, Jenkins Pipeline came with a DSL that our users would have to learn to translate their Freestyle jobs to pipeline jobs. This would be a significant undertaking across multiple teams just to create Jenkins jobs. Instead, the DevOps team identified similarities across all the Freestyle jobs that we were migrating, learned the Jenkins DSL to become SMEs for the organization, and built a shared library of functions and wrappers that saved each Dev/QA engineer hours of time.

Below is an example function we created to promote builds in Artifactory:

vars/promoteBuild.groovy
def call(source_repo, target_repo, build_name, build_number) {
    stage('Promote to Production repo') {
        milestone label: 'promote to production'
        input 'Promote this build to Production?'

        node {
            Artifactory.server(getArtifactoryServerID()).promote([
                'buildName'   : build_name,
                'buildNumber' : build_number,
                'targetRepo'  : target_repo,
                'sourceRepo'  : source_repo,
                'copy'        : true,
            ])
        }
    }
}

def call(source_repo, target_repo) {
    buildInfo = getBuildInfo()

    call(source_repo, target_repo, buildInfo.name, buildInfo.number)
}

Rather than learning the Jenkins DSL and looking up how the Artifactory Plugin worked in Pipeline, users could easily call this function and pass it parameters to do the promotion work for them. In the Shared Library, we can also create build wrappers of opinionated workflows that encompass multiple functions, based on a set of parameters defined in the Jenkinsfile.

In addition to migrating the jobs, we also had to migrate the build agents. No one knew the exact list of packages, versions, and build tools installed on each build server, so rebuilding them would be extremely difficult. Rather than copying the VMs or trying to figure out what packages were on the build agents, we opted to use Docker to build containers with all dependencies needed for an application.
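To illustrate the calling side: once the shared library is made available to Jenkins (the library name below is hypothetical, not our actual configuration), a job's Jenkinsfile can invoke the two-argument overload of promoteBuild directly:

```groovy
// Load the shared library; 'faas-pipeline-library' is a hypothetical name
@Library('faas-pipeline-library') _

// Promote the current build from the dev repo to the production repo
promoteBuild('pipeline-examples-dev', 'pipeline-examples-prod')
```

This is the whole point of the library: one line in the Jenkinsfile instead of every team re-learning the Artifactory Plugin's Pipeline API.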

I hope you will join me at my Jenkins World session: Codifying the Build and Release Process with a Jenkins Pipeline Shared Library, as I dive deep into the inner workings of our Shared Pipeline Library and explore how we integrated Docker into our CI/CD pipeline. Come see how we can turn a Jenkinsfile with just a set of parameters like this:

Jenkinsfile
standardBuild {
    machine          = 'docker'
    dev_branch       = 'develop'
    release_branch   = 'master'
    artifact_pattern = '*.rpm'
    html_pattern     = [keepAll: true, reportDir: '.', reportFiles: 'output.html', reportName: 'OutputReport']
    dev_repo         = 'pipeline-examples-dev'
    prod_repo        = 'pipeline-examples-prod'
    pr_script        = 'make prs'
    dev_script       = 'make dev'
    release_script   = 'make release'
}

and a Dockerfile like this:

Dockerfile
FROM faas/el7-python:base

RUN yum install -y python-virtualenv \
        rpm-build && \
        yum clean all

Into a full Jenkins Pipeline like this:

Full Stage View

As we look ahead at FireEye, I will explore how the Shared Library sets us up for easier future migrations of other tools such as Puppet, JIRA, and Artifactory, and easier integration with new tools like OpenShift. I will also cover our strategies for deployments and our plans to move to Declarative Pipeline.

Alvin will be presenting more on this topic at Jenkins World in August, register with the code JWFOSS for a 30% discount off your pass.

Ask the Experts at Jenkins World 2017


This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc.

Ask the Experts

There are less than four weeks left until Jenkins World 2017. As usual, Jenkins World would not be complete without the Jenkins projects' "Ask the Experts". If you are new to Jenkins World, the Jenkins project booth will be located on the expo floor where contributors to the project hang out, share demos, and help users via the "Ask the Experts" program. I hope you will be pleasantly surprised at the amount of 1-on-1 learning to be had in the booth!

We have a great list of experts who have volunteered to help staff the booth, including many frequent contributors, JAM organizers, and board members:

Don’t have questions? Stop by anyway to say ‘hello’ and pick up some stickers.

If you are an active member of the Jenkins community and/or a contributor, consider taking part in the "Ask the Experts" program. It’s a great opportunity to bond with other contributors and talk with fellow Jenkins users.

Join the Jenkins project at Jenkins World in August, register with the code JWFOSS for a 30% discount off your pass.

Plugin Development Tutorials, Videos, and More


This is a guest post by Mark Waite, who maintains the git plugin, the git client plugin, and is a technical evangelist for CloudBees, Inc.

While developing the "Intro to Plugin Development" workshop for Jenkins World 2017, I was impressed by the many Jenkins plugin development videos, tutorials, and guides. Here are some of my favorite plugin development topics and links.

Plugin tutorial videos

Plugin tutorial pages

More details

Many of the Jenkins plugin development topics have dedicated pages of their own, including user interface, plugin testing, and javadoc.

User interface

Testing a plugin

Custom build steps

Actions

Mark will be presenting Intro to Plugin Development at Jenkins World in August. Register with the code JWFOSS for a 30% discount off your pass.

Important security updates for multiple Jenkins plugins


Multiple Jenkins plugins received updates today that fix several security vulnerabilities, including multiple high severity ones.

We strongly recommend updating the following plugins as soon as possible:

Less severe security updates have been released for these plugins:

Additionally, the OWASP Dependency-Check Plugin recently also received a security update.

For an overview of what was fixed, see the security advisory.

Subscribe to the jenkinsci-advisories mailing list to receive important future notifications related to Jenkins security.

Introducing the Jenkins Minute video series

This is a guest post by Liam Newman, Technical Evangelist at CloudBees.

There are less than three weeks left until Jenkins World 2017. Like last year, I’ll be at the "Ask the Experts" booth to answer questions about all things Jenkins. In preparation, I’ve started a continuing series of quick tutorial videos that answer some of the most common questions I’ve seen asked in the community forums. These are by no means exhaustive - they’re basic answers, which we can build upon. Each video takes a simple example, shows how to create a working solution, and includes links in the description to related Jenkins documentation pages.

I hope you find them useful. Look for more of them coming soon!

Liam will be at the "Ask the Experts" booth at Jenkins World in August. Register with the code JWFOSS for a 30% discount off your pass.

Creating Your First Pipeline in Blue Ocean
Using a Dockerfile with Jenkins Pipeline
Adding Parameters to Jenkins Pipeline
Recording Test Results and Archiving Artifacts

CI/CD with Jenkins Pipeline and Azure


This is a guest post by Pui Chee Chen, Product Manager at Microsoft working on Azure DevOps open source integrations.

Recently, we improved the Azure Credential plugin by adding a custom binding for Azure Credentials which allows you to use an Azure service principal (the analog to a service or system account) via the Credentials Binding plugin. This means it’s now trivial to run Azure CLI commands from a Jenkins Pipeline. We also recently published the first version of the Azure App Service plugin which makes it very easy to deploy Azure Web Apps directly from Jenkins Pipeline. While we’ll have much more to discuss in our Jenkins World presentation on Azure DevOps open source integrations, in this blog post I wanted to share some good snippets of what is possible today with Jenkins Pipeline and Azure.

First, a simple example using the Azure CLI to list resources in the subscription:

Jenkinsfile (Scripted Pipeline)
node {
    /* .. snip .. */
    stage('Deploy') {
        withCredentials([azureServicePrincipal('principal-credentials-id')]) {
            sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
            sh 'az account set -s $AZURE_SUBSCRIPTION_ID'
            sh 'az resource list'
        }
    }
}

azureServicePrincipal() cannot be used in Declarative Pipeline until JENKINS-46103 is resolved.

Once a Pipeline can interact with Azure, there are countless ways one could implement continuous delivery with Jenkins and Azure: from deploying a simple web app with the Azure App Service plugin and the azureWebAppPublish step, to a more advanced container-based delivery pipeline that delivers new containers to Kubernetes via Azure Container Service.

With the Docker Pipeline plugin and a little bit of extra scripting, a Jenkins Pipeline can also build and publish a Docker container to an Azure Container Registry:

Jenkinsfile (Scripted Pipeline)
import groovy.json.JsonSlurper

node {
    def container
    def acrSettings

    withCredentials([azureServicePrincipal('principal-credentials-id')]) {
        stage('Prepare Environment') {
            sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
            sh 'az account set -s $AZURE_SUBSCRIPTION_ID'
            acrSettings = new JsonSlurper().parseText(
                                            sh(script: "az acr show -o json -n my-acr", returnStdout: true))
        }

        stage('Build') {
            container = docker.build("${acrSettings.loginServer}/my-app:${env.BUILD_ID}")
        }

        stage('Publish') {
            /* https://issues.jenkins-ci.org/browse/JENKINS-46108 */
            sh "docker login -u ${AZURE_CLIENT_ID} -p ${AZURE_CLIENT_SECRET} ${acrSettings.loginServer}"
            container.push()
        }

        stage('Deploy') {
            echo 'Orchestrating a new deployment with kubectl is a simple exercise left to the reader ;)'
        }
    }
}
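For the curious, the Deploy stage left as an exercise above might be sketched like this (the deployment and container names are hypothetical, and it assumes kubectl is already configured against your cluster):

```groovy
stage('Deploy') {
    // Hypothetical sketch: point the running Kubernetes deployment at the
    // image we just pushed to the Azure Container Registry
    sh "kubectl set image deployment/my-app my-app=${acrSettings.loginServer}/my-app:${env.BUILD_ID}"
}
```

Kubernetes then performs a rolling update of the pods to the new image tag.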

If you have been following our Azure Blog, you may have noticed we have shipped a lot of updates to provide better support for Azure on Jenkins, and vice versa, such as:

  • Hosted Jenkins. A new Solution Template in the Azure Marketplace lets you spin up a Jenkins master on Azure in minutes. Not only is it easy and fast, the solution template gives you the option to scale up by selecting the VM disk type and size. And guess what? You can even select the Jenkins release type you want to use - LTS, weekly build, or Azure verified - all under your control.

  • Continuous integration experience. In the latest version of our Azure VM Agents plugin, we improved the user experience and added the option to select Managed Disks for the disk type (which is currently used extensively on ci.jenkins.io). You no longer need to worry about exceeding the number of VMs on your subscription.

  • Continuous deployment experience. Now, if the Azure CLI is not your cup of tea, we released our first plugin to provide continuous deployment support for Azure App Service. The plugin supports all languages Azure App Service supports. We even have a walkthrough here in the brand new Jenkins Hub, where you can find all Jenkins on Azure resources.

  • Pipeline readiness. Also, all Azure plugins are and will be Pipeline-ready. Have you been leveraging our Azure Storage plugin in your Pipeline?

So, what’s next? We have a big surprise in store at Jenkins World! :)

We are serious about supporting open source and the open source community. Be sure to catch our talk on Azure DevOps open source integrations. See you at Jenkins World 2017!

Join the Azure DevOps team at Jenkins World in August, register with the code JWFOSS for a 30% discount off your pass.

Remoting Update. Protocols deprecation, Java 8 requirement and plans


There are upcoming changes in Jenkins "core" which may require extra steps when upgrading Jenkins. If you use configuration management for Jenkins agents, please read this announcement carefully.

If you have ever seen messages like "Channel is already closed" or "Remote call failed" in your build logs, you have already met Jenkins Remoting.

Remoting is an agent executable (aka slave.jar) and a library implementing the communication layer between Jenkins masters and their agents (including communication protocols, distributed calls and classloading). It is also used in several other cases: Maven Integration Plugin, Remoting-based CLI, etc.

In order to make it clear what’s changing in Jenkins Remoting, I have documented the various components on the Remoting sub-project page, and will try to publish regular updates about the status of Remoting to this site and the developer mailing list.

In this post I would like to provide an update on the Remoting roadmap and to announce two major upcoming changes: the deprecation of old protocols and an upgrade to Java 8. Both changes will take place in one of the next weekly releases; the ETA is Jenkins 2.75 on Aug 20, 2017.

Below are details on the incoming changes and compatibility notes.

Old Remoting Protocols Deprecation

It has been almost one year since the release of the JNLP4-connect protocol in Remoting 3.0. This protocol has been enabled by default since Jenkins 2.46.x, and so far it has demonstrated good stability compared to the JNLP2 and JNLP3 protocols.

At the governance meeting we decided to disable the old Remoting protocols (JNLP/JNLP2 + CLI1) in new installations by default. There are three reasons for this:

  1. Maintenance of multiple protocols takes a lot of extra effort. The JNLP2 NIO engine is complex and hard to diagnose.

  2. There are known issues in JNLP2 connection management (see the protocol’s Errata). In many cases, updating to JNLP4 resolved them.

  3. JNLP1/JNLP2/CLI1 are unencrypted, and that is not something Jenkins users should have to accept in 2017.

It is tracked as JENKINS-45841 in Jenkins JIRA.

How?

  • When Jenkins is started in new-installation mode with the Installation Wizard enabled, the old protocols will be disabled

  • Jenkins shows an administrative warning when obsolete protocols are enabled
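On an existing instance, an administrator can inspect which protocols are enabled (and, if desired, trim the list to the modern ones) from the Script Console. This is a sketch assuming the Jenkins.instance.agentProtocols API; check the printed set before overwriting it, as the exact protocol names on your instance may differ:

```groovy
// Script Console sketch: list the currently enabled agent protocols
println Jenkins.instance.agentProtocols

// Keep only the modern protocols and persist the change
Jenkins.instance.agentProtocols = ['JNLP4-connect', 'Ping'] as Set
Jenkins.instance.save()
```

Agents using older protocols will need to reconnect via JNLP4 after such a change.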

Compatibility notes

Older instances won’t be affected by the disabling of the older JNLP1/JNLP2 protocols, which will still be enabled for them. Newly created instances that skip or disable the Setup Wizard will not be affected either.

"New" Jenkins instances installed via setup wizard may be affected in edge cases. For example:

  • Agents with Remoting older than 3.0 will be unable to connect.

    • Mitigation: Before updating, make sure an old Remoting is not bundled in custom Docker images, AMIs, etc.

  • Swarm Plugin: old versions of Swarm Client (before 3.3) will be unable to connect to Jenkins, because they bundle Remoting 2.x

    • Mitigation: Update Swarm Client

  • Very old jenkins-cli.jar without CLI2 support will be unable to connect.

    • Mitigation: Do not use the Remoting-based CLI on new instances (see this blog post)

Upgrade to Java 8

Starting with version 2.54, Jenkins requires Java 8 to run (announcement blog post). This version is also required for Jenkins LTS 2.60.1.

Remoting continued to support Java 7 for a while for backporting purposes, but it, too, will be upgraded to Java 8 in the Remoting 3.11 release. This Remoting version is expected to be available in Jenkins 2.75 (ETA: Aug 20, 2017). This change is tracked as JENKINS-43985.

Compatibility notes

The update does not cause compatibility issues in common use-cases. However, there may be issues in custom Jenkins core builds and packaging. There are several examples below.

  • Jenkins instances with built-in Remoting versions will NOT be affected; Java 8 is already required there

  • Users of community-provided Docker packages (docker-slave, docker-jnlp-slave) will NOT be affected; Java 8 is already required there

  • Custom Jenkins WAR file builds targeting Java 7 may fail to build/run if they bundle Remoting 3.11 or later

  • Custom Jenkins agent instances (manually installed hosts, VM snapshots, Docker packages, AMIs, etc.) may fail if they download the latest Remoting version and use Java 7

Java 9 support

As with the Jenkins core, Java 9 is neither supported nor tested in Remoting. It may work in some configurations, but it is not guaranteed.

As a consequence, it is not recommended to run Remoting with Java 9 right now. It is also not recommended to use the Maven Integration Plugin to run builds on Java 9.

What’s next?

There are some ongoing activities in the Remoting sub-project:

  1. Stability and Diagnosability improvements (JENKINS-38833)

    • Why? When it comes to Remoting issues, it is really hard to diagnose them

    • Recently I have published some slides about preventing and diagnosing issues, but I want the behavior to be more stable by default

    • This Epic lists my plans about Remoting issues and papercuts I would like to fix this year

  2. Remoting Work Directories (JENKINS-44108)

    • For a long time logging was disabled by default in Java Web Start (JNLP) and SSH agents, because Remoting had no option to determine where to store such data before connecting to the master

    • The new Remoting Work Directory feature (since Remoting 3.8) offers such storage, which is also used for storing JAR caches and for checking workspace writeability before accepting builds.

    • This Epic is about enabling Remoting work directories by default in common Agent launcher types.

  3. Remoting Upgradeability (JENKINS-44099)

    • Right now Remoting is not upgraded automatically on JNLP agents; automatic upgrade is supported only for Windows service agents, starting from Jenkins 2.50

    • On the Jenkins master side, it is necessary to upgrade the Jenkins core in order to pick up Remoting fixes.

    • This Epic aims to simplify the upgrade procedure for the most common cases.

If you are interested in contributing to these tasks, or others in the Remoting sub-project, please feel free to reach out via the issue tracker or the #jenkins IRC channel.

If you are coming to Jenkins World, you can also find me at the "Ask the Experts" booth there. See more info about Ask the Experts here.


Running load tests in Jenkins Pipeline with Taurus


This is a guest post by Guy Salton, Sr. Professional Services Engineer for CA BlazeMeter.

Jenkins Pipeline is an important feature for creating and managing projects in Jenkins, as opposed to the traditional way of creating them through the Jenkins GUI. When running your open-source load tests, Jenkins Pipeline enables resilience, execution control, advanced logic, and version control management. This blog post will explain how to run any open-source load test with Jenkins Pipeline, through Taurus.

Taurus is an open source test automation framework that enables running and analyzing tests from 9 open source load and functional testing tools: JMeter, Selenium, Gatling, The Grinder, Locust, Tsung, Siege, Apache Bench, and PBench. Test results can be analyzed in Taurus. For advanced analyses or running tests in the cloud, Taurus integrates with BlazeMeter.

Guy will be presenting more on this topic at Jenkins World in August, register with the code JWFOSS for a 30% discount off your pass.

Getting started with Taurus

  1. Install Taurus.

  2. Create the following Taurus configuration in YAML. Learn more about YAML in Taurus from this tutorial.

    execution:
    - concurrency: 100
      hold-for: 10m
      ramp-up: 120s
      scenario: Thread Group

    scenarios:
      Thread Group:
        requests:
        - label: blazedemo
          method: GET
          url: http://blazedemo.com/

    This script runs 100 concurrent users with a ramp-up of 120 seconds, holds the load for 10 minutes, and the thread group runs one GET request to blazedemo.com.

    You can specify an executor by adding executor: <executor_name> to the script. Otherwise, the default executor will be JMeter. In the background, Taurus will create an artifact directory with a jmx file (or a Scala file if you run Gatling, a Python file if you are running Selenium, etc.).

  3. Open a terminal and run: bzt <file_name>.yml

  4. View the test results:

Viewing test results from Taurus

If you want to conduct an in-depth analysis of your test results, run your tests on BlazeMeter. You will be able to monitor KPIs through advanced and colorful reports, evaluate system health over time, and run your tests from multiple geo-locations.

Run the following command from the terminal:

bzt <file_name>.yml -report
Viewing test results in Blazemeter

Integrate Taurus With Pipeline

To run Taurus through Pipeline, you can also go straight to Jenkins after creating your Taurus script.

  1. Open Jenkins → New Item → Fill in an item name → Click on ‘Pipeline’

    blazemeter speaker blog 2017 3
    blazemeter speaker blog 2017 4
  2. Now create a Pipeline script. You can include all parts of your CI/CD process in this script: Commit, Build, Unit Test, Performance Test, etc., by creating different stages.

    This Pipeline has three stages. The first is called “Build”. In this example it is empty, but you can add commands that will build your code. The second, called “Performance Tests”, creates a folder called “Taurus-Repo” and runs the Taurus script that we created. At the same time (note the “parallel” command), there is a “sleep” command for 60 seconds. Obviously it makes no sense to put those two commands together; this is just to show you the option of running two commands in parallel. The third stage, called “Deploy”, is also empty in this example. This is where you could deploy your new version.

    node {
        stage('Build') {
            // Run the Taurus build
        }
        stage('Performance Tests') {
            parallel(
                BlazeMeterTest: {
                    dir('Taurus-Repo') {
                        sh 'bzt <file_name>.yml -report'
                    }
                },
                Analysis: {
                    sleep 60
                })
        }

        stage('Deploy') {
        }
    }
    blazemeter speaker blog 2017 5

    Note that you can either add the Pipeline inline, or choose the “Pipeline script from SCM” option and add the URL to the script on GitHub (in this case you need to upload a Jenkinsfile to GitHub). With "Pipeline from SCM", whenever you need to update the tests, you can just add new commits to the Jenkinsfile.

  3. Save the Pipeline

  4. Click on ‘Build Now’ to run the Pipeline

    blazemeter speaker blog 2017 6
  5. Click on the new Build that is running now (build #6 in this example).

    blazemeter speaker blog 2017 7
  6. Click on ‘Console Output’ to see the test results:

    blazemeter speaker blog 2017 8
  7. In the Console Output you can see the test results and also the link to the report in BlazeMeter.

    blazemeter speaker blog 2017 9
    blazemeter speaker blog 2017 10

That’s it! Jenkins Pipeline is now running open-source load testing tools via Taurus.

Come to my free hands-on workshop “Learn to Release Faster by Load Testing With Jenkins” at Jenkins World 2017 on Tuesday August 29th from 1-5pm. You will learn how to test continuously with Jenkins, JMeter, BlazeMeter and Taurus, including how to run JMeter with Jenkins, run the BlazeMeter plugin for Jenkins and how to use open-source Taurus.

To learn more about BlazeMeter, click here.

Declarative Pipeline at Jenkins World


This is a guest post by Andrew Bayer, who is one of the authors of the Declarative Pipeline plugin, and is a software engineer on the Pipeline team at CloudBees, Inc.

A year ago at Jenkins World 2016, we unveiled Declarative Pipeline, a structured way to define your Pipeline. It’s been a great year for Declarative and Pipeline in general, with the release of Declarative Pipeline 1.0 in February, multiple releases since then, the introduction of documentation on Pipeline at jenkins.io, with a focus on Declarative, and more. Given everything that’s happened over the last year, we thought it’d be good to let you all know what you can expect to see and hear about Declarative Pipeline at this year’s Jenkins World.

First, on Thursday, August 31, I’ll be giving a talk on Declarative Pipeline with Robert Sandell, one of my coworkers here at CloudBees and another author of Declarative Pipeline. We’ll be covering what’s happened with Declarative over the last year; new features added since the 1.0 release, such as the libraries directive and more when conditions; what’s planned for the upcoming 1.2 release (due shortly after Jenkins World!), including parallel `stage`s; and what’s on the roadmap for the future. In addition, we’ll be demoing some of the features in 1.2, and providing some pointers on best practices for writing your Declarative Pipeline.
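To give a flavor of those features, here is a minimal Declarative sketch combining a when condition with the parallel stage syntax planned for 1.2 (the stage names and shell commands are purely illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Tests') {
            // 1.2 syntax: nested stages under `parallel` run concurrently
            parallel {
                stage('Unit') {
                    steps { sh 'make unit-test' }
                }
                stage('Integration') {
                    steps { sh 'make integration-test' }
                }
            }
        }
        stage('Deploy') {
            // `when` directive: only run this stage on the master branch
            when { branch 'master' }
            steps { sh 'make deploy' }
        }
    }
}
```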

Also on Thursday, Stephen Donner from Mozilla will be giving a demo showing Mozilla’s usage of Declarative Pipeline and shared libraries at the Community Booth - Mozilla has been doing great work with Declarative, and I’m excited to see their usage in more detail and hear Stephen talk about their experience!

In addition, Robert, Stephen, and I will all be at Jenkins World for both days of the main sessions, and Robert and I will also be at the Contributor Summit on Tuesday. We’d love to hear your thoughts on Declarative and will be happy to answer any questions that we can. Looking forward to seeing you all!

Andrew Bayer and Robert Sandell will be talking about the latest on Declarative Pipeline in Jenkins at Jenkins World in August, register with the code JWFOSS for a 30% discount off your pass.

Demos at Jenkins World 2017


This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc.

Jenkins World 2017 is a week away. Like last year, we are bringing back the lunch-time demos in the Jenkins project’s booth. These are quick, 15-minute how-to demos by Jenkins experts. The demos will not be live streamed or recorded, so come early to get a front-row seat - we are expecting a large crowd!

Wednesday, August 30th

Time | Session | Details | Presenter

12:15 - 12:30

Delivery Pipelines with Jenkins

How to set up holistic Delivery Pipelines with the DevOps enabler tool Jenkins.

Michael Hutterman

12:30 - 12:45

Developing Pipeline Libraries Locally

If you have ever tried developing Pipeline libraries, you may have noticed how long it takes to deploy a new version to the server only to discover yet another syntax error. I will show how to edit and test Pipeline libraries locally before committing to the repository (with Configuration-as-Code and Docker).

Oleg Nenashev

12:45 - 13:00

Securing a Jenkins Instance

A set of minimum steps every Jenkins admin should follow so their public-facing Jenkins instance doesn’t turn into a Bitcoin mine.

Claudiu Guiman

13:00 - 13:15

Git Tips and Tricks

Latest capabilities in the git plugin, like large file support, reference repositories and some reminders of existing tips that can reduce server load, decrease job time, and decrease disc use.

Mark Waite

13:15 - 13:30

Delivery Pipelines with Jenkins 2

How to promote Java EE and Docker binaries toward production.

Michael Hutterman

13:30 - 13:45

Delivery Pipelines, with Jenkins 2, SonarQube, and Artifactory

The nuts and bolts of setting up a scalable, high-end delivery pipeline.

Michael Hutterman

13:45 - 14:00

Visual Pipeline Creation in Blue Ocean

We will show how to use Blue Ocean to build a real-world continuous delivery pipeline using the visual pipeline editor. We will coordinate multiple components of a web application across test and production environments, simulating a modern development and deployment workflow.

Keith Zantow

Thursday, August 31st

Time | Session | Details | Presenter

12:30 - 12:45

Docker Based Build Executor Agents

How using Docker based build agents can simplify your Jenkins management duties.

Eric Smalling

12:45 - 13:00

Pimp my Blue Ocean

Using storybook.js.org for the Blue Ocean frontend to speed up the delivery process - validating the UX with the PM and designer, and showing how quickly you can develop your components.

Thorsten Scherler

13:00 - 13:15

Deliver Blue Ocean Components at the Speed of Light

How to customize Blue Ocean: creating a custom plugin and extending Blue Ocean with a custom theme and custom components.

Thorsten Scherler

13:15 - 13:30

Mozilla’s Declarative + Shared Libraries Setup

How Mozilla is using Declarative Pipelines and shared libraries together.

Stephen Donner

Join the Jenkins project at Jenkins World on August 30-31, register with the code JWFOSS for a 30% discount off your pass.

Jenkins Needs You - Pull Request Corner at Jenkins World 2017


This is a guest post by Mark Waite, who maintains the git plugin, the git client plugin, and is a technical evangelist for CloudBees, Inc.

The Jenkins project booth at Jenkins World 2017 will include the "Pull Requests Corner", recruiting new Jenkins contributors. We think there are many people who will attend the conference without realizing how easy it is to help the Jenkins project, and how much the help is appreciated.

Meet us in the "Pull Requests Corner" and we’ll help you find a way to help Jenkins. Here are some areas where we can use your help. Most of them do not require coding, and do not require a large time commitment.

One Minute Feedback on Your Version

The Jenkins changelog pages (LTS and weekly) gather user experiences with specific Jenkins versions. You can help other Jenkins users by clicking one of the weather icons in the LTS changelog (or the weekly changelog) for the release you’re using. Changelog feedback from weekly releases helps the release team select the long term support version. Changelog feedback from LTS releases helps other users prepare to upgrade.

It takes less than a minute, and helps the community (which will ultimately help you).

Five Minutes to Answer a Question

Jenkins needs YOU! In five minutes or less, you can help other Jenkins users.

For example:

Ten Minutes to Learn and Share

If you have ten minutes, you can learn something new and share what you learned.

Fifteen Minutes for Pipeline

Liam Newman has created the "Jenkins Minute" video series. They are brief video segments focusing on specific Jenkins functionality. Choose a video, watch it, and share what you learned on social media.

Twenty Minutes for a Bug

The Jenkins bug tracker contains thousands of bugs. Reviewing, duplicating, and clarifying bug reports takes time. When maintainers are reviewing, duplicating, and clarifying bug reports, they are not fixing bugs, and they are not adding new capabilities.

You can help maintainers by reviewing and duplicating a bug report that matters to you. A comment on a bug report is especially helpful when it confirms you’ve been able to duplicate the bug. It is even more helpful if your verification includes the steps you took and how they differ from the original report.

A bug report which has been duplicated, and includes clear instructions, is much more likely to receive maintainer attention. Help yourself and others by duplicating bugs that matter to you.

Thirty Minutes for Documentation

The Jenkins documentation includes user documentation (guided tour and handbook) and developer documentation (tutorial, how-to guides, and reference). You can help the documentation by describing something important to you clearly and completely.

Refer to the instructions for documentation contributors to see how easy it is to help.

Forty Five Minutes for Translation

If English is not your native language, you can help with Jenkins localization. Jenkins is used worldwide, and many users will benefit from translations. Considering the rapid and continuing evolution of Jenkins, it is no surprise that there is plenty to translate. Refer to the internationalization guide for instructions to help you contribute translations.

Sixty Minutes for a Meetup

Local groups around the world meet often for Jenkins presentations, discussions, and demonstrations. Organizing a Jenkins Area Meetup will introduce you to other users, and will let you explore new ways to benefit from Jenkins. The team at jenkinsci-jam@googlegroups.com is ready to support your JAM with stickers, t-shirts, and more.

Week or More - Adopt a Plugin

The Jenkins plugin ecosystem covers a wide range of areas. Jenkins plugin maintainers come from many different backgrounds, with many different interests. Often, a plugin maintainer may find that they want to do something different on the project, or they may leave the project. When a plugin maintainer is no longer able to maintain a plugin, they can place it for adoption.

Plugins placed for adoption range from very specific use cases (node stalker plugin) to very general use cases (Subversion plugin).

Maintaining an orphan plugin is a great way to contribute to the project. Follow the instructions to "Adopt a Plugin".

See You There!

All those techniques (and more) are available on the Jenkins participate page.

Look for the "Jenkins Needs You" poster at Jenkins World, and come talk to us about the ways you can learn new things, address your concerns, and help Jenkins.

Join the Jenkins project at Jenkins World on August 30-31, register with the code JWFOSS for a 30% discount off your pass.

Take the 2017 Jenkins Survey!

This is a guest post by Brian Dawson on behalf of CloudBees, where he works as a DevOps Evangelist responsible for developing and sharing continuous delivery and DevOps best practices. He also serves as the CloudBees Product Marketing Manager for Jenkins.

Once again it’s that time of year when CloudBees sponsors the Jenkins Community Survey to assist the community with gathering objective insights into how Jenkins is being used and what users would like to see in the Jenkins project.

Your personal information (name, email address and company) will NOT be used by CloudBees for sales or marketing.

As an added incentive to take the survey, CloudBees will enter participants into a drawing for a free pass to Jenkins World 2018 (1st prize) and a $100 Amazon Gift Card (2nd prize). The survey will close at the end of September, so click the link at the end of the blog post to get started!

All participants will be able to access reports summarizing survey results. If you’re curious about what insights your input will provide, see the results of last year’s 2016 survey:

Your feedback helps capture a bigger picture of community trends and needs. There are laws that govern prize giveaways and eligibility; CloudBees has compiled all those fancy terms and conditions here.

Please take the survey and let your voice be heard - it will take less than 10 minutes.
