
WebSocket


I am happy to report that JEP-222 has landed in Jenkins weeklies, starting in 2.217. This improvement brings experimental WebSocket support to Jenkins, available when connecting inbound agents or when running the CLI. The WebSocket protocol allows bidirectional, streaming communication over an HTTP(S) port.

While many users of Jenkins could benefit, implementing this system was particularly important for CloudBees because of how CloudBees Core on modern cloud platforms (i.e., running on Kubernetes) configures networking. When an administrator wishes to connect an inbound (formerly known as “JNLP”) external agent to a Jenkins master, such as a Windows virtual machine running outside the cluster and using the agent service wrapper, until now the only option was to use a special TCP port. This port needed to be opened to external traffic using low-level network configuration. For example, users of the nginx ingress controller would need to proxy a separate external port for each Jenkins service in the cluster. The instructions to do this are complex and hard to troubleshoot.

Using WebSocket, inbound agents can now be connected much more simply when a reverse proxy is present: if the HTTP(S) port is already serving traffic, most proxies will allow WebSocket connections with no additional configuration. The WebSocket mode can be enabled in agent configuration, and support for pod-based agents in the Kubernetes plugin is coming soon. You will need an agent version 4.0 or later, which is bundled with Jenkins in the usual way (Docker images with this version are coming soon).
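
As a rough illustration of what that looks like in practice, the command below launches an inbound agent once WebSocket has been ticked in the node configuration; the host name, node name, secret and work directory are placeholders, and the exact command for your agent is shown on its status page in Jenkins:

# Download the agent JAR from the master and connect; with WebSocket enabled for the
# node, the connection goes over the regular HTTP(S) port, so no extra TCP port is needed.
curl -sO https://jenkins.example.com/jnlpJars/agent.jar
java -jar agent.jar \
  -jnlpUrl https://jenkins.example.com/computer/windows-vm/slave-agent.jnlp \
  -secret <your-agent-secret> \
  -workDir /var/jenkins-agent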

Another part of Jenkins that was troublesome for reverse proxy users was the CLI. Besides the SSH protocol on port 22, which again was a hassle to open from the outside, the CLI already had the ability to use HTTP(S) transport. Unfortunately the trick used to implement that confused some proxies and was not very portable. Jenkins 2.217 offers a new -webSocket CLI mode which should avoid these issues; again you will need to download a new version of jenkins-cli.jar to use this mode.
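
For example (the URL and credentials below are placeholders), downloading the CLI jar from your own instance and running a command over WebSocket looks roughly like this:

# Fetch the CLI jar served by Jenkins itself, then talk to it over WebSocket
curl -sO https://jenkins.example.com/jnlpJars/jenkins-cli.jar
java -jar jenkins-cli.jar -s https://jenkins.example.com/ -webSocket -auth admin:api-token who-am-i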

The WebSocket code has been tested against a sample of Kubernetes implementations (including OpenShift), but it is likely that some bugs and limitations remain, and scalability of agents under heavy build loads has not yet been tested. Treat this feature as beta quality for now and let us know how it works!


Trip to DevOps World | Jenkins World


I had the privilege of being invited to DevOps World | Jenkins World 2019 to present the work I did during Google Summer of Code 2019. What follows is a day-by-day summary of an amazing trip to the conference.

Day 0: December 1, 2019

Travelling to Lisbon

I am an undergraduate student from New Delhi, India, and had traveled to Lisbon to attend the conference. I had an early morning flight from Delhi to Lisbon via Istanbul. At the airport, I met Parichay, who had been waiting there since his connecting flight. After flying 8000 km, we reached Lisbon. We took a taxi to the hotel and were greeted there by one of my Google Summer of Code mentors, Oleg. After four months of working with him on my GSoC project, meeting him in person was an amazing experience. Later that day, after stretching our legs in the hotel, we met Long, who came to the hotel after exploring much of Lisbon, for an early dinner.

Day 1: Hackfest, December 2

A photo from the HackFest

The next morning, we all met for breakfast, where we got to taste some Pastel de Nata. We then took a cab to the Congress Centre to attend the Jenkins and Jenkins X Hackfest. At the Hackfest, I met Mark, Joseph, Kasper, Andrey and other Jenkins contributors. I also met Oleg, this time together with his son and his wife. After our introductions and a short presentation by Oleg, I started hacking on the Folder Auth plugin and made it possible to delete user SIDs from roles. The best part of hacking there was getting instant feedback on what I was working on. More and more people kept coming throughout the day, and it was great to see so many people working hard to improve Jenkins. At the end, everyone presented what they had achieved that day. Having skipped lunch for some snacks, Oleg and others tried hard to get some pizza delivered, without much success. After the Hackfest, everyone was hungry, and most attendees, including me, went looking for nearby restaurants. Since it was early and most restaurants were not open yet, we all decided to have burgers. It was a great learning experience listening to and talking about Jenkins, Elasticsearch, Jira, GitHub and a lot of other things. After that, we took a taxi back to the hotel and I went to bed.

Sunset outside the conference center

Day 2: Contributor Summit, December 3

We had the Jenkins and Jenkins X contributor summit the next day. Parichay and I took the bus to the Congress Centre in the morning. After registration, I got my ‘Speaker’ badge and the conference T-shirt. The contributor summit took place in the same hall as the Hackfest, but the seating arrangement was completely different and there were a lot more people. The summit started with everyone introducing themselves; it turned out that there were a lot of people from Munich. There were presentations and talks about all things Jenkins, Jenkins X and the Continuous Delivery Foundation by Kohsuke, Oleg, Joseph, Liam, Olivier, Wadek and others. I had no experience with Jenkins X, which made the summit very interesting. After lunch, the talks were over and everyone was free to join any session discussing various things about Jenkins. I attended the Cloud Native Jenkins and the Configuration-as-Code sessions.

The Contributor Summit
My badge

While some of the conference attendees were in the Contributor Summit, others were going through certifications and trainings. At around 5 o’clock in the evening, the summit and the trainings wrapped up and the expo hall was thrown open. At the entrance, there was a large stack of big DWJW bags. I did not realize at first why those bags were there, but since everyone was taking one, I took one as well. As soon as I went into the hall, I realized that the bags were for collecting swag. I had never seen anything like this, where sponsors were just giving away T-shirts, stickers and other stuff. There were snacks, and Kohsuke was cutting the extremely tasty 15-years-of-Jenkins cake. After having the cake, I went on a swag-collecting spree, going from one sponsor booth to the next. This was an amazing experience: not only was I able to get cool stuff, I was also able to learn a lot about the software these companies made and how it fits into the DevOps pipeline.

Photo with Oleg

After the conference ended, Long, Parichay and I went to the Lisbon Marriott Hotel for the Eurodog party. After collecting another T-shirt, I went to the nearest restaurant (McDonald’s) with Andrey, whom I had met earlier at the Hackfest.

Kohsuke cutting cake
Photo with Kohsuke
Photo with Oleg and Joseph

Day 3: December 4

This was the first official day of the conference, and it began with the keynote. There were over 900 people in the keynote hall; it was amazing to see so many people attending the conference. After the keynote ended, I went to several sessions throughout the day, learning how companies are using Jenkins and implementing DevOps tools.

15 years of Jenkins

In the evening, we had the Sonatype Superparty, which was a lot of fun. There were neon lights, arcade machines, VR experiences, superheroes and more swag. There was a lot of good food, including pizzas, burgers and hot dogs, and the superhero-inspired desserts were very interesting. I was able to talk to Oleg and Wadek about the security challenges in Jenkins. During the party, I also got a chance to meet the CEO of CloudBees, Sacha Labourey.

Batman vs Superman
A photo with Bumblebee

Day 4, December 5

This was the last day of the conference, and it began with another keynote. After the keynote, I attended a very interesting talk on how the European Observatory built software for large telescopes using Jenkins. After that, I prepared for my talk on the work I did during Google Summer of Code 2019. I gave my presentation in the community booth during lunchtime. Presenting in front of real people was an amazing experience and very different from the ones we had over Zoom for our GSoC evaluations. In the evening, I got another chance to present my project at the Jenkins Community Lightning Talks.

Me speaking at the community booth
Me speaking at the community lightning talks

After that, the conference came to an end and I went back to the hotel. After relaxing for some time, Parichay, Long and I were invited by Oleg to a dinner at the Corinthia Hotel with Kohsuke, Mark and his wife, Tracy, Alyssa, and Liam. Unfortunately, Long couldn’t attend the dinner because his flight back left earlier that evening. After the wonderful dinner, I thanked everyone for such an amazing trip and said goodbye.

GSoC Group Photo

DWJW was the best experience I’ve ever had. I was able to learn about a lot of new things and talk to some amazing people. In the end, I would especially like to thank Oleg for helping me throughout and making it possible for me to attend such a wonderful conference. I would like to thank my other mentors, Runze Xia and Supun, for their support in my Google Summer of Code project. I would also like to thank Google for organizing Google Summer of Code, everyone in the Jenkins project for sponsoring my travel, and CloudBees for inviting me to the conference.

Looking forward to seeing you all again soon!

CI. CD. Continuous Fun.

T-Mobile and Jenkins Case Study


Saving Thousands of Hours and Millions of Dollars at T-Mobile with Jenkins

Most people know T-Mobile as a wireless service provider. After all, we have an international presence and we’re the third largest mobile carrier in the United States. But we’re also a technology company with new products that include our TVision Home television service, our T-Mobile Money personal banking offering, and our SyncUp Drive vehicle monitoring and roadside assistance device.

T-Mobile and Jenkins - a case study

Behind the scenes, T-Mobile is also a leader in the open source community. We have shared 35+ code repositories on GitHub — including our POET pipeline framework automation library — to help other organizations support their internal and external customers by adopting robust and intelligent practices that speed up the CI/CD cycle.

I’m a senior systems reliability engineer in T-Mobile’s system reliability engineering (SRE) unit. Our team successfully rolled out phase 1 of the POET implementation to 30+ teams. This was a huge success, and the plan is to scale it up to our 350 developer teams and 5,000 active users with a stable, reliable CI/CD pipeline using a combination of Jenkins and CloudBees Core running on a Kubernetes cluster.

Fewer Plugins, More Masters

We started by building a streamlined container-based pipeline infrastructure that is centrally managed and easily adaptable to development methodologies. The result frees our developer teams to focus on developing and testing applications rather than on maintaining the Jenkins environment.

We then reduced the number of Jenkins plugins we use in our master from 200 to four. There are over 1,000 such add-ons, including build tools, testing utilities and cloud-integration resources. They are an excellent way to extend the platform, but they are also the Achilles' heel of Jenkins because they can cause conflicts.

Next, we moved from a single master powering all our Jenkins slaves to multiple masters, and now have 30 pipeline engines powering roughly 10 teams each. This setup has reduced CPU loads and other bottlenecks while allowing T-Mobile’s DevOps teams to continue enjoying the benefits of horizontal scaling.

Spinning-Up Jenkins Pipelines in Two Minutes

As a result of this work, my SRE team can now spin up a Jenkins master from a Docker image in roughly two minutes, test it and roll it out to our production environment. Individual teams can then customize their CI/CD pipelines to meet the needs of specific projects. We allow these teams to extend the platform, but we have restricted the list of add-ons to 16 core plugins. These plugins are preconfigured in a Docker container, and every team starts with an identical CI/CD pipeline, which they can then set up to their liking at the folder level.
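
To give a sense of how lightweight that starting point is, here is a generic sketch using the public jenkins/jenkins image (purely illustrative; our production masters are built from an internal image with the 16 core plugins preconfigured):

# Spin up a vanilla Jenkins master from the official image; a custom image would
# bake in the preconfigured plugins and folder-level defaults described above.
docker run --rm -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts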

This streamlined and centralized approach to deploying our pipeline allows the SRE team to put everything in motion and then get out of the way. But that’s only half the story. The real magic happens when our developer teams take ownership of the simplified CI/CD pipelines. They no longer have to worry about the underlying Jenkins technology and can shift their attention to on-boarding their solutions.

The POET Pipeline minimizes the need for Jenkins Groovy code, which is cumbersome, error-prone and difficult to incorporate into third-party libraries. Instead, everything starts with pipeline definition files located within the pipeline source code and step containers are created to perform builds, deployments and other pipeline functions.

We include 40 generic containers in the POET Pipeline, so our developers don’t have to start from scratch. Of course, they have to know how to create Docker containers and how to write a YAML file to extend the pipeline functionalities. By simplifying the infrastructure, keeping plugins to a minimum and eliminating the need for Groovy, we’ve given our developers the freedom to define their own pipelines without having to depend on a centralized management team.
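
As a purely hypothetical sketch of what such a container-based pipeline definition can look like (the keys and image names below are invented for illustration; the actual syntax and the generic step containers are documented in the POET Pipeline repository on GitHub):

# Illustrative only - not the actual POET syntax
pipeline:
  - name: build
    image: maven:3-jdk-11        # step container that performs the build
    commands:
      - mvn -B verify
  - name: deploy
    image: deploy-step:latest    # one of the generic, reusable step containers
    environment: dev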

Our Developers Are No Longer Infrastructure Engineers

To further empower our developers, we’ve authored comprehensive POET Pipeline documentation, including easy-to-understand help files, tutorials and videos for self-guided learning and self-service support. This valuable resource also frees up our pipeline management team and our developers to concentrate on innovation.

This documentation is part of the "customer-focused" approach we’ve adopted. We treat our internal development teams as our customers, and the POET Pipeline is our product. Can you imagine T-Mobile asking subscribers to rebuild their smartphones every time they make a call? Or making them talk to a CSR before sending a text message? Then why should we ask our developers to serve double duty as infrastructure engineers?

Reducing Downtime

On top of keeping developers happy and simplifying management tasks, our streamlined POET Pipeline framework has dramatically reduced downtime. Our plugin-heavy, single-master Jenkins environment hogged CPU cycles, caused all kinds of configuration headaches and was constantly going down.

In any given week, we had to restart Jenkins two or three times. Sometimes, our builds put such a strain on our environment that we had to restart it overnight and reset everything when our teams weren’t working. With the POET Pipeline, we’ve reduced downtime to a single such incident per year.

Scaling Our Successes

By eliminating the need for a pipeline specialist on every development team, we have also realized substantial labor and cost savings as a result of our work with Jenkins and CloudBees Core. When you consider a typical work year of 2,000 hours and multiply that by 350 teams, you’re looking at hundreds of thousands of hours and tens of millions of dollars. We can now redirect these resources into building revenue-generating products that better serve T-Mobile’s external customers.

These numbers are huge, but don’t let them fool you into thinking that the POET Pipeline is not for you. We may have hundreds of teams and thousands of developers, but Jenkins is scalable, and any size organization can use the tools we’ve developed. That’s why we’ve chosen to share our pipeline with the open source community on GitHub.

Innovating with the World

Innovation does not occur in a vacuum. By putting our code out there for others to use and modify, we are helping developers around the world shift their focus from managing pipelines to building better applications. In turn, we benefit by applying the wisdom of the wider community to our internal projects.

Everyone wins, but the real winners are T-Mobile’s customers. They can look forward to new and improved offerings because we’re spending less time managing our pipeline framework and more time delivering the products and services that simplify and enhance their lives.

My DevOps World | Jenkins World Lisbon Experience


After an amazing three-month development period with the Jenkins project in the summer of 2019, I came out a better developer, loved open source, had met passionate people and had fun at work. Jenkins is not just a community, it is a family. When the GSoC period was over, we received swag from Jenkins. Natasha Stopa (one of the students in GSoC 2019) was invited to attend DevOps World | Jenkins World San Francisco, and it was nice to see her enjoy it there. But guess what? Jenkins also invited three other students (Abhyudaya, Long and me) to DevOps World | Jenkins World Lisbon. I was super psyched when Marky Jackson (one of my project mentors) broke the news to me.

The trip to Lisbon required sorting out a few things: flight tickets, hotel booking, passport, visa, etc. Oleg Nenashev scheduled meetings to discuss everything and help us arrange our travel. Thanks to him. :)

From India to Lisbon (Dec 1)

Abhyudaya and I boarded our flight from Indira Gandhi Airport (New Delhi) to Lisbon on the morning of December 1, 2019, at 0500 hours (local time). It was a fine trip with an hour’s layover at Istanbul Ataturk Airport, and we arrived in Lisbon at 1500 hours (local time). The weather in Lisbon was terrific; a mildly cold but strong sea breeze was the start of me falling in love with the place. We arrived at our hotel (Novotel Lisboa) in an Uber. Oleg met us in the lobby to help us with check-in. It was great to finally meet him in person after months of knowing and working with each other. We had a good chat about the event, what to expect and sightseeing spots. After a short time to freshen up, Long, who had traveled from Berlin a day earlier, met us at the restaurant. We had a brief chat getting to know each other, had our food and went to bed early, as the Hackfest was the next day and we had to reach the Centro de Congressos de Lisboa (CCL), where the event was organised, by 0900 hours.

Day 0 (Dec 2)

I woke up early for a short jog in the streets. Lisbon is a city built on hills, and the streets have beautiful mosaic-styled pavements, so it was nice to look around the city. Then Abhyudaya and I went for breakfast and reached CCL in an Uber at 0815 hours.

A picture with Oleg Nenashev

There was a round-table seating arrangement in an auditorium. It was like a meet-and-greet event to interact with other developers (some known and some new). Everybody had to figure out their problem statements and work on them. There were milk, juice and sandwiches, which gave us energy throughout the day. I took a small break to step out of the building and cross the road to the banks of the Tagus River. From there you get a very close view of the underside of the Ponte 25 de Abril (which looks strikingly similar to the Golden Gate Bridge). You can also see the Sanctuary of Christ the King on the other side of the river (which in turn looks similar to Christ the Redeemer in Rio, Brazil). It was great to kick off the event with the Hackfest. At the end of the Hackfest, some of us presented our work. Later, we went to a nearby restaurant to have burgers, which was apparently the best burger I had ever had (maybe because I hadn’t tried too many burgers before :P). We talked and interacted with people from other parts of the world for about an hour and a half, then went back to our hotel rooms.

Outside CCL

Day 1 (Dec 3)

The conference officially began on this day. Abhyudaya and I had breakfast and took the shuttle to CCL, where we collected our T-shirts and IDs. The event management team had made an app for DevOps World | Jenkins World (DWJW) Lisbon with all the schedules and other information, which was incredibly convenient for all attendees. There were multiple sessions and events on different topics related to Jenkins or DevOps in general; I attended the Jenkins and Jenkins X contributor summit. I had a nice lunch and went out to explore Lisbon. I went to the Padrao dos Descobrimentos and the beautiful Belem Palace, had some Pasteis de Belem (a popular Portuguese dessert), and took a tram to Praça do Comércio, Lisbon’s most important square. There you will find lots of tourists, street bands, seafood restaurants, shops for every budget, the famous pink street and so much more. Later that evening we had a party hosted by EURODOG (European DevOps Group) at the Lisboa Marriott. It was a nice party for networking with developers over casual wine and beer. We later headed out to a nearby Indian restaurant for kebab and rice.

Intro to Jenkins X session

Day 2 (Dec 4)

The second day began with the opening keynote. Later I went to the Jenkins X Introduction, Deploying K8s with Jenkins on GCP, and Build top mobile games by King sessions, in that order, while occasionally hitting the sponsor booths to have a chat and collect some swag. In the evening there was the superhero-themed party, sponsored by Sonatype. It was probably the most fun event of the entire conference. The expo hall had an entirely different look with the party lights on, people wearing capes and fun events going on all around. There were artists dressed in Bumblebee, Batman, Superman, Supergirl, Thor and other superhero costumes! I had previously been told about the interesting parties at Jenkins World, but experiencing one was very different. People from all over the world had come together to celebrate 15 years of success of an open source project. After partying from 5 to 7 we went back to the hotel. I spent some time preparing the slides for the next day’s presentation and went to bed.

A Bumblebee at Super Hero Party

Day 3 (Dec 5)

6K Jenkins World Fun Run

The third and final day of the conference began with the Jenkins World Fun Run. I missed the keynote because I was running late and still had setup to do for my presentation. My laptop was broken, so I had to set up the whole demo on a friend’s laptop; the situation felt like being a Jenkins admin under fire for a production bug. After being under pressure for a while, I took a break to admire the developer comics and had a chat with the graffiti painter. During lunch it was time for the GSoC presentations at the Jenkins community booth. All our presentations went well and we also interacted with real users. Then we had the GSoC team picture at the Jenkins community booth. Later, Abhyudaya and I gave our presentations at the lightning talks as well, at Mark Waite’s request. The event concluded with emotional goodbyes.

Developer Comics
Graffiti Wall

All the GSoC students were invited for dinner at the Corinthia Lisboa’s Soul Garden restaurant. The party comprised Oleg, Mark and his lovely wife, Liam, Tracy, Alyssa and Olivier. We had a very nice conversation and I had a delicious Bacalhau (cod fish) dish. Then I bid a final goodbye to everybody.

Presenting my GSoC project work
GSoC Team Picture

It was a wonderful experience in a wonderful country among wonderful people. Hats off to the management team led by Alyssa Tong and co.: an event this big was carried out without any hiccups! Everybody contributed their part to the event, which made it very interactive and fun. Check out some of my swag:

Swag from Jenkins World Lisbon

A big shout-out to the Jenkins project and CloudBees for sponsoring this trip. Also, thank you to Jenkins and Google Summer of Code for the support. :)

Validating JCasC configuration files using Visual Studio Code


Configuration-as-code plugin

Problem statement: convert the existing schema-validation workflow in the Jenkins Configuration as Code Plugin from the current scripting language to a Java-based rewrite, thereby enhancing its readability and testability, supported by a testing framework. Additionally, enhance the developer experience with a VS Code plugin that provides autocompletion and validation, helping developers write correct YAML files before applying them to a Jenkins instance.

The Configuration as Code plugin has been designed as an opinionated way to configure Jenkins based on human-readable declarative configuration files. Writing such a file should be feasible without being a Jenkins expert, just translating into code a configuration process one is used to executing in the web UI. The plugin uses a schema to verify the files being applied to the Jenkins instance.

With the new JSON schema enabled, developers can now test their YAML files against it. The schema checks the descriptors (i.e., the configuration that can be applied to a plugin or Jenkins core), verifies that the correct types are used, and provides help text in some cases. VS Code lets us test the schema right out of the box with some modifications. This project was built as part of the Community Bridge initiative, a platform created by the Linux Foundation to empower developers — and the individuals and companies who support them — to advance sustainability, security, and diversity in open source technology. You can take a look at the Jenkins Community Bridge Project Page.

Steps to Enable the Schema Validation

a) The first step is to install the JCasC Plugin for Visual Studio Code and open it via the extension list. The shortcut for opening the extension list in the VS Code editor is Ctrl + Shift + X.

b) In order to enable validation, we need to include it in the workspace settings. Navigate to File, then Preferences, then Settings. In the settings, search for json and add the following configuration to settings.json:

{"yaml.schemas": {"schema.json": "y[a]?ml"
    }
}

You can specify a glob pattern as the value for the schema.json key (the file name of the schema); this applies the schema to all matching YAML files. For example, the pattern y[a]?ml used above is intended to match both .yml and .yaml files.

c) The following tasks can be done using VS Code:

  • Auto completion (Ctrl + Space): auto-completes on all commands.

  • Document outlining (Ctrl + Shift + O): provides the document outline of all completed nodes in the file.

d) Create a new file under the work directory called jenkins.yml. For example, consider the following contents for the file:

jenkins:
  systemMessage: "Hello World"
  numExecutors: 2

The above YAML file is valid according to the schema, and VS Code should provide validation and autocompletion for it.

Screenshots

vscode

userDocs1

userDocs2

We are holding an online meetup on 26 February about this plugin and how you can use it to validate your YAML configuration files. For any suggestions or discussions regarding the schema, feel free to join our Gitter channel. Issues can be created on GitHub.

Findsecbugs for Developers


Spotbugs is a utility used in Jenkins and many other Java projects to detect common Java coding mistakes and bugs. It is integrated into the build process to improve the code before it gets merged and released. Findsecbugs is a plugin for Spotbugs that adds 135 vulnerability types focused on the OWASP TOP 10 and the Common Weakness Enumeration (CWE). I’m working on integrating findsecbugs into our Jenkins ecosystem.

Background

Spotbugs traces its history through Findbugs, which started in 2006. As Findbugs it was widely adopted by many projects. About 2016, the Findbugs project ground to a halt. Like the mythical phoenix, the Spotbugs project rose from the ashes to keep the capabilities alive. Most things are completely compatible between the two systems.

Jenkins has used Findbugs and now Spotbugs for years. This is integrated as a build step into parent Maven poms, including the plugin parent pom and the parent pom for libraries and core components. There are various properties that can be set to control the detection threshold, the effort, and findings or categories to exclude. Take a look at the effective pom for a project to see the settings.
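
As an illustration, the spotbugs-maven-plugin reads user properties along these lines, which a plugin POM can override (the property names come from the Maven plugin; the exclude file path is just an example, so check your effective pom for the values your parent actually defines):

<properties>
  <!-- Report more (Low) or fewer (High) findings -->
  <spotbugs.threshold>Low</spotbugs.threshold>
  <!-- How much analysis work SpotBugs performs: Min, Default or Max -->
  <spotbugs.effort>Max</spotbugs.effort>
  <!-- Findings or categories to exclude (example path) -->
  <spotbugs.excludeFilterFile>src/spotbugs/excludesFilter.xml</spotbugs.excludeFilterFile>
</properties>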

Conundrums

There is a fundamental conundrum with introducing an analysis tool into a project. The best time to have done it is always in the past, particularly when the project first started. There are always difficulties in introducing it into an existing project. Putting it off for later just delays the useful results and makes later implementation more difficult. The best time to do it is now, as early as possible.

All analysis tools are imperfect. They report some issues that don’t actually exist. They miss some important issues. This is worse in legacy code, making the adoption more difficult. Findings have to be examined and evaluated. Some are code weaknesses but don’t indicate necessary fixes. For example, MD5 has been known for years as a weak algorithm, unsuitable for security uses. It can be used for non-security purposes, such as fingerprinting, but even there other algorithms (SHA-2) are preferred. We should replace all usages of MD5, but in some cases that’s difficult and it’s not exactly a problem.

Ultimately, the gain from these analysis tools isn’t so much from finding issues in existing code. The value comes more from catching new regressions that might be introduced or improving new code. This is one reason why it is valuable to add useful new analysis such as findsecbugs now, so that we can begin reaping the benefits.

With a security tool like findsecbugs, there is another paradox. Adding the tool makes it easier to find potential security issues. Attackers could take advantage of this information. However, security by obscurity is not a good design. Anyone can run findsecbugs now without the project integrating it. Integrating it makes it easier for legitimate developers to resolve issues and prevent future ones.

Implementation

I’ve been working on integrating findsecbugs into the Jenkins project for several months. It is working in several repos. There are several others where I have presented draft PRs to demonstrate what it will look like once it is enabled. As soon as we can disseminate the information enough, I propose to enable it in the parent poms for widespread use.

Existing

I started by enabling findsecbugs in two major components where I have a high degree of familiarity, Remoting, and Jenkins. Most of the work here involves examining each finding and figuring out what to do with it. In most cases this results in using one of the suppression mechanisms to ignore the finding. In some cases, the code can be removed or improved.

Findsecbugs reported a significant number of false positives in Remoting for a couple of notable reasons. (See the PR.) Remoting uses Spotbugs aggressively with a Low threshold setting. This produces more results. Findsecbugs targets Java web applications. As the communication layer between agents and master, Remoting uses some mechanisms that would be a problem on the server side but are acceptable on the agent.

Even without all its plugins, Jenkins is a considerable collection of code. Findsecbugs reported a smaller number of false positives for Jenkins (See the PR.) It runs Spotbugs at a High threshold, so it only reports issues it deems more concerning. A number of these indicate code debt, deprecated code to remove, or areas that could be improved. I created Jira tickets for many of these.

Demonstrated

I have created draft PRs to demonstrate how findsecbugs will look in several plugins. The goal is not to use these PRs directly but instead integrate findsecbugs at the parent pom level. These PRs serve as reference documentation.

Credentials

This one is particularly interesting because here findsecbugs correctly detects the remains of a valid security vulnerability (CVE-2019-10320). Currently, this code is safely used only for migration of old data. If we had run findsecbugs on this plugin a year ago, it would have detected this valid vulnerability.

SSH Build Agents

This one is interesting because it flags MD5 as a concern. Since it is used for fingerprinting, it isn’t a valid vulnerability, but since the hash isn’t stored it is easy to improve the code here.

EC2

In this case, findsecbugs found some valid concerns, but the code isn’t used so it can be removed. Also, MD5 is harder to remove here but should be considered technical debt and removed when possible.

Platform Labeler

Findsecbugs didn’t find any concerns here. This means adapting to it requires no work. In this demonstration, I added a fake finding to prove that it was working.

File Leak Detector

There is one simple finding noted here. Because it is part of the configuration performed by an administrator we can ignore it.

Credentials Binding

Nothing was found here so integration requires no effort.

Proposed

My proposal is to integrate findsecbugs configuration into the parent poms as soon as we can. The delay is currently mostly around sharing the information to prepare developers by blog post, email list discussion, and presentation.

Even before I started working on this, StefanSpieker proposed a PR to integrate into the parent Jenkins pom. This will apply to Jenkins libraries and core components. Once this is integrated, I will pull out the changes I made to the Jenkins and Remoting project poms.

I also plan on integrating findsecbugs into the plugin and Stapler parent poms. Once it is added to the plugin parent pom all plugins will automatically perform these checks when they upgrade their parent pom version. If there are any findings, developers will need to take care of them as described in the next section.

What do you need to do?

Once developers upgrade to a parent pom version that integrates findsecbugs, they may have to deal with evaluating, fixing, or suppressing findings. The parent pom versions do not yet exist but are in process or proposed.

Extraneous build message

In some cases, an extraneous message may show up in the build logs. It starts with a line like "The following classes needed for analysis were missing:" followed by lines listing some methods by name. Ignore this message. It results from SpotBugs printing some internal debug information that isn’t helpful here.

Examine findings

If findsecbugs reports any findings, then a developer needs to examine and determine what to do about each one.

Excluding issues

You can exclude an issue, so that it is never reported in a project. This is done by configuring an exclusion file. If you encounter the findings CRLF_INJECTION_LOGS or INFORMATION_EXPOSURE_THROUGH_AN_ERROR_MESSAGE feel free to add these to an exclusion file. These are not considered a concern in Jenkins. See the Jenkins project exclusion file for an example. You should be cautious about including other issue types here.
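
A minimal exclusion file covering those two findings could look like this (standard SpotBugs filter format; place it wherever your POM's exclude-filter property points):

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
  <!-- Log forging and error-message findings are not considered a concern in Jenkins -->
  <Match>
    <Bug pattern="CRLF_INJECTION_LOGS"/>
  </Match>
  <Match>
    <Bug pattern="INFORMATION_EXPOSURE_THROUGH_AN_ERROR_MESSAGE"/>
  </Match>
</FindBugsFilter>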

Temporarily disable findsecbugs

You may disable findsecbugs by adding <Bug category="SECURITY"/> to the exclusion file. I strongly encourage you to disable findsecbugs only temporarily, when genuinely needed.

Suppress a finding

After determining that a finding is not important, you can suppress it by annotating a method or a class with @SuppressFBWarnings(value = "…", justification = "…"). I encourage you to suppress narrowly: never suppress at the class level when you can add the annotation to a method. For a long method, extract the problematic part into a small method and add the suppression there. I also encourage you to always add a meaningful justification.
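
For example, a narrowly scoped suppression on an extracted helper method might look like this (the class and method names are invented for illustration; the annotation comes from the spotbugs-annotations dependency, and WEAK_MESSAGE_DIGEST_MD5 is the findsecbugs pattern for the MD5 case discussed above):

import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class FingerprintCalculator {
    @SuppressFBWarnings(value = "WEAK_MESSAGE_DIGEST_MD5",
                        justification = "MD5 is only used for fingerprinting, not for security")
    byte[] fingerprint(byte[] content) throws NoSuchAlgorithmException {
        // The suppression is attached to this small, extracted method only
        return MessageDigest.getInstance("MD5").digest(content);
    }
}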

Improve code

Whenever possible improve the code such that the problematic code no longer exists. This can include removing deprecated or unused code, using improved algorithms, or improving structure or implementation. This is where the significant gains come from with SpotBugs and findsecbugs. Also, as you make changes or add new features make sure to implement them so as not to introduce new issues.

Report security vulnerabilities

If you encounter a finding related to a valid security vulnerability, please report it via the Jenkins security reporting process. This is the responsible behavior that benefits the community. Try not to discuss or call attention to the issue before it can be disclosed in a Jenkins security advisory.

Create tasks

If you discover an improvement area that is too large to fit into your current work or release plan, I encourage you to record a task to get it done. You can do this in Jira, like I did for several issues in Jenkins core, or in whatever task management system you use.

Conclusion

SpotBugs has long been used in Jenkins to catch bugs and improve code quality. Findsecbugs adds valuable security-related bug definitions. As we integrate it into the existing Jenkins code base, it will require analysis and suppression for legacy code. This identifies areas we can improve and enhances quality as we move forward. Please responsibly report any security vulnerabilities you discover.

Pipeline-Authoring SIG Update


What is the Pipeline-Authoring Special Interest Group

This special interest group aims to improve and curate the experience of authoring Jenkins Pipelines. This includes the syntax of `Jenkinsfile`s and shared libraries, code sharing and reuse, testing of Pipelines and shared libraries, IDE integration and other development tools, documentation, best practices, and examples.

What Are The Focus Areas of the Pipeline-Authoring Special Interest Group

  • Syntax - How `Jenkinsfile`s and shared libraries are written.

  • Code sharing and reuse - Shared libraries and future improvements.

  • Testing - Unit and functional testing of `Jenkinsfile`s and shared libraries.

  • IDE integration, editors, and other development tools - IDE plugins, visual editors, etc.

  • Documentation - Reference documentation, tutorials, and more.

  • Best practices - Defining, maintaining, and evangelizing best practices in Jenkins Pipeline.

  • Examples - Real-world `Jenkinsfile`s and shared libraries demonstrating how to utilize various features of Pipeline, as well as basic or starter `Jenkinsfile`s for common patterns that can be used as jumping-off points by new users.
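
To give one concrete example of the kind of starter snippet meant by that last item, a minimal declarative Jenkinsfile for a Maven project might look like the sketch below (the build command and report pattern are placeholders to adapt to your project):

// Minimal declarative Pipeline: build the project and always record the test results
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'
        }
    }
}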

What Have We Been Up To

With the start of a new year, members got together to discuss the roadmap for 2020. During the initial discussions we determined that it would be good to examine the goals of previous meetings and determine the best path forward.

We mutually decided that, to create a better roadmap, we needed a better understanding of who we were aiming to help, and that creating personas would be very beneficial. Personas are fictional characters that we are creating, based on our research, to represent the different types of users who might use Jenkins Pipelines. Creating personas can help us step out of ourselves. It can help us recognize that different people have different needs and expectations, and it can also help us identify with the user we are building the roadmap for. Personas make the task at hand less complicated, they guide our ideation processes, and they help us achieve the goal of creating a good user experience for our target user group. A lot of that work can be found here: https://docs.google.com/document/d/1CdyzJwt50Wk3uUNsLMl2d4w2MGYss-phqet0s-KjbEs/edit The idea is to map the personas to a maturity model and then map the maturity model to the actual documentation. That maturity model can be found here: https://drive.google.com/file/d/1ByzWlPU0j1qM_gqspJppkNKkR5ZVLWlB/view

How Can I Get Involved

We have been meeting regularly to define personas that will help us better create the SIG roadmap. We meet twice a week, once on Thursday for the EMEA timezone and once on Friday for the US timezone. Meeting notes can be found here: https://docs.google.com/document/d/1EhWoBplGl4M8bHz0uuP-iOynPGuONjcz4enQm8sDyUE/edit# and the calendar, if you would like to attend, is here: https://jenkins.io/event-calendar/. Previous recordings of the meetings are located here: https://www.youtube.com/watch?v=pz_kPpb9C1w&list=PLN7ajX_VdyaOKKLBXek6iG8wTS24Ac7Y3

Next Steps

We have a lot of work to do and could use your help. If you would like to join us, check out the meeting link above, and if you would like to review the personas and give feedback, check out that link as well. Once we have wrapped up the personas work, we will start to identify the available documentation and, with the help of the Doc SIG, ensure the documentation is adequate. Finally, we will start building out tools that help the community work with Pipelines in Jenkins.

Contact Us

If you would like to get in touch with the Pipeline-Authoring SIG, you can do so by joining the Pipeline-Authoring SIG Gitter channel or via the Pipeline-Authoring SIG mailing list.

Hands On: Beautify the user interface of Jenkins reporter plugins


For Jenkins, a large number of plugins are available that visualize the results of a wide variety of build steps. There are plugins to render test results, code coverage, static analysis results and so on. All of these plugins typically pick up the results of a given build step and show them in the user interface. In order to render these details, most of the plugins use static HTML pages, since this type of user interface has been the standard visualization in Jenkins since its inception in 2007.

In order to improve the look and feel and the user experience of these plugins, it makes sense to move forward and incorporate some modern JavaScript libraries and components. Since development of Blue Ocean has been stopped (see the Jenkins mailing list post), plugin authors need to decide on their own which UI technologies are helpful for that task. However, the universe of modern UI components is so overwhelming that it makes sense to pick only a small set of components that are proven to be useful and compatible with Jenkins' underlying web technologies. Moreover, the initial setup of incorporating such a new component is quite involved, so it would be helpful if that work only needed to be done once.

This guide introduces a few UI components that all plugin authors can use in the future to provide a rich user interface for reports in Jenkins. In order to simplify the usage of these libraries in the context of Jenkins as a Java-based web application, these JavaScript libraries and components have been packaged as ordinary Jenkins plugins.

In the following sections, these new components will be introduced step by step. In order to see how these components can be used in a plugin, I demonstrate the new features while enhancing the existing Forensics Plugin with a new user interface. Since the Warnings Next Generation Plugin also uses these new components, you can see additional examples in the documentation of the warnings plugin or in our public ci.jenkins.io instance, which already uses these components in the detail views of the warnings plugin.

1. New user interface plugins

The following UI components are provided as new Jenkins plugins:

  • jquery3-api-plugin: Provides jQuery 3 for Jenkins Plugins. jQuery is — as described on their home page — a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers. With a combination of versatility and extensibility, jQuery has changed the way that millions of people write JavaScript.

  • bootstrap4-api-plugin: Provides Bootstrap 4 for Jenkins Plugins. Bootstrap is — according to their self-perception — the world’s most popular front-end component library to build responsive, mobile-first projects on the web. It is an open source toolkit for developing with HTML, CSS, and JS. Developers can quickly prototype their ideas or build entire apps with their Sass variables and mixins, responsive grid system, extensive prebuilt components, and powerful plugins built on jQuery.

  • data-tables-api-plugin: Provides DataTables for Jenkins Plugins. DataTables is a plug-in for the jQuery Javascript library. It is a highly flexible tool, built upon the foundations of progressive enhancement, that adds all of these advanced features to any HTML table:

    • Previous, next and page navigation

    • Filter results by text search

    • Sort data by multiple columns at once

    • DOM, Javascript, Ajax and server-side processing

    • Easily theme-able

    • Mobile friendly

  • echarts-api-plugin: Provides ECharts for Jenkins Plugins. ECharts is an open-sourced JavaScript visualization tool to create intuitive, interactive, and highly-customizable charts. It can run fluently on PC and mobile devices and it is compatible with most modern Web Browsers.

  • font-awesome-api-plugin: Provides Font Awesome for Jenkins Plugins. Font Awesome has vector icons and social logos; according to their self-perception, it is the web’s most popular icon set and toolkit. Currently, it contains more than 1,500 free icons.

  • popper-api-plugin: Provides Popper.js for Jenkins Plugins. Popper can easily position tooltips, popovers or anything else with just a line of code.

  • plugin-util-api-plugin: This small plugin provides some helper and base classes to simplify the creation of reporters in Jenkins. This plugin also provides a set of architecture rules that can be included in an architecture test suite of your plugin.

2. Required changes for a plugin POM

In order to use these plugins you need to add them as dependencies in your plugin pom. You can use the following snippet to add them all:

pom.xml
<project>

 [...]

  <properties>
    <plugin-util-api.version>1.0.2</plugin-util-api.version>
    <font-awesome-api.version>5.12.0-7</font-awesome-api.version>
    <bootstrap4-api.version>4.4.1-10</bootstrap4-api.version>
    <echarts-api.version>4.6.0-8</echarts-api.version>
    <data-tables-api.version>1.10.20-13</data-tables-api.version>
    [...]
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.jenkins.plugins</groupId>
      <artifactId>plugin-util-api</artifactId>
      <version>${plugin-util-api.version}</version>
    </dependency>
    <dependency>
      <groupId>io.jenkins.plugins</groupId>
      <artifactId>font-awesome-api</artifactId>
      <version>${font-awesome-api.version}</version>
    </dependency>
    <dependency>
      <groupId>io.jenkins.plugins</groupId>
      <artifactId>bootstrap4-api</artifactId>
      <version>${bootstrap4-api.version}</version>
    </dependency>
    <dependency>
      <groupId>io.jenkins.plugins</groupId>
      <artifactId>echarts-api</artifactId>
      <version>${echarts-api.version}</version>
    </dependency>
    <dependency>
      <groupId>io.jenkins.plugins</groupId>
      <artifactId>data-tables-api</artifactId>
      <version>${data-tables-api.version}</version>
    </dependency>
    [...]
  </dependencies>

  [...]

</project>

Alternatively, you can have a look at the POM files of the Warnings Next Generation Plugin or the Forensics API Plugin, which already use these plugins.

3. General structure of a reporter

In this section I will explain some fundamentals of the design of Jenkins, i.e. the Java model and the associated user interface elements. If you are already familiar with how to implement the corresponding extension points of a reporter plugin (see the section Extensibility in Jenkins' developer guide), then you can skip this section and head directly to Section 3.1.

Jenkins organizes projects using the static object model structure shown in Figure 1.

Jenkins design
Figure 1. Jenkins design - high level view of the Java model

The top level items in Jenkins user interface are jobs (at least the top level items we are interested in). Jenkins contains several jobs of different types (Freestyle jobs, Maven Jobs, Pipelines, etc.).

Each of these jobs contains an arbitrary number of builds (or more technically, runs). Each build is identified by its unique build number. Jenkins plugins can attach results to these builds, e.g. build artifacts, test results, analysis reports, etc. In order to attach such a result, a plugin technically needs to implement and create an action that stores these results.
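
A minimal sketch of such an action against the core API is shown below (the class name and payload are invented; the plugin-util-api plugin from Section 1 provides richer base classes that also take care of persisting large detail data outside the build record):

import hudson.model.Action;
import hudson.model.Run;

// Stores a reporter's result and is attached to (and persisted with) a build.
public class MyReportAction implements Action {
    private final int scannedFiles;   // example payload for this sketch

    public MyReportAction(int scannedFiles) {
        this.scannedFiles = scannedFiles;
    }

    @Override public String getIconFileName() { return null; }        // no side-panel link in this sketch
    @Override public String getDisplayName() { return "My Report"; }
    @Override public String getUrlName() { return "my-report"; }

    public static void attachTo(Run<?, ?> build, int scannedFiles) {
        build.addAction(new MyReportAction(scannedFiles));
    }
}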

These Java objects are visualized in several different views, which are described in more detail in the following sections. The top-level view that shows all available Jobs is shown in Figure 2.

Jobs
Figure 2. Jenkins view showing all available jobs

Plugins can also contribute UI elements in these views, but this is out of scope of this guide.

Each job has a detail view, where plugins can extend corresponding extension points and provide summary boxes and trend charts. Typically, summary boxes for reporters are not required on the job level, so I describe only trend charts in more detail, see Section 5.5.2.

Job details
Figure 3. Jenkins view showing details about a job

Each build has a detail view as well. Here plugins can provide summary boxes similar to the boxes for the job details view. Typically, plugins show here only a short summary and provide a link to detailed results, see Figure 4 for an example.

Build details
Figure 4. Jenkins view showing details about a build

The last element in the view hierarchy actually is a dedicated view that shows the results of a specific plugin. E.g., there are views to show the test results, the analysis results, and so on. It is totally up to a given plugin what elements should be shown there. In the next few sections I will introduce some new UI components that can be used to show the corresponding results in a pleasant way.

3.1. Extending Jenkins object model

Since reporters typically are composed in a similar way, I extended Jenkins' original object model (see Figure 1) with some additional elements, so it will be much simpler to create or implement a new reporter plugin. This new model is shown in Figure 5. The central element is a build action that will store the results of a plugin reporter. This action will be attached to each build and will hold (and persist) the results for a reporter. The detail data of each action will be automatically stored in an additional file, so the memory footprint of Jenkins can be kept small if the details are never requested by users. Additionally, this action is also used to simplify the creation of project actions and trend charts, see Section 5.5.2.

Jenkins reporter design
Figure 5. Jenkins reporter design - high level view of the model for reporter plugins

4. Git Forensics plugin

The elements in this tutorial will all be used in the new Forensics API Plugin (actually the plugin is not new; it is a dependency of the Warnings Next Generation Plugin). You can download the plugin content and see in more detail how these new components can be used in practice. Or you can change this plugin just to see how these new components can be parameterized.

If you are using Git as your source code management system, then this plugin will mine the repository in the style of Code as a Crime Scene (Adam Tornhill, November 2013) to determine statistics of the contained source code files:

  • total number of commits

  • total number of different authors

  • creation time

  • last modification time

The plugin provides a new step (or post-build publisher) that starts the repository mining and stores the collected information in a Jenkins action (see Figure 5). Afterwards you get a new build summary that shows the total number of scanned files (as a trend and as a build result). From here you can navigate to the details view, which shows the scanned files in a table that can easily be sorted and filtered. You will also get some pie charts that show important aspects of the commit history.

Please note that this functionality of the plugin still is a proof of concept: the performance of this step heavily depends on the size and the number of commits of your Git repository. Currently it scans the whole repository in each build. In the near future I hope to find a volunteer who is interested in replacing this dumb algorithm with an incremental scanner.

5. Introducing the new UI components

As already mentioned in Section 3, a details view is plugin specific. What is shown and how these elements are presented is up to the individual plugin author. So in the next sections I provide some examples and new concepts that plugins can use as building blocks for their own content.

5.1. Modern icons

Jenkins plugins typically do not use icons very frequently. Most plugins provide an icon for their actions, and that’s it. If you intend to use icons in other places, you are left on your own: the recommended Tango icon set is more than 10 years old and too limited nowadays. There are several options available, but the most popular is the Font Awesome icon set. It provides more than 1,500 free icons that follow the same design guidelines:

Font Awesome icons
Figure 6. Font Awesome icons in Jenkins plugins

In order to use Font Awesome icons in a plugin, you simply need a dependency on the corresponding font-awesome-api-plugin. Then you can use any of the solid icons by using the new tag svg-icon in your Jelly view:

index.jelly
<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:l="/lib/layout" xmlns:fa="/font-awesome">

  [...]

  <fa:svg-icon name="check-double" class="no-issues-banner"/>

  [...]

</j:jelly>

If you are generating views using Java code, then you also can use the class SvgTag to generate the HTML markup for such an icon.

5.2. Grid layout

Jenkins currently includes in all views an old and patched version of Bootstrap’s grid system (with 24 columns). This version is not compatible with Bootstrap 4 or any of the JS libraries that depend on Bootstrap 4. In order to use Bootstrap 4 features, we need to replace the Jenkins-provided layout.jelly file with a patched version that does not load the broken grid system. I’m planning to create a PR that fixes the grid in Jenkins core, but that will take some time. Until then you will need to use the layout.jelly provided by the Bootstrap 4 plugin, see below.

The first thing to decide is which elements should be shown on a plugin page and how much space each element should occupy. Typically, all visible components are mapped onto the available space using a simple grid. In a Jenkins view we have a fixed header and footer and a navigation bar on the left (20 percent of the horizontal space). The rest of the screen can be used by a details view. In order to simplify the distribution of elements in that remaining space, we use Bootstrap’s grid system.

Grid layout in Jenkins
Figure 7. Jenkins layout with a details view that contains a grid system

That means a view is split into 12 columns and an arbitrary number of rows. This grid system is simple to use (but complex enough to also support fancy screen layouts) - I won’t go into details here; please refer to the Bootstrap documentation.

For the forensics detail view we use a simple grid of two rows and two columns. Since the number of columns is always 12, we need to create two "fat" columns that each span 6 of the standard columns. In order to create such a view in our plugin, we need a view given as a Jelly file and a corresponding Java view model object. A view with this layout is shown in the following snippet:

index.jelly
<?jelly escape-by-default='true'?>
<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler" xmlns:l="/lib/layout" xmlns:bs="/bootstrap">
  <bs:layout title="${it.displayName}" norefresh="true">          (1)
    <st:include it="${it.owner}" page="sidepanel.jelly"/>
    <l:main-panel>
      <st:adjunct includes="io.jenkins.plugins.bootstrap4"/>      (2)
      <div class="fluid-container">                               (3)
        <div class="row py-3">                                    (4)
          <div class="col-6">                                     (5)
            Content of column 1 in row 1
          </div>
          <div class="col-6">                                     (6)
            Content of column 2 in row 1
          </div>
        </div>
        <div class="row py-3">                                    (7)
          <div class="col">                                       (8)
            Content of row 2
          </div>
        </div>
      </div>
    </l:main-panel>
  </bs:layout>
</j:jelly>
(1) Use a custom layout based on Bootstrap: since Jenkins core contains an old version of Bootstrap, we need to replace the standard layout.jelly file.
(2) Import Bootstrap 4: importing JS and CSS components is done using the adjunct concept, which is the preferred way of referencing static resources within Jenkins' Stapler web framework.
(3) The whole view will be placed into a fluid container that fills up the whole screen (100% width).
(4) A new row of the view is specified with the class row. The additional class py-3 defines the padding to use for this row, see Bootstrap Spacing for more details.
(5) Since Bootstrap automatically splits up a row into 12 equally sized columns, we define here that the first column should occupy 6 of these 12 columns. You can also leave off the detailed numbers; then Bootstrap will automatically distribute the content in the available space. Just be aware that this is not what you want most of the time.
(6) The second column uses the remaining space, i.e. 6 of the 12 columns.
(7) The second row uses the same layout as row 1.
(8) There is only one column for row 2; it will fill the whole available space.

You can also specify different column layouts for one row, based on the actual visible size of the screen. This helps to improve the layout for larger screens. In the warnings plugin you will find an example: on small devices, there is one card visible that shows one pie chart in a carousel. If you are opening the same page on a larger device, then two of the pie charts are shown side by side and the carousel is hidden.

5.3. Cards

When presenting information of a plugin as a block, typically plain text elements are shown. This normally results in rather boring web pages. In order to create a more appealing interface, it makes sense to present such information in a card that has a border, a header, an icon, and so on. In order to create such a Bootstrap card a small jelly tag has been provided by the new Bootstrap plugin that simplifies this task for a plugin. Such a card can be easily created in a jelly view in the following way:

<bs:card title="${%Card Title}" fontAwesomeIcon="icon-name">
  Content of the card
</bs:card>

In Figure 8 examples of such cards are shown. The cards in the upper row contain pie charts that show the distribution of the number of authors and commits in the whole repository. The card at the bottom shows the detail information in a DataTable. The visualization is not limited to charts or tables, you can show any kind of HTML content in there. You can show any icon of your plugin in these cards, but it is recommended to use one of the existing Font Awesome icons to get a consistent look and feel in Jenkins' plugin ecosystem.

Card examples
Figure 8. Bootstrap cards in Jenkins plugins

Note that the size of the cards is determined by the grid configuration, see Section 5.2.

5.4. Tables

A common UI element to show plugin details is a table control. Most plugins (and Jenkins core) typically use plain HTML tables. However, if the table should show a large number of rows then using a more sophisticated control like DataTables makes more sense. Using this JS based table control provides additional features at no cost:

  • filter results by text search

  • provide pagination of the result set

  • sort data by multiple columns at once

  • obtain table rows using Ajax calls

  • show and hide columns based on the screen resolution

In order to use DataTables in a view there are two options, you can either decorate existing static HTML tables (see Section 5.4.1) or populate the table content using Ajax (see Section 5.4.2).

5.4.1. Tables with static HTML content

The easiest way of using DataTables is by creating a static HTML table that will be decorated by simply calling the constructor of the datatable. This approach involves no special handling on the Java and Jelly side, so I think it is sufficient to follow the example in the DataTables documentation. Just make sure that after building the table in your Jelly file you decorate the table with the following piece of code:

<j:jelly xmlns:j="jelly:core" xmlns:st="jelly:stapler">
  <st:adjunct includes="io.jenkins.plugins.jquery3"/>
  <st:adjunct includes="io.jenkins.plugins.data-tables"/>

  [...]

  <div class="table-responsive">
    <table class="table table-hover table-striped display" id="id">
      [...]
    </table>
  </div>

  [...]

  <script>
    $('#id').DataTable(); (1)
  </script>
</j:jelly>
(1) Replace id with the ID of your HTML table element.

In the Forensics plugin no such static table is used so far, but you can have a look at the table that shows fixed warnings in the warnings plugin to see how such a table can be decorated.

5.4.2. Tables with dynamic model based content

While static HTML tables are easy to implement, they have several limitations. So it makes sense to follow a more sophisticated approach. Typically, tables in user interfaces are defined by using a corresponding table (and row) model. Java Swing has successfully provided such a table model concept since the early days of Java. I adapted these concepts for Jenkins and DataTables as well. In order to create a table in a Jenkins view a plugin needs to provide a table model class, that provides the following information:

  • the ID of the table (since there might be several tables in the view)

  • the model of the columns (i.e., the number, type, and header labels of the columns)

  • the content of the table (i.e. the individual row objects)

You will find an example of such a table in the Forensics plugin: here a table lists the files in your Git repository combined with the corresponding commit statistics (number of authors, number of commits, last modification, first commit). A screenshot of that table is shown in Figure 9.

Table example
Figure 9. Dynamic Table in the Forensics plugin

In order to create such a table in Jenkins, you need to create a table model class that derives from TableModel. In Figure 10 a diagram of the corresponding classes in the Forensics plugin is shown.

Tabel model
Figure 10. Table model of the Forensics plugin
Table column model

The first thing a table model class defines is a model of the available columns by creating corresponding TableColumn instances. For each column you need to specify a header label and the name of the bean property that should be shown in the corresponding column (the row elements are actually Java beans: each column will show one distinct property of such a bean, see next section). You can use any of the supported column types by simply providing a String or Integer based column.

Table rows content

Additionally, a table model class provides the content of the rows. This getRows() method will be invoked asynchronously using an Ajax call. Typically, this method simply returns a list of Java bean instances that provide the properties of each column (see previous section). These objects will be converted automatically to an array of JSON objects, the basic data structure required for the DataTables API. You will find a fully working example table model implementation in the Git repository of the forensics plugin in the class ForensicsTableModel.
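Putting the column model and the row content together, a table model class might look roughly like the following sketch. All class, property, and column names here are illustrative (ForensicsTableModel mentioned above is the real-world reference), and the exact signatures of the data-tables API classes may differ slightly, so double-check against the current API Javadoc:

ExampleTableModel.java (sketch)
import java.util.ArrayList;
import java.util.List;

import io.jenkins.plugins.datatables.TableColumn;
import io.jenkins.plugins.datatables.TableModel;

/** Sketch of a table model: one row per file, with columns for the file name and the number of authors. */
public class FileStatisticsTableModel extends TableModel {
    @Override
    public String getId() {
        return "file-statistics"; // must match the id used in the Jelly view
    }

    @Override
    public List<TableColumn> getColumns() {
        List<TableColumn> columns = new ArrayList<>();
        columns.add(new TableColumn("File", "fileName"));        // shows the bean property "fileName"
        columns.add(new TableColumn("#Authors", "authorsSize")); // shows the bean property "authorsSize"
        return columns;
    }

    @Override
    public List<Object> getRows() {
        // Invoked asynchronously via Ajax: return one Java bean per row; each column above
        // renders the bean property it has been bound to.
        List<Object> rows = new ArrayList<>();
        rows.add(new FileStatisticsRow("Main.java", 3)); // hypothetical example row
        return rows;
    }

    /** Row bean: its getters provide the cell values for one row. */
    public static class FileStatisticsRow {
        private final String fileName;
        private final int authorsSize;

        FileStatisticsRow(final String fileName, final int authorsSize) {
            this.fileName = fileName;
            this.authorsSize = authorsSize;
        }

        public String getFileName() {
            return fileName;
        }

        public int getAuthorsSize() {
            return authorsSize;
        }
    }
}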

In order to use such a table in your plugin view you need to create the table in the associated Jelly file using the new table tag:

index.jelly
<j:jelly xmlns:j="jelly:core" xmlns:dt="/data-tables">
    [...]
    <st:adjunct includes="io.jenkins.plugins.data-tables"/>
    <dt:table model="${it.getTableModel('id')}"/> (1)
    [...]
</j:jelly>
(1) Replace id with the ID of your table.

The only parameter you need to provide for the table is the model — it is typically part of the corresponding Jenkins view model class (this object is referenced with ${it} in the view). In order to connect the corresponding Jenkins view model class with the table, the view model class needs to implement the AsyncTableContentProvider interface. Or even simpler, let your view model class derive from DefaultAsyncTableContentProvider. This relationship is required so that Jenkins can automatically create and bind a proxy for the Ajax calls that will automatically fill the table content after the HTML page has been created.
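A corresponding view model can then be as small as the following sketch, which reuses the hypothetical table model from the previous sketch (again purely illustrative; the Forensics plugin's ForensicsViewModel is the actual implementation to consult):

ExampleViewModel.java (sketch)
import io.jenkins.plugins.datatables.DefaultAsyncTableContentProvider;
import io.jenkins.plugins.datatables.TableModel;

/**
 * Sketch of the view model referenced as ${it} in the Jelly view: it provides the
 * table model so that the Ajax proxy can fill the table after the page has been rendered.
 */
public class FileStatisticsViewModel extends DefaultAsyncTableContentProvider {
    @Override
    public TableModel getTableModel(final String id) {
        // If the view contains several tables, select the matching model by its id here.
        return new FileStatisticsTableModel();
    }
}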

If we put all those pieces together, we are required to define a model similar to the model of the Forensics plugin, that is shown in Figure 11.

Forensics view model
Figure 11. Jenkins reporter design - high level view of the model for reporter plugins

As already described in Figure 5 the plugin needs to attach a BuildAction to each build. The Forensics plugin attaches a ForensicBuildAction to the build. This action stores a RepositoryStatistics instance, that contains the repository results for a given build. This action delegates all Stapler requests to a new Stapler proxy instance so we can keep the action clean of user interface code. This ForensicsViewModel class then acts as view model that provides the server side model for the corresponding Jelly view given by the file index.jelly.

While this approach looks quite complex at first glance, you will see that the actual implementation part is quite small. Most of the boilerplate code is already provided by the base classes and you need to implement only a few methods. Using this concept also provides some additional features that are part of the DataTables plugin:

  • Ordering of columns is persisted automatically in the browser local storage.

  • Paging size is persisted automatically in the browser local storage.

  • The Ajax calls are actually invoked only if a table will become visible. So if you have several tables hidden in tabs then the content will be loaded on demand only, reducing the amount of data to be transferred.

  • There is an option available to provide an additional details row that can be expanded with a + symbol, see warnings plugin table for details.

5.5. Charts

A plugin reporter typically also reports some kind of trend from build to build. Up to now Jenkins core provides only a quite limited concept of rendering such trends as trend charts. The JFreeChart framework offered by Jenkins core is a server side rendering engine that creates charts as static PNG images that will be included on the job and details pages. Nowadays, several powerful JS based charting libraries are available that do the same job (well actually an even better job) on the client side. That has the advantage that these charts can be customized on each client without affecting the server performance. Moreover, you get a lot of additional features (like zooming, animation, etc.) for free. Additionally, these charting libraries not only support the typical build trend charts but also a lot of additional chart types that can be used to improve the user experience of a plugin. One of those charting libraries is ECharts: this library has a powerful API and supports literally every chart type one can imagine. You can get some impressions of the features on the examples page of the library.

In order to use these charts one can embed charts that use this library by importing the corresponding JS files and by defining the chart in the corresponding Jelly file. While that already works quite well, it is still somewhat cumbersome to provide the corresponding model for these charts from Jenkins build results. So I added a powerful Java API that helps to create the model for these charts on the Java side. This API provides the following features:

  • Create trend charts based on a collection of build results.

  • Separate the chart type from the aggregation in order to simplify unit testing of the chart model.

  • Toggle the type of the X-Axis between build number or build date (with automatic aggregation of results that have been recorded on the same day).

  • Automatic conversion of the Java model to the required JSON model for the JS side.

  • Support for pie and line charts (more to come soon).

Those charts can be used as trend chart in the project page (see Figure 3) or as information chart in the details view of a plugin (see Section 5).

5.5.1. Pie charts

A simple but still informative chart is a pie chart that illustrates numerical proportions of plugin data. In the Forensics plugin I am using this chart to show the numerical proportions of the number of authors or commits for the source code files in the Git repository (see Figure 8). In the warnings plugin I use this chart to show the numerical proportions of the new, outstanding, or fixed warnings, see Figure 12.

Pie chart example
Figure 12. Pie chart in the Warnings plugin

In order to include such a chart in your details view, you can use the provided pie-chart tag. In the following snippet you see this tag in action (embedded in a Bootstrap card, see Section 5.3):

index.jelly
<?jelly escape-by-default='true'?>
<j:jelly xmlns:j="jelly:core" xmlns:c="/charts" xmlns:bs="/bootstrap">

    [...]
    <bs:card title="${%Number of authors}" fontAwesomeIcon="users">
        <c:pie-chart id="authors" model="${it.authorsModel}" height="256"/>
    </bs:card>
    [...]
</j:jelly>

You need to provide a unique ID for this chart and the corresponding model value. The model must be the JSON representation of a corresponding PieChartModel instance. Such a model can be created with a couple of lines:

ViewModel.java
    [...]
    PieChartModel model = new PieChartModel("Title");

    model.add(new PieData("Segment 1 name", 10), Palette.RED);
    model.add(new PieData("Segment 2 name", 15), Palette.GREEN);
    model.add(new PieData("Segment 3 name", 20), Palette.YELLOW);

    String json = new JacksonFacade().toJson(model);
    [...]
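The ${it.authorsModel} expression used in the Jelly snippet above simply refers to a getter on the view model that returns this JSON string. A minimal sketch follows; the getter name and segment labels are illustrative, and the imports assume the edu.hm.hafner.echarts package that provides the classes used above:

ViewModel.java (sketch)
import edu.hm.hafner.echarts.JacksonFacade;
import edu.hm.hafner.echarts.Palette;
import edu.hm.hafner.echarts.PieChartModel;
import edu.hm.hafner.echarts.PieData;

public class ViewModel {
    /** Returns the JSON model for the "authors" pie chart, referenced as ${it.authorsModel} in the view. */
    public String getAuthorsModel() {
        PieChartModel model = new PieChartModel("Number of authors");

        // In a real plugin these segments would be computed from the build results.
        model.add(new PieData("1 author", 10), Palette.GREEN);
        model.add(new PieData("2 to 5 authors", 15), Palette.YELLOW);
        model.add(new PieData("more than 5 authors", 20), Palette.RED);

        return new JacksonFacade().toJson(model);
    }
}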

5.5.2. Trend charts on the job level view

In order to show a trend that renders a line chart on the job page (see Figure 3) you need to provide a so-called floating box (stored in the file floatingBox.jelly of your job action, see Section 3). The content of this file is quite simple and contains just a trend-chart tag:

floatingBox.jelly
<?jelly escape-by-default='true'?>
<j:jelly xmlns:j="jelly:core" xmlns:c="/charts">
  <c:trend-chart it="${from}" title="${%SCM Files Count Trend}" enableLinks="true"/>
</j:jelly>

On the Java side the model for the chart needs to be provided in the corresponding sub class of JobAction (which is the owner of the floating box). Since the computation of trend charts is quite expensive on the server side as well (several builds need to be read from disk and the interesting data points need to be computed) this process has been put into a separate background job. Once the computation is done the result is shown via an Ajax call. In order to hide these details from plugin authors you should simply derive your JobAction class from the corresponding AsyncTrendJobAction class, which already contains the boilerplate code. So your static plugin object model will actually become a little bit more complex:

Jenkins chart model
Figure 13. Jenkins chart model design

Basically, you need to implement the method LinesChartModel createChartModel() to create the line chart. This method is quite simple to implement, since most of the hard work is provided by the library: it will be invoked with an iterator of your build actions, starting with the latest build. The iterator advances from build to build until no more results are available (or the maximum number of builds to consider has been reached). The most important thing to implement in your plugin is how data points are computed for a given BuildAction. Here is an example of such a SeriesBuilder implementation in the Forensics Plugin:

FilesCountSeriesBuilder.java
package io.jenkins.plugins.forensics.miner;

import java.util.HashMap;
import java.util.Map;

import edu.hm.hafner.echarts.SeriesBuilder;

/**
 * Builds one x-axis point for the series of a line chart showing the number of files in the repository.
 *
 * @author Ullrich Hafner
 */
public class FilesCountSeriesBuilder extends SeriesBuilder<ForensicsBuildAction> {
    static final String TOTALS_KEY = "total";

    @Override
    protected Map<String, Integer> computeSeries(final ForensicsBuildAction current) {
        Map<String, Integer> series = new HashMap<>();
        series.put(TOTALS_KEY, current.getNumberOfFiles());
        return series;
    }
}

You are not limited to a single line chart. You can show several lines in a single chart, you can show stacked values, or even the delta between some values. You can also have a look at the charts of the warnings plugin to see some of these features in detail.

Trend with several lines example
Figure 14. Trend chart with several lines in the Warnings plugin
Trend chart with stacked lines example
Figure 15. Trend chart with stacked lines in the Warnings plugin

Introducing the Azure Key Vault Credentials Provider for Jenkins


Azure Key Vault is a product for securely managing keys, secrets and certificates.

I’m happy to announce two new features in the Azure Key Vault plugin:

These changes were released in v1.8, but make sure to run the latest version of the plugin; there have been some fixes since then.

Some advantages of using the credential provider rather than your own scripts:

  • your Jenkins jobs consume the credentials with no knowledge of Azure Key Vault, so they stay vendor-independent.

  • the provider integrates with the ecosystem of existing Jenkins credential consumers, such as the Slack Notifications plugin.

  • credential usage is recorded in the central Jenkins credentials tracking log.

  • Jenkins can use multiple credentials providers concurrently, so you can incrementally migrate credentials to Azure Key Vault while consuming other credentials from your existing providers.

Note: currently only secret text credentials are supported via the credential provider. You can use the configuration-as-code integration to load the secret from Azure Key Vault into the System Credential Provider to work around this limitation.

Getting started

Install the Azure Key Vault plugin

Then you will need to configure the plugin.

Azure authentication

There are two types of authentication you can use: 'Microsoft Azure Service Principal' or 'Managed Identities for Azure Resources'.

The easiest one to get set up quickly is the 'Microsoft Azure Service Principal':

$ az ad sp create-for-rbac --name http://service-principal-name
Creating a role assignment under the scope of "/subscriptions/ff251390-d7c3-4d2f-8352-f9c6f0cc8f3b"
  Retrying role assignment creation: 1/36
  Retrying role assignment creation: 2/36
{
  "appId": "021b5050-9177-4268-a300-7880f2beede3",
  "displayName": "service-principal-name",
  "name": "http://service-principal-name",
  "password": "d9d0d1ba-d16f-4e85-9b48-81ea45a46448",
  "tenant": "7e593e3e-9a1e-4c3d-a26a-b5f71de28463"
}

If this doesn’t work then take a look at the Microsoft documentation for creating a service principal.

Note: for production 'Managed Identities for Azure Resources' is more secure as there’s no password involved and you don’t need to worry about the service principal’s password or certificate expiring.

Vault setup

You need to create a vault and give your service principal access to it:

RESOURCE_GROUP_NAME=my-resource-group
az group create --location uksouth --name $RESOURCE_GROUP_NAME

VAULT=my-vault # you will need a unique name for the vault
az keyvault create --resource-group $RESOURCE_GROUP_NAME --name $VAULT
az keyvault set-policy --resource-group $RESOURCE_GROUP_NAME --name $VAULT \
  --secret-permissions get list --spn http://service-principal-name

Jenkins credential

The next step is to configure the credential in Jenkins:

  1. click 'Credentials'

  2. click 'System' (it’ll appear below the Credentials link in the side bar)

  3. click 'Global credentials (unrestricted)'

  4. click 'Add Credentials'

  5. select 'Microsoft Azure Service Principal'
Microsoft Azure Service Principal dropdown

  6. fill out the form from the credential created above, appId is 'Client ID', password is 'Client Secret'
Microsoft Azure Service Principal credential configuration

  7. click 'Verify Service Principal', you should see 'Successfully verified the Microsoft Azure Service Principal'.

  8. click 'Save'

Jenkins Azure Key Vault plugin configuration

You now have a credential you can use to interact with Azure resources from Jenkins, now you need to configure the plugin:

  1. go back to the Jenkins home page

  2. click 'Manage Jenkins'

  3. click 'Configure System'

  4. search for 'Azure Key Vault Plugin'

  5. enter your vault url and select your credential
Azure Key Vault plugin configuration

  6. click 'Save'

Store a secret in Azure Key Vault

For the step after this you will need a secret, so let’s create one now:

$ az keyvault secret set --vault-name $YOUR_VAULT --name secret-key --value my-super-secret

Create a pipeline

Install the Pipeline plugin if you don’t already have it.

From the Jenkins home page, click 'New item', and then:

  1. enter a name, i.e. 'key-vault-test'

  2. click on 'Pipeline'

  3. add the following to the pipeline definition:

Jenkinsfile (Declarative Pipeline)
pipeline {
  agent any
  environment {
    SECRET_KEY = credentials('secret-key')
  }
  stages {
    stage('Foo') {
      steps {
        echo SECRET_KEY
        echo SECRET_KEY.substring(0, SECRET_KEY.size() - 1) // shows the right secret was loaded, don't do this for real secrets unless you're debugging
      }
    }
  }
}

You have now successfully retrieved a credential from Azure Key Vault using native Jenkins credentials integration.

configuration-as-code integration

The Configuration as Code plugin has been designed as an opinionated way to configure Jenkins based on human-readable declarative configuration files. Writing such a file should be easy without being a Jenkins expert.

For many secrets the credential provider is enough, but when integrating with other plugins you will likely need more than string credentials.

You can use the configuration-as-code plugin (aka JCasC) to allow integrating with other credential types.

configure authentication

As the JCasC plugin runs during initial startup, the Azure Key Vault credential provider needs to be configured before JCasC runs.

The easiest way to do that is via environment variables set before Jenkins starts up:

export AZURE_KEYVAULT_URL=https://my.vault.azure.net
export AZURE_KEYVAULT_SP_CLIENT_ID=...
export AZURE_KEYVAULT_SP_CLIENT_SECRET=...
export AZURE_KEYVAULT_SP_SUBSCRIPTION_ID=...

See the azure-keyvault documentation for other authentication options.

You will now be able to refer to Azure Key Vault secret IDs in your jenkins.yaml file:

credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              description: "GitHub"
              id: "jenkins-github"
              password: "${jenkins-github-apikey}"
              scope: GLOBAL
              username: "jenkinsadmin"

Thanks for reading. Send feedback on Twitter using the tweet button in the top right; for any issues or feature requests, use GitHub issues.

GitHub App authentication support released


I’m excited to announce support for authenticating as a GitHub app in Jenkins. This has been a long awaited feature by many users.

It has been released in GitHub Branch Source 2.7.0-beta1 which is available in the Jenkins experimental update center.

Authenticating as a GitHub app brings many benefits:

  • Larger rate limits - The rate limit for a GitHub app scales with your organization size, whereas a user based token has a limit of 5000 regardless of how many repositories you have.

  • User-independent authentication - Each GitHub app has its own user-independent authentication. No more need for 'bot' users or figuring out who should be the owner of 2FA or OAuth tokens.

  • Improved security and tighter permissions - GitHub Apps offer much finer-grained permissions compared to a service user and its personal access tokens. This lets the Jenkins GitHub app require a much smaller set of privileges to run properly.

  • Access to GitHub Checks API - GitHub Apps can access the GitHub Checks API to create check runs and check suites from Jenkins jobs and provide detailed feedback on commits as well as code annotations

Getting started

Install the GitHub Branch Source plugin and make sure the version is at least 2.7.0-beta1. Installation guidelines for beta releases are available here.

Configuring the GitHub Organization Folder

Follow the GitHub App Authentication setup guide. These instructions are also linked from the plugin’s README on GitHub.

Once you’ve finished setting it up, Jenkins will validate your credential and you should see your new rate limit. Here’s an example on a large org:

GitHub app rate limit

How do I get an API token in my pipeline?

In addition to using GitHub App authentication for Multi-Branch Pipelines, you can also use app authentication directly in your Pipelines. You can access the Bearer token for the GitHub API by just loading a 'Username/Password' credential as usual; the plugin will handle authenticating with GitHub in the background.

This could be used to call additional GitHub API endpoints from your pipeline, possibly the deployments API, or you may wish to implement your own checks API integration until Jenkins supports this out of the box.

Note: the API token you get will only be valid for one hour; don’t get it at the start of the pipeline and assume it will be valid all the way through.

Example: let’s submit a check run to GitHub from our Pipeline:

pipeline {
  agent any

  stages{
    stage('Check run') {
      steps {
        withCredentials([usernamePassword(credentialsId: 'githubapp-jenkins',usernameVariable: 'GITHUB_APP',passwordVariable: 'GITHUB_JWT_TOKEN')]) {
            sh '''
            curl -H "Content-Type: application/json" \
                 -H "Accept: application/vnd.github.antiope-preview+json" \
                 -H "authorization: Bearer ${GITHUB_JWT_TOKEN}" \
                 -d '{ "name": "check_run", \
                       "head_sha": "'${GIT_COMMIT}'", \
                       "status": "in_progress", \
                       "external_id": "42", \
                       "started_at": "2020-03-05T11:14:52Z", \
                       "output": { "title": "Check run from Jenkins!", \
                                   "summary": "This is a check run which has been generated from Jenkins as GitHub App", \
                                   "text": "...and that is awesome"}}' https://api.github.com/repos/<org>/<repo>/check-runs
            '''
        }
      }
    }
  }
}

What’s next

GitHub Apps authentication in Jenkins is a huge improvement. Many teams have already started using it and have helped improve it by giving pre-release feedback. There are more improvements on the way.

There’s a proposed Google Summer of Code project: GitHub Checks API for Jenkins Plugins. It will look at integrating with the Checks API, with a focus on reporting issues found using the warnings-ng plugin directly onto the GitHub pull requests, along with test results summary on GitHub. Hopefully it will make the Pipeline example above much simpler for Jenkins users :) If you want to get involved with this, join the GSoC Gitter channel and ask how you can help.

Call for User Stories - Jenkins is the Way


Jenkins Is The Way

One of the things we loved about going to developer conferences was meeting Jenkins users — newbies and old-timers alike — who are excited to talk about their projects and share tips on how to move forward using Jenkins. Since the coronavirus pandemic, we’re learning to rely more on new ways to gather, and it’s happening via Jenkins online meetups, GitHub collaborations, and Twitter threads, to name a few.

It’s a significant change. But what hasn’t changed is the need to share stories about the things users have built, the solutions they’ve developed, and the excellent results they’re getting from some really innovative Jenkins implementations. Then we wondered: why isn’t anyone collecting these user stories and sharing them with the Jenkins community?

Introducing Jenkins is the Way

So we took the first step to record and archive all the great stuff everyone in our community is building with Jenkins. This way, Jenkins users old and new can come to an archive and search for Jenkins solutions for inspiration. We foresee a vast library of solutions from all around the world, solving a wide array of challenges in every industry imaginable. We decided to call this archive "Jenkins Is The Way" and host it at https://JenkinsIsTheWay.io .

To aggregate all these stories, we built a simple online questionnaire so that Jenkins users can submit their own experience using this leading open source automation server. With so many plugins to support building, deploying, and automating your projects, we expect to see a vast collection of stories.

We’ve already received a handful, including stories that illustrate how Jenkins Is The Way:

Add your story. Show your Jenkins pride. Get our T-shirt

Jenkins Is The Way T-shirt

Be an inspiration to the Jenkins community by sharing your Jenkins story. Just go to this link and fill out the form. We’ll ask you about your project’s goals, the technical challenges you overcame with Jenkins, and the solutions you created. It should take no more than 20-30 minutes to complete.

We’ll clean it up for clarity and publish it on https://JenkinsIsTheWay.io .

Once it’s part of our archive, we’ll send you our new 2020 Jenkins Is the Way t-shirt.

And since the more, the merrier, please share this blog post with peers and colleagues. We want to hear everyone’s stories about the clever ways Jenkins is used to automate all that we need to do.

Thanks and Acknowledgement

Special thanks to abConsulting for creating and managing the https://JenkinsIsTheWay.io site and for reviewing, editing, and publishing the submitted stories.

Thanks to the Jenkins Advocacy and Outreach SIG for their reviews and feedback.

Thanks also to CloudBees for sponsoring the "Jenkins is the Way" program.

CloudBees

Docker images for agents: New names and What's next


We would like to announce the renaming of the official Docker images for Jenkins agents. It does not have any immediate impact on Jenkins users, but they are expected to gradually upgrade their instances. This article provides information about the new official names, upgrade procedure, and the support policy for the old images. We will also talk about what’s next for the Docker packaging in Jenkins.

Jenkins and Docker

New image names

See the upgrade guidelines below.

Why?

The "slave" term is widely considered inappropriate in open source communities. It was officially deprecated in Jenkins 2.0 in 2016, but there are remaining usages in some Jenkins components. The JENKINS-42816: Slave to Agent renaming leftovers EPIC tracks cleanup of such usages. Official Docker agent images were a glaring case; it was not easy to fix that with the previous versions of the image release Pipelines on DockerHub. It is great to have the image naming issue finally fixed by this update.

Another notable change is replacing the JNLP agent term with inbound agent. Historically "JNLP" has been used as a name of Remoting protocols. JNLP stands for Java Network Launch Protocol which is a part of the Java Web Start. Jenkins supports Java Web Start mode for agents when running agents on Java 1.8, but our networking protocols are based on TCP and have nothing to do with Java Network Launch Protocol. This name has been very confusing since the beginning and became worse with the introduction of WebSocket support in Jenkins 2.217 (JEP-222). Docker agent images support WebSockets, so we decided to change the image name to jenkins/inbound-agent to prevent further confusion. The inbound agent term refers to agent protocols in which the agent initiates the connection to the Jenkins master.

Thanks a lot to Alex Earl and krufab for the repository restructuring groundwork which made the renaming possible! Also thanks to Tim Jacomb, Marky Jackson, Mark Waite, Ivan Fernandez Calvo and other contributors for their reviews and testing.

Upgrading and Compatibility Notes

Good news: there are no breaking changes caused by this renaming. All images have already been modified to use the new terminology internally. If you use the recent versions of the previous images, you can just replace the old names with the new ones. These names may be referenced in your Dockerfiles, scripts, and Jenkins configurations.
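For example, if your agent Dockerfile extends the old inbound ("JNLP") agent image, only the FROM line needs to change; the rest of the Dockerfile stays as it is (the tag shown is just an example):

# Before:
#   FROM jenkins/jnlp-slave:latest
# After:
FROM jenkins/inbound-agent:latest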

We will keep updating the old images on DockerHub for at least 3 months (until August 05, 2020). There will be no new configurations and platforms added to the old images, but all existing ones will remain available (Debian for Java 1.8 and 11, Alpine for Java 1.8, etc.). After August 05, 2020, the old images will no longer receive updates, but previous versions will remain available to users on Dockerhub.

What’s next?

We will continue renaming the Docker images in Jenkins components which reference old image names. There is also a set of convenience Docker images which include build tools like Maven or Gradle which will be renamed later. The jenkins/ssh-agent image might be renamed again in the future as well; see the ongoing discussion in this developer mailing list thread.

If you are rather interested in new features in Jenkins Docker packaging, stay tuned for future announcements! There are multiple ongoing initiatives which you can find on the public Jenkins roadmap (in the draft stage, see JEP-14). Some stories:

  • General availability of Windows images.

  • Support for more platforms (AArch64, IBM s390x, PowerPC).

  • Switching to AdoptOpenJDK.

  • Introducing multi-platform Docker images.

If you are interested in any of these projects and would like to contribute, please reach out to the Platform Special Interest Group which coordinates initiatives related to Jenkins in Docker.

Regarding the agent terminology cleanup outside Docker images, we will keep working on this project in the Advocacy & Outreach SIG. If you see the usage of the obsolete "slave" term anywhere in the Jenkins organization (Web UI, documentation, etc.), please feel free to submit a pull request or to report an issue in the JENKINS-42816: Slave to Agent renaming leftovers EPIC. There are "just" 3000 occurrences left in the jenkinsci GitHub organization, but we will get there. Any contributions will be appreciated!

Windows Docker Agent Images: General Availability


We would like to announce the availability of official Windows agent images for Docker. These images allow provisioning Jenkins agents with Windows OS on Docker and Kubernetes.

Jenkins and Docker

New images

All official Docker images for agents now provide nanoserver-1809 and windowsservercore-1809 tags which include Windows images and, at the moment, Java 8 (these are like the latest tag). We also provide tags with explicit Java selection, e.g. jdk8-windowsservercore-1809 or jdk11-nanoserver-1809. Version tags are also available, e.g. jenkins/agent:4.3-4-jdk8-nanoserver-1809.

  • jenkins/agent is a basic agent which bundles the agent.jar for agent ⇔ master communication. This is most useful as a base image for other images. Windows images are available starting from version 4.3-4

  • jenkins/inbound-agent is an agent that is based on the jenkins/agent image above. It provides a wrapper script written in PowerShell to help specify the parameters to agent.jar. Windows images are available starting from version 4.3-4

  • jenkins/ssh-agent is an image which has OpenSSH installed and should be used with the SSH Build Agents Plugin. Windows images are available starting from version 2.1.0

Using Windows Docker images

To use the new images, you will need a proper Docker or Kubernetes environment which supports running Windows containers. For Windows desktop users, the easiest way is to use Docker for Windows. Windows support in Kubernetes is documented here.

jenkins/agent

The jenkins/agent image is a simple agent with the JDK and the agent.jar (Jenkins Remoting library).

There are two main use cases for this image:

  1. As a base image for other Docker images (e.g., FROM jenkins/agent:jdk8-nanoserver-1809 in your Dockerfile). The jenkins/inbound-agent is based on this image. A minimal Dockerfile sketch is shown after this list.

  2. This image may also be used to launch an agent using the Launch method of Launch agent via execution of command on the master. This allows the master to launch the agent inside the docker container automatically.
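For the first use case, a derived image only needs to start from one of the Windows tags and add whatever tooling the builds require. A minimal, purely illustrative sketch (the copied tool and paths are placeholders):

# Hypothetical derived agent image based on the Windows Server Core variant.
FROM jenkins/agent:jdk8-windowsservercore-1809

# Add whatever build tooling your jobs need, e.g. by copying it into the image.
COPY my-build-tool/ C:/tools/my-build-tool/

# Put the tool on the PATH for build steps that run on this agent.
RUN setx /M PATH "%PATH%;C:\tools\my-build-tool"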

To run the agent for the second use case, you would specify the following command on the Jenkins master after setting Remote root directory to C:\Users\jenkins\agent:

docker run -i --rm --name agent --init jenkins/agent:jdk8-windowsservercore-1809 java -jar C:/ProgramData/Jenkins/agent.jar


jenkins/inbound-agent

The inbound-agent Docker image tries to provide a higher level interaction with the agent.jar executable. It provides a PowerShell wrapper script around agent.jar and it is specified as the entrypoint so that you just need to pass in some command line arguments to run the agent. A pull request has been opened which documents these command line parameters and environment variables.

Example:

docker run jenkins/inbound-agent:windowsservercore-1809 `
   -Url http://jenkins-server:port `
   -WorkDir=C:/Users/jenkins/Agent `
   -Secret <SECRET> `
   -Name <AGENTNAME>

Example using environment variables:

docker run -e "JENKINS_URL=http://jenkins-server:port" -e "JENKINS_AGENT_NAME=AGENTNAME" `
   jenkins/inbound-agent:windowsservercore-1809 `
   -WorkDir=C:/Users/jenkins/Agent `
   -Secret <SECRET> `
   -Name <AGENTNAME>
The -Url, -Name and -Secret parameters are required, but can be specified as either command line parameters or environment variables.


jenkins/ssh-agent

As mentioned above the jenkins/ssh-agent docker image is based on SSH communication with the master, rather than the remoting TCP or WebSocket protocols. The image sets up a jenkins user and the OpenSSH server so that the master can connect to the agent via SSH. The image expects an SSH public key as a parameter and puts that key into the authorized_keys file for the jenkins user. The private key should be specified in the agent configuration on the master to allow the master to connect.

Example:

docker run jenkins/ssh-agent:jdk8-windowsservercore-1809 "<public key>"

You can also pass the public key as an environment variable when using docker run.

Example:

docker run -e "JENKINS_AGENT_SSH_PUBKEY=<public key>" jenkins/ssh-agent:jdk8-windowsservercore-1809

You will then be able to connect this agent using the SSH Build Agents Plugin as "jenkins" with the matching private key.


What’s next?

We are considering providing versions based on Windows Server 2019 build 1909 so that Jenkins users can run these images on GKE clusters (see this issue).

We are also looking into providing multiarch manifests which would allow Windows images to be part of the latest tag.

There is also an open pull request to create a Windows based Docker image for a Jenkins master. There have not been a lot of requests for this, but the pull request was created to make the offerings complete for Windows users.

For plans unrelated to Windows, please see the Docker images for agents: New names and What’s next blogpost.

Join us for online UI/UX hackfest on May 25-29!


Jenkins Is The Way

On behalf of the Jenkins User Experience, Documentation and Advocacy and Outreach special interest groups, we are happy to announce the online UI/UX hackfest on May 25-29! Everyone is welcome to participate, regardless of their Jenkins development experience.

The goal is to get together and work on improving Jenkins user experience, including but not limited to user interface and user documentation. We also invite you to share experiences about Jenkins and the recent UI/UX improvements. The event follows the Jenkins is the Way theme and the most active contributors will get special edition swag and prizes!

register button

Event plan

This hackfest is NOT a hackathon. We do not expect participants to dedicate all their time during the event timeframe, but to hop in and out as their time allows. Everybody can spend as much time as they are willing to dedicate. Spending a few days or just a few hours is fine; any contributions matter regardless of their size. Jenkins development experience is not required, and we have newcomer-friendly stories for those who want to start contributing to the project. We will also have a 24/7 jenkinsci/hackfest Gitter chat for Q&A and coordination between contributors.

There will be 3 main tracks:

  • User Interface - Improve look&feel and accessibility for Jenkins users, work on new read-only interface for instances managed with configuration as code, create and update Jenkins themes, and many other topics. This track is coordinated by the UX SIG.

  • User Documentation - Improve and create new user documentation, tutorials and solution pages. Also, there is ongoing documentation migration from Wiki to jenkins.io and plugin repositories. This track is coordinated by the Documentation SIG.

  • Spread the word - Write user stories for Jenkins Is The Way site and the Jenkins blog, post about your Jenkins user experience and new features, record overview and HOWTO videos, etc. This track is coordinated by the Advocacy and Outreach SIG.

We are working on publishing project ideas and issues for the listed tracks. The current list can be found on the UI / UX hackfest event page, this list will be finalized by the beginning of the hackfest. You are welcome to propose your own projects within the User Experience theme.

During the event, we will organize online meetups and ad-hoc training sessions in different timezones. All these sessions will be recorded and shared on our YouTube channel. There are no mandatory sessions you must attend; you are welcome to join them remotely or watch the recordings. After the event we will invite participants to demo their projects at online meetings or in recorded sessions.

Registration

register button

P.S. Note that the registration form has a question about the top 3 things we could change in Jenkins to improve your user experience. We would appreciate your response there!

Contacts

Please use the following channels to contact the organizers:

Resources

Swag and Prizes

Thanks to our sponsors (CloudBees, Inc. and Continuous Delivery Foundation), we are happy to offer swag to active contributors!

  • The 50 most active contributors will get an exclusive "Jenkins Is The Way" T-shirt and stickers

  • Active contributors will get Jenkins stickers and socks

  • We are working on special prizes for top contributors, to be announced later

Jenkins Is The Way T-shirt, Jenkins Socks, Jenkins Stickers

Acknowledgements

We thank all contributors who participate in this event as committers! We especially thank all reviewers, organizers and those who participated in the initial program reviews and provided invaluable feedback. In particular, we thank User Experience, Documentation and Advocacy and Outreach SIG members who heavily contributed to this event.

We also thank the sponsors of the event who make the swag and prizes possible: CloudBees, Inc. and Continuous Delivery Foundation (CDF). In addition to swag, CloudBees donates working time for event hosts and reviewers. CDF also sponsors our online meetup platform which we will be using for the event.


Read-only Jenkins Configuration


I’m excited to announce that the 'read-only' Jenkins feature is now available for preview. This feature allows restricting configuration UIs and APIs while providing access to essential Jenkins system configuration, diagnostics, and self-monitoring tools through the Web UI. Such a mode is critical for instances managed as code, e.g. with the Jenkins Configuration-as-Code plugin. It is delivered as a part of the JEP-224: Readonly system configuration effort.

You will want to use at least Jenkins 2.238 to have all the features mentioned in this post.

Read-only Jenkins currently allows users to have access to:

  • job configuration

  • system configuration

  • plugin manager

  • system logs

  • cloud configuration

  • agent configuration

  • agent logs

For more planned integrations see the JENKINS-12548 epic.

Read-only Jenkins is split into three permissions:

  • Job/ExtendedRead - Read-only access to job configurations

    • Existed since 2009, but the UI didn’t do anything to indicate to users that they couldn’t edit the job configuration page. This has now been adapted to the new read-only engine.

  • Agent/ExtendedRead - Read-only access to agent configurations

    • Existed since 2013, but it was undocumented and only allowed access to the API, with no UI

    • UI support added in Jenkins 2.238

  • Overall/SystemRead - System-wide read-only access. It is very useful for Jenkins instances managed as code, e.g. with help of the Jenkins Configuration as Code Plugin.

You can selectively grant the permission(s) as you wish.

Why do I want this?

Given the rise of the configuration-as-code plugin a lot of Jenkins instances are fully managed as code, which means that no changes are allowed through the UI.

The problem with this is that you don’t know when new plugin versions are available, and in order to see what other configuration options are available to a plugin you currently need the 'Administer' permission.

Read-only access to system administration information allows users who are not administrators to more easily debug build issues. For example, given a 'Jenkins' error message in a build the user can check:

  1. which plugins are installed

  2. the version of the plugin

This can allow the user to solve their issue themselves and makes it easier for the user to report an issue with a plugin directly to the maintainers.

What can I expect?

All built-in UI controls have been adapted to clearly distinguish between an editable control and a control you don’t have permission to edit:

Editable:

build discarder edit

Non editable:

build discarder read

Note: there are other controls such as in the credentials and pipeline plugins that have not been updated yet.

Action buttons (such as 'Save' and 'Apply') have been hidden in most cases.

Work will continue on read-only configuration. Some plugins need support added, and certain controls could be improved to render better.

How can I use it?

These permissions are currently available in beta and for now disabled by default. You can enable them by installing the Extended read permission plugin v3.2 or above.

Then you will need to add the following permissions to a user / group depending on your use case:

  • Overall/SystemRead

  • Job/ExtendedRead

  • Agent/ExtendedRead

Note: You will need to set the Overall/Read and Job/Read permissions as well. You might want to consider creating a role containing the required permissions.

jenkins:
  authorizationStrategy:
    folderBased:
      globalRoles:
        - name: "admin"
          permissions:
            - id: "Overall/Administer"
          sids:
            - "admin"
        - name: "global read"
          permissions:
            - id: "Agent/ExtendedRead"
            - id: "Overall/SystemRead"
            - id: "Overall/Read"
            - id: "Job/Read"
            - id: "Job/ExtendedRead"
          sids:
            - "reader"

I can’t see a configuration that I think should be allowed

Most of Jenkins itself has been updated to support read-only Jenkins, but not very many plugins. Please create an enhancement issue on the plugins issue tracker. If the plugin uses Jira to track issues, then you can add it to the JENKINS-12548 epic.

How do I update my plugin to support it?

See the Read only view section of the developer documentation.

What’s next

In this release we introduce a foundation feature which is already supported in all key Jenkins core controls and in some plugins. There are many plugins which contribute to global configurations and diagnostics which still need to be adapted to support the new mode. We will keep working on this feature and its adoption so that the next LTS baseline in September provides a full-fledged user experience for Jenkins admins.

System read permission is a featured project in the UI/UX Hackfest happening May 25-29 2020. If you want to get involved please check it out!


Machine Learning Plugin project - community bonding blog post

jenkins gsoc logo small

Hello everyone!

This is one of the Jenkins projects in GSoC 2020. We are working on the new Machine Learning Plugin for GSoC 2020. This is my story about the community bonding period of GSoC 2020. I am happy to share my journey with you.

Introducing Myself and my Fantastic 4 Mentors

I am Loghi Perinpanayagam from the University of Moratuwa. I was selected for GSoC 2020 to work on the Machine Learning Plugin in Jenkins. I am glad to introduce the mentors for this project. I was assigned four mentors who are really enthusiastic about helping me kick off this Summer of Code.

Student

Mentors

  • Bruno P. Kinoshita

  • Ioannis Moutsatsos

  • Marky Jackson

  • Shivay Lamba

    How was my preparation last year?

    I learned about this open source program in my second year. I did try last year on a different organization’s project related to Data Visualization Recommendation for Data Science. But the problem was that I did not contribute as much as this year and was too late in the application process. As usual, machine learning related projects have a lot of competition compared to other projects. I prepared by learning about data visualization in machine learning and existing models for the recommendation system. Finally, I wrote a proposal based on the SeqToSeq model without much knowledge of neural networks at that time. And I did not communicate much through the dedicated Slack channel. That may be one of the reasons for the failure, but the main reason was my lateness for GSoC 2019.

    How did I tackle GSoC 2020?

    Ever since I realized how open source is needed by and helpful for the community, I have been passionate about contributing to open source projects. The moment I finished my internship in Bangalore, India in 2019, I immediately focused on participating in GSoC. Since 2020 is my last year as a BSc Computer Science student, I wanted to get selected this year.

    At a guidance seminar organized by our department, I got to know that Jenkins had opened its project ideas. That was an extremely impressive beginning to my GSoC 2020 journey. I walked through all the draft and accepted projects on the Jenkins.io page. As I am already interested in Machine Learning and familiar with Java, I picked the idea that impressed me most, one that did not have an initial repo. That meant I could use my knowledge to think and research a lot for this project. But I also had to contribute and get to know the infrastructure of the Jenkins codebase, because that makes it easier for the selection panel to pick a student for the project. So I searched for ways to contribute to Jenkins. I found issues that were easy for me to work on in the git plugin and git client plugin, and started contributing to some test issues there. After I got a clear understanding of how a plugin works in Jenkins, I started working on the POC with the hint provided on the project idea page. Actually, that was fun to code.

    The mentors helped many students during the application process. I was able to build a working POC that had the minimum capability to do the task of the project. Finally, the mentors opened the proposal submission, and I hurried to prepare a draft proposal. After I got reviews from the mentors, I kept improving it. By the end of the proposal submission period, I was able to deliver a good proposal for this project. As I was curious about this plugin, I dug deeper into how to integrate Jupyter notebooks with it, and I published a Medium article as a result of my research during the waiting period before acceptance.

    Results released

    The results were announced on 4th May: I believed in my project proposal and POC, and I got selected for GSoC 2020. Whoa! That was a goosebump moment of my entire life. It felt like I had really achieved something, and after all my hard work I deserved it. For example, I had spent 7 days continuously making the POC work without any collision between Maven artifacts.

    Community Bonding

    After the release of the results, I prepared myself for community bonding. There were many more interactions between me and the mentors than before. I had to update my project page and my profile on Jenkins.io. We had our first meeting, with lots of excitement and love, on the 10th of May. The mentors and I introduced ourselves even though we already knew each other. We discussed the high-level view of GSoC, and I asked some questions that I had in mind. As my plugin was a new repository, most of the discussion was related to the repository and its name; I had to find a name for the new plugin. At the end we had regular conversations about the blog post and presentations.

    In the second meeting, we discussed the process for hosting a new plugin in Jenkins, tracking issues with Jira, blog posts, and the high-level roadmap for the project. I suggested some interesting plugin names, but they did not match the goal of the project, so the mentors told me to try other names which perfectly describe it. I was advised to read all the research guidelines and plugin naming conventions. We also discussed how code reviews would be done and source code management through Git. After this meeting, our meetings shifted to the official Jenkins Zoom account.

    Our third meeting was quite serious about project planning. I had been preparing the design document for the project with the help of the mentors before the meeting day, hence I got lots of reviews and useful examples for my future work in phase 1. At this point, we settled on the plugin name Machine Learning Plugin, which was accepted by all mentors, and I created the repo and filed a Jira ticket for the plugin hosting request. We planned to follow up on the Jira ticket within the next 3 days. The mentors wanted me to make sure I updated the Jenkins GSoC page before the community bonding period ends. There was a lot of discussion about the design document I had been preparing in the week before the meeting. Some important points from the meeting notes follow:

    • Define features in the design document

    • Diagrams for the operations

    • How plugin works in distributed environment

    • Code editor library

    • Requirements for the first Plugin release

    • Blog post draft document

    • ToDo works for me for next week

    Therefore, I had to work hard after this meeting, which got me even more involved in the project. I have to put in a huge effort to make this opportunity golden. Our team is determined to complete this project and will definitely help the data science community with this plugin. Kudos to my team for the amazing work so far!!!

    This was my entire journey until now. I hope you enjoyed it and learned from the mistakes I made last year and corrected this summer. Thanks for reading, and stay tuned; I will be uploading more blog posts for those of you interested.

Jenkins User Experience Hackfest Documentation Results


Documentation is not glamorous, but it is goodness.

Jenkins technical documentation is an important part of our project as it is key to using Jenkins well. Good documentation guides users and encourages good implementation choices. It is a crucial part of the user experience.

In the recent Jenkins UI/UX hackfest, documentation was a specific track to improve the Jenkins user experience. We received many improvements from experienced Jenkins contributors and newcomers alike. Contributors from all around the world submitted pull requests for documentation on installing, managing, administering, and operating Jenkins.

Contributors to Docs by country

Documentation migration from Wiki

The Jenkins Wiki pages have collected 15 years of experience and wisdom for Jenkins users. However, that experience and wisdom is intermixed with inaccurate, incomplete, and outdated information.

The Jenkins Wiki migration project identified the 50 most accessed pages on the Jenkins wiki and created GitHub issues to track the migration of those pages to www.jenkins.io. This was our first large scale experiment using GitHub issues for documentation. The results have been overwhelmingly positive. Hackfest contributors added new sections to many documentation chapters, including:

The Hackfest closed 19 of the wiki migration issues. Work is in progress on an additional 25 wiki migration issues. We’ve made great progress and look forward to even better results in the future. New contributors used the "good first issue" label very effectively. We started the Hackfest with most of the 25 "good first issues" unassigned and completed the Hackfest with 14 closed and 10 others in progress. We’ll provide more "good first issues" as we use the Jenkins Wiki migration to welcome new documentation contributors.

Sample Hackfest documentation pages

Migrating plugin documentation

Plugin documentation is also in transition. Since November 2019, plugins have been moving their documentation into the GitHub repository that hosts the plugin source code. This "documentation as code" approach allows plugin maintainers to include documentation improvements in the same pull requests that implement new capabilities. It assures that documentation changes are reviewed by the same maintainers who review and approve new capabilities.

Hackfest participants submitted pull requests to migrate plugin documentation to GitHub. Ten plugin pull requests from the Hackfest are in progress, and five have already been merged and are awaiting plugin releases.

Chuck Norris uses documentation as code

In the spirit of fun and adventure, Oleg Nenashev migrated the "Chuck Norris plugin" to GitHub documentation as code in a live Hackfest presentation on May 26, 2020. Links to the recording, the plugin migration guide, and the export tool are available from "Migrating plugins to documentation-as-code".

Chuck Norris plugin uses documentation-as-code

Documentation updates

Jenkins works with other technologies to solve automation challenges in many different environments. We describe those environments in our "Solution Pages". As part of the Hackfest, we’ve started a series of improvements to the solution pages.

The Docker solutions page now includes updated videos and a better page layout for easier reading and better navigation. Other solution pages will receive similar improvements in the future.

Jenkins and Docker

System properties

The global configuration of Jenkins can be modified at startup by defining Java system properties. System properties can change system defaults and can provide compatibility "escape hatches" when a new default configuration might be incompatible with existing installations.
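
As a rough illustration (the property name below is hypothetical and not an actual Jenkins property), an administrator defines such a property with the JVM's -D flag at startup, and the code consuming it typically reads it once through the standard Java system-property API:

```java
// Minimal sketch of how an "escape hatch" system property might be consumed.
// The property name is hypothetical; real Jenkins properties are documented
// on the system properties page.
public class EscapeHatchExample {

    // Set on the command line when starting Jenkins, for example:
    //   java -Dcom.example.jenkins.useLegacyBehavior=true -jar jenkins.war
    // Boolean.getBoolean returns true only if the property is set to "true".
    private static final boolean USE_LEGACY_BEHAVIOR =
            Boolean.getBoolean("com.example.jenkins.useLegacyBehavior");

    public static void main(String[] args) {
        if (USE_LEGACY_BEHAVIOR) {
            System.out.println("Compatibility escape hatch enabled: keeping the old default.");
        } else {
            System.out.println("Using the new default configuration.");
        }
    }
}
```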

Daniel Beck has improved the navigation and user experience of the system properties page as part of the Hackfest. It is now much easier to read and to reference, with embeddable links that appear on mouse-over to the right of every property and labels that categorize each property.

System Properties

Plugin site improvements

During the Hackfest, Gavin Mogan continued his efforts to improve the Jenkins Plugins Site so that users can easily access plugin changelogs and reported issues. Once this pull request is merged, it will greatly improve the experience of Jenkins users who want to update plugins and need to know what has changed and what issues they might encounter.

Example of the incoming UI for the Jira plugin:

Plugins Site

What’s next?

There is still much to do in Jenkins documentation and we need your help to do it. There are many ways to participate in the Jenkins project, including documentation. See the contributing guidelines for detailed instructions. Join the documentation chat for personalized help and encouragement.

The Jenkins project has also been accepted into Google Season of Docs this year. This mentorship program brings together open-source projects and technical writers for the benefit of both. We are looking for technical writers who are interested in contributing to the project from September to December 2020. It is a great opportunity to study documentation-as-code tools and to learn more about contributing to open-source projects. You can find Jenkins project ideas and more information here.

Jenkins Infrastructure: Stats, Updates, and AWS sponsorship


The Jenkins project relies heavily on its infrastructure. We use websites like www.jenkins.io and plugins.jenkins.io, ticketing systems like issues.jenkins.io, CI/CD infrastructure like ci.jenkins.io, and many other services. To give some context on the scale of the Jenkins infrastructure, here are some stats from April 2020:

Country by country visitors to jenkins.io

The Jenkins project, as an open source project, is built and maintained by its awesome community. As in any organization, specific people make sure that those services are always up and running, and everyone is welcome to participate. Infrastructure is no exception: we are always looking for new contributors to the infrastructure!

While we can’t share everything publicly, such as secrets and certificates, we try to be as transparent as possible so that everybody can understand and improve our infrastructure without having privileged access. What better way than using Git to manage infrastructure work?

Who said GitOps?

Since the creation of the Jenkins-infra organization on GitHub in March 2008, more than 650 people have contributed to over 80 Git repositories. Those contributions make the Jenkins community what it is today. If you can’t find something there, it probably means that help is welcome.

More recently, with help from Gavin Mogan, Tim Jacomb, and Alex Earl, big achievements have been possible on many fronts like automating Jenkins releases, refreshing plugins.jenkins.io, adding new agents to ci.jenkins.io, and maintaining our Kubernetes cluster. We thank them for their help and for the infrastructure progress they have enabled.

Infrastructure at Scale

Running infrastructure at the scale the Jenkins project does is expensive and sometimes quite challenging. We are fortunate enough to be supported by many leading companies that provide us their expertise, their products, and their support.

Recently, Amazon Web Services donated $60,000 to run Jenkins infrastructure on the AWS cloud. We’re grateful for their donation and for the flexibility it provides. We run Linux agents on both AMD64 and ARM64 architectures on AWS, and we use the AWS cloud for our Windows agents as well. This generous infrastructure donation from Amazon Web Services has increased our continuous integration capacity and broadened our platform coverage.

Our Sponsors

Additional sponsors of Jenkins infrastructure services and software include Atlassian, Datadog, Fastly, IBM, JFrog, Pagerduty, Rackspace, Sentry, Serverion, SpinUp, Tsinghua University, and XMission.

Each of these organizations supports the Jenkins project in its own way. We thank them for their contributions, their support, and their willingness to help the Jenkins community.

On Jenkins Terminology Updates


In 2016, the Jenkins community decided to start removing offensive terminology within the project. The "slave" term was deprecated in Jenkins 2.0 and replaced by the "agent" term. Other terminology was slated for review after the cleanup of the "slave" term, which was considered the most problematic one. In 2017, the project began tracking areas for correction. Work has been done on renaming the SSH build agent plugin as well as gradually removing offensive naming in services and repositories. This year, a group of core contributors continued addressing this critical work.

The Advocacy & Outreach SIG met to discuss and prioritize the continued work. The governance board has also met, and more information is coming regarding the removal of offensive terminology. Last week we took another step by updating previous blog posts to remove offensive terminology, cleaning up some references in the Jenkins built-in documentation and localization, and more. The meeting minutes are available here, and a recording of the meeting is here. There is more work to do: the core team is working to address terms such as "Master", "whitelist", and "blacklist", as well as Git branching terminology.

We could use your help. We will continue this much-needed work, and we would like to remind everyone that the Jenkins project is governed by the Code of Conduct.

Sincerely, Marky Jackson

UI/UX Hackfest: Jenkins User Interface track highlights


In this article, I would like to share some highlights from the User Interface track of the Jenkins UI/UX Hackfest we held on May 25-29. This blog post has been slightly delayed by the infrastructure issues we had in the project, but when it comes to improving the Jenkins UI, better late than never. Key highlights from the event:

  • We delivered a preview of Jenkins read-only configuration. During the hackfest we discovered and fixed many compatibility issues.

  • We created a new Dark Theme for Jenkins. We also improved theming support in the core, and fixed compatibility in many plugins.

  • We contributed to Jenkins UI accessibility, including UX testing and fixing the reported issues. JENKINS-62437 (Configuration UI: tables-to-divs migration testing) was the dominant story there.

  • We worked on a new Script Security approvals management UI.

See the sections below to learn more about these and other user interface improvements.

Read-only Jenkins Configuration

A read-only view of Jenkins configurations, jobs, and agents is important to Jenkins Configuration-as-Code users. It allows them to access configuration and diagnostic information about their Jenkins instances without any opportunity to accidentally change it. This story is part of the Jenkins roadmap, and it was featured as an area for contribution during the UI/UX hackfest.

On May 25th we released a preview of Read-only Jenkins Configuration. Read the announcement by Tim Jacomb in this blogpost. During the hackfest we kept testing the change and fixing compatibility issues in Jenkins plugins, including the Cloud Stats Plugin, Role Strategy Plugin, Simple Disk Usage Plugin, and others.

Read-only build discarder configuration

We would appreciate feedback and testing from Jenkins users! See the blogpost for the guidelines.

Dark Theme

Dark user interface themes are very popular among developers, whether in IDEs, communication tools, or elsewhere, and there is interest in having one for Jenkins. A few implementations existed before the hackfest, most notably camalot/jenkins-dark-stylish and a dark version of the Neo2 Theme. These themes were difficult to maintain, and eventually they were either removed or abandoned. What if Jenkins had an official theme?

During the event a group of contributors focused on creating a new Dark Theme for Jenkins. This effort included:

  • Patches to the Jenkins core that simplify the development and maintenance of UI themes. Support for CSS variables was added, along with PostCSS processing, which helps simplify browser compatibility.

  • Dark Theme itself.

  • UI Testing and compatibility fixes in the core and multiple Jenkins plugins.

  • A Dark Theme demo with support for development mode.

You can try out this theme starting from Jenkins 2.239. It is available as a plugin from the Jenkins Update Center. An example screenshot of the main page:

Dark Theme - Main page

If you discover any Dark theme compatibility issues, please report them here.

Jenkins Configuration UI Accessibility

Quick access: demo, project page

Jenkins Web UI accessibility was one of the suggested topics at the event. We would like to make Jenkins usable by as many people as possible, including people with disabilities, people using mobile devices, and those with slow network connections. In general, all Jenkins users would benefit from better navigation and layouts. Some of the accessibility improvements we implemented during the event:

  • Added aria-labels to username & password input fields

  • Indicate the language of the page in the footer (not merged yet)

  • Remove page generation timestamp from the footer

At the UI/UX hackfest, the major focus was on migrating configuration pages from tables to divs (JENKINS-62437). This will make them more user-friendly on narrow screens, especially on mobile devices. The change will also help users navigate complex forms with multiple levels of nesting. Our progress:

  • User Experience testing. Thanks to the contributors, we discovered several compatibility issues in plugins.

  • Bug fixes in several plugins

  • A new Dockerized demo that allows evaluating the change with a set of pre-configured plugins.

Here is an example of a job configuration page using the new layout:

Tables to Divs - Job configuration example

We will keep working on this change in the coming weeks, and we invite Jenkins users and Contributors to help us with testing the change! Testing guidelines are available in the JENKINS-62437 ticket.

New Script Security approvals management UI

Quick access: demo, pull request

During the hackfest, Wadeck Follonier redesigned the script approval interface in the Script Security Plugin. The new UI lists the approved scripts, shows their last access timestamps, and allows managing the approvals individually, which was previously not possible from the web interface. Once the pull request is released, the feature will become available to Jenkins users.

New Script Security approvals management UI

Other UI improvements

In addition to the major improvements listed above, there were also many smaller patches in the Jenkins core and various plugins. You can find a full list of contributions to the user interface here; some important improvements:

Auto-grading plugin - XL Screens

Contributing

We invite Jenkins users and contributors to join the effort and improve the user interface together. The Jenkins project is gradually adopting modern frontend stacks (JavaScript, React, Gatsby, Vue.js, etc.) and design methodologies. For example, see the presentation by Ullrich Hafner about beautifying the UI of Jenkins reporter plugins. It is a great opportunity for frontend developers to join the project, share their experiences, experiment with new technologies, and improve the Jenkins user interface and user experience. Join us!

See this page for more information about contributing to the Jenkins codebase. If you want to know more, join us in the Jenkins User Experience SIG channels.
