Channel: Jenkins Blog

External Fingerprint Storage Phase-1 Updates


Externalizing fingerprint storage for Jenkins is a Google Summer of Code 2020 project. We are working on building a pluggable storage engine for fingerprints (see JEP-226).

File fingerprinting is a way to track which version of a file is being used by a job/build, making dependency tracking easy. The fingerprint engine of Jenkins can track usages of artifacts, credentials, files, etc. within the system. Currently, it does this by maintaining a local XML-based database which leads to dependence on the physical disk of the Jenkins master.
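To illustrate the underlying idea: a Jenkins fingerprint identifies a file by the MD5 checksum of its contents, so every job or build that produces or uses the same bytes maps to the same fingerprint record. The sketch below is ours, not Jenkins code; the class name `FingerprintDigest` is illustrative.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative sketch: a file fingerprint is the MD5 checksum of its contents. */
public class FingerprintDigest {
    /** Returns the lowercase hex MD5 digest of the given bytes. */
    public static String md5Hex(byte[] content) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(content)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is available on all standard JVMs", e);
        }
    }

    public static void main(String[] args) {
        // Two builds handling identical bytes would share this fingerprint ID.
        System.out.println(md5Hex("artifact contents".getBytes()));
    }
}
```

Two artifacts with identical contents always yield the same digest, which is what makes cross-build and cross-job dependency tracking possible.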

Allowing fingerprint storage to be moved to external storage reduces the dependence of Jenkins instances on physical disk space, and also allows tracking the flow of fingerprints across Jenkins instances connected to the same external storage.

Advantages of using external storage drivers:

  • Remove dependence on Jenkins master disk storage

  • Can configure pay-as-you-use cloud storages

  • Easy Backup Management

  • Better Reliability and Availability

  • Fingerprints can be tracked across Jenkins instances

Along with this API, we are also working on a reference implementation in the form of a plugin, powered by Redis.

As phase 1 of this project comes to an end, this blog post summarizes, for the entire Jenkins community, the progress we have made.

Current State

  • The new API introduced in Jenkins core is under review. Once merged, it will allow developers to extend it to build external fingerprint storage plugins.

  • The Redis Fingerprint Storage Plugin is alpha release ready. We would immensely appreciate any feedback.

External Fingerprint Storage Demo

Introducing the new API for plugin developers

With PR-4731, we introduce a new fingerprint storage API that allows configuring custom storage engines. The following methods are exposed in the new FingerprintStorage class:

  • void save(Fingerprint fp)

    • Saves the given Fingerprint in the storage.

  • Fingerprint load(String id)

    • Returns the Fingerprint with the given unique ID. The unique ID for a fingerprint is defined by Fingerprint#getHashString().

  • void delete(String id)

    • Deletes the Fingerprint with the given unique ID.

  • boolean isReady()

    • Returns true if there is some data in the fingerprint database corresponding to the particular Jenkins instance.
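A storage driver implements this contract against its chosen backend. The standalone sketch below mirrors the shape of the contract with a plain in-memory map; the class name `InMemoryFingerprintStorage` and the use of `String` as a stand-in for the serialized fingerprint are ours, for illustration only, and this is not the actual Jenkins extension point.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative stand-in mirroring the FingerprintStorage contract; not the Jenkins API. */
public class InMemoryFingerprintStorage {
    // Keyed by the fingerprint's unique ID (Fingerprint#getHashString() in Jenkins).
    private final Map<String, String> store = new ConcurrentHashMap<>();

    /** Saves the given fingerprint (serialized here as a plain String). */
    public void save(String id, String fingerprint) {
        store.put(id, fingerprint);
    }

    /** Returns the fingerprint with the given unique ID, or null if absent. */
    public String load(String id) {
        return store.get(id);
    }

    /** Deletes the fingerprint with the given unique ID. */
    public void delete(String id) {
        store.remove(id);
    }

    /** True if this storage already holds data for the instance. */
    public boolean isReady() {
        return !store.isEmpty();
    }
}
```

A real driver such as the Redis plugin would back these same four operations with network calls instead of a map.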

Introducing Redis Fingerprint Storage Plugin

The Redis Fingerprint Storage Plugin uses the new External Fingerprint Storage API to store fingerprints in a Redis instance.

Installation:

The alpha release (version 0.1-alpha-1) for the plugin was drafted, and can be installed using the experimental update center.

Follow these steps after starting Jenkins to download and install the plugin:

  1. Select Manage Jenkins

  2. Select Manage Plugins

  3. Go to Advanced tab

  4. Configure the Update Site URL as: https://updates.jenkins.io/experimental/update-center.json

  5. Click on Submit, and then press the Check Now button.

  6. Go to Available tab.

  7. Search for Redis Fingerprint Storage Plugin and check the box next to it.

  8. Click on Install without restart

The plugin should now be installed on your system.

Usage

Once the plugin has been installed, you can configure the Redis server details by following the steps below:

  1. Select Manage Jenkins

  2. Select Configure System

  3. Scroll to the section Redis Fingerprint Storage Configuration and fill in the required details:


    • Host - Enter hostname where Redis is running

    • Port - Specify the port on which Redis is running

    • SSL - Click if SSL is enabled

    • Database - Redis supports integer indexed databases, which can be specified here.

    • Connection Timeout - Set the connection timeout duration in milliseconds.

    • Socket Timeout - Set the socket timeout duration in milliseconds.

    • Credentials - Configure authentication using username and password to the Redis instance.

    • Enabled - Check this to enable the plugin (Note: This is likely to be removed very soon, and will be enabled by default.)

  4. Use the Test Redis Connection to verify that the details are correct and Jenkins is able to connect to the Redis instance.

  5. Press the Save button.

  6. Now, all the fingerprints produced by this Jenkins instance should be saved in the configured Redis server!

Future Work

Some of the topics we aim to tackle in the next phases include extending the API, fingerprint cleanup, migrations (internal→external, external→internal, external→external), tracing, ORM, implementing the saveable listener, etc.

Acknowledgements

The Redis Fingerprint Storage plugin is built and maintained by the Google Summer of Code (GSoC) team for External Fingerprint Storage for Jenkins.

Special thanks to Oleg Nenashev, Andrey Falko, Mike Cirioli, Jesse Glick, and the entire Jenkins community for all their contributions to this project.

Reaching Out

Feel free to reach out to us with any questions, feedback, etc. on the project’s Gitter Channel or the Jenkins Developer Mailing list.

We use Jenkins Jira to track issues. Feel free to file issues under the redis-fingerprint-storage-plugin component.


Machine Learning Plugin project - coding phase 1 blog post


Welcome back!

This blog post summarizes coding phase 1 of the Jenkins Machine Learning Plugin project for GSoC 2020.

After the community bonding period, GSoC coding officially started with phase 1 on June 1st. By this point, every GSoC student is expected to have a solid plan for their entire project. With the guidance of my mentors, I completed a design document and a timeline that can be adjusted slightly during coding. The coding phase was mostly about coding and discussion.

Quick review

Week 1

I had to ensure a solid architecture for the core of this plugin, so that I, or future contributors, can develop R and Julia kernels for it. The factory method design pattern is suitable when users need different types of products (Python, R, and Julia interpreters) without knowing much about the internal infrastructure (the manager of these interpreters).

All the base classes were implemented this week.

  • Design the Kernel connectors

  • Initiate the interpreter

  • Close the connection

  • Add simple tests

  • Update pom.xml
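The factory-method idea described above can be sketched as follows; all class names here are illustrative stand-ins, not the plugin's actual API, and only a Python kernel is wired up since that is what phase 1 targeted.

```java
/** Illustrative factory-method sketch for language kernels; not the plugin's real classes. */
interface KernelInterpreter {
    String start();
}

class PythonInterpreter implements KernelInterpreter {
    public String start() { return "IPython kernel started"; }
}

class RInterpreter implements KernelInterpreter {
    public String start() { return "R kernel started"; }
}

public class KernelFactory {
    /** Callers ask for a language; the factory hides which concrete interpreter is built. */
    public static KernelInterpreter create(String language) {
        switch (language.toLowerCase()) {
            case "python": return new PythonInterpreter();
            case "r":      return new RInterpreter();
            default: throw new IllegalArgumentException("Unsupported kernel: " + language);
        }
    }
}
```

Adding a Julia kernel later would mean one new `KernelInterpreter` implementation plus one `case`, with no change to calling code, which is the point of choosing this pattern.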

Beyond these changes, the repository was updated with a pull request template and license headers. The README was extended a little at the end of the week.

Issues and Challenges

  • Git rebase and squash

  • Tests that invoke the IPython client on the server failed during the CI build

Week 2

Guided by the design document, I planned to handle the configuration globally; using the Abstract Folder property, I could save the configuration and retrieve it for the job configuration. I referenced some other well-developed plugins for code structure, which helped me a lot while coding. Our first official contributor also opened a pull request.

Form validation and helper HTML are a great help from both the user's and the developer's point of view. A minor bug was fixed, with the guidance of mentors, by writing tests with the Jenkins `WebClient`. By the end of the week, the builder class of the plugin had been implemented after lots of research and discussion. Finally, a Test Connection button was added to the global configuration page to start and test the connection. A single blocking issue with py4j authentication in zeppelin-python was reported in Jira.

Server Configuration


Issues and challenges

  • Backend depends on Apache zeppelin-python API to connect IPython

  • Find relevant extension points to extend the plugin

Week 3

Earlier in the week, we were trying to merge our IPython builder PR without any memory leaks or bugs that could destabilize the system while running this plugin. Later in the week, I implemented a file parser that copies the necessary files and can perform the file conversion.

Supported file types

  • Python (.py)

  • JSON (Zeppelin notebooks format)

The IPython builder was able to run Jupyter Notebooks and Zeppelin-formatted JSON files at the end of the 3rd week. Minor issues were fixed in the code. We used the AnsiColor plugin to fix the abnormal rendering of error messages produced by the IPython kernel.

Copying and converting Jupyter Notebook


Issues and Challenges

  • Python error messages could not be displayed in rich format.

  • If a job runs at the user level and the Python code accesses a file or path the user is not authorized for, it returns a permission-denied message.

  • While running on an agent, the notebook has to be written/copied to the agent workspace.

  • Artifacts should be maintained/reachable from the master after the build.

Week 4

With all the major tasks done, demo preparation and planning for an experimental release were carried out during the last week. There was a lot of research on how to connect to an existing remote kernel. The demo and presentation were prepared throughout the week.

Issues and Challenges

  • Releasing the first version was a bit late

Knowledge transfer

How to debug the code through IntelliJ

  • Edit configuration → Add new Configuration → Maven

  • Command line → type hpi:run

  • Click the debug icon on the toolbar or go to Run menu then Debug

How to setup to test the plugin

  • Setup JDK 8 and Maven 3.5.*

  • Create a directory $ mkdir machine-learning-plugin

  • Create a virtual environment $ virtualenv venv

  • Activate your virtual environment $ source venv/bin/activate

  • Run $ which python to ensure your python path

  • $ git clone https://github.com/jenkinsci/machine-learning-plugin.git

  • Run $ mvn clean install from the machine-learning-plugin directory

  • Run $ mvn hpi:run to start Jenkins with the plugin

  • Set up the builder with localhost and other parameters

  • Create a job

  • Write Python code like print("plugin works")

  • Build the job

Issues and bugs

  • Pull Requests: 21

  • Jira Issues: 11

  • Major Tasks: 3 (3 completed, 0 in progress)

Windows Service Wrapper : YAML Configuration Support - GSoC Phase - 01 Updates


Hello all, I am Buddhika Chathuranga from Sri Lanka, a final year undergraduate at the Faculty of IT, University of Moratuwa. I am participating in GSoC 2020 with Jenkins, working on the Windows Service Wrapper project. Coding Phase 01 of GSoC 2020 is now over, and this blog post describes what I have done so far.

Windows Service Wrapper is an executable, with almost one million downloads, that can be used to run applications as Windows services on Windows machines. In Jenkins, we use the Windows Service Wrapper to run the Jenkins server and agents as Windows services for more robustness. This feature is bundled into Jenkins core. Currently, the Windows Service Wrapper is configured by an XML file; however, there are only a limited number of configuration checks and there is no XML schema.

XML is not a very human-friendly format for this. It is quite verbose, and it is not easy to identify the schema without some effort, so users often misconfigure the service wrapper. Below is a sample XML configuration file for the Windows Service Wrapper.

Sample XML Configuration File

<service>
  <id>jenkins</id>
  <name>Jenkins</name>
  <description>This service runs Jenkins automation server.</description>
  <env name="JENKINS_HOME" value="%LocalAppData%\Jenkins.jenkins"/>
  <executable>C:\Program Files\Java\jdk1.8.0_201\bin\java.exe</executable>
  <arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
    -jar "C:\Program Files\Jenkins\jenkins.war" --httpPort=8081 --webroot="%LocalAppData%\Jenkins\war"</arguments>
  <logmode>rotate</logmode>
  <onfailure action="restart"/>
  <extensions>
    <extension enabled="true" className="winsw.Plugins.RunawayProcessKiller.RunawayProcessKillerExtension" id="killOnStartup">
      <pidfile>%LocalAppData%\Jenkins\jenkins.pid</pidfile>
      <stopTimeout>10000</stopTimeout>
      <stopParentFirst>false</stopParentFirst>
    </extension>
  </extensions>
</service>

The usage of YAML could simplify configuration management in Jenkins, especially when automation and configuration management tools are used. So, as part of GSoC 2020, we are updating the Windows Service Wrapper to support YAML configuration. After finishing this project, users will be able to provide configuration to the Windows Service Wrapper as a YAML file.

Below is a sample YAML configuration file for the Windows Service Wrapper. It is less verbose than XML or JSON and much more human-friendly; users can read and edit it without much effort.

Sample YAML Configuration File

id: jenkins
name: Jenkins
description: This service runs Jenkins automation server.
env:
    _name: JENKINS_HOME
    _value: '%LocalAppData%\Jenkins.jenkins'
executable: 'C:\Program Files\Java\jdk1.8.0_201\bin\java.exe'
arguments: >-
    -Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
    -jar "C:\Program Files\Jenkins\jenkins.war" --httpPort=8081 --webroot="%LocalAppData%\Jenkins\war"
logmode: rotate
onfailure:
    _action: restart
extensions:
    - pidfile: '%LocalAppData%\Jenkins\jenkins.pid'
      stopTimeout: '10000'
      stopParentFirst: 'false'
      _enabled: 'true'
      _className: winsw.Plugins.RunawayProcessKiller.RunawayProcessKillerExtension
      _id: killOnStartup

Advantages of YAML as a configuration file

  • It is less verbose and much more human friendly than XML.

  • Since YAML does not use extra delimiters, it is lightweight.

  • Nowadays YAML has become more popular among configuration management tools.

Project Scope

During this project, I will add the following features to Windows Service Wrapper.

  • YAML Configuration support

  • YAML Schema validation

  • New CLI for the Windows Service Wrapper

  • Support for XML Schema validation via XML Schema Definition (XSD)

Phase 01 Updates

In GSoC 2020 phase 01, I made several updates to the Windows Service Wrapper.

You can find Phase 01 Demo slides in this link.

Below you can find more details about these deliverables.

Project Structure overview

The project structure overview document describes how files and directories are organized in the Windows Service Wrapper project. It will help contributors, as well as users, understand the codebase easily. It has also helped me a lot in understanding the codebase. You can find the document at the given link.

YAML configurations support

As I explained before, in this project configuration will be provided as a YAML file. I used the YamlDotNet library, which has more than 2.2k stars on GitHub, to deserialize the YAML file into an object graph. In this YAML file, users can specify configuration in a more structured way than in the XML configuration file. For example, users can now specify all log-related configuration under the log config, all service-account-related configuration under the serviceaccount config, and so on.

At the moment, I am working on a design document for YAML configuration support. I will add it to the GitHub issue once it is ready.

New CLI

Before moving on to the Phase 01 updates, it’s better to explain why we needed a new CLI for the Windows Service Wrapper. In the early phases, we will keep XML configuration support as well, so we should allow users to specify the configuration file separately. In the current approach, the configuration file must be in the same directory as the Windows Service Wrapper executable, and the XML file's name must match the executable's file name. Also, users should be able to redirect logs if they need to, and they should be allowed to elevate the command prompt using the Windows Service Wrapper. We also thought it better to allow users to skip schema validation if needed. So we decided to move to a new CLI.

As I explained, after this release users will have options in addition to commands. This will make the WinSW CLI more flexible, so that we can easily extend it later. These are the options users are allowed to use; they are available with all commands except help and version:

  • --redirect / -r [string]

    • Users can specify the redirect path for the logs if needed

    • Not required | Default value is null

  • --elevated / -e [boolean]

    • Elevate the command prompt before executing the command

    • Not required | Default value is false

  • --configFile / -c [string]

    • Users can specify the configurations file as a path

    • Not Required | Default value is null

  • --skipConfigValidation / -s [boolean]

    • Users can skip schema validation for configurations file if needed

    • Not required | Default value is true

  • --help / -h

    • Users can find what options are available with a particular command using this option

This option is available with the install command:

  • --profile / -f [boolean]

    • If this option is true, then users can provide a service account for installation explicitly.

    • Not required | Default value is false

We used the commandlineparser/commandline library, which has more than 2k stars on GitHub, to parse the command line arguments. At a glance, the library is compatible with .NET Framework 4.0+, Mono 2.1+ Profile, .NET Standard, and .NET Core.
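To make the option semantics above concrete, here is a minimal hand-rolled parser sketch. WinSW itself is written in C# and uses the commandlineparser library, so this Java sketch (class name `WinswOptions` is ours) only illustrates the long/short option names and defaults described above, not the real WinSW code.

```java
/** Illustrative sketch of the global WinSW-style options; not the actual WinSW parser. */
public class WinswOptions {
    String redirectPath = null;           // --redirect / -r, default null
    boolean elevated = false;             // --elevated / -e, default false
    String configFile = null;             // --configFile / -c, default null
    boolean skipConfigValidation = true;  // --skipConfigValidation / -s, default true

    /** Parses the global options; each option takes one value argument. */
    public static WinswOptions parse(String[] args) {
        WinswOptions o = new WinswOptions();
        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "--redirect": case "-r":
                    o.redirectPath = args[++i]; break;
                case "--elevated": case "-e":
                    o.elevated = Boolean.parseBoolean(args[++i]); break;
                case "--configFile": case "-c":
                    o.configFile = args[++i]; break;
                case "--skipConfigValidation": case "-s":
                    o.skipConfigValidation = Boolean.parseBoolean(args[++i]); break;
                default:
                    throw new IllegalArgumentException("Unknown option: " + args[i]);
            }
        }
        return o;
    }
}
```

A dedicated parser library adds help text, type conversion, and error reporting on top of this basic pattern, which is why the project chose one rather than rolling its own.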

XML Schema validation

As I mentioned before, there was no schema validation for XML in the Windows Service Wrapper. Hence, I worked on schema validation for XML, using XSD to validate XML files. The XSD file will be shipped as an embedded resource with the executable. You can find the XSD file in my pull request.

Future updates

In the next phase of GSoC 2020, the deliverables listed above will be released and the YAML schema validation feature will be added. We also hope to publish a design document for the new features, which will help contributors.

How to contribute

You can find the GitHub repository at this link. Issues and pull requests are always welcome. You can also communicate with us in the WinSW Gitter channel, which is a great way to get in touch; there are project sync-up meetings every Tuesday at 13:30 UTC on the channel.

GitHub Checks API Plugin Project - Coding Phase 1


This blog post is about our coding phase 1 progress on GSoC project: GitHub Checks API Plugin.

The GitHub Checks API is a highly customizable way to integrate CI tools and report on pull requests (PRs). It allows users to see CI reports directly on GitHub pages.

Figure 1. GitHub Check Run Screenshot from GitHub Docs

What’s more exciting is that it can leave annotations on specific lines of code, just like the comments people leave while reviewing.

Figure 2. Check Run Annotation Screenshot from GitHub Docs

On the Jenkins side, the source code view provided by the Warnings Next Generation Plugin does pretty much the same thing.

Figure 3. Source Code View from Warnings Next Generation Plugin

Utilizing such features through the GitHub Checks API would make Jenkins more convenient for GitHub users.

Features from Coding Phase 1

In the past month, our team was mostly working on the general checks API and an implementation for GitHub checks API.

GitHub Checks API Plugin Demo [starts from 50:15]

General Checks API

Although the general checks API is developed based on the semantics of the GitHub Checks API, we still want to prepare it for similar concepts on other platforms, like the Commit Status API from GitLab. Contributions implementing it for these platforms will be welcome in the future.

GitHub Checks API Implementation

Our work on supporting the GitHub Checks API is mostly done by now. We also implemented a consumer that automatically creates a check run indicating the current stage of a Jenkins build. After the release, Jenkins developers (especially publisher plugin authors) will be able to create their own GitHub checks for a GitHub branch source project by consuming our API.

Example: To create a check run like:

Created Check Run

Consumers need to use our API in this way:

ChecksDetails details = new ChecksDetailsBuilder()
        .withName("Jenkins")
        .withStatus(ChecksStatus.COMPLETED)
        .withDetailsURL("https://ci.jenkins.io")
        .withStartedAt(LocalDateTime.now(ZoneOffset.UTC))
        .withCompletedAt(LocalDateTime.now(ZoneOffset.UTC))
        .withConclusion(ChecksConclusion.SUCCESS)
        .withOutput(new ChecksOutputBuilder()
                .withTitle("Jenkins Check")
                .withSummary("# A Successful Build")
                .withText("## 0 Failures")
                .withAnnotations(Arrays.asList(new ChecksAnnotationBuilder()
                                .withPath("Jenkinsfile")
                                .withLine(1)
                                .withAnnotationLevel(ChecksAnnotationLevel.NOTICE)
                                .withMessage("say hello to Jenkins")
                                .withStartColumn(0)
                                .withEndColumn(20)
                                .withTitle("Hello Jenkins")
                                .withRawDetails("a simple echo command")
                                .build(), new ChecksAnnotationBuilder()
                                .withPath("Jenkinsfile")
                                .withLine(2)
                                .withAnnotationLevel(ChecksAnnotationLevel.WARNING)
                                .withMessage("say hello to GitHub Checks API")
                                .withStartColumn(0)
                                .withEndColumn(30)
                                .withTitle("Hello GitHub Checks API")
                                .withRawDetails("a simple echo command")
                                .build()))
                .build())
        .withActions(Collections.singletonList(new ChecksAction("formatting", "format code", "#0")))
        .build();

ChecksPublisher publisher = ChecksPublisherFactory.fromRun(run);
publisher.publish(details);

Future Work

The next step is to have the Warnings Next Generation Plugin and the Code Coverage API Plugin consume our API. After that, pipeline support will be added: users will be able to publish checks directly in a pipeline script, without requiring a consumer plugin that supports checks.

Git Plugin Performance Improvement: Phase-1


Git Plugin Performance Improvement is a Google Summer of Code 2020 project. It aims to improve the performance of the git plugin, which provides fundamental git functionalities.

Internally, the plugin provides these functionalities using two implementations: command line git and JGit (pure java implementation).


CLI git is the default implementation for the plugin; a user can switch to JGit if needed.

The project is divided into two parallel stages:

  • Stage 1: Create benchmarks which evaluate the execution time of a git operation provided by CLI git and JGit using JMH, a micro benchmarking test harness.

  • Stage 2: Implement the insights gained from the analysis into the plugin to improve the overall performance of the plugin.

The project also aims to fix any existing performance bottlenecks within the plugin.

Benchmarks

The benchmarks are written using JMH, which was introduced to Jenkins in a GSoC 2019 project.

  • JMH is provided within the plugin through the Jenkins Unit Test Harness POM dependency.

  • The JMH benchmarks are created and run within the git client plugin

  • During phase-1, we have created benchmarks for two operations: "git fetch" and "git ls-remote"

Results and Analysis

The benchmark analysis for git fetch:

Git fetch results


  • The performance of git fetch (average execution time/op) is strongly correlated to the size of a repository

  • There exists an inflection point on the scale of repository size after which the nature of JGit performance changes (it starts to degrade)

  • After running multiple benchmarks, it is safe to say that for large repositories, CLI git is the better choice of implementation.

  • We can use this insight to implement a feature which avoids JGit when it comes to large repositories.

Please refer to PR-521 for an elaborate explanation on these results

Note: Repository size means du -h .git
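One way this insight could be applied is a simple size threshold that routes large repositories to CLI git and leaves smaller ones on JGit. The sketch below is purely illustrative: the class name, method, and the 100 MiB threshold are our assumptions, not the plugin's actual logic, since the real inflection point would come from the benchmark data.

```java
/** Illustrative heuristic: pick CLI git for large repositories, JGit otherwise. */
public class GitImplChooser {
    /** Hypothetical threshold in KiB of .git data; a real value would come from benchmarks. */
    static final long LARGE_REPO_KIB = 100 * 1024; // 100 MiB

    /** Returns the implementation name for a repository of the given .git size (KiB). */
    public static String choose(long repoSizeKiB) {
        return repoSizeKiB >= LARGE_REPO_KIB ? "git" /* CLI git */ : "jgit";
    }
}
```

This only becomes usable once the plugin can estimate repository size without cloning, which is exactly the functionality planned for the next phase below.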

Fixing redundant fetch issue

The git plugin performs two fetch operations instead of one while performing a fresh checkout of a remote git repository.

To fix this issue, we had to safely remove the second fetch while keeping multiple use-cases in mind. The fix itself was not difficult to code, but doing it safely without breaking any existing use-case was challenging.

Further Plan

After consolidating a benchmarking strategy during Phase 1, the next steps will be:

  • Provide functionality to the git plugin, which enables it to estimate the size of the repository without cloning it.

  • Broaden the scope of benchmarking strategy

    • Consider parameters like number of branches, references and commit history to find a relation with the performance of a git operation

    • The git plugin depends on other plugins like Credentials which might require benchmarking the plugin itself and the effects of these external dependencies on the plugin’s performance

  • Focus on other use-cases of the plugin

    • For phase-1, I focused on the checkout step and the operations involved with it

    • For the next phase, the focus will shift to other areas like Multibranch pipelines or Organisation Folders

How can you help?

If you have read this far, you might be interested in the project.

To help, come visit our Gitter channel: https://gitter.im/jenkinsci/git-plugin

Severity of cross-site scripting vulnerabilities


Eagle-eyed readers of today’s security advisory may already have noticed that we consider the cross-site scripting (XSS) vulnerabilities to be 'High' severity. This is a change from previous security advisories, in which similar vulnerabilities got a 'Medium' score.

We follow the guidelines of CVSS version 3.0 for the severity we assign to these issues. Their examples for XSS vulnerabilities, as well as XSS vulnerabilities in other software, consider the most severe, immediate impact to be a modification of the HTML output, possibly also the extraction of the session cookie (something Jenkins prevents by declaring it to be HttpOnly).

Unfortunately, this does not adequately model the impact that a successful XSS exploitation in Jenkins can have: Jenkins administrators can perform far more sensitive actions than, for example, the admins of most content management systems, because Jenkins is designed to allow users to execute code to build, test, and deploy their software. So this kind of vulnerability, which allows attackers to do anything their victims have permission to do, can in Jenkins mean execution of arbitrary code, perhaps via the script console, if the victim has the Overall/Administer permission. None of this requires chaining different actions in an attack; a well-chosen XSS payload will accomplish this.

Therefore, starting today, we score XSS vulnerabilities by the highest immediate impact a successful attack can have, which is a complete system compromise if admins can be attacked. For stored XSS requiring some permissions, like the ability to configure jobs, a typical score would be 8.0. Reflected XSS, which don’t require any permissions to exploit, will usually score 8.8.

Jenkins 2.248: Windows Support Updates


In this article, I would like to announce the new Windows support policy which was introduced in the Jenkins project in June 2020. This policy sets an expectation about how we handle issues and patches related to Windows support for the Jenkins server and agents, and how we organize testing of Windows support in the project. We will also talk about .NET Framework 2.0 support removal in Jenkins 2.248, and about new Windows service management features and fixes Jenkins users get with this release.

Figure 1. Jenkins on Windows

Why?

In theory, Jenkins can run anywhere you can run Java 8 or Java 11, but in practice there are some limitations. The Jenkins core and some plugins contain native code, and hence they rely on specific operating systems and platforms. We use the Java Native Access and Java Native Runtime libraries, which provide wide platform support for low-level operations, but there are platform-specific cases not covered by such generic libraries. In the case of Windows platforms, we use the Windows Service Wrapper (WinSW) and the Windows Process Management Library (WinP). These libraries depend on particular Windows API versions and, in the case of Windows services, on .NET Framework.

Historically Jenkins had no documented support policy for Windows, and we were accepting patches for all versions which existed since the Hudson inception in 2004. It became a serious obstacle for Windows component maintainers who had to be very conservative about incoming patches so that we could avoid breaking instances running on old platforms. Lack of testing for older platforms did not help either. And it is not just about maintenance overhead. Users were impacted as well, because it blocked us from adopting some new Windows features and making Jenkins more stable/maintainable on modern platforms.

New policy

To set proper expectations about Windows support, in the new policy we defined four support levels. See the Windows support policy page for the actual information about the support levels and the supported platforms. This blogpost captures the support state as of Jul 23, 2020:

Level 1 - Full Support

We run automated testing for these platforms, and we intend to timely fix the reported issues. This support level includes 64-bit (amd-64) Windows Server versions with the latest GA update pack, and versions used in the official Jenkins server and agent Docker images.

Level 2 - Supported

We do not actively test these platforms, but we intend to keep compatibility. We are happy to accept patches. This support level includes 64-bit (amd64) Windows Server and Windows 10 versions generally supported by Microsoft.

Level 3 - Patches considered

The platforms are generally expected to work, but they may have limitations and extra requirements. We do not test compatibility, and we may drop support if needed. We will consider patches if they do not put Level 1/2 platforms at risk and if they do not create maintenance overhead. This support level includes non-amd64 platforms like x86 (32-bit) and AArch64 (Arm). It also applies to non-mainstream release lines like Windows Embedded, preview releases, and versions no longer supported by Microsoft.

Level 4 - Unsupported

These versions are known to be incompatible or to have severe limitations. We do not support the listed platforms, and we will not accept patches. At the moment this level applies to platforms released before 2008.

When the policy was introduced, there were questions raised about platforms listed in the Level 3 support category. First of all, these platforms are still supported. Users are welcome to run Jenkins on these platforms. We recognize the importance of the platforms listed there, and we intend to keep compatibility with them. At the same time, particular functionality may break there due to the lack of testing when we update Jenkins or upstream dependencies. It may take a while until a fix is submitted by a user or contributor, because we do not maintain development environments for these platforms. By setting a Level 3 support level, we want to set an explicit expectation about those limitations.

If you are interested in expanding the official Windows support policy and adding more platforms there, we invite you to participate in quality assurance of Jenkins. You may contribute by expanding test automation for Jenkins, contributing test environments for your platforms, or participating in the LTS release candidate testing and reporting results. Please contact us via Platform SIG channels if you are interested.

Windows Service Management changes in Jenkins 2.248

winsw logo
Figure 2. WinSW Logo

Although the policy was introduced more than 1 month ago, Jenkins 2.248 is the first release where the new policy is applied. Starting from this release, we won’t support .NET Framework 2.0 for launching the Jenkins server or agents as Windows services. .NET Framework 4.0 or above is now required for using the default service management features.

This release also upgrades Windows Service Wrapper (WinSW) from 2.3.0 to 2.9.0 and replaces the bundled binary, built for .NET Framework 2.0, with one built for .NET Framework 4.0. There are many improvements and fixes in these versions; big thanks to NextTurn and all other contributors. You can find the full WinSW changelog here; a few highlights important to Jenkins users:

  • Prompt for permission elevation when administrative access is required. Now Jenkins users do not need to run the agent process as Administrator to install the agent as a service from GUI.

  • Enable TLS 1.1/1.2 in .NET Framework 4.0 packages on Windows 7 and Windows Server 2008 R2.

  • Enable strong cryptography when running .NET Framework 4.0 binaries on .NET 4.6.

  • Support security descriptor string in the Windows service definition.

  • Support 'If-Modified-Since' and proxy settings for automatic downloads.

  • Fix Runaway Process Killer extension so that it does not kill wrong processes with the same PID on startup.

  • Fix the default domain name in the serviceaccount parameter (JENKINS-12660)

  • Fix archiving of old logs in the roll-by-size-time mode.

As you may see, there are many improvements available with this version, and we hope that it will make Windows service installation even more reliable. Some of the changes in WinSW also replaced old workarounds in the Jenkins core, making the code more maintainable.

Use-cases affected by .NET Framework 2.0 support removal

If you use .NET Framework 2.0 to run the Jenkins Windows services, the following use-cases are likely to be affected:

  • Installing the Jenkins server as a Windows service from Web UI. The official MSI Installer supports .NET Framework 2.0 for the moment, but it will be changed in future versions.

  • Installing agents as Windows services from GUI. This feature is provided by the Windows Agent Installer Module in the Jenkins core.

  • Installing agents over Windows Management Instrumentation (WMI) via the WMI Windows Agents plugin

  • Auto-updating of Windows service wrappers on agents installed from GUI.

Upgrade guidelines

If all of your Jenkins server and agent instances already use .NET Framework 4.0 or above, there are no special upgrade steps required. Please enjoy the new features!

If you run the Jenkins server as a Windows Service with .NET Framework 2.0, this instance will require an upgrade of .NET Framework to version 4.0 or above. We recommend running with .NET Framework 4.6.1 or above, because this .NET version provides many platform features by default (e.g. TLS 1.2 encryption and strong cryptography), and Windows Service Wrapper does not have to apply custom workarounds.

If you want to continue running some of your agents with .NET Framework 2.0, the following extra upgrade steps are required:

  1. Disable auto-upgrade of Windows Service Wrapper on agents by setting the -Dorg.jenkinsci.modules.windows_slave_installer.disableAutoUpdate=true flag on the Jenkins server side.

  2. Manually upgrade agents that run .NET Framework 4.0+ by downloading the recent Windows Service Wrapper 2.x version from WinSW GitHub Releases and replacing the wrapper ".exe" files in the agent workspaces.

What’s next?

We plan to continue expanding the Windows support in Jenkins, including providing official Docker images for newer Windows versions. For example, there is already a pull request which will introduce official agent images for Windows Server Core LTSC 2019 and for Windows Server Core and Nano Server 1909. We are also interested to keep expanding test coverage for Windows platforms. Any contributions and feedback will be appreciated!

We also keep working on improving Windows Services. During his Google Summer of Code 2020 project, Buddhika Chathuranga is working on adding support for YAML Configurations in Windows Service Wrapper, and on better verification of XML and YAML Configurations. See the details on the project page and in the Coding Phase 1 Report. In addition to that, there is ongoing work on a new Windows Service Wrapper 3.0 release which will redesign the CLI and introduce a lot more improvements. If you are interested in contributing to Windows Service Wrapper, see the guidelines here. We will also appreciate your feedback on the WinSW Gitter channel.

External Fingerprint Storage Phase-2 Updates


As another great phase for the External Fingerprint Storage Project comes to an end, we summarise the work done during this phase in this blog post. It was an exciting and fruitful journey, just like the previous phase, and offered some great learning experience.

To understand what the project is about and the past progress, please refer to the phase 1 blog post.

New Stories Completed

We targeted four stories in this phase, namely fingerprint cleanup, fingerprint migration, refactoring the current implementation to use descriptors, and improved testing of the Redis Fingerprint Storage Plugin. We explain these stories in detail below.

Fingerprint Cleanup

This story involved extending the FingerprintStorage API to allow external storage plugins to perform and configure their own fingerprint cleanup strategies. We added the following functionalities to Jenkins core API:

  • FingerprintStorage#iterateAndCleanupFingerprints(TaskListener taskListener)

    • This allows external fingerprint storage implementations to implement their own custom fingerprint cleanup. The method is called periodically by Jenkins core.

  • FingerprintStorage#cleanFingerprint(Fingerprint fingerprint, TaskListener taskListener)

    • This is a reference implementation which can be called by external storage plugins to clean up a fingerprint. It is up to the plugin implementation to decide whether to use this method; a plugin may instead write a custom implementation.

We consume these new API functionalities in the Redis Fingerprint Storage plugin. The plugin uses cursors to traverse the fingerprints, update the build information, and delete fingerprints that no longer have any builds.
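The shape of this cleanup contract can be illustrated with a self-contained sketch. The class below is a simplified, hypothetical stand-in for an external storage, not the actual Jenkins core FingerprintStorage API:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for an external fingerprint store; the real
// FingerprintStorage contract in Jenkins core is richer than this.
class InMemoryFingerprintStorage {
    // Maps fingerprint ID -> number of builds still referencing it.
    private final Map<String, Integer> usages = new LinkedHashMap<>();

    void save(String id, int buildReferences) {
        usages.put(id, buildReferences);
    }

    boolean contains(String id) {
        return usages.containsKey(id);
    }

    // In the real API this is invoked periodically by Jenkins core:
    // walk all fingerprints and delete the ones no build references.
    void iterateAndCleanupFingerprints() {
        Iterator<Map.Entry<String, Integer>> it = usages.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() == 0) {
                it.remove(); // "build-less" fingerprint: clean it up
            }
        }
    }
}
```

A Redis-backed implementation would replace the in-memory map with cursor-based iteration over the Redis keyspace, as described above.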

Earlier, fingerprint cleanup always ran periodically and there was no way to turn it off. We therefore added an option that lets the user disable fingerprint cleanup.

Fingerprint cleanup disable

This was done because keeping redundant fingerprints in storage may be cheaper than the cleanup operation (especially in the case of external storages, which are inexpensive these days).

Fingerprint Migration

Earlier, there was no migration support for fingerprints already saved in the local storage. In this phase, we introduce migration support for users. Old fingerprints are now migrated to the newly configured external storage whenever they are used (lazy migration). This allows gradual migration of old fingerprints from local disk storage to the new external storage.
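A minimal sketch of the lazy-migration idea (the class and fields below are hypothetical stand-ins, not the actual Jenkins core implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of lazy migration: a fingerprint is moved from the
// local store to the external store the first time it is loaded.
class LazyMigratingStorage {
    final Map<String, String> localDisk = new HashMap<>(); // legacy XML store
    final Map<String, String> external = new HashMap<>();  // e.g. Redis

    String load(String id) {
        String fp = external.get(id);
        if (fp == null) {
            fp = localDisk.remove(id);  // found only locally:
            if (fp != null) {
                external.put(id, fp);   // ...migrate it on first access
            }
        }
        return fp;
    }
}
```

Because migration happens on access, fingerprints that are never used again simply stay on the local disk until cleanup removes them.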

Refactor FingerprintStorage to use descriptors

Earlier, whenever an external fingerprint storage plugin was installed, it was enabled by default. We refactored the implementation to use the Descriptor pattern, so the fingerprint storage engine can now be selected from a dropdown on the Jenkins configuration page. The dropdown is shown only when multiple fingerprint storage engines are configured on the system. The Redis Fingerprint Storage Plugin was refactored to use this new implementation.

Fingerprint Storage Engine Dropdown

Strengthened testing for the Redis Fingerprint Storage Plugin

We introduced new connection tests in the Redis Fingerprint Storage Plugin. These tests cover cases like a slow connection, breakage of the connection to Redis, etc. They were implemented using the Toxiproxy module inside Testcontainers.

We introduced tests for Configuration as Code (JCasC) compatibility with the plugin. Documentation for configuring the plugin using JCasC was also added.

We introduced a suite of authentication tests, to verify the proper working of the Redis authentication system. Authentication uses the credentials plugin.

We strengthened our web UI testing to ensure that the configuration page for the plugin works properly as planned.

Other miscellaneous tasks

Please refer to the Jira Epic for this phase.

Release

Changes in the Jenkins core (except migration) were released in Jenkins 2.248. A release of the Redis Fingerprint Storage Plugin will follow soon!

Trying out the new features!

The latest release for the plugin can be downloaded from the experimental update center, instructions for which can be found in the README of the plugin. We appreciate you trying out the plugin, and welcome any suggestions, feature requests, bug reports, etc.

Acknowledgements

The Redis Fingerprint Storage plugin is built and maintained by the Google Summer of Code (GSoC) team for External Fingerprint Storage for Jenkins. Special thanks to Oleg Nenashev, Andrey Falko, Mike Cirioli, Tim Jacomb, and the entire Jenkins community for all their contributions to this project.

Future Work

Some of the topics we aim to tackle in the next phase include a new reference implementation (possibly backed by PostgreSQL), tracing, etc.

Reaching Out

Feel free to reach out to us with any questions, feedback, etc. on the project’s Gitter Channel or the Jenkins Developer Mailing list. We use Jenkins Jira to track issues. Feel free to file issues under the redis-fingerprint-storage-plugin component.


Machine Learning Plugin project - Coding Phase 2 blog post

jenkins gsoc logo small

Welcome back folks!

This blog post is about coding phase 2 of the Jenkins Machine Learning Plugin for GSoC 2020. After successfully passing the evaluation and demo in phase 1, our team moved on to face the challenges of phase 2.

Summary

This coding phase was spent on documentation and on fixing many bugs. As the main feature of connecting to an IPython kernel was completed in phase 1, we were able to focus on fixing minor and major bugs and on documenting for users. As described in the JENKINS-62927 issue, a Docker agent was built so that users do not have to worry about the plugin's Python dependencies. With Python 2 deprecated, we ported our plugin to support Python 3. We have tested the plugin in Conda, venv, and Windows environments, and it has successfully passed end-to-end testing. The code editor feature needs further discussion and analysis; we built a simple editor that may be useful in other ways in the future. PR#35

Main features of Machine Learning plugin

  • Run Jupyter notebook, (Zeppelin) JSON and Python files

  • Run Python code directly

  • Convert Jupyter Notebooks to Python and JSON

  • Configure IPython kernel properties

  • Support to execute Notebooks/Python on Agent

  • Support for Windows and Linux

Upcoming features

  • Extract graph/map/images from the code

  • Save artifacts according to the step name

  • Generate reports for corresponding build

Future improvements

  • Usage of JupyterRestClient

  • Support for multiple language kernels

    • Note: there is no commitment to future improvements during the GSoC period

Docker agent

The following Dockerfile can be used to build a Docker image that serves as an agent for the Machine Learning plugin. This Docker agent can be used to run notebooks or Python scripts.

Dockerfile
FROM jenkins/agent:latest

MAINTAINER Loghi <loghijiaha@gmail.com>

USER root

RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /requirements.txt

RUN pip3 install --upgrade pip setuptools && \
    pip3 install --no-cache-dir -r /requirements.txt && \
    ln -sf /usr/bin/python3 /usr/bin/python && \
    ln -sf /usr/bin/pip3 /usr/bin/pip

USER jenkins

Ported to Python 3

As discussed in the previous meeting, we concluded that the plugin should support Python 3, as Python 2 has been deprecated since the beginning of 2020. The pull request for the Docker agent also had to be ported to support Python 3.

Jupyter Rest Client API

The Jupyter Notebook server API seemed promising, as it can also be used to run notebooks and code. Three API implementations were merged into master. However, we had to focus on what was proposed in the design document and finish all must-have issues and work, so the Jupyter REST client was left for future implementation. It is also a good starting point for community contributions to the plugin.

Fixed bugs for running in agent

There were a few bugs related to the file path of notebooks while building a job. The major problem was caused by the Python dependencies needed to connect to an IPython kernel. All issues and bugs were fixed within the given timeline.

R support as a future improvement

With this we tried to give a glimpse of how the plugin can be extended for multi-language support in the future. We concluded that the kernel should be selected dynamically from the extension of the script file (like eval_model.rb or train_model.r), instead of scripting the same code for each kernel.
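The extension-based kernel selection discussed above could be sketched as follows. The class, method, and kernel-name mapping are illustrative assumptions, not an implemented plugin feature:

```java
import java.util.Locale;

// Hypothetical sketch: pick the Jupyter kernel from the script file's
// extension instead of duplicating pipeline code per language.
class KernelChooser {
    static String kernelFor(String fileName) {
        String name = fileName.toLowerCase(Locale.ROOT);
        if (name.endsWith(".ipynb") || name.endsWith(".py")) {
            return "python3";
        }
        if (name.endsWith(".rb")) {
            return "iruby";     // IRuby, the Jupyter kernel for Ruby
        }
        if (name.endsWith(".r")) {
            return "ir";        // IRkernel, the Jupyter kernel for R
        }
        return "python3";       // sensible default for this plugin
    }
}
```

With such a lookup, a step running train_model.r would transparently be dispatched to an R kernel while notebooks keep using the Python kernel.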

Documentation and End to End testing

Well-explained documentation was published in the repository. A guided tutorial on running a notebook checked out from a git repo on an agent was included in the docs page. Mentors helped to test the plugin on both Linux and Windows.

Code editor with rebuild feature

The code editor was classified as a nice-to-have feature in the design document. Following the idea of the Jenkinsfile replay editor, I implemented the same for the code. At the same time, when the source code comes from git, editing it in place is not an elegant approach. After the discussion, we left the PR open, as it may have use cases in the future if needed.

Jenkins LTS update

The plugin has been updated to support Jenkins LTS 2.204.1, as 2.164.3 had some problems with installing the pipeline-supporting API/plugins.

Installation for experimental version

  1. Enable the experimental update center

  2. Search for Machine Learning Plugin and check the box along it.

  3. Click on Install without restart

The plugin should now be installed on your system.

Custom Distribution Service : Midterm Summary


Hello! After an eventful community bonding period, we finally entered the coding phase. This blog post summarizes the work done up to the midterm of the coding phases, i.e. week 6. If some of the topics here require a more detailed explanation, I will write a separate blog post. These blog posts do not have a very defined format but cover all of the user stories and features implemented.

Project Summary

The main idea behind the project is to build a customizable Jenkins distribution service that can be used to build tailor-made Jenkins distributions. The service provides users with a simple interface to select the configurations they want to build the instance with, e.g. plugins, authorization matrices, etc. Furthermore, it includes a section for sharing community-created distros so that users can find and download already-built Jenkins war/configuration files to use out of the box.

Quick review

Details

I have written separate blog posts for every week in GSoC and the intricate details for each of them can be found at their respective blog pages. I am including a summary for every phase supported with the respective links.

Community Bonding

This year GSoC had a longer community bonding period than any previous edition due to the coronavirus pandemic, which gave me a lot of time to explore, so I spent it building a prototype for my project. I realised early on some of the blockers I might face, which gave me more clarity on how to proceed. I also spent this time preparing a design document, which you can find here.

Week 1

In week one, I spent time getting used to the tech stack I would be using. I was pretty familiar with Spring Boot, but React was something I would be using for the first time, so I spent time studying it. I also got the project page ready, along with the issues I was going to tackle and the milestones I had to achieve before the evaluation. I also spent a bit of time setting up the home page and some front-end components.

Week 2

Once we were done with the initial setup, it was time to work on the core of the project. In the second week, I worked on generating the package configuration and setting up the dummy plugin list display page. I also ran into issues with the Jenkinsfile, so the majority of the time was spent fixing it. Finally, I managed to get around those problems. You can read more about it in the Week 2 blog post.

Week 3

The last week was spent cleaning up most of the code and getting the remaining milestones in. This was probably the hardest part of phase 1 because it involved connecting the front end and back end of the project. You can read more about it here.

Midterm Update

The second phase has been going on for the past 3 weeks, and we have already accomplished a majority of the deliverables, including community configurations, war downloading, and filtering of plugins. More details about the midterm report can be found here.

Getting the Code

The Custom Distribution Service was created from scratch during GSoC and can be found here on GitHub.

Pull Requests Opened: 38

GitHub Issues Completed: 36

Jenkins 2.235.3: New Linux Repository Signing Keys


The Jenkins core release automation project has been delivering Jenkins weekly releases since Jenkins 2.232, April 16, 2020. The Linux repositories that deliver the weekly release were updated with new GPG keys with the release of Jenkins 2.232.

Beginning with Jenkins LTS release 2.235.3, stable repositories will be signed with the same GPG keys that sign the weekly repositories. Administrators of Linux systems must install the new signing keys on their Linux servers before installing Jenkins 2.235.3.

Debian/Ubuntu

Update Debian compatible operating systems (Debian, Ubuntu, Linux Mint Debian Edition, etc.) with the command:

Debian/Ubuntu
# wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -

Red Hat/CentOS

Update Red Hat compatible operating systems (Red Hat Enterprise Linux, CentOS, Fedora, Oracle Linux, Scientific Linux, etc.) with the command:

Red Hat/CentOS
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

Frequently Asked Questions

What if I don’t update the repository signing key?

Updates will be blocked by the operating system package manager (apt, yum, dnf) on operating systems that have not installed the new repository signing key. Sample messages from the operating system may look like:

Debian/Ubuntu
Reading package lists... Done
W: GPG error: https://pkg.jenkins.io/debian-stable binary/ Release:
    The following signatures couldn't be verified because the public key is not available:
        NO_PUBKEY FCEF32E745F2C3D5
E: The repository 'https://pkg.jenkins.io/debian-stable binary/ Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Red Hat/CentOS
Downloading packages:
warning: /var/cache/yum/x86_64/7/jenkins/packages/jenkins-2.235.3-1.1.noarch.rpm:
    Header V4 RSA/SHA512 Signature, key ID 45f2c3d5: NOKEY
Public key for jenkins-2.235.3-1.1.noarch.rpm is not installed

Why is the repository signing key being updated?

The original repository GPG signing key is owned by Kohsuke Kawaguchi. Rather than require that Kohsuke disclose his personal GPG signing key, the core release automation project has used a new repository signing key. The updated GPG repository signing key is used in the weekly repositories and the stable repositories.

Which operating systems are affected?

Operating systems that use Debian package management (apt) and operating systems that use Red Hat package management (yum and dnf) need the new repository signing key.

Other operating systems like Windows, macOS, FreeBSD, OpenBSD, Solaris, and OpenIndiana are not affected.

Are there other signing changes?

Yes, there are other signing changes, though they do not need specific action from users.

The jenkins.war file is signed with a new code signing certificate. The new code signing certificate has been used on weekly releases since April 2020.

Git Plugin Performance Improvement Phase-2 Progress


The second phase of the Git Plugin Performance Improvement project has been great in terms of the progress we have achieved in implementing performance improvement insights derived from the phase one JMH micro-benchmark experiments.

What we’ve learned so far in this project is that git fetch performance is highly correlated with the size of the remote repository. To improve fetch performance in this plugin, our task was to find the difference in performance between the two git implementations available in the Git Plugin, git and JGit.

Our major finding was that git performs much better than JGit when it comes to large repositories (>100 MiB). Interestingly, JGit performs better than git when the size of the repository is less than 100 MiB.

In this phase, we successfully encoded this knowledge derived from the benchmarks into a new feature called the GitToolChooser.

GitToolChooser

This class recommends a git implementation based on the size of the repository, which (from the performance benchmarks) has a strong correlation with git fetch performance.

It utilizes two heuristics to calculate the size:

  • Using cached .git dir from multibranch projects to estimate the size of a repository

  • Providing an extension point which, upon implementation, can use REST APIs exposed by git service providers like GitHub, GitLab, etc. to fetch the size of the remote repository.
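The core of the recommendation can be sketched from the benchmark finding above: JGit wins below roughly 100 MiB, command-line git above it. This is an illustrative simplification; the actual GitToolChooser in PR-931 also implements the two size heuristics:

```java
// Illustrative sketch of the size-based recommendation, not the real
// GitToolChooser class. The ~100 MiB threshold comes from the JMH
// benchmarks described in this post.
class GitToolRecommendation {
    static final long SIZE_THRESHOLD_MIB = 100;

    // Given an estimated repository size, recommend an implementation.
    static String recommend(long repoSizeMiB) {
        return repoSizeMiB < SIZE_THRESHOLD_MIB ? "jgit" : "git";
    }
}
```

In practice the hard part is estimating the size cheaply, which is exactly what the cached .git directory and REST API heuristics above provide.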

Will it optimize your Jenkins instance? That requires one of the following:

  • you have a multibranch project in your Jenkins instance, which the plugin can use to recommend the optimal git implementation

  • you have a branch source plugin installed in the Jenkins instance; the particular branch source plugin will recommend a git implementation using the REST APIs provided by GitHub or GitLab respectively.

The architecture and code for this class is at: PR-931

Note: This functionality is an upcoming feature in the subsequent Git Plugin release.

JMH benchmarks in multiple environments

The benchmarks were being executed on Linux and macOS machines frequently but there was a need to check if the results gained from those benchmarks would hold true across more platforms to ensure that the solution (GitToolChooser) is generally platform-agnostic.

To test this hypothesis, we performed an experiment:

Running git fetch operation for a 400 MiB sized repository on:

  • Windows

  • FreeBSD 12

  • ppc64le

  • s390x

The result of running this experiment is given below:

Performance on multiple platforms

Observations:

  • ppc64le and s390x are able to run the operation in almost half the time it takes for the Windows or FreeBSD 12 machine. This behavior may be attributed to the increased computational power of those machines.

  • The difference in performance between git and JGit remains constant across all platforms which is a positive sign for the GitToolChooser as its recommendation would be consistent across multiple devices and operating systems.

Release Plan 🚀

JENKINS-49757 - Avoid double fetch from Git checkout step. This issue, fixed in phase one, avoids the second fetch in redundant cases. It will be shipped with some benchmarks showing the change in performance due to the removal of the second fetch.

GitToolChooser

  • PR-931 This pull request is under review and will be shipped in one of the subsequent Git Plugin releases.

Current Challenges with GitToolChooser

  • Implement the extension point to support the GitHub Branch Source Plugin, GitLab Branch Source Plugin, and Gitea Plugin.

  • The current version of JGit doesn’t support LFS checkout or sparse checkout; we need to make sure that the recommendation doesn’t break existing use cases.

Future Work

In phase three, we wish to:

  • Release a new version of the Git and Git Client Plugin with the features developed during the project

  • Continue to explore more areas for performance improvement

  • Add a new git operation: git clone (Stretch Goal)

Reaching Out

Feel free to reach out to us for any questions or feedback on the project’s Gitter Channel or the Jenkins Developer Mailing list.

GitHub Checks API Plugin Project - Coding Phase 2


Another great coding phase for the GitHub Checks API Project ends! In this phase, we focused on consuming the checks API in two widely used plugins: the Warnings NG plugin and the Code Coverage API plugin.

Besides the external usage, we have also split the general checks API from its GitHub implementation and released both plugins: the Checks API Plugin and the GitHub Checks Plugin.

Coding Phase 2 Demo [starts from 25:20]

Warning Checks

The newly released Warnings NG plugin 8.4.0 uses the checks API to publish different check runs for different static analysis tools. Without leaving GitHub, users are now able to see the analysis reports they are interested in.

Warning Checks Summary

On GitHub’s conversation tab for each PR, users will see summaries for those checks like the screenshot above. The summaries will include:

  • The status that indicates the quality gate

  • The name of the analysis tool used

  • A short message that indicates statistics of new and total issues

More fine-grained statistics can be found in the Details page.

Severity Statistics

Another practical feature is annotations for specific lines of code. Users can now review the code along with the annotations.

Warning Annotations

Try It

In Warnings NG plugin 8.4.0, the warning checks are enabled as a default feature only for GitHub. For other SCM platforms, a NullPublisher is used, which does nothing. Therefore, you can get those checks for your own GitHub project in just a few steps:

  1. Update Warnings NG plugin to 8.4.0

  2. Install GitHub Checks plugin on your Jenkins instance

  3. Follow the GitHub app authentication guide to configure the credentials for the multi-branch project or GitHub organization project you are going to use

  4. Use warnings-ng plugin in your Jenkinsfile for the project you configured in the last step, e.g.

node {
    stage ('Checkout') {
        checkout scm
    }

    stage ('Build and Static Analysis') {
        sh 'mvn -V -e clean verify -Dmaven.test.failure.ignore'

        recordIssues tools: [java(), javaDoc()], aggregatingResults: 'true', id: 'java', name: 'Java'
        recordIssues tool: errorProne(), healthy: 1, unhealthy: 20
        recordIssues tools: [checkStyle(pattern: 'target/checkstyle-result.xml'),
            spotBugs(pattern: 'target/spotbugsXml.xml'),
            pmdParser(pattern: 'target/pmd.xml'),
            cpd(pattern: 'target/cpd.xml')], qualityGates: [[threshold: 1, type: 'TOTAL', unstable: true]]
    }
}

For more about the pipeline usage of warnings-ng plugin, please see the official documentation.

However, if you don’t want to publish the warnings to GitHub, you can either uninstall the GitHub Checks plugin or disable it by adding skipPublishingChecks: true.

recordIssues enabledForFailure: true, tools: [java(), javaDoc()], skipPublishingChecks: true

Coverage Checks

The coverage checks are achieved by consuming the API in the Code Coverage API plugin. First, in the conversation tab of a PR, users are able to see a summary of the coverage difference compared to previous builds.

Coverage Summary

The Details page will contain some other things:

  • Links to the reference build, including the target branch build from the master branch and the last successful build from this branch

  • Coverage healthy score (the default value is 100% if the threshold is not configured)

  • Coverages and trends of different types in table format

Coverage Details

The pull request for this feature will soon be merged and will be included in the next release of the Code Coverage API plugin. After that, you can use it by adding the section below to your pipeline script:

node {
    stage ('Checkout') {
        checkout scm
    }

    stage ('Line and Branch Coverage') {
        publishCoverage adapters: [jacoco('**/*/jacoco.xml')], sourceFileResolver: sourceFiles('STORE_ALL_BUILD')
    }
}

Like the warning checks, you can also disable the coverage checks by setting the field skipPublishingChecks, e.g.

publishCoverage adapters: [jacoco('**/*/jacoco.xml')], sourceFileResolver: sourceFiles('STORE_ALL_BUILD'), skipPublishingChecks: true

Next Phase

In the next phase, we will turn our attention back to Checks API Plugin and GitHub Checks Plugin and add the following features in future versions:

  • Pipeline Support

    • Users can publish checks directly in a pipeline script without requiring a consumer plugin that supports the checks.

  • Re-run Request

    • Users can re-run Jenkins build through Checks API.

Lastly, we are excited to announce that we are currently making the checks feature available on ci.jenkins.io for all plugins hosted in the jenkinsci GitHub organization; please see INFRA-2694 for more details.

Jenkins graduates in the Continuous Delivery Foundation


We are happy to announce that the Jenkins project has achieved graduated status in the Continuous Delivery Foundation (CDF). This status is officially effective as of Aug 3, 2020. Jenkins is the first project to graduate in the CD Foundation. Thanks to all contributors who made our graduation possible!

In this article, we will discuss what the CD Foundation membership and graduation mean to the Jenkins community. We will also talk about what changed in Jenkins as a part of the graduation, and what are the future steps for the project.

To know more about the Jenkins graduation, see also the announcement on the CD Foundation website. Also see the special edition of the CD Foundation Newsletter for Jenkins user success stories and some surprise content. The press release is available here.

How does CDF membership help us?

About 18 months ago, Jenkins became one of the CDF founding projects, along with Jenkins X, Spinnaker and Tekton. A new foundation was formed to provide a vendor-neutral home for open source projects used for Continuous Delivery and Continuous Integration. Special interest groups were started to foster collaboration between projects and end user companies, most notably: Interoperability, MLOps and Security SIGs. Also, a Community Ambassador role was created to organize local meetups and to provide public-facing community representatives. Many former Jenkins Ambassadors and other contributors are now CDF Ambassadors, and they promote Jenkins and other projects there.

Thanks to this membership we addressed key project infrastructure needs. Starting from Jan 2020, CDF covers a significant part of the infrastructure costs including our services and CI/CD instances running on Microsoft Azure. The CD Foundation provided us with legal assistance required to get code signing keys for the Jenkins project. Thanks to that, we were able to switch to a new Jenkins Release Infrastructure. The foundation sponsors the Zoom account we use for Jenkins Online Meetups and community meetings. In the future we will continue to review ways of reducing maintenance overhead by switching some of our self-hosted services to equivalents provided by the Linux Foundation to CDF members.

Another important CDF membership benefit is community outreach and marketing. It helped us to establish connections with other CI/CD projects and end user companies. Through the foundation we have access to the DevStats service that provides community contribution statistics and helps us track trends and discover areas for improvement. On the marketing side, the foundation organizes webinars, podcasts and newsletters. Jenkins is regularly represented there. The CD Foundation also runs the meetup.com professional account which is used by local Jenkins communities for CI/CD and Jenkins Area Meetups. Last but not least, the Jenkins community is also represented at virtual conferences where CDF has a booth. All of that helps to grow Jenkins visibility and to highlight new features and initiatives in the project.

Why did we graduate?

Jenkins Graduation Logo

The Jenkins project has a long history of open governance, which is a key part of today’s project success. In 2011 the project introduced governance meetings, which are open to anyone. Most of the discussions and decision making happen publicly in the mailing lists. In 2015 we introduced teams, sub-projects and officer roles. In 2017 we introduced the Jenkins Enhancement Proposal process, which helped us make key architecture and governance decisions more open and transparent to the community and to Jenkins users. In 2018 we introduced special interest groups that focus on community needs. In 2019 we expanded the Jenkins governance board so that it has more bandwidth to facilitate initiatives in the project.

Since the Jenkins project's inception 15 years ago, it has been steadily growing. Now it has millions of users and thousands of contributors. In 2019 it saw 5,433 contributors from 111 countries and 272 companies, 67 core and 2,654 plugin releases, 45,484 commits, and 7,000+ pull requests. In 2020 Q2 the project saw 21% growth in pull request numbers compared to 2019 Q2, bots excluded.

One may say that the Jenkins project already has everything needed to succeed. It is a result of continuous work by many community members, and this work will never end as long as the project remains active. Like in any other industry, the CI/CD ecosystem changes every day and sets new expectations from the automation tools in this domain. Just as the tools evolve, open source communities need to evolve so that they can address expectations, and onboard more users and contributors. The CDF graduation process helped us to discover opportunities for improvement, and address them. We reviewed the project processes and compared them with the Graduated Project criteria defined in the CDF project lifecycle. Based on this review, we made changes in our processes and documentation. It should improve the experience of Jenkins users, and help to make the Jenkins community more welcoming to existing and newcomer contributors.

What changed for the project?

Below you can find a few key changes we have applied during the graduation process:

Public roadmap

We introduced a new public roadmap for the Jenkins project. This roadmap aggregates key initiatives in all community areas: features, infrastructure, documentation, community, etc. It makes the project more transparent to all Jenkins users and adopters, and at the same time helps potential contributors find the hot areas and opportunities for contribution. The roadmap is driven by the Jenkins community and it has a fully public process documented in JEP-14.

More details about the public roadmap are coming next week, stay tuned for a separate blogpost. On July 10th we had an online contributor meetup about the roadmap and you can find more information in its materials (slides, video recording).

User Documentation
  • Jenkins Weekly Release line is now documented on our website (here). We have also reworked the downloads page and added guidelines explaining how to verify downloads.

  • A new list of Jenkins adopters was introduced on jenkins.io. This list highlights Jenkins users and references their case studies and success stories, including ones submitted through the Jenkins Is The Way portal. Please do not hesitate to add your company there!

Community
  • We passed the Core Infrastructure Initiative (CII) certification. This certification helps us to verify compliance with open source best practices and to make adjustments in the project (see the bullets below). It also provides Jenkins users and adopters with a public summary about compliance with each best practice. Details are on the Jenkins core page.

  • Jenkins Code of Conduct was updated to the new version of Contributor Covenant. In particular, it sets best practices of behavior in the community, and expands definitions of unacceptable behavior.

  • The default Jenkins contributing template was updated to cover more common cases for plugin contributors. This page provides links to the Participate and Contribute guidelines hosted on our website, and helps potential contributors to easily access the documentation.

  • The Jenkins Core maintainer guide was updated to include maintenance and issues triage guidelines. It should help us to deliver quality releases and to timely triage and address issues reported by Jenkins users.

What’s next?

It is an honor to be the first project to reach the graduated stage in the Continuous Delivery Foundation, but it is also a great responsibility for the project. As a project, we plan to continue participating in the CDF activities and to work with other projects and end users to maintain Jenkins' leading role in the CI/CD space.

We encourage everyone to join the project and participate in evolving the Jenkins project and driving its roadmap. That does not necessarily mean committing code or documentation patches; user feedback is also very important to the project. If you are interested in contributing or sharing your feedback, please contact us in the Jenkins community channels (mailing lists, chats)!

Acknowledgements

CDF graduation work was a major effort in the Jenkins community. Congratulations and thanks to the dozens of contributors who made our graduation possible. I would like to thank Alex Earl, Alyssa Tong, Dan Lorenc, Daniel Beck, Jeff Thompson, Marky Jackson, Mark Waite, Olivier Vernin, Tim Jacomb, Tracy Miranda, Ullrich Hafner, Wadeck Follonier, and all other contributors who helped with reviews and provided their feedback!

Also thanks to the Continuous Delivery Foundation marketing team (Jacqueline Salinas, Jesse Casman and Roxanne Joncas) for their work on promoting the Jenkins project and, specifically, its graduation.

About the Continuous Delivery Foundation

CDF Logo

The Continuous Delivery Foundation (CDF) serves as the vendor-neutral home of many of the fastest-growing projects for continuous delivery, including Jenkins, Jenkins X, Tekton, and Spinnaker, as well as fosters collaboration between the industry’s top developers, end users and vendors to further continuous delivery best practices. The CDF is part of the Linux Foundation, a nonprofit organization. For more information about the foundation, please visit its website.

More information

To know more about the Jenkins graduation in the Continuous Delivery Foundation, see the announcement on the CD Foundation website. Also see the special edition of the CD Foundation Newsletter for Jenkins user success stories and some surprise content. The press release is available here.

Custom Distribution Service : Phase 2 Blogpost


Hello everyone! It is time to wrap up another successful phase of the custom distribution service project, and we have incorporated most of the features that we had planned at the start of the phase. It has been an immense learning experience for me and the entire team.

To understand what the project is about and the past progress, please refer to the phase one blog post here.

Front-End

Filters for Plugins

In the previous phase we implemented the ability to add plugins to the configuration, and the ability to search those plugins via a search bar. Sometimes, though, we would like to filter the plugins by their usage, popularity, stars, etc. Hence we have added a set of filters for these plugins. We support four major filters for now:

  1. Title

  2. Most installed

  3. Relevance

  4. Trending

Filter implementation

The major heavy lifting is done by the plugin API, which takes in the necessary parameters and returns the relevant plugins as a JSON object. Here is an example of the API call URL: const url = `https://plugins.jenkins.io/api/plugins?${params}`.

For details, see:

  • Feature request #9

  • Pull Request #76
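To make the shape of those calls concrete, here is a minimal sketch of building such a URL (a hedged illustration in Python; the parameter names `q`, `sort`, and `limit` follow the public plugins.jenkins.io API, but the exact set the service sends is in the pull request above):

```python
from urllib.parse import urlencode

API = "https://plugins.jenkins.io/api/plugins"

def build_plugins_url(query: str, sort: str, limit: int = 50) -> str:
    """Build a plugin-search URL for the plugins.jenkins.io API.

    `sort` corresponds to the filters described above (e.g. installed,
    relevance, trending, title). This is an illustration, not the
    service's actual front-end code.
    """
    params = urlencode({"q": query, "sort": sort, "limit": limit})
    return f"{API}?{params}"
```

For example, `build_plugins_url("git", "installed")` produces a query whose results are ordered by install count.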

Community Configurations

One major deliverable for the project was the ability for users to share the configurations they develop, so that the configurations can be used widely within the community. For example, quite a lot of Jenkins configurations involve running on AWS, Kubernetes, and so on. Therefore it would be really useful for the community to have a place to find and run these configurations right out of the box.

community-config

Design Decision

The major design decision taken here was whether to include the configurations inside the repository or to have them in a completely new repository. Let us talk about both these approaches.

Having the configurations in the current repository:

This allows us to keep all of the relevant configurations inside the repository itself, so users would not have to go looking for them in other repositories. However, we could have issues with the release cycle and dependencies, since configuration releases would have to happen along with the custom distribution service project releases.

Having the configurations in a different repository:

This allows us to manage all of the configurations and the relevant dependencies separately and easily, thus avoiding any release conflict with the current repository. However, it would be a bit difficult if users could not find this repository.

Decision: We still cannot quite agree on the best method, so for now I have included the URL from which the community configurations are picked up as a configuration variable in the .env file, which can be configured later; it is therefore up to the user. Another advantage of having it configurable is that users can decide to load configurations which are private to their organization as well.
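As a rough sketch of that decision (the variable name here is hypothetical; only the idea of a configurable URL comes from the text above), the .env entry could look like:

```properties
# Hypothetical variable name: where the service fetches community
# configurations from. Point this at a private repository/endpoint to
# load organization-internal configurations instead.
COMMUNITY_CONFIGURATIONS_URL=https://example.org/community-configurations
```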


Back-End

War Generation

The ability to generate and download war files has finally been achieved. The reason this feature took so long to complete is that we had some difficulty implementing the war generation and its tests. However, it is now complete and can be tested successfully.

Things to take care while generating war files

In its current state, the war generation cannot include casc.yml or groovy files; if they are included in the configuration, they would have to be added externally. There is an issue opened here. The war file generation will complain if you try to build a war file with a JCasC file configuration.


Pull Request Creation

This feature was included in the design document that I created after my GSoC selection. It involves the ability to create pull requests via the front-end of the service. The user story behind this feature: I want to share a configuration with the community, but I do not quite know how to use GitHub, or I do not want to do it via the terminal. This feature includes a bot that handles the creation of pull requests in the repository. The bot would have to be installed by the Jenkins organization in this repository, and it would handle the rest.


Disclaimer:

This feature has, however, been put on the back-burner for now, because we are focusing on getting the project self-hosted and would like to implement this once we have a clear path for the project to be hosted by the jenkins-infra team. If you would like to participate in the discussion, here are the links to the pull requests, PR 1 and PR 2, or you can even jump into our Gitter channel.

If you have been following my posts, I mentioned in my second-week blog post that pulling in the JSON file consisting of more than 1600 plugins took a bit more time than I would have liked. We managed to solve that issue using a caching mechanism: the files are now pulled the first time you start the service and downloaded to a temporary folder. The next time you want to view the plugin cards, they are loaded directly from the temp directory, thereby reducing load time.

For details see Pull Request #90
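The caching idea can be sketched roughly like this (a simplified Python illustration, not the service's actual code; the cache file name is made up):

```python
import json
import os
import tempfile

# Hypothetical cache location; the real service picks its own temp path.
CACHE_FILE = os.path.join(tempfile.gettempdir(), "plugins-cache.json")

def get_plugins(fetch):
    """Return the plugin list, downloading it only on the first call.

    `fetch` is any zero-argument callable returning the parsed JSON
    (in the real service this would hit the plugins endpoint over HTTP).
    """
    if os.path.exists(CACHE_FILE):
        # Subsequent calls read straight from the temp directory.
        with open(CACHE_FILE) as f:
            return json.load(f)
    plugins = fetch()
    with open(CACHE_FILE, "w") as f:
        json.dump(plugins, f)
    return plugins
```

The expensive download happens once; every later call is a local file read.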

Fixes and improvements

Port 8080

Port 8080 now shows a proper message instead of the Whitelabel Error Page which is present by default in the Spring Boot Tomcat server setup. It turns out this requires overriding a particular class and inserting a custom message.

For details, see:

  • Pull Request #92

War Generation

Until now, if something went wrong while generating the war file, the service would not complain; it would just swallow the error and hand back a corrupted war file. We have now added error handling that alerts you when something goes wrong. The error message is not very informative as of now, but we are working on making it more informative in the future.

For details, see:

  • War generation error handling #91

  • Add Github controller and jwt helper #66

Dockerfile

One of the major milestones of this phase was to have a project that can be self-hosted, so needless to say we needed a docker-compose.yml to spin up the project with a few commands. The major issue we faced here was making the two containers talk to each other. Let me give you a little bit of context. Our Docker Compose setup is constructed from two separate Dockerfiles, one for the back-end of the service and the other for the front-end. The front-end makes API calls to the back-end via the proxy URL, i.e. localhost:8080. We had to change this, since over the network bridge the two containers talk to each other via the back-end server name, i.e. app-server. To bridge that gap we have this PR, which ensured that Docker Compose works flawlessly.

For details, see:

  • Pull Request #82

However, there was a minor drawback to the above approach: the entire project now relied on Docker Compose and could not run using the simple combination of npm and Maven, since the proxy was different. To fix this, I decided to follow a multiple-environment approach, where environment files pick up the correct proxy and insert it at build time. To elaborate, we have two environment files (using the env-cmd library), .env and docker.env, and the correct file is inserted depending on how you want to build the project. For instance, if you run the project via the Dockerfile, the command run under the hood is something along these lines: env-cmd -f docker.env npm start.

For details, see:

  • Pull Request #88


Windows Installer Upgrades


This article describes the transition from the old Jenkins Windows installer 2.235.2 (32 bit) to the new Jenkins Windows installer 2.235.3 (64 bit).

Let’s take a look at how Jenkins installation on Windows worked before the release of this upgrade.

Step 1

Installer Startup

It’s evident that branding information is not present here.

Step 2

Installation Directory

Jenkins would be installed into the 32 bit programs directory along with a 32 bit Java 8 runtime environment.

Step 3

Install It

There was no option to select the user that would run the Jenkins service or the network port that would be used.

Issues

The previous installer had issues that needed to be resolved:

  • Only supported 32-bit installations

  • Bundled an outdated Java 8 runtime environment

  • No support for Java 11

  • No port selection during installation

  • No choice of account for the Jenkins service

  • The Program Files (x86) directory was used for the Jenkins home directory

Road Forward

The new Jenkins Windows installer resolves those issues:

  • Supports 64 bit installations and drops 32 bit support

  • Supports 64 bit Java 8 and 64 bit Java 11

  • Port selection and validation from the installer

  • Service account selection and validation from the installer

  • Program is installed in Program Files with Jenkins home directory in %AppData% of the selected service account

  • The JENKINS_HOME directory is placed in the LocalAppData directory of the user that the service will run as; this aligns with modern Windows file system layouts

  • The installer has been updated with branding to make it look nicer and provide a better user experience

Screenshots

You may see below the sequence of screenshots for the new installer:

Step 1

Installer Startup

We can see now the Jenkins logo as a prominent part of the installer UI.

Step 2

Installation Directory

Jenkins now installs by default in the 64 bit programs folder rather than in the 32 bit folder. The Jenkins logo and name are now in the header during the entire installation process.

Step 3

Account Selection

Now the installer allows both specifying and testing the credentials by validating that the account has LogonAsService rights.

Step 4

Port Selection

Now the installer also allows specifying the port that Jenkins should run on and will not continue until a valid port is entered and tested.
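A port "test" of this kind usually amounts to checking that nothing is already listening on the chosen port; here is a minimal sketch (an illustration of the concept, not the installer's actual code):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if no service is currently accepting connections
    on the given TCP port; roughly what a 'Test Port' button checks."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 on success, i.e. something is listening.
        return s.connect_ex((host, port)) != 0
```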

Step 5

JRE Selection

Now, instead of bundling a JRE, the installer searches for a compatible JRE on the system (in the screenshot above, no JRE was found). If you would like to use a different JRE from the one found by the installer, you can browse and specify it. Only Java 8 and Java 11 runtimes are supported. If the selected JRE is found to be version 11, the installer will automatically add the necessary arguments and additional jar files for running under Java 11.

Step 6

Install It

All of the items that users can enter in the installer should be overridable on the command line for automated deployment as well. The full list of properties that can be overridden will be available soon.

Next Steps

Windows users have alternatives for their existing Jenkins installations:

Upgrade from inside Jenkins

The "Manage Jenkins" section of the running Jenkins will continue to include an "Upgrade" button for Windows users. You may continue to use that "Upgrade" button to update the Jenkins installation on your Windows computer. Upgrade from inside Jenkins will continue to use the current Java version. Upgrade from inside Jenkins will continue to use the current installation location.

Upgrade with the new Jenkins MSI installer

If you run the new Jenkins MSI installer on your Jenkins that was installed with the old Jenkins MSI installer, it will prompt for a new port and a service account.

  1. Stop and disable the existing Jenkins service from the Windows Service Manager

  2. Run the new installer to create the new installation with desired settings

  3. Stop the newly installed Jenkins service

  4. Copy existing Jenkins configuration files to the new Jenkins home directory

  5. Start the newly installed Jenkins service

After the new Jenkins MSI installer has run, the "Manage Jenkins" section of the running Jenkins will continue to include an "Upgrade" button for Windows users. You may continue to use that "Upgrade" button to update the Jenkins installation on your Windows computer.

Windows Service Wrapper : YAML Configuration Support - GSoC Phase - 02 Updates


Hello, world! GSoC 2020 Phase 2 has now ended, and it was a great period for the WinSW YAML Configuration Support project. In this blog post, I will announce the updates made during GSoC 2020 Phase 2. If you are not already aware of this project, I would recommend reading this blog post, which was published after GSoC 2020 Phase 1.

Project Scope

  • Windows Service Wrapper - YAML configuration support

  • New CLI

  • Support for XML Schema validation

  • Support for YAML Schema validation

YAML Configuration Support

Under WinSW YAML configuration support, the following tasks will be done.

YAML to Object mapping

At the moment YAML object mapping is finished and merged. You can find all the implementations in this Pull Request.

Extend WinSW to support both XML and YAML

This task is already done and merged. Find the implementation in this Pull Request.

Validate Configurations on Startup

In the current implementation, configurations are validated on demand, so there may be failures at runtime due to bad configurations. In this project, I will update WinSW to validate configurations at startup. This is not implemented yet and will be done in phase 3.

YAML Configuration support for Extensions

At the moment there are two internal plugins in WinSW: RunAwayProcessKiller and SharedDirectoryMapper. We allow users to provide configurations for those plugins in the same XML or YAML configuration file which is used to configure WinSW. At the moment this is implemented for XML. I have started working on YAML configuration support for extensions, and it will be finished by the end of phase 3.

Key updates in Phase 2

  • YAML Configuration structure

    • Environment variables

      • Now users can provide environment variables as a sequence of dictionaries, each containing a name and a value.

    • TimeStamp values

      • Users can specify timestamp values in the same manner used in XML (e.g. 10 ms, 5 sec, 3 min)

  • YAML configuration document was published. YAML Configuration Specification

  • Extend the WinSW to support both XML and YAML

Sample YAML Configuration File

id: jenkins
name: Jenkins
description: This service runs Jenkins automation server.
env:
    - name: JENKINS_HOME
      value: '%LocalAppData%\Jenkins.jenkins'
    - name: LM_LICENSE_FILE
      value: host1;host2
executable: java
arguments: >-
    -Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
    -jar "E:\Winsw Test\yml6\jenkins.war" --httpPort=8081
log:
    mode: rotate
onFailure:
    - action: restart
      delay: 10 sec
    - action: reboot
      delay: 1 hour

New CLI

Let me explain briefly why we need a new CLI. In WinSW, we will keep both XML and YAML configuration support. But with the current implementation, the user can’t specify the configuration file explicitly. We also want to let the user skip schema validation. So we decided to move to a new CLI which is more structured, with commands and options. Please read my previous blog post to learn more about the commands and options in the new CLI.

Key updates in phase 2

  • Removed the /redirect command

  • The testwait command was removed and a wait option was added to the test command.

  • The stopwait command was removed and a wait option was added to the stop command.

How to try

Users can configure the Windows Service Wrapper with either an XML or a YAML configuration file using the following steps.

  1. Create the configuration file (XML or YAML).

  2. Save it with the same name as the Windows Service Wrapper executable name.

  3. Place the configuration file inside the directory (or a parent directory) where the Windows Service Wrapper executable is located.

If there are both XML and YAML configuration files, the Windows Service Wrapper will be configured by the XML configuration file.
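The resolution rules above can be sketched as follows (a Python illustration of the described behavior, not WinSW's actual C# implementation):

```python
from pathlib import Path
from typing import Optional

def resolve_config(executable: str) -> Optional[Path]:
    """Locate the configuration file for a given service executable.

    Illustration of the lookup described above: the config file shares
    the executable's base name, is searched for in the executable's
    directory and then its parent, and XML takes precedence over YAML
    when both are present.
    """
    exe = Path(executable)
    for directory in (exe.parent, exe.parent.parent):
        for ext in (".xml", ".yml"):
            candidate = directory / (exe.stem + ext)
            if candidate.is_file():
                return candidate
    return None
```

With both winsw.xml and winsw.yml next to winsw.exe, the XML file wins.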

GSoC 2020 Phase 2 Demo

Future Works

  • YAML schema validation

    • YAML Configuration file will be validated with the JSON schema file.

  • XML Schema validation

    • XML configuration file will be validated with the XSD file. I have started working on this feature and you can find the implementation in this Pull Request.

  • YAML Configuration validate on startup

  • YAML support for Extensions

How to contribute

You can find the GitHub repository at this link. Issues and pull requests are always welcome. You can also communicate with us in the WinSW Gitter channel, which is a great way to get in touch, and there are project sync-up meetings every Tuesday at 13:30 UTC on the channel.

External Fingerprint Storage Phase-3 Update: Introducing the PostgreSQL Fingerprint Storage Plugin


The final phase for the External Fingerprint Storage Project has come to an end and to finish off, we release one more fingerprint storage plugin: the PostgreSQL Fingerprint Storage Plugin!

This post highlights the progress made during phase 3. To understand what the project is about and the past progress, please refer to the phase-1 post and the phase-2 post.

Introducing the PostgreSQL Fingerprint Storage Plugin

Why PostgreSQL?

There were several reasons why it made sense to build another reference implementation, especially backed by PostgreSQL.

Redis is a key-value store, and hence stores fingerprints as blobs. The PostgreSQL plugin defines a relational structure for fingerprints, which offers a more powerful way to query the database for fingerprint information. Fingerprint facets can store extra information inside the fingerprints, which cannot be queried in Redis directly. The PostgreSQL plugin allows powerful indexing and efficient querying strategies which can even query the facet metadata.

Another reason for building this plugin was to provide a basis for other relational database plugins to be built. It also validates the flexibility and design of our external fingerprint storage API.

Since PostgreSQL is a traditional disk storage database, it is more suitable for systems storing a massive number of fingerprints.

Among relational databases, PostgreSQL is quite popular, has extensive support, and is open-source. We expect the new implementation to drive more adoption, and prove to be beneficial to the community.

Installation:

The plugin can be installed using the experimental update center. After starting Jenkins, follow the steps below to download and install the plugin:

  1. Select Manage Jenkins

  2. Select Manage Plugins

  3. Go to Advanced tab

  4. Configure the Update Site URL as: https://updates.jenkins.io/experimental/update-center.json

  5. Click on Submit, and then press the Check Now button.

  6. Go to Available tab.

  7. Search for PostgreSQL Fingerprint Storage Plugin and check the box along it.

  8. Click on Install without restart

The plugin should now be installed on the system.

Usage

Once the plugin has been installed, you can configure the PostgreSQL server details by following the steps below:

  1. Select Manage Jenkins

  2. Select Configure System

  3. Scroll to the section Fingerprints and choose PostgreSQL Fingerprint Storage in the dropdown for Fingerprint Storage Engine.

  4. Configure the following parameters to connect to your PostgreSQL instance:

    Configure PostgreSQL

    • Host - Enter hostname where PostgreSQL is running

    • Port - Specify the port on which PostgreSQL is running

    • SSL - Click if SSL is enabled

    • Database Name - Specify the database name inside the PostgreSQL instance to be used. Please note that the database will not be created by the plugin; the user has to create it.

    • Connection Timeout - Set the connection timeout duration in seconds.

    • Socket Timeout - Set the socket timeout duration in seconds.

    • Credentials - Configure authentication using username and password to the PostgreSQL instance.

  5. Use the Test PostgreSQL Connection button to verify that the details are correct and Jenkins is able to connect to the PostgreSQL instance.

  6. [IMPORTANT] When configuring the plugin for the first time, be sure to press the Perform PostgreSQL Schema Initialization button. It will automatically perform schema initialization and create the necessary indexes. The button can also be used in case the database is wiped out and the schema needs to be recreated.

  7. Press the Save button.

  8. Now, all the fingerprints produced by this Jenkins instance should be saved in the configured PostgreSQL instance!

Querying the Fingerprint Database

The relational structure defined by the PostgreSQL plugin allows users and developers to query the fingerprint data in ways that were not possible with the Redis fingerprint storage plugin.

The fingerprint storage can act as a consolidated storage for multiple Jenkins instances. For example, to search for a fingerprint id across Jenkins instances using the file name, the following query could be used:

SELECT fingerprint_id FROM fingerprint.fingerprint
WHERE filename = 'random_file';

A sample query is provided which can be tweaked depending on the parameters to be searched:

SELECT * FROM fingerprint.fingerprint
WHERE fingerprint_id = 'random_id'
        AND instance_id = 'random_jenkins_instance_id'
        AND filename = 'random_file'
        AND original_job_name = 'random_job'
        AND original_job_build_number = 'random_build_number'
        AND timestamp BETWEEN '2019-12-01 23:59:59'::timestamp AND now()::timestamp

The facets are stored in the database as jsonb. PostgreSQL offers support to query jsonb. This is especially useful for querying the information stored inside fingerprint facets. As an example, the Docker Traceability Plugin stores information like the name of Docker images inside these facets. These can be queried across Jenkins instances like so:

SELECT * FROM fingerprint.fingerprint_facet_relation
WHERE facet_entry->>'imageName' = 'random_container';

At the moment these queries require working knowledge of the database. In the future, these queries can be abstracted away by plugins and the features made available to users directly inside Jenkins.

Releases 🚀

We released the 0.1-alpha-1 version for the PostgreSQL Fingerprint Storage Plugin. Please refer to the changelog for more information.

Redis Fingerprint Storage Plugin 1.0-rc-3 was also released. The changelog provides more details.

A few API changes made in Jenkins core were released in Jenkins 2.253, mainly exposing fingerprint range set serialization methods for plugins.

Future Directions

The relational structure of the plugin allows some performance improvements when implementing cleanup, as well as improving the performance of Fingerprint#add(String job, int buildNumber). These designs were discussed and are in scope for future improvement.

The current external fingerprint storage API supports configuring multiple Jenkins instances to a single storage. This opens up the possibility of developing traceability plugins which can track fingerprints across Jenkins instances.

Please consider reaching out to us if you feel any of the use cases would benefit you, or if you would like to share some new use cases.

Acknowledgements

The PostgreSQL Fingerprint Storage Plugin and the Redis Fingerprint Storage Plugin are maintained by the Google Summer of Code (GSoC) team for External Fingerprint Storage for Jenkins. Special thanks to Oleg Nenashev, Andrey Falko, Mike Cirioli, Tim Jacomb, and the entire Jenkins community for all the contributions to this project.

As we wrap up, we would like to point out that there are plenty of future directions and use cases for the externalized fingerprint storage, as mentioned in the previous section, and we welcome everybody to contribute.

Reaching Out

Feel free to reach out to us with any questions, feedback, etc. on the project’s Gitter channel or the Jenkins Developer mailing list. We use Jenkins Jira to track issues. Feel free to file issues under either the postgresql-fingerprint-storage-plugin or the redis-fingerprint-storage-plugin component, depending on the plugin.

Machine Learning Plugin project - Coding Phase 3 blog post

jenkins gsoc logo small

Good to see you all again!

This is my final blog post, covering coding phase 3 of the Jenkins Machine Learning Plugin for GSoC 2020. With GSoC 2020 coming to an end, we had to finish all the pending issues and testing before a stable release in the main repository. Throughout this program there was a lot of learning, and the hard work will make this plugin valuable to the Data Science and Jenkins communities.

Summary

Combining all of the work in phases 1, 2 and 3, the initial version of the Machine Learning plugin (1.0) was successfully released in the Jenkins plugin repository. An interesting feature introduced in this phase allows users to connect to their existing programming language kernels rather than only the IPython kernel; different kernels can be selected in multiple build steps. Images and graphs produced by Jupyter notebooks are saved in a user-preferred folder in the workspace and can be used for reporting/analytics purposes later. I hope this blog post summarizes the Machine Learning plugin's features and future contributions. Thank you for your interest and support!

Main features of Machine Learning plugin

  • Execute Jupyter notebooks directly

  • Run different language scripts using multiple build steps

  • Convert Jupyter Notebooks to Python

  • Configure Jupyter kernel (IPython, IRKernel, IJulia, etc.) properties

  • Support to execute Notebooks/scripts on Agent

  • Extract graph/map/images from the code

  • Each build step can be associated with a machine learning task

  • Support for Windows and Linux

Future improvements

  • Improving performance of the plugin

  • Try to implement JENKINS-63377

  • Support parameterized definitions in Notebooks JENKINS-63478

  • Increasing testing code coverage

Multiple language kernel support

If there are existing kernels on the system, the user can register them in the global configuration so that they can be applied in the builder/step configuration.

Some popular interactive kernels

  • IPython for python

  • IRKernel for R

  • IJulia for Julia

  • IJavascript for javascript

More kernels and installation guides can be found at https://github.com/jupyter/jupyter/wiki/Jupyter-kernels.

Dump images and graphs

Text output is displayed in the console log. At the same time, images, graphs, heat maps, and HTML files are saved in the workspace. An action is shown in the left panel to display images in real time. Due to the Content Security Policy of Jenkins, some HTML files which contain potentially harmful JavaScript may not render in the Jenkins UI.
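To illustrate the extraction step: an executed notebook is just JSON, and any `image/png` output is a base64 payload that can be written to the workspace. The sketch below is not the plugin’s actual code (the helper name and file-naming scheme are made up here); it only demonstrates the idea using the standard notebook JSON format.

```python
import base64
import json
from pathlib import Path

def dump_images(notebook_path, out_dir):
    """Walk an executed .ipynb file and save every image/png output
    to out_dir, roughly what the plugin does when it extracts graphs
    into the workspace. Returns the list of files written."""
    nb = json.loads(Path(notebook_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    saved = []
    for i, cell in enumerate(nb.get("cells", [])):
        for j, output in enumerate(cell.get("outputs", [])):
            png = output.get("data", {}).get("image/png")
            if png:
                # Notebook image outputs are base64-encoded strings.
                target = out / f"cell{i}_out{j}.png"
                target.write_bytes(base64.b64decode(png))
                saved.append(target)
    return saved
```

The same walk would work for other MIME bundles (`text/html`, `image/svg+xml`) by checking additional keys in each output’s `data` dictionary.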


Fixed bugs

More bugs were identified and fixed through extensive interactive testing. Setting the working directory of kernels was a big issue when scripts needed to read datasets or files. The Zeppelin process launcher was bypassed to fix this issue.

Patch version released

A major bug introduced while setting the process working directory was patched in v1.0.1. The latest release is more stable now.

Acknowledgement

The Machine Learning plugin was developed under the GSoC 2020 program. A huge thanks to Bruno P. Kinoshita, Marky Jackson, Shivay Lamba, Ioannis Moutsatsos, and the org admins for this wonderful experience. I look forward to continuing to contribute to this plugin and to Jenkins.

Jenkins Windows Services: YAML Configuration Support - GSoC Project Results

Hello, world! GSoC 2020 Phase 3 has now ended, and it was a great period for the Jenkins Windows Services - YAML Configuration Support project. In this blog post, I will announce the updates made during GSoC 2020 Phase 2 and Phase 3. If you are not already aware of this project, I would recommend reading this blog post, which was published after GSoC 2020 Phase 1.

Project Scope

  • Windows Service Wrapper - YAML configuration support

  • YAML schema validation

  • New CLI

  • XML Schema validation

YAML Configuration Support

Under WinSW - YAML configuration support, the following tasks were done.

YAML to Object mapping

At the moment, YAML object mapping is finished and merged. You can find the implementation in this Pull Request.

Extend WinSW to support both XML and YAML

This task is already done and merged. Find the implementation in this Pull Request.

YAML Configuration support for Extensions

At the moment there are two internal plugins in WinSW: RunawayProcessKiller and SharedDirectoryMapper. We allow users to provide configurations for those plugins in the same XML or YAML configuration file which is used to configure WinSW. This task is merged as well; see this Pull Request.

YAML schema validation

Users can validate the YAML configuration file against a JSON schema file, for example using the YAML utility tool from the Visual Studio Marketplace.
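As a rough sketch of what schema validation buys you: the schema declares which keys a config must contain, and a validator rejects configs that don’t conform. The required keys below are assumptions taken from the sample config in this post, not the real WinSW JSON schema, which is far more detailed.

```python
# Minimal sketch of schema-style validation. The real check runs a full
# JSON Schema document against the parsed YAML; here we only assert a
# few required top-level keys (assumed, not WinSW's actual schema).
REQUIRED_KEYS = {"id", "name", "executable"}

def check_config(config: dict) -> bool:
    """Raise ValueError if a required top-level key is missing."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"missing required keys: {sorted(missing)}")
    return True
```

A full JSON Schema validator would additionally check value types (e.g. that `env` is a list of name/value pairs), which is exactly what the editor-side YAML tooling does against the published schema file.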

Key updates in Phase 2 and Phase 3

Sample YAML Configuration File

id: jenkins
name: Jenkins
description: This service runs Jenkins automation server.
env:
    - name: JENKINS_HOME
      value: '%LocalAppData%\Jenkins.jenkins'
    - name: LM_LICENSE_FILE
      value: host1;host2
executable: java
arguments: >-
    -Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle
    -jar "E:\Winsw Test\yml6\jenkins.war" --httpPort=8081
log:
    mode: rotate
onFailure:
    - action: restart
      delay: 10 sec
    - action: reboot
      delay: 1 hour
extensions:
    - id: killOnStartup
      enabled: yes
      classname: WinSW.Plugins.RunawayProcessKiller.RunawayProcessKillerExtension
      settings:
          pidfile: '%BASE%\pid.txt'
          stopTimeOut: 5000
          StoprootFirst: false
    - id: mapNetworDirs
      enabled: yes
      classname: WinSW.Plugins.SharedDirectoryMapper.SharedDirectoryMapper
      settings:
          mapping:
              - enabled: false
                label: N
                uncpath: \\UNC
              - enabled: false
                label: M
                uncpath: \\UNC2

New CLI

Let me explain briefly why we need a new CLI. In WinSW, we will keep both XML and YAML configuration support, but with the previous implementation the user couldn’t specify the configuration file explicitly. We also want to let the user skip the schema validation. So we decided to move to a new CLI which is more structured, with commands and options. Please read my previous blog post to learn more about the commands and options in the new CLI.

Key updates in phase 2

  • Removed the /redirect command

  • The testwait command was removed and a wait option was added to the test command.

  • The stopwait command was removed and a wait option was added to the stop command.

How to try

Users can configure the Windows Service Wrapper with either an XML or a YAML configuration file using the following steps.

  1. Create the configuration file (XML or YAML).

  2. Save it with the same name as the Windows Service Wrapper executable name.

  3. Place the configuration file inside the directory (or in a parent directory) where the Windows Service Wrapper executable is located.

If both XML and YAML configuration files are present, the Windows Service Wrapper will be configured by the XML configuration file.
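The lookup rules above can be sketched as follows. This is a hypothetical helper, assuming `.xml` and `.yml` extensions and a search of the executable’s directory followed by its parent; the real WinSW logic may differ in details.

```python
from pathlib import Path

def find_config(executable: Path):
    """Locate the service config file per the rules described above:
    same base name as the executable, searched in its directory and
    then the parent directory, with XML preferred over YAML."""
    base = executable.stem
    for directory in (executable.parent, executable.parent.parent):
        for ext in (".xml", ".yml"):  # XML wins when both exist
            candidate = directory / (base + ext)
            if candidate.exists():
                return candidate
    return None  # no configuration file found
```

For example, with `winsw.exe` next to both `winsw.xml` and `winsw.yml`, this sketch returns the `.xml` file, matching the precedence rule stated above.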

GSoC 2020 Phase 2 Demo

Future Work

  • XML Schema validation

    • The XML configuration file will be validated against the XSD file. I have started working on this feature, and you can find the implementation in this Pull Request.

  • YAML Configuration validate on startup

How to contribute

You can find the GitHub repository at this link. Issues and pull requests are always welcome. You can also communicate with us in the WinSW Gitter channel, which is a great way to get in touch; there are project sync-up meetings every Tuesday at 13:30 UTC on the Gitter channel.
