Tag Archives: tycho

Eclipse p2 site references

Say you publish a p2 repository for your Eclipse bundles and features. Typically, your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will then also have to use the p2 repositories of those dependencies, otherwise they won’t be able to install your software. If your software relies only on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, to put it another way, you rely on an Eclipse project that is not part of the simultaneous release, or you need a version different from the one provided by a specific Eclipse release.

You should then tell your users to use those specific p2 sites as well. This, however, degrades the user experience, at least from the installation point of view: one would like to point Eclipse at a p2 site and install from it without further configuration.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of 3 alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin “to aggregate all transitive dependencies, making the resulting p2 repository self-contained” (see the sketch right after this list). Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references, which are meant just for this task and have been part of the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software’s dependencies, and the content metadata of the generated p2 repository will then contain links to those sites. p2 will automatically contact those sites when installing software (at least from Eclipse; from the command line we’ll have to use specific arguments, as we’ll see later). Please keep in mind that this mechanism works only with recent versions of Eclipse (if I remember correctly, it was added a couple of years ago).
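For the first option, the configuration of the site project boils down to a single flag; this is just a sketch, with the plugin version managed in pluginManagement as usual:

    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-p2-repository-plugin</artifactId>
      <configuration>
        <includeAllDependencies>true</includeAllDependencies>
      </configuration>
    </plugin>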

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: https://github.com/LorenzoBettini/tycho-site-references-example. You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 has this been implemented correctly. (I personally fixed this: https://github.com/eclipse/tycho/issues/141.) If you use an older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewriting metadata contents and re-compressing the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting, this feature also requires (i.e., does NOT include) the external feature org.eclipse.xtext.xbase. We don’t actually use that Xtext feature, but it’s useful to recreate a situation where we need a specific p2 site containing it;
  • a site project with the category.xml that is used to generate our p2 repository during the Tycho build.

To make the example interesting, the dependency on the Xbase feature is declared as follows:
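A sketch of the relevant fragment of the feature.xml (the match rule shown here is an assumption; see the example project for the exact file):

    <requires>
       <import feature="org.eclipse.xtext.xbase" version="2.25.0" match="greaterOrEqual"/>
    </requires>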

So we require version 2.25.0.

The target platform is defined directly in the parent POM as follows (again, to keep things simple):
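A sketch of the relevant part (the repository ids are arbitrary; the Xtext URL follows the usual location of Xtext release sites):

    <repositories>
      <repository>
        <id>2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
      <repository>
        <id>xtext-2.25.0</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/</url>
      </repository>
    </repositories>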

Note that I explicitly added the Xtext 2.25.0 site repository because in the 2020-12 Eclipse site Xtext is available only in the older version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

The category.xml is initially defined as follows:
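Something like this (the feature and category names are placeholders; see the example project for the exact file):

    <?xml version="1.0" encoding="UTF-8"?>
    <site>
       <feature id="example.feature" version="0.0.0">
          <category name="example.category"/>
       </feature>
       <category-def name="example.category" label="Example"/>
    </site>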

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, also the standard Eclipse dependencies have to be found, but as said above, the Eclipse update site is already part of the Eclipse distributions). You then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify this is to try to install the example feature in an Eclipse installation, pointing to the p2 repository generated during the build. Of course, we’ll keep this verification mechanism automatic and easy as well. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):
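A sketch of that configuration follows; the property name (p2.director.output) and the installed feature id are placeholders, and the list of bundles needed by the director application is abbreviated (see the example project for the complete configuration):

    <plugin>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>tycho-eclipserun-plugin</artifactId>
      <configuration>
        <repositories>
          <repository>
            <id>2020-12</id>
            <layout>p2</layout>
            <url>https://download.eclipse.org/releases/2020-12</url>
          </repository>
        </repositories>
        <dependencies>
          <dependency>
            <artifactId>org.eclipse.equinox.p2.director.app</artifactId>
            <type>eclipse-plugin</type>
          </dependency>
          <!-- ... the other bundles needed by the director application ... -->
        </dependencies>
      </configuration>
      <executions>
        <execution>
          <id>verify-p2-site</id>
          <phase>verify</phase>
          <goals>
            <goal>eclipse-run</goal>
          </goals>
          <configuration>
            <!-- must be a single (long) line in the real POM -->
            <appArgLine>-application org.eclipse.equinox.p2.director -repository file:${project.build.directory}/repository -installIU example.feature.feature.group -destination ${p2.director.output} -followReferences</appArgLine>
          </configuration>
        </execution>
      </executions>
    </plugin>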

The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties, so that you can copy this plugin configuration into your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences, which will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem:

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):
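The result is something like this (sketch):

    <site>
       <repository-reference location="https://download.eclipse.org/releases/2020-12" enabled="true"/>
       <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/" enabled="true"/>
       <!-- ... features and category definitions as before ... -->
    </site>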


Now, when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to those p2 repositories (with a slightly different syntax, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which are usually the same). Now, our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; now everything is fine:

Note that this run requires much more time: now the p2 director has to contact all the p2 sites defined as references and also has to download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run a clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100 MB, every time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ some caching by using a standard p2 mechanism: the bundle pool! This way, dependencies will have to be downloaded only the very first time; afterwards, the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:
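A sketch (the property name is a placeholder):

    <properties>
      <p2.director.bundlePool>${user.home}/.p2-director-bundlepool</p2.director.bundlePool>
    </properties>

and the corresponding argument appended to the director’s appArgLine:

    -bundlepool ${p2.director.bundlePool}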

Note that now the plug-ins during the verification step will NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins, you only need to make sure that they can be installed.

In a CI server, you should cache the bundle pool directory as well if you want to benefit from the speedup. E.g., this example comes with a GitHub Actions workflow that stores the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now, we have put the same sites used for the target platform. Referring to the whole Eclipse releases p2 site might be too much, since it contains all the features and bundles of all the projects participating in the Eclipse SimRel. In the target platform this might be OK, since we might want some dependencies only for testing. For our p2 repository, we can instead tweak the references so that they refer only to the minimal sites containing all our features’ requirements.

For this example, we can replace the two sites above with four smaller sites providing all the requirements (actually, the Xtext 2.25.0 one is just the same as before):
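For instance, something along these lines; the exact URLs are assumptions based on the usual locations of those projects’ release sites, so check the example project for the ones actually used:

    <repository-reference location="https://download.eclipse.org/eclipse/updates/4.18" enabled="true"/>
    <repository-reference location="https://download.eclipse.org/modeling/emf/emf/builds/release/2.24" enabled="true"/>
    <repository-reference location="https://download.eclipse.org/modeling/emft/mwe/updates/releases/2.12.0/" enabled="true"/>
    <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0/" enabled="true"/>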

You can verify that removing any of them will lead to installation failures.

The first time this tweaking might require some time, but you now have an easy way to test this!

Keeping things consistent

When you update your target platform, i.e., the versions of your dependencies, you must remember to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place, so that everything else is kept consistent!

We can again use properties in the parent POM:
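For example (xtext-version is a name I chose; eclipse-version is the property mentioned again below):

    <properties>
      <xtext-version>2.25.0</xtext-version>
      <eclipse-version>2020-12</eclipse-version>
    </properties>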

We want to rely on such properties also in the category.xml, using the standard Maven mechanism of copying resources with filtering.

We create another category.xml in the templates subdirectory of the site project, using the above properties in the site references (at least in the ones where we want control over a specific version):
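The references in the template then look like this (sketch):

    <repository-reference location="https://download.eclipse.org/releases/${eclipse-version}" enabled="true"/>
    <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}/" enabled="true"/>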

and in the site project we configure the Maven resources plugin appropriately:
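A sketch of that configuration (the chosen phase just needs to precede package):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
      <executions>
        <execution>
          <id>filter-category-xml</id>
          <phase>generate-resources</phase>
          <goals>
            <goal>copy-resources</goal>
          </goals>
          <configuration>
            <outputDirectory>${basedir}</outputDirectory>
            <resources>
              <resource>
                <directory>templates</directory>
                <filtering>true</filtering>
              </resource>
            </resources>
          </configuration>
        </execution>
      </executions>
    </plugin>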

Of course, we execute that in a phase that comes BEFORE the phase in which the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project), replacing the properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂

Publishing an Eclipse p2 composite repository on GitHub Pages

I had already described the process of publishing an Eclipse p2 composite update site in previous posts.

Well, now that Bintray is shutting down, and Sourceforge is quite slow in serving an Eclipse update site, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.

GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, have a soft bandwidth limit of 100 GB per month, and have a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100 GB per month, unfortunately, I might say 😉

In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!

The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.

These are the main points:

The p2 children repositories and the p2 composite repositories will be published with standard Git operations, since we publish them in a GitHub repository.

Let’s recap what p2 composite update sites are. Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must remain available, each one with its own p2 metadata, which should never be overwritten. On the contrary, the metadata that we will overwrite is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

Directory Structure

I want to be able to serve these composite update sites:

  • the main one collects all the versions
  • a composite update site for each major version (e.g., 1.x, 2.x, etc.)
  • a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)

What I aim at is to have the following paths:

  • releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
  • root: the main composite update site collecting all versions.

To summarize, we’ll end up with a remote directory structure like the following one:
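(a sketch; the actual repository may contain a few more files, like p2.index)

    ├── compositeArtifacts.xml
    ├── compositeContent.xml
    ├── releases
    │   ├── 1.0.0.v20210307-2037
    │   │   └── ... a complete simple p2 repository ...
    │   └── 1.1.0.v20210307-2104
    │       └── ...
    └── updates
        └── 1.x
            ├── compositeArtifacts.xml
            ├── compositeContent.xml
            ├── 1.0.x
            │   ├── compositeArtifacts.xml
            │   └── compositeContent.xml
            └── 1.1.x
                ├── compositeArtifacts.xml
                └── compositeContent.xml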

Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):

  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates for the main global update site: every new version will be available when using this site;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x/1.0.x for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.

If you want to change this structure, you have to carefully tweak the ant file we’ll see in a minute.

Building Steps

During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build are:

  • Clone the repository hosting the composite update site (in this example, https://github.com/LorenzoBettini/p2composite-github-pages-example-updates);
  • Create the p2 repository (with Tycho, as usual);
  • Copy the p2 repository into the cloned repository, in a subdirectory of the releases directory (the name of the subdirectory is the qualified version of the project, e.g., 1.0.0.v20210307-2037);
  • Update the composite update sites information in the cloned repository (using the p2 tools);
  • Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).

First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:
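A sketch; only github-update-repo and site.label are named in this post, while the other property names are mine:

    <properties>
      <github-update-repo>git@github.com:LorenzoBettini/p2composite-github-pages-example-updates.git</github-update-repo>
      <github-local-clone>${project.build.directory}/checkout</github-local-clone>
      <releases-directory>${github-local-clone}/releases</releases-directory>
      <site.label>p2 Composite Example Site</site.label>
    </properties>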

It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.

Then, in the parent POM, we configure in the pluginManagement section all the versions of the plugin we are going to use (see the sources of the example on GitHub).

The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:
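A sketch matching the qualifiers shown in this post (e.g., v20210307-2037):

    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-packaging-plugin</artifactId>
      <configuration>
        <format>'v'yyyyMMdd-HHmm</format>
      </configuration>
    </plugin>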

Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:
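A sketch of the profile in the parent POM:

    <profile>
      <id>release-composite</id>
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-install-plugin</artifactId>
            <configuration>
              <skip>true</skip>
            </configuration>
          </plugin>
          <plugin>
            <artifactId>maven-deploy-plugin</artifactId>
            <configuration>
              <skip>true</skip>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>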

The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build plugins section):
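Here is a condensed sketch of the Git-related part (the exec-maven-plugin); the commit and push executions follow the same pattern as the clone one, and the other plugins are detailed in the list below:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>git-clone</id>
          <phase>pre-package</phase>
          <goals>
            <goal>exec</goal>
          </goals>
          <configuration>
            <executable>git</executable>
            <arguments>
              <argument>clone</argument>
              <argument>--depth=1</argument>
              <argument>${github-update-repo}</argument>
              <argument>${github-local-clone}</argument>
            </arguments>
          </configuration>
        </execution>
        <!-- id git-add-commit, phase verify: git add -A and commit in ${github-local-clone} -->
        <!-- id git-push, phase deploy: git push in ${github-local-clone} -->
      </executions>
    </plugin>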

Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:

  • with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
  • with the exec-maven-plugin we configure the execution of the Git commands:
    • we clone the Git repository of the update site (with --depth=1 we only get the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase pre-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned into the output directory target/checkout
    • in the phase verify (that is, after the phase package), we commit the changes (which will be done during the phase package as shown in the following points)
    • in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
  • with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
  • with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.

Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.

Now, from the parent POM on your computer, it’s enough to run
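    # sketch: activate the release profile described above
    mvn deploy -Prelease-composite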

and the release will be performed. When cloning, you’ll be asked for the password of the GitHub repository and, if you’re not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on the GitHub token, for example.

If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.

You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external, which will store the reference into the innermost composite child (e.g., into 1.2.x); here’s an example that adds a reference to the Eclipse main update site:
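A sketch of such an execution; the property names passed to the Ant task (composite.base.dir, external.repo.url) are placeholders for whatever the Ant file actually expects:

    <execution>
      <id>add-external-reference</id>
      <phase>package</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
      <configuration>
        <!-- must be a single (long) line in the real POM -->
        <appArgLine>-application org.eclipse.ant.core.antRunner -buildfile packaging-p2composite.ant p2.composite.add.external -Dsite.label="${site.label}" -Dcomposite.base.dir=${github-local-clone} -Dexternal.repo.url=https://download.eclipse.org/releases/2020-12</appArgLine>
      </configuration>
    </execution>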

For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!

Releasing from GitHub Actions

The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.

First of all, we are talking about a GitHub Actions workflow stored and executed in the GitHub repository of your project, NOT in the GitHub repository of the update site. In this example, it is https://github.com/LorenzoBettini/p2composite-github-pages-example.

In such a workflow, we need to push to another GitHub repository. To do that

  • create a GitHub personal access token (selecting repo);
  • create a secret in the GitHub repository of the project (where we run the GitHub Actions workflow), in this example it is called ACTIONS_TOKEN, with the value of that token;
  • when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates;
  • we also need to configure the Git user and email in advance (any values will do), otherwise Git will complain when creating the commit.

To summarize, these are the interesting parts of the release.yml workflow (see the full version here: https://github.com/LorenzoBettini/p2composite-github-pages-example/blob/master/.github/workflows/release.yml):
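A sketch (action versions and JDK distribution are illustrative):

    name: Release with Maven
    on:
      push:
        branches:
          - release

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK
            uses: actions/setup-java@v2
            with:
              distribution: 'adopt'
              java-version: 11
          - name: Configure Git
            run: |
              git config --global user.name 'GitHub Actions'
              git config --global user.email 'actions@github.com'
          - name: Build and deploy with Maven
            run: >
              mvn deploy -Prelease-composite
              -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates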

The workflow is configured to be executed only when you push to the release branch.

Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.

Final thoughts

With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release: it’s just a matter of cloning that repository, doing your changes (i.e., removing a subdirectory of releases and updating the composite metadata accordingly), committing, and pushing. Now and then you might also clean up the history of such a Git repository (the history is not important in this context) by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above, you could even do that on every release: just commit with amend and push with force!

Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones, or release candidates.

Happy releasing! 🙂

Caching dependencies in GitHub Actions

I recently started to port all my Java projects from Travis CI to GitHub Actions, since Travis CI changed its pricing model. (I’ll soon update my book on TDD and Build Automation in that respect as well.)

I’ve always used caching mechanisms during the builds in Travis CI to speed up the builds: caching Maven dependencies, especially in big projects, can save a lot of time. In my case, I’m mostly talking about Eclipse plug-in projects built with Maven/Tycho, where the target platform resolution might have to download a few hundred megabytes. Thus, I wanted to use caching also in GitHub Actions, and there’s an action for that.

In this post, I’ll show my strategies for using the cache, in particular, using different workflows based on different operating systems, which are triggered only on some specific events. I’ll use a very simple example, but I’m currently using this strategy on this Xtext project: https://github.com/LorenzoBettini/edelta, which uses more than 300 MB of dependencies.

The post assumes that you’re already familiar with GitHub Actions.

Warning: Please keep in mind that caches will also be evicted automatically (currently, the documentation says that “caches that are not accessed within the last week will also be evicted”). However, we can still benefit from caches if we are working on a project for a few days in a row.

To experiment with build mechanisms, I suggest you use a very simple example. I’m going to use a simple Maven Java project created with the corresponding Maven archetype: a Java class and a trivial JUnit test. The Java code is not important in this context; we’ll concentrate on the build automation mechanisms.

The final project can be found here:
https://github.com/LorenzoBettini/github-actions-cache-example.

This is the initial POM for this project:
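Essentially the archetype-generated POM (group id and JUnit version are illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>github-actions-cache-example</artifactId>
      <version>1.0-SNAPSHOT</version>
      <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
      </properties>
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.12</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </project>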

This is the main workflow file (stored in .github/workflows/maven.yml):
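A sketch consistent with the cache key described below (action versions are illustrative):

    name: Java CI with Maven
    on: [push, pull_request]

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK
            uses: actions/setup-java@v1
            with:
              java-version: 1.8
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Build with Maven
            run: mvn -B verify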

This is a pretty standard workflow for a Java project built with Maven. This workflow runs for every push on any branch and every PR.

Note that we specify to cache the directory where Maven stores all the downloaded artifacts, ~/.m2.

For the cache key, we use the OS where our build is running, a constant string “-m2-” and the result of hashing all the POM files (we’ll see how we rely on this hashing later in this post).

Remember that the cache key will be used in future builds to restore the files saved in the cache. When no cache is found with the given key, the action searches for alternate keys if restore-keys has been specified. As you can see, we specified as the restore key something similar to the actual key: the running OS and the constant string “-m2-”, but no hashing. This way, if we change our POMs, the hash will be different; but if a previous cache exists, we can still restore it and make use of the cached values. (See the official documentation for further details.) We’ll then have to download only the new dependencies, if any. The cache will then be updated at the end of the successful job.

I usually rely on this strategy for the CI of my projects:

  • build every push on any branch using a Linux build environment;
  • build PRs on any branch also on a Windows and a macOS environment (I wasn’t using Windows with Travis CI, since it did not provide Java support in that environment; that’s another advantage of GitHub Actions, which provides Java support also on Windows).

Thus, I have another workflow definition just for PRs (stored in .github/workflows/pr.yml):
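A sketch; compared to the previous workflow, only the trigger and the OS matrix change:

    name: Java CI with Maven (PR)
    on: pull_request

    jobs:
      build:
        strategy:
          matrix:
            os: [windows-latest, macos-latest]
        runs-on: ${{ matrix.os }}
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK
            uses: actions/setup-java@v1
            with:
              java-version: 1.8
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Build with Maven
            run: mvn -B verify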

Besides the build matrix for OSes, that’s basically the same as the previous workflow. In particular, we use the same strategy for defining the cache key (and restore key). Thus, we have a different cache for each different operating system.

Now, let’s have a look at the documentation of this action:

A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch. For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch feature-b has the base branch feature-a, a workflow triggered on feature-b would have access to caches created in the default branch (main), feature-a, and feature-b.

Access restrictions provide cache isolation and security by creating a logical boundary between different workflows and branches. For example, a cache created for the branch feature-a (with the base main) would not be accessible to a pull request for the branch feature-b (with the base main).

What does that mean in our scenario? Since the workflow running on Windows and macOS is executed only in PRs, the cache for these two configurations will never be saved for the master branch. In turn, this means that each time we create a new PR, this workflow will have no chance of finding a cache to restore: the branch of the PR is new (so no cache is available for that branch), and the base branch (typically, “master” or “main”) will have no cache saved for these two OSes. Summarizing, the builds of the PRs for these two configurations will always have to download all the Maven dependencies from scratch. Of course, if we don’t immediately merge the PR and we push other commits on the branch of the PR, the builds for these two OSes will find a saved cache (if the previous builds of the PR succeeded); but, in any case, the first build of each new PR for these two OSes will take more time (actually, much more time in a complex project with lots of dependencies).

Thus, if we want to benefit from caching also on these two OSes, we need another workflow for Windows and macOS that runs when a PR is merged, so that the cache is stored also for the master branch (actually, we could use this strategy when merging any PR into any base branch, not necessarily the main one).

Here’s this additional workflow (stored in .github/workflows/pr-merge.yml):
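A sketch (branch patterns as described below):

    name: Updates Cache on Windows and macOS
    on:
      push:
        branches:
          - master
          - 'experiments*'

    jobs:
      build:
        if: contains(github.event.head_commit.message, 'Merge pull request')
        strategy:
          matrix:
            os: [windows-latest, macos-latest]
        runs-on: ${{ matrix.os }}
        steps:
          - uses: actions/checkout@v2
          - name: Set up JDK
            uses: actions/setup-java@v1
            with:
              java-version: 1.8
          - name: Cache Maven packages
            uses: actions/cache@v2
            with:
              path: ~/.m2
              key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
              restore-keys: ${{ runner.os }}-m2-
          - name: Build with Maven
            run: mvn -B verify -DskipTests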

Note that we intercept the push event (since a merge of a PR is actually a push), but we have an if statement that enables the workflow only when the commit message contains the string “Merge pull request”, which is the default message when merging a PR on GitHub. In this example, we are only interested in PRs merged into the master branch or into any branch starting with “experiments”, but you can adjust that as you see fit. Furthermore, since this workflow is only meant to update the Maven dependency cache, we skip the tests (with -DskipTests) to save some time (especially in a complex project with lots of tests).

This way, after the first merged PR, the PR workflows running on Windows and macOS will find a cache (at least as a starting point).

We can also do better than that and avoid running the Maven build at all if there’s no need to update the cache. Remember that we use the result of hashing all the POM files in our cache key? This means that if our POMs do not change, then we don’t expect our dependencies to change either (provided we’re not using SNAPSHOT dependencies). Now, in the documentation we also read:

When key matches an existing cache, it’s called a cache hit, […] When key doesn’t match an existing cache, it’s called a cache miss, and a new cache is created if the job completes successfully.

The idea is to skip the Maven step in the above workflow “Updates Cache on Windows and macOS” if we have a cache hit, since in that case we expect that no new dependencies need to be downloaded (our POMs haven’t changed). This is the interesting part to change:
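Sketch:

    - name: Cache Maven packages
      uses: actions/cache@v2
      id: cache
      with:
        path: ~/.m2
        key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
        restore-keys: ${{ runner.os }}-m2-
    - name: Build with Maven
      if: steps.cache.outputs.cache-hit != 'true'
      run: mvn -B verify -DskipTests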

Note that we need to define an id for the cache step in order to intercept the cache hit or miss, and that id must match the one used in the if statement.

This way, if we have a cache hit the workflow for updating the cache on Windows and macOS will be really fast since it won’t even run the Maven build for updating the cache.

If we change the POM, e.g., switch to JUnit 4.13.1, push the change, create a PR, and merge it, then, the workflow for updating the cache will actually run the Maven build since we have a cache miss: the key of the cache has changed. Of course, we’ll still benefit from the already cached dependencies (and all the Maven plugins already downloaded) and we’ll update the cache with the new dependencies for JUnit 4.13.1.

Final notes

One might think of intercepting the merge of a PR by using on: pull_request (as we did in the pr.yml workflow). However, “There’s no way to specify that a workflow should be triggered when a pull request is merged”. In the official forum, you can find a solution based on the “closed” PR event and the inspection of the PR “merged” field. So one might think of updating the pr.yml workflow accordingly and getting rid of the additional pr-merge.yml workflow. However, from my experiments, this solution will not make use of caching, which is the main goal of this post. The symptoms of the problem are this message when the workflow initially tries to restore a cache:

Warning: Cache service responded with 403

and this message when the cache should be saved:

Unable to reserve cache with key …, another job may be creating this cache.

Another experiment I tried was to remove the running OS from the cache key, e.g., m2-${{ hashFiles('**/pom.xml') }} instead of ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}, with a corresponding restore key, i.e., m2- instead of ${{ runner.os }}-m2-. I was hoping to reuse the same cache across different OS environments. This seems to work for macOS, which appears to be able to reuse the Linux cache. Unfortunately, it does not work for Windows, so I gave up on that solution.

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in the MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier from the versions, and Eclipse bundle and feature versions are processed automatically: the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂) and you have to remove the .qualifier in the Eclipse bundles accordingly.

To do that in an automated way, you can use a combination of Maven plugins and the tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here: https://github.com/LorenzoBettini/tycho-set-version-example.

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run:
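    # sketch: escape the ${...} placeholders so they are resolved by Maven, not by the shell
    mvn \
      build-helper:parse-version \
      org.eclipse.tycho:tycho-versions-plugin:set-version \
      -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.minorVersion}.\${parsedVersion.incrementalVersion}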

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:
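    # sketch
    mvn \
      org.codehaus.mojo:versions-maven-plugin:set -DremoveSnapshot=true \
      org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata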

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of https://github.com/LorenzoBettini/tycho-set-version-example we have:
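Something along these lines (the versions shown are assumptions; check the example project for the actual ones):

    <build>
      <pluginManagement>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <version>3.2.0</version>
          </plugin>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>versions-maven-plugin</artifactId>
            <version>2.8.1</version>
          </plugin>
          <plugin>
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>tycho-versions-plugin</artifactId>
            <version>2.2.0</version>
          </plugin>
        </plugins>
      </pluginManagement>
    </build>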


Conclusions

In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle; that’s why we run them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects differs from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions with separate commands (e.g., first on the parent, then on the other modules of the aggregator, etc.).

Analyzing Eclipse plug-in projects with Sonarqube

In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).

The code of this example is available on GitHub: https://github.com/LorenzoBettini/tycho-sonarqube-example

This can be seen as a follow-up to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects”. I’ll basically reuse almost the same structure and a few parts of that example. The Jacoco report part is not related to Sonarqube, but I’ll leave it there.

The structure of the projects is as follows:
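(a sketch based on the project names used in this post)

    example.parent
    example.plugin1
    example.plugin1.tests
    example.plugin2
    example.plugin2.tests
    example.tests.parent
    example.tests.report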

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The project example.tests.parent contains all the common configurations for test projects (and the test report; please refer to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects” for the details of the report project, which is not strictly required for Sonarqube).

This is its pom:
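A condensed sketch showing only the parts discussed next (group id, versions and the overall structure are assumptions; see the example project for the real file):

    <project ...>
      <parent>
        <groupId>example</groupId>
        <artifactId>example.parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../example.parent</relativePath>
      </parent>
      <artifactId>example.tests.parent</artifactId>
      <packaging>pom</packaging>

      <properties>
        <!-- empty by default; set in the jacoco profile below -->
        <additionalTestArgLine></additionalTestArgLine>
      </properties>

      <profiles>
        <profile>
          <id>jacoco</id>
          <properties>
            <!-- tycho.testArgLine is set by jacoco:prepare-agent -->
            <additionalTestArgLine>${tycho.testArgLine}</additionalTestArgLine>
          </properties>
        </profile>
      </profiles>
    </project>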

Note that this also shows a possible way of dealing with a custom argLine for the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage); the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration’s argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don’t try to define it as an empty property by default: when tycho-surefire runs, it would use that empty value, ignoring the one set by jacoco:prepare-agent (argLine’s properties are resolved before jacoco:prepare-agent is executed).

Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the pom of the project example.plugin2.tests:
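Sketch (the extra -Xmx argument is just an example of a custom argument):

    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-surefire-plugin</artifactId>
      <configuration>
        <!-- refer to the indirection property, NOT to tycho.testArgLine directly -->
        <argLine>${additionalTestArgLine} -Xmx1024m</argLine>
      </configuration>
    </plugin>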

You can check that code coverage works as expected by running the build with the jacoco profile. (It’s important to verify that jacoco is configured correctly in your projects before running the Sonarqube analysis: if coverage is not working in Sonarqube, then something is wrong in the configuration for Sonarqube, not in the jacoco configuration, as we’ll see in a minute.)
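The command is along these lines:

    mvn clean verify -Pjacoco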

Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 intentionally has some uncovered code).

Now I assume you already have Sonarqube installed.

Let’s run a first Sonarqube analysis.
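Assuming a locally running Sonarqube with the default settings, the command is along these lines:

    mvn clean verify -Pjacoco sonar:sonar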

This is the result:

So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:

Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!

That’s because the jacoco .exec files are generated, by default, in the target folder of the tests projects; so, when Sonarqube analyzes the projects:

  • it finds the jacoco.exec file when it analyzes a tests project but can only see the sources of the tests project (not the ones of the SUT)
  • when it analyzes a SUT project it cannot find any jacoco.exec file.

We could fix this by configuring the Maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration would have to be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube Maven property sonar.jacoco.reportPaths and “trick” Sonarqube like that (we do that in the properties section of the parent pom):
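Something like the following, which exploits the foo/foo.tests naming convention mentioned right below:

    <properties>
      <sonar.jacoco.reportPaths>${project.basedir}/../${project.artifactId}.tests/target/jacoco.exec</sonar.jacoco.reportPaths>
    </properties>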

This way, when it analyzes example.plugin1, it will use the jacoco.exec found in the example.plugin1.tests project (if you follow the convention foo and foo.tests, this works out of the box; otherwise, you have to list all the jacoco.exec paths of all the projects in that property, separated by commas).

Let’s run the analysis again:

OK, now code coverage is collected on the SUT classes, as we wanted. Of course, now the test classes appear as uncovered (remember, when it analyzes example.plugin1.tests, it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).

This leads us to another problem: test classes should be excluded from Sonarqube analysis. This works out of the box in standard Maven projects because source folders of SUT and source folders of test classes are separate and in the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.

In fact, issues are reported also on test classes:

We can fix both problems by putting these two Sonarqube properties in the properties section of the tests.parent pom (note the link to the Eclipse bug about this behavior):
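A sketch; the exact values are assumptions, the idea being that test projects expose no main sources to Sonarqube while src is registered as test sources:

    <properties>
      <sonar.sources></sonar.sources>
      <sonar.tests>src</sonar.tests>
    </properties>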

This will be inherited by our tests projects and for those projects, Sonarqube will not analyze test classes.

Run the analysis again and see the results: code coverage only on the SUT and issues only on the SUT (remember that, in this example, MyClass1 is intentionally not completely covered):

You might be tempted to set the property sonar.skip to true for test projects, but then you would lose the collection of the JUnit test reports.

The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).

We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
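For example, with the coverage-specific exclusion property (the property name is an assumption; adapt it to your Sonarqube version):

    <properties>
      <sonar.coverage.exclusions>**/Main.java</sonar.coverage.exclusions>
    </properties>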

Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.

Run the analysis again and Main is not considered in code coverage:

Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂

This example is also analyzed from Travis using Sonarcloud (https://sonarcloud.io/dashboard?id=example%3Aexample.parent).

Hope you enjoyed this tutorial and Happy new year! 🙂


JaCoCo Code Coverage and Report of multiple Eclipse plug-in projects

In this tutorial I’ll show how to use Jacoco with Maven/Tycho to create a code coverage report of multiple Eclipse plug-in projects.

The code of the example is available here: https://github.com/LorenzoBettini/tycho-multiproject-jacoco-report-example.

This is the structure of the projects:

(screenshot: the structure of the example projects)

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The Maven parent pom file is written such that Jacoco is enabled only when the profile “jacoco” is explicitly activated:
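A sketch of that profile (the jacoco version is illustrative; report-aggregate, used later, requires at least 0.7.7):

    <profiles>
      <profile>
        <id>jacoco</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.jacoco</groupId>
              <artifactId>jacoco-maven-plugin</artifactId>
              <version>0.7.9</version>
              <executions>
                <execution>
                  <goals>
                    <goal>prepare-agent</goal>
                  </goals>
                  <configuration>
                    <excludes>
                      <!-- exclude the class with the main method from coverage -->
                      <exclude>**/Main.class</exclude>
                    </excludes>
                  </configuration>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>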

This is just an example of configuration; you might want to tweak it as you see fit for your own projects (in this example, I also created a Main.java with a main method, which I exclude from the coverage). By default, the jacoco-maven-plugin will “prepare” the Jacoco agent in the property tycho.testArgLine (since our test projects are Maven projects with packaging eclipse-test-plugin); since tycho.testArgLine is automatically used by the tycho-surefire-plugin and we have no special test configuration, the pom.xml of our test projects is as simple as this:
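A sketch (group id and version are placeholders):

    <project ...>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>example</groupId>
        <artifactId>example.parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../example.parent</relativePath>
      </parent>
      <artifactId>example.plugin1.tests</artifactId>
      <packaging>eclipse-test-plugin</packaging>
    </project>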

If you need a custom configuration, then you have to explicitly mention ${tycho.testArgLine} in the <argLine>.

Now we want to create an aggregate Jacoco report for the classes in the plugin1 and plugin2 projects (tested by plugin1.tests and plugin2.tests, respectively); each test project will generate a jacoco.exec file with coverage information. Before Jacoco 0.7.7, creating an aggregate report wasn’t that easy: it required storing all coverage data in a single .exec file and then using an Ant task (with a manual configuration specifying all the source file and class file paths). In 0.7.7, the jacoco:report-aggregate goal was added, which makes creating a report really easy!

Here’s an excerpt of the documentation:

Creates a structured code coverage report (HTML, XML, and CSV) from multiple projects within reactor. The report is created from all modules this project depends on. From those projects class and source files as well as JaCoCo execution data files will be collected. […] This also allows to create coverage reports when tests are in separate projects than the code under test. […]

Using the dependency scope allows to distinguish projects which contribute execution data but should not become part of the report:

  • compile: Project source and execution data is included in the report.
  • test: Only execution data is considered for the report.

So it’s just a matter of creating a separate project (I called that example.tests.report) where we:

  • configure the report-aggregate goal (in this example I bind that to the verify phase)
  • add as dependencies with scope compile the projects containing the actual code and with scope test the projects containing the tests and .exec data
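A condensed sketch of such a pom (group id and versions are placeholders; in the real project these bits live in the jacoco profile):

    <project ...>
      <parent>
        <groupId>example</groupId>
        <artifactId>example.parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../example.parent</relativePath>
      </parent>
      <artifactId>example.tests.report</artifactId>
      <packaging>pom</packaging>

      <build>
        <plugins>
          <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <executions>
              <execution>
                <id>report-aggregate</id>
                <phase>verify</phase>
                <goals>
                  <goal>report-aggregate</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>

      <dependencies>
        <!-- code under test: contributes sources and classes (default scope compile) -->
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin1</artifactId>
          <version>1.0.0-SNAPSHOT</version>
        </dependency>
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin2</artifactId>
          <version>1.0.0-SNAPSHOT</version>
        </dependency>
        <!-- test projects: contribute only execution data (jacoco.exec) -->
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin1.tests</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>example</groupId>
          <artifactId>example.plugin2.tests</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </project>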

That’s all!

Now run this command in the example.parent project:
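    # activate the jacoco profile defined in the parent pom
    mvn clean verify -Pjacoco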

and when the build terminates, you’ll find the HTML code coverage report for all your projects in the directory (again, you can configure jacoco with a different output path, that’s just the default):

/example.tests.report/target/site/jacoco-aggregate

(screenshot: the generated Jacoco HTML report)

Since, besides the HTML report, jacoco also creates an XML report, you can use any tool that keeps track of code coverage, like the free online solution Coveralls (https://coveralls.io/). Coveralls is automatically accessible from Travis (I assume that you know how to connect your GitHub projects to Travis and Coveralls). So we just need to configure the coveralls-maven-plugin with the path of the Jacoco XML report (I’m doing this in the parent pom, in the pluginManagement section, in the jacoco profile):
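A sketch (the plugin version and the report path are illustrative; adapt the path to where the aggregate XML report is actually generated):

    <plugin>
      <groupId>org.eluder.coveralls</groupId>
      <artifactId>coveralls-maven-plugin</artifactId>
      <version>4.3.0</version>
      <configuration>
        <jacocoReports>
          <jacocoReport>${project.basedir}/example.tests.report/target/site/jacoco-aggregate/jacoco.xml</jacocoReport>
        </jacocoReports>
      </configuration>
    </plugin>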

And here’s the Travis file:
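A sketch (the JDK and the exact goals may differ in the real project):

    language: java
    jdk: oraclejdk8

    script:
      - mvn clean verify -Pjacoco coveralls:report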

This is the coveralls page for the example project https://coveralls.io/github/LorenzoBettini/tycho-multiproject-jacoco-report-example. And an example of coverage information:

(screenshot: coverage information on Coveralls)

That’s all!

Happy coverage! 🙂

Publish an Eclipse p2 composite repository on Bintray

In a previous post I showed how to manage an Eclipse composite p2 repository and how to publish it on Sourceforge. In this post I’ll show a similar procedure to publish an Eclipse p2 composite repository on Bintray. The procedure is part of the Maven/Tycho build, so that it is fully automated. Moreover, the pom.xml and the Ant files can be fully reused in your own projects (just a few properties have to be adapted).

The complete example is available at https://github.com/LorenzoBettini/p2composite-bintray-example.

First of all, this procedure is quite different from the ones shown in other blogs (e.g., this one, this one and this one): in those approaches, the p2 metadata (i.e., artifacts.jar and content.jar) are uploaded independently of any version, always in the same directory, thus overwriting the existing metadata. As a result, only the latest version of the published features and bundles will be available to the end users. This is quite against the idea that old versions should still be available and, in general, that all versions should be available to the end users, especially if a new version has some breaking change and the user is not willing to update (see p2’s do’s and do not’s). For this reason, I always publish p2 composite repositories.

Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must remain available, each one with its own p2 metadata, which should never be overwritten.

On the contrary, the metadata that we will overwrite is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

In this example, all the binary artifacts are meant to be made available from: https://dl.bintray.com/lorenzobettini/p2-composite-example/. (NOTE: the artifacts are not effectively available from that URL anymore).

Directory Structure

What I aim at is to have the following remote paths on Bintray:

  • releases: in this directory all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20160129-1616/ etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory the composite metadata will be uploaded. The URL https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/ should be used by your Eclipse users to install the features in their Eclipse or for target platform resolution (depending on the kind of projects you’re developing). All versions will be available from this composite update site; I call this the main composite. Moreover, you can provide the URL of a child composite update site that includes all versions of a given major.minor stream, e.g., https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.0/, https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.1/, etc. I call each one of these a child composite.
  • zipped: in this directory we will upload the zipped p2 repository for each version.

Summarizing, we’ll end up with a remote directory structure like the following:
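(a sketch; version numbers and the zip file name are illustrative)

    releases/
        1.0.0.v20160129-1616/
            artifacts.jar
            content.jar
            features/ ...
            plugins/ ...
        1.1.0.v20160229-1200/
            ...
    updates/
        compositeArtifacts.xml
        compositeContent.xml
        1.0/
            compositeArtifacts.xml
            compositeContent.xml
        1.1/
            compositeArtifacts.xml
            compositeContent.xml
    zipped/
        p2composite.example.site-1.0.0.v20160129-1616.zip
        ...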

Uploading using REST API

In the posts I mentioned above, the REST API is used with one URL shape for uploading the p2 metadata (not associated with any specific version) and another one for uploading features and plugins (associated with a package version). But this has the drawback I was mentioning above.

Thanks to the Bintray Support, I managed to use a different scheme that allows me to store p2 metadata for a single p2 repository in the same directory of the p2 repository itself and to keep those metadata separate for each single release.

To achieve this, we need to use another URL scheme for uploading, using matrix params options or header options.

This means that we’ll upload everything with a URL of this shape:
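A sketch with curl; the package and version values are illustrative, while bt_package, bt_version and publish are Bintray’s matrix params:

    curl -u "$BINTRAY_USER:$BINTRAY_APIKEY" \
      -T "$FILE_TO_UPLOAD" \
      "https://api.bintray.com/content/lorenzobettini/p2-composite-example/releases/1.0.0.v20160129-1616/$FILE_TO_UPLOAD;bt_package=p2-composite-example;bt_version=1.0.0.v20160129-1616;publish=1"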

On the contrary, for uploading the p2 composite metadata, we’ll use the scheme of the other approaches, i.e., we will not associate them to any specific version; we just need to specify the desired remote path where we’ll upload the main and the child composite metadata.

Building Steps

During the build, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build, which rely on some Ant scripts, can be summarized as follows:

  • Retrieve the remote composite metadata compositeContent/Artifacts.xml, both for the main composite and the child composite. If these metadata cannot be found remotely, we fail gracefully: it means that it is the first time we release or, if only the child composite cannot be found, that we’re releasing a new major.minor version. These metadata will be downloaded into the directories target/main-composite and target/child-composite, respectively; these directories will be created in any case.
  • Preprocess the possibly downloaded composite metadata: if the property p2.atomic.composite.loading is set to true, we must temporarily set it to false, otherwise we will not be able to add additional elements to the composite site with the p2 Ant tasks.
  • Update the composite metadata, with the version information passed from the Maven/Tycho build, using the p2 Ant tasks for composite repositories;
  • Post-process the composite metadata (i.e., set the property p2.atomic.composite.loading above back to true; see https://bugs.eclipse.org/bugs/show_bug.cgi?id=356561 for further details about this property). UPDATE: Please have a look at the comment section, in particular the comments from pascalrapicault, about this property.
  • Upload everything to Bintray: the new p2 repository, its zipped version, and all the composite metadata.

IMPORTANT: the pre- and post-processing of composite metadata that we’ll implement assumes that such metadata are not compressed. Anyway, I always prefer not to compress the composite metadata, since that makes it easier to manually change or review them later.

Technical Details

You can find the complete example at https://github.com/LorenzoBettini/p2composite-bintray-example. Here I’ll sketch the main parts. First of all, all the mechanisms for updating the composite metadata and pushing to Bintray (i.e., the steps detailed above) are in the project p2composite.example.site, which is a Maven/Tycho project with eclipse-repository packaging.

The pom.xml has some properties that you should adapt to your project, and some other properties that can be left as they are if you’re OK with the defaults:
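A sketch; only child.repository.path.prefix, bintray.user and bintray.apikey are named in this post, while the other property names are mine:

    <properties>
      <bintray.user>lorenzobettini</bintray.user>
      <!-- bintray.apikey must NOT be stored here: see the settings.xml below -->
      <bintray.repo>p2-composite-example</bintray.repo>
      <bintray.releases.path>releases</bintray.releases.path>
      <bintray.composite.path>updates</bintray.composite.path>
      <bintray.zip.path>zipped</bintray.zip.path>
      <child.repository.path.prefix>../../releases/</child.repository.path.prefix>
    </properties>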

If you change the default remote paths it is crucial that you update the child.repository.path.prefix consistently. In fact, this is used to update the composite metadata for the composite children. For example, with the default properties the composite metadata will look like the following (here we show only compositeContent.xml):
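Something like this, following the standard p2 composite format (versions and timestamp are illustrative):

    <?xml version='1.0' encoding='UTF-8'?>
    <?compositeMetadataRepository version='1.0.0'?>
    <repository name='p2 Composite Example'
        type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
      <properties size='2'>
        <property name='p2.timestamp' value='1454086165279'/>
        <property name='p2.atomic.composite.loading' value='true'/>
      </properties>
      <children size='2'>
        <child location='../../releases/1.0.0.v20160129-1616'/>
        <child location='../../releases/1.1.0.v20160229-1200'/>
      </children>
    </repository>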

You can also see that two crucial properties, bintray.user and, in particular, bintray.apikey, should not be made public. You should keep these hidden; for example, you can put them in your local .m2/settings.xml file, associated with the Maven profile that you use for releasing (as illustrated in the following). This is an example of settings.xml:
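A sketch:

    <settings>
      <profiles>
        <profile>
          <id>release-composite</id>
          <properties>
            <bintray.user>lorenzobettini</bintray.user>
            <bintray.apikey>YOUR_BINTRAY_APIKEY</bintray.apikey>
          </properties>
        </profile>
      </profiles>
    </settings>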

In the pom.xml of this project there is a Maven profile, release-composite, that should be activated when you want to perform the release steps described above.

We also make sure that the generated zipped p2 repository has a name with the fully qualified version.

In the release-composite Maven profile, we use the maven-antrun-plugin to execute some ant targets (note that the Maven properties are automatically passed to the Ant tasks): one to retrieve the remote composite metadata, if they exist, and the other one as the final step to deploy the p2 repository, its zipped version and the composite metadata to Bintray:

The Ant tasks are defined in the file bintray.ant. Please refer to the example for the complete file. Here we sketch the main parts.

This Ant file relies on some properties with default values, and on other properties that are expected to be passed when running these tasks, i.e., from the pom.xml:
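For instance (the property names here are hypothetical):

<!-- properties with default values -->
<property name="bintray.base.url" value="https://api.bintray.com/"/>
<property name="main.composite.dir" location="target/main-composite"/>
<property name="child.composite.dir" location="target/child-composite"/>
<!-- bintray.user, bintray.apikey, bintray.owner, bintray.repo, etc.,
     are expected to be passed in from the pom.xml -->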

To retrieve the existing remote composite metadata we execute the following, using the standard Ant get task. Note that if there is no composite metadata (e.g., it’s the first release that we execute, or we are releasing a new major.minor version so there’s no child composite for that version) we ignore the error; however, we still create the local directories for the composite metadata:
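A sketch of such a target (the Ant get task supports ignoreerrors and multiple nested resources; the property and target names are assumptions):

<target name="get-composite-metadata">
  <!-- created in any case, even if the downloads fail -->
  <mkdir dir="${main.composite.dir}"/>
  <mkdir dir="${child.composite.dir}"/>
  <get dest="${main.composite.dir}" ignoreerrors="true">
    <url url="${main.composite.url}/compositeContent.xml"/>
    <url url="${main.composite.url}/compositeArtifacts.xml"/>
  </get>
  <get dest="${child.composite.dir}" ignoreerrors="true">
    <url url="${child.composite.url}/compositeContent.xml"/>
    <url url="${child.composite.url}/compositeArtifacts.xml"/>
  </get>
</target>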

For preprocessing/postprocessing the composite metadata (in order to deal with the property p2.atomic.composite.loading as explained in the previous section) we have:
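Conceptually, the two targets just flip the value of that property in the (uncompressed) XML files; a sketch using the Ant replace task (the target and property names are assumptions):

<target name="preprocess-composite-metadata">
  <replace dir="${composites.dir}" includes="**/composite*.xml"
      token="&lt;property name='p2.atomic.composite.loading' value='true'/&gt;"
      value="&lt;property name='p2.atomic.composite.loading' value='false'/&gt;"/>
</target>

<target name="postprocess-composite-metadata">
  <replace dir="${composites.dir}" includes="**/composite*.xml"
      token="&lt;property name='p2.atomic.composite.loading' value='false'/&gt;"
      value="&lt;property name='p2.atomic.composite.loading' value='true'/&gt;"/>
</target>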

Finally, to push everything to Bintray we execute curl with appropriate URLs, as described in the previous section about the REST API. The single tasks for pushing to Bintray are similar, so we only show the one for uploading the p2 repository associated with a specific version, and the one for uploading the p2 composite metadata. As detailed at the beginning of the post, we use different URL shapes.
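Two sketches of such tasks (the URL shapes follow Bintray’s REST API for uploading content; the owner, repo, package, and path properties are assumptions):

<target name="push-zipped-repository">
  <exec executable="curl" failonerror="true">
    <arg value="-u"/>
    <arg value="${bintray.user}:${bintray.apikey}"/>
    <arg value="-T"/>
    <arg value="${zipped.repo.file}"/>
    <arg value="${bintray.base.url}content/${bintray.owner}/${bintray.repo}/${bintray.package}/${version}/${zipped.repo.name};publish=1"/>
  </exec>
</target>

<target name="push-composite-metadata">
  <exec executable="curl" failonerror="true">
    <arg value="-u"/>
    <arg value="${bintray.user}:${bintray.apikey}"/>
    <arg value="-T"/>
    <arg value="${main.composite.dir}/compositeContent.xml"/>
    <arg value="${bintray.base.url}content/${bintray.owner}/${bintray.repo}/compositeContent.xml;bt_package=${bintray.package};bt_version=${version};publish=1"/>
  </exec>
</target>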

To update the composite metadata we execute an Ant task using the tycho-eclipserun-plugin. This way, we can run the Eclipse application org.eclipse.ant.core.antRunner, which in turn can execute the p2 Ant tasks for managing composite repositories.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly in one (long) line.
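A sketch of that configuration (the repository URL, Ant file name, and system properties are assumptions; depending on the Tycho version, the plugin’s groupId may be org.eclipse.tycho instead of org.eclipse.tycho.extras):

<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <repository>
      <id>eclipse</id>
      <layout>p2</layout>
      <url>http://download.eclipse.org/releases/oxygen</url>
    </repository>
    <!-- split across lines here for readability only -->
    <appArgLine>-application org.eclipse.ant.core.antRunner
      -buildfile packaging-p2composite.ant
      p2.composite.add
      -Dsite.label="${site.label}"
      -Dcomposite.base.dir=${project.build.directory}
      -DunqualifiedVersion=${unqualifiedVersion}
      -DbuildQualifier=${buildQualifier}</appArgLine>
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.ant.core</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <!-- further eclipse-plugin dependencies may be needed, e.g., org.apache.ant -->
    </dependencies>
  </configuration>
  <executions>
    <execution>
      <id>add-p2-composite-repository</id>
      <phase>package</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
    </execution>
  </executions>
</plugin>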

The file packaging-p2-composite.ant is similar to the one I showed in a previous post. We use the p2 Ant tasks for adding a child to a composite p2 repository (recall that if there is no existing composite repository, the task for adding a child also creates new compositeContent.xml/compositeArtifacts.xml; if a child with the same name already exists, the Ant task will not add anything new).
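The core of that Ant file looks something like this (a sketch; the nested repository attributes reflect the documented format of the p2 Ant tasks, while the property names are assumptions):

<target name="p2.composite.add">
  <p2.composite.repository failOnExists="false">
    <repository compressed="false" location="${composite.base.dir}" name="${site.label}"/>
    <add>
      <repository location="${child.repository}"/>
    </add>
  </p2.composite.repository>
</target>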

Removing Released Artifacts

In case you want to remove a released version: since we upload the p2 repository and its zipped version as part of a package’s version, we just need to delete that version using the Bintray web UI. However, this procedure will never remove the metadata, i.e., artifacts.jar and content.jar; the same holds if you want to remove the composite metadata. For these metadata files, you need to use the REST API, e.g., with curl. I put a shell script in the example to quickly remove all the metadata files from a given remote Bintray directory.
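Such a script essentially issues DELETE requests against Bintray’s content API; a minimal sketch (the owner, repo, and remote path are made-up values):

#!/bin/bash
# remove the p2 metadata files from a remote Bintray directory
for f in artifacts.jar content.jar compositeContent.xml compositeArtifacts.xml; do
  curl -X DELETE -u "$BINTRAY_USER:$BINTRAY_APIKEY" \
    "https://api.bintray.com/content/owner/repo/updates/$f"
done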

Performing a Release

For performing a release you just need to run a command like the following (a sketch: the exact goals may differ in your setup, but the release-composite profile must be active)
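mvn clean verify -Prelease-composite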

on the p2composite.example.tycho project.

Concluding Remarks

As I said, the procedure shown in this example is meant to be easily reusable in your projects. The Ant files can simply be copied as they are; the same holds for the Maven profile. You only need to specify the Maven properties with the values for your specific project, and adjust your settings.xml with sensitive data like the Bintray API key.

Happy Releasing! 🙂

Running SWTBot tests in Travis

The problem I was having when running SWTBot tests in Travis CI was that I could not use the new container-based infrastructure of Travis, which allows you to cache things like the local Maven repository. This was not possible because running SWTBot tests requires a window manager (on Linux, you can use metacity), which had to be installed during the Travis build; installing it requires sudo, and using sudo prevents the use of the container-based infrastructure. Not using the cache meant that each build had to download all the Maven artifacts from scratch.

Now things have changed 🙂

When running in the container-based infrastructure, you’re still allowed to use Travis’ APT sources and packages extensions, as long as the package you need is in their whitelist. Metacity was not there, but I opened a request for it, and now metacity is available 🙂

Now you can use the container-based infrastructure and still install metacity (note that you won’t be able to cache installed APT packages, so metacity will be reinstalled each time the build runs, but installing metacity is much faster than downloading all the Maven/Tycho artifacts).

The steps to run SWTBot tests in Travis can be summarized as follows:
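A .travis.yml implementing these steps might look like the following sketch, reconstructed from the bullet points below (the exact xvfb and metacity invocations may differ in your setup):

language: java
sudo: false

cache:
  directories:
    - $HOME/.m2

env: DISPLAY=:99.0

addons:
  apt:
    packages:
      - metacity

# old steps, which required sudo (left only for comparison):
# before_install:
#   - sudo apt-get update -qq
#   - sudo apt-get install -qq metacity

before_script:
  - sh -e /etc/init.d/xvfb start
  - metacity --sm-disable --replace &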

I left the old steps, “before_install”, commented out, just for comparison.

  • “sudo: false” enables the container-based infrastructure.
  • “cache:” ensures that the local Maven repository is cached.
  • “env:” sets the DISPLAY environment variable, needed for the graphical display.
  • “addons:apt:packages” uses the extension that allows you to install whitelisted APT packages (metacity, in our case).
  • “before_script:” starts the virtual framebuffer and then metacity.

Then, you can specify the Maven command to run your build. Here are some examples (sketches; the profile name is made up):
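script: mvn clean verify
# or, e.g., activating a hypothetical profile for the UI tests:
# script: mvn clean verify -Pui-tests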

Happy SWTBot testing! 🙂


Deploy your own custom Eclipse

This is the follow-up to my previous post about building a custom Eclipse distribution. In this post I’ll show how to deploy the p2 site and the zipped products on Sourceforge. Concerning the p2 site, I’ll use the same technique I showed in another post for building a composite update site and deploying it with rsync, with some modifications.

In particular, we’ll accomplish several tasks:

  • creating and deploying the update site with only the features (without the products)
  • creating and deploying the update site including product definition and the zipped provisioned products
  • creating a self-contained update site (including all the dependencies)
  • providing an ant script for installing your custom Eclipse from the net

The code of the example can be found at: https://github.com/LorenzoBettini/customeclipse-example. In particular, I’ll start from where I left off in the previous post.

The source code assumes a specific remote directory on Sourceforge that is part of one of my Sourceforge projects and is writable only with my username and password. If you want to test this example, you can simply modify the property remote.dir in the parent pom, specifying a local path on your computer (or pass a value to the Maven command with the syntax -Dremote.dir=<localpath>). Indeed, rsync can also synchronize two local directories.

Recall that when you perform a synchronization, specifying the wrong local directory might lead to the complete deletion of that directory. Moreover, source and destination URLs in rsync have different semantics depending on whether or not they end with a slash, so make sure you understand them if you need to customize this Ant file or pass special URLs.

Creating and Deploying the p2 composite site

This part reuses most of what I showed in the previous posts.

In this blog post we want to be able to add a new p2 site to the composite update site (and deploy it) for two different projects:

  • customeclipse.example.site: This is the update site with only our features and bundles
  • customeclipse.example.ide.site: This is the update site with our features and bundles and the Eclipse product definition.

To reuse the Ant files for managing the p2 composite update site and syncing it with rsync, together with the Maven executions that use such Ant files, we put the Ant files in the parent project customeclipse.example.tycho, and we configure the Maven executions in the pluginManagement section of the parent pom.

We also put in the parent pom all the properties we’ll use for the p2 composite site and for rsync (again, please have a look at the previous posts for their meaning):
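A sketch of these properties (the names and values are illustrative; the Sourceforge remote path follows the user,project@frs.sourceforge.net scheme):

<properties>
  <site.label>Custom Eclipse Example Site</site.label>
  <composite.base.dir>${java.io.tmpdir}/customeclipse-composite</composite.base.dir>
  <remote.dir>myuser,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/customeclipse/updates</remote.dir>
  <local.dir>${composite.base.dir}/updates</local.dir>
</properties>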

The pluginManagement section contains the configuration for managing the composite update site.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly in one (long) line.
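The shape is analogous to the eclipserun configuration sketched earlier on this page; the interesting part is the appArgLine (again a sketch, with hypothetical property names):

<appArgLine>-application org.eclipse.ant.core.antRunner
  -buildfile ${ant-files-path}/packaging-p2composite.ant
  p2.composite.add
  -Dsite.label="${site.label}"
  -Dcomposite.base.dir=${composite.base.dir}
  -DunqualifiedVersion=${unqualifiedVersion}
  -DbuildQualifier=${buildQualifier}</appArgLine>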

The pluginManagement section also contains the configuration for updating and committing the composite update site to Sourceforge.
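For instance (a sketch; the execution ids, phases, and Ant target names are assumptions):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>update-local-composite-site</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <ant antfile="${ant-files-path}/rsync.ant" target="rsync-update"/>
        </target>
      </configuration>
    </execution>
    <execution>
      <id>commit-composite-site</id>
      <phase>verify</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <ant antfile="${ant-files-path}/rsync.ant" target="rsync-commit"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>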

Now, we can simply activate such plugins in the build sections of our site projects described above.

In particular, we activate such plugins only inside profiles; for example, in the customeclipse.example.site project we have:
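A sketch of how the profiles reference the plugins configured in the parent’s pluginManagement (only the skeleton is shown; the eclipserun groupId may differ by Tycho version):

<profiles>
  <profile>
    <id>release-composite</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.eclipse.tycho.extras</groupId>
          <artifactId>tycho-eclipserun-plugin</artifactId>
        </plugin>
      </plugins>
    </build>
  </profile>
  <profile>
    <id>deploy-composite</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>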

In customeclipse.example.ide.site we have similar sections, but the profiles are called differently, release-ide-composite and deploy-ide-composite, respectively.

So, if you want to update the p2 composite site with a new version containing only the features/bundles and deploy it on Sourceforge, you need to run Maven as follows (a sketch; the exact goals may differ in your setup):
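mvn clean verify -Prelease-composite,deploy-composite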

If you want to do the same including the custom product definition, you need to run Maven as follows (the additional build-ide profile is required because customeclipse.example.ide.site is included as a Maven module only when that profile is activated; this way, products are created only when needed, since provisioning a product takes some time and we don’t want to do that in normal builds):
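mvn clean verify -Pbuild-ide,release-ide-composite,deploy-ide-composite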

NOTE: The remote directory on Sourceforge hosting the composite update site will always be the same. This means that the local composite update site created and updated by both deploy-composite and deploy-ide-composite will be synchronized with the same remote folder.

In the customeclipse.example.ide.site, we added a p2.inf file with touchpoint instructions that add, to our Eclipse products, the update site hosted on Sourceforge: http://sourceforge.net/projects/eclipseexamples/files/customeclipse/updates.
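Such a p2.inf typically contains touchpoint instructions of this shape (type 0 adds the metadata repository, type 1 the artifact repository, and ${#58} escapes the colon in the URL):

instructions.configure = \
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:0,location:http${#58}//sourceforge.net/projects/eclipseexamples/files/customeclipse/updates);\
  org.eclipse.equinox.p2.touchpoint.eclipse.addRepository(type:1,location:http${#58}//sourceforge.net/projects/eclipseexamples/files/customeclipse/updates);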

Deploying the zipped products

To copy the zipped products to Sourceforge we will still use rsync; actually, we won’t use any synchronization features: we only want to copy the zip files. I could have used the Ant Scp or Sftp tasks, but I experienced many problems with them, so let’s use rsync for this as well.

The Ant file for rsync is slightly different from the one shown in the previous post, since it has been refactored to pass more parameters to the rsync macro. We still have the targets for update/commit synchronization; we added another target that will be used to simply copy something (i.e., the zipped products) to the remote directory, without any real synchronization. You may want to have a look at the rsync documentation to fully understand the command line arguments.
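A sketch of the refactored macro and of the plain-copy target (the macro name and rsync options are assumptions; the real file in the example repository is more complete):

<macrodef name="rsync-exec">
  <attribute name="options"/>
  <attribute name="source"/>
  <attribute name="dest"/>
  <sequential>
    <exec executable="rsync" failonerror="true">
      <arg line="@{options}"/>
      <arg value="@{source}"/>
      <arg value="@{dest}"/>
    </exec>
  </sequential>
</macrodef>

<!-- plain copy, no real synchronization (no --delete) -->
<target name="rsync-copy-dir-contents">
  <rsync-exec options="-rvz -e ssh" source="${source.dir}/" dest="${remote.dir}"/>
</target>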

In the customeclipse.example.ide.site, in the deploy-ide-composite profile, we configure another execution for the maven-antrun-plugin (recall that in this profile the rsync synchronization configured in the parent’s pluginManagement section is also executed); this further execution will copy the zipped products to a remote folder on Sourceforge (as detailed in the previous post, you first need to create such a folder using the Sourceforge web interface):
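A sketch of this further execution (the execution id and property values are assumptions; note the nested property elements, explained right after the snippet):

<execution>
  <id>deploy-zipped-products</id>
  <phase>verify</phase>
  <goals>
    <goal>run</goal>
  </goals>
  <configuration>
    <target>
      <ant antfile="${ant-files-path}/rsync.ant" target="rsync-copy-dir-contents">
        <property name="source.dir" value="${project.build.directory}/products"/>
        <property name="remote.dir" value="myuser,eclipseexamples@frs.sourceforge.net:/home/frs/project/eclipseexamples/customeclipse/products"/>
      </ant>
    </target>
  </configuration>
</execution>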

Note that when calling the rsync-copy-dir-contents target of the rsync.ant file, we pass the properties as nested elements, in order to override their values (such properties’ values are already defined in the parent’s pom, and for this run we need to pass different values).

Now, if we run the Maven command shown above (with the build-ide, release-ide-composite and deploy-ide-composite profiles), many things will be executed:

  • rsync will synchronize our local composite update site with the remote composite update site
  • a new p2 site will be created, and added to our local composite update site
  • rsync will synchronize our local changes with the remote composite update site
  • Eclipse products will be created and zipped
  • the zipped products will be copied to Sourceforge

A self-contained p2 repository

Recall from the previous post that since in customeclipse.example.ide.feature we added Eclipse features (such as the platform and jdt) as dependencies (and not as included features), the p2 update site we create will not contain such features: it will contain only our own features and bundles. And that was intentional.

However, this means that the users of our features and of our custom Eclipse will still need to add the standard Eclipse update site before installing our features or updating the installed custom Eclipse.

If you want your p2 repository to be self-contained, i.e., to include also the external dependencies, you can do so by setting includeAllDependencies to true in the configuration of the tycho-p2-repository-plugin.

It makes sense to do that in the customeclipse.example.ide.site, so that all the dependencies for our custom Eclipse product will end up in the p2 repository:
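A sketch of the relevant plugin configuration (the version property name is an assumption):

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-p2-repository-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <includeAllDependencies>true</includeAllDependencies>
  </configuration>
</plugin>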

However, doing so every time we add a new p2 update site to the composite update site would make our composite update site grow really fast in size. A single p2 repository for this example, including all dependencies, is about 110 MB; a composite update site with just two such p2 repositories would be 220 MB, and so on.

I think a good rule of thumb is:

  • include all dependencies the first time we release our product’s update site (setting the property includeAllDependencies to true, and then setting it to false right after the first release)
  • for further releases do not include dependencies
  • include the dependencies again when we change the target platform of our product (indeed, Tycho will take the dependencies from our target platform)

Provide a command line installer

Now that our p2 composite repository is on the Internet, our users can simply download the zip file for their OS, unzip it, and enjoy it. But we could also provide another way of installing our custom Eclipse: an Ant file, so that the user only has to download it and run Ant on it.

The Ant file will use the p2 director command line application to install our Eclipse product directly from the remote update site (the Ant file is self-contained: if the director application is not already installed, the first task installs it).

Here’s the install.ant file (note that we ask the director to install our custom Eclipse product, customeclipse.example.ide, and, explicitly, the main feature customeclipse.example.feature; this reflects what we specified in the product configuration, in particular the fact that customeclipse.example.feature must be a ROOT feature, so that it can be updated; see all the details in the previous post):
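The real install.ant also installs the director application first, as mentioned above; here is a reduced sketch of the director invocation only (it assumes an eclipse executable with the p2 director available on the PATH, and the property values and profile name are illustrative):

<project name="customeclipse.installer" default="install">
  <property name="update.site.url"
      value="http://sourceforge.net/projects/eclipseexamples/files/customeclipse/updates"/>
  <property name="destination" location="${basedir}/customeclipse"/>

  <target name="install">
    <exec executable="eclipse" failonerror="true">
      <arg line="-application org.eclipse.equinox.p2.director"/>
      <arg line="-repository ${update.site.url}"/>
      <!-- the product, plus the ROOT feature, so that the feature stays updatable -->
      <arg line="-installIU customeclipse.example.ide,customeclipse.example.feature.feature.group"/>
      <arg line="-destination ${destination}"/>
      <arg line="-profile SDKProfile"/>
    </exec>
  </target>
</project>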

Note that this will always install the latest version present in the remote composite update site.

For instance, consider that you created zipped products for version 1.0.0, and then you deployed a small upgrade only for your features, version 1.0.1, i.e., without releasing new zipped products. The Ant script will install the custom Eclipse including version 1.0.1 of your features.

Some experiments

You may want to try and download the zipped product for your OS from this URL: https://sourceforge.net/projects/eclipseexamples/files/customeclipse/products/

After I deployed the self-contained p2 repository and the zipped products (activating the profiles release-ide-composite and deploy-ide-composite, with the property includeAllDependencies set to true, using the project customeclipse.example.ide.site), I deployed another p2 repository into the composite site only for the customeclipse.example.feature (activating the profiles release-composite and deploy-composite, i.e., using the project customeclipse.example.site).

Unzip the downloaded product, and check for updates (recall that the product is configured with the update site hosted on Sourceforge, through the p2.inf file described before). You will find that there’s an update for the Example Feature:

[Screenshots: customeclipse before upgrading, and the available updates]

After the upgrade and restart you should see the new version of the feature installed:

[Screenshot: customeclipse after upgrading]

Now, try to install the product using the Ant file shown above, which can be downloaded from https://raw.githubusercontent.com/LorenzoBettini/customeclipse-example/master/customeclipse.example.tycho/install.ant.

You’ll have to wait a few minutes (and don’t worry about cookie warnings); run this version of the custom Eclipse, and you’ll find no available updates: check the installation details and you’ll see you already have the latest version of the Example Feature.

That’s all! Hope you find this post useful and… Happy Easter 🙂

Build your own custom Eclipse

In this tutorial I’ll show how to build a custom Eclipse distribution with Maven/Tycho. We will create an Eclipse distribution including our own features/plugins and standard Eclipse features, trying to keep the size of the final distribution small.

The code of the example can be found at: https://github.com/LorenzoBettini/customeclipse-example

First of all, we want to mimic the Eclipse SDK product and Eclipse SDK feature; have a look at your Eclipse installation details:

[Screenshot: Eclipse SDK installation details]

You see that “Eclipse SDK” is the product (org.eclipse.sdk.ide), and “Eclipse Project SDK” is the feature (org.eclipse.sdk.feature.group).

Moreover, we want to deal with a scenario such that:

  • Our custom feature can be installed in an existing Eclipse installation, so that we can release it independently from our custom Eclipse distribution.
  • Our custom Eclipse distribution must be updatable, e.g., when we release a new version of our custom feature.

The project representing our parent pom will be

  • customeclipse.example.tycho

The target platform is defined in

  • customeclipse.example.targetplatform

For this example we only need the org.eclipse.sdk feature and the native launcher feature, as in the sketch below.
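A sketch of a corresponding target definition (the repository URL is an assumption; org.eclipse.equinox.executable is the native launcher feature):

<?xml version="1.0" encoding="UTF-8"?>
<target name="customeclipse.example.targetplatform">
  <locations>
    <location includeAllPlatforms="false" includeMode="planner" type="InstallableUnit">
      <repository location="http://download.eclipse.org/releases/mars"/>
      <unit id="org.eclipse.sdk.feature.group" version="0.0.0"/>
      <unit id="org.eclipse.equinox.executable.feature.group" version="0.0.0"/>
    </location>
  </locations>
</target>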

We created a plugin project and a feature project including that plugin (the plugin is nothing fancy, just a “Hello World Command” created with the Eclipse Plug-in project wizard):

  • customeclipse.example.plugin
  • customeclipse.example.feature

We also created another project for the p2 repository (Tycho packaging type: eclipse-repository) that distributes our plugin and feature (including the category.xml file)

  • customeclipse.example.site

All these projects are then configured with Maven/Tycho pom.xml files.

Then we create another feature that will represent our custom Eclipse distribution

  • customeclipse.example.ide.feature

This feature will then specify the features that will be part of our custom Eclipse distribution, i.e., our own feature (customeclipse.example.feature) and all the features taken from the Eclipse update sites that we want to include in our custom distribution.

Finally, we create another site project (Tycho packaging type: eclipse-repository) which is basically the same as customeclipse.example.site, but it also includes the product definition for our custom Eclipse product:

  • customeclipse.example.ide.site

NOTE: I’m using two different p2 repository projects because I want to be able to release my feature without releasing the product (see the scenario at the beginning of the post). This will also allow us to experiment with different ways of specifying the features for our custom Eclipse distribution.

Product Configuration

This is our product configuration file customeclipse.example.ide.product in the project customeclipse.example.ide.site and its representation in the Product Configuration Editor: