Accessing Google Online Account from GNOME and KDE

In this post, I’d like to share my experience in setting up a Google online account in GNOME and KDE. Actually, I have more than one Google account, and the procedures I show can be repeated for all your Google accounts.

First, a disclaimer: I’ve always loved KDE, and I’ve used it since version 3. Lately, though, I have started to appreciate GNOME. I’ve been using GNOME most of the time now, on most of my computers, for a few years. But recently, I started to experiment with KDE again and installed it on some of my computers.

KDE is well known for its customizability, while GNOME is known for the opposite. However, I must admit that in GNOME most settings are trivial to configure, while in KDE you pay a price for its customizability.

I think setting up a Google account is a good example of what I’ve just said. Of course, I might be wrong about the procedure I’ll show in this post, but, from what I’ve read around, especially for KDE, there doesn’t seem to be an easier way. Of course, if you know an easier procedure, I’d like to hear about it in the comments 🙂

In the following, I’m showing how to set up a Google account so that its features, mainly the calendar and access to Google Drive, get integrated into GNOME and KDE. I tested these procedures both in Ubuntu/Kubuntu and in Manjaro GNOME/KDE, but I guess they are the same in other distributions.

TL;DR: in GNOME it’s trivial; in KDE you need some effort.

GNOME

Just open “Online Accounts” and choose Google. Use the web form to log in and grant the permissions so that GNOME can access your Google data. As I said, I’m focusing on the calendar and drive. Repeat the same procedure for all the Google accounts you want to connect.

Done! In Files (Nautilus) you can see, on the left, the links to your Google Drive (or drives, if you configured several accounts). In GNOME Calendar, you can choose the Google calendars you want to show. The events will automatically be shown in the top GNOME Shell clock and calendar widget. Notifications will be shown automatically (by Evolution). Things are similar for GNOME Contacts. By the way, GNOME Tasks and other GNOME applications will also automatically be able to access your Google account data.

To summarize, one single configuration and everything else is automatically integrated.

KDE

Now be prepared for an overwhelming number of steps, most of which, I’m afraid, I find rather complex and counter-intuitive.

In particular, you won’t get access to your Google account data in a single step. In fact, I’ll first show how to mount a Google drive and then how to set up the calendar.

Mount your Google drive

Go to

System Settings -> Online Accounts -> Add New Account -> Google

As usual, you get redirected to the “Web authentication for google” page; log in and give your consent, allowing “KDE Online Accounts” to access some of your Google information, including your drive, YouTube videos, contacts, and calendar. (This procedure can be repeated for all your Google accounts if you have several.) Note that, given all the permissions you grant, you’d expect everything to be automatically configured in KDE, but that’s not the case…

Back in the system settings, you get a “Google account”, not with your Google username or email, which is what I’d expect, but simply “google” and a sequential number (of course, you can rename it).

So, can I now access my Google drive files from Dolphin and have my local calendar automatically connected to my Google calendar, just like in GNOME? I’m afraid not… we’re still far from that.

If you go to Dolphin’s Network place, you see no Google drive mounted, nor a mechanism to mount one… First, you have to install the package kio-gdrive (at least in Kubuntu and Manjaro KDE, it’s not installed by default…). After that, back in Dolphin’s Network place, you can expand the “Google Drive” folder and you get asked for the Google account you previously configured. Select it, and choose “Use This Account For” -> “Drive” in Accounts Details. Now you can access your Google drive from Dolphin.
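On Kubuntu and Manjaro KDE, the package can be installed from the command line (these are the standard package names; adjust the command for your distribution):

```shell
# Kubuntu / Debian-based distributions
sudo apt install kio-gdrive

# Manjaro / Arch-based distributions
sudo pacman -S kio-gdrive
```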

Add your Google calendar

What about my Google Calendar? First, you have to install the package korganizer (or the full suite kontact); again, at least in Kubuntu and Manjaro KDE, it’s not installed by default… Great, so once it’s installed I can simply select my previously configured Google account? Ehm… no… you “just” have to go to

Settings -> Configure KOrganizer -> General -> Calendars -> Add… -> Google Groupware -> a dialog appears, click “Configure…”

Now the browser (not a web dialog as before) opens so you can log in to your Google account. Then, grant the permissions (“This will allow Akonadi Resources for Google Services to…”). (Again, you have to do the same for all the Google accounts you want to connect.) In the browser, you then see: “You can close this tab and return to the application now.” Go back to the dialog in KOrganizer, and your calendars and tasks should already be selected (unselect anything you don’t want). OK, now in the previous dialog you should see KOrganizer synchronizing with your Google calendar and tasks.

Now I should get notifications for Google calendar events, right? Ehm… not necessarily: you need to make sure that in the “Status and Notifications” system tray, by right-clicking on “KOrganizer Reminders”, both “Enable Reminders” and “Start Reminder Daemon at Login” are selected (I see different default behaviors in that respect in different distributions). If not, enable them and log out and back in.

OK! But what about my Google calendar events in the standard “Digital Clock” widget in the corner of the system tray? Are they automatically shown just like in GNOME? No! There’s some more work to do! First, install kdepim-addons (guess what? At least in Kubuntu and Manjaro KDE, that’s not installed by default…). Now, go to “Digital Clock Settings” -> “Calendar” -> check “PIM Events Plugin” (quite counter-intuitive!) -> Apply; now a new “PIM Events Plugin” appears on the left, select that. Fortunately, this one will automatically propose to select all the calendars that have been previously configured in KOrganizer.

You have to do something similar for KAddressBook; probably with Kontact there would be fewer steps, but I’ve always found Kontact chaotic…

Summary

Now, I like KDE’s customization possibilities (while GNOME is pretty rigid about customization, and most things cannot be customized at all), but the above steps are far too many! After a few weeks, I wouldn’t be able to remember them correctly… In KDE, even the sheer number of steps in the above procedures is overwhelming, and you have to follow complex, heterogeneous, and counter-intuitive procedures, with long menu chains. Maybe it’s the distribution’s fault? I doubt it. I guess it’s an issue with the overall organization and integration of the KDE desktop environment. In GNOME, the integration is simply part of the desktop environment.

Eclipse p2 site references

Say you publish a p2 repository for your Eclipse bundles and features. Typically, your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will also have to use the p2 repositories of your software’s dependencies, otherwise they won’t be able to install your software. If your software only relies on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, to put it another way, what if you rely on an Eclipse project that is not part of the simultaneous release, or you need a version different from the one provided by a specific Eclipse release?

You would have to tell your users to add those specific p2 sites as well. This, however, degrades the user experience, at least from the installation point of view: one would like to point at a p2 site and install from it without further configuration.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of three alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin “to aggregate all transitive dependencies, making the resulting p2 repository self-contained.” Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references, which are meant for exactly this task and have been part of the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software’s dependencies, and the content metadata of the generated p2 repository will contain links to those sites. Then, p2 will automatically contact those sites when installing software (at least from Eclipse; from the command line we’ll have to use specific arguments, as we’ll see later). Please keep in mind that this mechanism works only with recent versions of Eclipse (if I remember correctly, it was added a couple of years ago).

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: https://github.com/LorenzoBettini/tycho-site-references-example. You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 has this been implemented correctly. (I personally fixed this: https://github.com/eclipse/tycho/issues/141.) If you use a slightly older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewriting metadata contents and re-compressing the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting, this feature also requires (i.e., does NOT include) the external feature org.eclipse.xtext.xbase. We don’t actually use such an Xtext feature, but it’s useful to recreate an example where we need a specific p2 site containing that feature;
  • a site project with category.xml that is used to generate during the Tycho build our p2 repository.

To make the example interesting the dependency on the Xbase feature is as follows
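A requirement of that shape in the feature.xml typically looks like the following sketch (the match rule shown here is an assumption):

```xml
<requires>
   <import feature="org.eclipse.xtext.xbase" version="2.25.0" match="compatible"/>
</requires>
```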

So we require version 2.25.0.

The target platform is defined directly in the parent POM as follows (again, to keep things simple):
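A sketch of the repositories section (these are the standard Eclipse and Xtext p2 site URLs; the ids are illustrative):

```xml
<repositories>
  <repository>
    <id>2020-12</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/2020-12</url>
  </repository>
  <repository>
    <id>xtext-2.25.0</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0</url>
  </repository>
</repositories>
```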

Note that I explicitly added the Xtext 2.25.0 site repository because, in the 2020-12 Eclipse site, Xtext is available only in the older version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

The category.xml initially is defined as follows
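A minimal category.xml of that shape might look like this sketch (the feature id, version, and category names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="example.feature" version="1.0.0.qualifier">
      <category name="example.category"/>
   </feature>
   <category-def name="example.category" label="Example Category"/>
</site>
```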

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, also the standard Eclipse dependencies have to be found, but as said above, the Eclipse update site is already part of the Eclipse distributions). You then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify that is to try to install the example feature in an Eclipse installation pointing to the p2 repository generated during the build. Of course, we’ll keep also this verification mechanism automatic and easy. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build, by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):
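The configuration is roughly of the following shape (a sketch: the execution id, the feature id to install, and the property names are illustrative, and the exact dependency list may vary; the plugin, the director application id, and its arguments are the standard ones):

```xml
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <configuration>
    <repositories>
      <repository>
        <id>2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
    </repositories>
    <!-- bundles needed to run the p2 director application -->
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.director.app</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.transport.ecf</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
    </dependencies>
  </configuration>
  <executions>
    <execution>
      <id>verify-p2-site</id>
      <phase>verify</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
      <configuration>
        <applicationsArgs>
          <args>-application</args>
          <args>org.eclipse.equinox.p2.director</args>
          <!-- install our feature from the p2 repository generated by Tycho -->
          <args>-repository</args>
          <args>file:${project.build.directory}/repository</args>
          <args>-installIU</args>
          <args>example.feature.feature.group</args>
          <args>-destination</args>
          <args>${director-install-destination}</args>
          <args>-followReferences</args>
        </applicationsArgs>
      </configuration>
    </execution>
  </executions>
</plugin>
```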

The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties so that you can copy this plugin configuration in your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences that will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem:

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):

The category.xml is then updated with the site references.
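A sketch of the result (repository-reference is the actual element of the category.xml specification; the feature and category names are illustrative, and the URLs are the ones of our target platform):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="example.feature" version="1.0.0.qualifier">
      <category name="example.category"/>
   </feature>
   <category-def name="example.category" label="Example Category"/>
   <repository-reference location="https://download.eclipse.org/releases/2020-12" enabled="true" />
   <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0" enabled="true" />
</site>
```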

Now, when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to the p2 repositories (with a slightly different syntax, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which are usually the same). Now our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; now everything is fine:

Note that this requires much more time: now the p2 director has to contact all the p2 sites defined as references and has to also download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run a clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100 MB, every time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ caching by using a standard p2 mechanism: the bundle pool! This way, dependencies will have to be downloaded only the very first time, and then the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:
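A sketch of the two additions (the property name is illustrative; -bundlepool is the standard p2 director argument):

```xml
<!-- in the properties section: a hidden directory in the home folder -->
<p2.director.bundlePool>${user.home}/.p2-bundlepool</p2.director.bundlePool>

<!-- additional arguments for the p2 director application -->
<args>-bundlepool</args>
<args>${p2.director.bundlePool}</args>
```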

Note that now the plug-ins during the verification step will NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins, you only need to make sure that they can be installed.

On a CI server, you should cache the bundle pool directory as well if you want to benefit from the speed-up. E.g., this example comes with a GitHub Actions workflow that stores the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now, we put the same sites used for the target platform. Referring to the whole Eclipse releases p2 site might be too much since it contains all the features and bundles of all the projects participating in Eclipse Simrel. In the target platform, this might be OK since we might want to use some dependencies only for testing. For our p2 repository, we could tweak references so that they refer only to the minimal sites containing all our features’ requirements.

For this example, we can replace the two sites with four smaller sites providing all the requirements (actually, the Xtext 2.25.0 one is just the same as before):

You can verify that removing any of them will lead to installation failures.

This tweaking might require some time at first, but now you have an easy way to test it!

Keeping things consistent

When you update your target platform, i.e., your dependencies’ versions, you must make sure to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place, so that everything else is kept consistent!

We can use again properties in the parent POM:
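For example (the property names and values are the ones used throughout this example; treat them as illustrative):

```xml
<properties>
  <eclipse-version>2020-12</eclipse-version>
  <xtext-version>2.25.0</xtext-version>
</properties>
```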

We want to rely on such properties in the category.xml as well, using the standard Maven mechanism of copying resources with filtering.

We create another category.xml in the subdirectory templates of the site project using the above properties in the site references (at least in the ones where we want to have control on a specific version):
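A sketch of such a template (the same content as before, with the versions replaced by properties; feature and category names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="example.feature" version="1.0.0.qualifier">
      <category name="example.category"/>
   </feature>
   <category-def name="example.category" label="Example Category"/>
   <repository-reference location="https://download.eclipse.org/releases/${eclipse-version}" enabled="true" />
   <repository-reference location="https://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}" enabled="true" />
</site>
```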

and in the site project we configure the Maven resources plugin appropriately:
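A sketch of that configuration (the execution id is illustrative; the phase just has to come before package):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>filter-category-xml</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- overwrite the category.xml in the root of the site project -->
        <outputDirectory>${basedir}</outputDirectory>
        <resources>
          <resource>
            <directory>templates</directory>
            <includes>
              <include>category.xml</include>
            </includes>
            <!-- replace ${...} properties while copying -->
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```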

Of course, we execute that in a phase that comes BEFORE the phase when the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project) by replacing properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂

Fixing Right Click Touchpad in PineBook Pro

I recently bought a PineBook Pro (maybe I’ll review it in a future post). What annoyed me first was that the right click on the touchpad with a two-finger tap was basically unusable: you had to be extremely fast.

In some forums, the solution is to issue this command

but then you have to make it somehow permanent.

Actually, it’s much easier than that: just use the KDE Settings (Tap Detection → Maximum time), and this will make it permanent right away:

Publishing an Eclipse p2 composite repository on GitHub Pages

I had already described the process of publishing an Eclipse p2 composite update site:

Well, now that Bintray is shutting down and SourceForge is quite slow in serving Eclipse update sites, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.

GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, with a soft bandwidth limit of 100 GB per month and a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100 GB per month, unfortunately, I might say 😉

In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!

The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.

These are the main points:

The p2 child repositories and the p2 composite repositories will be published with standard Git operations, since we publish them in a GitHub repository.

Let’s recap what p2 composite update sites are. Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must remain available, each one with its own p2 metadata that should never be overwritten. The only metadata we will overwrite are the composite metadata files, i.e., compositeContent.xml and compositeArtifacts.xml.

Directory Structure

I want to be able to serve these composite update sites:

  • the main one collects all the versions
  • a composite update site for each major version (e.g., 1.x, 2.x, etc.)
  • a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)

What I aim at is to have the following paths:

  • releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
  • root: the main composite update site collecting all versions.

To summarize, we’ll end up with a remote directory structure like the following one
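A sketch of the resulting layout, consistent with the paths described above (the exact names depend on the Ant file):

```
p2composite-github-pages-example-updates
├── compositeArtifacts.xml      (main composite site, collecting all versions)
├── compositeContent.xml
├── releases
│   ├── 1.0.0.v20210307-2037    (simple p2 repositories, one per release)
│   └── 1.1.0.v20210307-2104
└── updates
    └── 1.x
        ├── compositeArtifacts.xml
        ├── compositeContent.xml
        ├── 1.0.x
        │   ├── compositeArtifacts.xml
        │   └── compositeContent.xml
        └── 1.1.x
            ├── compositeArtifacts.xml
            └── compositeContent.xml
```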

Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):

  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates for the main global update site: every new version will be available when using this site;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x/1.0.x for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.

If you want to change this structure, you have to carefully tweak the ant file we’ll see in a minute.

Building Steps

During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build are:

  • Clone the repository hosting the composite update site (in this example, https://github.com/LorenzoBettini/p2composite-github-pages-example-updates);
  • Create the p2 repository (with Tycho, as usual);
  • Copy the p2 repository in the cloned repository in a subdirectory of the releases directory (the name of the subdirectory has the same qualified version of the project, e.g., 1.0.0.v20210307-2037);
  • Update the composite update sites information in the cloned repository (using the p2 tools);
  • Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).

First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:
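A sketch of such properties (github-update-repo and site.label are the names used in this post; the other names, and all the values, are illustrative — in particular, the repository URL may be an SSH or token-based HTTPS URL, depending on how you authenticate):

```xml
<properties>
  <github-update-repo>git@github.com:LorenzoBettini/p2composite-github-pages-example-updates.git</github-update-repo>
  <github-local-clone>${project.build.directory}/checkout</github-local-clone>
  <releases-directory>${github-local-clone}/releases</releases-directory>
  <site.label>Composite Site Example</site.label>
</properties>
```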

It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.

Then, in the parent POM, we configure in the pluginManagement section all the versions of the plugin we are going to use (see the sources of the example on GitHub).

The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:
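A sketch of that configuration (the format string shown here produces qualifiers like v20210307-2037):

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <configuration>
    <!-- qualified versions of the shape 1.0.0.v20210307-2037 -->
    <format>'v'yyyyMMdd-HHmm</format>
  </configuration>
</plugin>
```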

Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:
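A sketch of the profile, with the two standard plugins skipped:

```xml
<profile>
  <id>release-composite</id>
  <build>
    <plugins>
      <!-- we deploy the p2 composite site, not the Maven artifacts -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-install-plugin</artifactId>
        <configuration>
          <skip>true</skip>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-deploy-plugin</artifactId>
        <configuration>
          <skip>true</skip>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```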

The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build plugins section):

Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:

  • with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
  • with the exec-maven-plugin we configure the execution of the Git commands:
    • we clone the Git repository of the update site (with --depth=1 we only get the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase pre-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned into the output directory target/checkout
    • in the phase verify (that is, after the phase package), we commit the changes (which will be done during the phase package as shown in the following points)
    • in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
  • with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
  • with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.

Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.

Now, from the parent POM on your computer, it’s enough to run
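Presumably a command of this shape, activating the profile defined above:

```shell
mvn deploy -Prelease-composite
```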

and the release will be performed. When cloning you’ll be asked for the password of the GitHub repository, and, if not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on the GitHub token, for example.

If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.

You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external that will store the reference into the innermost composite child (e.g., into 1.2.x); here’s an example that adds a reference to the Eclipse main update site:
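An additional execution might look like the following sketch (the antRunner application and the task name come from the description above, but the argument and property names here are assumptions):

```xml
<execution>
  <id>add-external-site-reference</id>
  <phase>package</phase>
  <goals>
    <goal>eclipse-run</goal>
  </goals>
  <configuration>
    <applicationsArgs>
      <args>-application</args>
      <args>org.eclipse.ant.core.antRunner</args>
      <args>-buildfile</args>
      <args>packaging-p2composite.ant</args>
      <!-- the task defined in the Ant file -->
      <args>p2.composite.add.external</args>
      <args>-Dcomposite.base.dir=${github-local-clone}</args>
      <args>-Dexternal.site=https://download.eclipse.org/releases/2020-12</args>
    </applicationsArgs>
  </configuration>
</execution>
```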

For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!

Releasing from GitHub Actions

The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.

First of all, we are talking about a GitHub Actions workflow stored and executed in the GitHub repository of your project, NOT in the GitHub repository of the update site. In this example, it is https://github.com/LorenzoBettini/p2composite-github-pages-example.

In such a workflow, we need to push to another GitHub repository. To do that

  • create a GitHub personal access token (selecting repo);
  • create a secret in the GitHub repository of the project (where we run the GitHub Actions workflow), in this example it is called ACTIONS_TOKEN, with the value of that token;
  • when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates;
  • we also need to configure in advance the Git user and email, with some values, otherwise, Git will complain when creating the commit.

To summarize, these are the interesting parts of the release.yml workflow (see the full version here: https://github.com/LorenzoBettini/p2composite-github-pages-example/blob/master/.github/workflows/release.yml):
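A sketch of such a workflow (the action versions, Java version, and step names are assumptions; the trigger, token handling, and Git configuration follow the points above):

```yaml
name: Release
on:
  push:
    branches:
      - release   # run only when pushing to the release branch
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: 'temurin'
          java-version: '11'
      - name: Configure Git user
        run: |
          git config --global user.name "GitHub Actions"
          git config --global user.email "actions@github.com"
      - name: Release with Maven
        run: >
          mvn deploy -Prelease-composite
          -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates
```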

The workflow is configured to be executed only when you push to the release branch.

Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.

Final thoughts

With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release. It’s just a matter of cloning that repository, doing your changes (i.e., removing a subdirectory of releases and updating the composite metadata accordingly), committing, and pushing. Now and then you might also clean up the history of such a Git repository (the history is not important in this context) by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above, you could also do that every time you do a release: just commit with amend and push with force!

Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones, or release candidates.

Happy releasing! 🙂

Caching dependencies in GitHub Actions

I recently started to port all my Java projects from Travis CI to GitHub Actions, since Travis CI changed its pricing model. (I'll soon update my book on TDD and Build Automation in that respect as well.)

I’ve always used caching mechanisms during the builds in Travis CI, to speed up the builds: caching Maven dependencies, especially in big projects, can save a lot of time. In my case, I’m mostly talking about Eclipse plug-in projects, built with Maven/Tycho, where the target platform resolution might have to download a few hundred megabytes. Thus, I wanted to use caching also in GitHub Actions, and there’s an action for that.

In this post, I’ll show my strategies for using the cache, in particular, using different workflows based on different operating systems, which are triggered only on some specific events. I’ll use a very simple example, but I’m using this strategy currently on this Xtext project: https://github.com/LorenzoBettini/edelta, which uses more than 300 MB of dependencies.

The post assumes that you’re already familiar with GitHub Actions.

Warning: Please keep in mind that caches will also be evicted automatically (currently, the documentation says that “caches that are not accessed within the last week will also be evicted”). However, we can still benefit from caches if we are working on a project for a few days in a row.

To experiment with building mechanisms, I suggest you use a very simple example. I’m going to use a simple Maven Java project created with the corresponding Maven archetype: a Java class and a trivial JUnit test. The Java code is not important in this context, and we’ll concentrate on the build automation mechanisms.

The final project can be found here:
https://github.com/LorenzoBettini/github-actions-cache-example.

This is the initial POM for this project:
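Something along these lines (the coordinates are placeholders; the actual file is in the linked repository — note the JUnit version, which we will bump later in the post):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>github-actions-cache-example</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```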

This is the main workflow file (stored in .github/workflows/maven.yml):
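A sketch of such a workflow (action versions are indicative):

```yaml
name: Java CI with Maven

on:
  push:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK
        uses: actions/setup-java@v1
        with:
          java-version: 11
      - name: Cache Maven packages
        uses: actions/cache@v2
        with:
          path: ~/.m2
          key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-m2-
      - name: Build with Maven
        run: mvn verify
```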

This is a pretty standard workflow for a Java project built with Maven. This workflow runs for every push on any branch and every PR.

Note that we specify to cache the directory where Maven stores all the downloaded artifacts, ~/.m2.

For the cache key, we use the OS where our build is running, a constant string “-m2-” and the result of hashing all the POM files (we’ll see how we rely on this hashing later in this post).

Remember that the cache key will be used in future builds to restore the files saved in the cache. When no cache is found with the given key, the action searches for alternate keys if the restore-keys has been specified. As you see, we specified as the restore key something similar to the actual key: the running OS and the constant string “-m2-” but no hashing. This way, if we change our POMs, the hashing will be different, but if a previous cache exists we can still restore that and make use of the cached values. (See the official documentation for further details.) We’ll then have to download only the new dependencies if any. The cache will then be updated at the end of the successful job.

I usually rely on this strategy for the CI of my projects:

  • build every push on any branch using a Linux build environment;
  • build PRs in any branch also on a Windows and macOS environment (actually, I wasn’t using Windows with Travis CI since it did not provide Java support in that environment; that’s another advantage of GitHub Actions, which provides Java support on Windows as well).

Thus, I have another workflow definition just for PRs (stored in .github/workflows/pr.yml):
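A sketch of this PR workflow, with a build matrix for the two OSes (again, action versions are indicative):

```yaml
name: Java CI with Maven (Windows and macOS)

on:
  pull_request:

jobs:
  build:
    strategy:
      matrix:
        os: [windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK
        uses: actions/setup-java@v1
        with:
          java-version: 11
      - name: Cache Maven packages
        uses: actions/cache@v2
        with:
          path: ~/.m2
          key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-m2-
      - name: Build with Maven
        run: mvn verify
```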

Besides the build matrix for OSes, that’s basically the same as the previous workflow. In particular, we use the same strategy for defining the cache key (and restore key). Thus, we have a different cache for each different operating system.

Now, let’s have a look at the documentation of this action:

A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch. For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch feature-b has the base branch feature-a, a workflow triggered on feature-b would have access to caches created in the default branch (main), feature-a, and feature-b.

Access restrictions provide cache isolation and security by creating a logical boundary between different workflows and branches. For example, a cache created for the branch feature-a (with the base main) would not be accessible to a pull request for the branch feature-b (with the base main).

What does that mean in our scenario? Since the workflow running on Windows and macOS is executed only in PRs, the cache for these two configurations will never be saved for the master branch. In turn, this means that each time we create a new PR, this workflow will have no chance of finding a cache to restore: the branch for the PR is new (so no cache is available for such a branch) and the base branch (typically, “master” or “main”) will have no cache saved for these two OSes. Summarizing, the builds of the PRs for these two configurations will always have to download all the Maven dependencies from scratch. Of course, if we don’t immediately merge the PR and we push other commits on the branch of the PR, the builds for these two OSes will find a saved cache (if the previous builds of the PR succeeded); but, in any case, the first build of each new PR for these two OSes will take more time (actually, much more time in a complex project with lots of dependencies).

Thus, if we want to benefit from caching on these two OSes as well, we need another workflow for Windows and macOS that runs when a PR is merged, so that the cache is stored for the master branch too (actually, we could use this strategy when merging any PR with any base branch, not necessarily the main one).

Here’s this additional workflow (stored in .github/workflows/pr-merge.yml):

Note that we intercept the push event (since a merge of a PR is actually a push), but we have an if statement that enables the workflow only when the commit message contains the string “Merge pull request”, which is the default message when merging a PR on GitHub. In this example, we are only interested in PRs merged into the master branch and into any branch starting with “experiments”, but you can adjust that as you see fit. Furthermore, since this workflow is only meant for updating the Maven dependency cache, we skip the tests (with -DskipTests) to save some time (especially in a complex project with lots of tests).

This way, after the first PR merged, the PR workflows running on Windows and macOS will find a cache (at least as a starting point).

We can also do better than that and avoid running the Maven build when there’s no need to update the cache. Remember that we use the result of hashing all the POM files in our cache key? The idea is that if our POMs do not change, then we don’t expect our dependencies to change either (provided we’re not using SNAPSHOT dependencies). Now, in the documentation, we also read

When key matches an existing cache, it’s called a cache hit, […] When key doesn’t match an existing cache, it’s called a cache miss, and a new cache is created if the job completes successfully.

The idea is to skip the Maven step in the above workflow “Updates Cache on Windows and macOS” if we have a cache hit, since we expect that no new dependencies need to be downloaded (our POMs haven’t changed). This is the interesting part to change:
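The change might look like this (note the `id` on the cache step and the matching `steps.cache.outputs.cache-hit` check):

```yaml
      - name: Cache Maven packages
        uses: actions/cache@v2
        id: cache
        with:
          path: ~/.m2
          key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
          restore-keys: ${{ runner.os }}-m2-
      # run Maven only on a cache miss
      - name: Build with Maven
        if: steps.cache.outputs.cache-hit != 'true'
        run: mvn verify -DskipTests
```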

Note that we need to define an id for the cache to intercept the cache hit or miss and the id must match the id in the if statement.

This way, if we have a cache hit the workflow for updating the cache on Windows and macOS will be really fast since it won’t even run the Maven build for updating the cache.

If we change the POM, e.g., switch to JUnit 4.13.1, push the change, create a PR, and merge it, then, the workflow for updating the cache will actually run the Maven build since we have a cache miss: the key of the cache has changed. Of course, we’ll still benefit from the already cached dependencies (and all the Maven plugins already downloaded) and we’ll update the cache with the new dependencies for JUnit 4.13.1.

Final notes

One might think to intercept the merge of a PR by using on: pull_request: (as we did in the pr.yml workflow). However, “There’s no way to specify that a workflow should be triggered when a pull request is merged”. In the official forum, you can find a solution based on the “closed” PR event and the inspection of the PR “merged” event. So one might think to update the pr.yml workflow accordingly and get rid of the additional pr-merge.yml workflow. However, from my experiments, this solution will not make use of caching, which is the main goal of this post. The symptoms of such a problem are this message when the workflow initially tries to restore a cache:

Warning: Cache service responded with 403

and this message when the cache should be saved:

Unable to reserve cache with key …, another job may be creating this cache.

Another experiment that I tried was to remove the running OS from the cache key, e.g., m2-${{ hashFiles('**/pom.xml') }} instead of ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}, and to adjust the restore key accordingly, i.e., m2- instead of ${{ runner.os }}-m2-. I was hoping to reuse the same cache across different OS environments. This seems to work for macOS, which appears to be able to reuse the Linux cache. Unfortunately, it does not work for Windows, so I gave up on that solution.

Grub remembers the last choice

All my computers dual-boot Ubuntu and Windows, though I’m using the former 99% of the time 😉 On one of my laptops I started to evaluate Manjaro as well (a blog post will probably come in the near future). I let Manjaro install the main EFI Grub boot loader and I noticed that its Grub configuration remembers the last choice across reboots! That is, if I booted Ubuntu (not the first choice in the menu) and I reboot, then the “Ubuntu” entry is the one selected by default. The same holds for Windows.

I find this feature really cool and useful:

  • if I had previously used Ubuntu and possibly hibernated the computer, then, even after a few days, when I boot the laptop I know which OS I had booted the last time;
  • if I boot Windows (…once a month?) I will probably face many updates that require a few reboots; if I left the computer unattended during a reboot, I used to find myself back in Linux (the primary default choice in Grub) and had to reboot and choose Windows again so that the updates could be installed (quite annoying when Windows updates require several reboots).

I thought that Manjaro had some special tweaks in the installed grub, but then I learned that’s a standard feature of Grub!

You just have to add these two lines in your /etc/default/grub:
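These are the standard Grub options for remembering the last selection:

```
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```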

Save and run
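On Ubuntu-like systems this means:

```shell
sudo update-grub
```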

And that’s all! From then on Grub will remember your last choice 🙂

Enabling Hibernation on Ubuntu 20.04

I have never been able to make hibernation (suspend to disk) work on my laptops (Dell M3800 and Dell XPS 13 9370) on Ubuntu with systemd. The symptom was that running

was making the system shut down, but then, upon restart, the system was not restored: it was just like booting from scratch.

I had also tried uswsusp (which is installed if you install the package hibernate), with its program s2disk, but I experienced many problems: it wasn’t working reliably and it made booting (even standard booting) much longer (several seconds more).

Then, after looking at several blog posts, I found that the solution is rather simple, and I’ll detail the steps here. I’ll also show how to use suspend-then-hibernate.

First, you need to have swap already set up, e.g., a swap partition (I think a swap file would work as well, but in that case the configuration is slightly more complex). For example, in /etc/fstab you should have something like
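an entry of this shape (the UUID is a placeholder; yours will differ):

```
# swap partition in /etc/fstab
UUID=<UUID of your swap partition> none swap sw 0 0
```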

The UUID is important and you should take note of it.

How big should the swap be? You can find some hints here: https://help.ubuntu.com/community/SwapFaq. I have 16 GB of RAM and my swap partition is 20 GB.

Then, you must make sure initramfs is “aware” of your swap partition and already able to “resume” from it. This should already be the case, but you can verify it by running
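the standard Ubuntu command for regenerating the initramfs:

```shell
sudo update-initramfs -u
```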

and after some time you should see something like:
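output along these lines (with your own swap device and UUID, of course):

```
I: The initramfs will attempt to resume from /dev/sda5
I: (UUID=<UUID of your swap partition>)
```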

The UUID must be the same as your swap UUID in the /etc/fstab.

Now, it’s just a matter of editing your /etc/default/grub and making sure you specify resume in GRUB_CMDLINE_LINUX_DEFAULT, with the UUID of your swap partition. It should be something like (remember that <UUID of your swap partition> must be replaced with the actual UUID):
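For example, keeping the typical quiet splash options:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=<UUID of your swap partition>"
```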

Save the file and update grub:
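That is:

```shell
sudo update-grub
```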

Reboot the system and now try to hibernate again (first you might want to start a few applications so that you’re sure that the system is effectively restored to the same state):
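Again with systemd:

```shell
sudo systemctl hibernate
```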

Wait for the system to shut down and switch it on again. The splash screen should tell you that it is “resuming from <your swap partition>”. If all goes well, after entering your password to unlock the system, you should find it in the state you left it before hibernating! 🙂

suspend-then-hibernate

Another interesting mechanism provided by systemd is suspend-then-hibernate: the system is suspended (to RAM) and after some time it is hibernated (suspended to disk).

The amount of time before hibernating is defined in the file /etc/systemd/sleep.conf. Let’s have a look at the default contents:
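On my system the file looks like this (the exact contents may vary slightly between systemd versions):

```
[Sleep]
#AllowSuspend=yes
#AllowHibernation=yes
#AllowSuspendThenHibernate=yes
#AllowHybridSleep=yes
#SuspendMode=
#SuspendState=mem standby freeze
#HibernateMode=platform shutdown
#HibernateState=disk
#HybridSleepMode=suspend platform shutdown
#HybridSleepState=disk
#HibernateDelaySec=180min
```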

By default everything is commented out, but the values, as stated at the beginning of the file, represent the defaults. So you can see that suspend-then-hibernate is enabled and that the default delay before hibernating is 180 minutes. If you’re not happy with that value, uncomment the line and change it. For example, I set it to 10 minutes:
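```
[Sleep]
HibernateDelaySec=10min
```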

You can now test this functionality with this command:
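```shell
sudo systemctl suspend-then-hibernate
```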

The system will suspend to RAM, and if you don’t touch the computer, after 10 minutes you will hear some sounds: the system will effectively hibernate.

If you want to make this mechanism the default suspend mechanism (e.g., you close the lid and the system suspends and then hibernates after some time), you CANNOT set the value of SuspendMode in the file above, since that has another meaning. To make suspend-then-hibernate the default suspend mechanism you have to create this symlink:
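On Ubuntu the unit files live in /lib/systemd/system, so the symlink should be something like:

```shell
# make suspend-then-hibernate replace the plain suspend service
sudo ln -s /lib/systemd/system/systemd-suspend-then-hibernate.service \
  /etc/systemd/system/systemd-suspend.service
```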

No need to restart: close the lid and the laptop will suspend; after 10 minutes it will hibernate.

Please, keep in mind that the above command will completely replace the behavior of suspend.

If you want to have a finer grain control, you might want to edit the file /etc/systemd/logind.conf, in particular uncomment and set the entries (then you’ll have to restart or restart the systemd-logind.service service):
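Something along these lines (again, I haven’t tested this):

```
# in /etc/systemd/logind.conf
[Login]
HandleSuspendKey=suspend-then-hibernate
HandleLidSwitch=suspend-then-hibernate
HandleLidSwitchExternalPower=suspend-then-hibernate
```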

which should be self-explanatory, though I haven’t tested this approach.

Happy hibernating 🙂

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier in the versions: Eclipse bundle and feature versions are automatically processed, and the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂) and remove the .qualifier in Eclipse bundles accordingly.

To do that in an automatic way, you can use a combination of Maven plugins and of the tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here: https://github.com/LorenzoBettini/tycho-set-version-example.

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run:
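It should look something like this (the Maven properties are single-quoted so that the shell does not try to expand them):

```shell
mvn \
  build-helper:parse-version \
  org.eclipse.tycho:tycho-versions-plugin:set-version \
  '-DnewVersion=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.incrementalVersion}'
```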

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:
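Something like:

```shell
mvn \
  versions:set -DremoveSnapshot=true \
  org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata
```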

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of https://github.com/LorenzoBettini/tycho-set-version-example we have:
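a pluginManagement section along these lines (the plugin versions are only indicative; check the repository for the actual ones):

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <version>3.2.0</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.8.1</version>
    </plugin>
    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-versions-plugin</artifactId>
      <version>2.0.0</version>
    </plugin>
  </plugins>
</pluginManagement>
```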

 

Conclusions

In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle, that’s why we ran them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects differs from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions in separate commands (e.g., first on the parent, then on the other modules of the aggregator, etc.).

Publishing a Maven Site to GitHub Pages

In this tutorial I’d like to show how to publish a Maven Site to GitHub Pages. You can find several documents on the web about this subject, but I decided to publish one myself because the documents I found either refer to a deprecated plugin (https://github.com/github/maven-plugins/tree/master/github-site-plugin) or show complicated settings, even when using the official plugin that I’m going to use in this post, the Apache Maven SCM Publish Plugin, https://maven.apache.org/plugins/maven-scm-publish-plugin/index.html (probably because those documents are somewhat old and refer to an old version of the plugin, which has improved a lot since then).

I’m going to use a very simple Java example. The final project is available here: https://github.com/LorenzoBettini/maven-site-github-example.

Create the Java Maven project

Let’s create a simple Java project using the archetype (of course, I’m going to use fake names for the groupId and artifactId, but such values should be used consistently from now on):

This will create the my-app folder with the Maven project. Let’s cd into that folder and make sure it compiles fine:
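```shell
cd my-app
mvn clean package
```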

We can also generate the site of this project (in its default shape):
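```shell
mvn site
```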

The site will be generated in the target/site folder and can be opened with a browser; for example, let’s open its index.html:
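For example, with any browser, or simply:

```shell
xdg-open target/site/index.html
```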

GitHub setup

Let’s publish the Maven project on GitHub; in my case it’s in https://github.com/LorenzoBettini/maven-site-github-example.

Now we have to set up the gh-pages branch in our Git repository. This step is required only the first time. (You might want to have a look at https://help.github.com/en/github/working-with-github-pages/about-github-pages if you’re not familiar with GitHub Pages.)

Let’s prepare the gh-pages branch: this will be a separate branch in the Git repository, we create it as an orphan branch (see also https://maven.apache.org/plugins/maven-scm-publish-plugin/various-tips.html). WARNING: the following commands will wipe out the current contents of the local Git repository, so make sure you don’t have any uncommitted changes! Let’s run the following commands in the main folder of the Git repository:

  1. git checkout --orphan gh-pages to create the branch locally,
  2. rm .git/index ; git clean -fdx to clean the branch content and leave it empty,
  3. echo "It works" > index.html to create some initial site content,
  4. git add . && git commit -m "initial site content" && git push origin gh-pages to commit and push to the branch gh-pages

The corresponding website will be published at your GitHub project’s corresponding URL; in my case it is https://lorenzobettini.github.io/maven-site-github-example, which should show “It works”.

Now we can go back to the master branch:
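```shell
git checkout master
```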

and remove the index.html we previously created.

Maven POM configuration

Let’s see how to configure our POM to automatically publish the Maven site to GitHub pages.

First, we must specify the distribution management section:
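Using the SCM URL syntax expected by the plugin, something like:

```xml
<distributionManagement>
  <site>
    <id>github</id>
    <url>scm:git:https://github.com/LorenzoBettini/maven-site-github-example.git</url>
  </site>
</distributionManagement>
```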

Remember that you will have to use the URL corresponding to your own GitHub project (including your GitHub user id).

Then, we configure the maven-scm-publish-plugin configuration in the pluginManagement section (we configure it there, and then we will call its goals directly from the command line – such a plugin can also be configured to be bound to the Maven lifecycle, but that’s out of the scope of this tutorial, you might want to have a look here in case: https://maven.apache.org/plugins/maven-scm-publish-plugin/usage.html). Note that we specify the gh-pages branch:
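A sketch of such a configuration (the plugin version is indicative):

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-scm-publish-plugin</artifactId>
      <version>3.0.0</version>
      <configuration>
        <scmBranch>gh-pages</scmBranch>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>
```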

Publish the site

Now, publishing the Maven site to GitHub Pages is just a matter of running:
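Following the plugin’s usage conventions, something like:

```shell
mvn clean site site:stage scm-publish:publish-scm
```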

Wait for the plugin to perform its job.

What the plugin does is

  1. It first checks out the contents of the gh-pages branch from GitHub into a local folder (by default, target/scmpublish-checkout);
  2. Then the locally staged site content is applied to the checkout, issuing appropriate Git commands to add and delete entries, followed by a commit and a push.

Now the site is on GitHub Pages:

Improve the site contents

Just for demonstration, let’s enrich the site a bit by using the Maven Archetype Site, https://maven.apache.org/archetypes/maven-archetype-site.

We layer this archetype upon our existing project, so we must run it from the directory containing our current Maven project and specify the same groupId and artifactId we specified when we created the Maven project:
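For example (same groupId and artifactId as before):

```shell
mvn archetype:generate \
  -DgroupId=com.mycompany.app \
  -DartifactId=my-app \
  -DarchetypeArtifactId=maven-archetype-site \
  -DinteractiveMode=false
```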

The archetype will update our project with a src/site directory containing a few example contents in Markdown, APT, etc. It will also update our POM with some configuration for maven-project-info-reports-plugin and for i18n localization. Moreover, a new skin will be used for the final site, maven-fluido-skin.

You might want to have a look locally by regenerating the site.

Let’s publish the new site, again by running

Now it’s on GitHub Pages (it might take some time for the new website to be updated on GitHub and browser reload might help):

To jump to the published website you could also use the GitHub web interface: click on the “environment” link and then on the “View deployment” button corresponding to the latest pushed commit on the gh-pages branch:

Now you could experiment by adding/changing/removing the contents of the directory src/site and then publish the website again through Maven.

Remember

Your Maven project (e.g., the master branch) contains the sources of the site, while the gh-pages branch on GitHub contains the generated website. You won’t modify the contents of the gh-pages branch manually: the Apache Maven SCM Publish Plugin will do that for you.

Hope you enjoy this tutorial 🙂

Installing KDE on top of Ubuntu

If you like to use KDE you probably install Kubuntu directly, instead of Ubuntu, which has been based on Gnome for a long time now.

However, I like to have several Desktop Environments and, now and then, I like to switch from Gnome to KDE and back. Currently, I’m using Gnome most of the time; that’s why I install Ubuntu (instead of Kubuntu).

In any case, you can still install KDE Plasma on top of Ubuntu. The following has been tested on Ubuntu Disco 19.04, but I guess it will work on previous versions as well.

For a reduced installation of KDE you might want to install only these packages
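For instance (the exact list may vary; kde-plasma-desktop is Ubuntu’s reduced Plasma metapackage):

```shell
sudo apt install kde-plasma-desktop kwin-addons
```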

In particular, kwin-addons includes some useful things: it contains additional KWin desktop and window switchers shipped in the Plasma 5 addons module.

When the installation has finished, you may want to reboot; then, on the login screen, you can use the gear icon to specify that you want to enter the KDE Plasma environment instead of the default Gnome environment.

The above packages should provide you with enough to enjoy a Plasma experience, but they lack many (K)ubuntu configurations and addons for KDE.

If you want more Kubuntu stuff, you might want to install the “huge” package:
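```shell
sudo apt install kubuntu-desktop
```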

And then you get a real Kubuntu KDE Plasma experience.

Note that this will replace the classic Ubuntu splash screen shown when booting the OS with the Kubuntu one. If you want to go back to the original splash screen, it’s just a matter of removing the following packages:
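I believe the Kubuntu splash is provided by these Plymouth theme packages (names from the Ubuntu archive; double-check on your system):

```shell
sudo apt remove plymouth-theme-kubuntu-logo plymouth-theme-kubuntu-text
sudo update-initramfs -u
```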

Remember that you can also use the Kubuntu Backport PPA for enjoying more recent versions of KDE software.

Enjoy Gnome and KDE! 🙂