Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and international journals.

Installing KDE on top of Ubuntu

If you like to use KDE, you probably install Kubuntu directly instead of Ubuntu, which has been based on Gnome for a long time now.

However, I like to have several Desktop Environments and, now and then, I like to switch from Gnome to KDE and back. Currently, I use Gnome most of the time; that’s why I install Ubuntu (instead of Kubuntu).

In any case, you can still install KDE Plasma on top of Ubuntu. The following has been tested on Ubuntu Disco 19.04, but I guess it will also work on previous releases.

For a reduced installation of KDE you might want to install only these packages:

sudo apt install kde-plasma-desktop kde-standard kwin-addons

In particular, kwin-addons includes some useful things: additional KWin desktop and window switchers shipped in the Plasma 5 addons module.

When the installation has finished, you may want to reboot; then, on the login screen, you can use the gear icon to specify that you want to enter the KDE Plasma environment instead of the default Gnome environment.

The above packages should provide you with enough to enjoy a Plasma experience, but they lack many (K)ubuntu configurations and addons for KDE.

If you want more Kubuntu stuff, you might want to install the “huge” package:

sudo apt install kubuntu-desktop

And then you get a real Kubuntu KDE Plasma experience.

Note that this will replace the classic Ubuntu splash screen shown when booting the OS with the Kubuntu one. If you want to go back to the original splash screen, it’s just a matter of removing the following packages:

sudo apt remove plymouth-theme-kubuntu-logo plymouth-theme-kubuntu-text

Remember that you can also use the Kubuntu Backports PPA to enjoy more recent versions of KDE software.
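For example (double-check the exact PPA name on Launchpad before adding it):

sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt update && sudo apt full-upgrade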

Enjoy Gnome and KDE! 🙂

My new book on TDD, Build Automation and Continuous Integration

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is indeed “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)” and it can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code), using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development: you need to make sure that the software you write behaves correctly. You can test your software manually, but manual tests require lots of manual work and are error prone.

This book, instead, focuses on automated tests, which can be performed at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation up to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language, and Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.
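Just to give an idea of the kind of tests we will write, here is a minimal JUnit example (the Calculator class is made up for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    @Test
    public void testAddition() {
        // in TDD, this test is written before Calculator itself exists
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}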

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration; in particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where, each time we commit a change in our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly creating additional reports concerning the quality of our code and of our tests.

The quality of the tests themselves can be measured with metrics like code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such static analysis we will use SonarQube and its free cloud version SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a virtualization platform based on containers that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and in the CI server.
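As a concrete taste (the database here is just an example, not necessarily the one used in the book), running a throwaway database container boils down to the same single command, on the developer machine and on the CI server:

docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=test postgres:10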

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂

How to install Linux on a USB drive using Virtualbox

In this tutorial I’m going to show how to install Linux on a USB drive using Virtualbox. I find this useful for testing a Linux distribution. Note that using a live distribution only gives you a limited testing experience, while installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer).

You can use this procedure to install Linux on any USB drive, that is, either a USB stick or an external USB hard drive.

You could burn a live image on a USB stick, boot it, and then install Linux on a second USB stick from the live system, but using Virtualbox is faster and does not require you to create a live USB stick just to install it on a second USB stick.

I’m going to use Ubuntu (17.10) as the main system and install Fedora on a USB stick (I’ve already downloaded the Fedora 27 ISO).

First of all, let’s install Virtualbox:
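sudo apt install virtualbox

(this is the stock Ubuntu package; the package name may differ on other distributions)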

Add your user to the Virtualbox users, or you won’t be able to use USB 3 in the virtual machine: run this command and reboot:
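sudo usermod -aG vboxusers $USER

(the group created by Virtualbox should be called vboxusers; membership is needed for USB passthrough to work)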

Then run Virtualbox and create a new Linux machine (increase the memory a bit, depending on your actual physical memory).

Since we’ll use this machine only for booting the live ISO, there’s no need to create a disk.

Now let’s go to Settings of the newly created machine, and in “USB” select USB 3.0:

In “Storage”, select the “Empty” disk icon; in “Optical Drive”, click “Choose a virtual optical disk file…” and select the ISO image of the live distribution you want to boot (in my case, the Fedora 27 ISO I had already downloaded); then check the “Live CD/DVD” checkbox.

Now “Start” the virtual machine (which will boot the Live ISO).

You’ll be asked to confirm the virtual optical disk file you had previously specified in the settings.

From now on, you’ll boot the live system in the virtual machine; the details depend on the distribution you chose (in my case, Fedora). You should already be familiar with this procedure if you’ve ever installed a Linux distribution starting from a live system.

For example, we choose “Try Fedora” and we can perform some tests (for instance, check that we can access the Internet from the virtual machine; being able to access the Internet might be crucial later, when installing the distribution on the USB stick, since the installation might want to download some upgrades).

Now let’s connect the USB stick where we want to install Linux; then in the Devices menu of the Virtualbox machine, you should select the USB stick you’ve just inserted (in my case it’s a SanDisk):

Now the USB stick is mounted in the virtual machine, and it will be the target disk of our installation.

We can now start the Fedora installation in the virtual machine; again, this assumes you’re familiar with the installation of this distribution. The important part will be to select the USB stick as the target of the installation. Actually, the USB stick should be the only available option (unless you manually mounted other drives in the virtual machine):

Continue the installation and once it’s done, you can reboot the computer and make sure to boot from the USB stick.

Enjoy your new Linux installation on the USB stick 🙂

Eclipse tested with a few Gnome themes

In this small blog post I’ll show what Eclipse looks like in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, does not make Eclipse look very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” text box in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from the default Ubuntu theme 😉

Analyzing Eclipse plug-in projects with Sonarqube

In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).

The code of this example is available on Github: https://github.com/LorenzoBettini/tycho-sonarqube-example

This can be seen as a follow-up of my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects”. I’ll basically reuse almost the same structure of that example, changing only a few things. The Jacoco report part is not related to Sonarqube, but I’ll leave it there.

The structure of the projects is as follows:
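Roughly, it looks like this (a reconstruction based on the projects mentioned in this post; see the Github repository for the exact layout):

example.parent
example.plugin1
example.plugin1.tests
example.plugin2
example.plugin2.tests
example.tests.parent
example.tests.report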

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The project example.tests.parent contains all the common configurations for test projects (and the test report; please refer to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects” for the details of the report project, which is not strictly required for Sonarqube).

This is its pom:
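A minimal sketch of such a pom (coordinates are illustrative; the complete version is in the example repository on Github):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>example</groupId>
    <artifactId>example.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../example.parent</relativePath>
  </parent>
  <artifactId>example.tests.parent</artifactId>
  <packaging>pom</packaging>
  <!-- common configuration inherited by all the .tests projects;
       tycho.testArgLine is set by jacoco:prepare-agent when the
       jacoco profile is active, and used by tycho-surefire -->
</project>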

Note that this also shows a possible way of dealing with a custom argLine in the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage), and the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration’s argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don’t try to define it as an empty property by default: when tycho-surefire runs it would use that empty value, ignoring the one set by jacoco:prepare-agent (argLine’s properties are resolved before jacoco:prepare-agent is executed). Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the project example.plugin2.tests pom:
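A sketch of the idea (the -Xmx argument is just a placeholder for whatever custom arguments you need). In the parent pom:

<properties>
  <!-- empty by default, so that argLine resolves even when the
       jacoco profile is not active -->
  <additionalTestArgLine></additionalTestArgLine>
</properties>

and, in the jacoco profile:

<profile>
  <id>jacoco</id>
  <properties>
    <!-- in this profile, tycho.testArgLine is surely set
         by jacoco:prepare-agent -->
    <additionalTestArgLine>${tycho.testArgLine}</additionalTestArgLine>
  </properties>
</profile>

Finally, in the custom tycho-surefire configuration of example.plugin2.tests:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-surefire-plugin</artifactId>
  <configuration>
    <!-- refer to the indirection, never to tycho.testArgLine directly -->
    <argLine>${additionalTestArgLine} -Xmx512m</argLine>
  </configuration>
</plugin>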

Before running the Sonarqube analysis, it’s important to verify that jacoco has been configured correctly in your projects: this way, if coverage is not shown in Sonarqube, you’ll know that something is wrong in the configuration for Sonarqube, not in the jacoco configuration (as we’ll see in a minute). You can check that code coverage works as expected by running:
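mvn clean verify -Pjacoco

(the jacoco profile is the one used in this example’s parent pom)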

Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 intentionally has some uncovered code).

Now I assume you already have Sonarqube installed.

Let’s run a first Sonarqube analysis with:
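mvn clean verify -Pjacoco sonar:sonar

(this assumes a Sonarqube instance running locally with the default settings; otherwise, pass the server URL with the sonar.host.url property)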

This is the result:

So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:

Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!

That’s because jacoco .exec files are by default generated in the target folder of the tests project, so when Sonarqube analyzes the projects:

  • it finds the jacoco.exec file when it analyzes a tests project but can only see the sources of the tests project (not the ones of the SUT)
  • when it analyzes a SUT project it cannot find any jacoco.exec file.

We could fix this by configuring the maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration would have to be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube maven property sonar.jacoco.reportPaths and “trick” Sonarqube like that (we do that in the parent pom properties section):
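<sonar.jacoco.reportPaths>../${project.artifactId}.tests/target/jacoco.exec</sonar.jacoco.reportPaths>

(a sketch of the trick: for each analyzed project foo, Sonarqube is pointed to the jacoco.exec generated in the corresponding foo.tests project)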

This way, when it analyzes example.plugin1, it will use the jacoco.exec found in the example.plugin1.tests project (if you follow the convention foo and foo.tests, this works out of the box; otherwise, you have to list all the jacoco.exec paths of all the projects in that property, separated by commas).

Let’s run the analysis again:

OK, now code coverage is collected on the SUT classes, as we wanted. Of course, now the test classes appear as uncovered (remember, when it analyzes example.plugin1.tests it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).

This leads us to another problem: test classes should be excluded from the Sonarqube analysis. This works out of the box in standard Maven projects, because the source folders of the SUT and the source folders of the test classes are separate folders of the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.

In fact, issues are reported also on test classes:

We can fix both problems by putting these two Sonarqube properties in the tests.parent pom properties (note the link to the Eclipse bug about this behavior):
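<sonar.sources></sonar.sources>
<sonar.tests>src</sonar.tests>

(a sketch: no main sources for these projects, and the src folder is treated as test sources; adjust the folder name if your test projects use a different source folder)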

These properties will be inherited by our tests projects and, for those projects, Sonarqube will not analyze test classes.

Run the analysis again and see the results: code coverage only on the SUT and issues only on the SUT (remember that in this example MyClass1 is intentionally not completely covered):

You might be tempted to set the property sonar.skip to true for test projects, but then you would lose the collection of the JUnit test reports.

The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).

We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
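<sonar.coverage.exclusions>**/Main.java</sonar.coverage.exclusions>

(this should be the right property; note the .java pattern, as opposed to the .class pattern used in the jacoco configuration)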

Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.

Run the analysis again and Main is not considered in code coverage:

Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂

This example is also analyzed from Travis using Sonarcloud (https://sonarcloud.io/dashboard?id=example%3Aexample.parent).

Hope you enjoyed this tutorial and Happy new year! 🙂

 

Formatting Java method calls in Eclipse

Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on its own line (except, perhaps, the very first one).

You can configure the Eclipse Java formatter in that respect; you just need to reach the right option (“Force split” is necessary to have each invocation on a separate line):

and then you can have method calls formatted automatically like that:
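For example (an illustrative snippet of mine, not the one in the screenshot), a stream pipeline gets formatted like this:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FormatterExample {
    public static void main(String[] args) {
        // each invocation on its own line, as enforced by "Force split"
        List<String> result = Arrays.asList(" a ", "", " b ")
            .stream()
            .filter(s -> !s.isEmpty())
            .map(String::trim)
            .collect(Collectors.toList());
        System.out.println(result); // prints [a, b]
    }
}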

 

How to add Eclipse launcher in Gnome dock

In this post I’ll show how to add an Eclipse launcher as a favorite (pinned) application in the Gnome dock (I’m using Ubuntu Artful). This post is inspired by http://blog.ttoine.net/en/2016/06/30/how-to-add-eclipse-neon-launcher-in-gnu-linux-menus-and-launchers/.

First of all, you need to create a .desktop file, where you need to specify the full path of your Eclipse installation:
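[Desktop Entry]
Type=Application
Name=Eclipse Java
Exec=/home/bettini/eclipse/java-latest-released/eclipse/eclipse
Icon=/home/bettini/eclipse/java-latest-released/eclipse/icon.xpm
Terminal=false
Categories=Development;IDE;

(a minimal sketch; Terminal and Categories are optional extras, and you should adjust Exec and Icon to your own installation)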

This refers to my installation of Eclipse, which is in the folder /home/bettini/eclipse/java-latest-released/eclipse; note the executable “eclipse” and the icon “icon.xpm”. The name “Eclipse Java” is what will appear as the launcher name, both in the Gnome applications and later in the dock.

Make this file executable, then copy it into .local/share/applications in your home folder:
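chmod +x eclipse-java.desktop
cp eclipse-java.desktop ~/.local/share/applications

(the file name eclipse-java.desktop is just an example; any name ending in .desktop will do)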

Now, in the Gnome Activities, search for this launcher, and it should appear:

Select it and make sure that Eclipse effectively runs.

Unfortunately, in the dock, there’s no contextual menu for you to add it as a favorite and pin it to the dock:

But you can still add it to the dock favorites (and thus pin it there) by using the corresponding contextual menu that is available when the launcher appears in the Activities:

And there you go: the Eclipse launcher is now on your dock and it’s there to stay 🙂

 

Touchpad gestures in Linux KDE with Libinput-gestures

This post is based on my Dell M3800 with Linux Neon.

KDE already does a good job with touchpad gestures (e.g., two-finger scrolling, three-finger tap for pasting, etc.), but it does not support three-finger swipe gestures as in macOS, e.g., for displaying all the windows or for showing the desktop.

Today I tried this utility, Libinput-gestures, which works like magic! The utility comes with good defaults for typical gestures (including pinch), but I configured it to fit my needs (in particular, I wanted to mimic the macOS behavior for three-finger swipes: up = display all windows, down = display all windows of the same class, and pinch out = show desktop).

The installation of Libinput-gestures is really easy (just follow the instructions at its web page).

Remember that, first of all, your user must be in the input group, so first run:
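sudo gpasswd -a $USER input

(this is the command suggested by the Libinput-gestures documentation)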

Then logout from your current session, and login again.

Then, in Ubuntu, it’s just a matter of running
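sudo apt install wmctrl xdotool libinput-tools

(these should be the required dependencies; check the Libinput-gestures page for the up-to-date list)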

and install the software like this (you need git):
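git clone https://github.com/bulletmark/libinput-gestures.git
cd libinput-gestures
sudo make install

(a sketch based on the upstream instructions on Github)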

You can already start the program like this
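libinput-gestures-setup start

(the libinput-gestures-setup script is installed by the previous step)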

and if you want it to be started at login time, then run
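libinput-gestures-setup autostart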

The default gestures are in /etc/libinput-gestures.conf. If you want to create your own custom gestures then copy that file to ~/.config/libinput-gestures.conf and edit it.

These are the lines I changed in my configuration (remember that each time you modify the configuration you need to restart libinput-gestures, i.e., instead of start in the command line above, just use restart):
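# three-finger swipe up: Present Windows on all desktops (Ctrl+F10 should be the KWin default)
gesture swipe up    xdotool key ctrl+F10
# three-finger swipe down: Present Windows for the current application (Ctrl+F7)
gesture swipe down    xdotool key ctrl+F7
# pinch out: show desktop (assuming Show Desktop is bound to Meta+D)
gesture pinch out    xdotool key super+d

(a sketch, not necessarily my exact configuration: the key combinations must match the shortcuts defined in your KDE Settings, as discussed below)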

You only need to know the keyboard shortcuts of the actions you want to associate with the touchpad gestures. In that respect, you might want to have a look at the current shortcuts in the KDE Settings (the interesting components are “KWin” and “Plasma”):

This is a video demoing the gestures:

Happy gestures! 🙂

JaCoCo Code Coverage and Report of multiple Eclipse plug-in projects

In this tutorial I’ll show how to use Jacoco with Maven/Tycho to create a code coverage report of multiple Eclipse plug-in projects.

The code of the example is available here: https://github.com/LorenzoBettini/tycho-multiproject-jacoco-report-example.

This is the structure of the projects:

[screenshot: jacoco-report-projects, the structure of the projects]

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The Maven parent pom file is written such that Jacoco is enabled only when the profile “jacoco” is explicitly activated:
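<profile>
  <id>jacoco</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.7.9</version>
        <executions>
          <execution>
            <goals>
              <goal>prepare-agent</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <!-- exclude the class with the main method from coverage -->
          <excludes>
            <exclude>**/Main.class</exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

(a sketch: the version is illustrative, and the exclusion refers to the Main.java mentioned below)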

This is just an example of configuration; you might want to tweak it as you see fit for your own projects (in this example I also created a Main.java with a main method that I exclude from the coverage). By default, the jacoco-maven-plugin will “prepare” the Jacoco agent in the property tycho.testArgLine (since our test projects are Maven projects with packaging eclipse-test-plugin); since tycho.testArgLine is automatically used by the tycho-surefire-plugin and since we have no special test configuration, the pom.xml of our test projects is just as simple as this:
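<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>example</groupId>
    <artifactId>example.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../example.parent</relativePath>
  </parent>
  <artifactId>example.plugin1.tests</artifactId>
  <packaging>eclipse-test-plugin</packaging>
</project>

(coordinates are illustrative; the point is that no test-specific configuration is needed at all)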

If you need a custom configuration, then you have to explicitly mention ${tycho.testArgLine} in the <argLine>.

Now we want to create an aggregate Jacoco report for the classes in the plugin1 and plugin2 projects (tested by plugin1.tests and plugin2.tests, respectively); each test project will generate a jacoco.exec file with coverage information. Before Jacoco 0.7.7, creating an aggregate report wasn’t that easy: it required storing all coverage data in a single .exec file and then using an ant task (with a manual configuration specifying all the source file and class file paths). In 0.7.7, the jacoco:report-aggregate goal has been added, which makes creating a report really easy!

Here’s an excerpt of the documentation:

Creates a structured code coverage report (HTML, XML, and CSV) from multiple projects within reactor. The report is created from all modules this project depends on. From those projects class and source files as well as JaCoCo execution data files will be collected. […] This also allows to create coverage reports when tests are in separate projects than the code under test. […]

Using the dependency scope allows to distinguish projects which contribute execution data but should not become part of the report:

  • compile: Project source and execution data is included in the report.
  • test: Only execution data is considered for the report.

So it’s just a matter of creating a separate project (I called that example.tests.report) where we:

  • configure the report-aggregate goal (in this example I bind it to the verify phase)
  • add the projects containing the actual code as dependencies with scope compile, and the projects containing the tests and the .exec data as dependencies with scope test (see the sketch right after this list)
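A sketch of the pom of this report project (coordinates are illustrative; the complete version is in the example repository):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>example</groupId>
    <artifactId>example.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../example.parent</relativePath>
  </parent>
  <artifactId>example.tests.report</artifactId>
  <packaging>pom</packaging>

  <dependencies>
    <!-- scope compile: code under test, included in the report -->
    <dependency>
      <groupId>example</groupId>
      <artifactId>example.plugin1</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <scope>compile</scope>
    </dependency>
    <!-- scope test: contributes execution data (jacoco.exec) only -->
    <dependency>
      <groupId>example</groupId>
      <artifactId>example.plugin1.tests</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <scope>test</scope>
    </dependency>
    <!-- same pattern for example.plugin2 and example.plugin2.tests -->
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>report-aggregate</id>
            <phase>verify</phase>
            <goals>
              <goal>report-aggregate</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>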

That’s all!

Now run this command in the example.parent project:
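mvn clean verify -Pjacoco

(the jacoco profile is the one defined in the parent pom)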

and when the build terminates, you’ll find the HTML code coverage report for all your projects in the following directory (again, you can configure jacoco with a different output path; that’s just the default):

/example.tests.report/target/site/jacoco-aggregate

[screenshot: jacoco-report, the generated HTML report]

Since, besides the HTML report, jacoco also creates an XML report, you can use any tool that keeps track of code coverage, like the free online solution Coveralls (https://coveralls.io/). Coveralls is automatically accessible from Travis (I assume that you know how to connect your github projects to Travis and Coveralls). So we just need to configure the coveralls-maven-plugin with the path of the Jacoco XML report (I’m doing this in the parent pom, in the pluginManagement section of the jacoco profile):
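<plugin>
  <groupId>org.eluder.coveralls</groupId>
  <artifactId>coveralls-maven-plugin</artifactId>
  <version>4.3.0</version>
  <configuration>
    <jacocoReports>
      <jacocoReport>${project.basedir}/example.tests.report/target/site/jacoco-aggregate/jacoco.xml</jacocoReport>
    </jacocoReports>
  </configuration>
</plugin>

(a sketch: the version and the report path are illustrative; the path must point to the XML report generated by report-aggregate)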

And here’s the Travis file:
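language: java
jdk: oraclejdk8
script: mvn clean verify -Pjacoco coveralls:report

(a minimal sketch: the actual file in the example repository may differ, e.g., with caching or a different JDK)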

This is the coveralls page for the example project: https://coveralls.io/github/LorenzoBettini/tycho-multiproject-jacoco-report-example. And here’s an example of the coverage information:

[screenshot: jacoco-coveralls, coverage shown on Coveralls]

That’s all!

Happy coverage! 🙂

The second edition of the Xtext book has been published

The second edition of the Xtext book, Implementing Domain-Specific Languages with Xtext and Xtend, was published at the end of August: https://www.packtpub.com/web-development/implementing-domain-specific-languages-xtext-and-xtend-second-edition. So… get it while it’s hot 🙂


Please, see my previous post for details about the novelties in this edition.

Sources of the examples are on github: https://github.com/LorenzoBettini/packtpub-xtext-book-2nd-examples.

Hope you’ll enjoy the book!