Grub remembers the last choice

On all my computers I dual boot Ubuntu and Windows, though I’m using the former 99% of the time 😉 On one of my laptops I also started to evaluate Manjaro (a blog post will probably come in the near future). I let Manjaro install the main EFI grub boot loader and I noticed that upon reboots its grub configuration remembers the last choice! That is, if I booted Ubuntu (not the first choice in the menu) and I reboot, then the “Ubuntu” entry is the one selected by default. The same holds for Windows.

I find this feature really cool and useful:

  • if I had previously used Ubuntu and possibly hibernated the computer, then, even after a few days, when I boot the laptop I know which OS I had booted the last time;
  • if I boot Windows (…once a month?) I will probably get many updates that require a few reboots; if I left the computer unattended while it rebooted, I used to find myself back in Linux (the primary default choice in grub) and I had to reboot and choose Windows again so that the updates could be installed (quite annoying when Windows updates require a few reboots).

I thought that Manjaro had some special tweaks in the installed grub, but then I learned that’s a standard feature of Grub!

You just have to add these two lines to your /etc/default/grub (these are standard GRUB options: the first makes GRUB boot the saved entry, the second makes GRUB save the chosen entry at each boot):
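
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true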

Save and run (on Ubuntu and derivatives):
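
sudo update-grub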

And that’s all! From then on Grub will remember your last choice 🙂

Enabling Hibernation on Ubuntu 20.04

I have never been able to make hibernation (suspend to disk) work on my laptops (Dell M3800 and Dell XPS 13 9370) on Ubuntu with systemd. The symptom was that running
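
sudo systemctl hibernate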

was making the system shut down, but then upon restart the system was not restored: it was just like booting the system from scratch.

I had also tried with uswsusp (which is installed if you install the package hibernate), with its program s2disk, but I experienced many problems: it wasn’t working reliably and it was making booting (even standard booting) much longer (several seconds more).

Then, after looking at several blog posts, I found that the solution is rather simple, and I’ll detail the steps here. I’ll also show how to use suspend-then-hibernate.

First, you need to have swap already set up, e.g., a swap partition (though I think a swap file would work as well, but in that case the configuration is slightly more complex). For example, in /etc/fstab you should have something like:
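
UUID=<UUID of your swap partition> none swap sw 0 0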

The UUID is important and you should take note of it.

How big should the swap be? You can find some hints here: https://help.ubuntu.com/community/SwapFaq. I have 16 GB of RAM and my swap partition is 20 GB.

Then, you must make sure initramfs is “aware” of your swap partition and is already able to “resume” from it. This should already be the case, but you can verify it by running:
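
sudo update-initramfs -u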

and after some time you should see something like:
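
I: The initramfs will attempt to resume from <your swap partition>
I: (UUID=<UUID of your swap partition>)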

The UUID must be the same as your swap UUID in the /etc/fstab.

Now, it’s just a matter of editing your /etc/default/grub and make sure you specify resume in GRUB_CMDLINE_LINUX_DEFAULT, with the UUID of your swap partition. So it should be something like (remember that <UUID of your swap partition> must be replaced with the UUID):
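
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=<UUID of your swap partition>"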

Save the file and update grub:
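
sudo update-grub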

Reboot the system and now try to hibernate again (first you might want to start a few applications so that you’re sure that the system is effectively restored to the same state):
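
sudo systemctl hibernate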

Wait for the system to shut down and switch it on again. The splash screen should say something like “resuming from <your swap partition>”. If all goes well you’ll have to enter your password to unlock the system, which you should find in the state you left it before hibernating! 🙂

suspend-then-hibernate

Another interesting mechanism provided by systemd is suspend-then-hibernate: the system is suspended (to RAM) and after some time it is hibernated (suspended to disk).

The amount of time before hibernating is defined in the file /etc/systemd/sleep.conf. Let’s have a look at the default contents (this is from Ubuntu 20.04; other versions might differ slightly):
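
[Sleep]
#AllowSuspend=yes
#AllowHibernation=yes
#AllowSuspendThenHibernate=yes
#AllowHybridSleep=yes
#SuspendMode=
#SuspendState=mem standby freeze
#HibernateMode=platform shutdown
#HibernateState=disk
#HybridSleepMode=suspend platform shutdown
#HybridSleepState=disk
#HibernateDelaySec=180min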

By default everything is commented out but, as stated at the beginning of the file, the values shown represent the defaults. So you can see that suspend-then-hibernate is enabled and that the default delay before hibernating is 180 minutes. If you’re not happy with that value, uncomment the line and change it. For example, I set it to 10 minutes:
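
HibernateDelaySec=10min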

You can now test this functionality with this command:
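
sudo systemctl suspend-then-hibernate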

The system will suspend to RAM, and if you don’t touch the computer, after 10 minutes you can hear some sounds: the system is effectively hibernating.

If you want to make this mechanism the default suspend mechanism (e.g., you close the lid and the system suspends and then after some time hibernates), you CANNOT set the value of SuspendMode in the file above, since that has another meaning. To make suspend-then-hibernate the default suspend mechanism you have to create this symlink (the path assumes Ubuntu’s layout for systemd units):
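
sudo ln -s /lib/systemd/system/systemd-suspend-then-hibernate.service /etc/systemd/system/systemd-suspend.service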

No need to restart: close the lid and the laptop will suspend; after 10 minutes it will hibernate.

Please, keep in mind that the above command will completely replace the behavior of suspend.

If you want finer-grained control, you might want to edit the file /etc/systemd/logind.conf; in particular, uncomment and set entries like the following (then you’ll have to reboot, or restart the systemd-logind.service service):
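
HandleSuspendKey=suspend-then-hibernate
HandleLidSwitch=suspend-then-hibernate
HandleLidSwitchExternalPower=suspend-then-hibernate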

which should be self-explanatory, but I haven’t tested this approach.

Happy hibernating 🙂

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in MANIFEST, feature.xml and the other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier from the versions and you will get Eclipse bundles and features versions automatically processed: the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central) you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂) and you have to remove the .qualifier in the Eclipse bundles accordingly.

To do that in an automatic way, you can use a combination of Maven plugins and the tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here: https://github.com/LorenzoBettini/tycho-set-version-example.

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run (note the escaped \${…} so that the properties are resolved by Maven, not by the shell):
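
mvn \
  build-helper:parse-version \
  org.eclipse.tycho:tycho-versions-plugin:set-version \
  -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.minorVersion}.\${parsedVersion.incrementalVersion}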

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:
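
# versions:set removes -SNAPSHOT from the POMs; the Tycho goal then
# aligns the Eclipse projects' metadata with the POM versions
mvn \
  versions:set -DremoveSnapshot=true \
  org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata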

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of https://github.com/LorenzoBettini/tycho-set-version-example we have something like the following (the plugin versions are illustrative; use recent ones):
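
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <version>3.0.0</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.7</version>
    </plugin>
    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-versions-plugin</artifactId>
      <!-- ${tycho-version} is assumed to be defined in your POM -->
      <version>${tycho-version}</version>
    </plugin>
  </plugins>
</pluginManagement>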


Conclusions

In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle, that’s why we ran them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects is quite different from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions with different commands (e.g., first on the parent, then on the other modules of the aggregator, etc.).

Publishing a Maven Site to GitHub Pages

In this tutorial I’d like to show how to publish a Maven site to GitHub Pages. You can find several documents on the web about this subject, but I decided to publish one myself because the documents I found either refer to a deprecated plugin (https://github.com/github/maven-plugins/tree/master/github-site-plugin) or show complicated settings, even when using the official Maven plugin that I’m going to use in this post, the Apache Maven SCM Publish Plugin, https://maven.apache.org/plugins/maven-scm-publish-plugin/index.html. That’s probably because those documents are somewhat old and refer to an old version of the plugin, which has improved a lot since then.

I’m going to use a very simple Java example. The final project is available here: https://github.com/LorenzoBettini/maven-site-github-example.

Create the Java Maven project

Let’s create a simple Java project using the archetype (of course, I’m going to use fake names for the groupId and artifactId, but such values should be used consistently from now on):
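
# the groupId/artifactId below are the fake example values used from now on
mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app \
  -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false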

This will create the my-app folder with the Maven project. Let’s cd into that folder and make sure it compiles fine:
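
cd my-app
mvn compile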

We can also generate the site of this project (in its default shape):
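
mvn site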

The site will be generated in the target/site folder and can be opened with a browser; for example, let’s open its index.html:
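
xdg-open target/site/index.html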

GitHub setup

Let’s publish the Maven project on GitHub; in my case it’s in https://github.com/LorenzoBettini/maven-site-github-example.

Now we have to set up the gh-pages branch in our Git repository. This step is required the first time only. (You might want to have a look at https://help.github.com/en/github/working-with-github-pages/about-github-pages if you’re not familiar with GitHub Pages.)

Let’s prepare the gh-pages branch: this will be a separate branch in the Git repository, we create it as an orphan branch (see also https://maven.apache.org/plugins/maven-scm-publish-plugin/various-tips.html). WARNING: the following commands will wipe out the current contents of the local Git repository, so make sure you don’t have any uncommitted changes! Let’s run the following commands in the main folder of the Git repository:

  1. git checkout --orphan gh-pages to create the branch locally,
  2. rm .git/index ; git clean -fdx to clean the branch content and leave it empty,
  3. echo "It works" > index.html to create an initial site content
  4. git add . && git commit -m "initial site content" && git push origin gh-pages to commit and push to the branch gh-pages

The corresponding website will be published at your GitHub project’s corresponding URL; in my case it is https://lorenzobettini.github.io/maven-site-github-example, which should now show “It works”.

Now we can go back to the master branch:
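
git checkout master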

and remove the index.html we previously created.

Maven POM configuration

Let’s see how to configure our POM to automatically publish the Maven site to GitHub pages.

First, we must specify the distribution management section (something like the following):
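
<distributionManagement>
  <site>
    <id>github</id>
    <!-- use the URL of your own GitHub project -->
    <url>scm:git:https://github.com/LorenzoBettini/maven-site-github-example.git</url>
  </site>
</distributionManagement>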

Remember that you will have to use the URL corresponding to your own GitHub project (including your GitHub user id).

Then, we configure the maven-scm-publish-plugin in the pluginManagement section (we configure it there, and then we will call its goals directly from the command line; such a plugin can also be bound to the Maven lifecycle, but that’s out of the scope of this tutorial, you might want to have a look here in case: https://maven.apache.org/plugins/maven-scm-publish-plugin/usage.html). Note that we specify the gh-pages branch (the plugin version is illustrative):
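
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-scm-publish-plugin</artifactId>
      <version>3.0.0</version>
      <configuration>
        <scmBranch>gh-pages</scmBranch>
        <!-- publish the generated site (target/site) -->
        <content>${project.reporting.outputDirectory}</content>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>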

Publish the site

Now, publishing the Maven site to GitHub Pages is just a matter of running:
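
mvn clean site scm-publish:publish-scm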

Wait for the plugin to perform its job.

What the plugin does is:

  1. It first checks out the contents of the gh-pages branch from GitHub into a local folder (by default, target/scmpublish-checkout);
  2. Then locally staged site content is applied to the check-out, issuing appropriate Git commands to add and delete entries, followed by a commit and a push.

Now the site is on GitHub Pages:

Improve the site contents

Just for demonstration, let’s enrich the site a bit by using the Maven Archetype Site, https://maven.apache.org/archetypes/maven-archetype-site.

We layer this archetype upon our existing project, so we must run it from the directory containing our current Maven project and specify the same groupId and artifactId we specified when we created the Maven project:
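
# run from the directory containing the existing project,
# with the same groupId/artifactId as before
mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-site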

The archetype will update our project with a src/site directory containing a few example contents in Markdown, APT, etc. It will also update our POM with some configuration for maven-project-info-reports-plugin and for i18n localization. Moreover, a new skin will be used for the final site, maven-fluido-skin.

You might want to have a look locally by regenerating the site.

Let’s publish the new site, again by running
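
mvn clean site scm-publish:publish-scm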

Now it’s on GitHub Pages (it might take some time for the new website to be updated on GitHub and browser reload might help):

To jump to the published website you could also use the GitHub web interface: click on the “environment” link and then on the “View deployment” button corresponding to the latest pushed commit on the gh-pages branch:

Now you could experiment by adding/changing/removing the contents of the directory src/site and then publish the website again through Maven.

Remember

Your Maven project (e.g., the master branch) contains the sources of the site, while the gh-pages branch on GitHub contains the generated website. You won’t modify the contents of the gh-pages branch manually: the Apache Maven SCM Publish Plugin will do that for you.

Hope you enjoy this tutorial 🙂

Installing KDE on top of Ubuntu

If you like to use KDE you probably install Kubuntu directly, instead of Ubuntu, which has been based on Gnome for a long time now.

However, I like to have several Desktop Environments, and, now and then, I like to switch from Gnome to KDE and then back. Currently, I’m using Gnome for most of the time, that’s why I install Ubuntu (instead of Kubuntu).

In any case, you can still install KDE Plasma on top of Ubuntu. The following has been tested on Ubuntu Disco 19.04, but I guess it will work on previous releases as well.

For a reduced installation of KDE you might want to install only these packages

sudo apt install kde-plasma-desktop kde-standard kwin-addons

In particular, kwin-addons includes some useful things: it contains additional KWin desktop and window switchers shipped in the Plasma 5 addons module.

When the installation has finished you may want to reboot; then, on the login screen, you can use the gear icon to specify that you want to enter the KDE Plasma environment instead of the default Gnome environment.

The above packages should provide you with enough stuff to enjoy a Plasma experience, but they lack many (K)ubuntu configurations and addons for KDE.

If you want more Kubuntu stuff, you might want to install the “huge” package:

sudo apt install kubuntu-desktop

And then you get a real Kubuntu KDE Plasma experience.

Note that this will replace the classic Ubuntu splash screen when booting the OS: it replaces it with the Kubuntu splash screen. If you want to go back to the original splash screen it’s just a matter of removing the following packages:

sudo apt remove plymouth-theme-kubuntu-logo plymouth-theme-kubuntu-text

Remember that you can also use the Kubuntu Backports PPA for enjoying more recent versions of KDE software.

Enjoy Gnome and KDE! 🙂

My new book on TDD, Build Automation and Continuous Integration

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is, indeed, “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)” and it can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code), using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can manually test your software, but manual tests require lots of manual work and are error prone.

On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where each time we commit a change in our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly create additional reports concerning the quality of our code and of our tests.

The code quality of tests can be measured in terms of a few metrics using code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such a static analysis we will use SonarQube and its free cloud version SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a virtualization program based on containers that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and in the CI server.

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂

How to install Linux on a USB drive using Virtualbox

In this tutorial I’m going to show how to install Linux on a USB drive using Virtualbox. I find this useful to test a Linux distribution. Note that using a live distribution only gives you a small testing experience, while installing Linux on a USB drive will give you the full experience (and if the USB drive is fast it’s almost like using Linux on a standard computer).

You can use this procedure to install Linux on any USB drive, that is, either a USB stick or an external USB hard drive.

You could burn a live image on a USB stick, boot it, and then install Linux on a second USB stick from the live system, but using Virtualbox is faster and does not require you to create a live USB stick just to install it on a second USB stick.

I’m going to use Ubuntu (17.10) as the main system and install Fedora on a USB stick (I’ve already downloaded the Fedora 27 ISO).

First of all, let’s install Virtualbox:
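
sudo apt install virtualbox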

Add your user to the vboxusers group, or you won’t be able to use USB 3 in the virtual machine; run this command and reboot:
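
sudo usermod -aG vboxusers $USER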

Then run Virtualbox and create a new Linux machine (increase the memory a bit, depending on your actual physical memory).

Since we’ll use this machine only for booting the live ISO, there’s no need to create a disk.

Now let’s go to Settings of the newly created machine, and in “USB” select USB 3.0:

In “Storage”, select the “Empty” disk icon; in “Optical Drive”, use “Choose a virtual optical disk file…” and select the ISO image of the live distribution you want to boot (in my case, the Fedora 27 ISO I had already downloaded); then check the “Live CD/DVD” checkbox.

Now “Start” the virtual machine (which will boot the Live ISO).

You’ll be asked to confirm the virtual optical disk file you had previously specified in the settings.

From now on, you’ll boot the live system in the virtual machine; this depends on the distribution you chose (in my case Fedora). You should already be familiar with this procedure if you’ve ever installed a Linux distribution starting from a live system.

For example, we choose “Try Fedora” and we can perform some tests (for instance, that we can access the Internet from the virtual machine; being able to access the Internet might be crucial later when installing the distribution on the USB stick since the installation might want to download some upgrades).

Now let’s connect the USB stick where we want to install Linux; then in the Devices menu of the Virtualbox machine, you should select the USB stick you’ve just inserted (in my case it’s a SanDisk):

Now the USB stick is mounted in the virtual machine, and it will be the target disk of our installation.

We can now start the Fedora installation in the virtual machine; again, this assumes you’re familiar with the installation of this distribution. The important part will be to select the USB stick as the target of the installation. Actually, the USB stick should be the only available option (unless you manually mounted other drives in the virtual machine):

Continue the installation and once it’s done, you can reboot the computer and make sure to boot from the USB stick.

Enjoy your new Linux installation on the USB stick 🙂

Eclipse tested with a few Gnome themes

In this small blog post I’ll show what Eclipse looks like in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, doesn’t make Eclipse look very nice… see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” textbox in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from Ubuntu default theme 😉

Analyzing Eclipse plug-in projects with Sonarqube

In this tutorial I’m going to show how to analyze multiple Eclipse plug-in projects with Sonarqube. In particular, I’m going to focus on peculiarities that have to be taken care of due to the standard way Sonarqube analyzes sources and to the structure of typical Eclipse plug-in projects (concerning tests and code coverage).

The code of this example is available on GitHub: https://github.com/LorenzoBettini/tycho-sonarqube-example

This can be seen as a follow-up of my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects”. I’ll basically reuse almost the same structure of that example and a few things from it. The Jacoco report part is not related to Sonarqube, but I’ll leave it there.

The structure of the projects is as follows:
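
example.parent
example.plugin1
example.plugin1.tests
example.plugin2
example.plugin2.tests
example.tests.parent
example.tests.report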

Each project’s code is tested in a specific .tests project. The code consists of simple Java classes doing nothing interesting, and tests just call that code.

The project example.tests.parent contains all the common configurations for test projects (and the test report; please refer to my previous post on “Jacoco code coverage and report of multiple Eclipse plug-in projects” for the details of the report project, which is not strictly required for Sonarqube).

This is the relevant part of its pom (a sketch; see the example repository for the complete file):
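
<artifactId>example.tests.parent</artifactId>
<packaging>pom</packaging>

<profiles>
  <profile>
    <id>jacoco</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.jacoco</groupId>
          <artifactId>jacoco-maven-plugin</artifactId>
          <configuration>
            <!-- let jacoco set the property used by tycho-surefire -->
            <propertyName>tycho.testArgLine</propertyName>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>prepare-agent</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>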

Note that this also shows a possible way of dealing with a custom argLine in the tycho-surefire configuration: tycho.testArgLine will be automatically set by the jacoco:prepare-agent goal, with the path of the jacoco agent (needed for code coverage); the property tycho.testArgLine is automatically used by tycho-surefire. But if you have a custom configuration of tycho-surefire with additional arguments you want to pass in argLine, you must be careful not to overwrite the value set by jacoco. If you simply refer to tycho.testArgLine in the custom tycho-surefire configuration’s argLine, it will work when the jacoco profile is active, but it will fail when it is not active, since that property will not exist. Don’t try to define it as an empty property by default, since when tycho-surefire runs it will use that empty value, ignoring the one set by jacoco:prepare-agent (argLine’s properties are resolved before jacoco:prepare-agent is executed). Instead, use another level of indirection: refer to a new property, e.g., additionalTestArgLine, which by default is empty. In the jacoco profile, set additionalTestArgLine referring to tycho.testArgLine (in that profile, that property is surely set by jacoco:prepare-agent). Then, in the custom argLine, refer to additionalTestArgLine. An example is shown in the example.plugin2.tests pom; here is a sketch of the idea (the -Xmx argument is just a placeholder for your own custom arguments):
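
<properties>
  <!-- empty by default; redefined in the jacoco profile -->
  <additionalTestArgLine></additionalTestArgLine>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-surefire-plugin</artifactId>
      <configuration>
        <argLine>${additionalTestArgLine} -Xmx512m</argLine>
      </configuration>
    </plugin>
  </plugins>
</build>

<profiles>
  <profile>
    <id>jacoco</id>
    <properties>
      <!-- here tycho.testArgLine is surely set by jacoco:prepare-agent -->
      <additionalTestArgLine>${tycho.testArgLine}</additionalTestArgLine>
    </properties>
  </profile>
</profiles>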

You can check that code coverage works as expected by running the build with the jacoco profile (it’s important to verify that jacoco is configured correctly in your projects before running the Sonarqube analysis: if coverage doesn’t show up in Sonarqube, then the problem is in the configuration for Sonarqube, not in the jacoco configuration, as we’ll see in a minute):
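
mvn clean verify -Pjacoco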

Make sure that example.tests.report/target/site/jacoco-aggregate/index.html reports some code coverage (in this example, example.plugin1 has some code uncovered by intention).

Now I assume you already have Sonarqube installed.

Let’s run a first Sonarqube analysis (here against a local Sonarqube instance with default settings) with:
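
mvn clean verify -Pjacoco sonar:sonar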

This is the result:

So Unit Tests are correctly collected! What about Code Coverage? Something is shown, but if you click on that you see some bad surprises:

Code coverage only on tests (which is irrelevant) and no coverage for our SUT (Software Under Test) classes!

That’s because jacoco .exec files are by default generated in the target folder of the tests project; so, when Sonarqube analyzes the projects:

  • it finds the jacoco.exec file when it analyzes a tests project but can only see the sources of the tests project (not the ones of the SUT)
  • when it analyzes a SUT project it cannot find any jacoco.exec file.

We could fix this by configuring the maven jacoco plugin to generate jacoco.exec in the SUT project, but then the aggregate report configuration should be updated accordingly (while it works out of the box with the defaults). Another way of fixing the problem is to use the Sonarqube maven property sonar.jacoco.reportPaths and “trick” Sonarqube like that (we do that in the parent pom properties section):
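
<!-- "trick": point each project to the jacoco.exec of its .tests sibling -->
<sonar.jacoco.reportPaths>
  ${project.basedir}/../${project.artifactId}.tests/target/jacoco.exec
</sonar.jacoco.reportPaths>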

This way, when it analyzes example.plugin1 it will use the jacoco.exec found in example.plugin1.tests project (if you follow the convention foo and foo.tests this works out of the box, otherwise, you have to list all the jacoco.exec paths in all the projects in that property, separated by comma).

Let’s run the analysis again:

OK, now code coverage is collected on the SUT classes as we wanted. Of course, now test classes appear as uncovered (remember, when it analyzes example.plugin1.tests it now searches for jacoco.exec in example.plugin1.tests.tests, which does not even exist).

This leads us to another problem: test classes should be excluded from Sonarqube analysis. This works out of the box in standard Maven projects because source folders of SUT and source folders of test classes are separate and in the same project (that’s also why code coverage for pure Maven projects works out of the box in Sonarqube); this is not the case for Eclipse projects, where SUT and tests are in separate projects.

In fact, issues are reported also on test classes:

We can fix both problems by putting in the tests.parent pom properties section two Sonarqube properties like the following, telling Sonarqube that these projects contain test sources only:
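
<sonar.sources></sonar.sources>
<sonar.tests>src</sonar.tests>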

This will be inherited by our tests projects and for those projects, Sonarqube will not analyze test classes.

Run the analysis again and see the results: code coverage only on SUT and issues only on SUT (remember that in this example MyClass1 is intentionally not completely covered):

You might be tempted to set the property sonar.skip to true for test projects, but then you would lose the collection of JUnit test reports.

The final bit of customization is to exclude the Main.java class from code coverage. We have already configured the jacoco maven plugin to do so, but this won’t be picked up by Sonarqube (that configuration only tells jacoco to skip that class when it generates the HTML report).

We have to repeat that exclusion with a Sonarqube maven property, in the parent pom:
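
<sonar.coverage.exclusions>**/Main.java</sonar.coverage.exclusions>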

Note that in the jacoco maven configuration we had excluded a .class file, while here we exclude Java files.

Run the analysis again and Main is not considered in code coverage:

Now you can have fun in fixing Sonarqube issues, which is out of the scope of this tutorial 🙂

This example is also analyzed from Travis using Sonarcloud (https://sonarcloud.io/dashboard?id=example%3Aexample.parent).

Hope you enjoyed this tutorial and Happy new year! 🙂


Formatting Java method calls in Eclipse

Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on each line (maybe except for the very first invocation).

You can configure the Eclipse Java formatter in that respect; you just need to reach the right option (the “Force split” is necessary to have each invocation on a separate line).
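
In a recent Eclipse, the option should be under Preferences > Java > Code Style > Formatter > Edit… > Line Wrapping > Function invocations > Qualified invocations: choose a wrapping policy like “Wrap all elements, except first element if not necessary” and check “Force split”.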

and then you can have method calls formatted automatically, as in this minimal sketch:
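
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FormatterExample {
    public static void main(String[] args) {
        // after formatting, each chained invocation lands on its own line
        List<String> result = Stream.of("a", "", "b")
                .filter(s -> !s.isEmpty())
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(result); // prints [A, B]
    }
}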