
Testing the new Fedora 36

Fedora 36 has just been released, and I couldn’t resist trying it right away. I had already started using Fedora 35 daily (though I have several Linux distributions installed), and I’ve been enjoying it so far.

Before upgrading my Fedora 35 installations, I decided to install Fedora 36 on a virtual machine with VirtualBox.

These are a few screenshots of the installation procedure.

As usual, you’re greeted by a dialog for installing or trying Fedora, and I went for the latter.

The installation procedure is available from the dock:

To be honest, I’m not a big fan of the Fedora installer: compared to the installers of other distributions like Ubuntu, EndeavourOS, or Manjaro, I find it much more confusing. Maybe it’s just that I’m not used to it, but I never had problems with Calamares in EndeavourOS or Manjaro, not even the very first time I tried it.

For example, once a subsection is selected, the “Done” button is in the upper-left corner, whereas I would expect buttons at the bottom (right).

I appreciate that you can select the NTP server for time synchronization (at my University, I cannot use external NTP servers, and in fact, the default one does not work: I have to use the one provided by my University). Unfortunately, this setting does not seem to be persisted in the installed system. UPDATE 12 May: actually, it is persisted: I thought I’d find it in the file /etc/systemd/timesyncd.conf, but instead it is in /etc/chrony.conf. Well done!

Since I’m installing the system on a VM hard disk, I chose the “Automatic” configuration for the partitioning. On a real computer, I’d go for manual partitioning. Even in this task, the Fedora installer is a bit confusing. Maybe the “Advanced Custom (Blivet GUI)” is more accessible than the default “Custom” GUI, or, at least, it’s much more similar to what I’m accustomed to.

Finally, we’re ready to start the installation.

Even on a virtual machine, the installation does not take that long.

Once rebooted (actually, in the virtual machine, the first reboot did not succeed, and I had to force the shutdown of the VM), you’re greeted by a Welcome program. This program allows you to configure a few things, including enabling third-party repositories, setting up online accounts, and specifying your user account.

Then, there is the Gnome welcome tour, which I’ll skip here.

Here is the information about the installed system. As you can see, Fedora ships with the brand new Gnome 42 and with Wayland by default:

Fedora uses offline updates, so once notified of updates, you have to restart the system, and the updates will be installed on the next boot:

The installation is not bloated with too much software. Gnome 42’s new theme looks fine, with folder icons in blue (instead of the old-fashioned light brown). Fedora also ships with the new Gnome Text Editor. Unlike the old Gedit, the new text editor finally allows you to increase/decrease the font size with Ctrl and +/-, respectively. I cannot believe Gedit did not provide such a mechanism. I used to install Kate in Gnome because I was not too fond of Gedit for that missing feature.

On the other hand, Fedora does not install the new Gnome terminal application (gnome-console) by default. I installed it with DNF, and I wouldn’t say I liked it that much: with Ctrl and +/-, you can zoom the terminal’s font, but the terminal does not resize accordingly. For that reason, I prefer to stay with the good old Gnome terminal (gnome-terminal).
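
If you want to try it anyway, installing it with DNF should be a one-liner along these lines (assuming the package is simply called gnome-console, as it appeared to be at the time):

    sudo dnf install gnome-console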

First impressions

First of all, although I tried this installation in a VM, Fedora 36 seems pretty responsive and efficient. I might even say that the guest Fedora 36 VM looked faster than my host (Ubuntu 22.04). Maybe that was just an impression 😉

Since I chose the Automatic partitioning, Fedora created two BTRFS subvolumes (one for / and one for /home) with compression, and a separate ext4 /boot partition:

It also uses swap on zram:
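
If you want to verify this layout yourself, a few standard commands are enough; this is just a sketch, and the exact subvolume names and zram device may differ on your installation:

    # filesystems and mount points (Btrfs for / and /home, ext4 for /boot)
    lsblk -f
    # list the Btrfs subvolumes (typically "root" and "home")
    sudo btrfs subvolume list /
    # check the compression mount option
    findmnt -t btrfs -o TARGET,OPTIONS
    # confirm that swap is backed by a zram device
    swapon --show
    zramctl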

I soon installed the extension “AppIndicator, KStatusNotifierItem and legacy Tray icons support to the Shell” by Ubuntu (https://extensions.gnome.org/extension/615/appindicator-support/) and it works in Gnome 42.

However, after installing Dropbox, while the icon shows up in the system tray, clicking on it (left-click, right-click, or double-click) does not make the context menu appear, which makes it unusable. I seem to understand that it is a known problem, and maybe they are already working on it. For the time being, if you need the Dropbox context menu for settings like “selective sync,” you’re out of luck. However, you can use the dropbox command-line program for the settings. In that case, I first ignore all the folders and then remove the exclusion for the folders I want to have in sync.

For example, I only want “Screenshot” and “sync” from my Dropbox on my local computer, and I run:
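
As a sketch, assuming the Dropbox folder is in the default location ~/Dropbox and using the folder names mentioned above (check "dropbox help exclude" for the exact syntax of your client version):

    cd ~/Dropbox
    # exclude everything currently in the Dropbox folder from syncing
    dropbox exclude add *
    # then re-enable only the folders I actually want on this computer
    dropbox exclude remove Screenshot sync
    # verify the resulting exclusion list
    dropbox exclude list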

On a side note, I find the Dropbox support for Linux kind of an insult…

I look forward to upgrading my existing Fedora 35 installations on my computers, and maybe I’ll get back with more impressions on Fedora 36 on real hardware.

For the moment, it looks promising 🙂

Getting started with KVM and Virtual Machine Manager

After playing with VirtualBox (see my posts), I’ve decided to try also KVM (based on QEMU) and Virtual Machine Manager (virt-manager).

The installation is straightforward.

In Ubuntu systems:
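
A typical package set is something like the following (names may vary slightly between Ubuntu releases):

    sudo apt install virt-manager qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils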

In Arch-based systems:
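
Something along these lines should work (again, just a sketch; check the Arch wiki for the current package names, since qemu has been split into several packages on recent Arch):

    sudo pacman -S virt-manager qemu libvirt edk2-ovmf dnsmasq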

Then, you need to add your user to the corresponding group:
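
On most setups this is the libvirt group (some distributions also use a kvm group), so something like:

    sudo usermod -aG libvirt $USER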

Reboot, and you’re good to go.

In this post, I’m going to install Fedora 35 on a virtual machine through Virtual Machine Manager (based on KVM and QEMU).

So, first, download the ISO of this distribution if you want to follow along.

Let’s start Virtual Machine Manager (virt-manager):

We press the “+” button to create a new virtual machine and select the first entry, since we have downloaded an ISO.

Here, we select the ISO and let the manager detect the installed OS. Otherwise, we can choose the OS manually (the manager might not catch the OS correctly in some cases: it happened to me with ArcoLinux, for example).

Then, we allocate some resources. Since I have 16GB of RAM and a quad-core CPU, I give the virtual machine 8GB and two cores.

Then, we allocate storage for the machine. Alternatively, we can select or create a custom image file in another location. By default, the image will NOT physically occupy the whole space on your disk. Thus, I will not lose 60GB (unless I effectively use such space in the virtual machine). The file will appear to be of the specified size on your drive, but if you check the free disk space, you will notice that you have not actually lost that many gigabytes (more on that in the next steps).

In the last step, we can give a custom name to our machine and customize a few settings before starting the installation by selecting the appropriate checkbox (we also make sure that the network is configured correctly).

If we selected “Customize configuration before install,” by pressing “Finish,” we get to the settings of our virtual machine.

In this example, I’m going to change the chipset and specify a UEFI firmware:

We can also get other information, like the path of the disk image:

And we can click “Begin Installation.” After the boot menu, we’ll get to the live environment of the distribution ISO we chose:

You can also specify whether to automatically resize the display of the VM when you resize the window, and when to do that. (WARNING: this will work correctly only after installing the OS in the virtual machine, since this feature requires some software in the guest operating system. Typically, such software, spice-vdagent, is automatically installed in the guest during the OS installation, from what I’ve seen in my experiments.)

And we can start the installation of the distribution (or try it live before the actual installation), as usual. Of course, the whole installation process will be a bit slower than on real hardware.

I’ll go with the “Automatic” option for disk partitioning since the disk image will be allocated only to this machine, so I will not bother customizing that.

While installing, you might want to check the disk image size and the effective space on the disk:
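
For example, assuming the default storage pool location /var/lib/libvirt/images and an image called fedora35.qcow2 (the file name here is just illustrative):

    # apparent size vs. actually allocated size of the image
    ls -lh /var/lib/libvirt/images/fedora35.qcow2
    du -h /var/lib/libvirt/images/fedora35.qcow2
    # qemu-img reports both the "virtual size" and the "disk size"
    qemu-img info /var/lib/libvirt/images/fedora35.qcow2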

After a few minutes, the installation should be complete, and we can reboot our virtual machine.

And upon reboot, we’ll get to our new installed OS on the virtual machine:

In the primary Virtual Machine Manager window, you can see your virtual machines, and, if they are running, a few statistics:

In the virtual machine window’s “View” menu, you can switch between the “Console” view (that is, the virtual machine installed and running OS) and the “Details” view, where you can see its settings, and change a few of them.

Note that now the automatic resizing of the machine display with the window works: in the screenshot, I resized the window (made it bigger), and the display of the machine resized accordingly.

When you later restart a virtual machine from the manager, you might have to double-click on the virtual machine element and possibly switch to the “Console” view.

After installing the OS, you might want to check the image file and the actual disk usage again. You will find that while the image file size did not change, the disk usage has:

What I’ve shown in this blog post was one of my first experiments with KVM and the Virtual Machine Manager. To be honest, I still prefer VirtualBox, but maybe that’s only because I’m more used to VirtualBox, while I’ve just started using virt-manager.

That’s all for now! Stay tuned for further posts on KVM and virt-manager, and happy virtualization! 🙂

Linux EndeavourOS review

I’ve been using Linux EndeavourOS (the latest version, “Atlantis neo”) for a few days now, and I love it!

I mainly use Ubuntu and Kubuntu, but I recently enjoyed Manjaro, an Arch-based distro. I still haven’t tried to install pure Arch, but I learned about EndeavourOS, an Arch-based distro that stays very close to pure Arch. It is certainly more Arch than Manjaro, since EndeavourOS uses the official Arch repositories plus a small EndeavourOS repository. On the contrary, Manjaro relies heavily on its own independent repositories (which also contain software packages not provided by Arch). They are both rolling releases, but EndeavourOS is essentially Arch with a much simpler installation procedure.

I’ll first briefly recap the installation procedure and then do a short review.

Installation

The installation starts with an XFCE desktop and a dialog where you can set a few things, including the screen resolution in case you need to:

Now it’s time to connect to the Internet, e.g., via Wi-Fi (the setting will be remembered in the final installation, so you will not have to re-enter the Wi-Fi network name and password).

Then, we can start the installer:

I prefer to choose “Online” so that I can select a different desktop environment (I don’t use XFCE, which is the only choice if you perform the “Offline” method):

One of the exciting aspects of the EndeavourOS installation process is that it automatically shows a terminal with the log. This terminal can be helpful to debug possible installation problems.

The installer is Calamares, which you might already know if you used Manjaro.

I’m going to show only the interesting parts of the installation.

The partitioning already found the main SSD drive.

Since I have a few Linux installations already on this computer, I choose to replace one of them with EndeavourOS.

In particular, I select the Manjaro Linux (21.2rc) checkbox to replace that installation (see the “Current:” and the “After:” parts):

Since I chose the “Online” installer, I can now select the software to install. Note the printing support software:

I also decide to install both KDE and GNOME (maybe I’ll blog in the future about the coexistence of the two desktop environments). That’s another exciting feature of EndeavourOS: it lets you install as many desktop environments as you want right during the installation. Other distributions typically only provide ISOs for specific desktop environments (the so-called “spins”).

If you expand the nodes in the tree, you can see the installed software for each DE. I can anticipate that for both KDE and GNOME, the installed programs are not so many.

Time to look at the summary, and then we’re good to start the installation, which takes only a few minutes on my computer.

Review

As I have already anticipated, I’m enjoying this distribution so far.

I mainly use the KDE Plasma desktop. In this distribution, Plasma looks very close to vanilla Plasma. It does not come with much preinstalled KDE software, but all the essential KDE applications are there.

I had to install a few additional KDE applications I like to have. The corresponding packages are plasma-systemmonitor, kdeplasma-addons (for other task switchers), and kcalc.
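
Installing them is a single pacman command:

    sudo pacman -S plasma-systemmonitor kdeplasma-addons kcalc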

Of course, pacman is already installed, but yay is preinstalled as well.

Since I like the GUI front-end pamac, I had to install that manually:
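
Since pamac is not in the official Arch repositories, it comes from the AUR; with yay, something like this should do (pamac-aur is the AUR package I used, but there are a few variants):

    yay -S pamac-aur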

Remember that, besides an EndeavourOS repository, everything else comes from the official Arch repositories.

EndeavourOS ships with the latest Linux kernel 5.15, and on my computers, it works like a charm.

The “Welcome” application automatically appears when you log in, and it provides a few helpful buttons for updating the mirrors, updating the packages, and configuring package cache cleaning:

For updating the software packages, yay will start in a terminal window. Indeed, EndeavourOS defines itself as a “terminal-centric distro.”
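
As far as I can tell, that boils down to the usual full system upgrade, which you can also run yourself at any time:

    yay -Syu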

Speaking about software updates, you get a system tray notification when they are available:

But unfortunately, clicking on that does not do anything: you have to update the software manually (e.g., by using the above-mentioned “Welcome” app).

Another minor defect (if I have to find defects) is the empty icon on the panel: it refers to the KDE “Discover” application, which is not installed by default. That is confusing, and the installation should probably have taken care of not putting it there in the first place.

Besides that, I enjoy the KDE Plasma experience provided by EndeavourOS.

Concerning GNOME, again, the installed software is minimal, but you get the essential software, including Gnome Tweaks. No specific GNOME extensions are provided, but you can install them yourself. In the end, it’s vanilla GNOME.

All in all, I guess I’ll be using EndeavourOS as my daily driver in the next few days!

I hope you try EndeavourOS yourself and enjoy it as much as I do 🙂

Using the Unison File Synchronizer on macOS

For ages, I’ve been using the excellent Unison file synchronizer to synchronize my directories across several Linux machines, using the SSH protocol. I love it! 🙂

Unison gives you complete control over the synchronization, and, most of all, it’s a two-way synchronizer.

Quoting from its home page:

Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.

On Linux, I never experienced problems with Unison, especially from the installation point of view: it’s available in most distributions’ package managers. If that’s not the case, you can download a binary package from https://github.com/bcpierce00/unison/releases.

However, I had never used Unison on a macOS computer, so today, I decided to try it.

Please, keep in mind that you must use the same version of Unison on all the computers you want to synchronize (at least, I seem to understand, the major.minor version numbers must be the same on all computers, and this also includes the version of OCaml, on which Unison relies).

For macOS, you go to https://github.com/bcpierce00/unison/releases, and you download the .app.tar.gz file according to the Unison (and OCaml) version you need. The other macOS .tar.gz archives, without the .app, contain the command-line binary and a GTK UI binary, which, however, requires the GTK libraries to be already installed on your system and, to be honest, I have no idea how to do that in a compatible way. On the contrary, the .app.tar.gz contains the macOS application, which, I seem to understand, is self-contained.

By the way, there’s also a brew package for Unison, but that only provides the command-line application, so you won’t get any UI. The UI is quite helpful, especially when you want complete control over the elements to be synchronized and want a last chance to select or deselect files before the synchronization starts; it is also helpful when you have conflicts to resolve.

Then, you extract the archive, and you need to run this command (assuming you have extracted it in the Downloads folder):
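
This is presumably the removal of the quarantine attribute that macOS sets on downloaded files; the path below assumes the app was extracted directly into Downloads:

    xattr -dr com.apple.quarantine ~/Downloads/Unison.app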

otherwise, macOS will complain (with an unhelpful error message about a damaged app) since it does not recognize the archive provider.

Move the Unison.app into your Applications, and you’re good to go, assuming you already know how to use Unison.

The first time you run the app, it will ask you to also install the command-line version of Unison, which is helpful as well:

And here’s a screenshot showing the files that are going to be synchronized in an example of mine (from the direction of the arrows, you can see that this is a two-way synchronization):

I find the Linux UI of Unison much simpler to understand and deal with, but maybe that’s because I’ve been using it for ages, and I still do.

Happy synchronization! 🙂

My new book on TDD, Build Automation and Continuous Integration

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)”, and it can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code), using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can test your software manually, but manual tests require lots of manual work and are error prone.

On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use Maven, one of the mainstream tools for build automation in the Java world.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where, each time we commit a change to our Git repository, the CI server will automatically run the build process, compiling all the code, running all the tests, and possibly creating additional reports concerning the quality of our code and of our tests.

The quality of tests can be measured in terms of a few metrics, using code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such static analysis we will use SonarQube and its free cloud version, SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a virtualization platform based on containers that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers will take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation, and on the CI server.

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂

Eclipse tested with a few Gnome themes

In this small blog post, I’ll show what Eclipse looks like in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, does not make Eclipse look very nice: see the icons, which are “packed” and “compressed” in the toolbar, not to mention the cut “Filter Files” textbox in the “Git Staging” view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from the Ubuntu default theme 😉

Dealing with Technical Debt with Sonarqube: a case study with Xsemantics

I recently started to play with Sonarqube to reduce “technical debt” and hopefully improve code quality. I’d like to report on my experience using Sonarqube to analyze Xsemantics, a DSL for writing rule systems (e.g., type systems) for Xtext languages.

I was already using the Jenkins Continuous Integration server, and my builds already ran Findbugs and Jacoco, so such software was already being analyzed; however, Sonarqube brings new analysis rules for Java programs, integrates the results from Findbugs and Jacoco, and aggregates all the code quality results in a website.

In spite of the Jenkins builds, Sonarqube detected some issues when I started:

xsemantics sonarqube 1

First of all, I had to exclude the src-gen and emf-gen directories (the former is where Xtext generates all its artifacts, and the latter is where Xcore generates the EMF model files): since these are generated files, I did not want to make them part of the analysis. I’ve done such an exclusion with a property in the main pom.xml (for readability, I split it into lines):
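
The property in question is sonar.exclusions; as a sketch, the equivalent passed on the command line (in the actual build it is defined as a property in the pom.xml, as said above) would look something like this, with the patterns shown here being only illustrative:

    mvn sonar:sonar \
      -Dsonar.exclusions='**/src-gen/**,**/emf-gen/**'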

Note that, for the moment, I’m also excluding tests from the analysis… it is considered best practice to analyse tests as well (and I have many of them), but I wanted to concentrate on the code first. I also excluded other Java files for which issues were reported, like the Xtext Guice modules, due to the wildcards in the method signatures… I have to live with them anyway 🙂

After that, the number of issues decreased a little, but there were still some issues to fix; most of them were easy, basically due to Java conventions I hadn’t followed (e.g., names of fields and methods, or even names of type parameters).

One of the major ones was due to a wrong implementation of the clone method (“super.clone() should be called when overriding Object.clone()”, https://github.com/LorenzoBettini/xsemantics/issues/34).

Another thing that I had never considered was dependency cycles among Java packages and files, which Sonarqube reports. Luckily, there were only a few of them in Xsemantics, and the hardest part was reading the Dependency Structure Matrix, but in the end I managed to remove them (there must be nothing in the upper triangle for there to be no cycle):

xsemantics sonarqube 2

To solve the cycles I had to change something in the runtime API (http://xsemantics.sourceforge.net/snapshots-for-xsemantics-1-6-for-xtext-2-7/) but it was basically a matter of moving Java classes into different packages.

Then came the last major issue: Duplicated Code!!! This issue alone was estimated at 13 days of technical debt! And most of the duplicated code was in the model inferrer (a concept from Xbase). Moreover, such an inferrer is written in Xtend, a cleaner Java dialect, and the Xtend compiler then generates Java code. Thus, Sonarqube analyses the generated Java code, and the detected duplicate code blocks are in the Java code. This means that it takes some time to understand the corresponding original Xtend code. That’s not impossible, since Xtend generates clean Java code, but it surely adds some work 🙂

Before I started to remove duplicated code (around 80 blocks in the generated Java code), the Xtend inferrer was around 1090 lines long (many parts are related to string templates for code generation), corresponding to around 2500 lines of generated Java code! After the refactoring, the Xtend inferrer was around 1045 lines long, and the generated Java code was reduced to around 2000 lines.

That explains also the reduction of lines of code and complexity:

xsemantics sonarqube 3

But now technical debt is 0 🙂

xsemantics sonarqube 4

And it’s nice to look at this dashboard 🙂

xsemantics sonarqube 5

By the way, I also had to disable some issues I did not agree with (tabulation characters) and suppress the reported issues about method name conventions in a specific file (because methods that start with the underscore character _ have a specific meaning in Xtext/Xtend). Instead of disabling them on the Sonarqube web interface, I preferred to disable them using properties in the pom file so that the setup works across different Sonarqube installations (e.g., I also have a local Sonarqube instance on my machine for quick experiments). Such multi-criteria properties are not officially supported in the Sonar invocation (e.g., through the sonar runner or via Maven), but I found a workaround: http://stackoverflow.com/questions/21825469/configure-sonar-sonar-issue-ignore-multicriteria-through-maven (but be careful: it is considered a hack, as reported in the mailing list: http://sonarqube.15.x6.nabble.com/sonar-issue-ignore-multicriteria-td5021722.html):
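
As a sketch of what such multicriteria properties look like when passed on the command line (in the pom they go into the properties section, as described in the linked posts; "e1" is just a label, and the rule key and resource pattern below are only illustrative):

    mvn sonar:sonar \
      -Dsonar.issue.ignore.multicriteria=e1 \
      -Dsonar.issue.ignore.multicriteria.e1.ruleKey=squid:S00100 \
      '-Dsonar.issue.ignore.multicriteria.e1.resourceKey=**/*JvmModelInferrer.java'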

That’s all! I strongly suggest giving Sonarqube a try! 🙂