Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni “Giuseppe Parenti”, Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master’s Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in “Logics and Theoretical Computer Science” (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain Specific Languages. He is also the author of about 90 research papers published in international conferences and journals.

Getting Started with Gnome Boxes

Gnome Boxes is meant to be an easy and quick way to create and run Qemu virtual machines.

I have already blogged about Qemu with KVM Virtual Machine Manager. These tools are powerful but require time to get familiar with them and tweak them correctly (I tried to provide some hints in my previous posts about these tools).

Today, I’ll write about Gnome Boxes. It is indeed easier to use than the above tools, and since it’s based on Qemu, the virtual machines it creates can also be used by virt-manager and vice versa (more on that at the end of the post).

I will use the version of Gnome Boxes that comes installed by default in Fedora 39.

The first time you start it, you’re greeted by a few introductory pictures about its main features:

Concerning “Express Installation”, I still haven’t tried that, and I’m not interested in that feature anyway.

This “Easy Downloads” is an excellent feature that I’ll use for the supported OSes. In that respect, Quickemu provides many more supported systems. For example, Gnome Boxes does not provide an easy download for my favorite distro, EndeavourOS.

That is also an excellent feature that worked in all my experiments.

Here’s the main interface, which, like much Gnome software, is minimal: you have the “+” toolbar icon in the top-left corner to create a new virtual machine, either from an existing ISO file (or even an existing .qcow2 machine, which I’ve never tested) or by downloading one of the supported ISOs.

Let’s start installing a few virtual machines and see how it goes.

Installing Fedora 39

I’ll use the option to download Fedora Workstation 39 directly from Boxes (you have to wait a few seconds for the list to get populated):

See the download feedback on the other side of the window. You might want to explore those menus, especially when a virtual machine has started.

The ISO will be saved in your “Downloads” folder when finished.

Then, the main dialog for configuring the virtual machines appears; these are the default settings:

I changed the Firmware to UEFI and gave the machine more RAM and disk (remember that the machine is stored in a qcow2 file, so the disk size you assign is not allocated immediately: only the space actually used takes up room on your hard disk):

Let’s press “Create,” and the VM is started.

Take note of the following shortcut:

REMEMBER: you can use left Ctrl + Alt to “ungrab” the keyboard from the virtual machine and give it back to your host (or just click on anything on your host desktop if Gnome Boxes is not full screen).

Now, you can install Fedora as usual (I’m not covering this here).

While waiting for the installation to finish, you might want to take note of the menus in the top-right corner, available when a virtual machine is in execution, to send special key combinations to the virtual machine…

… or for other tasks.

Once the installation has finished, you can restart the virtual machine, e.g., using the Gnome menu.

Note that when a virtual machine is running, you can see its executing thumbnail in the main Gnome Boxes window:

Click on that to see the virtual machine itself.

Note that since “spice agent” is automatically installed when installing Fedora (it detects you’re installing it on a virtual machine), the virtual machine screen is automatically resized and scaled according to the size of the Gnome Boxes window. For example, this is the Gnome Boxes window full screen:

and this when slightly resized to be smaller (note how the virtual machine desktop has been automatically resized):

You have bidirectional copy-and-paste between the virtual machine and the host.

You can also drag some files (not directories) from the host to the virtual machine: in the virtual machine, they will appear in the “Downloads” folder.

Let’s shut down the machine, right-click on its icon in the Gnome Boxes window, and explore its “Preferences” (note that you can also create a full clone of the virtual machine):

In the “Resources” tab, I can enable 3D acceleration so the virtual machine will use my computer’s graphics card instead of relying on “Software rendering.” This allows fluid and smooth 3D effects in the virtual machine’s desktop environment.

Then, we can share devices and folders with the machine (I’m not covering that; note that you must install additional software in the virtual machine; in the Fedora virtual machine, “spice-webdav” has already been installed):

Then, we have the tab to create snapshots of the machine for “going back in time” in the virtual machine (note that this is different from cloning, which will give a full and independent machine clone). For example, I create a snapshot and give it a meaningful description:

Remember that, later, if you restore a snapshot, you lose everything you’ve done in the virtual machine since you created the snapshot.

NOTE: when you start the Fedora virtual machine after the first boot of the installed system, chances are that you will have to wait for it to download updates and install them on the next boot.

With 3D acceleration enabled, if we boot the virtual machine, the Gnome effects will be enabled and fluid (depending on your host CPU and graphics card, of course). If we inspect the system details of the virtual machine through the Gnome “About,” we can see that the graphics card is virgl. Without 3D acceleration, you would see “Software rendering”:

WARNING: in my experiments, starting the Fedora virtual machine with 3d acceleration sometimes hangs. I have to quit Gnome Boxes, reopen it, and then the machine correctly starts.

Installing EndeavourOS

As said above, this is not one of the automatically downloadable ISOs, so I first had to download the ISO and then choose the other menu entry to start from an existing ISO:

Note that the “Operating System” must be specified: use the search box below to select the closest one, in this case, “Arch Linux”:

As before, I increased RAM and disk size.

In the first experiment, I selected “UEFI”, but then the virtual machine did not start at all due to a missing UEFI:

I did not investigate further… I deleted the virtual machine and started from scratch with the good old BIOS, and this time, the virtual machine started:

I then installed EndeavourOS GNOME, and after shutting down the installed virtual machine, I could select “3D acceleration”.

EndeavourOS GNOME also works fine in the virtual machine: fluid 3d effects, automatic resolution resize, bidirectional shared clipboard, and drag and drop in the machine (the “spice agent” has also been automatically installed).

Unfortunately, the host touchpad scroll is not recognized in the virtual machine. Since it works in the Fedora and Ubuntu VMs, I guess it’s just a problem with the new GNOME 46 version already landing in Arch.

Installing Ubuntu 23.10

Ubuntu 23.10 is one of the supported ISOs, so I let Gnome Boxes download the ISO for me, as I’ve done above for Fedora.

The initial configuration of the Virtual Machine is the same as that of Fedora.

I did the installation as usual and ensured the virtual machine boots correctly.

However, 3D acceleration is not available for Ubuntu. You can find several issues about that in the Gnome Boxes issue tracker, in particular this one: https://gitlab.gnome.org/GNOME/gnome-boxes/-/issues/709; I seem to understand it’s an intentional choice because Ubuntu used to give problems when started with 3D acceleration.

And the system information in the virtual machine stays with “Software rendering”:

This means, of course, no fluid 3d effects 🙁

Accessing Boxes with Virtual Machine Manager

I’ve also installed “virt-manager” in my Fedora host.

You can access Gnome Boxes virtual machines with virt-manager if you want full configuration options. As you’ve seen, Gnome Boxes is all about simplicity and exposes a very limited number of configuration options for the Qemu virtual machines.

First, we must create a new “connection” in virt-manager: by default, virt-manager relies on a system session, while Gnome Boxes uses a user session. So, let’s open Virtual Machine Manager, select File > Add New Connection, and select user session in the “Hypervisor”:

After pressing “Connect,” we should see all the virtual machines we created with Gnome Boxes; in particular, I’m interested in the Ubuntu one:

Now, we can configure the machine using the full configuration options of virt-manager. Of course, I’m interested in enabling 3d, so first I set the “Spice” section as follows:

And then the video section accordingly:

And if I start the virtual machine from virt-manager, 3d acceleration works!

But… if I open the configured machine with Gnome Boxes, 3D acceleration is automatically removed: Gnome Boxes silently reverts the configuration since it insists on considering Ubuntu incompatible with 3D acceleration. You can verify that by reopening the machine details with virt-manager.

Summary

While it is true that Gnome Boxes is a very easy and quick solution for creating Qemu virtual machines with sensible defaults, it is still too rigid: it even resets your manual configurations if they don’t meet its expectations. If you’re already familiar with virt-manager, using Gnome Boxes might not be worthwhile except for very quick experiments with the supported operating systems. However, I still prefer Gnome Boxes to Quickemu for the moment, since at least the shared clipboard works while keeping good 3D acceleration performance (when supported); Quickemu forces you to choose one of the two.

Happy virtualization! 😉

Docker in Fedora 39

I haven’t been using Fedora for a few years but wanted to try it again (Fedora 39). In the past, I had a few problems with Docker in Fedora (see Fixing Docker problems in Fedora). Some things have slightly improved, while others haven’t changed. For sure, I gave up on the official Docker packages in Fedora: I’m using “moby-engine” instead of “docker-ce”, together with “docker-compose”. In the past, Moby mostly worked out of the box, while Docker CE required a few tweaks. I don’t think it’s worthwhile to keep tweaking it, so I’ll go straight with Moby.

This is a docker-compose file that gave me troubles in the past in Fedora:

Save it to “docker-compose.yaml” and run “docker-compose up,” and it works fine! (In the past, with Docker CE, this used to hang and eat all the memory).

Concerning the Testcontainers library, I’ll use this example project (taken from my TDD book): https://github.com/LorenzoBettini/it-docker-mongo-example. I run “mvn verify”.

When it comes to using Testcontainers, I get this error:

However, the error goes away if I update Testcontainers to a more recent version (previously, I was using 1.16.3):

So, that works!

Let’s try now by telling Docker to use another folder (for the images and containers) on another disk.

First, stop Docker:
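With systemd, that's simply (the service is called docker also when using moby-engine):

```
sudo systemctl stop docker
```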

Then, change the file “/etc/sysconfig/docker” (remember, I’m using “moby-engine”, which uses this file; these are its contents):

I’m adding this last line to point to a directory that mounts a partition on another disk:
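The added option looks something like this (the flag is dockerd’s --data-root; the path below is just a placeholder for the mount point on the other disk):

```
# appended inside the OPTIONS="..." value of /etc/sysconfig/docker
--data-root /path/on/other/disk
```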

And let’s reload Docker:
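That is, start the service again so that it picks up the new options:

```
sudo systemctl start docker
```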

Now, the “docker-compose.yaml” file above gives this error:

And the Maven project above fails with something like that:

This problem is due to SELinux (that figures!). If you temporarily disable SELinux:
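For instance, by switching it to permissive mode until the next reboot:

```
sudo setenforce 0
```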

Everything succeeds again!

Thus, Moby in Fedora 39 works out of the box with the default configuration. If you want images and containers in another mounted directory (not handled by SELinux), you have to disable SELinux.

Better than nothing! 😉

xdg-desktop-portal-gnome strikes again!

I haven’t used Fedora for a few years but wanted to try it again (Fedora 39).

I was immediately hit by a problem, which is not due to Fedora itself, I seem to understand.

I’ll use this example project (taken from my TDD book): https://github.com/LorenzoBettini/demo-attsw. It is a simple Java Swing application whose UI tests are written with AssertJ Swing. I run “mvn verify”, and most of the UI tests fail (when the AssertJ Swing bot tries to interact with the application window, it mostly gets the position wrong). Moreover, and this is the cause of the failure, I get this dialog popping up from Gnome, “Allow remote interaction”:

Even though I click allow and share, the tests keep failing with the same behavior.

The problem of this dialog popping up, not related to UI tests, is already reported on the Fedora forum: https://discussion.fedoraproject.org/t/f39-opens-a-window-with-remote-desktop-allow-remote-interaction/100228. And the forum points in the right direction: the problem is related to xdg-desktop-portal-gnome! (https://gitlab.gnome.org/GNOME/xdg-desktop-portal-gnome/-/issues/114) This portal has already created several problems in the past, slowing down several applications. I hadn’t realized this was installed automatically in Fedora. By the way, xdg-desktop-portal-gtk is also installed, so I tried to remove the gnome version:
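The exact command isn’t reproduced here; one way to remove the package while bypassing the dependency/protected-package check that blocks dnf (see below) is the low-level rpm command — treat this as a sketch and use it at your own risk:

```
sudo rpm -e --nodeps xdg-desktop-portal-gnome
```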

By the way, removing the package using “dnf” does not work because it complains that this is a requirement of “gnome-shell” (“Problem: The operation would result in removing the following protected packages: gnome-shell”).

After the removal, the dialog no longer pops up, and my UI tests are back to green! 🙂

Another solution to the problem, if you don’t want to remove the gnome portal package, is to log in to the X11 session instead of Wayland. But I prefer the more radical solution. 😉

Nerd Fonts in KDE Plasma 6

As I have written, I’m using “Oh-My-Zsh” with Nerd icons and fonts. It has always worked perfectly in KDE.

A few days ago, KDE Plasma 6 landed in Arch (and thus, EndeavourOS), and after upgrading, the Nerd fonts were not displayed in Konsole and Kate (and, I guess, in other KDE applications).

For example, before upgrading, Konsole looked like this:

After upgrading, all the nice Nerd fonts were gone:

I reported that on the EndeavourOS forum and linked the corresponding KDE bug. It looks like it is due to an intentional change in Qt6. The Qt6 issue is https://bugreports.qt.io/browse/QTBUG-110502, and it was fixed by this commit: https://code.qt.io/cgit/qt/qtbase.git/commit/?id=a44b6950268214d802bc7ce7df09975261263e31, which introduced the new behavior and broke the old one.

Long story short, before, when a font did not provide support for a “glyph”, that missing glyph was looked up in other fonts. After that change, that does not happen anymore.

The default monospace font in KDE is “Hack”. I have installed other Nerd fonts, but not the “Hack Nerd” version, so what worked before the upgrade no longer works.

To fix the problem, I installed the Nerd font, e.g., for Hack (the default KDE font):
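On Arch, the Hack Nerd Font is packaged; the package name below is what I believe it to be, so double-check it on your system (e.g., with pacman -Ss):

```
sudo pacman -S ttf-hack-nerd
```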

Then, open “System Settings” -> “Fonts”, and change the “Fixed width” font from the default “Hack 10pt” to the corresponding Nerd font:

Restart Konsole, and the Nerd fonts are back:

Note that this works if your Konsole profile does not have a custom font set; if you use another font, you’ll have to use the Nerd font corresponding to that font.

For example, I used JetBrains fonts in another Konsole profile, but I hadn’t installed the Nerd version:

I installed the Nerd version and changed the font from simply “JetBrains” to the Nerd version, and this profile was also fixed:

The same holds for other KDE applications like Kate. If you haven’t set a custom font, then the Nerd version of Hack will be automatically used. Otherwise, you have to use the Nerd version of the specified font.

Note that other non-Qt applications will not be affected by this change. For example, for Alacritty, I have this section in its configuration:
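The original snippet isn’t shown here; assuming the TOML configuration used by recent Alacritty versions, the relevant section looks roughly like this:

```toml
# ~/.config/alacritty/alacritty.toml (sketch)
[font]
size = 11

[font.normal]
family = "JetBrains Mono"
style = "Regular"
```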

So, I simply specify JetBrains, not its Nerd version. Still, when icons and other glyphs are to be rendered, they are automatically taken from any Nerd font providing those glyphs:

Good-looking Nerd fonts are back! 🙂

KDE Plasma 6 Desktop Cube in Arch Linux

At last, KDE Plasma 6 has landed in Arch Linux (and in EndeavourOS, of course), and you’re eager to try the return of the desktop effect “Desktop Cube”! 🙂

You try to enable that in the System Settings “Desktop Effects.” You try the default shortcut “Meta + C”, and… it doesn’t work 🙁

Oh, they say you need at least 4 virtual desktops! So you make sure you have 4 virtual desktops, you try again, and… it still doesn’t work 🙁

Actually, 3 virtual desktops are enough. What you really need is this package, so make sure you install that and reboot:
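If I remember correctly, the missing piece is the QtQuick3D runtime the new cube effect is built on; treat the package name as an assumption and verify it on your system:

```
sudo pacman -S qt6-quick3d
```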

Now, the Desktop Cube will finally work! 😀

Here’s that in action with 3 virtual desktops:

Here with 5 virtual desktops:

Useless but good looking! 😉

A quick look at Quickemu

I have already blogged in the past about kvm/qemu.

Using the “Virtual Machine Manager” might not be straightforward initially. I’ve heard about Quickemu, which has this goal:

Quickly create and run optimised Windows, macOS and Linux desktop virtual machines.

In this blog post, I’m describing my experience with quickemu in Arch. The package is available from AUR:
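For example, with yay:

```
yay -S quickemu
```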

You also need the package providing the Qemu desktop components:
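On Arch, that should be the qemu-desktop package (the exact package set may vary with your installation):

```
sudo pacman -S qemu-desktop
```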

Then, you use the program “quickget” to get the ISO of one of the many distributions handled by quickemu. Let’s run it without arguments:

Let’s try Zorin:

We also need to specify its version:
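Putting these steps together, the session looks roughly like this (the release number is just an example; quickget itself tells you the valid ones):

```
quickget              # with no arguments, lists the supported operating systems
quickget zorin        # lists the available Zorin releases/editions
quickget zorin 17     # downloads the ISO and creates the .conf file
```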

Now, it starts downloading into the current directory, creating a subdirectory. For example, I’m running that from a mounted drive. Here’s some output and some commands to show the layout of the directories and the created configuration file to start the virtual machine with the mounted ISO:

Let’s start it:
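Assuming the configuration file generated by quickget (its exact name depends on the OS and release you picked):

```
quickemu --vm zorin-17.conf
```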

Note the layout after starting the machine:

Thus, it should be easy to put it on an external drive.

Here’s the machine starting with the live ISO:

I pressed Ctrl+C to cancel md5sum checks.

Here’s Zorin starting (audio is working):

The screen resized automatically and became bigger.

I started the installation, mainly choosing default options (e.g., erase the entire disk). And here’s the login screen after the installation finished and the machine rebooted:

The impressive thing is that animations are really fluid and smooth in the virtual machine: you almost don’t realize you’re using a virtual machine:

Here’s the disk layout and memory (on this computer, I have 16 GB, and quickemu automatically selected half the memory for the virtual machine):

Then, I tried Garuda:

I tried the “Garuda KDE Dr460nized”:

I edited the conf file to increase the disk size:
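That is, editing the generated .conf and setting a larger disk_size (the value below is just an example):

```
disk_size="40G"
```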

Let’s start it:

Even in this case, the desktop automatically resizes if I resize the Qemu window.

The installation went smoothly and fast in this case. The login screen is full of nice blurry effects:

On the first login, you’re welcomed by the Garuda assistant to perform some initial tasks.

Here’s the information about the installed system:

Animations and effects are smooth, e.g., the “Overview”:

To summarize, with quickemu, creating a new Qemu virtual machine is easy, starting from one of the many managed Linux distributions. It also works for macOS and Windows, though I haven’t tried those.

Moreover, the performance of the virtual machine is fantastic. The virtual machine seems as smooth as the currently running system.

The only drawback I’ve experienced is that, with the default configuration, the shared clipboard does not work: you must start the virtual machine with the spice display (“--display spice”). For example:
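Something along these lines (again, the .conf name is whatever quickget generated):

```
quickemu --vm zorin-17.conf --display spice
```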

Remember to install the spice agent in the virtual machine. In the two examples above, the installed virtual machine already had the agent installed automatically during the installation.

However, at least in my experiments, the shared clipboard does not work anyway when the host is running a Wayland session. Moreover, with the “spice” display, the virtual machine’s performance decreases significantly (see my reported issue: https://github.com/quickemu-project/quickemu/issues/933). To easily paste commands into the virtual machine, it is probably better to install an SSH server in the virtual machine and connect to it via SSH.

In any case, this quick look at Quickemu impressed me a lot. 🙂

Hyprland and KDE Applications

Updated on 8 March 2024 (for Plasma 6)

This is another post of the Hyprland series.

I am a big fan of KDE applications for many everyday tasks. Since Hyprland is not a full desktop environment, you can choose which applications to install for text editing, images, viewers, file managers, etc.

When I started using Hyprland, I installed Thunar as a file manager; then I switched to Nemo because it’s more powerful, then to Nautilus (but it doesn’t look right in Hyprland). Finally, I decided to use Dolphin since I already used several KDE applications in Hyprland.

This is the list of Arch packages I install in Hyprland:

  • kate (for text editing)
  • gwenview (for quick image editing)
  • konsole (as a terminal, though I also still use Alacritty)
  • breeze-icons (to have nice icons in KDE applications)
  • kvantum (for Kate color schemes)
  • okular (for a better PDF viewer and annotator)
  • kcalc (as a calculator)
  • dolphin (for a powerful file manager)
  • dolphin-plugins (e.g., for Dropbox folder overlay)
  • ark (for Archive management, including Dolphin context menus)

Note that some of the above applications (namely, Dolphin and Gwenview) have “baloo” (the KDE file indexer and searcher) as a dependency. In Hyprland, that’s pretty useless, and since it takes some resources for indexing, it’s better to disable it for good right after installing the above packages:
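Before Plasma 6, that meant using balooctl (the purge step also clears any index already created):

```
balooctl disable
balooctl purge
```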

UPDATE (8 March): After the update to KDE Plasma 6, the name of the baloo command has changed:
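With Plasma 6, the tool is now called balooctl6, so the equivalent is:

```
balooctl6 disable
balooctl6 purge
```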

Let’s look at a few features of KDE applications that I like.

Concerning Dolphin, it has several powerful features, too many to list here 😉 I’ll just mention the better out-of-the-box renaming of multiple files. This feature requires additional work in Thunar or Nemo, and I never liked the final result.

Let’s see the enabling of the Dropbox plugin (see the installed “dolphin-plugins” above):

After restarting Dolphin, you’ll get the nice overlay on the “Dropbox” folder:

Another reason I like KDE applications is that they have a built-in HUD (Heads-Up Display), that is, a global searchable menu: use the keyboard shortcut Ctrl + Alt + i and you get the menu; start typing to quickly reach the intended item (in this example, I quickly switch to a custom Konsole profile):

You may want to create or change the keybinding for the file manager; in my case it is:
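In hyprland.conf, something along these lines (the modifier variable and the key are my assumptions; adapt them to your configuration):

```
# ~/.config/hypr/hyprland.conf
bind = $mainMod, E, exec, dolphin
```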

Moreover, you’ll have to update the “~/.config/mimeapps.list” file accordingly, that is, specify this line and replace the corresponding existing ones:
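That is, pointing the directory MIME type at Dolphin; a sketch of the relevant part:

```
[Default Applications]
inode/directory=org.kde.dolphin.desktop
```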

Concerning theming, some applications like Kate allow you to choose the color scheme. For example, since we installed Kvantum, we can choose the color scheme in Kate with “Settings” -> “Window Color Scheme”.

Konsole has profiles that you can create and customize.

On the other hand, Dolphin has no such functionality, so we should theme all KDE/Qt applications. That’s the subject of another possible future post.

Enjoy your KDE applications on Hyprland as well! 🙂

Running Maven from Java: the Maven Embedder

This is probably the beginning of a series of articles about testing Maven plugins.

I’ll start with the Maven Embedder, which allows you to run an embedded Maven from a Java program. Note that we’re not simply running a locally installed Maven binary from a Java program; we run Maven provided as a Java library. So, we’re not forking any process.

Whether this is useful or not for your integration tests is your decision 😉

The source code of the example used in this tutorial can be found here: https://github.com/LorenzoBettini/maven-embedder-example/

I like to use the Maven Embedder when using the Maven Verifier Component (described in another blog post). Since it’s not trivial to get the dependencies to run the Maven Embedder properly, I decided to write this tutorial, where I’ll show a basic Java class running the Maven Embedder and a few JUnit tests that use this Java class to build (with the embedded Maven) a test Maven project.

This is the website of the Maven Embedder and its description:

https://maven.apache.org/ref/3.9.6/maven-embedder/

Maven embeddable component, with CLI and logging support.

Remember: this post will NOT describe integration testing for Maven plugins; however, getting to know the Maven Embedder in a simpler context was helpful for me.

Let’s create a simple Java Maven project with the quickstart archetype:
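For reference, the standard archetype invocation looks like this (groupId and artifactId are just examples, not necessarily the ones used in the linked repository):

```
mvn archetype:generate \
  -DgroupId=com.example \
  -DartifactId=maven-embedder-example \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
```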

Let’s change the Java version in the POM to Java 17, use a more recent version of JUnit, and add another test dependency we’ll use later:
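A minimal sketch of the relevant POM fragments (the versions are only indicative, and the additional test dependency mentioned above is not reproduced here):

```xml
<properties>
  <maven.compiler.release>17</maven.compiler.release>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```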

Let’s import the Maven Java project into Eclipse (assuming you have m2e installed in Eclipse).

Let’s add the dependencies for the Maven Embedder:
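This is the part that is easy to get wrong. A typical working set looks like the following (artifact and version choices here are my assumptions; check the Maven Embedder page for the authoritative list):

```xml
<dependency>
  <groupId>org.apache.maven</groupId>
  <artifactId>maven-embedder</artifactId>
  <version>3.9.6</version>
</dependency>
<dependency>
  <groupId>org.apache.maven.resolver</groupId>
  <artifactId>maven-resolver-connector-basic</artifactId>
  <version>1.9.18</version>
</dependency>
<dependency>
  <groupId>org.apache.maven.resolver</groupId>
  <artifactId>maven-resolver-transport-http</artifactId>
  <version>1.9.18</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>2.0.9</version>
</dependency>
```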

Getting all the needed dependencies right for the Maven Embedder is not trivial due to the dynamic nature of Maven components and dependency injection. The requirements are properly documented above.

Let’s replace the “App.java” inside “src/main/java/” with this Java class:
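The class in the repository is not reproduced verbatim here; a minimal sketch based on MavenCli (the class name is my own) looks like this:

```java
import org.apache.maven.cli.MavenCli;

public class MavenEmbedderRunner {

	public int executeMaven(String baseDirectory, String... args) {
		// MavenCli requires this system property to be set
		System.setProperty("maven.multiModuleProjectDirectory", baseDirectory);
		MavenCli cli = new MavenCli();
		// run the embedded Maven in the given base directory,
		// logging on the standard output and error streams
		return cli.doMain(args, baseDirectory, System.out, System.err);
	}
}
```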

That’s just a simple example of using the Maven Embedder. We rely on its “doMain” method that takes the arguments to pass to the embedded Maven, the base directory from where we want to launch the embedded Maven, and the standard output/error where Maven will log all its information. In a more advanced scenario, we could store the logging in a file instead of the console by passing the proper “PrintStream” streams acting on files.

Let’s create the folder “src/test/resources” (it will be used by default as a source folder in Eclipse); this is where we’ll store the test Maven project to build with the Maven Embedder.

Inside that folder, let’s create another Maven project (remember, this will be used only for testing purposes: we’ll use the Maven Embedder to build that project from a JUnit test):

We rely on the fact that the contents of “src/test/resources” are automatically copied recursively into the “target/test-classes” folder. Eclipse and m2e will take care of such a copy; during the Maven build, there’s a dedicated phase (coming before the phase “test”) that performs the copy: “process-test-resources”.

Let’s replace the “AppTest.java” inside “src/test/java/” with this JUnit class:
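Again, a sketch rather than the exact repository code (class and folder names are my assumptions, and the directory-wiping helper assumes commons-io is on the test classpath):

```java
import static org.junit.Assert.assertEquals;

import java.io.File;

import org.apache.commons.io.FileUtils;
import org.junit.Test;

public class MavenEmbedderRunnerTest {

	private static final String TEST_PROJECT = "target/test-classes/test-maven-project";

	@Test
	public void testCleanVerify() {
		// uses the user's standard local Maven repository
		int result = new MavenEmbedderRunner()
			.executeMaven(TEST_PROJECT, "clean", "verify");
		assertEquals(0, result);
	}

	@Test
	public void testCleanVerifyWithIsolatedLocalRepository() throws Exception {
		// always start from an empty, test-specific, local repository
		File localRepo = new File("target/local-repo");
		FileUtils.deleteDirectory(localRepo);
		int result = new MavenEmbedderRunner().executeMaven(TEST_PROJECT,
			"-Dmaven.repo.local=" + localRepo.getAbsolutePath(),
			"clean", "verify");
		assertEquals(0, result);
	}
}
```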

The first test is simpler: it runs the embedded Maven with the goals “clean” and “verify” on the test project we created above. The second one is more oriented to a proper integration test since it also passes the standard system property to tell Maven to use another local repository (not the default one “~/.m2/repository”). In such a test, we use a temporary local repository inside the target folder and always wipe its contents before the test. This way, Maven will always start with an empty local repository and download everything from scratch for building the test project in this test. On the contrary, the first test, when running the embedded Maven, will use the same local repository of your user.

The first test will be faster but will add Maven artifacts to your local Maven repository. This might be bad if you run the “install” phase on the test project because the test project artifacts will be uselessly stored in your local Maven repository.

The second test will be slower since it will always download dependencies and plugins from scratch. However, it will be completely isolated, which is good for tests and makes it more reproducible.

Note that we are not running Maven on the test project stored in “src/test/resources” to avoid cluttering the test project with generated Maven artifacts: we build the test project copied into “target/test-classes”.

In both cases, we expect success (as usual, a 0 return value means success).

In a more realistic integration test, we should also verify the presence of some generated artifacts, like the JAR and the executed tests. However, this is easier with the Maven Verifier Component, which I’ll describe in another post.

IMPORTANT: if you run these tests from Eclipse and they fail because the Embedded Maven cannot find the test project to build, run “Project -> Clean” so that Eclipse will force the copying of the test project from “src/test/resources” to “target/test-classes” directory, where the tests expect the test project. Such a copy should happen automatically, but sometimes Eclipse goes out of sync and removes the copied test resources.

If you run such tests, you’ll see the logging of the embedded Maven on the console while it builds the test project. For example, something like that (the log is actually full of additional information like the Java class of the current goal; I replaced such noise with “…” in the shown log below):

REMEMBER: this is not the output of the main project’s build; it is the embedded Maven running the build from our JUnit test on the test project.

Note that the two tests will build the same test project. In a more realistic integration test scenario, each test should build a different test project.

If you run only the second test, after it finishes you can inspect “target/test-classes” to see the results of the build (note the “local-repo” containing all the downloaded dependencies and plugins for the test project, and the generated artifacts, including test results, for the test project):

Now, you can continue experimenting with the Maven Embedder.

In the next articles, we’ll see how to use the Maven Embedder when running Maven integration tests (typically, for integration tests of Maven plugins), e.g., together with the Maven Verifier Component.

Stay tuned 🙂

Hyprland and wlogout

This is another post on the Hyprland series.

In a previous post, I showed how to create a custom button in the Waybar for wlogout, a logout menu for Wayland environments:

Remember that in Arch, this is available as an AUR package, so you have to install it through an AUR helper like yay:
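For example:

```
yay -S wlogout
```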

Of course, you can also test it by running that from the command line.

However, I hadn’t realized that when I click logout in wlogout, I always get a black screen: I don’t return to the SDDM menu.

However, that’s easy to fix, and you can also take the chance to customize its appearance.

First, you have to create its configuration layout file, e.g., by starting from the default one:
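Assuming the package installs its default layout under /etc/wlogout (adjust the path if yours differs):

```
mkdir -p ~/.config/wlogout
cp /etc/wlogout/layout ~/.config/wlogout/layout
```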

Then, we can edit the layout file we copied into “~/.config/wlogout/layout” and change this section

into
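The original snippets aren’t shown here; the gist, as far as I remember, is to make the logout action use Hyprland’s exit dispatcher, something like this (the other fields are the defaults):

```json
{
    "label" : "logout",
    "action" : "hyprctl dispatch exit",
    "text" : "Logout",
    "keybind" : "e"
}
```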

Now, logout will work fine.

Let’s also configure “Swaylock” for screen locking, as I have already shown in this previous blog post.

If you still haven’t installed that:
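On Arch, the package is in the official repositories:

```
sudo pacman -S swaylock
```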

The default screen locking uses a bright screen. Let’s make it darker:

And let’s create and edit its configuration file “~/.config/swaylock/config”; in this example, I’m going to make it “dark green”, so I’m specifying:
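For example (the color value is just my pick for a dark green):

```
color=004400
```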

By looking at its “man page”, we can see:

-c, --color <rrggbb[aa]>
Turn the screen into the given color instead of white. If -i is used, this sets the background of the image to the given color. Defaults to white (FFFFFF).

The “aa” in the hex notation above is an optional alpha value defining the color’s opacity. In my example, I’m not specifying any alpha value.

This is my complete configuration file for swaylock:
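A plausible reconstruction (the color value is my example; the option names match the flags discussed below):

```
# ~/.config/swaylock/config
color=004400
show-failed-attempts
daemonize
```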

Again, the “man page” explains these values:

-F, --show-failed-attempts
Show the current count of failed authentication attempts.

-f, --daemonize
Detach from the controlling terminal after locking.

In my Hyprland configuration file, I also use “swayidle” (the idle management daemon for Wayland):
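For example, with an exec-once line like the following (the timeout value and the lock command are my assumptions):

```
# ~/.config/hypr/hyprland.conf
exec-once = swayidle -w timeout 300 'swaylock -f' before-sleep 'swaylock -f'
```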

In my waybar configuration, I have:
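Roughly, a custom module along these lines (the module name and icon are placeholders; the actual character is a Nerd Font glyph):

```json
"custom/wlogout": {
    "format": "⏻",
    "tooltip": false,
    "on-click": "wlogout"
}
```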

Note that I’ve used a character using the installed Nerd font. Of course, you can choose anything you like. The “wlogout” menu will appear when you click on that module.

That’s all for this post 🙂

Java, Maven and Gitpod, part 2: Using the IDE

In a previous post, I showed how to start with Java and Maven in Gitpod.

This is the second post of this series, where I show a few interesting tools provided by the IDE. As I said in the previous post, this might be unrelated to Gitpod since we’re using the tools provided by Visual Studio Code and its Java extensions. However, I think the post still fits the Gitpod Java series.

Of course, this post assumes you have already followed the first post on Java and Gitpod, linked above.

We can enjoy several refactoring tools on Java projects. For example, in the “App.java”, let’s select the “Hello World!” string. We can access the available refactorings on the selected element via the context menu or by using “Ctrl+.” (Quick fix):

Let’s choose “Extract to method”:

The refactoring creates a new method returning the selected expression, and the original expression is replaced by the call to such a new method. We can use the text box to give the method a better name. (“Shift+Enter” will show a preview of the refactoring; however, if you’re used to Eclipse like me, the preview is not as visually appealing and informative as the one of Eclipse.)

Alternatively, we can accept the default name and then position the cursor on one of the “extracted” occurrences and choose F2 (“Rename symbol”) to rename the method and its references. A text box like the one above will appear to specify the name, for example, “getMessage”.

While on the method name, we can see other refactorings (“Inline” is the opposite of “Extract to method”) and actions:

Let’s choose “Change signature” and use the dialog to change a few details. For example, let’s make the method “public” (of course, that’s just an example: we could easily manually change “private” to “public”); if we haven’t renamed the method (e.g., to “getMessage”), we could do that right now with this dialog:

Let’s see what happens in case of a test failure. Now that we have a public method, we can call it by changing the test like this:
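A sketch of what the modified test can look like (it assumes the extracted method was renamed to getMessage and made public as described above; the expected string is deliberately wrong so that the test fails):

```java
@Test
public void shouldAnswerWithTrue() {
    // deliberately wrong expected value, to see how the IDE reports a failure
    assertEquals("Hello!", App.getMessage());
}
```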

Let’s run the test (e.g., by using the green arrow of the code lens):

As expected, we get the test failure; in particular, we get some information about the failure both on the editor and with an additional pop-up.

Let’s fix the test with the right expected message and re-run it (again, by using the now red cross of the code lens); this time, it should succeed.

Now that we have removed the “assertTrue”, we have an unused import in the test case. We can fix that by manually removing the import, but it’s better to use a fix from the context menu in the “Problems” tab:

Alternatively, we can select the “Organize Imports” command by pressing F1 and starting to type, or by using the corresponding shortcut “Shift+Alt+O”.

We can now enrich our project with a README.md file (exploiting the Markdown editor available in Visual Studio Code) and create a GitHub Actions workflow (again, using the YAML support, which knows about the GitHub Actions workflow schema).

For Markdown, we can also use the preview pane:

For the GitHub Actions YAML file, we can use the code completion:

That’s all for this second post. Stay tuned for the third one! 🙂

Oh My Zsh and Powerlevel10k in macOS

I know there are many blog posts on installing and configuring “Oh My Zsh” and Powerlevel10k (p10k for short) in macOS. However, they are a bit outdated, and the installation/configuration process is now much easier. So, here’s my own blog post on installing and configuring Oh My Zsh with Powerlevel10k in macOS (for Linux, I have already blogged about my Ansible playbook for installing Oh My Zsh with Powerlevel10k or the Starship prompt).

Let’s start!

First of all, install iTerm2 (because it provides a much better experience with Oh My Zsh and Powerlevel10k); either download it and install it from here https://iterm2.com/downloads.html or use “homebrew”:
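With homebrew:

```
brew install --cask iterm2
```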

Then, install Oh My Zsh; since I have “curl” installed, I’m using this command (otherwise, see the Oh My Zsh URL for alternative options):
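This is the command documented on the Oh My Zsh site (check the site for the up-to-date one):

```
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```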

You should see something like the following output (if Zsh is not your current shell, at the end of the installation you’ll get asked whether to switch to Zsh):

Then, we install p10k:
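Following the p10k instructions for Oh My Zsh, this means cloning the theme into the custom themes directory:

```
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git \
  ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
```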

To enable it, edit “~/.zshrc” and set the variable ZSH_THEME accordingly:
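That is:

```
ZSH_THEME="powerlevel10k/powerlevel10k"
```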

Now, either “source” the .zshrc file or open a new instance of iterm2 to see the initial configuration of p10k (remember you can always reconfigure it by running “p10k configure”):

Meslo fonts are recommended to have nice icon fonts, so it’s best to accept the proposal to install the Meslo fonts (in macOS, you have this nice automatic procedure, while in Linux distributions, you must install them manually). Let’s wait for the fonts to be downloaded:

And then, we must restart iterm2:

Now, we start a new iterm2 instance, and the p10k configuration starts from scratch, asking a few questions to check whether we can see the font icons correctly:

Then, we can start choosing our preferred options:

I like “Rainbow”.

In the question above, I chose “Unicode” to have lots of nice-looking icons like, as we see in a minute, the Git branch and OS icon.

Above, I chose two lines to have more space on the prompt.

Here are other options you can choose:

Note above the “many icons” I previously talked about (I chose to have many icons).

Note the “Transient Prompt” option, which is the one I prefer.

Here, I select the recommended option.

Again, I let the configuration process change the ~/.zshrc file. You can then inspect the changes made as suggested:

Here’s an example of a nice-looking prompt inside a directory with a GitHub repository:

Now, I have installed two other useful plugins (to have syntax highlighting on the command line and to have suggested commands as you type based on history and completions):
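These are zsh-syntax-highlighting and zsh-autosuggestions; for Oh My Zsh they are typically cloned into the custom plugins directory:

```
git clone https://github.com/zsh-users/zsh-syntax-highlighting.git \
  ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
git clone https://github.com/zsh-users/zsh-autosuggestions.git \
  ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
```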

The plug-ins must be enabled in the proper section of ~/.zshrc:
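That is, the plugins array (at this point it contains at least these; the full list may be longer):

```
plugins=(git zsh-syntax-highlighting zsh-autosuggestions)
```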

Here, you can see the two plugins in action (note the syntax highlighting in green for correct commands and suggestions to complete the command):

I also like to have fzf, a general-purpose command-line fuzzy finder. This must be first installed as a program, e.g., with homebrew:
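With homebrew:

```
brew install fzf
```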

And then enable the corresponding plug-in:

I also enable a few more standard plugins. This is my list of plugins in ~/.zshrc:

Fzf has a few default shortcuts:

  • CTRL-T – Paste the selected files and directories onto the command line
  • CTRL-R – Paste the selected command from history onto the command line
  • ALT-C – cd into the selected directory

Unfortunately, the last one (which is one of my favorites) does not work out of the box in iterm2 because the “option/alt” key does not act like “Meta” (as in Linux). This is documented in the FAQ:

Q: How do I make the option/alt key act like Meta or send escape codes?
A: Go to Preferences > Profiles tab. Select your profile on the left, and then open the Keyboard tab. At the bottom is a set of buttons that lets you select the behavior of the Option key. For most users, Esc+ will be the best choice.

If you don’t want to perform that change, you can use “ESC c” to achieve the same result.

That’s all! Enjoy your ZSH 🙂

A First Look at KDE Plasma 6 (beta) in KDE Neon

I tried the KDE Plasma 6 (beta) by using the KDE Neon Unstable Edition.

This is a quick report.

I tried that in a KVM virtual machine. I had to disable 3D graphics, or the installer showed an empty window.

Here’s the live environment where I started the installer:

Note that it uses the Wayland session by default:

There are not many options when choosing to erase the disk:

The installation went smoothly.

Upon reboot, the login screen allows you to choose the X11 session, but Wayland is the default (that’s what I used):

Without 3D, you miss the blur and other effects; for example, you only get transparency without blurring:

Let’s enable 3D (“Display Spice”, “Listen Type = None” and check “OpenGL”, “Apply”, and then “Video Virtio”, check “3D acceleration”).

Everything seems to work this time (so the problem was only during the installation). We now have blur effects and smooth 3D effects:

The “Overview” effect (Alt+W) looks much nicer now (in the meantime, I switched to the dark theme), and it retains the features I had already blogged about:

The default Task Switcher (Thumbnail Grid) now makes sense (changing the default Task Switcher was the first thing I did in Plasma 5!):

From the visual point of view, you now also have a floating panel enabled by default.

There was a substantial system update (about 500 MB), which I applied. After rebooting, I was greeted like this:

Unfortunately, the links do not work: no browser opens…

After the update, logging out does not seem to work anymore: I get a blank screen. The same holds for the other menus like “Shut Down” and “Restart”. Welcome to beta software 😉

However, I did another upgrade the day after, and these issues were fixed.

By the way, if you want to upgrade the system, remember that in KDE Neon, you should not use “sudo apt upgrade” but “sudo pkcon update“.

Here’s the system information (remember: I’m on a virtual machine):

Speaking about desktop effects, we have the (useless but good-looking) Desktop Cube back! You have to enable it in the “Desktop Effects” and remember you must have at least 4 virtual desktops, or the effect will not kick in:

Cool effect 🙂

Speaking of the Desktop effects, the other effects seem to work fine, at least the ones I tried: Present Windows, Magic Lamp, Cover Flow (task switcher), and Blur.

In Wayland, there are some small quirks. The one I noted most is the missing close/maximize/minimize icons in Firefox (you cannot see them, though if you hover, you can press them):

That’s all for the moment!

I’ll keep experimenting with KDE Plasma 6 beta.

Eclipse fonts in Windows 11

This is a quick post about having nice fonts in Eclipse in Windows 11, based on my experience (maybe I had bad luck with the default configurations of Eclipse and/or Windows).

When I bought my Acer Aspire Vero, I found Windows 11 installed, and now and then, I’m using Windows 11 (though I’m using Linux most of the time). As an Eclipse user, I immediately installed Eclipse. However, I found the default fonts were really ugly:

Indeed, “Courier New” is not the most beautiful monospace font 😉

Other applications look nice in Windows 11, including text editors. They use, by default, “Lucida Console”, which looks OK:

Indeed, Eclipse uses “Consolas” for other Text parts:

“Consolas” looks even better than “Lucida”! I changed that in Eclipse also for the standard Text font, and the result looks nice to me:

Nice, isn’t it? 🙂

Java, Maven and Gitpod: Getting Started

I have already blogged about Gitpod, which allows you to spin up fresh development environments from your GitHub projects so that you can code with Visual Studio Code on the web (that’s just a very reductive definition, so you may want to look at its website for the complete set of features). I have already shown how to use it for Ansible and Molecule.

I honestly prefer to have my IDE (Eclipse) on my computer. However, Gitpod allows you to use an old or less powerful computer to develop applications and systems requiring much computing power and resources. I showed an example for the PineBook Pro: you can use Gitpod to develop Ansible roles and test them with Docker.

Today, I will show how to use Gitpod for Java/Maven projects. This is the first post of a series about Java, Maven, and Gitpod.

NOTE: Although the post focuses on Gitpod, most of the features we will see come from Visual Studio Code and the extensions we will install. Thus, the same mechanisms can also be used in a locally installed Visual Studio Code. In that respect, it is best to get familiar with the main keyboard shortcuts (these are shown in Visual Studio Code when no editor is open):

Gitpod provides an example for Java, but it relies on Spring Boot and is probably too complex, especially if you’re not interested in web applications.

In this post, instead, I’ll start with a very basic Java/Maven project. It is intended as a tutorial, so you might want to follow along doing these steps with your GitHub account.

I start by creating a Maven project with the quickstart archetype locally on my computer:
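Something like the following (groupId and artifactId are just examples, not necessarily the ones of the repository linked below):

```
mvn archetype:generate \
  -DgroupId=com.example \
  -DartifactId=java-maven-gitpod-example \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 \
  -DinteractiveMode=false
```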

I move it into another directory, for example:

Create a standard “.gitignore”:
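A minimal example for a Maven project imported into Eclipse (yours may differ):

```
target/
.classpath
.project
.settings/
```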

Initialize the Git repository and create a first commit:
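That is:

```
git init
git add .
git commit -m "Initial commit"
```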

Finally, I create a new repository on GitHub and push my Git repository there (in this example: https://github.com/LorenzoBettini/java-maven-gitpod-example)

To access Gitpod easily from a GitHub repository, I installed the Gitpod browser extension.

Now, I can start Gitpod for this repository using the button (as I said, you need to use a browser extension; otherwise, you have to prefix the URL appropriately):

Let’s press the “Gitpod” button. (The first time you use Gitpod, you’ll have to accept a few authorizations.)

Press the “Continue with GitHub” button and wait for the workspace to be ready.

NOTE: I’m using the light theme of Visual Studio in Gitpod in this blog post.

Gitpod detected that this was a Maven project and automatically executed the command:

Note that it also created the file “.gitpod.yml”, which we’ll later tweak to customize the default command and other things:

Moreover, it offers to install the Java extension pack:

Of course, we accept it because we want to have a fully-fledged Java IDE (this is based on the Eclipse JDT Language Server Protocol; you might want to have a look at what a Language Server Protocol, LSP, is). We use the arrow to choose “Install Do not Sync” (we don’t want that in all Gitpod workspaces, and we’ll configure the extensions for this project later).

Once that’s installed (note also the recommended extension GitLens, which we might want to install later), let’s use the gear icon to add the extension to our “.gitpod.yml” so that the extension will be automatically installed and available the next time we open Gitpod on this project:

Unfortunately, the “.gitpod.yml” is a bit messed up now (maybe a bug?), and we have to adjust it so that it looks as follows:

There’s also a warning on top of the file; by hovering, we can see a few warnings complaining that the transitive dependencies of the extension are not part of the file:

Let’s click on “Quick Fix…” and then apply the suggestions to add the extensions to the file (these are just warnings, but I prefer not to have warnings in my development environment):

In the end, the file should look like this:

Note that we have “code lens” in the editor, and we can choose to let Gitpod validate this configuration:

TIP: another extension I always add is “eamodio.gitlens”.

This will rebuild the Docker image for our workspace (see the terminal view at the bottom):

This operation takes some time to complete, so you might want to avoid that for the moment. If you choose to do the operation, in the end, another browser tab will be opened with this new configuration. We can switch to the new browser tab (the “.gitpod.yml” is available in the new workspace, though we still haven’t committed that).

NOTE: I find “mvn install” an anti-pattern, and, especially in this context, it makes no sense to run the “install” phase and run the tests when the workspace starts. In fact, I changed the “init” task to a simpler “mvn test-compile”; this is enough to let Maven resolve the compile and test dependencies when the workspace starts. The Java LSP will not have to resolve them again and will find them in the local Maven cache.
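So the tasks section of the “.gitpod.yml” ends up looking like this (a sketch):

```yaml
tasks:
  - init: mvn test-compile
```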

We can take the chance to commit the file by using the corresponding tab in Visual Studio Code and then push it to GitHub (“Sync Changes”):

We could also close the Gitpod tabs and re-open Gitpod (the “.gitpod.yml” is now saved in the GitHub repository), but let’s continue on the open workspace.

Let’s now open a Java file in our project:

We get a notification that the IDE is loading the Java project (this might take a few seconds).

TIP: to quickly open a file knowing (part of) its name, press “Ctrl + P” (see the shortcuts above) and start typing:

We have a fully-fledged Java IDE with “code lens” for running/debugging and parameter names (see the argument passed to “System.out.println”):

For example, let’s use “Run” to run the application and see the output in the terminal view:

Though this project generated by the archetype is just a starting point, we also have a simple JUnit test. Let’s open it.

After a few seconds, the editor is decorated with some “code lens” that allows us to run all the tests or a single test (see the green arrow in the editor’s left ruler). Clicking on the arrow immediately runs the tests or a single test. Right-clicking on such arrows gives us more options, like debugging the test.

On the right pane, we can select the “Testing” tab (depicted as a chemical ampoule) that shows all the tests detected in the project (in this simple example, there’s a single one, but in more complex projects, we can see all the tests). We can run/debug them from there.

Let’s run them and see the results (in this case, it is a complete success); note the decorations showing the succeeded tests (in case of failures, the decorations will be different):

Of course, we could run the tests through Maven in the console, but this would be a more manual process, and the output would be harder to interpret in case of failures: we want to use an IDE to run the tests.

We could also run the tests by pressing “F1” and typing “Run tests” (we’ll then use the command “Java: Run Tests”): we need to do that when a JUnit test case is open in the editor.

Let’s hover on the “assertTrue”, which is a static method of the JUnit library. The IDE will resolve its Javadoc and will show it on a pop-up (the “code lens” for the parameter names is also updated):

We can use the menu “Go to definition” (or Ctrl+click) to jump to our project’s source code and libraries. For example, let’s do that on “assertTrue”. We can view the method’s source code in the class “Assert” of JUnit (note that this editor is read-only, and the name of the file ends with “.class”):

Note that the “JAVA PROJECTS” in the “Explorer” shows the corresponding file. In this case, it is a file in the referred test dependency “junit-4.11.jar” in the local Maven cache (see the POM where this dependency is explicit).

Of course, we have code completion by pressing “Ctrl+Space”; when the suggestions appear, we can start typing to filter them, and substring filtering works as well (see the screenshot below where typing “asE” shows completions matching):

With ENTER, we select the proposal. In this case, if we select one “assertEquals”, which is a static method of “Assert”, upon selection, we will also have the corresponding static import automatically added to the file.

That’s all for the first post! Stay tuned for more posts on Java, Maven, and Gitpod! 🙂

My Ansible Role for KDE

I have already started blogging about Ansible; in particular, I have shown how to develop and test an Ansible role with Molecule and Docker, also on Gitpod. I have also shown my Ansible role for GNOME.

This blog post will describe my Ansible role for installing the KDE Plasma desktop environment with several programs and configurations. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.

This role is for my personal installation and configuration and is not meant to be reusable.

The role can be found here: https://github.com/LorenzoBettini/my_kde_role.

The role assumes that at least the basic KDE DE is already installed in the Linux distribution. The role then installs several programs I’m using daily and performs a few configurations (it also installs a few extensions I use).

At the time of writing, the role has the following directory structure, which is standard for Ansible roles tested with Molecule.

The role has a few requirements, listed in “requirements.yml”:
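I won’t reproduce the exact file here; typically it just lists the required collections, for example (the collection name is an assumption):

```yaml
collections:
  - community.general
```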

These requirements must also be present in playbooks using this role; my playbooks (which I’ll write about in future articles) have such dependencies in the requirements.

Let’s have a look at the main file “tasks/main.yml”, which is quite long, so I’ll show its parts and comment on the relevant parts gradually.

This shows a few debugging details about the current Linux distribution. Indeed, the whole role has conditional tasks and variables depending on the current Linux distribution.

The file then installs a few KDE programs I use daily.

The “vars/main.yml” only defines a few default variables used above:

As seen above, a few packages have a different name in Ubuntu (Debian), which is overridden.

Then, I configure a few things in the KDE configuration (.ini) files and set a few keyboard shortcuts. The configuration should be self-explanatory.

Then, I ensure Kate is the default editor for textual files (including XML files, which would otherwise be opened with the default browser); I also configure a few Kate preferences:

Then, I copy a few Konsole profiles (and the corresponding color schemes, see the directory “files/konsole”) and also configure the Yakuake drop-down terminal:

The final part deals with configuring the Kwallet manager to store SSH key passphrases, which, in KDE, has always been a pain to get correctly (at least, now, I have a configuration that I know works on all the distributions mentioned above):

Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. The “default” scenario is Arch, which nowadays is my daily driver.

For Ubuntu, we have a “prepare.yml” file:

The reason for this is explained in my previous posts on Ansible and Molecule.

I have a similar “prepare.yml” for the default scenario, Arch.

I have nothing to verify for this role in the “verify.yml”. I just want to ensure that the Ansible role can be run (and is idempotent) in Arch, Ubuntu, and Fedora.

Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.

I hope you find this post useful for inspiration on how to use Ansible to automatize your Linux installations 🙂

A look at Ubuntu 23.10 Mantic Minotaur

Nowadays, I mostly use Arch-based distributions (especially with EndeavourOS). So I haven’t been using Ubuntu for a while and decided to try it again now that the brand new release, 23.10 “Mantic Minotaur”, is available.

I tested it on my Dell XPS 13.

Here we are in the live environment:

Let’s start the installation. This new version of Ubuntu features a new installer, which looks nice. Having already installed Ubuntu many times, I still felt comfortable with this new installer.

The initial steps are the language, keyboard, and network connection:

In the next step, the installer detected a new version available to download. I said yes. Then, you have to restart the installer, starting from scratch.

By default, Ubuntu proposes a minimal installation when choosing the installation type. However, I prefer to have most of the things installed during this stage, so I chose the “Full Installation”:

Then, we get to the partitioning. As usual, I prefer manual partitioning since I have several Linux distributions installed on my computer. I chose EXT4 as the file system. On Arch, I use BTRFS. However, Ubuntu does not come with good defaults for BTRFS. I dealt with such problems in the past, but now I prefer to stick with EXT4 in Ubuntu and give up on BTRFS snapshots.

Then, we get to the timezone selection (the installer automatically detected my location) and user details. This is as usual.

Interestingly, you can select during the installation the theme and the color accent (that’s nothing special, but it is a nice surprise):

The installation starts; by clicking on the small icon on the bottom right, you can also enable logging on the terminal:

The installation only took a few minutes on this laptop.

Time to restart. Of course, at the first login, you get some updates to install:

The touchpad is already configured with tap-to-click, but it defaults to “natural scrolling” (which I don’t like). That gave me the chance to see the new nice-looking Gnome setting for the touchpad:

I installed Dropbox, and with the Ubuntu extension for "app indicator", the Dropbox icon appears in the tray bar. It mostly works: sometimes it keeps showing as if it were synchronizing, even though everything is up-to-date.

Note that the current icon theme does not show the "Dropbox" folder in Nautilus with the sync-status overlay.

Connecting an external HDMI monitor works perfectly (so Wayland is not a problem); I prefer to mirror the contents:

Also, GNOME extensions work fine. Despite the new GNOME version (45) being known to have broken all extensions due to an API change, the ones I use seem to have been ported and work correctly.

I don't like the fact that, despite a swap partition being already present on my disk, the installer did not pick it up: the result is a small swap file being used instead.

I removed this line from "/etc/fstab":

Then, I added a line referring to my existing swap partition.

I also enabled ZRAM, which will automatically have precedence over the SWAP partition:
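
As a rough sketch (the UUID is hypothetical and the ZRAM package is just one option; I'm only illustrating the kind of change involved):

# /etc/fstab: the installer-created swap file entry to remove looks like
#   /swap.img  none  swap  sw  0  0
# while the entry for an existing swap partition looks like
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0

# one way to enable ZRAM on Ubuntu (reboot afterwards):
sudo apt install zram-config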

I don’t like the wallpapers shipped with this version (in the screenshot, you can easily tell the GNOME wallpapers from the Ubuntu ones):

However, I typically use Variety for wallpapers, so it’s not a big problem.

IMPORTANT: as I have already blogged, you need additional fonts for “Oh-My-Zsh” with the “p10k” prompt.

All in all, Ubuntu 23.10 seems pretty stable and smooth. I’m using it (not as my daily driver), and for the moment, I’m enjoying it.

Hyprland and notifications with mako

Here’s another post on how to get started with Hyprland.

This time, we'll see how to configure notifications with mako, a lightweight notification daemon for Wayland, which also works with Hyprland (you might also want to experiment with an alternative: dunst).

If you followed my previous tutorials, you have no notification daemon installed. You can verify that by running the following command (to issue a notification manually) and by looking at the resulting errors:
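
For example, something along these lines (assuming libnotify, which provides notify-send, is installed):

notify-send "Test" "Hello from notify-send"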

Let’s install “mako”:
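
On an Arch-based distribution like EndeavourOS, for example:

sudo pacman -S mako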

The nice thing about mako is that you don’t need to start it as a service manually: the first time a notification is emitted, mako will run automatically.

Let's run the above notification command again; this time, we see the pop-up, by default, in the top-right corner of the screen:

You have to click the pop-up to make it disappear.

Each time a program emits a notification, mako will show it. For example, Thunderbird, Firefox, and Chrome will emit notifications that mako will display.

Let’s do some further experiments by manually emitting notifications:
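
For instance, a notification with both a title and a body (the text is just illustrative):

notify-send "Meeting in 10 minutes" "Remember to join the call"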

will lead to

You can see that the first argument is the title, which is formatted in boldface.

You can have a look at mako’s manual (5) about its configuration file and where it is searched for:
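
That is:

man 5 mako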

An example configuration, usable as a starting point, can be found here: https://github.com/emersion/mako/wiki/Example-configuration.

Each time you modify the configuration, you must reload mako.
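
Typically, this is done with one of the following (since mako is D-Bus activated, simply killing it also works; it will be restarted with the new configuration at the next notification):

makoctl reload
# or
pkill mako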

With that example configuration, we can emit a few notifications with different “urgencies”, and see the different colors and positions of the boxes:
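
For example (notify-send accepts -u/--urgency with low, normal, or critical):

notify-send -u low "Low urgency" "Just for your information"
notify-send -u normal "Normal urgency" "A regular notification"
notify-send -u critical "Critical urgency" "This one requires attention"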

If you use EndeavourOS, you will get notifications about new updates and when a reboot is required after a system update (the latter is a “critical” notification):

That's all! Not too difficult, is it? 🙂

Stay tuned for more posts about Hyprland. 🙂

My Ansible Role for GNOME

I have already started blogging about Ansible; in particular, I have shown how to develop and test an Ansible role with Molecule and Docker, also on Gitpod.

This blog post will describe my Ansible role for installing the GNOME desktop environment with several programs and configurations. As for the other roles I’ve blogged about, this one is tested with Molecule and Docker and can be developed with Gitpod (see the linked posts above). In particular, it is tested in Arch, Ubuntu, and Fedora.

This role is for my personal installation and configuration and is not meant to be reusable.

The role can be found here: https://github.com/LorenzoBettini/my_gnome_role.

The role assumes that at least the basic GNOME DE is already installed in the Linux distribution. The role then installs several programs I’m using on a daily basis and performs a few configurations (it also installs a few extensions I use).

At the time of writing, the role has the following directory structure, which is standard for Ansible roles tested with Molecule.
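
A typical layout for such a role looks roughly like this (a sketch; the actual repository may contain additional files):

defaults/main.yml
files/
meta/main.yml
requirements.yml
tasks/main.yml
vars/main.yml
molecule/default/    # the Arch scenario
molecule/ubuntu/
molecule/fedora/
# ... plus a scenario that runs the role without Flatpak (see below)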

The role has a few requirements, listed in “requirements.yml”:
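
Given the modules and roles used below, the file looks roughly like this (a sketch; the actual file in the repository is authoritative):

collections:
  - community.general
roles:
  - src: petermosmans.customize-gnome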

These requirements must also be present in playbooks using this role; my playbooks (which I’ll write about in future articles) have such dependencies in the requirements.

The main file “tasks/main.yml” is as follows:

This prints some debug information about the current Linux distribution. Indeed, the whole role has conditional tasks and variables depending on the current Linux distribution.
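
For instance, a minimal sketch of such a debug task (the exact messages in the role may differ):

- name: Show information about the current distribution
  ansible.builtin.debug:
    msg: >-
      Distribution: {{ ansible_distribution }} {{ ansible_distribution_version }}
      (OS family: {{ ansible_os_family }})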

The file then installs a few packages: mainly GNOME programs, but also other programs I use within GNOME.

The “vars/main.yml” only defines a few default variables used above:

As seen above, the package for Python's "psutil" has a different name in Arch, so it is overridden there.

For Arch, we have to install a few additional packages, which are not required in the other distributions (file “gnome-arch.yml”):

The Guake drop-down terminal is installed in the corresponding YAML file.

The file "gnome-templates.yml" creates the template for "New File", which would otherwise not be available in recent versions of GNOME, at least in the distributions I'm using.

For the search engine GNOME Tracker, I performed a few configurations concerning its exclusion mechanisms. This is done by using the community.general "dconf" module:

This also ensures that any previous versions of Tracker are not installed. Moreover, while I use Tracker to quickly look for files (e.g., with the GNOME Activities search bar), I don't want to use "Tracker extract", which also indexes file contents. For indexing file contents, I prefer "Recoll", which is installed and configured in my dedicated playbooks for specific Linux distributions (I'll blog about them in the future).
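
As a sketch of the kind of dconf task involved (the exact key path depends on the Tracker version, and the excluded directories here are only examples):

- name: Exclude some directories from Tracker indexing
  community.general.dconf:
    key: "/org/freedesktop/tracker/miner/files/ignored-directories"
    value: "['node_modules', '.git']"
    state: present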

Then, the file "gnome-configurations.yml" configures a few aspects (the comments should be self-explanatory), including some custom keyboard shortcuts (such as the one for Guake, which, in Wayland, must be set explicitly as a GNOME shortcut):
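
For example, the custom shortcut for toggling Guake can be expressed with dconf keys along these lines (a sketch; paths and values are illustrative):

- name: Register a custom keybinding slot
  community.general.dconf:
    key: "/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings"
    value: "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
    state: present

- name: Configure the Guake toggle shortcut
  community.general.dconf:
    key: "/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/{{ item.key }}"
    value: "'{{ item.value }}'"
    state: present
  loop:
    - { key: "name", value: "Guake Toggle" }
    - { key: "command", value: "guake-toggle" }
    - { key: "binding", value: "F12" }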

Then, by using the "petermosmans.customize-gnome" role (see the requirements file above), I install a few GNOME extensions, which are specified by their identifiers (these can be found on the GNOME extensions website). I leave a few of them commented out since I don't use them anymore, but I might need them in the future:

Then, we have the files for installing and configuring Flatpak, which I use only to install the GNOME Extension Manager:

I installed them system-wide (the “user” option is commented out).
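
A hedged sketch of what these tasks might look like (the Flathub remote URL and the Extension Manager application ID are the standard ones; the actual files may differ):

- name: Add the Flathub remote (system-wide)
  community.general.flatpak_remote:
    name: flathub
    flatpakrepo_url: https://dl.flathub.org/repo/flathub.flatpakrepo
    method: system
    state: present

- name: Install the GNOME Extension Manager
  community.general.flatpak:
    name: com.mattjakeman.ExtensionManager
    method: system    # the "user" method is the commented-out alternative
    state: present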

Concerning Molecule, I have several scenarios. As I said, I tested this role in Arch, Ubuntu, and Fedora, so I have a scenario for each operating system. The “default” scenario is Arch, which nowadays is my daily driver.

For Ubuntu, we have a “prepare.yml” file:

The reason is explained in my previous posts on Ansible and Molecule.

By default, I test that "flatpak" is installed (see the default variables above: Flatpak is installed by default).

But I also have a scenario (in Arch) where I run the role without Flatpak:

For this scenario, the “verify.yml” verifies Flatpak is not installed:
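
A sketch of how such a verification can be written (using package facts; the actual verify.yml may differ):

- name: Verify
  hosts: all
  tasks:
    - name: Gather the list of installed packages
      ansible.builtin.package_facts:

    - name: Assert that flatpak is not installed
      ansible.builtin.assert:
        that: "'flatpak' not in ansible_facts.packages"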

Of course, this is tested on GitHub Actions and can be developed directly on the web IDE Gitpod.

I hope you find this post useful for inspiration on how to use Ansible to automate your Linux installations 🙂

Ubuntu, Oh My Zsh, Powerlevel10k and Meslo fonts

I haven’t been using Ubuntu for a while, but I wanted to give it another try. I’m using my Ansible playbook for installing ZSH, Oh My Zsh, and p10k (Powerlevel10k), so I thought everything would work like a charm.

However, after running the playbook and restarting, the terminal did not look quite right:

You can see that the OS logo before the ">" is not displayed, and other icon fonts are missing too, e.g., the ones for YAML and Markdown files (I'm using exa/eza instead of "ls"). In Arch, I knew how to solve icon problems for exa; in Ubuntu, I had never experimented in that respect.

The p10k GitHub repository provides many hints in that respect. Unfortunately, Ubuntu does not provide packages for Nerd Fonts; however, the p10k repository provides some patched Meslo fonts that can be downloaded directly.

The commands to solve the problem (provided you already have “fontconfig” and “wget” installed, otherwise, do install them) are:
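
As a sketch (these are the Meslo font files listed in the p10k README, downloaded here into the per-user font directory):

mkdir -p ~/.local/share/fonts
cd ~/.local/share/fonts
wget "https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Regular.ttf"
wget "https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold.ttf"
wget "https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Italic.ttf"
wget "https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold%20Italic.ttf"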

And then issue
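
That is, the standard fontconfig command to refresh the font cache:

fc-cache -f -v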

You can verify that they are now installed:
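
For example:

fc-list | grep -i "MesloLGS"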

Now, reboot (this seems to be required), and the next time you open the terminal, everything looks fine (note the OS icon and the icons for YAML and Markdown files):

Of course, you could also download another Nerd font from the corresponding GitHub repository, but this procedure seems to work like a charm, and you use the p10k recommended font (Meslo).

By the way, the Gnome Text Editor automatically uses the new icon fonts. Other programs like Kate (which I use in Gnome as well) have to be configured to use the Meslo font.

Dell OptiPlex 5040 MiniTower: upgrading SSD

I am writing this report about my (nice) experience upgrading the SSD of my Dell OptiPlex 5040 MiniTower to 1 TB. That's an old computer (I bought it in 2016), but it's still working great. However, its original 256 GB SSD was becoming too small for Windows and my Linux distributions. This computer also came with a secondary mechanical hard disk (1 TB).

DISCLAIMER: This is NOT meant to be a tutorial; it's just a report. If you perform these operations, you do so at your own risk! Also, make sure that opening your computer does not void the warranty.

I wrote this blog post as a reminder for myself in case I have to open this desktop again in the future!

To be honest, my plan was to add the new SSD as an additional drive but, as described later, I found out that the mechanical hard disk was also a 2.5-inch one (occupying the second bay), so I replaced the old SSD with the new one (after cloning it). I used a "FIDECO YPZ220C" to perform the offline cloning, which worked great!

This is the BIOS status BEFORE the upgrade:

I seem to remember that “RAID” is required to have Linux installed on such a machine.

This is the new SSD (a Samsung 870 EVO, 1 TB, SATA 2.5”):

The cool thing about this desktop PC, similar to other Dell computers I had in the past, is that you don't need a screwdriver: you can disassemble it just with your hands. However, I suggest you have a look at a disassembly video like the one I used: https://www.youtube.com/watch?v=gXePa1N_8iI. I know the video is about a Dell Optiplex 7040 MT, while mine is a Dell Optiplex 5040 MT, but their shapes and internals look the same. On the contrary, the Dell Optiplex 5040 Small Form Factor videos are not useful because there's a huge difference between my MiniTower and a Small Form Factor 5040.

These are a few photos of the disassembly, showing the handles used to open the computer, disconnect a few parts, and access the part holding the 2.5-inch drives.

This is the part holding the two 2.5-inch drives (as I said, at this point I realized that the mechanical hard disk also occupies one of these slots):

The SSD I will replace is the first one on top.

It's easy to remove: just use the handles to pull it out:

There are no screws to remove: you just slightly widen the container to take the SSD out and insert the new one.

As I said above, I inserted the new one after performing the offline cloning.

Once I closed the desktop, the BIOS confirmed that the new SSD was recognized! 🙂

Now, some bad news (which is easy to fix, though): if you use a partition manager, e.g., in Linux, the SSD is seen as 1 TB, but the partition table still comes from the original, smaller SSD, so you end up with lots of free space that you cannot use!

For example, here's the output of fdisk, which detects that there's something wrong with the partition table:

It also suggests that it's not a good idea to try to fix it while one of the partitions is mounted.

Using a live ISO (e.g., the one from EndeavourOS), it's just a matter of fixing the partition table as follows.
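
As a sketch of one way to do it (the device name is hypothetical): gdisk/sgdisk can move the backup GPT data structures to the actual end of the disk:

# /dev/sda is the cloned SSD; adjust the device name for your system
sudo sgdisk -e /dev/sda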

Now, you have access to the whole space in the disk.

For example, this is the output of “gparted” (Yes, I have a few Linux distributions installed on this PC):

That’s all! 🙂