
Timeshift and grub-btrfs in Ubuntu

UPDATED 22/Dec/2022, ChangeLog:

  • 22/Dec/2022: added the flag “-czstd” for defragmentation and compression.
  • 20/Nov/2022: documented the new version of grub-btrfs and its new grub-btrfsd daemon; the configuration for Timeshift is much simpler, but you have to install another package: inotify-tools.
  • 17/Nov/2022: documented that I could also create additional subvolumes and move existing contents from the running system

In this post, I’ll describe the same procedure I described in one of my previous posts but applied to Ubuntu (in particular, Ubuntu Jammy 22.04). I’m not describing an autosnap mechanism when performing package updates in Ubuntu, though you might want to try timeshift-autosnap-apt for that.

If you choose the BTRFS filesystem when you install Ubuntu, you get the two subvolumes “@” for “/” and “@home” for “/home”, which will make Timeshift work since that’s what Timeshift expects to find to take BTRFS snapshots.

However, some adjustments are still worth making to avoid useless data ending up in the snapshots and to be able to boot snapshots directly (with the help of grub-btrfs).

Adjust compression

By default, Ubuntu does not mount the BTRFS subvolumes with compression, but I prefer to have compression on BTRFS.

To do that, right after the installation, I change /etc/fstab by adding compression and “noatime” to the existing entries for “/” and “/home” (as usual, UUID must be replaced with the UUID of your disk partition):
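For example, the two entries might look like this (a sketch: the UUID is just a placeholder, keep the one already in your fstab):

    # /etc/fstab: add compress=zstd and noatime to the existing entries
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /     btrfs defaults,subvol=@,compress=zstd,noatime     0 1
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home btrfs defaults,subvol=@home,compress=zstd,noatime 0 2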

Save the file, reboot, and perform “defragment” so that the existing data gets compressed:
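For example, with zstd compression (this may take some time):

    sudo btrfs filesystem defragment -r -v -czstd /
    sudo btrfs filesystem defragment -r -v -czstd /home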

It's also best to create /var/lib/portables and /var/lib/machines if they are not already there.

I’m not using these directories, but it looks like systemd creates them automatically as nested subvolumes. Nested subvolumes will force you to do some manual removal after restoring a snapshot and removing old snapshots.
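If they're missing, you can create them yourself as subvolumes (a sketch):

    sudo btrfs subvolume create /var/lib/portables
    sudo btrfs subvolume create /var/lib/machines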

The current situation of subvolumes should be as follows:
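You can double-check with "btrfs subvolume list" (the IDs and generation numbers below are illustrative):

    sudo btrfs subvolume list /
    # ID 256 gen 100 top level 5 path @
    # ID 257 gen 100 top level 5 path @home
    # ID 258 gen 100 top level 256 path @/var/lib/portables
    # ID 259 gen 100 top level 256 path @/var/lib/machines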

Create additional subvolumes

With the current subvolume layout, things like logs and caches will end up in snapshots, giving you problems when you try to boot a snapshot (which we want to be able to do).

Creating a subvolume for the whole /var directory is not ideal: the snapshot would then miss essential things like /var/lib, so we could not restore a snapshot correctly. Instead, we create subvolumes to mount /var/log, /var/cache, and /var/tmp.

Since we have to move the existing contents of these directories into the new subvolumes, it's better to operate from a Live ISO.

UPDATE 17/Nov/2022: I also tried to create the new subvolumes and move the existing contents in the running system, following the steps described below. It worked! However, I’d still suggest you do the next steps from a Live ISO because you never know 😉

So, let’s boot a Live ISO (e.g., Ubuntu live installation medium).

Switch to superuser (no password required in the live media):
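    sudo -i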

Mount the BTRFS filesystem into /mnt, which should already exist in the live media. We have to mount the partition hosting our existing installation. In my example, it is /dev/sdb10:
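    mount /dev/sdb10 /mnt   # use your own partition here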

You should see the existing subvolumes in the mount directory:
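    ls /mnt   # should list: @  @home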

Create the new subvolumes

Here are the new subvolumes:
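Assuming the names @log, @cache, and @tmp (they must match the fstab entries we'll add below):

    btrfs subvolume create /mnt/@log
    btrfs subvolume create /mnt/@cache
    btrfs subvolume create /mnt/@tmp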

Let's move the contents of /mnt/@/var/log into /mnt/@log, and do the same for /mnt/@/var/cache into /mnt/@cache.
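    mv /mnt/@/var/log/* /mnt/@log/
    mv /mnt/@/var/cache/* /mnt/@cache/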

The directory /mnt/@/var/tmp/ should be empty, so there’s nothing to move (you may want to check that).

Adjust the fstab of the installed system so that the new subvolumes are mounted as /var/log, /var/cache, and /var/tmp.

You will have to use the correct UUID, which is the same as the one for the already mounted volumes (when you copy and paste, make sure you update both the mount point and the subvolume name consistently):
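A sketch of the three new entries (again, the UUID is a placeholder):

    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /var/log   btrfs defaults,subvol=@log,compress=zstd,noatime   0 2
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /var/cache btrfs defaults,subvol=@cache,compress=zstd,noatime 0 2
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /var/tmp   btrfs defaults,subvol=@tmp,compress=zstd,noatime   0 2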

Remember to double-check everything!

Let's unmount the partition:
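    umount /mnt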

and we should be able to reboot the system, hopefully without problems.

Install Timeshift

We can install it from the Ubuntu repositories with
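    sudo apt install timeshift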

NOTE: since this is not the latest version of Timeshift, snapshots will always be generated in the same directory, so there's no need for the additional fixed mount point as in Arch (see my previous post).

Let’s create a new Timeshift snapshot and browse it with the Timeshift toolbar button “Browse”. We can verify that the directories @/var/log, @/var/cache, and @/var/tmp are empty in the snapshot. The same holds for home (but that was already true thanks to the initial subvolume for home).

Configure grub-btrfs

We have to install grub-btrfs from sources because there's no official package for Ubuntu. First, install the dependencies (the exact set may vary; something like this should be enough):
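    sudo apt install build-essential git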

Then, the installation procedure is straightforward:
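    git clone https://github.com/Antynea/grub-btrfs.git
    cd grub-btrfs
    sudo make install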

We now need to configure it to monitor the Timeshift snapshot directory instead of the default one (/.snapshots).

Updated version (20 November 2022)

UPDATE 20/Nov/2022: A new version of grub-btrfs, with the grub-btrfsd daemon, is available, currently only when installing from sources. What follows is based on this new version. The old instructions are still at the bottom of the post, but they're no longer valid.

If you see an ASCII splash screen when installing grub-btrfs from sources, you're using the new version.

Now, let’s make sure grub-btrfs can find Timeshift’s snapshots (remember, we’ve just created one). So let’s run update-grub, and we should see in the end something like the following output:
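The snapshot date, path, and count in this sample are illustrative:

    sudo update-grub
    # Generating grub configuration file ...
    # Found snapshot: 2022-11-20 10:00:00 | timeshift-btrfs/snapshots/2022-11-20_10-00-00/@
    # Found 1 snapshot(s)
    # done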

The last lines prove that grub-btrfs can detect snapshots.

Automatically update the grub menu upon snapshot creation or deletion (20 November 2022)


Grub-btrfs provides a daemon that watches the snapshot directory and updates the grub menu automatically every time a snapshot is created or deleted.

Important: This daemon requires an additional package:
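    sudo apt install inotify-tools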

By default, this daemon watches the directory “/.snapshots” (the default directory for Snapper). Since Timeshift uses a different directory, we have to tweak the configuration for the daemon.

Let’s run:
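    # one way to edit the unit of the grub-btrfsd daemon
    sudo systemctl edit --full grub-btrfsd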

We must change the line
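    # as shipped, the daemon watches Snapper's directory
    ExecStart=/usr/bin/grub-btrfsd --syslog /.snapshots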

into
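    ExecStart=/usr/bin/grub-btrfsd --syslog --timeshift-auto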

This is required for Timeshift version 22.06 and later because Timeshift creates a new directory named after its process ID in /run/timeshift every time it is started. Since the PID is different every time, the directory will also be different. Grub-btrfs provides the command line argument --timeshift-auto to correctly detect the current snapshot directory (in previous versions of grub-btrfs, we had to tweak /etc/fstab to deal with that).

Let’s start the daemon:
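    sudo systemctl start grub-btrfsd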

In the journalctl log, we should see something like (where the date and time have been stripped off):

Let’s start Timeshift. In the journalctl log, we should see something like this:

Let’s verify that if we create a new snapshot, grub-btrfs automatically updates the GRUB menu: in a terminal window, run “journalctl -f” to look at the log, then create a new snapshot in Timeshift. In the log, you should see something like the following lines:

If we delete an existing snapshot, we should see corresponding messages in the log.

Remember that it takes a few seconds for grub-btrfs to recreate the grub menu.

Once we’re sure everything works, we can enable the daemon to always start at boot:
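    sudo systemctl enable grub-btrfsd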

The next time we boot, our grub menu will also show a submenu to boot snapshots.

For some experiments with booting a snapshot and restoring it, please look at my other post.

Old version (with old release 4.11)

UPDATE 20/Nov/2022: These are the older instructions for the previous version of grub-btrfs, which had no "grub-btrfsd.service" but a different systemd unit ("grub-btrfs.path").

I leave these instructions here just for “historical reasons”.

The relevant contents of the systemd unit grub-btrfs.path
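    # /usr/lib/systemd/system/grub-btrfs.path (relevant part)
    [Path]
    PathModified=/.snapshots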

should be replaced with
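    # Timeshift's (old, fixed) snapshot directory
    [Path]
    PathModified=/run/timeshift/backup/timeshift-btrfs/snapshots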

Let’s reload and re-enable the monitoring service:
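    sudo systemctl daemon-reload
    sudo systemctl enable --now grub-btrfs.path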

If we have already created a few snapshots, we can run update-grub and verify that new grub entries are created for the found snapshots:

Let’s verify that if we create (or delete) a snapshot, grub-btrfs automatically updates the GRUB menu: in a terminal window, run “journalctl -f” to look at the log, then create a new snapshot in Timeshift. In the log, you should see something like the following lines (where the date and time have been stripped off):

This output tells you that the grub-btrfs monitoring service works!

The next time we boot, our grub menu will also show a submenu to boot snapshots.

For some experiments with booting a snapshot and restoring it, please look at my other post.

Taming KDE baloo

Quoting from https://community.kde.org/Baloo,

Baloo is the file indexing and file search framework for KDE Plasma, with a focus on providing a very small memory footprint along with extremely fast searching.

Unfortunately, it has a bad reputation for being a resource hog. However, it all boils down to configuring it appropriately (according to your needs, of course): not only will it be "tamed", but it will also work fast, use few resources, and be your friend when using KDE.

In this article, I describe how I configure it to be fast and functional without being a resource hog.

Let me say upfront that I use baloo only for file search, NOT for content search. I don't think baloo is the best tool for content search because it updates its index continuously. For that, I suggest Recoll: you can configure it to update its index a few times a day, which is typically enough, and it provides more powerful query mechanisms than baloo. If you want baloo to index file contents anyway, remember that this requires more resources and time to complete.

First of all, configuring baloo should be the first thing you do after you install KDE and log in to KDE for the first time. At the very least, disable it before filling your home folder with all your contents.

Whether baloo is enabled or disabled by default depends on the distribution. Go to System Settings -> Search -> File Search.

In this example, it's disabled. Let's check "Enable File Search" and uncheck "Also index file content" (I disable indexing of file contents for the reasons given above). DON'T PRESS "Apply" yet. Another problem with the default configuration is that it will index all the files in your home folder. That's probably too much, unless you want to search everywhere in your home folder (including temporary files, locally installed binaries, etc.).

In that respect, I prefer to exclude my home folder and provide some subfolders to be indexed. So I use the dropdown menu and exclude my home folder.

Then, I add the folders I want to be indexed with the button "Start indexing a folder…".

Now, you can hit “Apply”, and after some time, you can see that indexing completes. With an SSD and indexed folders that do not contain many files, it takes baloo just a moment to index the selected folders.

You may also want to learn to use the command balooctl from the terminal, to have more control over baloo and retrieve indexing information. For example, here's the status subcommand and its result on my system, after the configuration above and after the indexed directories were filled with my data:
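A sketch of its output (the figures are purely illustrative):

    balooctl status
    # Baloo File Indexer is running
    # Indexer state: Idle
    # Total files indexed: 27,294
    # Files waiting for content indexing: 0
    # Files failed to index: 0
    # Current size of index is 87 MiB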

Learning more about baloo's configuration, to have fine-grained control of its indexing functionality, is even better. For example, you can refine the configuration by excluding some files from the specified indexed folders, based on their extensions or on regular expressions. As far as I know, you cannot do that from the System Settings dialog shown above: you have to tweak the configuration file ~/.config/baloofilerc.

The interesting part is "exclude filters=…", where you specify a comma-separated list of regular expressions for excluding files or folders. It comes prefilled with some sensible values; you can add your own exclusion filters.
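For illustration, the relevant part of ~/.config/baloofilerc might look like this (the filters shown are just examples):

    [General]
    exclude filters=*~,*.part,*.o,.git,node_modules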

After modifying this file, I'd suggest you rebuild the index, e.g., with these commands:
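    balooctl disable   # stop indexing
    balooctl purge     # delete the current index
    balooctl enable    # re-enable indexing: the index gets rebuilt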

After the index is recreated, "balooctl status" shows that many files have been excluded.

You can enjoy baloo file search from "Search" in the "Application Menu" and from KRunner (Alt+Space). Note that KRunner might take some time after a re-index before it correctly shows the indexed files.

Now, baloo is tamed, and you can enjoy its features without it hogging your resources.

Final thoughts: make sure things like BTRFS snapshots don't end up in the folders to be indexed, or you will have several problems, including lots of resource usage, not to mention duplicate results during searches.

Fixing Docker problems in Fedora

In this post, I’ll describe a few problems with Docker in Fedora and how to fix them. In particular, I’ll try to provide an analysis of the problems and the sources of the solutions.

I've experienced a few problems with Docker in Fedora (35 and 36). I first installed docker-ce by following the official documentation: https://docs.docker.com/engine/install/fedora/. Things went smoothly at the beginning with the first Docker images, e.g., "hello-world", "ubuntu", and a few others.

However, I started to experience severe problems with the image "mysql:5.7": the container took a long time to start, and Docker immediately ate my whole RAM (16 GB). If I didn't terminate the process, the computer would likely hang in a minute or less.

This is a docker-compose file to recreate the problem:
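Something like the following minimal reconstruction (service name, credentials, and port mapping are illustrative):

    # docker-compose.yaml
    version: '3'
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: test
        ports:
          - "3306:3306"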

Save it as "docker-compose.yaml", start it with "docker-compose up", and see the problem.

To verify the problem, you must install the package docker-compose from Fedora's repositories. Note that the problem is not due to docker-compose itself: I have precisely the same problems with the above Docker image even when I run it without docker-compose (e.g., during a Maven build).

There's no problem with a more recent version of the MySQL image, e.g., "mysql:8.0.27". Still, there must be something wrong with Docker in Fedora, because the same docker-compose file using "mysql:5.7" gives me no problem at all and works perfectly in other Linux distributions like Ubuntu and Arch.

I also experienced slow performance and RAM draining when running an Ansible Molecule test with one of my Ansible playbooks, e.g., running a flatpak installation in a Docker container. Again, only in Fedora.

I reported the problem here, https://ask.fedoraproject.org/t/docker-very-slow-in-fedora/23214, without getting any solution.

Here are the solutions I found myself.

Use moby engine (and tweak its configuration)

Instead of using docker-ce, I tried moby-engine (https://mobyproject.org/), which provides the same "docker" command line tools. The existing docker-compose works with moby as well. As stated here, https://fedoramagazine.org/docker-and-fedora-35/, moby-engine is "Fedora's way" (though the official Docker package is said to work as well).

Surprise! The above docker-compose file works like magic: it starts fast and does not eat RAM. Also, my Ansible Molecule tests work perfectly.

Unfortunately, with moby-engine you run into problems when using the excellent Testcontainers library. This problem is due to SELinux (that figures!!!). You can use this example project (taken from my TDD book): https://github.com/LorenzoBettini/it-docker-mongo-example. If you have Java and Maven installed, you can clone the repository, enter the directory "com.examples.school", and run "mvn verify". When it comes to the Testcontainers tests, you get errors of this shape:

If you temporarily disable SELinux:
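    sudo setenforce 0   # SELinux becomes permissive until the next reboot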

The Testcontainers tests will succeed.

However, let’s try to fix this problem once and for all.

To understand why moby-engine's docker command works while docker-ce's does not, I compared the /usr/lib/systemd/system/docker.service file provided by the moby-engine installation with the one provided by the docker-ce installation.

These are the interesting parts of the docker.service file that comes with the moby-engine package:
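Abridged from memory; check your own copy of the file:

    [Service]
    Type=notify
    EnvironmentFile=-/etc/sysconfig/docker
    ExecStart=/usr/bin/dockerd $OPTIONS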

While these are the corresponding parts in the version of the file provided by the docker-ce package:
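Also abridged:

    [Service]
    Type=notify
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock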

The main differences are:

  • docker-ce uses a simple command to start the Docker daemon
  • moby-engine passes more arguments and relies on additional arguments (the OPTIONS environment variable) read from the additional file /etc/sysconfig/docker, whose contents are listed here:
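    # /etc/sysconfig/docker as shipped by moby-engine (it may differ between versions)
    OPTIONS="--selinux-enabled \
      --log-driver=journald \
      --default-ulimit nofile=1024:1024 \
      --init-path /usr/libexec/docker/docker-init \
      --userland-proxy-path /usr/libexec/docker/docker-proxy \
    "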

I like this approach of relying on a separate file that the user can customize: as I understand it, updates to the moby-engine package will not overwrite this file.

Thus, to avoid the SELinux problem with Testcontainers, it is enough to remove the "--selinux-enabled" argument in the above file. These are the steps:

  • Stop Docker: “sudo systemctl stop docker”
  • Modify the file as shown right after this list
  • Restart Docker: “sudo systemctl daemon-reload && sudo systemctl start docker”
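Here's how the file might look afterwards (the same options as before, minus "--selinux-enabled"):

    OPTIONS="--log-driver=journald \
      --default-ulimit nofile=1024:1024 \
      --init-path /usr/libexec/docker/docker-init \
      --userland-proxy-path /usr/libexec/docker/docker-proxy \
    "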

Moreover, disabling SELinux in the above file is also required if you, like me, prefer to have the Docker data directory (by default, /var/lib/docker) in another place (e.g., in a standard partition, which is not BTRFS and which is shared among several Linux distributions on the same computer).

Here’s an example of customization of the above file to tell Docker to use a different root directory:
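    # the path passed to --data-root is just an example
    OPTIONS="--log-driver=journald \
      --default-ulimit nofile=1024:1024 \
      --init-path /usr/libexec/docker/docker-init \
      --userland-proxy-path /usr/libexec/docker/docker-proxy \
      --data-root /media/data/docker-data \
    "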

If you use a different root directory with SELinux enabled, you might get SELinux-related errors when running containers.

Fixing docker-ce

By taking inspiration from the configuration of moby-engine, we can also fix docker-ce by tweaking its configuration.

Of course, stop the Docker service (“sudo systemctl stop docker”) before doing the following steps.

It makes no sense to modify the file /usr/lib/systemd/system/docker.service directly because it will be overwritten by docker-ce package updates (I'm sure about that: I've seen it happen).

Instead, we first create the file /etc/sysconfig/docker (which does not exist in a docker-ce installation), removing the few arguments that would not work with docker-ce:
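A sketch (I assume the arguments to drop are the moby-specific --init-path and --userland-proxy-path, which point to binaries installed by moby-engine):

    OPTIONS="--log-driver=journald \
      --default-ulimit nofile=1024:1024 \
    "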

Then, we create another service file to override the Docker daemon execution command. These are the contents of the created file:
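    # /etc/systemd/system/docker.service.d/override.conf: a sketch; you can create
    # it, e.g., with "sudo systemctl edit docker"
    [Service]
    EnvironmentFile=-/etc/sysconfig/docker
    # clear the packaged ExecStart, then redefine it with $OPTIONS appended
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $OPTIONS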

Restart Docker with “sudo systemctl daemon-reload && sudo systemctl start docker”. Everything should now work with docker-ce!

Of course, this approach also works with a specified "--data-root" argument, as seen in the moby-engine example.

Conclusions

On a side note, it might be enough to use only a subset of arguments in the docker-ce configuration taken from moby-engine. However, I think I wasted enough time debugging these problems in Fedora, and for the time being, I’m happy with these existing configurations.

I also want to stress my disappointment about how Fedora makes developers' lives hard in these situations. I decided to debug these Docker problems and find solutions as a challenge. However, my interest in Fedora and my initial appreciation for this distribution have significantly decreased. I won't stop using it anytime soon, but I will not use it as my daily driver. For that, there's Arch! 🙂

VirtualBox in Fedora Linux

I have no problem installing VirtualBox and the related tools and extensions in Ubuntu and Arch. In Fedora, things are a little bit harder.

First, I think it's better NOT to download the binaries distributed by VirtualBox: I'm using the Fedora packages available from RPM Fusion Free. Thus, first of all, you have to enable that repository.

Then you run:
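    sudo dnf install VirtualBox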

It's also best to add your user to the vboxusers group:
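    sudo usermod -aG vboxusers $USER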

And reboot, of course.

Then, to have additional features, you should install the VirtualBox extension pack. That's where Fedora gets complicated: unlike Ubuntu (package virtualbox-ext-pack) and Arch (package virtualbox-ext-oracle), RPM Fusion provides no corresponding package.

You need to download the file manually from the VirtualBox website, http://download.virtualbox.org/virtualbox/. You must download a file of the form Oracle_VM_VirtualBox_Extension_Pack-<VERSION>.vbox-extpack, where <VERSION> must match the version of VirtualBox you installed. Then, you install it inside VirtualBox with File => Preferences => Extensions.

If the versions don't match, you will have trouble starting your virtual machines, with errors complaining about the extension pack version.

I experienced such a problem and asked on the Fedora forum.

Then, I realized that VirtualBox had been upgraded during a system update, but I had forgotten to download and install the updated extension pack, so the versions didn't match.

I then downloaded the corresponding file, http://download.virtualbox.org/virtualbox/6.1.38/Oracle_VM_VirtualBox_Extension_Pack-6.1.38.vbox-extpack, and added it to the "Extensions" preference (selecting "Upgrade").

That was enough to go back to running my VirtualBox machines.

However, I must admit that the whole procedure for using VirtualBox in Fedora is much more cumbersome and error-prone than in other distributions 🙁 It’s far too easy in Fedora to forget about package upgrades that require manual interventions. In general, I’d like to avoid manual interventions at all 😉

Again woes for KDE and Google Accounts

TL;DR If you want to access your Google Calendars from Korganizer or Kalendar, remember to enable “Enable the KDE wallet subsystem”.

I had already written about the cumbersome procedure to access Google accounts in KDE, and I thought I had learned enough to "easily" set up my Google calendars in KDE. I was wrong. Today I tried KDE again (I'm mainly a GNOME user for the moment) in a brand new Arch-based installation. I also decided to try the new calendar application, Kalendar. I managed to add a new calendar by selecting "Google Groupware".

(By the way, once you've selected "Google Groupware", can you spot any "Add", "OK", or any other way to confirm the selection? There's "Close", but I'd assume that is for canceling the selection… IIRC, you must double-click on the selection; anyway, that's odd!)

But then, I couldn’t configure it: I pressed the button “Edit,” and nothing happened!

I mean, no error pop-up. Nothing printed on the console if I ran Kalendar from the command line. Nothing in the journalctl log. NOTHING!

The same happens with Korganizer: the Google Groupware calendar already appeared there because Kalendar and Korganizer share the same calendars. And the problem was the same: the calendar was not configured, and clicking "Modify" led to nothing. No error. NOTHING again!

To cut a long story short: you must enable the wallet subsystem in the KDE system settings.

How can this be possible? In 2022, KDE and its programs still cannot manage to provide meaningful information when something goes wrong! And in KDE, things often go wrong.

For example, KAddressBook has not been able to access Google's address book for more than a year: https://bugs.kde.org/show_bug.cgi?id=439285.

SSH into a VirtualBox machine

I use VirtualBox a lot for testing purposes, mainly to experiment with a Linux distribution (to see whether it’s worthwhile to install it on bare metal) and to test procedures that might involve some complex (and maybe “dangerous”) operations.

After you install the VirtualBox Guest Additions, things like bidirectional copy and paste will work. Some distributions seem to install and enable such additions right from the start (e.g., Fedora, if I'm not wrong). However, in many situations you're installing a distribution in a VirtualBox machine where the Guest Additions are not working yet, so you cannot copy and paste anything. For example, the Arch Linux ISO even boots in text mode only, and it's helpful to be able to copy and paste commands.

In such situations, I prefer to SSH into the virtual machine from a local terminal: copy and paste will work (since I'm in a local terminal). Moreover, the keyboard layout will be the one of the local system (the host), so it will already be configured correctly, while in the virtual machine's console you'd have to configure it yourself. Finally, you can quickly transfer files via SSH between the host and the virtual machine without configuring shared folders.

Before being able to SSH into a virtual machine, you need two things:

  • An SSH server up and running in the virtual machine (how to do that depends on the specific Linux distribution)
  • Port forwarding configured for the virtual machine: for instance, all connections to a specific port on the host will be forwarded to port 22 (the default SSH port) of the virtual machine.

I could do the second operation through the "Settings" of the machine, but that requires a few dialogs and filling a few table cells. I prefer to do it with a single command from the host. For example, say I create and configure a new virtual machine with the name "My Machine". When it is not running, I run from the host:
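    # forward host port 2522 to the guest's port 22; "guestssh" is an arbitrary rule name
    VBoxManage modifyvm "My Machine" --natpf1 "guestssh,tcp,,2522,,22"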

Then, when I start the virtual machine, assuming the SSH server is configured in the virtual machine, I SSH into the machine through the local port 2522. Of course, you can choose any free port number on your host. For example, assuming there’s a user foo in the virtual machine, I run:
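    ssh -p 2522 foo@localhost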

Let’s see a complete example.

Let's say I create a virtual machine called "Arch Gnome" for installing Arch Linux. I configure it appropriately; in particular, I "insert" the ISO of Arch Linux into the virtual CD-ROM drive of the machine. Before starting the machine, I run from the host:
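    VBoxManage modifyvm "Arch Gnome" --natpf1 "guestssh,tcp,,2522,,22"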

Start the virtual machine:
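    VBoxManage startvm "Arch Gnome"   # or simply start it from the VirtualBox GUI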

Inside the live environment of Arch Linux, the SSH server is already up and running. However, since we'll connect with the root account (the only one present), we must give the root account a password: by default it's empty, and SSH will not allow you to log in with a blank password. Choose a password; it is temporary, and if you're in a trusted local network, you can choose an easy one.
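    # inside the virtual machine's console
    passwd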

Now, we can connect via SSH to the virtual machine through localhost. If you have already connected via SSH to localhost, you might get an error of the shape:
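The classic OpenSSH host-key warning (abridged):

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Host key verification failed.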

All you have to do is edit the known_hosts file by removing the offending lines and try again. You will have to remove all the lines that start with “[127.0.0.1]:2522”.

Note that we're using port 2522 because that's the one we used when creating the port mapping. Let's connect to the virtual machine and type the password we previously specified for the root account (accept the fingerprint when asked):
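    ssh -p 2522 root@localhost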

In your local terminal, you can see that you get the colors of the virtual machine (now, you're inside the virtual machine).

The keyboard layout is the one of the host and, as usual, you can copy and paste text into the local terminal: you run commands in the virtual machine, but more comfortably, from your local terminal. You can also transfer files with "scp".

For transferring files, you could also configure your file manager to access the “remote” server (the virtual machine) and browse and perform operations on the virtual machine’s file system from your local file manager.