Category Archives: Tutorials

Enabling Hibernation in Arch-based distros

I had already posted about enabling hibernation in Linux, particularly in Ubuntu.

Thanks to the script hibernator, the procedure is much more straightforward in Arch-based distros (including Manjaro and EndeavourOS).

First of all, make sure you install the package update-grub.

The package hibernator is available from AUR, but currently, there’s a problem with the build instructions. Thus, we must install it manually. There’s no need to install the script anywhere: it’s just a matter of cloning the script from GitHub and running it once:
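For example (a minimal sketch; replace the placeholder with the actual URL of the hibernator repository on GitHub):

    git clone https://github.com/<user>/hibernator.git
    cd hibernator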

Then, before running the script (as a superuser), we must decide where we want the suspend-to-disk to take place: either in a swap partition or in a swap file (please keep in mind that currently, the script cannot handle hibernation into a swap file of a BTRFS partition).

If we want a swap file, the script can create that for us, and we can also specify the size of the swap file. Here’s the relevant quote from the project’s home page:

Hibernator accepts the desired size of the swapfile as an argument. Running hibernator 2G creates a 2 GB swapfile, hibernator 1000M creates a 1000 MB swapfile. The script defaults to 4G if no arguments are given.

If we want a swap partition, we must ensure the partition is already in place and that our /etc/fstab already refers to that (i.e., it mounts it appropriately).

When we’re ready, we just run the script with sudo. The script will update the GRUB command line with the resume option and a resume hook to /etc/mkinitcpio.conf. Finally, it will update the GRUB configuration (with update-grub, that is why you need to install this package beforehand).
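For instance, from the cloned directory (a sketch; pass a size argument only if you want the script to create a swap file of that size):

    # hibernate to a swap partition already listed in /etc/fstab
    sudo ./hibernator
    # or: create and use a 2 GB swap file
    sudo ./hibernator 2G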

Here’s an example of the output:

And that’s all: we just reboot, and hibernation is ready to be used!

Before rebooting, you might want to check that the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub has been set with a valid resume entry (if you rely on a swap partition, which has been properly specified in /etc/fstab, it should contain a reference to the UUID of the swap partition):
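For a swap partition, the line should look roughly like this (the UUID is shown as a placeholder):

    GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=<UUID-of-the-swap-partition>"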

You can test that, as usual, with
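    # ask the system to suspend to disk (assuming systemd)
    systemctl hibernate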

You may also want to refer to the older post enabling hibernation in Linux, particularly in Ubuntu, for other mechanisms related to hibernation, like suspend and then hibernate.

Timeshift and grub-btrfs in Arch Linux

After looking at the very nice videos of Stephen’s Tech Talks, in particular, this one https://www.youtube.com/watch?v=6wUtRkEWBwE, I decided to try to set up Timeshift, Timeshift-autosnap, and grub-btrfs in my Arch Linux installation, where I’m using BTRFS as the filesystem. These three packages allow you to have a Timeshift snapshot automatically created each time you update your system; moreover, a new grub entry is automatically generated to boot into a specific snapshot.

The video mentioned above is handy, but unfortunately, some recent changes in Timeshift itself broke the behavior of the two other packages. In this post, I’ll try to show how to fix the problem and go back to a working behavior. I’ll also show an experiment using the snapshots so that, hopefully, it’s clear what’s going on in the presence of such snapshots and how to use them in case you want to revert your system.

First of all, let’s install timeshift and timeshift-autosnap (the latter depends on the former, and they are both available from AUR; I’m using the AUR helper yay here):
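    # both packages are available from the AUR
    yay -S timeshift timeshift-autosnap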

The first problem is that timeshift has recently changed the strategy for creating snapshots. Instead of creating them in /run/timeshift/backup/timeshift-btrfs/snapshots, it now creates them in /run/timeshift/<PID>/backup/timeshift-btrfs/snapshots, where <PID> is the PID of the Timeshift process. Each time you run Timeshift, the directory will be different, breaking grub-btrfs (which expects to find the snapshots always in the same directory).

Fortunately, there’s a workaround: we add an entry to /etc/fstab in order to mount explicitly the path /run/timeshift/backup/timeshift-btrfs/snapshots:
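A sketch of the kind of entry (the usual workaround mounts the top-level BTRFS volume, subvolid=5, at /run/timeshift/backup, which makes the snapshots path above available; adapt it to your own disk):

    UUID=<UUID>  /run/timeshift/backup  btrfs  defaults,subvolid=5  0  0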

where, of course, <UUID> has to be replaced with the same UUID of the physical disk partition.

Reboot, and Timeshift will then also put the snapshots in that directory (besides the PID-based one mentioned above). You can create a snapshot to verify that (this also lets us go through the Timeshift wizard and specify that we want BTRFS snapshots).

Let’s make sure the mount point is active (and note the unit name)
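For example (the systemd mount unit name is derived from the mount path):

    findmnt /run/timeshift/backup
    systemctl status run-timeshift-backup.mount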

Let’s now install grub-btrfs
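    # available in the official Arch repositories (an AUR helper works too)
    sudo pacman -S grub-btrfs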

We need to configure it to monitor the Timeshift snapshot directory instead of the default one (/.snapshots).

The file contents
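(a sketch of the relevant part; at the time, grub-btrfs shipped a systemd path unit, grub-btrfs.path, watching the default snapshot directory, editable for instance with sudo systemctl edit --full grub-btrfs.path)

    [Path]
    PathModified=/.snapshots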

should be replaced with
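(again as a sketch, pointing to the Timeshift snapshot directory instead)

    [Path]
    PathModified=/run/timeshift/backup/timeshift-btrfs/snapshots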

Let’s reload and re-enable the monitoring service:
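    # assuming the grub-btrfs.path unit shown above
    sudo systemctl daemon-reload
    sudo systemctl enable --now grub-btrfs.path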

If we have already created a few snapshots, we can run update-grub (or, if you have not installed the package update-grub, use the command “grub-mkconfig -o /boot/grub/grub.cfg”) and verify that new grub entries are created for the found snapshots:

We can also restart the system and prove that we can access the GRUB submenu with the generated entries for the snapshots.

IMPORTANT: If you have several Linux distributions on your computer and you use a multiboot system like the one I blogged about, and this distribution is not the main one, you will have to manually tweak the entry in your main distribution’s GRUB menu. See the linked blog post near the end.

Some experiments

Let’s do some experiments with this configuration.

Here’s the kernel I’m currently running:
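    uname -r
    # reports a 5.18.7 kernel at this point (before the update)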

I’m updating the system (I’m skipping some output below, and you can ignore the “stale mount” errors):
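    # the usual update command on Arch-based distros (output omitted here)
    sudo pacman -Syu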

So it created a snapshot before updating the system (in particular, it installed a new kernel version). Let’s reboot and verify we are running the new kernel (5.18.8 instead of 5.18.7):

Let’s reboot and select from GRUB the latest snapshot (remember, the one taken before applying the upgrade), i.e., timeshift-btrfs/snapshots/2022-07-02_15-35-53 (snapshots are presented in the grub submenu from the most recent to the oldest). We do that pretending that the update broke the system (it didn’t) and that we want to get back to the working system we had before the update we have just performed.

You see that the “Authentication Required” dialog greets us, and in the background, you can see the notification that we “booted into Timeshift Snapshot, please restore the snapshot”:

The password is required because it’s trying to run Timeshift:

In the screenshot, you can see that we are now using the older kernel since we booted in that snapshot where the update has not yet been performed. We have to restore the snapshot manually; otherwise, on the next boot, we’ll get back to the updated system version and not in the snapshot anymore.

So, let’s restore the snapshot:

You see, Timeshift has just created another snapshot ([LIVE]). We now reboot normally (that is, using the main grub entry, NOT the snapshot entries).

Once rebooted normally, we can verify again that we are running the old kernel:

Let’s have a look at Timeshift, and we can see the last snapshot is an effective one, not a LIVE one:

Yes, we are now in a system where the update above has never been applied.

Let’s try to rerun the update command (we don’t effectively execute the update, it’s just an experiment):

Why? Because the snapshot was created automatically by timeshift-autosnap before applying the updates, while the package manager was running: its lock file is therefore part of the snapshot and is still there after the restore.

Let’s remove the lock and try to rerun the update:
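    sudo rm /var/lib/pacman/db.lck
    sudo pacman -Syu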

The output is similar to the one shown above (unless there are even more new updates in the meantime, which might happen in a rolling release), but something is missing:

Why? Because the downloaded packages in the cache are NOT part of the saved snapshot: they are still present in the current system, even though we restored the snapshot. Why are the cached packages still there, but the lock has been restored with the snapshot? That’s due to the way subvolumes are specified in /etc/fstab:
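A sketch of a typical layout (the subvolume names are the ones commonly created by Arch/EndeavourOS BTRFS installs; yours may differ):

    # only the root subvolume (@) is snapshotted by Timeshift
    UUID=<UUID>  /           btrfs  subvol=/@,defaults,noatime,compress=zstd       0 0
    UUID=<UUID>  /home       btrfs  subvol=/@home,defaults,noatime,compress=zstd   0 0
    UUID=<UUID>  /var/cache  btrfs  subvol=/@cache,defaults,noatime,compress=zstd  0 0
    UUID=<UUID>  /var/log    btrfs  subvol=/@log,defaults,noatime,compress=zstd    0 0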

You see, the cache of downloaded packages and the logs are NOT part of the snapshots, while /var/lib (including the pacman lock) is part of the snapshots.

Let’s now revert the snapshot: we select the one with “Before restoring…”.

Again, we are now in a LIVE situation, and Timeshift tells us again to reboot to make it effective.

Let’s reboot (normally, by using the main grub entry).

We’re back to the updated system, and there’s nothing to update (again, unless new updates have been made available in the meantime):

If we’re happy with the updated system, we can also remove those two snapshots (remember that grub-btrfs monitors the snapshots so that it will update its grub submenu entries):

I hope you find this blog post helpful, and I hope it complements the wonderful video of Stephen’s Tech Talks mentioned above.

Mirror Eclipse p2 repositories with Tycho

I had previously written about mirroring Eclipse p2 repositories (see this blog tag), but I’ll show how to do that with Tycho and one of its plugins in this post.

The goal is always the same: speed up my Maven/Tycho builds that depend on target platforms and insulate me from external servers.

The source code of this example can be found here: https://github.com/LorenzoBettini/tycho-mirror-example.

I will show how to create a mirror of a few features and bundles from a few p2 repositories so that I can then resolve a target definition file against the mirror. In the POM, I will also create a version of the target definition file modified to use the local mirror (using Ant). Moreover, I will also use a Tycho goal to validate such a modified target definition file against the local mirror. The overall procedure is also automated in the CI (GitHub Actions). This way, we are confident that we will create a mirror that can be used locally for our builds.

First of all, let’s see the target platform I want to use during my Maven/Tycho builds. The target platform definition file is taken from my project Edelta, based on Xtext.

As you see, it’s rather complex and relies on several p2 repositories. The last repository is the Orbit repository; although it does not list any installable units, it is still required to resolve dependencies of Epsilon (see the last but one location). We have to consider this when defining our mirroring strategy.

As usual, we define a few properties at the beginning of the POM for specifying the versions of the plugin and the parts of the p2 update site we will mirror from:

Let’s configure the Tycho plugin for mirroring (see the documentation of the plugin for all the details of the configuration):
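A reduced sketch of the kind of configuration (the plugin is tycho-p2-extras-plugin with its mirror goal; the phase binding, the ${tycho-version} property, and the repository URLs and installable unit ids shown here are only illustrative, the real POM lists the ones used by the target platform):

    <plugin>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>tycho-p2-extras-plugin</artifactId>
      <version>${tycho-version}</version>
      <executions>
        <execution>
          <id>mirror-target-platform</id>
          <phase>package</phase>
          <goals>
            <goal>mirror</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <source>
          <!-- one <repository> per p2 site used by the target platform -->
          <repository>
            <url>https://download.eclipse.org/releases/2022-06</url>
          </repository>
          <!-- ... the other repositories, including the Orbit one ... -->
        </source>
        <ius>
          <!-- the main installable units; their dependencies are mirrored too -->
          <iu>
            <id>org.eclipse.xtext.sdk.feature.group</id>
          </iu>
          <!-- ... -->
        </ius>
        <destination>${user.home}/eclipse-mirrors</destination>
        <latestVersionOnly>false</latestVersionOnly>
      </configuration>
    </plugin>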

The mirror will be generated in the user home subdirectory “eclipse-mirrors” (<destination> tag); we also define a few other mirroring options. Note that in this example, we cannot mirror only the latest versions of bundles (<latestVersionOnly>), as detailed in the comment in the POM. We also avoid mirroring the entire contents of the update sites (it would be too much). That’s why we specify single installable units. Remember that the dependencies of the listed installable units will also be mirrored, so it is enough to list the main ones. You might note differences between the installable units specified in the target platform definition and those listed in the plugin configuration. Indeed, the target platform file could also be simplified accordingly, but I just wanted to have slight differences to experiment with.

If you write the above configuration in a POM file (a <packaging>pom</packaging> will be enough), you can already build the mirror running:
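For example (assuming, as in the sketch above, that the mirror execution is bound to a standard lifecycle phase such as package):

    mvn package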

Remember that the mirroring process will take several minutes, depending on your Internet connection speed, since it will have to download about 500 MB of data.

You can verify that all the specified repositories are needed to create the mirror correctly. For example, try to remove this part from the POM:

Try to create the mirror, and you should see this warning message because some requirements of Epsilon bundles cannot be resolved:

Those requirements are found in the Orbit p2 repository, which we have just removed for testing purposes.

Unfortunately, I found no way to make the build fail in such cases, not least because it’s just a warning, not an error. I guess this is a limitation of the Eclipse mirroring mechanism. However, we will now see how to verify that the mirror contains all the needed software using another mechanism.

We create a modified version of our target definition file pointing to our local mirror. To do that, we create an Ant file (create_local_target.ant):
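A sketch of such a script (example.target and local.target are the file names mentioned below; the mirror directory is assumed to be ~/eclipse-mirrors as above, and Ant’s makeurl task produces a valid file: URL also on Windows):

    <project name="create_local_target" default="create-local-target">
      <target name="create-local-target">
        <!-- build a proper file: URL for the mirror directory (handles Windows paths too) -->
        <makeurl file="${user.home}/eclipse-mirrors" property="mirror.url"/>
        <copy file="example.target" tofile="local.target" overwrite="true"/>
        <!-- point every remote repository location to the local mirror -->
        <replaceregexp file="local.target"
                       match='location="https?://[^"]*"'
                       replace='location="${mirror.url}"'
                       flags="g"/>
      </target>
    </project>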

Note that this also handles path separators in Windows correctly. The idea is to replace lines of the shape <repository location=”https://…”/> with <repository location=”file:/…/eclipse-mirrors”/>. This file assumes the original target file is example.target, and the modified file is generated into local.target.

Let’s call this Ant script from the POM:
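A sketch using the maven-antrun-plugin (the version and the phase binding are my assumptions):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>3.1.0</version>
      <executions>
        <execution>
          <id>create-local-target</id>
          <phase>package</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <!-- delegate to the Ant script shown above -->
              <ant antfile="create_local_target.ant"/>
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>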

Finally, let’s use Tycho to validate the local.target file (see the documentation of the goal):
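A reduced sketch of the corresponding plugin section (the goal is validate-target-platform from the Tycho Extras target-platform-validation-plugin; the exact configuration parameters shown here are my assumption):

    <plugin>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>target-platform-validation-plugin</artifactId>
      <version>${tycho-version}</version>
      <configuration>
        <!-- validate the generated local.target against the mirror -->
        <targetFiles>
          <param>local.target</param>
        </targetFiles>
      </configuration>
    </plugin>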

Now, if we run:
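For instance (again assuming the executions above are bound to the package phase):

    mvn package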

we build the mirror, and we create the local.target file.

Then, we can run the above goal explicitly to verify everything:
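For example, with a fully-qualified invocation (the version is a placeholder; a shorter plugin prefix may also work if the plugin is declared in the POM):

    mvn org.eclipse.tycho.extras:target-platform-validation-plugin:<version>:validate-target-platform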

If this goal also succeeds, we managed to create a local mirror that we can use in our local builds. Of course, in the parent POM of your project, you must configure the build so that you can switch to local.target instead of using your standard .target file. (You might want to look at the parent POM of my Edelta project to take some inspiration.)

Since we should not trust a test that we never saw failing (see also my TDD book 🙂), let’s verify this with the incomplete mirror we learned to create above by removing the Orbit URL. We should see that our local target platform cannot be validated:

Alternatively, let’s try to build our mirror with <latestVersionOnly>true</latestVersionOnly>, and during the validation of the target platform, we get:

In fact, we mirror only the latest version of org.antlr.runtime (4.7.2.v20200218-0804), which does not satisfy that requirement. That’s why we must use <latestVersionOnly>false</latestVersionOnly> in this example.

For completeness, this is the full POM:

And this is the YAML file to build and verify in GitHub Actions:

I hope you found this post valuable, and happy mirroring! 🙂

Multibooting with GRUB

4th July, updated with BTRFS installations.

Besides Windows (which I rarely use) on my computers, I have a few Linux distributions. Grub 2 does a good job in booting Windows and Linux, especially thanks to os-prober, in autodetecting other operating systems in other partitions of the same computer. However, there are a few “buts” in this strategy:

  1. Typically, the last installed Linux distribution, say L1, installs its own grub as the main one, and when you upgrade the kernel in another Linux distribution, say L2, you have to boot into L1 and “update-grub” so that the main grub configuration learns about the new kernel of L2. Only then can you boot the new kernel of L2. Of course, you can change the main grub by reordering the EFI entries, e.g., by using the computer’s BIOS, but again that’s far from optimal.
  2. Not all Linux distributions’ grub configurations can boot other Linux distributions. For example, Arch-based distros like EndeavourOS and Manjaro can boot Ubuntu-based distros, but not the other way round (unless you fix a few things in the grub configuration of Ubuntu)! Recently, I also started to use Fedora. I found out that os-prober in Ubuntu and EndeavourOS does not correctly detect the configuration needed to boot Fedora: recently, Fedora switched to “blscfg” (https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault), and as a result, Ubuntu and EndeavourOS create grub configurations that do not consider the changes you made in Fedora’s /etc/default/grub.

That’s why I started to experiment with grub configurations. I still have a “main grub” in a Linux installation, which simply “delegates” to the grub configurations of the other Linux installations. This way, I solve both of the problems above!

In this blog post, I’ll show how I did that. Note that this assumes you use EFI boot.

I have Windows 10, Kubuntu, EndeavourOS, and Fedora on the same computer in this example. I will configure the grub installation of Fedora so that it delegates to Windows, Kubuntu, and EndeavourOS, without relying on os-prober.

This is the disk layout of my computer so that you understand the numbers in the grub configuration that I’ll show later (I omit other partitions like Windows recovery).

The key point is modifying the file /etc/grub.d/40_custom. I guess you already know that you should not modify grub.cfg directly, because a system update or a grub update (e.g., “update-grub”) will overwrite that file.

The file /etc/grub.d/40_custom already comes with some contents that must be left as they are: you add your lines after the existing ones. For example, in Fedora, you have:
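The stock contents are just the standard header (do not remove these lines):

    #!/bin/sh
    exec tail -n +3 $0
    # This file provides an easy way to add custom menu entries.  Simply type the
    # menu entries you want to add after this comment.  Be careful not to change
    # the 'exec tail' line above.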

We will use the option configfile of grub configuration (see https://www.gnu.org/software/grub/manual/grub/grub.html#configfile): “Load file as a configuration file. If file defines any menu entries, then show a menu containing them immediately.” The Arch wiki also explains it well:

If the other distribution has already a valid /boot folder with installed GRUB, grub.cfg, kernel and initramfs, GRUB can be instructed to load these other grub.cfg files on-the-fly during boot.

The idea is to put in /etc/grub.d/40_custom an entry for each Linux distribution, pointing to the grub.cfg of that distribution, after setting the root partition. Thus, the path to the grub.cfg must be interpreted as an absolute path in that partition. If you look at the partition numbers above, these are the two entries for booting EndeavourOS and Kubuntu:
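A sketch of such entries (the partition numbers are placeholders; use the ones from your own disk layout):

    menuentry "EndeavourOS" {
        rmmod tpm
        set root=(hd0,gpt6)
        configfile /boot/grub/grub.cfg
    }

    menuentry "Kubuntu" {
        rmmod tpm
        set root=(hd0,gpt5)
        configfile /boot/grub/grub.cfg
    }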

NOTE: the “rmmod tpm” is required to avoid TPM errors when booting those systems (“Unknown TPM error”, “you need to load the kernel first”). It happened on my Dell XPS 13, for example. Adding that line (i.e., not loading the module “tpm”) solved the problem.

Remember that the path assumes that the /boot directory is not mounted on a separate partition. If instead, that’s the case, you probably have to remove “/boot”, but I haven’t tried that.

Concerning the entry for Windows, here it is:
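Again as a sketch (the partition number is a placeholder for the EFI partition):

    menuentry "Windows" {
        set root=(hd0,gpt1)
        chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    }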

In this entry, the root must correspond to the EFI partition, NOT to the partition of Windows.

Save the file and regenerate the grub configuration. In other Linux distributions, it would be a matter of running “update-grub,” but in Fedora, it is:
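    # Fedora keeps its grub.cfg in /boot/grub2
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg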

Now reboot, and you should see the grub menu of Fedora and then, at the bottom, the entries for EndeavourOS, Kubuntu, and Windows. Choosing “EndeavourOS” or “Kubuntu” will NOT boot directly in these systems: it will show the grub menu of “EndeavourOS” or “Kubuntu.”

If you upgrade the kernel on one of these two systems, their grub configuration will be correctly updated. There’s no need to boot into Fedora to update its grub configuration 🙂

If you want to configure the grub in another Linux distribution, please remember that Fedora stores the grub.cfg in /boot/grub2 instead of /boot/grub, so you should write the entry for Fedora with the right path. However, if you plan to boot Fedora with this mechanism, you should disable “blscfg” in the Fedora grub configuration, or you will not be able to boot Fedora (errors “increment.mod” and “blscfg.mod” not found).

Now that we verified that it works, we can remove the entries generated by os-prober. In /etc/default/grub add the line:
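    GRUB_DISABLE_OS_PROBER=true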

and regenerate the grub configuration.

If you want grub to remember the last choice, you can look at this post.

On a side note, due to the way Fedora uses grub (https://fedoraproject.org/wiki/Changes/HiddenGrubMenu), without os-prober, you will not see the grub menu unless you press ESC. After the timeout, it will simply boot on the default entry. To avoid that and see the grub menu, just run:
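    sudo grub2-editenv - unset menu_auto_hide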

And the grub menu will get back as usual.

Then, you can also remove os-prober in the other Linux installations, since that is useless now.

These were the original grub menus of Fedora and EndeavourOS, before applying the modifications described in this post:

Pretty crowded!

And this is the result after the procedure described in this post (note that from the Fedora grub menu you select EndeavourOS to land in its grub menu and Kubuntu to land in its grub menu):

Much better! 🙂

If you need to boot an installation on a BTRFS filesystem (which also includes the /boot directory and the grub.cfg), things are slightly more complex. In fact, BTRFS installations are typically based on subvolumes, and the root subvolume is typically named “@”. This must be taken into consideration when creating the menu entry.

For example, I’ve also installed Arch on my computer using BTRFS, and the root subvolume is denoted by “@”. The menu entry is as follows:
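For example (the partition number is a placeholder):

    menuentry "Arch" {
        rmmod tpm
        set root=(hd0,gpt7)
        configfile /@/boot/grub/grub.cfg
    }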

Note the presence of “/@” in the configfile specification.

That’s not all. If the GRUB configuration specified in configfile has submenus, for example automatically generated by grub-btrfs, you must also define the prefix variable appropriately (in fact, grub-btrfs generates entries relying on such a variable):
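A sketch of the extended entry:

    menuentry "Arch" {
        rmmod tpm
        set root=(hd0,gpt7)
        # the grub-btrfs submenu entries rely on $prefix
        set prefix=(hd0,gpt7)/@/boot/grub
        configfile /@/boot/grub/grub.cfg
    }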

 

Getting started with KVM and Virtual Machine Manager

After playing with VirtualBox (see my posts), I’ve decided to try also KVM (based on QEMU) and Virtual Machine Manager (virt-manager).

The installation is straightforward.

In Ubuntu systems:
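A commonly used package set (a sketch; the package names are the ones in the Ubuntu repositories, adapt as needed):

    sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager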

In Arch-based systems:
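A typical package set here as well (again a sketch):

    sudo pacman -S virt-manager qemu libvirt edk2-ovmf dnsmasq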

Then, you need to add your user to the corresponding group:
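    # the group is typically called libvirt
    sudo usermod -aG libvirt $USER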

Reboot, and you’re good to go.

In this post, I’m going to install Fedora 35 on a virtual machine through Virtual Machine Manager (based on KVM and QEMU).

So, first, download the ISO of this distribution if you want to follow along.

Let’s start Virtual Machine Manager (virt-manager):

Press the “+” button to create a new virtual machine and select the first entry, since we have downloaded an ISO.

Here, we select the ISO and let the manager detect the OS it contains. Otherwise, we can choose the OS manually (the manager might not detect the OS correctly in some cases: it happened to me with ArcoLinux, for example).

Then, we allocate some resources. Since I have 16GB and a quad-core, I give the virtual machine 8GB and two cores.

Then, we allocate storage for the machine. Alternatively, we can select or create a custom image file in another location. By default, the image will NOT physically occupy the whole space on your disk. Thus, I will not lose 60GB (unless I actually use that much space in the virtual machine). The file will appear to be of the specified size on your drive, but if you check the free disk space on your drive, you will note that you haven’t lost that many gigabytes (more on that in the next steps).

In the last step, we can give a custom name to our machine and customize a few settings before starting the installation by selecting the appropriate checkbox (we also make sure that the network is configured correctly).

If we selected “Customize configuration before install,” by pressing “Finish,” we get to the settings of our virtual machine.

In this example, I’m going to change the chipset and specify a UEFI firmware:

We can also get other information, like the path of the disk image:

And we can click “Begin Installation.” After the boot menu, we’ll get to the live environment of the distribution ISO we chose:

You can also specify to resize the display of the VM automatically if you resize the window, and when to do that. (WARNING: this will work correctly only after installing the OS in the virtual machine, since this feature requires some software in the guest operating system. Typically, such software (spice-vdagent) is automatically installed in the guest during the OS installation, from what I’ve seen in my experiments.)

And we can start the installation of the distribution (or try it live before the actual installation), as usual. Of course, the whole installation process will be a bit slower than on real hardware.

I’ll choose the “Automatic” choice for disk partitioning since the disk image will be allocated only to this machine, so I will not bother customizing that.

While installing, you might want to check the disk image size and the effective space on the disk:
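For example (the image path is the one shown in the machine details; the file name here is just illustrative):

    # apparent size of the image file
    ls -lh /var/lib/libvirt/images/fedora35.qcow2
    # space actually allocated on the host disk
    du -h /var/lib/libvirt/images/fedora35.qcow2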

After a few minutes, the installation should be complete, and we can reboot our virtual machine.

And upon reboot, we’ll get to our new installed OS on the virtual machine:

In the primary Virtual Machine Manager window, you can see your virtual machines, and, if they are running, a few statistics:

In the virtual machine window’s “View” menu, you can switch between the “Console” view (that is, the virtual machine installed and running OS) and the “Details” view, where you can see its settings, and change a few of them.

Note that now the automatic resize of the machine display and the window works: in the screenshot I resized the window (made it bigger) and the display of the machine resized accordingly.

When you later restart a virtual machine from the manager, you might have to double-click on the virtual machine element and possibly switch to the “Console” view.

After installing the OS, you might want to check the image file and the actual disk usage again. You will find that while the image file size did not change, the disk usage has:

What I’ve shown in this blog post was one of my first experiments with KVM and the Virtual Machine Manager. To be honest, I still prefer VirtualBox, but maybe that’s only because I’m more used to VirtualBox, while I’ve just started using virt-manager.

That’s all for now! Stay tuned for further posts on KVM and virt-manager, and happy virtualization! 🙂

Limiting Battery Charge on LG Gram in Linux

I’ve been using this laptop for some months now (see my other posts). In Windows, you can easily set the battery charge limit to 80% using the LG Gram control center. In Linux, I did not find any specific configuration in any system settings in any DE (not even in KDE Plasma, where, for some laptops, there’s support for setting the battery charge limit).

However, since kernel 5.15, you can do it yourself, thanks to some specific LG Gram kernel features, https://www.kernel.org/doc/html/latest/admin-guide/laptops/lg-laptop.html:

Writing 80/100 to /sys/devices/platform/lg-laptop/battery_care_limit sets the maximum capacity to charge the battery. Limiting the charge reduces battery capacity loss over time.

This value is reset to 100 when the kernel boots.

So you need to write ‘80’ in that file. I do it like this:
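    echo 80 | sudo tee /sys/devices/platform/lg-laptop/battery_care_limit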

After that, you can see that when the charge reaches 80%, the laptop will not charge the battery anymore. Depending on the DE, you will either see the charging notice disappear or the charge stuck at 80%. The DE might even tell you that it still needs some time until fully charged, but you can ignore that. That notice will stay like that, as shown in these two screenshots (KDE Plasma), taken at different times:

Note that in the quotation shown above, you also read

This value is reset to 100 when the kernel boots.

If you reboot, the value in that file will go back to ‘100’, and charging will continue past the limit. Note that this also holds if you hibernate (suspend to disk) the laptop: when you restart it from hibernation, you boot it anyway, so the value in the file is reset. However, if you put the laptop to sleep, the value in the file will not change.

Above I said that you need kernel 5.15. I think the feature described above was introduced even before, but in kernel 5.13, that does not seem to work: no matter what you write in that file, the change will not be persisted. In my experience, this only works starting from kernel 5.15.

With kernel 5.15, it works for me in EndeavourOS, Manjaro, and Kubuntu.

How to install Linux on a USB drive with UEFI support using VirtualBox

That’s the third post on installing Linux on a USB drive!

Remember that the idea is to have a USB drive that will work as a portable Linux operating system on any computer.

In the first post, How to install Linux on a USB drive using Virtualbox, the USB drive with Linux installed could be used when booting from a computer with “Legacy boot” enabled: it could not boot if UEFI was the only choice on that computer.

In the second post, How to install Linux on a USB with UEFI support, I showed how to install Linux on the USB drive directly, without using VirtualBox, while creating a UEFI bootable device. However, you had to be careful during the installation to avoid overwriting the UEFI boot loader of your computer.

In this post, I’ll show how to install Linux on a USB drive, with UEFI support, using VirtualBox. In the end, we’ll get a UEFI bootable device, but without being scared of breaking the UEFI boot loader of your computer, since we’ll do that using a virtual machine.

The scenario

First of all, let’s summarize what I want to do. I want to install Linux on a portable external USB SSD. I don’t want a live distribution: a live distribution only gives you a limited testing experience, it’s not easily maintainable or upgradable, and it’s harder to keep your data in there. On the contrary, installing Linux on a USB drive will give you the whole experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer; that’s undoubtedly the case for an external SSD, which is pretty cheap nowadays).

In the previous post, I described how to create such an installation from VirtualBox. As I said, you can boot the USB drive only in Legacy mode. This time, we’ll be able to boot the USB drive in any UEFI computer.

I’m going to perform this experiment:

  • I’m going to use VirtualBox installed on a Dell XPS 13 where I already have (in multi-boot, UEFI), Windows, Ubuntu, Kubuntu, and Manjaro GNOME
  • I’m going to install Ubuntu 21.10 into an external USB SanDisk SSD (256 GB)
  • then I’m going to install EndeavourOS (an excellent distribution I’ve just started to enjoy) on the same external USB drive, alongside the already installed Ubuntu

I have already downloaded the two distributions’ ISOs.

I’ve installed VirtualBox in Ubuntu following this procedure
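As a quick sketch (the linked procedure may differ), installing it from Ubuntu’s own repositories would be:

    sudo apt install virtualbox virtualbox-ext-pack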

and then reboot.

By the way, since the second distribution will take precedence over an existing UEFI configuration on the USB, it’s better to start with Ubuntu and then proceed with EndeavourOS (Arch-based). While an Arch GRUB configuration has no problem booting other distributions, Ubuntu cannot boot an Arch-based distribution. Of course, the second distribution’s GRUB menu will also let you boot the first one. We could solve the booting problem later, but I prefer to keep things easy and install them in the above order.

In the screenshots of the running virtual machine, the USB SanDisk is /dev/sda.

I will boot a virtual machine where I set the ISO of the current distribution as a LIVE CD. I’m going to use a different virtual machine for each distribution. Maybe that’s not strictly required, but since the two OSes are different (the first one is an Ubuntu OS, while the second one is an Arch Linux), I prefer to keep the two virtual machines separate, just in case.

Create the first virtual machine and install Ubuntu on the USB drive

I’m assuming you’re already familiar with VirtualBox, so I’ll post the main screenshots of the procedure.

Let’s create a virtual machine.

We don’t need a hard disk in the virtual machine since we’ll use it only for installing Linux on a USB drive, so we’ll ignore the warning.


Now it’s time to configure a few settings.

The important setting is “Enable EFI” to make our virtual machine aware of UEFI, and the booted Live OS will also be aware of it. As we will see later, the booted Live OS will correctly install GRUB in a UEFI partition.

We also specify to insert the ISO of the distribution (Ubuntu 21.10) so that when the virtual machine starts, it will boot the Live ISO.

Let’s start the virtual machine, and we will see the boot menu of the Live ISO.

We choose to Try Ubuntu, and then we plug the external SanDisk in the computer, and we make the virtual machine aware of that by using the context menu of the USB connection icon and selecting the item corresponding to the USB hard disk (in your case it will be different)

After that, the Ubuntu Live OS should notify about the connected disk. We can start the installation, and when it comes to the disk selection and partition, I chose to erase the entire disk and install Ubuntu:

Of course, you can choose to partition the hard disk manually, but then you’ll have to remember to create a GPT partition table, and you’ll also have to create the FAT32 partition for UEFI manually. By using “Erase disk and install Ubuntu,” I’ll let the installer do all this work.

You can see the summary before actually performing the partition creation. Note that we are doing such operations on the external USB drive, which, as I said above, corresponds to /dev/sda.

Now, we have to wait for the installation to finish. In the end, instead of restarting the virtual machine, we shut it down.

Let’s restart the computer with the USB drive connected. Depending on the computer setup, you’ll have to press some function key (e.g., F2 or, in my Dell XPS 13, F12, to choose to boot from a different device). Here’s the menu in my Dell XPS 13, where we can see that the external USB (SanDisk) is detected as a UEFI bootable device. It’s also detected as a Legacy boot device, but we’re interested in the UEFI one:

We can then verify that we can boot the Ubuntu distribution installed in the USB drive.

By the way, I also verified that, without the USB drive connected, I can always boot my computer: indeed, the existing UEFI Grub configuration is intact (remember, I have Windows, Ubuntu, Kubuntu, and Manjaro GNOME; the grub menu with higher priority is the one of Manjaro):

Create the second virtual machine and install EndeavourOS on the USB drive

Let’s create the second virtual machine to install on the same USB drive EndeavourOS, along with the Ubuntu we have just installed.

To speed up things, instead of creating a brand new machine, we clone the previous one, and we change a few settings (basically the name, the version of Linux, which now is Arch, and finally we change the Live ISO):

Let’s start the virtual machine and land into the EndeavourOS Live system

As before, we have to connect the USB drive to the computer and let the virtual machine detect that (see the procedure already shown in the first installation section).

We start the installer and choose the “Online” version so that we can choose what to install next (including several Desktop Environments). The installer is Calamares (if you used Manjaro before, you already know this installer).

When it comes to the partitioning part, we make sure we select the SanDisk external drive (as usual, /dev/sda). Note that the installer detects the existing Ubuntu installation. This time, we choose to install EndeavourOS alongside:

And we use the slider to specify how much space the new installation should take:

Let’s select a few packages to install (a cool feature of EndeavourOS)

And this is the summary before starting the installation:

Once the installation has finished, we shut down the virtual machine and reboot the computer with the USB drive inserted. This time we see the EndeavourOS grub configuration, including the previously installed Ubuntu. Remember, these are the installations in the USB drive (as usual, note the /dev/sda representing the USB drive):

And now we have a USB drive with two Linux distributions installed that we can use to boot our computers! However, some drivers for some specific computer configurations might not be installed in the Linux installation of the external USB. Also, other configurations like screen resolutions and scaling might depend on the computer you’re booting and might have to be adjusted each time you test the external USB drive in a different computer.

I hope you enjoyed the tutorial!

Happy installations and Happy New Year! 🙂

Playing with KDE Plasma Themes

I want to share some of my experiences with KDE Plasma Themes in this post.

These themes are pretty powerful, but, as often happens with KDE and its configuration capabilities, it might not be immediately clear how to benefit from all the power of KDE and of its themes.

I’m assuming that you already enabled the KWin Blur effect (in “Desktop Effects”), which is usually the case by default. Please remember that desktop effects, like “blur,” applied to menus, windows, etc., will use more CPU. This CPU usage might increase battery usage (but, at least from my findings, it’s not that much).

First, installing a theme using “Get New Global Themes…” is not ideal. In my experiments, the installation often makes the System Settings crash, and the artifacts of the theme might be out of date with respect to the current Plasma version. Moreover, the installation usually does not install other required artifacts, like icons and, most of all, the Kvantum theme corresponding to the Plasma theme. In particular, the themes that I use in this post all come with a corresponding Kvantum theme. Using such an additional theme configuration is crucial to fully enjoying that Plasma theme.

Thus, in this post, I’ll always install themes and icons from source.

I mentioned Kvantum, which you have to install first. In recent Ubuntu distributions:
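    # package names in recent Ubuntu releases
    sudo apt install qt5-style-kvantum qt5-style-kvantum-themes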

In other distributions, the package(s) names might be different.

Quoting from Kvantum site:

Kvantum […] is an SVG-based theme engine for Qt, tuned to KDE and LXQt, with an emphasis on elegance, usability and practicality. Kvantum has a default dark theme, which is inspired by the default theme of Enlightenment. Creation of realistic themes like that for KDE was my first reason to make Kvantum but it goes far beyond its default theme: you could make themes with very different looks and feels for it, whether they be photorealistic or cartoonish, 3D or flat, embellished or minimalistic, or something in between, and Kvantum will let you control almost every aspect of Qt widgets. Kvantum also comes with many other themes that are installed as root and can be selected and activated by using Kvantum Manager.

As described in https://github.com/tsujan/Kvantum/blob/master/Kvantum/INSTALL.md,

The contents of theme folders (if valid) can also be installed manually in the user’s home. The possible installation paths are ~/.config/Kvantum/$THEME_NAME/, ~/.themes/$THEME_NAME/Kvantum/ and ~/.local/share/themes/$THEME_NAME/Kvantum/, each one of which takes priority over the next one, i.e. if a theme is installed in more than one path, only the instance with the highest priority will be used by Kvantum.

On the contrary, the KDE theme artifacts are searched for in ~/.local/share. Since some of the themes we will install from source do not provide an installation script, we will have to copy the artifacts manually. In the meantime, you might want to create the Kvantum config directory (though the installation commands we will see in this post will take care of that anyway):
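    mkdir -p ~/.config/Kvantum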

After Kvantum is installed, going to “Appearance” -> “Application Style,” you’ll see the kvantum style that you can select as an application style (we won’t do that right now). Once that’s set, the application style will be configured through the Kvantum Manager, which we’ll see in a minute.

So let’s start installing and playing with a few themes. As I anticipated initially, we’ll install the themes from sources. You’ll need git to do that. If not already installed, you should do that right now.

Nordic KDE

https://github.com/EliverLara/Nordic

This theme does not come with an installation script, so I’ll show all the commands to clone its source repository and manually copy its contents to the correct directories (see the note above concerning directories for Plasma and Kvantum themes):
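A sketch of the kind of commands involved (the destination directories are the standard Plasma/Kvantum user paths; the exact source folders to copy depend on the current layout of the repository, so check it before copying):

    git clone https://github.com/EliverLara/Nordic.git
    mkdir -p ~/.local/share/plasma/look-and-feel \
             ~/.local/share/plasma/desktoptheme \
             ~/.local/share/aurorae/themes \
             ~/.local/share/color-schemes \
             ~/.config/Kvantum
    # then copy, from the cloned repository, the global theme (look-and-feel),
    # the Plasma desktop theme, the Aurorae window decorations, the color schemes
    # and the Kvantum theme into the corresponding directories above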

Now, we go to “Appearance” -> “Global Theme,” and we find two new entries for the Nordic theme we’ve just installed:

Select one of the Nordic global themes (I chose “Nordic”) and press “Apply.”

Here’s the result (this is not yet the final intended look of the theme):

We can see that the menus are nicely blurred (of course, if you like the blur effect 🙂)

Go to “Application Style,” and you’ll see that “kvantum” is selected (that happened automatically when selecting the “Nordic” global theme):

However, we still need to apply the Nordic Kvantum theme.

Launch Kvantum Manager, select one of the Nordic themes (Kvantum finds the Nordic Kvantum theme because we installed them in the correct position in the home folder), and press “Use this theme”:

and now everything looks consistent with the Nordic theme (the menu is still blurred). Keep in mind that applications have to be restarted to see the new theme applied to them:

Let’s make sure that applications like Dolphin are blurred themselves: go to the tab “Configure Active Theme” -> “Hacks” and make sure “Transparent Dolphin View” is selected. IMPORTANT: if you use fractional scaling in Plasma (e.g., I use 150% or 175%), you must ensure that “Disable translucency with non-integer scaling” is NOT selected.

Scroll down and press “Save”; remember to restart the applications. Now enjoy the nice translucent blurred effect in many applications (including the Kvantum manager itself); I changed the wallpaper to something lighter to appreciate the transparency better:

Of course, you can change a few Kvantum Nordic theme configuration parameters, including the opacity and other things.

You might also have to log out and log in to the Plasma session to see the theming applied to everything.

This theme also installs a Konsole color scheme, so you can create a new Konsole profile using such a color scheme: here’s the excellent result (this color scheme comes with blurred background by default):

In this example, I’m still using the standard Breeze icon theme, but of course, you might want to select a different icon theme.

Layan

https://github.com/vinceliuice/Layan-kde

Layan is one of my favorite themes (and one of the most appreciated in general). It’s based on the Tela icon theme (also very beautiful), https://github.com/vinceliuice/Tela-icon-theme, so we’ll have to install the icon theme first. Both come with an installation (and an uninstallation) script, so everything will be much easier! We’ll have to clone their repositories and then run the installation scripts.

These are the command lines to run to set them both up (note the -a in the Tela installation command: this will install all the color variants; if you only want to install the default variant or just a subset, please have a look at the project site):
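Something along these lines (both repositories ship an install.sh in their top-level directory):

    git clone https://github.com/vinceliuice/Tela-icon-theme.git
    cd Tela-icon-theme && ./install.sh -a && cd ..

    git clone https://github.com/vinceliuice/Layan-kde.git
    cd Layan-kde && ./install.sh && cd ..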

Then, the procedure to apply the Global Theme and the corresponding Kvantum theme is the same as before. Once you have selected the Global Theme and the Kvantum theme for Layan, you should get something like this:

Look at all the beautiful transparency and blur effects on Dolphin, on the title bars, and in some parts of Kate and the System Settings, not to mention the blur on menus.

WhiteSur

https://github.com/vinceliuice/WhiteSur-kde

This theme is for macOS look and feel fans. I’m not one of those but let’s try that as well 🙂

For this theme as well, we’ll install the recommended icons (we can rely on their installation scripts also in this case):
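Again a sketch (both repositories ship an install.sh):

    git clone https://github.com/vinceliuice/WhiteSur-icon-theme.git
    cd WhiteSur-icon-theme && ./install.sh && cd ..

    git clone https://github.com/vinceliuice/WhiteSur-kde.git
    cd WhiteSur-kde && ./install.sh && cd ..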

Once the corresponding Global Theme is selected, if you are using fractional scaling like me (as I also said before), you’ll get a nasty surprise: huge borders, as shown in the screenshot (independently of the selected Kvantum theme):

However, this theme comes with a few “Window Decorations” variants suitable for fractional scaling (ignore the previews shown in the selection, which do not look good: that does not matter for the final result). There’s no variant for my current 175% scaling; however, by selecting the Window Decoration for 1.5, the borders get better, though they are still a bit too thick:

If you have a Window scaling factor for which there’s a specific Window Decoration of this theme, then everything looks fine. For example, with 150% scaling and by selecting the corresponding Window Decoration, the window borders look fine:

The rest of the screenshots are based on 175% scaling and the x1.5 variant. As I said, it does not look perfect, but that’s acceptable 😉

Note that if we apply this Global Theme, the installed WhiteSur icons are used as well.

After applying the Kvantum theme as before, changing the wallpaper to the one provided by this theme, and setting the Task Switcher to Large Icons, here’s the result:

Please keep in mind that if you end up with thick borders, the resizing point is not exactly on the edge, but slightly inside, as shown in this screenshot:

Edna

https://gitlab.com/jomada/edna.git

This does not come with an installation script either. We’ll have to copy all the artifacts manually in the correct folders (in the following commands, directories are created as well if not already there):
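A sketch of the kind of commands (again, the destination directories are the standard user paths; check the repository for the exact folders to copy):

    git clone https://gitlab.com/jomada/edna.git
    mkdir -p ~/.local/share/plasma/look-and-feel \
             ~/.local/share/aurorae/themes \
             ~/.local/share/color-schemes \
             ~/.config/Kvantum
    # then copy, from the cloned repository, the look-and-feel package,
    # the Edna Aurorae window decoration, the color schemes and the Kvantum theme
    # into the corresponding directories above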

This is another theme with the huge border problem when using fractional scaling:

Unfortunately, this one comes with a single version and no variant for fractional scaling. Thus, when using fractional scaling, we have to manually patch the theme: open the file ~/.local/share/aurorae/themes/Edna/Ednarc and change the lines

as follows (these values fit 150% scaling)

For 175% scaling, use the same values but this one

The windows already opened will still show the huge border, but new windows will show smaller borders. Feel free to play with such values until you get the border size you like most. In the following screenshot, I’m using this theme together with the Tela icons (the green variant) that we installed before, and I also created a new Konsole profile using the Konsole Edna color scheme (which comes with a transparent background):

This theme also provides a complex Latte dock layout, with several docks. We won’t cover this feature in this post, but you might want to experiment with it.

Conclusions

I hope you enjoyed this tutorial and that you’ll start playing with KDE themes as well 🙂

How to install Linux on a USB with UEFI support

I have already blogged about installing Linux on an external USB stick or drive (better if it’s an SSD) to make such an installation portable on any computer. In that old blog post, I was using VirtualBox to do the actual installation. I was relying on VirtualBox because, when I had tried to install Linux directly to an external USB drive after booting with another USB with a live image, I ended up breaking my current computer’s grub configuration: if I tried to boot my computer without the newly created USB installation, I couldn’t select any OS to boot. At that time, I did not investigate further, because the VirtualBox solution was working like a charm for me.

However, the USB with Linux installed through VirtualBox could be used only when booting from a computer with “Legacy boot” enabled; that is, it could not be booted if UEFI was the only choice on that computer. Even in that case, it wasn’t a problem for me: it was enough to enable Legacy Boot in the computer’s BIOS. Unfortunately, when I tried to boot such a Linux USB in my new LG GRAM 16, I realized that this LG GRAM provides no way to specify Legacy boot! Then, I found this interesting post from “It’s FOSS” that both explains the problem with UEFI I had previously experienced (that is, the fact that I couldn’t boot my computer at all because the GRUB configuration was broken) and shows a way to circumvent it. I suggest you go and read that post!

In this post, I’d like to summarize my experience applying the suggested workaround and also report that you might still get into trouble in some circumstances, but fixing things will be easy.

As suggested in https://itsfoss.com/intsall-ubuntu-on-usb/, read this tutorial entirely before you start experimenting with the procedure.

The scenario

First of all, let’s summarize what I want to do. I want to install Linux on a portable external USB SSD. I don’t want a live distribution: a live distribution only gives you a limited testing experience, it’s not easily maintainable or upgradable, and it’s harder to keep your data in there. On the contrary, installing Linux on a USB drive will give you the full experience (and if the USB drive is fast, it’s almost like using Linux on a standard computer; that’s surely the case for an external SSD, which is quite cheap nowadays).

In the previous post, I described how to create such an installation from VirtualBox. As I said, such a USB drive can only be booted in Legacy mode (I still have to investigate whether you can get a UEFI bootable USB drive by installing it through VirtualBox; there’s also a more recent post where I achieve the same goal, a UEFI bootable USB drive, by using VirtualBox).

Now I want to install Linux on a USB drive by performing a real installation: I’m going to boot my computer with a USB stick containing a Linux live distribution, then I’m going to attach a USB external SSD drive, where I’m going to perform the actual installation. Thus, I’m NOT going to install Linux on the computer itself, but on the external USB drive.

I’m going to perform this experiment:

  • I’m going to use a Dell XPS 13 where I already have (in multi-boot, UEFI), Windows, Ubuntu, Kubuntu, and Manjaro GNOME
  • I’m going to install Manjaro KDE (I have already created a LIVE USB stick) into an external USB SanDisk SSD

The USB stick with the Live distribution is a SanDisk as well (just to let you know, in case you see SanDisk in the screenshots; I’ll try to make it clear when I’m talking about the Live USB stick and the USB SSD).

I know that I’ve said that I cannot boot with Legacy boot from the LG GRAM, while I can from the Dell XPS, but to make the experiment more interesting I decided to install Linux on the external USB drive using the Dell so that I can then test it both from the Dell and from the new LG GRAM.

The problem

That’s already well explained in the blog post https://itsfoss.com/intsall-ubuntu-on-usb/. I’ll briefly summarize it here: a system can only have one active ESP partition at a time. Even though you choose the USB as the destination for the bootloader while installing Linux, the EFI file for the new distribution (remember, installed on an external drive) will be put in the existing ESP partition (belonging to the computer you’re using just for performing the installation in the external drive). Thus, the computer you used for installing Linux on the external drive will not boot if you don’t have the Linux external USB drive plugged in.

I might add that the fact that the Linux installer lets you use another device for the boot loader, while it will silently use an existing ESP partition might be seen as a bug. Indeed, I read about that in many places. However, it looks like all the Linux installers share this behavior, so we’ll have to live with that, and use a workaround.

The solution

The solution (workaround) for the problem above, as described in https://itsfoss.com/intsall-ubuntu-on-usb/ is simple and clever: you fool the installer by removing the ESP flag from the ESP partition (of the current computer’s SSD) before installing Linux on the external USB drive. Of course, it is crucial to put the ESP back after installation, before rebooting (as instructed at the end of the installation procedure). The removal and addition of the ESP flag can be done after booting into the live system with a partition manager program, which is usually part of the live installation media.

As I anticipated, you might still get into little trouble. I’ll talk about that at the end of the post. However, fear not, the trouble is not as bad as breaking the whole booting procedure of your computer 😉

My experience

So I booted with the Live USB stick with Manjaro KDE. Remember, the Live USB must have been created appropriately with UEFI support, or everything from now on will not work at all. Remember that I’m booting the Live USB from the Dell XPS, where the Legacy boot is enabled. So I have to make sure to boot the Live USB with UEFI, NOT with Legacy:

Since it’s a KDE live image, the partition manager available in the live system is KDE’s. I prefer GParted instead, so it’s just a matter of installing it in the live system, using the package manager. Since I’m using Manjaro in this example, I just run:
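    sudo pacman -S gparted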

For Ubuntu, it will be a command based on apt (if it’s not already there) and for Fedora, you’ll have to use dnf (again, unless it’s already installed).

Then, I launch GParted; in my system, you can see the complex configuration of the internal SSD of my computer (/dev/nvme0n1). You can see a few reserved partitions for recovering the Windows installation (the one that came with this computer), 3 ext4 partitions for the 3 Linux OSes mentioned above, 1 ext4 partition that is mounted in all the 3 installed Linuxes, the Windows installation and, the one we’re interested in, the first one, with the label ESP. You can see its flags, boot and esp. We have to remove those flags before starting the installation. Right-click on that partition, choose “Manage flags” and unselect one of the two flags, boot or esp: the other one will be automatically unselected and a new flag, msftdata, will be selected, but that’s not important.

Let’s close GParted, and let’s run the Manjaro installer. This part is not documented because I’m assuming you’re already familiar with the installation procedure of the Linux distribution you want to install in the external USB drive. The important part is when you have to partition the target drive. Of course, it is crucial to select the right one (the external USB drive where you want to install Linux), NOT the SSD of your computer or you’ll be in real trouble, as you can imagine 😉

In my example, I selected /dev/sdb, because /dev/sda is the Live USB stick (as seen above, the internal SSD is /dev/nvme0n1). Then it’s up to you to partition the target drive appropriately. In this example, I decided to let the Manjaro installer erase that entire disk, specifying to create a SWAP partition with hibernate. Depending on the installer, you might choose something else. I chose this strategy because, this way, the installer will also create the ESP partition for the boot manager on the target drive automatically, with the right flags. If you want to partition the drive manually, again, depending on the installer, you might have to create the ESP partition manually (you can see an example of such a manual partitioning in the mentioned article https://itsfoss.com/intsall-ubuntu-on-usb/). Remember that you still have a chance to review such changes before the actual modifications to the file system are performed.

When the installation finishes, you see the message to reboot into the newly installed system… DON’T DO THAT YET. Remember: you have to reset the esp and boot flags of the ESP partition of the internal drive of the computer: simply use GParted again and follow a procedure similar to the one performed at the beginning.

Since you’re still in GParted, you might also want to verify that the external drive where you’ve just installed Linux looks correct as well; for example, in my case:

Now, it’s finally time to reboot and see what happens…

Small Trouble

First of all, I wanted to make sure that I could still boot the operating systems on my Dell XPS computer, so I made sure I booted with all the USB drives unplugged. Everything seemed to work but… wait… the main UEFI loader on my computer was the Manjaro GNOME one, which was automatically configured to boot also the Ubuntu OSes, simply relying on os-prober, which is usually part of most Linux installations (apart from PopOS, from what I know). However, there was no trace of the old Manjaro UEFI loader: the Ubuntu UEFI loader showed up instead. You know that you can have several UEFI loaders on the same machine, and you can also reorder them from the BIOS. Even looking into the BIOS, the Manjaro UEFI entry was gone! The problem, in this case, is that Ubuntu doesn’t seem to be able to boot a Manjaro distribution (I still don’t know why). The Manjaro installation was still there, but I could not boot it!

But wait… I still have the brand new installation on the external USB drive! I booted with that, and its menu shows both the new Manjaro KDE (the first one in the menu, of course) and the entry for booting the Manjaro GNOME of my computer. Indeed, os-prober also kicked in during the installation on the external hard drive: it also detected the OSes installed on my computer (that’s expected). You can see that in the photo (note the reference to the /dev/nvme0n1 partition):

Great! I could boot into the computer’s Manjaro GNOME, using the boot loader of the external drive. Once there, I disconnected the external drive and reinstalled the GRUB UEFI of Manjaro GNOME into my computer. It’s just a matter of running sudo grub-install (no need to specify anything else: the existing installed OS already knows where to install GRUB) and then sudo update-grub:
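    sudo grub-install
    sudo update-grub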

Rebooted and everything went back to normal on my computer!

All’s well that ends well! 😉

Why did that happen, anyway? To be honest, I’m not completely sure… What I had noted before, when I installed Ubuntu and then Kubuntu on this computer, was that since the GRUB configurations of both systems use “Ubuntu” as the label, the Kubuntu installation, which was done after the Ubuntu one, replaced the UEFI entry of the former; that had never been a problem, because I can boot Ubuntu from Kubuntu, and vice versa, if needed. Maybe the same thing happened in my experiment, since I had a “Manjaro” (GNOME) UEFI entry on my computer and I installed another “Manjaro” (KDE) distribution on the external hard drive: both use the same “Manjaro” label. That shouldn’t have happened, because the internal ESP partition should not have been detected by the installer, but maybe that was a wrong assumption (after all, os-prober can still detect existing OS installations).
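If you’re curious, you can inspect the situation yourself from a terminal; this is just a quick check, assuming the ESP is mounted at /boot/efi (the usual mount point in Manjaro and Ubuntu):

# list the UEFI boot entries registered in the firmware
efibootmgr
# list the boot loader directories (labels) on the mounted ESP
ls /boot/efi/EFI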

This situation is NOT described in https://itsfoss.com/intsall-ubuntu-on-usb/, and indeed, in that article, the experiment was slightly different: the author installed an “Ubuntu” distribution on the external hard drive, while the computer already had a “Debian” distribution, so the labels were different.

Anyway, even with a problem like the one I experienced, it was pretty easy to fix things!

Hope you find this tutorial useful for experimenting with installing Linux on portable USB hard drives, or even USB sticks, provided they are fast ones 😉

Concluding Remarks

Please keep in mind that the Linux installation created on the external USB drive is effectively portable: you can use it on several computers and laptops. However, some drivers for specific computer configurations might not be installed in the Linux installation on the external USB drive. Also, other settings, like screen resolution and scaling, really depend on the computer you’re booting on and might have to be adjusted each time you use the external USB drive on a different computer.

Accessing Google Online Account from GNOME and KDE

In this post, I’d like to share my experience in setting up a Google Online Account in GNOME and KDE. Actually, I have more than one Google account, and the procedures I show can be repeated for all your Google accounts.

First, a disclaimer: I’ve always loved KDE, and I’ve used it since version 3. Lately, though, I have started to appreciate GNOME. I’ve been using GNOME most of the time, on most of my computers, for a few years now. But recently, I started to experiment with KDE again and installed it on some of my computers.

KDE is well-known for its customizability, while GNOME is known for the opposite. However, I must admit that in GNOME most settings are trivial to configure, while in KDE you pay a price for its customizability.

I think setting up a Google account is a good example of what I’ve just said. Of course, I might be wrong concerning the procedures I’ll show in this post, but, from what I’ve read around, especially for KDE, there doesn’t seem to be an easier way. Of course, if you know an easier procedure, I’d like to hear about it in the comments 🙂

In the following, I’m showing how to set up a Google account so that its features, mainly the calendar and access to Google Drive, get integrated into GNOME and KDE. I tested these procedures both in Ubuntu/Kubuntu and in Manjaro GNOME/KDE, but I guess it’s the same in other distributions.

TL;DR: in Gnome it’s trivial, in KDE you need some effort.

GNOME

Just open “Online Accounts” and choose Google. Use the web form to log in and grant the permissions so that GNOME can access all your Google data. As I said, I’m focusing on the calendar and the drive. Repeat the same procedure for all the Google accounts you want to connect.

Done! In Files (Nautilus), you can see, on the left, the links to your Google drive (or drives, if you configured several accounts). In the Gnome Calendar, you can choose the Google calendars you want to show. The events will be automatically shown in the top Gnome Shell clock and calendar widget. Notifications will be shown automatically (by Evolution). For Gnome Contacts, things are similar. By the way, Gnome Tasks and other Gnome applications will also be automatically able to access your Google account data.

To summarize, one single configuration and everything else is automatically integrated.

KDE

Now be prepared for an overwhelming number of steps, most of which, I’m afraid, I find rather complex and counter-intuitive.

In particular, you won’t get access to your Google account data in a single step. In fact, I’ll first show how to mount a Google drive and then how to set up the calendar.

Mount your Google drive

Go to

System Settings -> Online Accounts -> Add New Account -> Google

As usual, you get redirected to the “Web authentication for google”: log in and give your consent, allowing “KDE Online Accounts” to access some of your Google information, including the drive, managing your YouTube videos, your contacts, and the calendar. (This procedure can be repeated for all your Google accounts, if you have many.) Note that, given all the permissions you grant, you’d expect everything to be automatically configured in KDE, but that’s not the case…

Back in the system settings, you get a “Google account”, not with your Google username or email, which is what I’d expect, but simply “google” and a sequential number (of course, you can rename it).

OK, now I can access my Google drive files from Dolphin and have my local calendar automatically connected to my Google calendar? Just like in Gnome? I’m afraid not… we’re still far away from that.

If you go to Dolphin’s Network place, you see no Google Drive mounted, nor a mechanism to mount one… First, you have to install the package kio-gdrive (at least in Kubuntu and Manjaro KDE, that’s not installed by default…). After that, back in Dolphin’s Network place, you can expand the “Google Drive” folder, and you get asked for the Google account you had previously configured. Select that, and choose “Use This Account For” -> “Drive” in the Accounts Details. Now you can access your Google Drive from Dolphin.
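If you prefer installing packages from the terminal, these are the usual commands (the package is called kio-gdrive in both the Kubuntu and Manjaro repositories; pick the one for your distribution):

# Kubuntu (and other Debian/Ubuntu derivatives)
sudo apt install kio-gdrive
# Manjaro KDE (and other Arch-based distributions)
sudo pacman -S kio-gdrive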

Add your Google calendar

What about my Google Calendar? First, you have to install the package korganizer (or the full suite kontact); again, at least in Kubuntu and Manjaro KDE, that’s not installed by default (see the install commands at the end of this section)… Great, once it’s installed, can I simply select my previously configured Google account? Ehm… no… you “just” have to go to

Settings -> Configure KOrganizer -> General -> Calendars -> Add… -> Google Groupware -> a dialog appears, click “Configure…”

Now the browser (not a web dialog as before) is opened to log in to your Google account. Then, give the permissions so that “This will allow Akonadi Resources for Google Services to…” (again, you have to do the same for all the Google accounts you want to connect). In the browser, you then see: “You can close this tab and return to the application now.” Go back to the dialog in KOrganizer, and your calendars and tasks should already be selected (unselect anything you don’t want). OK, now, in the previous dialog, you should see KOrganizer synchronizing with your Google calendars and tasks.

Now I should get notifications for Google calendar events, right? Ehm… not necessarily: you need to make sure that, in the “Status and Notifications” system tray, by right-clicking on “KOrganizer Reminders”, the options “Enable Reminders” and “Start Reminder Daemon at Login” are selected (I’ve seen different default behaviors, in that respect, in different distributions). If not, enable them and log out and log back in.

OK! But what about my Google calendar events in the standard “Digital Clock” widget in the corner of the system tray? Are they automatically shown, just like in GNOME? No! There’s some more work to do! First, install kdepim-addons (guess what? At least in Kubuntu and Manjaro KDE, that’s not installed by default…). Now, go to “Digital Clock Settings” -> “Calendar” -> check “PIM Events Plugin” (quite counter-intuitive!) -> Apply; now a new “PIM Events Plugin” entry appears on the left: select that. Fortunately, this one will automatically propose to select all the calendars that have been previously configured in KOrganizer.
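As before, if you prefer the terminal, the packages mentioned in this section can be installed like this (package names korganizer and kdepim-addons, as found in both the Kubuntu and Manjaro repositories):

# Kubuntu (and other Debian/Ubuntu derivatives)
sudo apt install korganizer kdepim-addons
# Manjaro KDE (and other Arch-based distributions)
sudo pacman -S korganizer kdepim-addons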

Something similar holds for kaddressbook; probably with kontact there would be fewer steps, but I’ve always found Kontact chaotic…

Summary

Now, I like KDE’s customization possibilities (while GNOME is pretty rigid about customizations, and most things cannot be customized at all), but the above steps are far too many! After a few weeks, I wouldn’t be able to remember them correctly… In KDE, even the sheer number of steps in the above procedures is overwhelming, and you have to follow complex, heterogeneous, and counter-intuitive procedures, with long menu chains. Maybe it’s the distribution’s fault? I doubt it. I guess it’s an issue with the overall organization and integration of the KDE desktop environment. In GNOME, the integration is just part of the desktop environment.