Author Archives: Lorenzo Bettini

About Lorenzo Bettini

Lorenzo Bettini is an Associate Professor in Computer Science at the Dipartimento di Statistica, Informatica, Applicazioni "Giuseppe Parenti", Università di Firenze, Italy. Previously, he was a researcher in Computer Science at the Dipartimento di Informatica, Università di Torino, Italy. He has a Master's Degree summa cum laude in Computer Science (Università di Firenze) and a PhD in "Logics and Theoretical Computer Science" (Università di Siena). His research interests cover the design, theory, and implementation of statically typed programming languages and Domain-Specific Languages. He is also the author of about 90 research papers published in international conferences and journals.

Stop VS Code’s Java LSP from Rewriting Your Eclipse .classpath with m2e-apt Entries

How to prevent JDT LS (via m2e) from adding generated-sources APT folders and org.eclipse.jdt.apt prefs to an Eclipse+Maven project in VS Code.

If you open a Maven Java project in Visual Studio Code that also contains Eclipse project metadata (.project, .classpath, .settings/…), you might notice that VS Code’s Java tooling (JDT Language Server) “helpfully” edits your Eclipse files.

In particular, it may keep re-inserting entries like these into your .classpath:
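They typically look like the standard m2e-apt generated-sources entries (reconstructed here from memory; the `m2e-apt` attribute is the telltale sign):

```xml
<classpathentry kind="src" output="target/classes" path="target/generated-sources/annotations">
  <attributes>
    <attribute name="optional" value="true"/>
    <attribute name="maven.pomderived" value="true"/>
    <attribute name="m2e-apt" value="true"/>
  </attributes>
</classpathentry>
<classpathentry kind="src" output="target/test-classes" path="target/generated-test-sources/test-annotations">
  <attributes>
    <attribute name="optional" value="true"/>
    <attribute name="maven.pomderived" value="true"/>
    <attribute name="m2e-apt" value="true"/>
    <attribute name="test" value="true"/>
  </attributes>
</classpathentry>
```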

…and it may also create this file:

.settings/org.eclipse.jdt.apt.core.prefs

VS Code Java support is powered by Eclipse JDT Language Server (JDT LS). When it detects Eclipse metadata (.project / .classpath), it will often use Eclipse-style project configuration and keep it synchronized.

For Maven projects, JDT LS relies on m2e (the “Maven integration for Eclipse”), and in many setups m2e-apt is present as well. m2e-apt is the component that manages annotation processing (APT) integration and, as part of that, it adds the standard “generated sources” folders into .classpath.

I find that very annoying!

If your project doesn’t use annotation processing and you don’t want these Eclipse files constantly modified, removing the entries by hand won’t help: Visual Studio Code re-adds them the next time you open the project. If you then open the project in Eclipse and “update” the Maven project, Eclipse removes the entries again… and so on and so forth!

Here’s how to fix things for good.

The fix: disable m2e-apt at the project level

m2e-apt supports overriding its activation per-project using a Maven property:

Add this to your pom.xml:

Put it under the regular Maven <properties> section:
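If I recall the m2e-apt documentation correctly, the property is m2e.apt.activation; a minimal sketch:

```xml
<properties>
  <!-- Tell m2e-apt not to manage annotation processing for this project
       (property name per the m2e-apt documentation). -->
  <m2e.apt.activation>disabled</m2e.apt.activation>
</properties>
```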

That’s it. After this, m2e-apt will stop treating your project as something it should manage, and VS Code/JDT LS will no longer keep reintroducing those APT-related .classpath entries.

Note: The documentation mentions a “settings section” in the POM. There is no settings element in pom.xml; Maven “settings” live in ~/.m2/settings.xml. In the POM, this is implemented via a property, so properties (or a profile’s properties) is the right place.

Refresh VS Code so it stops regenerating the files

After editing the POM, VS Code may still have cached the project configuration. Do this once:

  1. Open the Command Palette
  2. Run: Java: Clean Java Language Server Workspace
  3. Let the Java server restart and re-import the project

Then you can delete the unwanted entries/files one last time:

  • Remove the APT-related classpathentry … m2e-apt … blocks from .classpath
  • Delete .settings/org.eclipse.jdt.apt.core.prefs if you don’t want it around

They should not come back.

If you only want to disable it in certain environments, you can place the property in a Maven profile or in the pom file of a single project.

Happy (quiet) classpaths! 😉

LaTeX listings: Eclipse colors

This is the style I use to highlight my Java code in LaTeX documents with the Listings package, with Eclipse colors:
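A sketch of such a style (the style name is mine; the color values are the classic defaults of the Eclipse Java editor):

```latex
\usepackage{listings}
\usepackage{xcolor}

% Eclipse Java editor default colors
\definecolor{eclipseKeyword}{RGB}{127,0,85}
\definecolor{eclipseComment}{RGB}{63,127,95}
\definecolor{eclipseString}{RGB}{42,0,255}

\lstdefinestyle{eclipse}{
  language=Java,
  basicstyle=\ttfamily\small,
  keywordstyle=\color{eclipseKeyword}\bfseries,
  commentstyle=\color{eclipseComment},
  stringstyle=\color{eclipseString},
  showstringspaces=false
}
```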

And this is an example of a document, where I show the same listing twice, once with black-and-white colors and once with Eclipse colors:

Here’s the result:

Browse and run your Sway keybindings with Rofi

Remembering every Sway shortcut is tough. I wrote a small script that parses your Sway config, displays all bindsym shortcuts in a clean, searchable list via Rofi, and executes the command associated with whichever one you select.

It’s fast, keyboard-friendly, and great for discovery: “What did I bind to Mod + Shift + P again?” Now you can search, see, and execute it.

What the script does

  • Reads your Sway config from $XDG_CONFIG_HOME/sway/config (or ~/.config/sway/config)
  • Finds all bindsym … lines
  • Formats each entry nicely, e.g.

Mod + Return → exec alacritty

  • Shows the list in a wide Rofi dmenu
  • When you select an entry, it executes the associated command through swaymsg

Dependencies

  • sway (for swaymsg)
  • rofi
  • awk, sed, grep (standard on most distros)
  • notify-send (optional – shows an error if the config isn’t found)

The script

Save this as ~/.local/bin/rofi-sway-keybindings.sh and make it executable.

How it works

The core of the script is a small text-processing pipeline that reads the config and renders a nice two-column list for Rofi:

  1. grep -E '^\s*bindsym' finds all bindsym lines (ignoring leading whitespace)
  2. grep -v '^[[:space:]]*#' ignores full-line comments
  3. sed 's/^\s*bindsym\s*//' strips the leading bindsym
  4. awk splits the line into two parts (and does some cleanup):

keys: the first token (e.g. $mod+Return)
cmd: the rest of the line (e.g. exec alacritty)

It also strips trailing inline comments (after #) and skips bindsym flags like --release or --locked before reading the key. Finally, it prettifies modifiers and prints a fixed-width column so the arrows line up.

Rofi presents that list with -dmenu. When you pick one, the script extracts the command part (after the → separator) and sends it to swaymsg. That means anything you can put after bindsym (like exec …, workspace …, kill, etc.) will run on demand.
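The pipeline above can be sketched as follows (function name is mine; the real script additionally pipes the list into `rofi -dmenu` and forwards the selection to `swaymsg`):

```shell
#!/usr/bin/env bash
# Sketch of the parsing/formatting pipeline described above.
list_bindings() {
  local config="${1:-${XDG_CONFIG_HOME:-$HOME/.config}/sway/config}"
  grep -E '^[[:space:]]*bindsym' "$config" \
    | grep -v '^[[:space:]]*#' \
    | sed 's/^[[:space:]]*bindsym[[:space:]]*//' \
    | awk '{
        # skip bindsym flags such as --release or --locked
        while ($1 ~ /^--/) { for (i = 1; i < NF; i++) $i = $(i + 1); NF-- }
        keys = $1
        cmd = ""
        for (i = 2; i <= NF; i++) cmd = cmd (i > 2 ? " " : "") $i
        sub(/[[:space:]]*#.*$/, "", cmd)   # strip trailing inline comments
        gsub(/\$mod/, "Mod", keys)         # prettify the modifier
        gsub(/\+/, " + ", keys)            # spacing around chords
        printf "%-35s → %s\n", keys, cmd   # fixed-width left column
      }'
}
# Typical usage: list_bindings | rofi -dmenu -i -p "keybinding"
```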

Usage

  • Run the script from a terminal: rofi-sway-keybindings.sh
  • Or bind it to a key in your Sway config, e.g.:
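For instance (the key chord here is a hypothetical choice):

```
# In ~/.config/sway/config — pick any free chord
bindsym $mod+F1 exec ~/.local/bin/rofi-sway-keybindings.sh
```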

Tip: the window is set to 60% width for readability; tweak it via -theme-str if you prefer.

Nice touches in the UI

  • Replaces $mod with Mod
  • Shows Shift as ⇧ and Control as Ctrl
  • Adds spacing around + so chords read clearly: Mod + ⇧ + q
  • Aligns the left column to 35 characters for a tidy two-column look

Why this is handy

  • Onboarding: perfect for new setups or when you come back after a while
  • Discovery: search by key or by command to find what you already have
  • Launcher: use it as a programmable “cheat sheet” that also runs things

Here are some screenshots (including filtering):

Modern Java in LaTeX listings (Java 17)

If you use the LaTeX listings package to typeset Java, you’ve probably noticed that modern Java has moved faster than the package itself. Records, var, and text blocks may not highlight correctly out of the box. The good news: the listings package is extensible so that you can teach it “modern Java” with a tiny language definition.

The minimal language extension for Java 17

Here’s a drop‑in snippet that builds on the stock Java lexer to support key Java 17 features:

What each line does:

  • language = Java: inherit all of the listings’ built‑in Java rules.
  • morekeywords = {var,record}: colorize var and record as keywords (var is contextual, but highlighting it improves readability in code listings).
  • deletekeywords = {label}: avoid mistakenly highlighting labeled statements like label: for (…) { … }. label is not a Java keyword; removing it prevents false positives.
  • morestring=[b]""": treat triple quotes as a balanced string delimiter so Java text blocks highlight as a single string.
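Putting the four pieces together, the snippet might look like this (the language name Java17 is my choice):

```latex
\lstdefinelanguage{Java17}{
  language = Java,              % inherit the stock Java rules
  morekeywords = {var, record}, % Java 10 / Java 16 additions
  deletekeywords = {label},     % avoid false positives on labels
  morestring = [b]{"""}         % Java 15 text blocks
}
```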

Using it in your document

Activate the language globally:

…or per listing:

If you already have a custom style (e.g., mystyle) with colors and fonts, combine them:
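Assuming the extended language was defined under a name like Java17, the three variants might look like:

```latex
\lstset{language=Java17}                  % globally
% per listing:
% \begin{lstlisting}[language=Java17] ... \end{lstlisting}
\lstset{style=mystyle, language=Java17}   % combined with a custom style
```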

Minimal working example

This is a compact MWE you can compile with pdflatex, xelatex, or lualatex. Adjust the style to your taste:

Here’s the result, where you can see the differences (note in the standard behavior the wrong highlighting of the double-quoted string in the text-block):

Happy highlighting! 🙂

Getting Your MacBook Air Webcam Working on Linux

If you’ve installed Linux on your MacBook Air, you’ve probably discovered that while most hardware works out of the box, the built-in FaceTime HD camera is notably absent from your video applications. Don’t worry—you’re not alone, and there’s a solution that doesn’t involve external USB webcams or complicated workarounds.

The issue stems from Apple’s use of Broadcom’s proprietary FaceTime HD camera hardware. Unlike most standard USB webcams that work with Linux’s UVC (USB Video Class) drivers, the MacBook Air’s camera uses a PCIe interface with the Broadcom 1570 chipset, which requires specialized drivers that aren’t included in the Linux kernel.

Identifying Your Hardware

Before diving into the solution, let’s confirm you have the same hardware. Open a terminal and run:
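A quick way to check is to filter the PCI listing for the Broadcom device ID (the helper function here is mine, for illustration):

```shell
# Filter a PCI listing for the Broadcom 1570 camera (ID 14e4:1570)
find_facetimehd() {
  grep -i '14e4:1570'
}
# Typical invocation (requires pciutils):
#   lspci -nn | find_facetimehd
```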

If you see output similar to this, you’re dealing with the same Broadcom camera:

The key identifier here is [14e4:1570]—this tells us we have the Broadcom 1570 chipset that needs the reverse-engineered driver.

The Solution: Community-Developed Drivers

Thanks to the hard work of the Linux community, reverse-engineered drivers are available through the Arch User Repository (AUR). These drivers have been developed by analyzing the hardware behavior and creating open-source alternatives to Apple’s proprietary drivers.

If you’re using an Arch-based distribution (like EndeavourOS, Manjaro, or plain Arch), you can search for the available packages:

This will show you several packages:

  • facetimehd-dkms: The main reverse-engineered driver
  • facetimehd-firmware: Required firmware files extracted from macOS
  • facetimehd-data: Sensor calibration data for optimal performance
  • facetimehd-dkms-git: Development version of the driver

The installation is straightforward. Install the main driver package:

The package manager will automatically pull in the required dependencies, including the firmware and calibration data packages.

After installation, you must reboot your system. This isn’t just a suggestion—the kernel module needs to be loaded fresh, and the hardware needs to be properly initialized during the boot process.

Once your system boots back up, your webcam should be functional. However, there’s an important caveat to be aware of.

While the driver successfully enables the webcam, there are some compatibility quirks:

  • Google Chrome/Chromium: Works perfectly
  • Firefox: May not detect the camera
  • Native Linux applications (like Kamoso): May have issues

This inconsistency likely stems from the different ways in which various applications interact with the video4linux (V4L2) subsystem and handle the specific quirks of this reverse-engineered driver.

Maintaining KDE dotfiles with Chezmoi Modify Manager

I have already blogged about managing KDE dotfiles with chezmoi and chezmoi_modify_manager. But what about maintaining them?

For example, one of the KDE configuration files changes, and you want to update the version managed by chezmoi.

Here’s an example where the Kate configuration file changed on the system and chezmoi detects that:

You can see the change with “chezmoi diff”:

Remember that the part with “+” is the version known by chezmoi, while the one with “-” is the version in the local system.

You now want to update the corresponding file managed by chezmoi.

The command “chezmoi re-add” won’t help because that file is handled by chezmoi_modify_manager, which splits it into two files: “modify_private_katerc” and “private_katerc.src.ini”.
The latter contains only the parts of the file we want to track, and the former is the corresponding modification script.

Moreover, “chezmoi merge” won’t help either for the same reason. Here’s what this command shows (I configured chezmoi to use the GUI program “meld” for such a command):

However, “chezmoi_modify_manager” has an option for such situations. Here’s the option to use with “chezmoi_modify_manager”:

Here’s the command and the output:

By using “chezmoi cd” and “git diff”, we can verify that the corresponding “.src.ini” file has been correctly updated (and the git repository can then be committed and pushed):

In the end, it’s easy, once you know the right option to use with “chezmoi_modify_manager”!

Enjoy your KDE dotfiles! 🙂

Configure Tmux to support true color and italics in Alacritty and Neovim

I know there are many blog posts about configuring Tmux to support true color and italics in Alacritty, but many of them miss a crucial detail that breaks Neovim’s diagnostic undercurl (wavy underlines).
Many of them suggest overriding the TERM variable in Alacritty to xterm-256color, which causes Neovim to lose the ability to display undercurl correctly.
Many of them are also outdated or incomplete.

This is how I configured Tmux and Alacritty to work perfectly together, supporting true color, italics, and Neovim’s undercurl diagnostics.

The Problem

When using Tmux inside Alacritty, you may encounter several issues:

  1. No True Color Support: Colors may appear washed out or limited to 256 colors instead of the full 24-bit RGB spectrum
  2. Italics Not Working: Italic fonts don’t render correctly, or reverse video appears instead
  3. Neovim Diagnostic Undercurl: Neovim’s diagnostic undercurl (wavy underlines) shows as simple underlines instead

These issues stem from incorrect TERM environment variable configuration and missing terminal capability overrides.

For example, using the shell script linked below to check true color support, when it’s not working, you get (left: Tmux inside Alacritty, right: Alacritty by itself):

When it works, you get:

The Solution

1. Alacritty Configuration

DO NOT override the TERM variable in Alacritty. Leave it at the default value alacritty.

Why? Alacritty’s default TERM=alacritty includes the terminfo capabilities for undercurl, which Neovim needs to display diagnostic wavy underlines. Setting it to xterm-256color breaks this feature.

2. Tmux Configuration

Add the following lines to your ~/.tmux.conf:
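A minimal version, using standard tmux set syntax for the two directives explained below:

```
# ~/.tmux.conf
set -g  default-terminal "tmux-256color"
set -ga terminal-overrides ",*:Tc"
```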

Explanation:
default-terminal "tmux-256color": Sets tmux to use a terminal type that supports 256 colors and italics
terminal-overrides ",*:Tc": Tells tmux to enable true-color (24-bit RGB) support for all terminal types that support it

3. Why This Works

When you start tmux inside Alacritty:
– Alacritty sets TERM=alacritty (which supports true color and undercurl)
– Tmux creates a new environment with TERM=tmux-256color (which supports italics)
– The terminal-overrides setting tells tmux to pass through true-color escape sequences from the outer terminal (Alacritty)

This combination gives you:
– True color (24-bit RGB) support
– Italic fonts working correctly
– Neovim undercurl diagnostics rendering properly

Verification

To verify everything works:

Check true color support:

Check italics:
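Hedged one-liners for these two checks (not the linked script; just quick inline versions that work in any shell):

```shell
# True color: should print a smooth red-to-blue gradient with no banding
awk 'BEGIN {
  for (i = 0; i <= 76; i++) {
    r = 255 - int(i * 255 / 76); b = int(i * 255 / 76)
    printf "\033[48;2;%d;0;%dm ", r, b
  }
  printf "\033[0m\n"
}'

# Italics: should render in italics, not reverse video
printf '\033[3mThis text should be italic\033[0m\n'
```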

Check in Neovim:

  • Open a file with syntax errors or any diagnostics (including spelling errors)
  • You should see wavy underlines (undercurl) for diagnostics, not simple underlines

References

How we used Maven relocation for Xtend

In Xtext release 2.40.0, we adopted Maven relocation to move Xtend Maven artifacts’ groupIds from org.eclipse.xtend to org.eclipse.xtext without breaking existing consumers.

References:

Rationale

Xtend’s Maven coordinates were relocated to comply with Maven Central’s new publishing requirements after the OSSRH sunset.

The new Maven Central publishing portal enforces namespace consistency: all artifacts in a single deployment must share the same groupId prefix (namespace). We were getting this error when trying to deploy:

Xtend Maven artifacts historically had groupId org.eclipse.xtend, while all other Xtext Maven artifacts use org.eclipse.xtext. This mismatch prevented us from publishing both in a single deployment to Maven Central.

See the detailed rationale in issue #3398 and specifically this comment.

Why relocation instead of just renaming

  • Backwards compatibility: Existing builds continue to resolve, emitting a clear warning rather than failing.
  • Gradual migration: Library and plugin maintainers can update on their own schedule.
  • Single source of truth: Only the new artifact publishes real content; the old coordinate becomes a lightweight stub POM.
  • Clear deprecation signal: A relocation message is more explicit than a silent artifact disappearance.
  • No breaking changes: Consumers don’t need to update immediately; their builds keep working.

Maven relocation basics (summary)

Maven relocation allows you to redirect artifact coordinates without breaking existing consumers.

The process involves:

  1. Real artifacts with the new groupId that contain actual JARs, source, and javadoc
  2. Relocation artifacts with the old groupId that are minimal POMs pointing to the new coordinates

A relocation artifact is a simple POM project with this structure:

At resolution time, Maven automatically replaces the old coordinates with the new ones and displays a warning to the user.
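As a hedged sketch of such a relocation POM (coordinates follow the Xtend case; the message text is illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Old coordinates: same artifactId and version as the real artifact -->
  <groupId>org.eclipse.xtend</groupId>
  <artifactId>org.eclipse.xtend.lib</artifactId>
  <version>2.40.0</version>
  <packaging>pom</packaging>

  <distributionManagement>
    <relocation>
      <!-- Only the groupId changes -->
      <groupId>org.eclipse.xtext</groupId>
      <message>The Xtend artifacts moved to the org.eclipse.xtext groupId.</message>
    </relocation>
  </distributionManagement>
</project>
```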

Our goals

  1. Preserve build stability for all existing Xtend consumers
  2. Minimize maintenance by publishing only one real artifact per logical module
  3. Provide a clear migration path with visible warnings
  4. Avoid transitive duplication (both old + new coordinates ending up on classpath)
  5. Comply with Maven Central’s namespace requirements

Implementation outline

Our implementation involved several steps (see PR #3461 for details):

Identify artifacts to relocate: All Xtend artifacts published to Maven Central under org.eclipse.xtend: org.eclipse.xtend.lib, org.eclipse.xtend.lib.gwt, org.eclipse.xtend.lib.macro, etc.

Create relocation parent POM: Created org.eclipse.xtend.relocated.parent to organize all relocation artifacts.

For each artifact, create a relocation POM module with:
– Packaging: pom
– Old groupId: org.eclipse.xtend
– Same artifactId and version as the original
– Relocation block pointing to org.eclipse.xtext

Separate publishing workflow: Since Maven Central requires same-namespace deployments, we had to:
– Build relocation artifacts separately
– Archive deployment bundles for manual upload
– Publish relocation artifacts in a separate step from main artifacts

Update CI/CD scripts: Modified Jenkins deployment scripts to handle both artifact sets.

Example relocation POM

Here’s a real example from our implementation:

When a consumer uses the old coordinates, Maven shows:

Ensuring no duplicate classes

Because the relocation artifact is only a stub POM with no JAR attached, the classpath will contain only the new artifact. This prevents:

  • Duplicate classes on the classpath
  • Class shadowing issues
  • Version conflicts between old and new coordinates

Maven and Gradle both handle this correctly by fetching only the relocated target.

Publishing workflow

The key challenge was Maven Central’s namespace requirement. Our solution:

  1. Main build: Publishes all org.eclipse.xtext artifacts (including the real Xtend artifacts with new groupId)
  2. Separate relocation build: Publishes all org.eclipse.xtend relocation POMs
  3. Validation: We performed dry-run deployments to verify Maven Central would accept the artifacts
  4. Manual upload: For milestone releases, we archived bundles and manually uploaded them to Maven Central

As noted in the PR discussion, we had to update version-bumping scripts to include the new relocation parent directory.

Migration guidance for consumers

Search your builds for the old groupId:
– Maven: mvn dependency:tree | grep org.eclipse.xtend
– Gradle: ./gradlew dependencies --configuration compileClasspath | grep org.eclipse.xtend

Replace org.eclipse.xtend with org.eclipse.xtext in your POMs or build.gradle files:
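For example, using org.eclipse.xtend.lib (only the groupId changes; the artifactId and version stay the same):

```xml
<!-- before -->
<dependency>
  <groupId>org.eclipse.xtend</groupId>
  <artifactId>org.eclipse.xtend.lib</artifactId>
  <version>2.40.0</version>
</dependency>

<!-- after -->
<dependency>
  <groupId>org.eclipse.xtext</groupId>
  <artifactId>org.eclipse.xtend.lib</artifactId>
  <version>2.40.0</version>
</dependency>
```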

Run your build and verify the relocation warning disappears

Update any BOM or dependencyManagement entries

Tooling considerations

  • IDEs: Eclipse, IntelliJ IDEA, and other IDEs honor Maven relocation. Refresh your project after migration.
  • Gradle: Fully supports Maven relocation when resolving from Maven repositories.
  • Reproducibility: The relocation POMs are stable and don’t affect build reproducibility.
  • CI/CD: No changes needed; relocation works transparently in CI environments.

Lessons learned

  1. Maven Central namespace enforcement is strict: You cannot publish artifacts with different groupId namespaces in a single deployment, even if they’re in the same monorepo.

  2. Relocation is low-effort and highly effective: Once set up, relocation artifacts are trivial to maintain across versions.

  3. Separate publishing is required: Relocation artifacts must be published in a completely separate Maven deployment due to namespace restrictions.

  4. Testing is crucial: We performed dry-run deployments first to ensure Maven Central would validate the artifacts correctly.

  5. Scripts need updates: Don’t forget to update version-bumping and release automation scripts to include relocation modules.

  6. Communication is important: Clear documentation and release notes help consumers understand and adopt the changes smoothly.

  7. It works across ecosystems: Both Maven and Gradle consumers benefit from relocation automatically, as do IDE integrations.

FAQ

Q: Do I need to change anything immediately?
A: No; builds continue to work with the old coordinates, but you’ll see warnings. Update when convenient to eliminate warnings.

Q: Does relocation affect checksums or reproducible builds?
A: No; the new artifact is authoritative. The stub POM exists only for resolution redirection and contains no actual code.

Q: Can Gradle consumers rely on this?
A: Yes; Gradle honors Maven relocation information when resolving from Maven repositories.

Q: What about IDEs?
A: IDEs (tested with Eclipse and IntelliJ) honor Maven relocation when resolving from Maven repositories. You may need to refresh your project after migration.

Q: What if I use dependencyManagement or BOM entries?
A: Update them to reference the new groupId. Transitive relocation continues working in the meantime.

Q: Will this affect my transitive dependencies?
A: No; if your direct dependencies haven’t migrated yet, their use of old coordinates will be automatically relocated, and you’ll see warnings for those too.

Q: What happens if I have both old and new coordinates in my dependency tree?
A: Maven/Gradle will resolve both to the same artifact (the new one), so you won’t have duplicates on the classpath.

Install Ubuntu on Apple Silicon Macs Using UTM

Let’s install Ubuntu 25.04 on Apple Silicon (M1) in a virtual machine using UTM.

We must first download an Ubuntu ISO for ARM. Go to https://ubuntu.com/download/desktop and search for “ARM 64-bit architecture”. If it’s not available for the specific version you want to install, you’ll have to install Ubuntu Server and then install the “ubuntu-desktop” package. Before falling back to the server edition, though, you might want to search deeper in the Ubuntu release download site and look for the ARM desktop ISO, which is not advertised on the main download page. For example, for 25.04 you get the link for the ARM version from the download site, but not for 24.04. However, by looking at the releases web directory, you’ll also find the ARM ISO for 24.04 (for example, https://cdimage.ubuntu.com/releases/24.04.3/release/ubuntu-24.04.3-desktop-arm64.iso).

You can install UTM in several ways. I installed with Homebrew: “brew install utm”.

Let’s start it and create a new Virtual Machine:

Choose Virtualize (we want to use an ARM Linux distribution):

Then, of course, choose “Linux”

Here we use QEMU, not Apple Virtualization. We select the ISO we downloaded before:

Let’s change the Memory and Cores to allocate to the VM (I also enable hardware acceleration to have nice visual effects in the VM; as specified in the checkbox’s documentation, this might give you trouble in some configurations):

And the storage for the VM disk. Remember that, by default, the disk image file will not immediately occupy all the specified GiB: it only grows with the data effectively used by the filesystem.

For the moment, I won’t use a shared directory.

Here’s the summary, where you can also give a custom name to the VM:

OK, let’s start the VM; the first time, this will start the installation ISO:

Here we are in the VM. From now on, the installation procedure is basically the standard Ubuntu one. Here’s the list of screenshots:

Since this is a VM, I choose to erase the entire (virtual) disk; alternatively, you might want to specify your custom partitioning scheme, possibly with a different file system than the default one, i.e., EXT4.

The installation starts with the usual slide show:

Remember that by clicking on the small icon on the bottom-right, you’ll get the log of the installation:

After installation has finished, shut down the machine (instead of “restart now”) and “Clear” CD to avoid booting from the installation ISO again:

Now, start the installed VM.

As usual, you’ll almost immediately get updates:

If not already installed, you might want to install spice-vdagent and spice-webdavd for better integration with the host system (for example, shared clipboard and folder sharing).

Let’s see a few screenshots of the installed system:

Note the SWAP file created by the installer and the filesystem layout.

I’ve installed fastfetch to show the typical output:

Note the graphics (remember we selected above the Hardware OpenGL Acceleration):

Concerning the resolution of the VM, let’s consider the “Display” setting of the virtual machine:

Note the selected “Resize display to window size automatically”; that’s useful, especially when setting the VM window to full screen.

Concerning the display settings, there is also “Retina mode” (optimize resolution for HiDPI displays); if you enable it, you then have to adjust the resolution inside the VM.
The documentation https://docs.getutm.app/settings-qemu/devices/display/ suggests NOT enabling “Retina mode” because it increases memory usage and processing (the host operating system can use efficient hardware scaling, while the guest uses software scaling).

Without this setting, the display will be presented at the current resolution and scale that the operating system uses. For example, here’s my macOS setting:

And in fact, as you see from one of the screenshots above, the Ubuntu desktop, when the UTM VM is full-screen, has the same 1280×800 resolution.

You might want to have a look at the special input settings:

The “Command+Option” is important to easily switch between the VM and the host OS concerning keyboard inputs.

First impressions

In general, using the VM is very pleasant. Everything runs smoothly, including the visual effects. Indeed, it all seems to run at native speed.

WARNING: You’re running an ARM Linux distribution, so packages must be available for this architecture. Sad news: a few programs are NOT available for Linux ARM, notably Google Chrome and Dropbox. Please, consider that.

That’s all for this initial UTM post.

Stay tuned for other posts related to Linux virtual machines in UTM.

Switch SSD mode from RAID to AHCI (for Windows and Linux dual boot)

On some computers, like the new Dell Pro Max Tower T2 and the old XPS 13, the SATA operation mode for the SSD is set to RAID by default.
Linux will not be able to see the SSD in RAID mode: you need to change it to AHCI.
If you just change it to AHCI in BIOS, Windows will not boot.

The idea is to boot Windows in Safe Mode once, then change the SATA mode to AHCI in BIOS, then boot Windows normally (it will automatically load the AHCI drivers).

WARNING: Do the procedure at your own risk!

The procedure can be found in many places online.
However, I decided to put the steps that worked for me here:

  1. Click the Start Button and type cmd; Right-click the result and select Run as administrator
  2. Type this command and press ENTER: bcdedit /set {current} safeboot minimal (If it does not work, use this alternative: bcdedit /set safeboot minimal)
  3. Restart the computer and enter BIOS Setup (in Dell, it is F2)
  4. Change the SATA Operation mode to AHCI from RAID
  5. Save changes and exit Setup, and Windows will automatically boot to Safe Mode.
  6. Do as in the first step to open an elevated command prompt.
  7. Type this command and press ENTER: bcdedit /deletevalue {current} safeboot (If it does not work, use this alternative: bcdedit /deletevalue safeboot)
  8. Reboot once more, and Windows will automatically start with AHCI drivers enabled.

Note that in the Dell Pro Max Tower T2 BIOS, you must enable the “Advanced Setup” (see the top-left corner) to be able to change the operation mode in the “Storage” section:

Here are the sources I used:

Managing KDE Dotfiles with Chezmoi and Chezmoi Modify Manager

If you’re a KDE user who wants to keep your desktop configuration under version control, you’ve probably discovered that KDE’s configuration files can be quite challenging to manage with traditional dotfile tools. KDE stores settings in complex INI files that frequently change, contain system-specific data, and include sections you may not want to track. This is where Chezmoi Modify Manager becomes useful when using the Chezmoi dotfile manager.

The Problem with KDE Configuration Files

KDE applications like Kate, Dolphin, and KWin store their settings in INI-style configuration files. These files often contain:

  • Volatile sections that change frequently (like window positions, recent files)
  • System-specific data (like file dialog sizes, screen configurations)
  • Mixed content where you only want to track specific settings

Things have improved recently in that respect; however, some KDE INI files still mix those configurations.

Managing these files directly with Chezmoi would result in noisy diffs and configurations that don’t work well across different machines.

Enter Chezmoi Modify Manager

Chezmoi Modify Manager acts as a configurable filter between your actual config files and your Chezmoi repository. It allows you to:

  • Ignore entire sections or specific keys
  • Set specific values while ignoring everything else
  • Use regex patterns for flexible matching
  • Transform values during processing

The tool works by creating “modify scripts” that tell Chezmoi how to process each configuration file.

Quoting from the official documentation:

For each settings file you want to manage with chezmoi_modify_manager there will be two files in your chezmoi source directory:

  • modify_<config file> or modify_<config file>.tmpl, e.g. modify_private_kdeglobals.tmpl
    This is the modify script/configuration file that calls chezmoi_modify_manager. It contains the directives describing what to ignore.
  • <config file>.src.ini, e.g. private_kdeglobals.src.ini
    This is the source state of the INI file.

The modify_ script is responsible for generating the new state of the file given the current state in your home directory. The modify_ script is set up to use chezmoi_modify_manager as an interpreter to do so. chezmoi_modify_manager will read the modify script to read configuration and the .src.ini file and by default will apply that file exactly (ignoring blank lines and comments).

Note that this is based on the Chezmoi mechanism of “modifying scripts”, allowing you to manage only some parts of files.

Thus, the integration with Chezmoi is based on these mechanisms:

  1. Source files (.src.ini) contain your desired configuration
  2. Modify scripts (starting with modify_) define filtering rules
  3. Chezmoi applies the modifications when deploying configs
  4. The .chezmoiignore file ensures source files aren’t directly copied

The file name after “modify_” and the file name of the “.src.ini” must follow the naming conventions of Chezmoi.

Your .chezmoiignore must include:
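Following the chezmoi_modify_manager documentation, it should be enough to ignore all the source INI files with a single pattern:

```
**/*.src.ini
```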

This prevents the source files from being deployed directly, letting Chezmoi Modify Manager handle the processing.

So, let’s see how to use that.

Real-World Examples

You can use the “chezmoi_modify_manager” command line to create the proper files.
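For example, to start managing an existing config file (the “--add” flag is the one documented by the tool; the path here is just an example):

```shell
chezmoi_modify_manager --add ~/.config/kwinrc
```

This generates both the “modify_” script and the “.src.ini” file for you, which you can then edit.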

Let’s look at how this works in practice with actual KDE configurations, based on my dotfiles (so I have already created these files):

KWin Configuration (kwinrc)

The file “modify_private_kwinrc”:
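A sketch of such a script (the ignored section names below are examples, not necessarily the ones I actually use):

```
#!/usr/bin/env chezmoi_modify_manager
source auto
# KWin rewrites these sections with machine-specific data
# (example names; adapt to what churns on your system)
ignore section "Session"
ignore regex "Tiling.*" ".*"
```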

This script ensures only the relevant window manager settings are tracked (the file “private_kwinrc.src.ini”):
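The source state could then contain only the settings worth tracking; a sketch with made-up values:

```ini
[Plugins]
blurEnabled=true

[Windows]
FocusPolicy=FocusFollowsMouse
```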

Global Shortcuts (kglobalshortcutsrc)

The file names follow the same convention shown earlier.

For keyboard shortcuts, you might want to ignore certain dynamic sections:
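A sketch of the “modify_private_kglobalshortcutsrc” script (the section names are assumptions about what typically varies per machine):

```
#!/usr/bin/env chezmoi_modify_manager
source auto
# Activity-specific bindings differ per machine (example names)
ignore section "ActivityManager"
ignore regex "activitymanager.*" ".*"
```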

This keeps your custom shortcuts while filtering out activity-specific bindings that may not be relevant across systems. The file “private_kglobalshortcutsrc.src.ini” is not shown because it’s quite huge.

Font Configuration (kdeglobals)

Sometimes you only want to track a single setting:
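A plausible sketch, assuming (as in the tool’s examples) that a “set” directive pins an entry even when a broad “ignore regex” leaves everything else at its current value; the font value is made up:

```
#!/usr/bin/env chezmoi_modify_manager
source auto
# Leave every entry at its current value...
ignore regex ".*" ".*"
# ...except the terminal font, pinned explicitly (made-up value)
set "General" "TerminalFont" "Hack Nerd Font,11,-1,5,50,0,0,0,0,0"
```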

This results in a clean config that only tracks the terminal font:
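The tracked state then boils down to something like this (again, the value is a made-up example):

```ini
[General]
TerminalFont=Hack Nerd Font,11,-1,5,50,0,0,0,0,0
```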

Kate Editor Configuration

For Kate, you might want to ignore volatile sections while keeping your editor preferences:
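A sketch of the Kate modify script (section names are examples of typically volatile state):

```
#!/usr/bin/env chezmoi_modify_manager
source auto
# State that changes on every run (example section names)
ignore section "Recent Files"
ignore section "MainWindow"
```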

Again, the “.src.ini” file is not shown.

Benefits

  • Clean diffs: Only track settings you care about
  • Portable configs: No system-specific clutter
  • Selective tracking: Include only relevant sections

Drawbacks

In general, setting up the files initially takes much more time: you need to understand what to include/exclude in the “modify_” scripts and craft the “.src.ini” files accordingly. Moreover, some Chezmoi mechanisms, such as “merge”, will not work, so updating the files requires alternative techniques, as outlined in the next blog post. Finally, besides “chezmoi”, you need to install an additional program, “chezmoi_modify_manager”.

Eclipse in Wayland (2025)

Let’s see what Eclipse looks like in Wayland in 2025.

I report some screenshots of a few Wayland Window Managers and Desktop Environments.

Sway

Eclipse looks good in Sway:

Hyprland

The same can be said for Hyprland, especially now that the infamous bug has been solved:

GNOME

No problems on GNOME either; I’d expect that since it is “natively” based on GTK:

If I run Eclipse forcing X11 (GDK_BACKEND=x11), I see no difference.

KDE

On KDE, the situation is not bad, but I find it far from optimal.

Eclipse on Wayland doesn’t look completely native on KDE Plasma:

Many parts are not sized correctly.

There’s also the oddity of a window title bar on the splash screen, which, of course, is not expected:

Dialogs are also not sized properly:

In X11, it looks better, or at least better integrated with the KDE desktop. Moreover, there’s no extra title bar on the splash screen, and the dialog is sized better (in both width and height):

That’s all for this post!

Using Unison File Synchronizer on macOS: Now Available via Homebrew

Unison, a powerful file synchronizer, has long been one of my favorite tools. However, installing Unison on macOS used to be a manual and sometimes cumbersome process, as detailed in my earlier guide.

The great news is that Unison is now available as a Homebrew cask! This means you can install it with a single command, leveraging the convenience and reliability of Homebrew’s package management.

To install Unison, simply run:
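Assuming the cask is simply named “unison”:

```shell
brew install --cask unison
```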

This command will download and install the Unison application into your /Applications folder.

After installation, you might encounter permission issues due to macOS’s security features (especially if you downloaded the app from the internet). To clear any extended attributes that might prevent Unison from launching, run:
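The “-c” option clears the extended attributes (including the quarantine flag) and “-r” applies it recursively to the app bundle:

```shell
xattr -cr /Applications/Unison.app
```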

This command removes quarantine flags and ensures Unison can start without macOS warnings.

For more details and troubleshooting tips, check out my earlier guide.

Installing EndeavourOS Linux on an old MacBook Air (2016)

I bought this laptop in late 2016. It’s still a good laptop (8 GB RAM, 128 GB SSD) and very light. However, I cannot use it with macOS anymore.

I previously blogged on installing Ubuntu on my old MacBook Air. Everything went mostly smoothly, except for the WiFi, which did not work during or after the installation, but it could be fixed by installing the proper module. The upgrade to Ubuntu 25.04, however, went badly: after the upgrade, the system did not boot anymore; I didn’t even get the Grub menu.

Ubuntu once again disappointed me. Time to wipe everything and go with my favorite Arch-based distribution: EndeavourOS. In particular, the “Mercury Neo” release.

Installation

After having put the EndeavourOS ISO on a USB stick with Ventoy and inserted the USB stick, turn on the Mac and immediately press and hold the Option (⌥) key until you see the startup disk selection screen:

Select the entry corresponding to the USB and get to the Ventoy menu.

After some time, here’s the live system:

Did you notice the WiFi icon? Exactly! The WiFi has been automatically detected! Well done!

This is the WiFi card:
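A command like the following lists the card together with the kernel driver in use:

```shell
lspci -k | grep -iA3 net
```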

And we can see that the corresponding kernel module is part of the live system (of course, it will also be part of the installed system):

I changed the keyboard layout for an easier installation.

Before starting the installation, I use the Welcome menu to update Arch and EndeavourOS mirrors, of course, after connecting to the WiFi.

Before going on, everything seemed to work: touchpad, keyboard backlight, volume, brightness; however, the function keys were inverted. We’ll fix that later.

Let’s start the installation, choosing the “Online” method:

Then, the installation, based on Calamares, is the standard one, showing a few dialogs to select some configurations:

I will install KDE:

I also select as additional packages the LTS kernel and the printing packages:

As usual, I choose Grub as the bootloader:

I choose to wipe everything, select BTRFS as the file system, and also “Swap with hibernation”:

Let’s review everything and choose “Install”:

The installation takes a few minutes. Time to restart:

The installed system

Here we are:

Here are some screenshots of the good-looking KDE desktop:

And, of course, the fastfetch output in all its glory:

Configuration

As already stressed, unlike Ubuntu, there’s no need to fix any WiFi problem: it works out of the box!

Then, let’s fix the function keys, which are inverted (so, to have F1, you’d have to press “Fn F1”, which is not ideal). You can try:
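For Apple keyboards, this is controlled by the hid_apple module; fnmode=2 makes the F-keys the default:

```shell
echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode
```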

If it works, change this permanently:
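A sketch of the permanent change (the dracut-rebuild helper is the one EndeavourOS ships; on other setups, regenerate the initramfs with your usual tool):

```shell
# make the hid_apple option permanent
echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf
# regenerate the initramfs so it is applied at boot
sudo dracut-rebuild
```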

Concerning hibernation (selected during the installation), it does work; however, there’s a big problem: when hibernating, the system does not power off, it reboots. Upon rebooting, it correctly resumes from hibernation, but, as it is, hibernation is rather useless. I still have not figured out how to fix that.

Another thing not working is the webcam.

This is the device:

Maybe it’s just a matter of installing the corresponding packages, but I haven’t investigated further yet.

Everything else works smoothly, as I have already said (including touchpad gestures in KDE).

Final thoughts

The laptop works great with EndeavourOS, even better than with Ubuntu. Everything is smooth and responsive. Even more so than with the laptop’s original, old version of macOS.

I noted that concerning sleep, the default configuration already uses the more power-saving setting:
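This can be checked via sysfs; the bracketed entry is the active mode, and “deep” is the more power-saving one (the sample output below is illustrative):

```shell
cat /sys/power/mem_sleep
# e.g.: s2idle [deep]
```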

Power consumption is also fine after installing the powertop package (running it with the “auto-tune” setting) and setting the power profile to “Power Saver” from the KDE menu. macOS probably used to have better power consumption, but this is acceptable.

It was a nice decision to put Linux on this laptop, even more so with EndeavourOS instead of Ubuntu! 🙂

Speed Up Your Linux System with Zram

Zram, https://www.kernel.org/doc/html/latest/admin-guide/blockdev/zram.html, is a Linux kernel module that creates a compressed block device in RAM. This device can be used as swap space or a general-purpose RAM disk. By compressing data in memory, zram allows your system to store more data in RAM, reducing the need to swap to slower disk storage and improving overall responsiveness.

In particular, zram

  • Increases effective RAM capacity by compressing data.
  • Reduces disk I/O and wear, especially useful on SSDs.
  • Improves performance on systems with limited memory.

If you’re looking to boost your Linux system’s performance, especially on machines with limited RAM, zram is a powerful tool worth exploring.

In this post, I’ll show how to set it up on both Arch Linux and Ubuntu.

Installing zram

On Arch Linux

Install the zram generator package:
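On Arch, the package is zram-generator:

```shell
sudo pacman -S zram-generator
```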

On Ubuntu

Install the systemd zram generator:
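On Ubuntu, the package is named systemd-zram-generator:

```shell
sudo apt install systemd-zram-generator
```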

Configuring zram

Create a configuration file to set up your zram device. For example, to allocate half of your system’s RAM to zram and use the efficient zstd compression algorithm, run:
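The zram-generator configuration lives in /etc/systemd/zram-generator.conf; one way to create it:

```shell
sudo tee /etc/systemd/zram-generator.conf > /dev/null <<'EOF'
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
```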

After saving the configuration, reboot your system to activate zram.

By default, zram will take precedence over an existing swap partition.

You can use the command zramctl to see the status of zram and swapon to show your swap partitions (the zram device will appear as /dev/zram0).

Install Nerd Fonts on macOS with Homebrew

I like Nerd Fonts a lot, and blogged about those in the past. If you spend a lot of time in the terminal, you’ve probably heard of them: they patch popular programming fonts with a huge set of icons, making your terminal and development environment look great and more informative.

Here’s how you can easily install Nerd Fonts on macOS using Homebrew.

Step 1: Install Homebrew (if you haven’t already)

If you don’t have Homebrew installed, open your terminal and run:
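This is the official install command from brew.sh:

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```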

Step 2: Install fontconfig

Before installing fonts, it’s a good idea to have fontconfig:
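```shell
brew install fontconfig
```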

Step 3: Install Nerd Fonts with Homebrew Cask

Homebrew makes it easy to install fonts with the --cask option. Here’s how I install my favorite Nerd Fonts:
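For example (this particular selection of cask names is mine, not necessarily the post’s original list):

```shell
brew install --cask \
  font-hack-nerd-font \
  font-jetbrains-mono-nerd-font \
  font-meslo-lg-nerd-font
```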

You can add or remove fonts from this list as you prefer. Homebrew will handle downloading and installing them for you.

Step 4: Use Your New Fonts

After installation, open your terminal or code editor’s settings and select your preferred Nerd Font from the font list. Now you can enjoy enhanced icons and a better coding experience!


Tip: You can browse all available Nerd Fonts with:
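```shell
brew search nerd-font
```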

Happy fonts! 🙂

Better diffs in Lazygit with delta

If you use Lazygit as your terminal Git UI, you know how convenient it is for staging, committing, and managing branches.

I use it in Neovim (LazyVim already configures it).

Integrating a custom pager (Lazygit Custom Pagers Documentation) can dramatically improve how diffs are displayed.

In this blog post, I’ll document how to use delta: a syntax-highlighting pager for git, diff, and grep output.

Delta makes diffs much more readable by adding syntax highlighting, line numbers, and custom themes. This is especially helpful when reviewing changes in Lazygit, as it makes it easier to spot what’s changed at a glance.

Installing delta

On Arch Linux (or derivatives), you can install delta with:
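The package is named git-delta:

```shell
sudo pacman -S git-delta
```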

For other platforms, check the delta installation instructions.

Configuring Lazygit to use delta

To use delta as the pager in Lazygit, add the following to your ~/.config/lazygit/config.yml:
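This is the configuration suggested in Lazygit’s custom pager documentation:

```yaml
git:
  paging:
    colorArg: always
    pager: delta --dark --paging=never
```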

This tells Lazygit to use delta for displaying diffs, with color always enabled and paging disabled (since Lazygit handles paging itself).

Here are two screenshots with the diffs better highlighted:

Show line numbers

If you want to see line numbers in your diffs, add “--line-numbers” to the pager line.
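For example:

```yaml
git:
  paging:
    colorArg: always
    pager: delta --dark --paging=never --line-numbers
```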

Customizing delta with themes

Delta supports custom themes for even better readability. You can find a collection of themes here.

To use these themes:

  1. Download the raw themes.gitconfig file, for example, to ~/.config/delta/themes.gitconfig:

  2. Include it in your global Git config by adding the following to your ~/.gitconfig:

  3. To see available themes, run:

  4. Pick a theme you like (e.g., colibri) and enable it in your Lazygit config:
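The steps above can be sketched as follows (assuming the themes file is still at the root of the delta repository):

```shell
# 1. Download the raw themes.gitconfig file:
mkdir -p ~/.config/delta
curl -fsSL -o ~/.config/delta/themes.gitconfig \
  https://raw.githubusercontent.com/dandavison/delta/main/themes.gitconfig

# 2. Include it from your global Git config:
git config --global include.path ~/.config/delta/themes.gitconfig

# 3. Browse the available themes interactively:
delta --show-themes

# 4. Enable a theme in ~/.config/lazygit/config.yml, e.g.:
#      pager: delta --dark --paging=never --features colibri
```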

For example, with the “colibri” theme:

With “weeping-willow” theme:

Enjoy your diffs! 🙂

Computing the total test execution time of Maven Surefire

When working with Maven projects, the Surefire plugin is commonly used to execute tests, but it lacks a built-in feature to display the total execution time across all test suites. This can be particularly important when monitoring performance trends in larger projects with many test classes.

Maven’s Surefire plugin reports execution time for individual test classes but doesn’t provide an aggregated view of the total test execution time. The reports are generated in the “target/surefire-reports” folder, both as text files and XML files.

Here’s a shell script that parses the XML report files generated by Surefire and calculates the total execution time. The script is compatible with both Linux and macOS environments.

These are the steps:

  1. Look for lines containing <testsuite> tags
  2. For each matching line, loop through all fields (words) in the line
  3. Find fields that start with time=
  4. Use gsub() to extract just the numeric value by removing the time=" prefix and the " suffix
  5. Add the extracted value to the running total
  6. Format the output in the same style, with the total time in seconds
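The awk core of the script can be sketched like this (the report path and the exact output wording are assumptions; a tiny sample report is created here only to make the snippet self-contained):

```shell
# Create two sample Surefire XML reports to demonstrate the logic
mkdir -p /tmp/surefire-reports
cat > /tmp/surefire-reports/TEST-Sample.xml <<'EOF'
<testsuite name="SampleTest" tests="2" errors="0" failures="0" time="1.25">
</testsuite>
EOF
cat > /tmp/surefire-reports/TEST-Other.xml <<'EOF'
<testsuite name="OtherTest" tests="3" errors="0" failures="0" time="2.5">
</testsuite>
EOF

# Sum the time="..." attributes of all <testsuite> elements
awk '
  /<testsuite/ {
    for (i = 1; i <= NF; i++)          # loop over all fields of the line
      if ($i ~ /^time=/) {
        gsub(/time="|".*$/, "", $i)    # strip time=" prefix and " suffix
        total += $i
      }
  }
  END { printf "Total time elapsed: %.3f s\n", total }
' /tmp/surefire-reports/TEST-*.xml
```

In your project, point the awk command at target/surefire-reports/TEST-*.xml instead of the sample directory.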

To make this script run automatically after your tests, you can integrate it into your Maven build process using the exec-maven-plugin:
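A sketch of the plugin configuration (plugin version and phase are assumptions; adjust to your build):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.5.0</version>
  <executions>
    <execution>
      <id>report-total-test-time</id>
      <phase>verify</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>bash</executable>
        <arguments>
          <argument>report-total-time.sh</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```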

The above snippet assumes the script is in a file “report-total-time.sh” in the same directory as the POM. Otherwise, you’ll have to adjust the argument accordingly.

When your tests complete, you’ll see output similar to:

That’s all!

Using Neovim in Gitpod

I’m going to show you how to use Neovim on Gitpod. This can be useful for checking and testing your Neovim configuration.

The example can be found here: https://github.com/LorenzoBettini/neovim-gitpod-example.

I’m using a LazyVim distribution as a demonstration.

The Gitpod custom Dockerfile, “.gitpod.Dockerfile”, must be tweaked to install Neovim and its requirements (especially for using Lazyvim):
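A sketch of such a Dockerfile (the release asset name and the package list are assumptions; the asset name may differ between Neovim releases):

```dockerfile
FROM gitpod/workspace-full

# Install a recent Neovim (LazyVim requires a fairly new release)
# plus the tools LazyVim typically expects (stow, ripgrep, fd)
RUN curl -fsSL -o /tmp/nvim.tar.gz \
      https://github.com/neovim/neovim/releases/latest/download/nvim-linux64.tar.gz \
 && sudo tar -C /opt -xzf /tmp/nvim.tar.gz \
 && sudo ln -s /opt/nvim-linux64/bin/nvim /usr/local/bin/nvim \
 && sudo apt-get update \
 && sudo apt-get install -y stow ripgrep fd-find \
 && sudo rm -rf /var/lib/apt/lists/*
```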

Then, the file “.gitpod.yml” must be tweaked accordingly; in particular, I’m using “stow” to create a symlink for the default Neovim configuration directory using as the source the configuration directory of this repository (you could also simply use the “ln” command for that):
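A sketch of the corresponding “.gitpod.yml” (task and extension names are assumptions):

```yaml
image:
  file: .gitpod.Dockerfile

tasks:
  - init: ./stow.sh   # symlinks this repo's config dir to ~/.config/nvim

vscode:
  extensions:
    - vscodevim.vim
```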

The “stow.sh” script is part of the repository. I also specify a few extensions to install in Gitpod’s VS Code.

Note that the first time, it will take a few minutes for Gitpod to provision such a Docker image.

Once in Gitpod, we can see that the link has already been configured:

Now, let’s enlarge the Terminal view and start Neovim.

We should see Neovim installing all the packages as configured by LazyVim:

Note the change of the default color scheme:

Let’s close the Lazy window and see the Dashboard:

Since I’m using a light theme in VS Code, upon restarting Neovim, the color scheme changes to its light variant as well:

We can now open the explorer (“space e”) to browse the contents:

By default, we have the Lua LSP installed.

We can use the file picker (“space f”):

We can use Lua LSP features like code completion:

And hover (“K”):

And change the color scheme with the picker (“space u C”):

Note that there are a few things that are not working correctly in Gitpod concerning Neovim:

  • Clipboard does not work since there’s no “DISPLAY” set.
  • Nerd Fonts are missing, so icons for folders and file types in the explorer are not rendered correctly.

That’s all!

Installing Ansible and Molecule in Arch Linux

Using “pip” is the supported installation method for Ansible and Molecule. Let’s install Python libraries and applications (in this case, Ansible and Molecule) in a Python virtual environment. (This post is similar to the one about Ubuntu.)

First, install the required packages, including the Python virtual environment package:
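A sketch (on Arch, the venv module ships with python itself; python-virtualenv is the optional virtualenv tool):

```shell
sudo pacman -S python python-virtualenv
```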

Create a virtual environment somewhere (in this example, I create it in my home folder as a subdirectory of a folder for all the virtual environments; the directory will be created automatically):
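For example (the directory name is just an example):

```shell
python -m venv ~/virtualenvs/ansible
```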

Once the virtual environment has been created, “enter” the virtual environment:
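```shell
source ~/virtualenvs/ansible/bin/activate
```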

Install the Python packages for Ansible, Molecule, and its plugins in the virtual environment:
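```shell
pip install ansible molecule "molecule-plugins[docker]"
```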

You can verify that everything is installed correctly, e.g., at the time of writing:
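For example:

```shell
ansible --version
molecule --version
```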

Each time you want to run Ansible or Molecule, just run the “source” command above:

And then you can run “ansible” and “molecule”.

When using “Oh My Zsh” with the “Powerlevel10k” theme, you also get the virtual environment name shown in the prompt: