In this article, I’ll describe the procedure to perform the upgrade of the BlueZ package, using the source code.
First, install the required libraries:
apt install -y libical3-dev python3-docutils
Find the latest release on the official repository:
latest_release=$(
git ls-remote https://github.com/bluez/bluez.git |
perl -lne 'print $1 if /refs\/tags\/([\d.]+)$/' |
sort -V |
tail -n 1
)
Perform a shallow clone:
git clone --depth 1 https://github.com/bluez/bluez.git -b "$latest_release"
cd bluez
Bootstrap and configure the compilation:
./bootstrap
# The `--libexecdir` option is required to match the Ubuntu paths configuration; otherwise,
# `/usr/libexec` is used by default.
#
./configure \
--prefix=/usr --mandir=/usr/share/man --sysconfdir=/etc --localstatedir=/var \
--libexecdir=/usr/lib
Compile and install:
make -j "$(nproc)"
# The file `/usr/lib/cups/backend/bluetooth` is also owned by the `bluez-cups` package; it's not
# clear if it needs to be up to date, but if one doesn't use it, it doesn't matter.
# In theory, the `--disable-cups` configure option can be used, but it causes a configure error
# (not mentioned by the documentation).
#
make install
Finally, hold the `bluez` package:
apt-mark hold bluez
As a general practice, it’s advisable to watch the releases on the GitHub project, so that one can perform an upgrade, especially in case of a security fix.
This isn’t solvable with a one-liner, but I wanted to solve it nonetheless, in a simple way; this tiny article will show how.
I often use `rsync` to keep some data in sync. In some cases I need to manually inspect the output, filtering out some noise (lines matching a certain pattern); I also need to view the progress updates.
It’s not possible to use grep - the logic is not simply “inspect a line, and if it doesn’t match a pattern, print it”; progress is displayed using special characters (the carriage return, `\r`) that need to be printed immediately.
Following this logic, grep should hypothetically print the characters immediately, then, when a newline is found, inspect the line, and erase or print according to the match; this functionality is not supported.
While a trivial (one-liner) solution is not possible, by using a scripting language, we can still implement a simple solution.
This is the implementation using Ruby:
stdbuf -o0 rsync \
--itemize-changes <other_params...> \
| ruby -e '
STDIN.each_char.each_with_object("") do |char, current_line|
print char
if char == "\n"
print "\e[A\e[K" if current_line =~ /^[.<>][fd]\.\.[.t][.p]\.\.\.\.\./
current_line.clear
else
current_line << char
end
end
'
Explanation of the most important concepts:

- `stdbuf -o0` sets rsync’s output to unbuffered, forcing it to send each character individually to the pipe; normally, the output is buffered for performance reasons
- `"\e[A\e[K"` is an ANSI escape sequence that moves up one line, and clears the landing line
- `^[.<>][fd]\.\.[.t][.p]\.\.\.\.\.` is a pattern that matches lines indicating entries whose permissions or modification time changed (in the format enabled by `--itemize-changes`; for the details, see the man page, via `man rsync | less +/--itemize-changes, +n`); note that the regex could be simplified, but with the current structure, it’s more readable

While the solution is not a one-liner, it’s still compact and intuitive; additionally, it can be easily separated into a script (e.g. `grep_progress`) that can be reused in such cases.
Happy scripting!
In this article I’ll explain how to set up a mirrored and encrypted btrfs root filesystem.
The resulting setup is:
Note that, for simplicity, the btrfs encrypted volume on disk B fills the space corresponding to the swap partition.
The EFI partition on disk B is valid, and can be used if anything happens to disk A; however, its content is not automatically synced if there are changes to the disk partitioning.
A typical way to perform automatic syncing is via an `apt` hook; however, the sync will be performed on each package setup, which may be excessive.
Since on a stable system, there won’t be changes to the EFI partition (kernel updates reflect on the boot partition, not the EFI one), it’s not strictly necessary to implement syncing - the decision is up to the user.
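For reference, a minimal version of such an apt hook could look like the following (the file name is arbitrary; `/boot/efi2` is the mount point of the second EFI partition, as configured later in this guide):

```
# /etc/apt/apt.conf.d/99-sync-efi (hypothetical name)
DPkg::Post-Invoke { "if mountpoint -q /boot/efi2; then rsync --archive --delete /boot/efi/ /boot/efi2; fi"; };
```

The `mountpoint` guard makes the hook a no-op if the second EFI partition happens not to be mounted.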
I’ve maintained a ZFS installer for a few years; I’ve ultimately archived it because a ZFS setup comparable to the one proposed in this guide is trivial to configure (just add a new device to the mirror after installation!).
Why choose btrfs over ZFS, then? In my opinion, there’s no reason; ZFS is (again, in my opinion) superior in every aspect.
There are a few exceptions where btrfs is preferable:
For users who don’t have such requirements, I advise against using btrfs.
Any procedure that alters the standard course of installation is inherently unstable; the installer (Ubiquity) is very rough around the edges, and it doesn’t help power users in any way, but most importantly, it doesn’t have a specification. For this reason, even a well-written procedure that works at a point in time, may fail after some time for very minor, but still breaking, details.
A few strategies can be used; some of them have only a few moving parts, and they will likely be stable for a very long time.
Generally speaking, with solutions 3. and 4., barring architectural changes, the only potential for breakages is in the predefined names (but automated detection can be implemented, if one wants).
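As an example of such automated detection (a sketch under the assumption that the target disks are the first two disks reported by `lsblk`; adjust the selection logic to the actual machine):

```shell
# List the device names of the first two disks; partitions and other block
# device types are excluded by `--nodeps` and the TYPE filter.
lsblk --nodeps --noheadings --output NAME,TYPE |
  awk '$2 == "disk" { print "/dev/" $1 }' |
  head -n 2
```

The two resulting lines can then be assigned to the `DISK1_DEV`/`DISK2_DEV` variables used below, instead of hardcoding `/dev/sda` and `/dev/sdb`.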
In this procedure, one does:
This procedure is the one described by the guides at mutschler.dev; it’s not very stable, because there are many moving parts that can break the installer. Additionally, patching the programs is very fragile, and causes odd Ubiquity errors when it doesn’t work.
In this procedure, one does:
This procedure is a middle ground. There are considerably fewer moving parts than setting up the disks pre-installation, because the standard Ubuntu setup is used.
The disadvantage is that one still does some level of customization behind Ubiquity’s back, which requires manually setting up the bootloader at the end.
In this procedure, one does:
This is a very stable procedure, as Ubiquity will complete the installation without any underlying change. The only downside is that in-place conversion requires a few extra commands, because the converted partition is unoptimized.
In this procedure, one does:
This is a very stable procedure, very much like #3. The only downside is that it’s slower.
We assume the installation of Ubuntu 22.04 Jammy, on two disks, `sda` and `sdb`. If the devices are different, e.g. NVMe, just change the related variables.
ubiquity --no-bootloader
It’s not possible to make Ubiquity install the bootloader; with the btrfs changes, it crashes, without any meaningful message in the log. It’s a bit odd, because installing and updating GRUB from a chrooted target succeeds.
# The options chosen below are indicative, and depend on the kernel version.
#
export BTRFS_OPTS=noatime,compress=zstd:1,space_cache=v2,discard=async
DISK1_DEV=/dev/sda
DISK2_DEV=/dev/sdb
MIRROR_LV_NAME=vgubuntu-mate-mirror # arbitrary, but leave 'mirror' in the name, so it's recognized
PASSWORD=foo # same as the one entered during Ubiquity's setup
ROOT_LV_DEV=$(find /dev/mapper -name '*-root')
# This script doesn't require interaction; it displays some useful information during execution.
# Note that the cloned EFI partition is set up at the end of the second step.
# Sample output:
#
# /dev/mapper/vgubuntu--mate-root on /target type ext4 (rw,relatime,errors=remount-ro)
# /dev/sda2 on /target/boot type ext4 (rw,relatime)
# /dev/sda1 on /target/boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
#
mount | grep target
umount /target/boot/efi
# -rT: Copy content, including hidden files; not necessary, but better safe than sorry.
#
TEMP_DIR_BOOT=$(mktemp --directory)
cp -avrT /target/boot "$TEMP_DIR_BOOT"/
umount /target/boot
TEMP_DIR_TARGET=$(mktemp --directory)
cp -avrT /target "$TEMP_DIR_TARGET"/
umount /target
sgdisk $DISK1_DEV -R $DISK2_DEV
sgdisk -G $DISK2_DEV
CONTAINER2_NAME=$(basename $DISK2_DEV)3_crypt
echo -n "$PASSWORD" | cryptsetup luksFormat ${DISK2_DEV}3 -
echo -n "$PASSWORD" | cryptsetup luksOpen ${DISK2_DEV}3 "$CONTAINER2_NAME" -
# A LUKS container is not strictly necessary; however, it makes the second device's structure
# consistent with the first's; additionally, password caching operates on volume groups.
# Display the containers; sample output:
#
# sda3_crypt (253, 0)
# sdb3_crypt (253, 3)
#
dmsetup ls --target=crypt
# Create a physical volume.
#
pvcreate /dev/mapper/"$CONTAINER2_NAME"
# List the physical volumes; sample output:
#
# PV VG Fmt Attr PSize PFree
# /dev/mapper/sda3_crypt vgubuntu-mate lvm2 a-- 61.81g 0
# /dev/mapper/sdb3_crypt lvm2 --- 63.98g 63.98g
#
pvs
# Create a volume group.
#
vgcreate "$MIRROR_LV_NAME" /dev/mapper/"$CONTAINER2_NAME"
# Display the volume groups; sample output:
#
# VG #PV #LV #SN Attr VSize VFree
# vgubuntu-mate 1 2 0 wz--n- 61.81g 0
# vgubuntu-mate-mirror 1 0 0 wz--n- 63.98g 63.98g
#
vgs
# Create a logical volume (in the volume group).
# [n]ame; [l] size in extents
#
lvcreate -l 100%FREE -n root "$MIRROR_LV_NAME"
# List the logical volumes; sample output:
#
# LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
# root vgubuntu-mate -wi-a----- 58.16g
# swap_1 vgubuntu-mate -wi-ao---- <3.65g
# root vgubuntu-mate-mirror -wi-a----- 63.98g
#
lvs
mkfs.btrfs -f "$ROOT_LV_DEV"
mount -o $BTRFS_OPTS "$ROOT_LV_DEV" /target
MIRROR_LV_DEV=$(find /dev/mapper -name '*mirror*-root')
btrfs device add "$MIRROR_LV_DEV" /target
btrfs balance start --full-balance --verbose -dconvert=raid1 -mconvert=raid1 /target
# Sample output:
#
# Data,RAID1: Size:2.00GiB, Used:0.00B (0.00%)
# Metadata,RAID1: Size:1.00GiB, Used:128.00KiB (0.01%)
# System,RAID1: Size:64.00MiB, Used:16.00KiB (0.02%)
#
btrfs filesystem usage /target | grep -P '^\w+,'
btrfs subvolume create /target/@
btrfs subvolume create /target/@home
umount /target
mount -o subvol=@,$BTRFS_OPTS "$ROOT_LV_DEV" /target
mkdir /target/home
mount -o subvol=@home,$BTRFS_OPTS "$ROOT_LV_DEV" /target/home
cp -avrT "$TEMP_DIR_TARGET" /target/
mkfs.btrfs -f ${DISK1_DEV}2
mount -o $BTRFS_OPTS ${DISK1_DEV}2 /target/boot
btrfs device add ${DISK2_DEV}2 /target/boot
btrfs balance start --full-balance --verbose -dconvert=raid1 -mconvert=raid1 /target/boot
cp -avrT "$TEMP_DIR_BOOT" /target/boot/
mount ${DISK1_DEV}1 /target/boot/efi
sed -i '/vgubuntu--mate-root/ d' /target/etc/fstab
sed -i "/^# \/boot / i $ROOT_LV_DEV / btrfs defaults,subvol=@,$BTRFS_OPTS 0 1" /target/etc/fstab
sed -i "/^# \/boot / i $ROOT_LV_DEV /home btrfs defaults,subvol=@home,$BTRFS_OPTS 0 2" /target/etc/fstab
BOOT_PART_UUID=$(blkid -s UUID -o value ${DISK1_DEV}2)
sed -i "/^UUID.* \/boot / c UUID=$BOOT_PART_UUID /boot btrfs defaults,$BTRFS_OPTS 0 2" /target/etc/fstab
# Can't set keyscript=decrypt_keyctl now; see the second part of the procedure.
#
LUKS_DISK2_PART_UUID=$(blkid -s UUID -o value ${DISK2_DEV}3)
echo "$CONTAINER2_NAME UUID=$LUKS_DISK2_PART_UUID none luks,discard" >> /target/etc/crypttab
Now return to the installer, and complete the installation. At the end, click on “Continue”; don’t reboot.
export DISK1_DEV=/dev/sda
export DISK2_DEV=/dev/sdb
export BTRFS_OPTS=noatime,compress=zstd:1,space_cache=v2,discard=async # same as set in step #2
ROOT_LV_DEV=$(find /dev/mapper -name '*-root' -not -name '*mirror*')
# This script doesn't require interaction.
mount -o subvol=@,$BTRFS_OPTS "$ROOT_LV_DEV" /target
mount ${DISK1_DEV}2 /target/boot
mount ${DISK1_DEV}1 /target/boot/efi
for vdev in dev sys proc run; do mount --bind /$vdev /target/$vdev; done
chroot /target
# Cache the password, so that it's not asked twice for the two volume groups.
#
perl -i -pe 's/$/,keyscript=decrypt_keyctl/' /etc/crypttab
# The `keyutils` package is required in order to use `keyscript=decrypt_keyctl`.
# The `btrfs-progs` package is required to load the btrfs filesystem; without it, everything proceeds
# well, but on boot, the root filesystem won't load, opening busybox.
#
apt install -y grub-efi-amd64-signed keyutils btrfs-progs
grub-install ${DISK1_DEV}
update-grub
exit
# Setup the cloned EFI partition, and sync it.
#
mkfs.fat -F 32 -n EFI ${DISK2_DEV}1
mkdir /target/boot/efi2
mount ${DISK2_DEV}1 /target/boot/efi2
EFI2_PART_UUID=$(blkid -s UUID -o value ${DISK2_DEV}1)
echo "UUID=$EFI2_PART_UUID /boot/efi2 vfat umask=0077 0 1" >> /target/etc/fstab
rsync --archive --delete --verbose /target/boot/efi/ /target/boot/efi2
umount --recursive /target
The procedure has been completed. Reboot and enjoy!
Ubiquity is a very limited and ultimately frustrating software. Fortunately, the operating system as a whole has good support for btrfs, so there is a range of options, which includes very stable and conceptually simple (enough) solutions.
Happy mirroring 😁
In this article I’ll describe how to restore support for the S3 sleep state on the Lenovo Yoga 7 AMD Gen 7. This method likely applies to other models; for example, the HP ENVY x360 has the same S3 conditional logic in the DSDT.
The procedure is generic, and can be performed on any Linux distribution; the difference should be just in the tools package. On Ubuntu, install `acpica-tools`:
$ sudo apt install -y acpica-tools
In order to verify which sleep states the machine supports, run:
# This message comes from the kernel ring buffer, which rotates; if nothing is shown, reboot and
# rerun the command.
#
$ sudo dmesg | grep 'ACPI.*supports S'
[ 0.309933] ACPI: PM: (supports S0 S4 S5)
Dump the DSDT:
# Can also be achieved via `acpidump -b`, which dumps more data (not required in this context).
#
$ sudo cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
Disassemble it:
$ iasl -d dsdt.dat
The resulting disassembly, `dsdt.dsl`, is human-readable. On the Lenovo Yoga 7 AMD Gen 7, one can see that the S3 state is supported, but with conditionals:
If ((CNSB == Zero))
{
If ((DAS3 == One))
{
Name (_S3, Package (0x04) // _S3_: S3 System State
{
0x03,
0x03,
Zero,
Zero
})
}
}
I don’t have domain knowledge; however, my educated guess is that this is (primarily) a check of whether the option is set in the firmware (the Lenovo Yoga 7 AMD Gen 7 allows the user to access only very basic firmware settings, and this one is not included).
The fix is simply to remove the conditionals; this can be done with any editor, or with a Perl script:
# A backup file (`dsdt.dsl.bak`) is created.
#
# Regex: remove the four lines before "S3_: S3 System State" and the two lines after; keep the six
# lines in between.
#
perl -0777 -i.bak -pe 's/(.+\n){4}(.+_S3_: S3 System State\n(.+\n){6})(.+\n){2}/$2/m' dsdt.dsl
We also need to bump the DSDT revision, otherwise when booting, the patched DSDT will be overridden (this is not required if patching the kernel):
# Regex: replace the last value of the DSDT table header definition:
#
perl -i -pe 's/^DefinitionBlock.+\K0x00000001/0x00000002/' dsdt.dsl
Now we just reassemble the DSDT:
iasl -tc dsdt.dsl
This will generate multiple files - different override methods require different files.
There are different approaches to overriding the DSDT. I’ll describe what I’ve tested, and the pros/cons.
The best method is to add an initrd hook; it’s clean, and it doesn’t require any maintenance:
# Create the initrd image, including the patched DSDT in the appropriate directory, which corresponds
# to the `firmware/acpi` subdirectory of the `/sys` virtual filesystem.
#
mkdir -p kernel/firmware/acpi
cp dsdt.aml kernel/firmware/acpi
find kernel | cpio -H newc --create | sudo tee /boot/acpi_override > /dev/null
# Now create the hook. Note that this is not the canonical style for hooks; it's been reduced to the
# simplest form, for clarity.
#
sudo tee /etc/initramfs-tools/hooks/acpi_override << 'SH'
#!/bin/sh
if [ "$1" = prereqs ]; then
echo
else
. /usr/share/initramfs-tools/hook-functions
prepend_earlyinitramfs /boot/acpi_override
fi
SH
sudo chown root: /etc/initramfs-tools/hooks/acpi_override
sudo chmod 755 /etc/initramfs-tools/hooks/acpi_override
# Now update the initramfs (for all the kernels).
#
sudo update-initramfs -k all -u
For those who use a patched kernel, it’s just a matter of setting the related configuration symbol(s):
# Run from the kernel source root.
#
scripts/config --set-val CONFIG_ACPI_CUSTOM_DSDT y
scripts/config --set-val CONFIG_ACPI_CUSTOM_DSDT_FILE '"/path/to/dsdt.hex"'
Then recompile and boot. Done!
This is a method that works, but it’s discouraged, since it requires repeating the operation every time the initrd image is regenerated (essentially, on any kernel update).
# Create the initrd image, including the patched DSDT in the appropriate directory, which corresponds
# to the `firmware/acpi` subdirectory of the `/sys` virtual filesystem.
#
$ mkdir -p kernel/firmware/acpi
$ cp dsdt.aml kernel/firmware/acpi
$ find kernel | cpio -H newc --create > initrd-patched-dsdt.img
# Backup the initrd for the running kernel, and prepend the initrd image just created, to the
# regular kernel one.
#
$ cp /boot/initrd.img-"$(uname -r)" .
$ cat initrd-patched-dsdt.img initrd.img-"$(uname -r)" | sudo tee /boot/initrd.img-"$(uname -r)" > /dev/null
The source, for the generic method, is in the kernel docs.
Another clean and automatic method is to set the custom initrd image via GRUB. Note that this method has been reported to work, but it didn’t on my OS.
$ sudo cp dsdt.aml /boot/patched-dsdt.aml
$ echo acpi /boot/patched-dsdt.aml | sudo tee -a /boot/grub/custom.cfg
$ sudo update-grub
This should work on systems where `/boot/grub/custom.cfg` is included by default; on Ubuntu, this rule is encoded in `/etc/grub.d/41_custom`:
$ cat /etc/grub.d/41_custom
#!/bin/sh
cat <<EOF
if [ -f \${config_directory}/custom.cfg ]; then
source \${config_directory}/custom.cfg
elif [ -z "\${config_directory}" -a -f \$prefix/custom.cfg ]; then
source \$prefix/custom.cfg
fi
EOF
In case a given distro doesn’t include `/boot/grub/custom.cfg`, just add the rule file.
On reboot, support for S3 sleep state will be advertised:
$ sudo dmesg | grep 'ACPI.*supports S'
[ 0.648536] ACPI: PM: (supports S0 S3 S4 S5)
# Go to sleep!
#
$ systemctl suspend
Watch out! After suspending, closing the laptop lid will wake up the system!! I don’t know what precisely causes this, but fixing this behavior is outside the scope of the article.
If the procedure doesn’t yield the desired effect, my advice is to first rule out problems with the boot override: disable the S4 sleep support (just comment out or remove the corresponding block), and if, after boot, the change has been successfully applied:
$ sudo dmesg | grep 'ACPI.*supports S'
[ 0.309933] ACPI: PM: (supports S0 S5)
then the problem is in the DSDT patch.
The following methods either didn’t work on my system, or they’re not robust:

- `GRUB_EARLY_INITRD_LINUX_CUSTOM` won’t work on at least some operating systems (e.g. Fedora)

I don’t exclude that, with appropriate changes, some of the methods above can work.
The following are some references on the topic. Watch out: the Microsoft and AnandTech references are biased and/or deceptive, and they’re present only for completeness.
Removal of the S3 sleep state is a terrible state of affairs.
It’s not clear who drove this change; according to some, Microsoft initiated it, in its push for the “Connected/Modern Standby”; according to others, the producers did.
I believe that Microsoft has been the driver, in particular considering that they’ve been pushing Modern Standby through a campaign of deceptive half-truths. Don’t forget that Microsoft’s OS ships on the near-totality of (non-Mac) desktop PCs, and they have enormous leverage on producers.
The future is uncertain. Some hardware producers do make available the S3 option in the firmware; vote with your wallet (and some noise 😁).
In this article I present easy-to-read diagrams that one can refer to while developing the exercises.
The fields described in the diagrams are a subset of the full specification - they’re only those required to solve the problems of the CodeCrafters project; some concepts are therefore skipped, e.g. overflow.
The green background color indicates fields that are shared across different page types; if fields of a child are highlighted, but not the parent, it means that the child itself is optional, but when present, its highlighted fields are mandatory.
The diagram files can be found in the related repository of mine; the reference format is PlantUML.
If you find any error, please contact me, or add a comment (below)!
This is a relatively typical task, so there is plenty of information around, however, I’ve found lack of clarity about the concepts involved, outdated and incomplete information, etc.
For this reason, I’ve decided to write a small guide about this task, which can be used not only as a copy/paste reference, but can also be read in full, in order to get a better understanding of the concepts involved.
This guide is based on Debian/Ubuntu systems; however, it can be easily adapted to other systems.
In order to compile the kernel, some packages are required. They may change with time, so this is an approximate list:
sudo apt install libncurses5 libncurses5-dev libncurses-dev qtbase5-dev-tools flex \
bison openssl libssl-dev dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf
There are different repositories available:

- `https://github.com/torvalds/linux.git`: official (Torvalds’) kernel repository; doesn’t include patch versions
- `git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal`: Canonical release versions (see here)
- `git://git.launchpad.net/~ubuntu-kernel-test/ubuntu/+source/linux/+git/mainline-crack`: Canonical mainline/testing versions

It’s also possible to download the source code via the Ubuntu kernel source packages; however, it’s simpler to just use a repository.
For simplicity, we’ll use the official kernel repository, but the procedure to configure and compile all the versions is identical.
In case one wants to use a specific Canonical version, this guide explains how to find the reference kernel version corresponding to a Canonical one.
Clone the (reference) repository:
git clone git@github.com:torvalds/linux.git
cd linux
Now, we checkout the desired version:
# In this example, we check out the major.minor version corresponding to the running kernel (e.g. v5.15).
#
git checkout "v$(uname -r | cut -d. -f1-2)"
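A quick check of the tag name derivation, on a sample release string (the string is illustrative; upstream tags are `v`-prefixed, e.g. `v5.15`):

```shell
# `uname -r` returns something like `5.15.0-56-generic`; we keep major.minor
# and prepend `v` to form the tag name.
release=5.15.0-56-generic
echo "v$(echo "$release" | cut -d. -f1-2)"
```

This prints `v5.15`.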
If we want to patch the kernel, this is the appropriate stage.
For example, this fixes the keyboard problem on modern AMD Zen systems (6800+):
git cherry-pick 9946e39fe8d0a5da9eb947d8e40a7ef204ba016e
The kernel compilation centers around the configuration file, `.config`.
This file doesn’t come directly with the repository, so there are several considerations to make:
There are different approaches to address these points.
Gathering the configuration of a certain kernel that is not currently running (therefore, from a 3rd party source) is not always feasible; there is a script in the repository for performing this operation (`extract-ikconfig`), however, it requires the given kernel to be compiled with a specific option.
In this guide, we’ll therefore use the configuration of a running kernel as base.
This command copies the running kernel configuration, and applies the defaults for new settings added by the new kernel version:
make olddefconfig
There are two ways of making changes to the configuration: programmatic and interactive.
Programmatically, one uses the script `scripts/config` (which has different actions, like setting and removing entries). However, this is dangerous; some logical changes require multiple settings to be changed, so it’s easy to make mistakes.
The simplest and safest way is to run the interactive programs:
make xconfig # X11
make menuconfig # terminal
Both will also run `make olddefconfig`, if this hasn’t been done already.
The clearest way of observing configuration changes is via `scripts/diffconfig`, which is cleaner than a manual diff:
$ scripts/diffconfig .config{.old,}
HZ 250 -> 100
HZ_100 n -> y
HZ_250 y -> n
$ diff .config{.old,}
457,458c457,458
< # CONFIG_HZ_100 is not set
< CONFIG_HZ_250=y
---
> CONFIG_HZ_100=y
> # CONFIG_HZ_250 is not set
461c461
< CONFIG_HZ=250
---
> CONFIG_HZ=100
It’s certainly possible, if one wants, to run the interactive program and perform the changes, then run `scripts/diffconfig`, and convert the differences to `scripts/config` commands. In this case, don’t forget to use the exact actions:

- `--undefine`: entirely remove an entry
- `--disable`: comment an entry out
- `--enable`: uncomment an entry
- `--set-val`: set a value
- `--set-str`: set a quoted value

Before proceeding with the customizations, there are some changes to apply.
The first is necessary on Ubuntu/Debian configurations; we must specify not to bake extra trusted X.509 keys into the kernel (used to verify kernel modules; see here):
scripts/config --set-str SYSTEM_TRUSTED_KEYS ""
scripts/config --set-str SYSTEM_REVOCATION_KEYS ""
Without this change, the kernel compilation will raise an error like `No rule to make target 'debian/canonical-certs.pem', needed by 'certs/x509_certificate_list'`.
Then, we disable the debug information; by default (as of v5.19), an extra 1.2 GiB package is generated, containing the kernel debugging information, which is not useful for the general public.
The easiest way is to disable it interactively; the entry is located under `Kernel hacking` -> `Compile-time checks and compiler options` -> `Compile the kernel with debug info`.
On a v5.19 kernel, the corresponding programmatic changes are:
scripts/config --undefine DEBUG_INFO
scripts/config --undefine DEBUG_INFO_COMPRESSED
scripts/config --undefine DEBUG_INFO_REDUCED
scripts/config --undefine DEBUG_INFO_SPLIT
scripts/config --undefine GDB_SCRIPTS
scripts/config --set-val DEBUG_INFO_DWARF5 n
scripts/config --set-val DEBUG_INFO_NONE y
Now we can apply the desired customizations.
For example, the kernel timer frequency entry is listed under `Processor type and features` -> `Timer frequency`.
On a v5.15 kernel, the programmatic changes to set a 1000 Hz frequency are:
scripts/config --set-val HZ 1000
scripts/config --set-val HZ_1000 y
scripts/config --set-val HZ_250 n
Time to build the kernel!
It’s common practice to add a version modifier, in order to make the kernel recognizable:
version_suffix="timer-1000"
make -j "$(nproc)" bindeb-pkg LOCALVERSION=-"$version_suffix"
This will run `make clean`, and generate the desired deb packages (along with other files) in the parent directory; note that the firmware files are not included (they’re in a separate repository).
If there are errors, the last error message is not informative; either scroll up, or run without -j
(which makes the last error message informative).
If the build is interrupted, it’s best to perform a complete reset:
make mrproper
If not done, temporary files may be left in the filesystem, which can cause very confusing errors on the next build attempt.
Although I would have expected the procedure to be trivial, it wasn’t. Once the involved concepts were clear though, the procedure became simple and straightforward.
It’s now trivially possible for everybody to have a standard-as-desired kernel, with the intended customizations.
Happy kernel hacking!
There are some important things that are very hard to assess before moving to the cloud service itself.
We moved, long ago, to AWS, and we had certain surprises; in this article, I’ll describe them, so that companies that plan to move to the cloud can make more informed decisions.
This article is updated to Jan/2023; I will update it if/when I find other notable things.
Even when an application doesn’t make heavy use of disks, it happens sometimes that a certain event will trigger heavy I/O load, at least for a short time.
In AWS, disks (EBS) have three main properties:
If/when the application does heavy I/O, it risks draining the I/O budget, therefore falling back to the minimum guaranteed I/O. Even if such events are rare, they surely happen in any application, and they must be taken into account, since they can cause insufficient performance or even downtime.
There are two strategies to handle this:
Both solutions have a cost.
Option (1) is possible; however, making the application handle I/O with certainty is a nontrivial task from a development perspective (that is, a development cost), and it’s a continuous job (since applications typically add new features).
Option (2) is easy to apply, but it has a monetary cost; a large part of the disks will be left unused, which is undesirable.
AWS has recently (2022) introduced the `gp3` disks, which have a high baseline, therefore resolving the I/O budget problem. Cunningly though, AWS doesn’t offer this disk type to RDS users, who are stuck with this problem, and the related cost.
If an application has I/O peaks (e.g. updates of dozens of millions of records in the database), even if seldom, the user must plan I/O costs very carefully when moving to AWS.
One would think that they can stop certain services, and resume them when they wish, while paying for the storage only in the meantime.
This is not possible. There is one exception, RDS (database), whose services can be stopped, but they restart automatically after one week (!). This is so undesirable in our case that we wrote a Lambda that stops RDS instances when they’re not explicitly turned on.
AWS storage services can’t be stopped. The only exception is RDS (databases), but it needs code to be written, in order to keep it stopped.
AWS has introduced, in the last years, their own ARM hardware; this includes the platforms for database services.
For AWS customers, it’s crucial to reserve database instances, which are typically a (very) large part of the bill.
When the new generation of RDS instances (the 6th) was introduced, only ARM instances could be reserved at first; it took (I estimate) a few months for the reservations to be available on Intel/AMD as well.
This meant that users requiring a reservation in the meantime either had to switch to ARM, or to use an older RDS generation.
It is possible (but not necessarily the case) that, for some periods of time, RDS reservations are only available for ARM instances and older Intel/AMD generations, but not for new Intel/AMD ones.
AWS doesn’t provide any specification of the downtime caused by service upgrades; the documentation is typically fuzzy (reporting a “best effort” approach), and the upgrades have imprecise timespans, without any indication.
For example, even if one has a redundant ElastiCache cluster, and an upgrade specifies “up to 30 minutes” per node, there is no indication about when, within the allocated time (say, 30 minutes * 2 nodes = 1 hour!), the connection will drop, and for how long.
This means that if the application has no measures against sudden connection drops, it will experience unpredictable disruption of service during the upgrade.
The application must have measures against connection drops from all the services, all over the application, even for services configured with redundant topologies. If this is not the case, unpredictable disruption of service will be experienced during service upgrades.
GitHub Actions supports data caching (through a dedicated action). I was very surprised though, when I noticed that caching, as frequently described, has a very severe limitation - it’s not shared across PRs; this limits its performance severely.
In this small article, I’ll describe the problem, the solution, and two preset workflows, in Ruby and Rust.
If a dev sets up CI as typically described, they will get caching; opening a PR will have the first workflow run fill the cache, then subsequent runs of the same PR will reuse it.
This is very inefficient; if the cached operation is slow (e.g. installing many Ruby gems, or building a large Rust project), the first workflow run for each PR will take a considerable time.
The reason for this is actually explained in the GitHub Actions documentation (emphasis mine):
Restrictions for accessing a cache
A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch (usually main). For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch feature-b has the base branch feature-a, a workflow triggered on feature-b would have access to caches created in the default branch (main), feature-a, and feature-b.
Access restrictions provide cache isolation and security by creating a logical boundary between different branches. For example, a cache created for the branch feature-a (with the base main) would not be accessible to a pull request for the branch feature-c (with the base main).
Multiple workflows within a repository share cache entries. A cache created for a branch within a workflow can be accessed and restored from another workflow for the same repository and branch.
Surprisingly, this detail is frequently omitted. For example, this is the Ruby section of the caching action's documentation:
Caching gems with Bundler correctly is not trivial and just using actions/cache is not enough.
Instead, it is recommended to use ruby/setup-ruby’s bundler-cache: true option whenever possible:
- uses: ruby/setup-ruby@v1
  with:
    ruby-version: ...
    bundler-cache: true
The setup-ruby action doesn't mention it either.
A convenient solution to improve cache reuse is to build the cache on every push to the main branch; since caches created on the default branch are accessible from any pull request, every PR then starts from a warm cache.
Note that if there are no relevant changes (e.g. no new libraries added), the cache will be fully reused.
I provide two sample implementations here, for Ruby and Rust.
Please note that they’re intentionally bare-bones; for real projects, there are many small things to add (names, conditions, job matrices etc.).
In Ruby, we're going to rely on the setup-ruby action.
Main branch workflow:
on:
push:
branches: [ main ]
jobs:
build_ruby_cache:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
The following is a basic example of a CI workflow to run on PRs:
on:
pull_request:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ruby/setup-ruby@v1
with:
bundler-cache: true
- run: bundle install
- run: bundle exec rspec
Things are simple in Ruby land 😄
Rust, in principle, is the same; the complication is that we need to differentiate between the (Cargo) build commands.
For example, if we run Clippy, its data overlaps with the regular (Cargo) build data, but it's not the same; therefore, we need to build both caches.
Something else to keep in mind is that getting caching right in Rust projects is very important, as the compiler is “not exactly a speed demon” 😄, and build time accumulates with extreme ease.
In this example, we'll just perform two PR jobs - a formatting check, and Clippy correctness checks - and fail the build if either fails.
Main branch workflow:
on:
push:
branches: [ main ]
jobs:
build_clippy_cache:
name: Build Clippy cache
runs-on: ubuntu-latest
steps:
# Don't forget to install dev libraries 🙂
- run: sudo apt install libasound2-dev libudev-dev
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- uses: actions-rs/cargo@v1
with:
command: clippy
The cached paths are the standard cargo cache locations, and the project build directory.
Now, the PR workflow:
on:
pull_request:
jobs:
check_formatting:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
clippy_correctness_checks:
runs-on: ubuntu-latest
steps:
- run: sudo apt install libasound2-dev libudev-dev
- uses: actions/checkout@v3
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- uses: actions-rs/cargo@v1
with:
command: clippy
args: -- -W clippy::correctness -D warnings
Nice and easy! Note how we don't cache cargo fmt, since it doesn't involve any build.
When adding, as typical, full project builds (for testing, release, etc.), the corresponding (Cargo) build jobs need to be added to the main branch workflow.
GitHub Actions provides 10 GB of cache storage for each repository, which is enough space to build a mid-sized Rust project for multiple platforms.
I'm baffled that this topic is not mentioned more frequently, and indeed, not all devs are aware of it.
Regardless, solving the problem is easy, both conceptually and in implementation.
Happy CI 😄
]]>The idea is for a beginner to learn ECS concepts from the base book, then apply them using Bevy; the structure of the game is ideal for a gentle introduction to ECS architecture.
Read it here!
]]>features = ["dynamic"]).
While this works fine when manually invoking Cargo, attempting to launch a debug session from Visual Studio Code will raise this error:
/path/to/project/target/debug/project: error while loading shared libraries: libbevy_dylib-ae04813e8bd66866.so: cannot open shared object file: No such file or directory
This is a relatively common topic on the net, but the solutions presented are not very clear.
Exactly what one needs to do is add this entry to the launch configuration (in launch.json):
"env": {
"LD_LIBRARY_PATH": "${workspaceFolder}/target/debug/deps:${env:HOME}/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib",
}
This assumes that the dev uses Rustup and the nightly toolchain; if one uses the stable toolchain, replace nightly with stable.
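For context, a complete minimal launch.json could look like the following sketch; the debugger type (CodeLLDB) and the binary name (`project`) are assumptions, to be adapted to one's own setup:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      // Assumes the CodeLLDB extension; adapt "type" to your debugger.
      "type": "lldb",
      "request": "launch",
      "name": "Debug project",
      // "project" is a placeholder for the actual binary name.
      "program": "${workspaceFolder}/target/debug/project",
      "cwd": "${workspaceFolder}",
      "env": {
        "LD_LIBRARY_PATH": "${workspaceFolder}/target/debug/deps:${env:HOME}/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib"
      }
    }
  ]
}
```

(launch.json is parsed as JSON with comments, so the `//` annotations are accepted by VS Code.)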
Happy debugging 🙂
]]>