* Bump buildroot
* buildroot a1bdf74b19...f125c3e292 (1):
> package/containerd: add control for additional build tags
* Drop unnecessary containerd changes
Now that the snapshotter and the CRI plug-ins are disabled, we no longer
need to configure or disable them via configuration. Drop the unnecessary
configs.
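For illustration, this is the kind of containerd configuration that becomes
unnecessary once the plug-ins are compiled out via build tags (the exact
dropped settings may differ):

```toml
# Illustrative only: disabling the CRI plug-in via configuration. With the
# plug-in compiled out through build tags, such entries are no longer needed.
disabled_plugins = ["io.containerd.grpc.v1.cri"]
```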
Move from the current plain format to the new verity bundle format. This
requires at least HAOS 10.4 to work. The Supervisor will make sure to
update to the latest minor release of the previous major release, so
updating will work in the regular use case.
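For reference, RAUC selects the bundle format in the update manifest; a
minimal sketch (not the exact manifest we ship, values are examples):

```ini
# Minimal sketch of a RAUC manifest selecting the verity bundle format.
[update]
compatible=haos-example
version=11.0

[bundle]
format=verity
```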
* Add fsfreeze support for QEMU/KVM/Proxmox installations
Add fsfreeze scripts which call the new Supervisor API to freeze Home
Assistant Core and add-ons that support the backup freeze scripts
(`backup_pre` and `backup_post`).
This allows creating safe snapshots while databases are running.
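A minimal sketch of such a hook, assuming qemu-guest-agent's fsfreeze-hook
mechanism; the freeze/thaw endpoint paths and the token location are
assumptions for this sketch, not necessarily the shipped script:

```sh
#!/bin/sh
# Called by qemu-guest-agent with "freeze" before the snapshot is taken
# and "thaw" afterwards. Endpoint paths and token file are assumptions.
set -e

TOKEN="$(cat /run/supervisor-token)"

case "$1" in
    freeze)
        curl -sf -X POST -H "Authorization: Bearer ${TOKEN}" \
            http://supervisor/backups/freeze
        ;;
    thaw)
        curl -sf -X POST -H "Authorization: Bearer ${TOKEN}" \
            http://supervisor/backups/thaw
        ;;
esac
```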
* Fix lint issues
This enables backlight support on these hosts, which is useful when
running Home Assistant on an old laptop or tablet where you want to
conserve power by controlling the backlight, for example.
* buildroot d6894cf55f...df5fccafd8 (3):
> package/docker-cli: bump version to v24.0.6
> package/docker-engine: bump version to v24.0.6
> package/containerd: bump to version 1.7.6
Currently the `CONFIG_OVERLAY_FS_METACOPY` and
`CONFIG_OVERLAY_FS_REDIRECT_DIR` kernel options are enabled, but they are
not preferred by Docker. The metadata copy feature is disabled by default
and is not actively used by the overlay2 driver (see
2c3d1f7b4b).
So the metadata copy config is not really problematic per se. However, it
enables the redirect_dir feature, and a kernel which has the redirect_dir
feature compiled in also enables it by default. This causes the overlay2
driver to fall back to naive diff, which is, from what I understand,
slower than overlayfs' native diff (see also
49c3a7c4ba).
The Docker daemon also reports this on startup:

    Not using native diff for overlay2, this may cause degraded performance
    for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
Currently `CONFIG_OVERLAY_FS_METACOPY` is enabled, which in turn enables
`CONFIG_OVERLAY_FS_REDIRECT_DIR`. There was already a previous attempt
to disable the latter (see #2067).
Disable both configs explicitly until Docker is able to use them.
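In kernel config fragment terms this amounts to:

```
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_METACOPY is not set
```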
Respect quotes in the meta file. While at it, simplify the version
validation as well.
Make sure the development version is correctly set at build time.
While at it, also simplify the version check.
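A sketch of the idea behind both changes: source the meta file so shell
quoting is respected, and validate the version with a single pattern match
(the file location and variable names here are hypothetical):

```sh
# Hypothetical: sourcing honors quotes, unlike grep/cut-style parsing.
. ./meta

case "${VERSION_SUFFIX}" in
    ""|dev*|rc[0-9]*) ;;  # release, dev, or rc builds are accepted
    *) echo "Invalid version suffix: ${VERSION_SUFFIX}" >&2; exit 1 ;;
esac
```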
* Adjust Home Assistant versioning to prepare for new release strategy
With OS 11 we'll create rc pre-releases which will get directly pushed
to the beta channel. In contrast, release builds will get directly
pushed to the stable channel.
Similar to Home Assistant Core we'll create bump commits for all stable
and beta releases. This makes sure that the source code matches the
built binaries for all releases.
The development build will get a generated version. To avoid issues with
the new rc builds, the dev build version now gets injected at the source
level.
* Apply suggestions from code review
* Download latest stable Supervisor after device wipe
Currently we download the latest tag after a device wipe, which gives us
the latest Supervisor (quite likely a development version).
Use the stable version file instead to get the tag used to download the
Supervisor.
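A sketch of the lookup, assuming the stable channel's version file and one
architecture's image name:

```sh
# Resolve the Supervisor tag from the stable version file instead of
# blindly pulling the latest tag.
SUPERVISOR_VERSION="$(curl -sfL https://version.home-assistant.io/stable.json \
    | jq -r '.supervisor')"
docker pull "ghcr.io/home-assistant/aarch64-hassio-supervisor:${SUPERVISOR_VERSION}"
```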
* Delete potentially corrupted updater info
Use a single workflow file for releases and dev builds. This avoids
duplication and enhances the release builds with some of the recent
improvements (e.g. shared build container).
This essentially reverts #2380, making sure that Home Assistant OS uses
systemd's latest network naming scheme.
We stick to a certain naming scheme to make sure NetworkManager still
applies the network configuration (which is matched by network interface
name by default).
With Supervisor [PR #4476](https://github.com/home-assistant/supervisor/pull/4476)
NetworkManager uses udev path by default. With this we can safely enable
the new interface naming and NetworkManager will still apply the
configuration based on udev path correctly.
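For illustration, a NetworkManager keyfile profile matched by udev path
rather than by interface name (values are examples only, not a shipped
profile):

```ini
[connection]
id=Home Assistant OS default
type=ethernet

[match]
# Example udev path; matching no longer depends on the interface name.
path=platform-fd580000.ethernet
```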
Pull in the swapfile creation service `haos-swapfile.service` when
`swap.target` is reached. This makes sure the service is started even when
other targets are used (e.g. `rescue.target`).
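Illustrative wiring for the unit (the shipped file may differ);
`WantedBy=swap.target` is what pulls it in whenever `swap.target` is
reached:

```ini
[Unit]
Description=Create HAOS swap file
DefaultDependencies=no
Before=swap.target

[Install]
WantedBy=swap.target
```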
* Delete Bluetooth device cache regularly
Delete stale Bluetooth devices from the BlueZ device cache every week.
This makes sure that the overlay partition doesn't run out of inodes,
which has happened in real-world scenarios where many new Bluetooth
devices are discovered.
BlueZ maintains these files on a best-effort basis, so removing them
while BlueZ is running should be safe.
An alternative considered was to lower BlueZ's GATT caching (e.g. by
using Cache=yes instead of always, to cache only paired devices).
However, this would hurt performance and battery life of Bluetooth
devices due to additional unnecessary GATT attribute reads. This is in
particular true for Bluetooth 5.1 devices which support the Database
Hash characteristic. Caching has also helped reliability with
intermittent connections (see
https://github.com/bluez/bluez/issues/191).
More importantly, besides the GATT attribute cache, the same files are
also used to cache device names. This is independent of the
above-mentioned GATT cache configuration (see device_store_cached_name
in BlueZ). So disabling the GATT caching alone wouldn't solve the
particular problem we are facing.
See also: https://github.com/home-assistant/supervisor/issues/4490
* Use access timestamp instead of modification timestamp
The modification timestamp seems to get updated regularly (on each
connect). Using the access timestamp might be more accurate, as it seems
to preserve slightly more cache files. These additional devices might be
devices we don't regularly connect to but which are still around (and
therefore we shouldn't reread their GATT attributes regularly).
So delete cache entries with an access time older than 7 days, which
essentially deletes the entries of all devices that haven't been seen in
the last 7 days.
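A sketch of the weekly cleanup, assuming BlueZ's on-disk cache layout
`/var/lib/bluetooth/<adapter>/cache/<device>`:

```sh
# Drop cache entries whose access time is older than 7 days.
find /var/lib/bluetooth/*/cache -type f -atime +7 -delete
```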
It turns out that the way concurrency works in GitHub Actions doesn't
allow queuing up multiple pending jobs. As soon as a second job becomes
pending, the previously pending jobs get cancelled. So this does not allow
us to sequentially run all cache combine jobs as we had hoped.
Let's use a single download cache and a per-board build cache for now.
This combines all caches into a single one to save space (the assumption
is that quite some files are duplicated otherwise). With this we should
end up with 4 relevant cache files (a build cache for each architecture
plus the download cache).
Use more specific keys for the GitHub Actions caches to make sure we
update the caches regularly. Also add the board id to the downloads cache
to get a more specific cache file. This avoids redownloading large
dependencies of some boards.
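An illustrative cache step; the key layout and the `matrix.board.id`
variable are assumptions, not the exact workflow:

```yaml
- name: Cache downloads
  uses: actions/cache@v3
  with:
    path: buildroot/dl
    key: haos-dl-${{ matrix.board.id }}-${{ hashFiles('buildroot/package/**') }}
    restore-keys: |
      haos-dl-${{ matrix.board.id }}-
```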
Separate fetching the current release and loading the container image
into separate build steps. This allows manually altering the version JSON
file for testing.
Enable the fully preemptible kernel (low-latency desktop) configuration
for Home Assistant. Home Assistant can be considered a soft real-time
system, where lower latency is preferred over throughput.
A few tests using the rt_test development add-on didn't show measurable
improvements, but this could be due to the rather synthetic tests.
Currently some platforms use the voluntary preemption model and some the
fully preemptible one. So besides improving latency, this also aims to
synchronize the settings across all platforms.
Also make sure that preemption debugging is not enabled, as it can have
high runtime overhead according to Kconfig.
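The resulting kernel config fragment looks like this:

```
CONFIG_PREEMPT=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_DEBUG_PREEMPT is not set
```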
The BR2_GCC_ENABLE_LTO config used to enable LTO at the compiler level.
That config symbol doesn't exist anymore; instead, LTO is enabled by
default with GCC.
However, there is a new flag named BR2_ENABLE_LTO which enables LTO in
packages. So far it doesn't look like the packages we are using support
the flag, but that might be added in the future. Opt in already today.
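The opt-in is a single defconfig line:

```
BR2_ENABLE_LTO=y
```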
* Determine git reference in prepare step
We can determine the git reference once in the prepare step.
* Build HAOS builder in prepare step
Instead of building the build container multiple times, simply build it
once in the prepare step. This saves some GitHub Runner time (as we only
need to create the builder once).
* Drop per PR builds
Drop the per-PR builds which are based on pull_request_target. They make
things more complicated with the recent changes requiring two deployment
approvals, since we use the environment for both the prepare and build
jobs now. It would also interfere with future expansions.
We should consider re-adding the feature using `pull_request` and a
subsequent `workflow_run` trigger, as suggested by
https://securitylab.github.com/research/github-actions-preventing-pwn-requests/.
* Simplify board filter
In case a group with the same id as the one used outside the container
already exists, do not create a group inside the container.
It seems that GitHub Actions runners started to use primary group id 999,
which is the default group id used by the Docker daemon.
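A sketch of the check in the container entry script (variable and group
names are illustrative):

```sh
# Only create the group when the requested GID is still free; GitHub
# Actions runners may already have GID 999 (Docker's default) in use.
if ! getent group "${HOST_GID}" >/dev/null 2>&1; then
    groupadd -g "${HOST_GID}" builder
fi
```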