* Fix mypy errors in misc and mounts
* Fix mypy issues in os module
* Fix typing of capture_exception
* Avoid unnecessary property call
* Fixes from feedback
* Avoid aiodns resolver memory leak
In certain cases, the aiodns resolver can leak memory. This can also
lead to a fatal `Python error… ffi.from_handle()`. Address the issue
by ensuring that the resolver is properly closed when it is no longer
needed.
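A minimal sketch of the idea, assuming aiohttp's `AsyncResolver` (which wraps aiodns) is used for the lookup; the surrounding check logic is illustrative only:

```python
from aiohttp import AsyncResolver

async def check_dns(host: str) -> None:
    resolver = AsyncResolver()
    try:
        await resolver.resolve(host)
    finally:
        # Close the resolver explicitly so the underlying aiodns/c-ares
        # channel is released instead of leaking between checks.
        await resolver.close()
```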
* Address coderabbitai feedback
* Fix pytest
* Fix pytest
Configurable and with migrations between IPv4-Only and Dual-Stack
Signed-off-by: David Rapan <david@rapan.cz>
Co-authored-by: Stefan Agner <stefan@agner.ch>
This reverts commit 63fde3b4109310e95ebdcc8e3c23a04ff96ba592.
This change introduced another, more severe regression: all add-ons
that haven't been started since Supervisor startup caused errors
during their backup. A more sophisticated check would have to be
implemented to address edge cases during backups for non-existing
add-ons (or their config, to be precise).
Fixes #5924
* Avoid early DNS plug-in start
A connectivity check can potentially be triggered before the DNS
plug-in is loaded. Avoid calling restart on the DNS plug-in before it
has been loaded initially; this prevents starting before attaching.
Attaching makes sure that the DNS plug-in container is recreated
before the DNS plug-in is started for the first time, which is needed
e.g. after a hassio network configuration change (such as the
migration required to enable/disable IPv6 on the hassio network,
see #5879).
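A rough sketch of such a guard, with assumed attribute and method names rather than the actual Supervisor plug-in API:

```python
async def restart_dns_if_loaded(dns_plugin) -> None:
    """Restart the DNS plug-in only once it has been loaded/attached."""
    if not dns_plugin.loaded:
        # Too early: loading will attach and recreate the container first,
        # so skip the restart instead of starting before attaching.
        return
    await dns_plugin.restart()
```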
* Mock DNS plug-in running
Instead of starting a task in the background, synchronously wait for
the Supervisor start sequence to complete. This should be functionally
equivalent, as we would loop forever in the event loop right
afterwards anyway.
The advantage is that we can now catch any exceptions during the
start sequence and report errors with critical logging, so they get
reported to Sentry, if enabled. It also avoids "Task exception was
never retrieved" errors. Reporting errors is especially important
since we can't use the asyncio Sentry integration (see #5729 for
details).
Also handle early add-on start errors just like other add-on start
errors (make sure the finally block is executed as well). And finally,
register signal handlers synchronously. There is no real benefit in
registering them asynchronously, and doing it synchronously avoids a
potential race condition.
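A condensed sketch of the pattern, assuming a `core.start()` coroutine and standard logging; the names are illustrative, not the exact Supervisor code:

```python
import logging

_LOGGER = logging.getLogger(__name__)

async def run_supervisor(core) -> None:
    try:
        # Await the start sequence instead of spawning it as a background
        # task, so exceptions surface here instead of being "never retrieved".
        await core.start()
    except Exception:  # pylint: disable=broad-except
        # Critical logging makes start failures visible to Sentry even
        # without the asyncio Sentry integration.
        _LOGGER.critical("Fatal error during Supervisor start", exc_info=True)
```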
* Use journal-gatewayd's new /boots endpoint to list boots
The current method we use for getting boots has several known
downsides: for example, it can miss some incomplete boots, and its
performance may be worse than what we could get by using Systemd
directly. Systemd was missing a way to list boots through
journal-gatewayd, but that is addressed by the new /boots endpoint
added in [1], which returns an application/json-seq response
containing all boots as reported by `journalctl --list-boots`.
Implement Supervisor methods to parse this format and try the
endpoint first, falling back to the old method if it fails.
[1] https://github.com/systemd/systemd/pull/37574
* Log info instead of warning when /boots is not present
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Split records only by RS instead of LF in journal_boots_reader
* Strip only RS, json.loads is fine with whitespace
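A minimal parsing sketch for the application/json-seq payload, assuming the whole body is available as bytes (the actual Supervisor reader works on a streaming response):

```python
import json

RS = b"\x1e"  # ASCII record separator used by application/json-seq

def parse_boots(body: bytes) -> list[dict]:
    boots = []
    # Split records only on RS; json.loads tolerates the trailing newline
    # and other surrounding whitespace on its own.
    for record in body.split(RS):
        if not record.strip():
            continue
        boots.append(json.loads(record))
    return boots
```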
---------
Co-authored-by: Stefan Agner <stefan@agner.ch>
When a backup error occurs, it might be that the backup file hasn't
been created yet, e.g. when there is no space or no permission on
the target backup directory. Deleting the backup file would fail
in this case. Use `missing_ok` instead to ignore a missing backup file
on delete.
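The change boils down to a single call, assuming the backup target is a `pathlib.Path`:

```python
from pathlib import Path

def remove_backup_file(tar_file: Path) -> None:
    # Ignore the case where the file was never created, e.g. due to missing
    # space or permissions on the target backup directory.
    tar_file.unlink(missing_ok=True)
```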
With #5897 we renamed `addon` to `addon_config` and vice versa. The
log messages were still using the previous variable names, leading to
an UnboundLocalError.
Fix the log messages to use the correct variable names.
To avoid accidental writes to the Supervisor root filesystem, we might
use the Docker read-only mode at one point. This is not yet the
default, but using s6-overlay with the read-only flag seems not to
have any downsides. So enable this by default.
To start Supervisor with a read-only root file system, the following
arguments have to be used: `--read-only --tmpfs /run:exec`.
* Avoid initializing Blockbuster on Supervisor info call
Instead of creating an instance of Blockbuster simply to check
whether Blockbuster is enabled, use a global variable to store the
Blockbuster instance and only initialize it when needed. This avoids
unnecessary initialization of Blockbuster when it is not required.
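A sketch of the lazy, module-level instance; the `BlockBuster` usage below is an assumption based on the blockbuster library, and the singleton is later renamed to `BlockBusterManager` (see the commit below):

```python
from blockbuster import BlockBuster

_blockbuster: BlockBuster | None = None

def blockbuster_enabled() -> bool:
    # Checking the state must not create an instance as a side effect.
    return _blockbuster is not None

def activate_blockbuster() -> None:
    global _blockbuster
    if _blockbuster is None:
        _blockbuster = BlockBuster()
    _blockbuster.activate()
```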
* Update supervisor/utils/blockbuster.py
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
* Fix merge and rename singleton class to BlockBusterManager
* Fix pytest
---------
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
Process NetworkManager interface updates when PrimaryConnection
changes. This makes sure that the /network/interface/default/info
endpoint can be used to get the IP address of the primary interface.
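A hedged sketch of the idea; the signal wiring and the update callback are illustrative, not the actual Supervisor D-Bus layer:

```python
from typing import Any, Callable

NM_INTERFACE = "org.freedesktop.NetworkManager"

def make_properties_handler(update_interfaces: Callable[[], None]):
    """Build a PropertiesChanged handler that reacts to PrimaryConnection."""

    def on_properties_changed(
        interface: str, changed: dict[str, Any], invalidated: list[str]
    ) -> None:
        if interface == NM_INTERFACE and "PrimaryConnection" in changed:
            # Re-read the interfaces so "default" points at the new primary one.
            update_interfaces()

    return on_properties_changed
```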
* Use add-on config timestamp to determine add-on update age
Instead of using the current timestamp when loading the add-on
config, simply use the add-on config modification timestamp. This way,
we can get a timestamp even after Supervisor has been restarted. It
also simplifies the code a bit.
* Fix pytest
* Patch stat() instead of modifying fixture files
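A small sketch of reading the update age from the config file's modification time; the helper name and path handling are illustrative:

```python
from datetime import datetime, timezone
from pathlib import Path

def addon_config_timestamp(config_file: Path) -> datetime:
    # st_mtime survives Supervisor restarts, unlike an in-memory
    # "loaded at" timestamp taken when the config was last read.
    return datetime.fromtimestamp(config_file.stat().st_mtime, tz=timezone.utc)
```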
* Stop reading advanced logs on ConnectionError
If the client-side connection closes with a `ConnectionError`, stop
reading the advanced logs.
This is very similar to `ClientConnectionResetError`, which is easily
reproducible by having a log open and following it in the browser and
then restarting Home Assistant. So far I wasn't able to artificially
reproduce the `ConnectionError`, but there are quite a few reports on
Sentry, so it seems to happen in the real world.
* Warn on ConnectionError
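A sketch of the handling, assuming an aiohttp streaming response is forwarded line by line; the surrounding names differ from the actual implementation:

```python
import logging

_LOGGER = logging.getLogger(__name__)

async def stream_logs(journal_response, client_response) -> None:
    try:
        async for line in journal_response.content:
            await client_response.write(line)
    except ConnectionError:
        # The client went away; stop reading instead of failing the handler,
        # but keep a warning so occurrences stay visible.
        _LOGGER.warning("Connection error while reading advanced logs")
```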
* feat: Add IPv6 address generation mode & privacy extensions
Signed-off-by: David Rapan <david@rapan.cz>
* Use NetworkManager fixture for settings init tests
This fixes the test, since the extended implementation now reads the
NetworkManager version.
* Add pytest for addr_gen_mode
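For reference, the NetworkManager "ipv6" setting keys involved are addr-gen-mode and ip6-privacy; the values below follow the nm-settings documentation, while how Supervisor maps its API onto them is not shown here:

```python
# ipv6.addr-gen-mode values
ADDR_GEN_MODE_EUI64 = 0            # interface identifier derived from the MAC
ADDR_GEN_MODE_STABLE_PRIVACY = 1   # RFC 7217 stable privacy addresses

# ipv6.ip6-privacy values
IP6_PRIVACY_DISABLED = 0           # no temporary addresses
IP6_PRIVACY_PREFER_TEMPORARY = 2   # RFC 4941 privacy extensions, prefer temporary

ipv6_settings = {
    "addr-gen-mode": ADDR_GEN_MODE_STABLE_PRIVACY,
    "ip6-privacy": IP6_PRIVACY_PREFER_TEMPORARY,
}
```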
---------
Signed-off-by: David Rapan <david@rapan.cz>
Co-authored-by: Stefan Agner <stefan@agner.ch>
* Trigger auto-update through Core WebSocket call
Instead of auto-updating add-ons on the Supervisor side, trigger an
update through Core via a WebSocket command. This makes sure that the
backup is categorized correctly and that backup features like
retention are applied.
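A very rough sketch of the flow; the command name and payload below are placeholders, not the verified Core WebSocket API:

```python
async def trigger_addon_update(send_command, addon_slug: str) -> None:
    # Let Core drive the update (including its pre-update backup) so the
    # backup is categorized correctly and retention settings apply.
    await send_command(
        {"type": "hassio/update/addon", "addon": addon_slug, "backup": True}
    )
```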
* Add pytest
* Fix pytest
* Fix pytest
* Fix pytest
* Fix pytest
* Fix pytest cleaner
* Set timestamp of add-on far into the past
Instead of copying the backup in the main job, let's copy it in a
separate job per location. This allows using the same backup error
handling mechanism as for add-ons and folders.
This makes the stage introduced in #5784 somewhat redundant, but
before removing it, let's see if this approach works out.
* Harmonize folder and add-on backup error handling
Align add-on and folder backup error handling so that in both cases
errors are recorded on the respective backup Jobs, but not raised to
the caller. This allows the backup to complete successfully even if
some add-ons or folders fail to back up.
Along with this, also record errors in the per-add-on and per-folder
backup jobs, as well as in the add-on and folder root jobs.
And finally, align the exception handling to catch only the expected
exceptions for add-ons too.
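A sketch of the harmonized pattern with assumed names, not the actual Supervisor job API: failures are recorded on the per-item job rather than raised, so the overall backup still completes:

```python
import logging

_LOGGER = logging.getLogger(__name__)

async def backup_item(record_error, name: str, do_backup) -> None:
    try:
        await do_backup(name)
    except (OSError, ValueError) as err:  # stand-ins for the expected exceptions
        # Record the error on the backup job (which bubbles up to the parent
        # job) and log it, but don't re-raise to the caller.
        _LOGGER.error("Can't backup %s: %s", name, err)
        record_error(err)
```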
* Fix pytest