Compare commits


160 Commits

Mike Degatano
dafe271050 Use checkboxes in PR template to set labels 2026-03-30 23:46:49 +00:00
Stefan Agner
e630ec1ac4 Include Docker registry configurations in backups (#6674)
* Include Docker registry configurations in backups

Docker registry credentials were removed from backup metadata in a prior
change to avoid exposing secrets in unencrypted data. Now that the encrypted
supervisor.tar inner archive exists, add docker.json alongside mounts.json
to securely back up and restore registry configurations.

On restore, registries from the backup are merged with any existing ones.
Old backups without docker.json are handled gracefully.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Increase test coverage by testing more error paths

* Address review feedback for Docker registry backup

Remove unnecessary dict() copy when serializing registries for backup
since the property already returns a dict.

Change DockerConfig.registries to use direct key access instead of
.get() with a default. The schema guarantees the key exists, and
.get() with a default would return a detached temporary dict that
silently discards updates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 18:34:27 +02:00
Stefan Agner
be95349185 Reuse IMAGE_REGISTRY_REGEX for docker_image validation (#6667)
* Reuse IMAGE_REGISTRY_REGEX for docker_image validation

Replace the monolithic regex in docker_image validator with a
function-based approach that reuses get_registry_from_image() from
docker.utils for robust registry detection. This properly handles
domains, IPv4/IPv6 addresses, ports, and localhost while still
rejecting tags (managed separately by the add-on system).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Address review feedback: reorder checks for efficiency

Check falsy value before isinstance, and empty path before tag check,
as suggested in PR review.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 18:33:11 +02:00
Stefan Agner
1fd78dfc4e Fix Docker Hub registry auth for containerd image store (#6677)
aiodocker derives ServerAddress for X-Registry-Auth by doing
image.partition("/"). For Docker Hub images like
"homeassistant/amd64-supervisor", this extracts "homeassistant"
(the namespace) instead of "docker.io" (the registry).

With the classic graphdriver image store, ServerAddress was never
checked and credentials were sent regardless. With the containerd
image store (default since Docker v29 / HAOS 15), the resolver
compares ServerAddress against the actual registry host and silently
drops credentials on mismatch, falling back to anonymous access.

Fix by prefixing Docker Hub images with "docker.io/" when registry
credentials are configured, so aiodocker sets ServerAddress correctly.
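The partition behavior and the fix can be sketched with a small hypothetical helper; the registry-vs-namespace heuristic below is an assumption for illustration, not the exact Supervisor code:

```python
def normalize_hub_image(image: str) -> str:
    """Prefix Docker Hub images with an explicit "docker.io/" registry.

    aiodocker derives ServerAddress from image.partition("/")[0], so a
    plain "namespace/name" image yields the namespace instead of the
    registry host. An explicit prefix makes the derived value correct.
    """
    first = image.partition("/")[0]
    # Common Docker heuristic: a registry host contains a "." or ":"
    # (or is "localhost"); anything else is a Docker Hub namespace.
    if "." in first or ":" in first or first == "localhost":
        return image
    return f"docker.io/{image}"
```

For example, `"homeassistant/amd64-supervisor".partition("/")[0]` yields `"homeassistant"`, while the normalized image starts with `docker.io/` and resolves to the right registry.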

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 14:43:18 +02:00
Stefan Agner
6b41fd4112 Improve codecov settings for more sensible defaults (#6676)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:24:22 +02:00
sepahewe
98bbb8869e Add support for negative numbers in apps options (#6673)
* Added support for negative numbers in options

* Do not allow -. as float

* Added tests for integers and floats in options.

* Fixed ruff errors

* Added tests for outside of int/float limits
2026-03-30 12:08:57 +02:00
dependabot[bot]
ef71ffb32b Bump aiohttp from 3.13.3 to 3.13.4 (#6675)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 10:56:15 +02:00
Stefan Agner
713354bf56 Handle certain OSError in get_latest_mtime during directory walk (#6632)
Besides "file not found", also catch "Too many levels of symbolic links",
which can happen when there are symbolic link loops in the add-on/apps
repository.

Also improve error handling in the repository update process to catch
OSError when checking for local modifications and raise a specific
error that can be handled appropriately.
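A stdlib sketch of the tolerant directory walk described above (the function name mirrors the commit title; the body is an assumed, simplified stand-in):

```python
import errno
import os


def get_latest_mtime(path: str) -> float:
    """Newest file mtime under path, tolerating vanished files (ENOENT)
    and symbolic-link loops (ELOOP, "Too many levels of symbolic links")."""
    latest = 0.0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                mtime = os.stat(os.path.join(root, name)).st_mtime
            except OSError as err:
                if err.errno in (errno.ENOENT, errno.ELOOP):
                    continue  # removed mid-walk, or a symlink loop
                raise
            latest = max(latest, mtime)
    return latest
```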

Fixes SUPERVISOR-1FJ0

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 09:13:48 +01:00
dependabot[bot]
e2db2315b3 Bump ruff from 0.15.7 to 0.15.8 (#6670)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.7 to 0.15.8.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.7...0.15.8)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-27 09:13:23 +01:00
dependabot[bot]
39afa70cf6 Bump codecov/codecov-action from 5.5.3 to 6.0.0 (#6669)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.3 to 6.0.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](1af58845a9...57e3a136b7)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-27 09:13:10 +01:00
Paul Tarjan
2b9c8282a4 Include network storage mount configurations in backups (#6411)
* Include network storage mount configurations in backups

When creating backups, Supervisor now stores mount configurations (CIFS/NFS shares)
including server, share, credentials, and other settings. When restoring
from a backup, mount configurations are automatically restored.

This fixes the issue where network storage definitions for backups
were lost after restoring from a backup, requiring users to manually
reconfigure their network storage mounts.

Changes:
- Add ATTR_MOUNTS schema to backup validation
- Add store_mounts() method to save mount configs during backup
- Add restore_mounts() method to restore mount configs during restore
- Add MOUNTS stage to backup/restore job stages
- Update BackupManager to call mount backup/restore methods
- Add tests for mount backup/restore functionality

Fixes home-assistant/core#148663

* Address reviewer feedback for mount backup/restore

Changes based on PR review:
- Store mount configs in encrypted mounts.tar instead of unencrypted
  backup metadata (security fix for passwords)
- Separate mount restore into config save + async activation tasks
  (mounts activate in background, failures don't block restore)
- Add replace_default_backup_mount parameter to control whether to
  overwrite existing default mount setting
- Remove unnecessary broad exception handler for default mount setter
- Simplify schema: ATTR_MOUNTS is now just a boolean flag since
  actual data is in the encrypted tar file
- Update tests to reflect new async API and return types

* Fix code review issues in mount backup/restore

- Add bind mount handling for MEDIA and SHARE usage types in
  _activate_restored_mount() to mirror MountManager.create_mount()
- Fix double save_data() call by using needs_save flag
- Import MountUsage const for usage type checks

* Add pylint disable comments for protected member access

* Tighten broad exception handlers in mount backup restore

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address second round of reviewer feedback

- Catch OSError separately and check errno.EBADMSG for drive health
- Validate mounts JSON against SCHEMA_MOUNTS_CONFIG before importing
- Use mount_data[ATTR_NAME] instead of .get("name", "unknown")
- Overwrite existing mounts on restore instead of skipping
- Move restore_mount/activate logic to MountManager (no more
  protected-access in Backup)
- Drop unused replace_default_backup_mount parameter
- Fix test_backup_progress: add mounts stage to expected events
- Fix test_store_mounts: avoid create_mount which requires dbus

* Rename mounts.tar to supervisor.tar for generic supervisor config

Rename the inner tar from mounts.tar to supervisor.tar so it can hold
multiple config files (mounts.json now, docker credentials later).
Rename store_mounts/restore_mounts to store_supervisor_config/
restore_supervisor_config and update stage names accordingly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix pylint protected-access and test timeouts in backup tests

- Add pylint disable comment for _mounts protected access in test_backup.py
- Mock restore_supervisor_config in test_full_backup_to_mount and
  test_partial_backup_to_mount to avoid D-Bus mount activation during
  restore that causes timeouts in the test environment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address agners review feedback

- Change create_inner_tar() to create_tar() per #6575
- Remove "r" argument from SecureTarFile (now read-only by default)
- Change warning to info for missing mounts tar (old backups won't have it)
- Narrow exception handler to (MountError, vol.Invalid, KeyError, OSError)

* Update supervisor/backups/backup.py

* Address agners feedback: remove metadata flag, add mount feature check

- Remove ATTR_SUPERVISOR boolean flag from backup metadata; instead
  check for physical presence of supervisor.tar (like folder backups)
- Remove has_supervisor_config property
- Always attempt supervisor config restore (tar existence check handles it)
- Add HostFeature.MOUNT check in _activate_restored_mount before
  attempting to activate mounts on systems without mount support

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2026-03-26 17:08:16 -04:00
Stefan Agner
6525c8c231 Remove unused requests dependency (#6666)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:32:19 +01:00
Stefan Agner
612664e3d6 Fix build workflow by hardcoding architectures matrix (#6665)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 09:59:19 +01:00
dependabot[bot]
aa9a4c17f6 Bump cryptography from 46.0.5 to 46.0.6 (#6663)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.5 to 46.0.6.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.5...46.0.6)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-26 08:38:14 +01:00
dependabot[bot]
0f9cb9ee03 Bump sigstore/cosign-installer from 4.1.0 to 4.1.1 (#6662) 2026-03-26 07:53:12 +01:00
dependabot[bot]
03b1e95b94 Bump requests from 2.32.5 to 2.33.0 (#6664) 2026-03-26 07:47:29 +01:00
Stefan Agner
c0cca1ff8b Centralize OSError bad message handling in ResolutionManager (#6641)
Add check_oserror() method to ResolutionManager with an extensible
errno-to-unhealthy-reason mapping. Replace ~20 inline
`if err.errno == errno.EBADMSG` checks across 14 files with a single
call to self.sys_resolution.check_oserror(err). This makes it easy to
add handling for additional filesystem errors (e.g. EIO, ENOSPC) in
one place.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:47:36 +01:00
dependabot[bot]
2b2aca873b Bump sentry-sdk from 2.55.0 to 2.56.0 (#6661)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-25 07:49:24 +01:00
Jan Čermák
27609ee992 Migrate builder workflow to new builder actions (#6653)
* Migrate builder workflow to new builder actions

Migrate Supervisor image build to new builder actions. The resulting images
should be identical to those built by the builder.

Refs #6646 - does not implement multi-arch manifest publishing (will be done in
a follow-up)

* Update devcontainer version to 3
2026-03-24 10:04:18 +01:00
dependabot[bot]
573e5ac767 Bump types-requests from 2.32.4.20260107 to 2.32.4.20260324 (#6660)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 07:33:58 +01:00
Stefan Agner
c9a2da34c2 Call Subscribe on systemd D-Bus connect to enable signals (#6659)
systemd only emits bus signals (including PropertiesChanged) when at
least one client has called Subscribe() on the Manager interface. On
regular HAOS systems, systemd-logind calls Subscribe which enables
signals for all bus clients. However, in environments without
systemd-logind (such as the Supervisor devcontainer with systemd), no
signals are emitted, causing the firewall unit wait to time out.

Explicitly calling Subscribe() has no downsides and makes it clear
that the Supervisor relies on these signals. There is no need to call
Unsubscribe() as systemd automatically tracks clients and stops
emitting signals when all subscribers have disconnected from the bus.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 14:21:09 +01:00
Jan Čermák
0cb96c36b6 Disable hassio gateway protection rules in dev mode (#6658)
Since systemd is not guaranteed to be available in the devcontainer to create
the transient unit, we need to skip creating the firewall rules when
Supervisor is started in dev mode.

See: https://github.com/home-assistant/supervisor/pull/6650#issuecomment-4097583552
2026-03-23 10:13:31 +01:00
dependabot[bot]
f719db30c4 Bump attrs from 25.4.0 to 26.1.0 (#6655)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.4.0 to 26.1.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 26.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 09:44:42 +01:00
dependabot[bot]
ae3634709b Bump pytest-cov from 7.0.0 to 7.1.0 (#6657)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 7.0.0 to 7.1.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v7.0.0...v7.1.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 7.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 09:11:21 +01:00
dependabot[bot]
64d9bbada5 Bump ruff from 0.15.6 to 0.15.7 (#6654)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 20:37:19 +01:00
Stefan Agner
36124eafae Add firewall rules to protect Docker gateway from external access (#6650)
Add iptables rules via a systemd transient unit to drop traffic
addressed to the bridge gateway IP from non-bridge interfaces.

The firewall manager waits for the transient unit to complete and
verifies success via D-Bus property change signals. On failure, the
system is marked unhealthy and host-network add-ons are prevented
from booting.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 16:46:17 +01:00
dependabot[bot]
c16b3ca516 Bump codecov/codecov-action from 5.5.2 to 5.5.3 (#6649)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.2 to 5.5.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](671740ac38...1af58845a9)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:34:03 +01:00
dependabot[bot]
02b201d0f7 Bump actions/cache from 5.0.3 to 5.0.4 (#6647)
Bumps [actions/cache](https://github.com/actions/cache) from 5.0.3 to 5.0.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](cdf6c1fa76...668228422a)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:33:50 +01:00
dependabot[bot]
4bde25794f Bump release-drafter/release-drafter from 7.1.0 to 7.1.1 (#6648)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 7.1.0 to 7.1.1.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](44a942e465...139054aeaa)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:33:36 +01:00
dependabot[bot]
82b893a5b1 Bump coverage from 7.13.4 to 7.13.5 (#6645)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.4 to 7.13.5.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.4...7.13.5)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:48:13 +01:00
dependabot[bot]
b24ada6a21 Bump sentry-sdk from 2.54.0 to 2.55.0 (#6644)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.54.0 to 2.55.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.54.0...2.55.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.55.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:47:48 +01:00
dependabot[bot]
6dff48dbb4 Bump release-drafter/release-drafter from 7.0.0 to 7.1.0 (#6643)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 7.0.0 to 7.1.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](3a7fb5c85b...44a942e465)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:46:47 +01:00
Stefan Agner
40f9504157 Increase plugin update check interval from ~8h to 12h, Supervisor to 24h (#6638)
Further slow down the automatic update rollout to reduce pressure on container
registry infrastructure (GHCR rate limiting). Plugins are staggered 2 minutes
apart starting at 12h, and Supervisor moves from 12h to 24h.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 19:28:36 +01:00
Stefan Agner
687dccd1f5 Increase Supervisor update check interval from 8h to 12h (#6633)
Slow down the automatic Supervisor update rollout to reduce pressure
on the container registry infrastructure (GHCR rate limiting).

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 17:19:26 +01:00
Stefan Agner
f41a8e9d08 Wait for addon startup task before unload to prevent data access race (#6630)
* Wait for addon startup task before unload to prevent data access race

Replace the cancel-based approach in unload() with an await of the outer
_wait_for_startup_task. The container removal and state change resolve the
startup event naturally, so we just need to ensure the task completes
before addon data is removed. This prevents a KeyError on self.name access
when _wait_for_startup times out after data has been removed.

Also simplify _wait_for_startup by removing the unnecessary inner task
wrapper — asyncio.wait_for can await the event directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Drop asyncio.sleep() in test_manager.py

* Only clear startup task reference if still the current task

Prevent a race where an older _wait_for_startup task's finally block
could wipe the reference to a newer task, causing unload() to skip
the await.
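The guarded cleanup can be sketched with plain asyncio (class and attribute names are assumptions for illustration, not the Supervisor API):

```python
import asyncio


class AddonSketch:
    def __init__(self) -> None:
        self._startup_task: asyncio.Task | None = None

    async def _wait_for_startup(self, event: asyncio.Event) -> None:
        me = asyncio.current_task()
        try:
            await asyncio.wait_for(event.wait(), timeout=5)
        finally:
            # Only clear the reference if it still points at *this* task,
            # so a stale task's cleanup can't wipe a newer task's handle.
            if self._startup_task is me:
                self._startup_task = None


async def demo() -> bool:
    addon = AddonSketch()
    event = asyncio.Event()
    addon._startup_task = asyncio.create_task(addon._wait_for_startup(event))
    event.set()  # startup completes; the event resolves the wait naturally
    await addon._startup_task
    return addon._startup_task is None
```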

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Reuse existing pending startup wait task when addon is already running

If start() is called while the addon is already running and a startup
wait task is still pending, return the existing task instead of creating
a new one.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 12:46:29 +01:00
Thomas Kadauke
cbeb3520c3 fix: remove WebSocket message size limit in ingress proxy (#6604)
aiohttp's default max_msg_size of 4MB causes the ingress WebSocket proxy
to silently drop connections when an add-on sends messages larger than
that limit (e.g. Zigbee2MQTT's bridge/devices payload with many devices).

Setting max_msg_size=0 removes the limit on both the server-side
WebSocketResponse and the upstream ws_connect, fixing dropped connections
for add-ons that produce large WebSocket messages.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: J. Nick Koston <nick@koston.org>
2026-03-16 12:46:16 +01:00
dependabot[bot]
8b9928d313 Bump masesgroup/retrieve-changed-files from 3.0.0 to 4.0.0 (#6637)
Bumps [masesgroup/retrieve-changed-files](https://github.com/masesgroup/retrieve-changed-files) from 3.0.0 to 4.0.0.
- [Release notes](https://github.com/masesgroup/retrieve-changed-files/releases)
- [Commits](491e80760c...45a8b3b496)

---
updated-dependencies:
- dependency-name: masesgroup/retrieve-changed-files
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 09:47:28 +01:00
dependabot[bot]
f58d905082 Bump release-drafter/release-drafter from 6.4.0 to 7.0.0 (#6636)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.4.0 to 7.0.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](6a93d82988...3a7fb5c85b)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 09:46:30 +01:00
Jan Čermák
093e98b164 Fix fallback time sync, create repair issue if time is out of sync (#6625)
* Fix fallback time sync, create repair issue if time is out of sync

The "poor man's NTP" using the whois service didn't work because it attempted
to sync the time when the NTP service was enabled, which is rejected by the
timedated service. To fix this, Supervisor now first disables the
systemd-timesyncd service and creates a repair issue before adjusting the time.
The timesyncd service stays disabled until the fixup is submitted. Theoretically,
if the time moves backwards from an invalid time in the future,
systemd-timesyncd could otherwise restore the wrong time from a saved
timestamp if it were disabled only after the time was set.

Also, the sync is now performed if the time is more than 1 hour off, in both
directions (previously it only intervened if it was more than 3 days in the
past).

Fixes #6015, refs #6549

* Update test_adjust_system_datetime_if_time_behind
2026-03-13 16:01:38 +01:00
dependabot[bot]
eedc623ec5 Bump ruff from 0.15.5 to 0.15.6 (#6629)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.5 to 0.15.6.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.5...0.15.6)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-13 08:49:48 +01:00
dependabot[bot]
7ac900da83 Bump actions/download-artifact from 8.0.0 to 8.0.1 (#6624)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 8.0.0 to 8.0.1.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](70fc10c6e5...3e5f45b2cf)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 09:56:02 +01:00
Stefan Agner
f8d3443f30 Use securetar v3 for backups when Core is 2026.3.0 or newer (#6621)
Core supports SecureTar v3 since 2026.3.0, so use the new version only
then to ensure compatibility. Fall back to v2 for older Core versions.
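The gate amounts to a simple version comparison. A sketch assuming plain X.Y.Z release versions (Supervisor itself compares full version objects, which also handle pre-release suffixes):

```python
def securetar_version_for(core_version: str) -> int:
    """Use SecureTar v3 only for Core >= 2026.3.0, else fall back to v2."""
    parts = tuple(int(p) for p in core_version.split(".")[:3])
    return 3 if parts >= (2026, 3, 0) else 2
```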
2026-03-11 18:50:24 +01:00
Stefan Agner
83c8c0aab0 Remove obsolete persistent notification system (#6623)
The core_security check (HA < 2021.1.5 with custom components) and the
ResolutionNotify class that created persistent notifications for it are
no longer needed. The minimum supported HA version is well past 2021.1.5,
so this check can never trigger. The notify module was the only consumer
of persistent notifications and had no other users.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 18:50:07 +01:00
Jan Čermák
3c703667ce Bump uv to v0.10.9 (#6622)
* https://github.com/astral-sh/uv/blob/0.10.9/CHANGELOG.md
2026-03-11 16:17:02 +01:00
Stefan Agner
31c2fcf377 Treat empty string password as None in backup restore (#6618)
* Treat empty string password as None in backup restore

Work around a securetar 2026.2.0 bug where an empty string password
sets encrypted=True but fails to derive a key, leading to an
AttributeError on restore. This also restores consistency with backup
creation which uses a truthiness check to skip encryption for empty
passwords.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Explicitly mention that "" is treated as no password

* Add tests for empty string password handling in backups

Verify that empty string password is treated as no password on both
backup creation (not marked as protected) and restore (normalized to
None in set_password).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Improve comment

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 09:53:43 +01:00
dependabot[bot]
8749d11e13 Bump sigstore/cosign-installer from 4.0.0 to 4.1.0 (#6619)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 4.0.0 to 4.1.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](faadad0cce...ba7bc0a3fe)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 08:30:46 +01:00
dependabot[bot]
0732999ea9 Bump setuptools from 82.0.0 to 82.0.1 (#6620) 2026-03-10 07:28:04 +01:00
Mike Degatano
f6c8a68207 Deprecate advanced mode option in addon config (#6614)
* Deprecate advanced mode option in addon config

* Note deprecation of field in addon info and list APIs

* Update docstring per copilot
2026-03-09 10:26:28 +01:00
dependabot[bot]
5c35d86abe Bump release-drafter/release-drafter from 6.2.0 to 6.4.0 (#6617)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.2.0 to 6.4.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](6db134d15f...6a93d82988)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 6.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-09 09:48:30 +01:00
dependabot[bot]
38d6907377 Bump ruff from 0.15.4 to 0.15.5 (#6616)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-06 09:51:20 +01:00
Jan Čermák
b1be897439 Use Python 3.14(.3) in CI and base image (#6586)
* Use Python 3.14(.3) in CI and base image

Update base image to the latest tag using Python 3.14.3 and update Python
version in CI workflows to 3.14.

With Python 3.14, backports.zstd is no longer necessary as it's now available
in the standard library.

* Update wheels ABI in the wheels builder to cp314

* Use explicit Python fix version in GH actions

Explicitly specify Python 3.14.3, as the setup-python action otherwise defaults
to 3.14.2 instead of 3.14.3, leading to different versions in CI and in production.

* Update Python version references in pyproject.toml

* Fix all ruff quoted-annotation (UP037) errors

* Revert unquoting of DBus types in tests and ignore UP037 where needed
2026-03-05 21:11:25 +01:00
dependabot[bot]
80f790bf5d Bump docker/login-action from 3.7.0 to 4.0.0 (#6615)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](c94ce9fb46...b45d80f862)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-05 09:05:33 +01:00
Stefan Agner
5e1eaa9dfe Respect auto-update setting for plug-in auto-updates (#6606)
* Respect auto-update setting for plug-in auto-updates

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Also skip auto-updating plug-ins in decorator

* Raise if auto-update flag is not set and plug-in is not up to date

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 09:04:33 +01:00
Stefan Agner
9e0d3fe461 Return 401 for non-Basic Authorization headers on /auth endpoint (#6612)
aiohttp's BasicAuth.decode() raises ValueError for any non-Basic auth
method (e.g. Bearer tokens). This propagated as an unhandled exception,
causing a 500 response instead of the expected 401 Unauthorized.

Catch the ValueError in _process_basic() and raise HTTPUnauthorized with
the WWW-Authenticate realm header so clients get a proper 401 response.
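A minimal stdlib-only sketch of the pattern described above (the Unauthorized class and process_basic name are hypothetical stand-ins for aiohttp's HTTPUnauthorized and the real _process_basic):

```python
import base64
import binascii

class Unauthorized(Exception):
    """Stand-in for aiohttp's HTTPUnauthorized (hypothetical, sketch only)."""
    def __init__(self, headers: dict[str, str]):
        super().__init__("401 Unauthorized")
        self.headers = headers

def process_basic(authorization: str) -> tuple[str, str]:
    """Parse a Basic Authorization header; answer 401 for any other scheme."""
    try:
        scheme, _, credentials = authorization.partition(" ")
        if scheme.lower() != "basic":
            raise ValueError(f"unsupported scheme: {scheme!r}")
        decoded = base64.b64decode(credentials, validate=True).decode()
        user, _, password = decoded.partition(":")
    except (ValueError, binascii.Error, UnicodeDecodeError) as err:
        # Previously the ValueError escaped as a 500; now it becomes a 401
        # that carries the realm so clients know to retry with credentials.
        raise Unauthorized(
            headers={"WWW-Authenticate": 'Basic realm="Home Assistant"'}
        ) from err
    return user, password
```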

Fixes SUPERVISOR-BFG

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 15:55:49 -05:00
Stefan Agner
659735d215 Guard _migrate validator against non-dict add-on configs (#6611)
The _migrate function in addons/validate.py is the first validator in the
SCHEMA_ADDON_CONFIG All() chain and was called directly with raw config data.
If a malformed add-on config file contained a non-dict value (e.g. a string),
config.get() would raise an AttributeError instead of a proper voluptuous
Invalid error, causing an unhandled exception.

Add an isinstance check at the top of _migrate to raise vol.Invalid for
non-dict inputs, letting validation fail gracefully.
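The guard boils down to this sketch (ValueError stands in for voluptuous's vol.Invalid to keep it stdlib-only):

```python
def migrate_addon_config(config):
    """First validator in the schema chain: receives raw, untrusted data.

    The isinstance guard turns malformed input (e.g. a bare string in the
    config file) into a clean validation error instead of an AttributeError
    from config.get(). Sketch only: the real validator raises vol.Invalid.
    """
    if not isinstance(config, dict):
        raise ValueError(f"expected a dictionary, got {type(config).__name__}")
    # Before the guard, this line crashed with AttributeError on a string:
    legacy_name = config.get("name", "unknown")
    config.setdefault("name", legacy_name)
    return config
```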

Fixes SUPERVISOR-HMP

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 15:54:21 -05:00
Stefan Agner
0ef71d1dd1 Drop unsupported architectures and machines, create issue for affected apps (#6607)
* Drop unsupported architectures and machines from Supervisor

Since #5620 Supervisor no longer updates the version information on
unsupported architectures and machines. This means users can no longer
update to newer versions of Supervisor since that PR was released.
Furthermore since #6347 we also no longer build for these
architectures. With this, any code related to these architectures
becomes dead code and should be removed.

This commit removes all references to the deprecated architectures and
machines from Supervisor.

This affects the following architectures:
- armhf
- armv7
- i386

And the following machines:
- odroid-xu
- qemuarm
- qemux86
- raspberrypi
- raspberrypi2
- raspberrypi3
- raspberrypi4
- tinker

* Create issue if an app using a deprecated architecture is installed

This adds a check to the resolution system to detect if an app is
installed that uses a deprecated architecture. If so, it will show a
warning to the user and recommend that they uninstall the app.

* Formally deprecate machine add-on configs as well

Not only deprecate add-on configs for unsupported architectures, but
also for unsupported machines.

* For installed add-ons architecture must always exist

Fail hard in case of missing architecture, as this is a required field
for installed add-ons. This will prevent the Supervisor from running
with an unsupported configuration and causing further issues down the
line.
2026-03-04 10:59:14 +01:00
Stefan Agner
96fb26462b Fix apps build using wrong architecture for non-native arch apps (#6610)
* Fix add-on build using wrong architecture for non-native arch add-ons

When building a locally-built add-on (no image tag), the architecture
was always set to sys_arch.default (e.g. amd64 on x86_64) instead of
matching against the add-on's declared architectures. This caused an
i386-only add-on to incorrectly build as amd64.

Use sys_arch.match() against the add-on's declared arch list in all
code paths: the arch property, image name generation, BUILD_ARCH build
arg, and default base image selection.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use CpuArch enums to fix tests

* Explicitly set _supported_arch as new list to fix tests

* Fix pytests

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 15:36:30 +01:00
Stefan Agner
2627d55873 Add default verbose timestamps for plugin logs (#6598)
* Use verbose log output for plug-ins

All three plug-ins which support logging (dns, multicast and audio)
should use the verbose log format by default to make sure the log lines
are annotated with timestamp. Introduce a new flag default_verbose for
advanced logs.

* Use default_verbose for host logs as well

Use the new default_verbose flag for advanced logs, to make it more
explicit that we want timestamps for host logs as well.
2026-03-03 11:58:11 +01:00
dependabot[bot]
6668417e77 Bump sentry-sdk from 2.53.0 to 2.54.0 (#6609)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.53.0 to 2.54.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.53.0...2.54.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.54.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 07:58:26 +01:00
Jan Čermák
6a955527f3 Ensure dt_utc in /os/info always returns current time (#6602)
The /os/info API endpoint has been using D-Bus property TimeUSec which got
cached between requests, so the time returned was not always the same as
current time on the host system at the time of the request. Since there's no
reason to use D-Bus API for the time, as Supervisor runs on the same machine
and time is global, simply format current datetime object with Python and
return it in the response.
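The replacement amounts to formatting a fresh timestamp per request, roughly:

```python
from datetime import datetime, timezone

def dt_utc() -> str:
    """Current UTC time, computed fresh on every call (nothing cached)."""
    return datetime.now(timezone.utc).isoformat()
```

Unlike a cached D-Bus property, each call reflects the actual wall clock at request time.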

Fixes #6581
2026-02-27 17:59:11 +01:00
dependabot[bot]
8eb188f734 Bump actions/upload-artifact from 6.0.0 to 7.0.0 (#6599) 2026-02-27 08:22:28 +01:00
dependabot[bot]
e7e3882013 Bump ruff from 0.15.2 to 0.15.4 (#6601)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.2 to 0.15.4.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.2...0.15.4)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-27 07:53:59 +01:00
dependabot[bot]
caa2b8b486 Bump actions/download-artifact from 7.0.0 to 8.0.0 (#6600)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 7.0.0 to 8.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](37930b1c2a...70fc10c6e5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-27 07:53:31 +01:00
Stefan Agner
3bf5ea4a05 Add remaining NetworkManager device type enums (#6593) 2026-02-26 18:50:52 +01:00
Stefan Agner
7f6327e94e Handle missing Accept header in host logs (#6594)
* Handle missing Accept header in host logs

Avoid indexing request headers directly in the host advanced logs handler when Accept is absent, preventing KeyError crashes on valid requests without that header. Fixes SUPERVISOR-1939.
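The fix is the usual .get()-with-default pattern; a minimal sketch (function name and fallback logic are illustrative, not the handler's exact code):

```python
def wants_plain_text(headers: dict[str, str]) -> bool:
    """Decide the log response format without assuming Accept is present.

    headers["Accept"] raises KeyError when the client sends no Accept
    header; .get() with a permissive default keeps such requests valid.
    """
    accept = headers.get("Accept", "*/*")
    return "text/plain" in accept or "*/*" in accept
```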

* Add pytest
2026-02-26 11:30:08 +01:00
Mike Degatano
9f00b6e34f Ensure uuid of dismissed suggestion/issue matches an existing one (#6582)
* Ensure uuid of dismissed suggestion/issue matches an existing one

* Fix lint, test and feedback issues

* Adjust existing tests and remove new ones for not found errors

* fix device access issue usage
2026-02-25 10:26:44 +01:00
Stefan Agner
7a0b2e474a Remove unused Docker config from backup metadata (#6591)
Remove the docker property and schema validation from backup metadata.
The Docker config (registry credentials, IPv6 setting) was already
dropped from backup/restore operations in #5605, but the property and
schema entry remained. Old backups with the docker key still load fine
since the schema uses extra=vol.ALLOW_EXTRA.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 09:12:05 +01:00
dependabot[bot]
b74277ced0 Bump home-assistant/builder from 2025.11.0 to 2026.02.1 (#6592)
Bumps [home-assistant/builder](https://github.com/home-assistant/builder) from 2025.11.0 to 2026.02.1.
- [Release notes](https://github.com/home-assistant/builder/releases)
- [Commits](https://github.com/home-assistant/builder/compare/2025.11.0...2026.02.1)

---
updated-dependencies:
- dependency-name: home-assistant/builder
  dependency-version: 2026.02.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 09:07:24 +01:00
Stefan Agner
c9a874b352 Remove RuntimeError from APIError inheritance (#6588)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 22:46:16 +01:00
Stefan Agner
3de2deaf02 Bump securetar to 2026.2.0 (#6575)
* Bump securetar from 2025.12.0 to 2026.2.0

Adapt to the new securetar API:
- Use SecureTarArchive for outer backup tar (replaces SecureTarFile
  with gzip=False for the outer container)
- create_inner_tar() renamed to create_tar(), password now inherited
  from the archive rather than passed per inner tar
- SecureTarFile no longer accepts a mode parameter (read-only by
  default, InnerSecureTarFile for writing)
- Pass create_version=2 to keep protected backups at version 2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Reformat imports

* Rename _create_cleanup to _create_finalize and update docstring

* Use constant for SecureTar create version

* Add test for SecureTarReadError in validate_backup

securetar >= 2026.2.0 raises SecureTarReadError instead of
tarfile.ReadError for invalid passwords. Catching this exception
and raising BackupInvalidError is required so Core shows the
encryption key dialog to the user.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Handle InvalidPasswordError for v3 backups

* Address typos

* Add securetar v3 encrypted password test fixture

Add a test fixture for a securetar v3 encrypted backup with password.
This will be used in the test suite to verify that the backup
extraction process correctly handles encrypted backups.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 13:08:14 +01:00
dependabot[bot]
c79e58d584 Bump pylint from 4.0.4 to 4.0.5 (#6584)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.4...v4.0.5)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-23 10:09:38 +01:00
Stefan Agner
6070d54860 Harden backup tar extraction with Python tar_filter (#6559)
* Harden backup tar extraction with Python data filter

Replace filter="fully_trusted" with a custom backup_data_filter that
wraps tarfile.data_filter. This adds protection against symlink attacks
(absolute targets, destination escapes), device node injection, and
path traversal, while resetting uid/gid and sanitizing permissions.

Unlike using data_filter directly, the custom filter skips problematic
entries with a warning instead of aborting the entire extraction. This
ensures existing backups containing absolute symlinks (e.g. in shared
folders) still restore successfully with the dangerous entries omitted.

Also removes the now-redundant secure_path member filtering, as
data_filter is a strict superset of its protections. Fixes a standalone
bug in _folder_restore which had no member filtering at all.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify security tests to test backup_data_filter directly

Test the public backup_data_filter function with plain tarfile
extraction instead of going through Backup internals. Removes
protected-access pylint warnings and unnecessary coresys setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Switch to tar filter instead of custom data filter wrapper

Replace backup_data_filter (which wrapped data_filter and skipped
problematic entries) with the built-in tar filter. The tar filter
rejects path traversal and absolute names while preserving uid/gid
and file permissions, which is important for add-ons running as
non-root users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Erik Montnemery <erik@montnemery.com>

* Use BackupInvalidError instead of BackupError for tarfile.TarError

Make sure FilterErrors lead to BackupInvalidError instead of BackupError,
as they are not related to the backup process itself but rather to the
integrity of the backup data.

* Improve test coverage and use pytest.raises

* Only make FilterError a BackupInvalidError

* Add test case for FilterError during Home Assistant Core restore

* Add test cases for Add-ons

* Fix pylint warnings

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Erik Montnemery <erik@montnemery.com>
2026-02-23 10:09:19 +01:00
dependabot[bot]
03e110cb86 Bump ruff from 0.15.1 to 0.15.2 (#6583)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.1 to 0.15.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.1...0.15.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 10:12:25 +01:00
Mike Degatano
4a1c816b92 Finish dockerpy to aiodocker migration (#6578) 2026-02-18 08:49:15 +01:00
Mike Degatano
b70f44bf1f Bump aiodocker from 0.25.0 to 0.26.0 (#6577) 2026-02-17 14:26:01 -05:00
Stefan Agner
c981b3b4c2 Extend and improve release drafter config (#6576)
* Extend and improve release drafter config

Extend the release drafter config with more types (labels) and order
them by priority. Inspired by conventional commits, in particular
the list documented at (including the order):
https://github.com/pvdlg/conventional-changelog-metahub?tab=readme-ov-file#commit-types

Additionally, we left the "breaking-change" and "dependencies" labels.

* Add revert to the list of labels
2026-02-17 19:32:25 +01:00
Stefan Agner
f2d0ceab33 Add missing WIFI_P2P device type to NetworkManager enum (#6574)
Add the missing WIFI_P2P (30) entry to the DeviceType NetworkManager
enum. Without it, systems with a Wi-Fi P2P interface log a warning:

  Unknown DeviceType value received from D-Bus: 30

Closes #6573

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 13:13:02 -05:00
Stefan Agner
3147d080a2 Unify Core user handling with HomeAssistantUser model (#6558)
* Unify Core user listing with HomeAssistantUser model

Replace the ingress-specific IngressSessionDataUser with a general
HomeAssistantUser dataclass that models the Core config/auth/list WS
response. This deduplicates the WS call (previously in both auth.py
and module.py) into a single HomeAssistant.list_users() method.

- Add HomeAssistantUser dataclass with fields matching Core's user API
- Remove get_users() and its unnecessary 5-minute Job throttle
- Auth and ingress consumers both use HomeAssistant.list_users()
- Auth API endpoint uses typed attribute access instead of dict keys
- Migrate session serialization from legacy "displayname" to "name"
- Accept both keys in schema/deserialization for backwards compat
- Add test for loading persisted sessions with legacy displayname key

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Tighten list_users() to trust Core's auth/list contract

Core's config/auth/list WS command always returns a list, never None.
Replace the silent `if not raw: return []` (which also swallowed empty
lists) with an assert, remove the dead AuthListUsersNoneResponseError
exception class, and document the HomeAssistantWSError contract in the
docstring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove | None from async_send_command return type

The WebSocket result is always set from data["result"] in _receive_json,
never explicitly to None. Remove the misleading | None from the return
type of both WSClient and HomeAssistantWebSocket async_send_command, and
drop the now-unnecessary assert in list_users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use HomeAssistantWSConnectionError in _ensure_connected

_ensure_connected and connect_with_auth raise on connection-level
failures, so use the more specific HomeAssistantWSConnectionError
instead of the broad HomeAssistantWSError. This allows callers to
distinguish connection errors from Core API errors (e.g. unsuccessful
WebSocket command responses). Also document that _ensure_connected can
propagate HomeAssistantAuthError from ensure_access_token.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove user list cache from _find_user_by_id

Drop the _list_of_users cache to avoid stale auth data in ingress
session creation. The method now fetches users fresh each time and
returns None on any API error instead of serving potentially outdated
cached results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 18:31:08 +01:00
dependabot[bot]
09a4e9d5a2 Bump actions/stale from 10.1.1 to 10.2.0 (#6571)
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.1 to 10.2.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](997185467f...b5d41d4e1d)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:36:56 +01:00
dependabot[bot]
d93e728918 Bump sentry-sdk from 2.52.0 to 2.53.0 (#6572)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.52.0 to 2.53.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.52.0...2.53.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:34:12 +01:00
c0ffeeca7
27c6af4b4b App store strings: rename add-on to app (#6569) 2026-02-16 09:20:53 +01:00
Stefan Agner
00f2578d61 Add missing BRIDGE device type to NetworkManager enum (#6567)
NMDeviceType 13 (NM_DEVICE_TYPE_BRIDGE) was not listed in the
DeviceType enum, causing a warning when NetworkManager reported
a bridge interface.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 10:25:15 -05:00
Stefan Agner
50e6c88237 Add periodic progress logging during initial Core installation (#6562)
* Add periodic progress logging during initial Core installation

Log installation progress every 15 seconds while downloading the
Home Assistant Core image during initial setup (landing page to core
transition). Uses asyncio.Event with wait_for timeout to produce
time-based logs independent of Docker pull events, ensuring visibility
even when the network stalls.
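The Event-plus-timeout technique looks roughly like this (pull_with_progress and the demo timings are illustrative, not Supervisor's actual API):

```python
import asyncio

async def pull_with_progress(pull_coro, interval: float = 15.0) -> list[str]:
    """Log progress on a fixed clock while a long download runs.

    asyncio.Event + wait_for produces time-based log lines independent
    of Docker pull events, so something is reported even when the
    network stalls. Sketch of the pattern only.
    """
    done = asyncio.Event()
    messages: list[str] = []

    async def reporter() -> None:
        while True:
            try:
                await asyncio.wait_for(done.wait(), timeout=interval)
                return  # download finished, stop logging
            except asyncio.TimeoutError:
                messages.append("Still downloading Home Assistant Core image...")

    report_task = asyncio.create_task(reporter())
    try:
        await pull_coro  # the actual image download
    finally:
        done.set()
        await report_task
    return messages

# Demo: a fake 50 ms "download" logged every 10 ms.
msgs = asyncio.run(pull_with_progress(asyncio.sleep(0.05), interval=0.01))
```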

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add test coverage

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2026-02-13 14:17:35 +01:00
dependabot[bot]
0cce2dad3c Bump ruff from 0.15.0 to 0.15.1 (#6565)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.0 to 0.15.1.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.0...0.15.1)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-13 08:59:59 +01:00
Stefan Agner
8dd42cb7a0 Fix getting Supervisor IP address in testing (#6564)
* Fix getting Supervisor IP address in testing

Newer Docker versions (probably newer than 29.x) do not have a global
IPAddress attribute under .NetworkSettings anymore. There is a network
specific map under Networks; in our case, the hassio network entry holds the
relevant IP address. This network-specific map already existed before, so
the new inspect format works for old as well as new Docker versions.
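A sketch of the lookup against a simplified inspect payload (the sample dict is trimmed to the relevant keys):

```python
def supervisor_ip(inspect_data: dict) -> str:
    """Read the IP from the per-network map, which exists in both old and
    new Docker inspect formats (the global IPAddress field does not)."""
    return inspect_data["NetworkSettings"]["Networks"]["hassio"]["IPAddress"]

sample = {
    "NetworkSettings": {
        # no top-level "IPAddress" key on newer Docker versions
        "Networks": {"hassio": {"IPAddress": "172.30.32.2"}},
    }
}
print(supervisor_ip(sample))  # 172.30.32.2
```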

While at it, also adjust the test fixture.

* Actively wait for hassio IPAddress to become valid
2026-02-13 08:12:19 +01:00
Mike Degatano
590674ba7c Remove blocking I/O added to import_image (#6557)
* Remove blocking I/O added to import_image

* Add scanned modules to extra blockbuster functions

* Use same cast avoidance approach in export_image

* Remove unnecessary local image_writer variable

* Remove unnecessary local image_tar_stream variable

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-12 17:37:15 +01:00
Stefan Agner
da800b8889 Simplify HomeAssistantWebSocket and raise on connection errors (#6553)
* Raise HomeAssistantWSError when Core WebSocket is unreachable

Previously, async_send_command silently returned None when Home Assistant
Core was not reachable, leading to misleading error messages downstream
(e.g. "returned invalid response of None instead of a list of users").

Refactor _can_send to _ensure_connected which now raises
HomeAssistantWSError on connection failures while still returning False
for silent-skip cases (shutdown, unsupported version). async_send_message
catches the exception to preserve fire-and-forget behavior.

Update callers that don't handle HomeAssistantWSError: _hardware_events
and addon auto-update in tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify HomeAssistantWebSocket command/message distinction

The WebSocket layer had a confusing split between "messages" (fire-and-forget)
and "commands" (request/response) that didn't reflect Home Assistant Core's
architecture where everything is just a WS command.

- Remove dead WSClient.async_send_message (never called)
- Rename async_send_message → _async_send_command (private, fire-and-forget)
- Rename send_message → send_command (sync wrapper)
- Simplify _ensure_connected: drop message param, always raise on failure
- Simplify async_send_command: always raise on connection errors
- Remove MIN_VERSION gating (minimum supported Core is now 2024.2+)
- Remove begin_backup/end_backup version guards for Core < 2022.1.0
- Add debug logging for silently ignored connection errors

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Wait for Core to come up before backup

This is crucial since the WebSocket command to Core now fails with the
new error handling if Core is not running yet.

* Wait for Core install job instead

* Use CLI to fetch jobs instead of Supervisor API

The Supervisor API needs authentication token, which we have not
available at this point in the workflow. Instead of fetching the token,
we can use the CLI, which is available in the container.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 09:20:23 +01:00
Stefan Agner
7ae14b09a7 Reload ingress tokens on addon update and rebuild (#6556)
When an addon updates from having no ingress to having ingress, the
ingress token map was never rebuilt. Both update() and rebuild() called
_check_ingress_port() to assign a dynamic port but skipped the
sys_ingress.reload() call that registers the token. This caused
Ingress.get() to return None, resulting in a 503 error.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 15:20:08 -05:00
Jan Čermák
cc2da7284a Adjust tests in builder workflow to use apps instead of addons in CLI (#6554)
The CLI calls in the tests are still using deprecated add-ons terminology,
causing deprecation warnings. Change the commands and flags to the new ones.
2026-02-11 17:10:25 +01:00
Stefan Agner
6877a8b210 Add D-Bus tolerant enum base classes to prevent crashes on unknown values (#6545)
* Add D-Bus tolerant enum base classes to prevent crashes on unknown values

D-Bus services (systemd, NetworkManager, RAUC, UDisks2) can introduce
new enum values at any time via OS updates. Standard Python enum
construction raises ValueError for unknown values, which would crash
the Supervisor.

Introduce DBusStrEnum and DBusIntEnum base classes that use Python's
_missing_ hook to create pseudo-members for unknown values. These
pseudo-members pass isinstance checks (satisfying typeguard), preserve
the original value, don't pollute __members__, and report unknown
values to Sentry (deduplicated per class+value) for observability.

Migrate 17 D-Bus enums in dbus/const.py and udisks2/const.py to the
new base classes. Enums only sent TO D-Bus (StopUnitMode, StartUnitMode,
etc.) are left unchanged. Remove the manual try/except workaround in
NetworkInterface.type now that DBusIntEnum handles it automatically.
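The _missing_ hook pattern can be sketched like this (Sentry reporting and per-class deduplication omitted; enum members are illustrative):

```python
from enum import IntEnum

class DBusIntEnum(IntEnum):
    """Tolerant enum base: unknown D-Bus values become pseudo-members
    instead of raising ValueError."""

    @classmethod
    def _missing_(cls, value):
        if not isinstance(value, int):
            return None  # let Enum raise ValueError for wrong types
        member = int.__new__(cls, value)
        member._name_ = f"UNKNOWN_{value}"
        member._value_ = value
        return member

class DeviceType(DBusIntEnum):
    ETHERNET = 1
    WIFI = 2

iface = DeviceType(30)  # a value NetworkManager may introduce later
print(iface.name, int(iface))  # UNKNOWN_30 30
```

The pseudo-member passes isinstance checks and keeps the raw value, but stays out of `DeviceType.__members__`.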

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add explicit enum conversions for systemd-resolved D-Bus properties

The resolved properties (dns_over_tls, dns_stub_listener, dnssec, llmnr,
multicast_dns, resolv_conf_mode) were returning raw string values from
D-Bus without converting to their declared enum types. This would fail
runtime type checking with typeguard.

Now safe to add explicit conversions since these enums use DBusStrEnum,
which tolerates unknown values from D-Bus without crashing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Avoid blocking I/O in D-Bus enum Sentry reporting

Move sentry_sdk.capture_message out of the event loop by adding a
fire_and_forget_capture_message helper that offloads the call to the
executor when a running loop is detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Handle exceptions when reporting message to Sentry

* Narrow typing of reported values

Use str/int explicitly since that is what the two existing Enum classes
can actually report.

* Adjust test style

* Apply suggestions from code review

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 15:53:19 +01:00
Stefan Agner
4b9f62b14b Add test style guideline to copilot instructions (#6552)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 11:54:59 +01:00
Stefan Agner
4dd58342b8 Fix RestartPolicy type annotation for runtime type checking (#6546)
* Fix RestartPolicy type annotation for runtime type checking

The restart_policy property returned a plain str from the Docker API
instead of a RestartPolicy instance, causing TypeCheckError with
typeguard. Use explicit mapping via _restart_policy_from_model(),
consistent with the existing _container_state_from_model() pattern,
to always return a proper RestartPolicy enum member. Unknown values
from Docker are logged and default to RestartPolicy.NO.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Drop unnecessary _RESTART_POLICY_MAP

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:17:52 +01:00
Mike Degatano
825ff415e0 Migrate export image to aiodocker (#6534)
* Migrate export image to aiodocker

* Remove aiofiles and just use executor

* Fixes from feedback
2026-02-11 10:03:50 +01:00
Stefan Agner
7e91cfe01c Reformat pyproject.toml using taplo (#6550)
Reformat pyproject.toml using taplo. This aligns formatting with Home
Assistant Core.
2026-02-11 09:56:00 +01:00
dependabot[bot]
327a2fe6b1 Bump cryptography from 46.0.4 to 46.0.5 (#6551)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.4 to 46.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.4...46.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 00:18:13 -05:00
dependabot[bot]
28b7cbe16b Bump coverage from 7.13.3 to 7.13.4 (#6548)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.3 to 7.13.4.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.3...7.13.4)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-10 10:29:32 +01:00
Stefan Agner
3f1b3bb41f Remove deprecated loop parameter from WSClient (#6540)
The explicit event loop parameter passed to WSClient has been deprecated
since Python 3.8. Replace self._loop.create_future() with
asyncio.get_running_loop().create_future() and remove the loop parameter
from __init__, connect_with_auth, and its call site.
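The replacement pattern in brief (class and method names are illustrative, not WSClient's real interface):

```python
import asyncio

class WSClientSketch:
    """Request/response future without storing an event loop reference."""

    async def send_command(self) -> str:
        # Old style: self._loop.create_future() with a loop saved in
        # __init__ (deprecated). New style: ask the running loop on demand.
        fut = asyncio.get_running_loop().create_future()
        asyncio.get_running_loop().call_soon(fut.set_result, "pong")
        return await fut

result = asyncio.run(WSClientSketch().send_command())
print(result)  # pong
```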

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:44:25 +01:00
Stefan Agner
6b974a5b88 Validate device option type before path conversion in addon options (#6542)
Add a type check for device options in AddonOptions._single_validate
to ensure the value is a string before passing it to Path(). When a
non-string value (e.g. a dict) is provided for a device option, this
now raises a proper vol.Invalid error instead of an unhandled TypeError.
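A sketch of the guard (ValueError stands in for vol.Invalid; PurePosixPath keeps the example platform-independent):

```python
from pathlib import PurePosixPath

def validate_device_option(value):
    """Check the type before constructing a Path: a dict (or any
    non-string) now fails validation cleanly instead of raising an
    unhandled TypeError. Sketch: the real validator raises vol.Invalid."""
    if not isinstance(value, str):
        raise ValueError(
            f"expected device path as string, got {type(value).__name__}"
        )
    return str(PurePosixPath(value))
```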

Fixes SUPERVISOR-175H

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:44:10 +01:00
Stefan Agner
66228f976d Use session.request() instead of getattr dispatch in HomeAssistantAPI (#6541)
Replace the dynamic `getattr(self.sys_websession, method)(...)` pattern
with the explicit `self.sys_websession.request(method, ...)` call. This
is type-safe and avoids runtime failures from typos in method names.

Also wrap the timeout parameter in `aiohttp.ClientTimeout` for
consistency with the typed `request()` signature.
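A toy stand-in showing why the explicit call path is safer than getattr dispatch (SessionSketch is hypothetical, not aiohttp's ClientSession):

```python
class SessionSketch:
    """Minimal stand-in for a session with per-method helpers."""

    def request(self, method: str, url: str) -> str:
        return f"{method.upper()} {url}"

    def get(self, url: str) -> str:
        return self.request("get", url)

session = SessionSketch()

# Explicit, type-checkable call path:
print(session.request("get", "/api/states"))  # GET /api/states

# getattr dispatch only fails at runtime, and only when the typo is hit:
try:
    getattr(session, "gte")("/api/states")
except AttributeError as err:
    print("typo caught at runtime:", err)
```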

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:43:55 +01:00
Stefan Agner
74da5cdaf7 Fix builder workflow for workflow_dispatch events (#6547)
The retrieve-changed-files action only supports pull_request and push
events. Restrict the "Get changed files" step to those event types so
manual workflow_dispatch runs no longer fail. Also always build wheels
on manual dispatches since there are no changed files to compare against.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 23:12:46 +01:00
dependabot[bot]
b2baad7c28 Bump dbus-fast from 3.1.2 to 4.0.0 (#6515)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 3.1.2 to 4.0.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v3.1.2...v4.0.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 17:14:22 +01:00
Stefan Agner
db0bfa952f Remove frontend auto-update workflow (#6539)
Remove the automated frontend update workflow and version tracking file
as the frontend repository no longer builds supervisor-specific assets.
Frontend updates will now follow a different distribution mechanism.

Related to home-assistant/frontend#29132

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 11:38:56 +01:00
dependabot[bot]
b069358b93 Bump setuptools from 80.10.2 to 82.0.0 (#6537)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.10.2 to 82.0.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.10.2...v82.0.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 82.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 10:53:48 +01:00
Stefan Agner
c3b9b9535c Fix RAUC D-Bus type annotations for runtime type checking (#6532)
Replace ctypes integer types (c_uint32, c_uint64) with standard Python int
in SlotStatusDataType to satisfy typeguard runtime type checking. D-Bus
returns standard Python integers, not ctypes objects.

Also fix the mark() method return type from tuple[str, str] to list[str] to
match the actual D-Bus return value, and add missing optional fields
"bundle.hash" and "installed.transaction" to SlotStatusDataType.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
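A sketch of the corrected typing (field names follow the commit message; the dotted D-Bus keys require the functional `TypedDict` form, and `mark()` is a hypothetical stub standing in for the real D-Bus call):

```python
from typing import TypedDict

# D-Bus hands back plain Python ints, so ctypes c_uint32/c_uint64
# annotations fail typeguard's runtime checks.
SlotStatusDataType = TypedDict(
    "SlotStatusDataType",
    {
        "state": str,
        "size": int,                    # was c_uint64
        "installed.count": int,         # was c_uint32
        "bundle.hash": str,             # newly added optional field
        "installed.transaction": str,   # newly added optional field
    },
    total=False,
)


def mark(state: str, slot_identifier: str) -> list[str]:
    # Return type fixed from tuple[str, str] to list[str] to match
    # the actual D-Bus return value (hypothetical stub).
    return [slot_identifier, state]
```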
2026-02-05 13:00:35 +01:00
Stefan Agner
0cd668ec77 Fix typeguard errors by explicitly converting IP addresses to strings (#6531)
* Fix environment variable type errors by converting IP addresses to strings

Environment variables must be strings, but IPv4Address and IPv4Network
objects were being passed directly to container environment dictionaries,
causing typeguard validation errors.

Changes:
- Convert IPv4Address objects to strings in homeassistant.py for
  SUPERVISOR and HASSIO environment variables
- Convert IPv4Network object to string in observer.py for
  NETWORK_MASK environment variable
- Update tests to expect string values instead of IP objects in
  environment dictionaries
- Remove unused ip_network import from test_observer.py

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Use explicit string conversion for extra_hosts IP addresses

Use the !s format specifier in the f-string to explicitly convert
IPv4Address objects to strings when building the ExtraHosts list.
While f-strings implicitly convert objects to strings, using !s makes
the conversion explicit and consistent with the environment variable
fixes in the previous commit.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
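The two conversion fixes can be sketched together (addresses are example values; the real environment dictionaries are built in homeassistant.py and observer.py):

```python
from ipaddress import IPv4Address, IPv4Network

supervisor_ip = IPv4Address("172.30.32.2")
network = IPv4Network("172.30.32.0/23")

# Environment variables must be plain strings, so convert explicitly
# instead of passing the ipaddress objects through.
environment = {
    "SUPERVISOR": str(supervisor_ip),
    "NETWORK_MASK": str(network),
}

# In f-strings the conversion is implicit, but !s makes it explicit
# and consistent with the environment-variable fixes.
extra_hosts = [f"supervisor:{supervisor_ip!s}"]
```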

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-05 11:00:43 +01:00
dependabot[bot]
d1cbb57c34 Bump sentry-sdk from 2.51.0 to 2.52.0 (#6530)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.51.0 to 2.52.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.51.0...2.52.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.52.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-05 09:28:13 +01:00
Stefan Agner
3d4849a3a2 Include Docker storage driver in Sentry reports (#6529)
Add the Docker storage driver (e.g., overlay2, vfs) to the context
information sent with Sentry error reports. This helps correlate
issues with specific storage backends and improves debugging of
Docker-related problems.

The storage driver is now included in both SETUP and RUNNING state
error reports under contexts.docker.storage_driver.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-05 09:27:51 +01:00
Tom Quist
4d8d44721d Fix MCP API proxy support for streaming and headers (#6461)
* Fix MCP API proxy support for streaming and headers

This commit fixes two issues with using the Core API endpoint /core/api/mcp
through the API proxy:

1. **Streaming support**: The proxy now detects text/event-stream responses
   and properly streams them instead of buffering all data. This is required
   for MCP's Server-Sent Events (SSE) transport.

2. **Header forwarding**: Added MCP-required headers to the forwarded headers:
   - Accept: Required for content negotiation
   - Last-Event-ID: Required for resuming broken SSE connections
   - Mcp-Session-Id: Required for session management across requests

The proxy now also preserves MCP-related response headers (Mcp-Session-Id)
and sets X-Accel-Buffering to "no" for streaming responses to prevent
buffering by intermediate proxies.

Tests added to verify:
- MCP headers are properly forwarded to Home Assistant
- Streaming responses (text/event-stream) are handled correctly
- Response headers are preserved
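The forwarding rules above can be sketched roughly as follows (hypothetical helper names, simplified to the headers named in this commit; later commits in this PR add MCP-Protocol-Version and drop Origin):

```python
# Request headers the proxy passes through to Home Assistant Core.
MCP_REQUEST_HEADERS = {"accept", "last-event-id", "mcp-session-id"}


def forward_request_headers(headers: dict[str, str]) -> dict[str, str]:
    # Pass only the MCP-required headers on; everything else is dropped.
    return {k: v for k, v in headers.items() if k.lower() in MCP_REQUEST_HEADERS}


def needs_streaming(content_type: str) -> bool:
    # SSE responses must be streamed to the client, not buffered; streamed
    # responses also get X-Accel-Buffering: no to stop intermediate proxies
    # from buffering.
    return content_type.partition(";")[0].strip() == "text/event-stream"
```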

* Refactor: reuse stream logic for SSE responses (#3)

* Fix ruff format + cover streaming payload error

* Fix merge error

* Address review comments (headers / streaming proxy) (#4)

* Address review: header handling for streaming/non-streaming

* Forward MCP-Protocol-Version and Origin headers

* Do not forward Origin header through API proxy (#5)

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-04 17:28:11 +01:00
Stefan Agner
a849050369 Improve CpuArch type safety with explicit conversions (#6524)
The CpuArch enum was being used inconsistently throughout the codebase,
with some code expecting enum values and other code expecting strings.
This caused type checking issues and potential runtime errors.

Changes:
- Fix match_base() to return CpuArch enum instead of str
- Add explicit string conversions using !s formatting where arch values
  are used in f-strings (build.py, model.py)
- Convert CpuArch to str explicitly in contexts requiring strings
  (docker/addon.py, misc/filter.py)
- Update all tests to use CpuArch enum values instead of strings
- Update test mocks to return CpuArch enum values

This ensures type consistency and improves MyPy type checking accuracy
across the architecture detection and management code.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 11:34:23 +01:00
dependabot[bot]
2ee7e22bbd Bump coverage from 7.13.2 to 7.13.3 (#6528)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.2 to 7.13.3.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.2...7.13.3)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-04 11:29:35 +01:00
dependabot[bot]
9f5def5fb7 Bump ruff from 0.14.14 to 0.15.0 (#6527)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.14 to 0.15.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.14...0.15.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-04 11:29:23 +01:00
Stefan Agner
df03b8fb68 Fix type annotations in backup and restore methods (#6523)
Update type hints throughout backup/restore code to match actual types:
- Change tarfile.TarFile to SecureTarFile for backup/restore methods
- Add None union types for Backup properties that can return None
- Fix exclude_database parameter to accept None in restore method
- Update API backup methods to handle None return from backup tasks
- Fix condition check for exclude_database to explicitly compare with True
- Add assertion to help type checker with indexed assignment

These changes improve type safety and resolve type checking issues
discovered by runtime type validation.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 11:18:06 +01:00
Stefan Agner
d1a576e711 Fix Docker Hub manifest fetching by using correct registry API endpoint (#6525)
The manifest fetcher was using docker.io as the registry API endpoint,
but Docker Hub's actual registry API is at registry-1.docker.io. When
trying to access https://docker.io/v2/..., requests were being redirected
to https://www.docker.com/ (the marketing site), which returned HTML
instead of JSON, causing manifest fetching to fail.

This matches exactly what Docker itself does internally - see
daemon/pkg/registry/config.go:49 where Docker hardcodes
DefaultRegistryHost = "registry-1.docker.io" for registry operations.

Changes:
- Add DOCKER_HUB_API constant for the actual API endpoint
- Add _get_api_endpoint() helper to translate docker.io to
  registry-1.docker.io for HTTP API calls
- Update _get_auth_token() and _fetch_manifest() to use the API endpoint
- Keep docker.io as the registry identifier for naming and credentials
- Add tests to verify the API endpoint translation

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 19:03:47 +01:00
Mike Degatano
a122b5f1e9 Migrate info, events and container logs to aiodocker (#6514)
* Migrate info and events to aiodocker

* Migrate container logs to aiodocker

* Fix dns plugin loop test

* Fix mocking for docker info

* Fixes from feedback

* Harden monitor error handling

* Deleted failing tests because they were not useful
2026-02-03 18:36:41 +01:00
Wendelin
c2de83e80d Add .cache to backup exclusion list (#6483)
* Add frontend_development_pr to backup exclusion list

* update frontend dev pr folder name to frontend_development_artifacts

* Update backup exclusion list to replace frontend_development_artifacts with .cache
2026-02-03 15:33:13 +01:00
Stefan Agner
6806c1d58a Fix Docker exec exit code handling by using detach=False (#6520)
* Fix Docker exec exit code handling by using detach=False

When executing commands inside containers using `container_run_inside()`,
the exec metadata did not contain a valid exit code because `detach=True`
starts the exec in the background and returns immediately before completion.

Root cause: With `detach=True`, Docker's exec start() returns an awaitable
that yields output bytes. However, the await only waits for the HTTP/REST
call to complete, NOT for the actual exec command to finish. The command
continues running in the background after the HTTP response is received.
Calling `inspect()` immediately after returns `ExitCode: None` because
the exec hasn't completed yet.

Solution: Use `detach=False` which returns a Stream object that:
- Automatically waits for exec completion by reading from the stream
- Provides actual command output (not just empty bytes)
- Makes exit code immediately available after stream closes
- No polling needed

Changes:
- Switch from `detach=True` to `detach=False` in container_run_inside()
- Read output from stream using async context manager
- Add defensive validation to ensure ExitCode is never None
- Update tests to mock the Stream interface using AsyncMock
- Add debug log showing exit code after command execution

Fixes #6518

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
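The resulting pattern looks roughly like this (aiodocker-style API, heavily simplified; a sketch, not the actual implementation):

```python
async def container_run_inside(container, command: list[str]) -> tuple[int, bytes]:
    """Run a command in a container and return (exit_code, output).

    With detach=False, reading the stream to EOF waits for the exec to
    complete, so inspect() afterwards reports a real exit code
    instead of None.
    """
    execute = await container.exec(command)
    output = b""
    async with execute.start(detach=False) as stream:
        while (msg := await stream.read_out()) is not None:
            output += msg.data
    exit_code = (await execute.inspect()).get("ExitCode")
    if exit_code is None:
        # Defensive: with detach=False this should never happen.
        raise RuntimeError("Exec completed without an exit code")
    return exit_code, output
```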

* Address review feedback

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 13:36:24 +01:00
Jan Čermák
a2ee2223fa Instruct agents to use relative imports within supervisor package (#6522)
Agents like Claude tend to use absolute imports, even though that is a
pattern we are trying to avoid. Adjust the instructions so they stop doing it.
2026-02-03 13:36:12 +01:00
Stefan Agner
7ad9a911e8 Add DELETE method support to /core/api proxy (#6521)
The Supervisor's /core/api proxy previously only supported GET and POST
methods, returning 405 Method Not Allowed for DELETE requests. This
prevented addons from calling Home Assistant Core REST API endpoints
that require DELETE methods, such as deleting automations, scripts,
or scenes.

The underlying proxy implementation already supported passing through
any HTTP method via request.method.lower(), so only the route
registration was needed.

Fixes #6509
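Since the handler is method-generic, the fix reduces to extending the registered routes, which can be modeled as:

```python
# Illustrative model only: the real code registers aiohttp routes, but
# the effect is the same - DELETE joins the set of accepted verbs.
REGISTERED_METHODS = {"GET", "POST", "DELETE"}  # DELETE newly registered


def proxy_status(method: str) -> int:
    # 405 Method Not Allowed for unregistered verbs; otherwise the
    # request is forwarded via request.method.lower() downstream.
    return 200 if method.upper() in REGISTERED_METHODS else 405
```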

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 11:51:59 +01:00
Stefan Agner
05a58d4768 Add exception handling for pull progress tracking errors (#6516)
* Add exception handling for pull progress tracking errors

Wrap progress event processing in try-except blocks to prevent image
pulls from failing due to progress tracking issues. This ensures that
progress updates, which are purely informational, never abort the
actual Docker pull operation.

Catches two categories of exceptions:
- ValueError: Includes "Cannot update a job that is done" errors that
  can occur under rare event combinations (similar to #6513)
- All other exceptions: Defensive catch-all for any unexpected errors
  in the progress tracking logic

All exceptions are logged with full context (layer ID, status, progress)
and sent to Sentry for tracking and debugging. The pull continues
successfully in all cases.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2026-02-03 11:12:37 +01:00
dependabot[bot]
c89d28ae11 Bump orjson from 3.11.6 to 3.11.7 (#6517)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.6 to 3.11.7.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.6...3.11.7)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-03 10:51:51 +01:00
Stefan Agner
79f9afb4c2 Fix port conflict tests for aiodocker 0.25.0 compatibility (#6519)
The aiodocker 0.25.0 upgrade (PR #6448) changed how DockerError handles
the message parameter. The library now extracts the message string from
Docker API JSON responses before passing it to DockerError, rather than
passing the entire dict.

The port conflict detection tests were written before this change and
incorrectly passed dicts to DockerError. This caused TypeErrors when
the port conflict detection code tried to match err.message with a
regex, expecting a string but receiving a dict.

Update both test_addon_start_port_conflict_error and
test_observer_start_port_conflict to pass message strings directly,
matching the real aiodocker 0.25.0 behavior.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 10:34:47 +01:00
Mike Degatano
11b754102c Map port conflict on start error into a known error (#6445)
* Map port conflict on start error into a known error

* Apply suggestions from code review

* Run ruff format

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-02 17:16:31 +01:00
Stefan Agner
6957341c3e Refactor Docker pull progress with registry manifest fetcher (#6379)
* Use count-based progress for Docker image pulls

Refactor Docker image pull progress to use a simpler count-based approach
where each layer contributes equally (100% / total_layers) regardless of
size. This replaces the previous size-weighted calculation that was
susceptible to progress regression.

The core issue was that Docker rate-limits concurrent downloads (~3 at a
time) and reports layer sizes only when downloading starts. With size-
weighted progress, large layers appearing late would cause progress to
drop dramatically (e.g., 59% -> 29%) as the total size increased.

The new approach:
- Each layer contributes equally to overall progress
- Per-layer progress: 70% download weight, 30% extraction weight
- Progress only starts after first "Downloading" event (when layer
  count is known)
- Always caps at 99% - job completion handles final 100%

This simplifies the code by moving progress tracking to a dedicated
module (pull_progress.py) and removing complex size-based scaling logic
that tried to account for unknown layer sizes.
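The count-based arithmetic can be sketched in a few lines (hypothetical helper names; the real logic lives in pull_progress.py):

```python
DOWNLOAD_WEIGHT, EXTRACT_WEIGHT = 0.7, 0.3


def layer_progress(download_pct: float, extract_pct: float) -> float:
    # Per-layer split: 70% download, 30% extraction.
    return DOWNLOAD_WEIGHT * download_pct + EXTRACT_WEIGHT * extract_pct


def overall_progress(layer_pcts: list[float]) -> float:
    # Each layer contributes equally (100% / total_layers); cap at 99%
    # and let job completion report the final 100%.
    if not layer_pcts:
        return 0.0
    return min(99.0, sum(layer_pcts) / len(layer_pcts))
```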

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Exclude already-existing layers from pull progress calculation

Layers that already exist locally should not count towards download
progress since there's nothing to download for them. Only layers that
need pulling are included in the progress calculation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add registry manifest fetcher for size-based pull progress

Fetch image manifests directly from container registries before pulling
to get accurate layer sizes upfront. This enables size-weighted progress
tracking where each layer contributes proportionally to its byte size,
rather than equal weight per layer.

Key changes:
- Add RegistryManifestFetcher that handles auth discovery via
  WWW-Authenticate headers, token fetching with optional credentials,
  and multi-arch manifest list resolution
- Update ImagePullProgress to accept manifest layer sizes via
  set_manifest() and calculate size-weighted progress
- Fall back to count-based progress when manifest fetch fails
- Pre-populate layer sizes from manifest when creating layer trackers

The manifest fetcher supports ghcr.io, Docker Hub, and private
registries by using credentials from Docker config when available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Clamp progress to 100 to prevent floating point precision issues

Floating point arithmetic in weighted progress calculations can produce
values slightly above 100 (e.g., 100.00000000000001). This causes
validation errors when the progress value is checked.

Add min(100, ...) clamping to both size-weighted and count-based
progress calculations to ensure the result never exceeds 100.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use sys_websession for manifest fetcher instead of creating new session

Reuse the existing CoreSys websession for registry manifest requests
instead of creating a new aiohttp session. This improves performance
and follows the established pattern used throughout the codebase.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Make platform parameter required and warn on missing platform

- Make platform a required parameter in get_manifest() and _fetch_manifest()
  since it's always provided by the calling code
- Return None and log warning when requested platform is not found in
  multi-arch manifest list, instead of falling back to first manifest
  which could be the wrong architecture

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Log manifest fetch failures at warning level

Users will notice degraded progress tracking when manifest fetch fails,
so log at warning level to help diagnose issues.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add pylint disable comments for protected access in manifest tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Separate download_current and total_size updates in pull progress

Update download_current and total_size independently in the DOWNLOADING
handler. This ensures download_current is updated even when total is
not yet available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Reject invalid platform format in manifest selection

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-02 15:56:24 +01:00
Stefan Agner
77f3da7014 Disable Home Assistant watchdog during system shutdown (#6512)
During system shutdown (reboot/poweroff), the watchdog was incorrectly
detecting the Home Assistant Core container as failed and attempting to
restart it. This occurred because Docker was stopping all containers in
parallel with Supervisor's own shutdown sequence, causing the watchdog
to trigger while add-ons were still being stopped.

This led to an abrupt termination of Core before it could cleanly shut
down its SQLite database, resulting in a warning on the next startup:
"The system could not validate that the sqlite3 database was shutdown
cleanly".

The fix registers a supervisor state change listener that unregisters
the watchdog when entering any shutdown state (SHUTDOWN, STOPPING, or
CLOSE). This prevents restart attempts during both user-initiated
reboots (via API) and external shutdown signals (Docker SIGTERM,
console reboot commands).

Since SHUTDOWN, STOPPING, and CLOSE are terminal states with no reverse
transition back to RUNNING, no re-registration logic is needed.

Fixes #6511

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
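A minimal model of the listener behavior described above (hypothetical class and state names, sketching the mechanism rather than the actual Supervisor code):

```python
# Terminal supervisor states: none of them transitions back to RUNNING,
# so the watchdog never needs to be re-registered.
SHUTDOWN_STATES = {"shutdown", "stopping", "close"}


class CoreWatchdog:
    def __init__(self) -> None:
        self.registered = True

    def on_supervisor_state_change(self, state: str) -> None:
        # Stop watching as soon as any shutdown state is entered, so a
        # parallel Docker container stop is not mistaken for a crash.
        if state.lower() in SHUTDOWN_STATES:
            self.registered = False
```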
2026-01-31 17:01:05 +01:00
Stefan Agner
96d0593af2 Handle order issues in job progress updates during image pulls (#6513)
Catch ValueError exceptions with "Cannot update a job that is done"
during image pull progress updates. This error occurs intermittently
when progress events arrive after a job has completed. It is not clear
why this happens; perhaps the job gets prematurely marked as done, or
the pull events arrive in a different order than expected.

Rather than failing the entire pull operation, we now:
- Log a warning with context (layer ID, status, progress)
- Send the error to Sentry for tracking and investigation
- Continue with the pull operation

This prevents pull failures while gathering information to help
identify and fix the root cause of the race condition.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
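The containment pattern can be sketched as follows (hypothetical helper; the real code additionally reports the error to Sentry):

```python
import logging

_LOGGER = logging.getLogger(__name__)


def safe_progress_update(job, layer_id: str, status: str, progress: float) -> None:
    # Progress is purely informational: a tracking error such as
    # "Cannot update a job that is done" is logged with context but
    # must never abort the pull itself.
    try:
        job.update(progress=progress)
    except ValueError as err:
        _LOGGER.warning(
            "Failed to update pull progress for layer %s (%s, %.1f%%): %s",
            layer_id, status, progress, err,
        )
```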
2026-01-30 17:06:53 -05:00
Stefan Agner
3db60170aa Fix flaky test_group_throttle_rate_limit race condition (#6504)
The test was failing intermittently in CI because concurrent async
operations in asyncio.gather() were getting slightly different
timestamps (microseconds apart) despite being inside a time_machine
context.

When test2.execute() calls were timestamped at start+2ms due to async
scheduling delays, they weren't cleaned up in the final test block
(cutoff = start+1ms), causing a false rate limit error.

Fix by using tick=False to completely freeze time during the gather,
ensuring all 4 calls get the exact same timestamp.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-30 17:17:50 +01:00
Mike Degatano
a5c3781f9d Migrate network interactions to aiodocker (#6505) 2026-01-30 15:34:12 +01:00
dependabot[bot]
2a4890e2b0 Bump aiodocker from 0.24.0 to 0.25.0 (#6448)
* Bump aiodocker from 0.24.0 to 0.25.0

Bumps [aiodocker](https://github.com/aio-libs/aiodocker) from 0.24.0 to 0.25.0.
- [Release notes](https://github.com/aio-libs/aiodocker/releases)
- [Changelog](https://github.com/aio-libs/aiodocker/blob/main/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiodocker/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: aiodocker
  dependency-version: 0.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update to new timeout configuration

* Fix pytest failure

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-01-30 09:39:06 +01:00
dependabot[bot]
8fa55bac9e Bump debugpy from 1.8.19 to 1.8.20 (#6508)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.19 to 1.8.20.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.19...v1.8.20)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.20
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:39:39 +01:00
dependabot[bot]
72003346f4 Bump actions/cache from 5.0.2 to 5.0.3 (#6507)
Bumps [actions/cache](https://github.com/actions/cache) from 5.0.2 to 5.0.3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](8b402f58fb...cdf6c1fa76)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:38:54 +01:00
dependabot[bot]
51c447a1e8 Bump orjson from 3.11.5 to 3.11.6 (#6506)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.5 to 3.11.6.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.5...3.11.6)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:38:29 +01:00
dependabot[bot]
a3f5675c96 Bump docker/login-action from 3.6.0 to 3.7.0 (#6502)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.6.0 to 3.7.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](5e57cd1181...c94ce9fb46)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-29 09:29:56 +01:00
dependabot[bot]
f7e4f6a1b2 Bump sentry-sdk from 2.50.0 to 2.51.0 (#6503)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.50.0 to 2.51.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.50.0...2.51.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.51.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-29 09:27:13 +01:00
Stefan Agner
a2db716a5f Check frontend availability after Home Assistant Core updates (#6311)
* Check frontend availability after Home Assistant Core updates

Add verification that the frontend is actually accessible at "/" after core
updates to ensure the web interface is serving properly, not just that the
API endpoints respond.

Previously, the update verification only checked API endpoints and whether
the frontend component was loaded. This could miss cases where the API is
responsive but the frontend fails to serve the UI.

Changes:
- Add check_frontend_available() method to HomeAssistantAPI that fetches
  the root path and verifies it returns HTML content
- Integrate frontend check into core update verification flow after
  confirming the frontend component is loaded
- Trigger automatic rollback if frontend is inaccessible after update
- Fix blocking I/O calls in rollback log file handling to use async
  executor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Avoid checking frontend if config data is None

* Improve pytest tests

* Make sure Core returns a valid config

* Remove Core version check in frontend availability test

The call site already ensures that an actual Home Assistant Core
instance is running before invoking the frontend availability check,
so the version check is redundant. Simplify the code by removing it
and update the tests accordingly.

* Add test coverage for get_config

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-29 09:06:45 +01:00
David Rapan
641b205ee7 Add configurable interface route metric (#6447)
* Add route_metric attribute to IpProperties class

Signed-off-by: David Rapan <david@rapan.cz>

* Refactor dbus setting IP constants

Signed-off-by: David Rapan <david@rapan.cz>

* Add route metric

Signed-off-by: David Rapan <david@rapan.cz>

* Merge test_api_network_interface_info

Signed-off-by: David Rapan <david@rapan.cz>

* Add test case for route metric update

Signed-off-by: David Rapan <david@rapan.cz>

---------

Signed-off-by: David Rapan <david@rapan.cz>
2026-01-28 13:08:36 +01:00
AlCalzone
de02bc991a fix: pull missing images before running (#6500)
* fix: pull missing images before running

* add tests for auto-pull behavior
2026-01-28 13:08:03 +01:00
dependabot[bot]
80cf00f195 Bump cryptography from 46.0.3 to 46.0.4 (#6501)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.3 to 46.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.3...46.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-28 10:39:42 +01:00
AlCalzone
df8201ca33 Update get_docker_args() to return mounts not volumes (#6499)
* Update `get_docker_args()` to return `mounts` not `volumes`

* fix more mocks to return PurePaths
2026-01-27 15:00:33 -05:00
Mike Degatano
909a2dda2f Migrate (almost) all docker container interactions to aiodocker (#6489)
* Migrate all docker container interactions to aiodocker

* Remove containers_legacy since it's no longer used

* Add back remove color logic

* Revert accidental invert of conditional in setup_network

* Fix typos found by copilot

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Revert "Apply suggestions from code review"

This reverts commit 0a475433ea.

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-27 12:42:17 +01:00
Stefan Agner
515114fa69 Improve CI workflow wait logic and reduce duplication (#6496)
* Improve Supervisor startup wait logic in CI workflow

The 'Wait for Supervisor to come up' step was failing intermittently when
the Supervisor API wasn't immediately available. The original script relied
on bash's lenient error handling in command substitution, which could fail
unpredictably.

Changes:
- Use curl -f flag to properly handle HTTP errors
- Use jq -e for robust JSON validation and exit code handling
- Add explicit 5-minute timeout with elapsed time tracking
- Reduce log noise by only reporting progress every 15 seconds
- Add comprehensive error diagnostics on timeout:
  * Show last API response received
  * Dump last 50 lines of Supervisor logs
- Show startup time on success for performance monitoring

This makes the CI workflow more reliable and easier to debug when the
Supervisor fails to start.
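The timeout/elapsed pattern described above reduces to a small polling loop. A minimal standalone sketch, with a hypothetical `check` function standing in for the real probe (`curl -sSf ... | jq -e '.result == "ok"'`); here it is rigged to succeed on the third attempt:

```shell
# Sketch of the bounded wait loop described above. `check` is a stand-in
# for the real HTTP probe; it succeeds once it has been called 3 times.
tries=0
check() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

timeout=30   # give up after this many seconds
interval=1   # seconds between attempts
elapsed=0
until check; do
  if [ "$elapsed" -ge "$timeout" ]; then
    echo "ERROR: service failed to come up within ${timeout}s"
    exit 1
  fi
  sleep "$interval"
  elapsed=$((elapsed + interval))
done
echo "service is up (took ${elapsed}s)"
```

Because `until` keys off the probe's exit code, flags like `curl -f` and `jq -e` (which turn HTTP errors and failed JSON conditions into nonzero exits) slot directly into the loop condition.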

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Use YAML anchor to deduplicate wait step in CI workflow

The 'Wait for Supervisor to come up' step appears twice in the
run_supervisor job - once after starting and once after restarting.
Use a YAML anchor to define the step once and reference it on the
second occurrence.

This reduces duplication by 28 lines and makes future maintenance
easier by ensuring both wait steps remain identical.
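As a generic illustration (hypothetical step names, not the actual workflow), a YAML anchor (`&name`) defines a node once and an alias (`*name`) repeats it verbatim:

```yaml
steps:
  - &wait_step              # anchor: define the step once
    name: Wait for service
    run: ./wait.sh
  - name: Restart service
    run: ./restart.sh
  - *wait_step              # alias: reuse the exact same step
```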

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-27 09:31:45 +01:00
dependabot[bot]
a0594c8a1f Bump coverage from 7.13.1 to 7.13.2 (#6494)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 08:28:41 +01:00
dependabot[bot]
df96fb711a Bump setuptools from 80.10.1 to 80.10.2 (#6495)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 08:28:27 +01:00
dependabot[bot]
d58d5769d4 Bump release-drafter/release-drafter from 6.1.1 to 6.2.0 (#6490)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-23 17:23:13 +01:00
dependabot[bot]
c4eda35184 Bump ruff from 0.14.13 to 0.14.14 (#6492)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-23 16:48:59 +01:00
dependabot[bot]
07a8350c40 Bump actions/checkout from 6.0.1 to 6.0.2 (#6491) 2026-01-23 09:28:51 +01:00
Jan Čermák
a8e5c4f1f2 Add missing syslog identifiers to default list for Host logs (#6485)
Add more syslog identifiers (most importantly containerd), extracted from real
systems, that were missing from the list. This should make the host logs contain
the same events as journalctl logs, minus audit logs and Docker container logs.
2026-01-22 10:05:52 +01:00
dependabot[bot]
cbaef62d67 Bump setuptools from 80.9.0 to 80.10.1 (#6488)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.9.0 to 80.10.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.9.0...v80.10.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.10.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:29:19 +01:00
dependabot[bot]
16cde8365f Bump peter-evans/create-pull-request from 8.0.0 to 8.1.0 (#6487)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 8.0.0 to 8.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](98357b18bf...c0f553fe54)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:28:53 +01:00
dependabot[bot]
afc0f37fef Bump actions/setup-python from 6.1.0 to 6.2.0 (#6486)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 6.1.0 to 6.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](83679a892e...a309ff8b42)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:28:22 +01:00
dependabot[bot]
7b0ea51ef6 Bump sentry-sdk from 2.49.0 to 2.50.0 (#6484)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.49.0 to 2.50.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.49.0...2.50.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.50.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 12:27:57 +01:00
dependabot[bot]
94daaf4e52 Bump release-drafter/release-drafter from 6.1.0 to 6.1.1 (#6482)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.1.0 to 6.1.1.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](b1476f6e6e...267d2e0268)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 6.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 12:27:11 +01:00
dependabot[bot]
2edd8d0407 Bump actions/cache from 5.0.1 to 5.0.2 (#6481)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 10:13:01 +01:00
dependabot[bot]
308589e1de Bump ruff from 0.14.11 to 0.14.13 (#6480)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 09:44:42 +01:00
Jan Čermák
ec0f7c2b9c Use query_dns instead of deprecated query after aiohttp 4.x update (#6478) 2026-01-14 15:22:12 +01:00
Jan Čermák
753021d4d5 Fix 'DockerMount is not JSON serializable' in DockerAPI.run_command (#6477) 2026-01-14 15:21:11 +01:00
Erich Gubler
3e3db696d3 Fix typo in log message when running commands in a container (#6472) 2026-01-14 13:28:15 +01:00
dependabot[bot]
4d708d34c8 Bump aiodns from 3.6.1 to 4.0.0 (#6473)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 22:43:50 +01:00
dependabot[bot]
e7a0559692 Bump types-requests from 2.32.4.20250913 to 2.32.4.20260107 (#6464)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 22:00:46 +01:00
dependabot[bot]
9ad1bf0f1a Bump sentry-sdk from 2.48.0 to 2.49.0 (#6467)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 22:51:15 +01:00
dependabot[bot]
0f30e2cb43 Bump ruff from 0.14.10 to 0.14.11 (#6468)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 22:48:34 +01:00
214 changed files with 10011 additions and 4511 deletions

View File

@@ -1,6 +1,6 @@
 {
   "name": "Supervisor dev",
-  "image": "ghcr.io/home-assistant/devcontainer:2-supervisor",
+  "image": "ghcr.io/home-assistant/devcontainer:3-supervisor",
   "containerEnv": {
     "WORKSPACE_DIRECTORY": "${containerWorkspaceFolder}"
   },

View File

@@ -91,8 +91,8 @@ availability.
 ### Python Requirements
-- **Compatibility**: Python 3.13+
-- **Language Features**: Use modern Python features:
+- **Compatibility**: Python 3.14+
+- **Language Features**: Use modern Python features:
   - Type hints with `typing` module
   - f-strings (preferred over `%` or `.format()`)
   - Dataclasses and enum classes
@@ -233,6 +233,8 @@ async def backup_full(self, request: web.Request) -> dict[str, Any]:
 - **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
 - **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
 - **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
+- **Style**: Use plain `test_` functions, not `Test*` classes — test classes are
+  considered legacy style in this project
### Error Handling
@@ -276,12 +278,14 @@ Always run the pre-commit hooks at the end of code editing.
 - Access Docker via `self.sys_docker` not direct Docker API
 - Use constants from `const.py` instead of hardcoding
 - Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
+- Use relative imports within the `supervisor/` package (e.g., `from ..docker.manager import ExecReturn`)

 **❌ Avoid These Patterns**:
 - Direct Docker API usage - use Supervisor's Docker manager
 - Blocking operations in async context (use asyncio alternatives)
 - Hardcoded values - use constants from `const.py`
 - Manual error handling in API endpoints - let `@api_process` handle it
+- Absolute imports within the `supervisor/` package (e.g., `from supervisor.docker.manager import ...`) - use relative imports instead

 This guide provides the foundation for contributing to Home Assistant Supervisor.
 Follow these patterns and guidelines to ensure code quality, security, and

View File

@@ -5,45 +5,53 @@ categories:
- title: ":boom: Breaking Changes"
label: "breaking-change"
- title: ":wrench: Build"
label: "build"
- title: ":boar: Chore"
label: "chore"
- title: ":sparkles: New Features"
label: "new-feature"
- title: ":zap: Performance"
label: "performance"
- title: ":recycle: Refactor"
label: "refactor"
- title: ":green_heart: CI"
label: "ci"
- title: ":bug: Bug Fixes"
label: "bugfix"
- title: ":white_check_mark: Test"
- title: ":gem: Style"
label: "style"
- title: ":package: Refactor"
label: "refactor"
- title: ":rocket: Performance"
label: "performance"
- title: ":rotating_light: Test"
label: "test"
- title: ":hammer_and_wrench: Build"
label: "build"
- title: ":gear: CI"
label: "ci"
- title: ":recycle: Chore"
label: "chore"
- title: ":wastebasket: Revert"
label: "revert"
- title: ":arrow_up: Dependency Updates"
label: "dependencies"
collapse-after: 1
include-labels:
- "breaking-change"
- "build"
- "chore"
- "performance"
- "refactor"
- "new-feature"
- "bugfix"
- "dependencies"
- "style"
- "refactor"
- "performance"
- "test"
- "build"
- "ci"
- "chore"
- "revert"
- "dependencies"
template: |

View File

@@ -27,18 +27,18 @@ on:
paths:
- "rootfs/**"
- "supervisor/**"
- build.yaml
- Dockerfile
- requirements.txt
- setup.py
env:
-DEFAULT_PYTHON: "3.13"
+DEFAULT_PYTHON: "3.14.3"
COSIGN_VERSION: "v2.5.3"
CRANE_VERSION: "v0.20.7"
CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
ARCHITECTURES: '["amd64", "aarch64"]'
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
@@ -49,21 +49,17 @@ jobs:
name: Initialize build
runs-on: ubuntu-latest
outputs:
-architectures: ${{ steps.info.outputs.architectures }}
+architectures: ${{ env.ARCHITECTURES }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.version.outputs.channel }}
publish: ${{ steps.version.outputs.publish }}
build_wheels: ${{ steps.requirements.outputs.build_wheels }}
steps:
- name: Checkout the repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Get information
id: info
uses: home-assistant/actions/helpers/info@master
- name: Get version
id: version
uses: home-assistant/actions/helpers/version@master
@@ -72,8 +68,8 @@ jobs:
- name: Get changed files
id: changed_files
-if: github.event_name != 'release'
-uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
+if: github.event_name == 'pull_request' || github.event_name == 'push'
+uses: masesgroup/retrieve-changed-files@45a8b3b496d2d6037cbd553e8a3450989b9384a2 # v4.0.0
- name: Check if requirements files changed
id: requirements
@@ -81,7 +77,10 @@ jobs:
# No wheels build necessary for releases
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
-elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|build\.yaml|\.github/workflows/builder\.yml) ]]; then
+# Always build wheels for manual dispatches
+elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
+  echo "build_wheels=true" >> "$GITHUB_OUTPUT"
+elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|\.github/workflows/builder\.yml) ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
else
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
@@ -103,13 +102,13 @@ jobs:
- runs-on: ubuntu-24.04-arm
arch: aarch64
env:
-WHEELS_ABI: cp313
+WHEELS_ABI: cp314
WHEELS_TAG: musllinux_1_2
WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
WHEELS_SKIP_BINARY: aiohttp
steps:
- name: Checkout the repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
@@ -152,7 +151,7 @@ jobs:
- name: Upload local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: wheels-${{ matrix.arch }}
path: wheels
@@ -166,13 +165,13 @@ jobs:
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: ${{ env.COSIGN_VERSION }}
@@ -188,38 +187,29 @@ jobs:
run: |
cosign sign-blob --yes rootfs/supervisor.sha256 --bundle rootfs/supervisor.sha256.sig
- name: Login to GitHub Container Registry
if: needs.init.outputs.publish == 'true'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set build arguments
if: needs.init.outputs.publish == 'false'
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.11.0
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
image: ${{ matrix.arch }}
args: |
$BUILD_ARGS \
--${{ matrix.arch }} \
--target /data \
--cosign \
--generic ${{ needs.init.outputs.version }}
arch: ${{ matrix.arch }}
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
cosign-base-identity: 'https://github.com/home-assistant/docker-base/.*'
cosign-base-verify: ghcr.io/home-assistant/base-python:3.14-alpine3.22
image: ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor
image-tags: |
${{ needs.init.outputs.version }}
latest
push: ${{ needs.init.outputs.publish == 'true' }}
version: ${{ needs.init.outputs.version }}
version:
name: Update version
if: github.repository_owner == 'home-assistant'
needs: ["init", "run_supervisor", "retag_deprecated"]
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Initialize git
if: needs.init.outputs.publish == 'true'
@@ -244,26 +234,28 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Download local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: wheels-amd64
path: wheels
# home-assistant/builder doesn't support sha pinning
# Build the Supervisor for non-publish runs (e.g. PRs)
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.11.0
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
args: |
--test \
--amd64 \
--target /data \
--generic runner
arch: amd64
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
image: ghcr.io/home-assistant/amd64-hassio-supervisor
image-tags: runner
load: true
version: runner
# Pull the Supervisor for publish runs to test the published image
- name: Pull Supervisor
if: needs.init.outputs.publish == 'true'
run: |
@@ -290,14 +282,68 @@ jobs:
- name: Start the Supervisor
run: docker start hassio_supervisor
-- name: Wait for Supervisor to come up
+- &wait_for_supervisor
+  name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
ping="error"
while [ "$ping" != "ok" ]; do
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
sleep 5
until SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.Networks.hassio.IPAddress}}' hassio_supervisor 2>/dev/null) && \
[ -n "$SUPERVISOR" ] && [ "$SUPERVISOR" != "<no value>" ]; do
echo "Waiting for network configuration..."
sleep 1
done
echo "Waiting for Supervisor API at http://${SUPERVISOR}/supervisor/ping"
timeout=300
elapsed=0
while [ $elapsed -lt $timeout ]; do
if response=$(curl -sSf "http://${SUPERVISOR}/supervisor/ping" 2>/dev/null); then
if echo "$response" | jq -e '.result == "ok"' >/dev/null 2>&1; then
echo "Supervisor is up! (took ${elapsed}s)"
exit 0
fi
fi
if [ $((elapsed % 15)) -eq 0 ]; then
echo "Still waiting... (${elapsed}s/${timeout}s)"
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "ERROR: Supervisor failed to start within ${timeout}s"
echo "Last response: $response"
echo "Checking supervisor logs..."
docker logs --tail 50 hassio_supervisor
exit 1
# Wait for Core to come up so subsequent steps (backup, addon install) succeed.
# On first startup, Supervisor installs Core via the "home_assistant_core_install"
# job (which pulls the image and then starts Core). Jobs with cleanup=True are
# removed from the jobs list once done, so we poll until it's gone.
- name: Wait for Core to be started
run: |
echo "Waiting for Home Assistant Core to be installed and started..."
timeout=300
elapsed=0
while [ $elapsed -lt $timeout ]; do
jobs=$(docker exec hassio_cli ha jobs info --no-progress --raw-json | jq -r '.data.jobs[] | select(.name == "home_assistant_core_install" and .done == false) | .name' 2>/dev/null)
if [ -z "$jobs" ]; then
echo "Home Assistant Core install/start complete (took ${elapsed}s)"
exit 0
fi
if [ $((elapsed % 15)) -eq 0 ]; then
echo "Core still installing... (${elapsed}s/${timeout}s)"
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "ERROR: Home Assistant Core failed to install/start within ${timeout}s"
docker logs --tail 50 hassio_supervisor
exit 1
- name: Check the Supervisor
run: |
@@ -313,28 +359,28 @@ jobs:
exit 1
fi
-- name: Check the Store / Addon
+- name: Check the Store / App
run: |
-echo "Install Core SSH Add-on"
-test=$(docker exec hassio_cli ha addons install core_ssh --no-progress --raw-json | jq -r '.result')
+echo "Install Core SSH app"
+test=$(docker exec hassio_cli ha apps install core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
-test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
+test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
-echo "Start Core SSH Add-on"
-test=$(docker exec hassio_cli ha addons start core_ssh --no-progress --raw-json | jq -r '.result')
+echo "Start Core SSH app"
+test=$(docker exec hassio_cli ha apps start core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure its state is started
-test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
+test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
@@ -348,9 +394,9 @@ jobs:
fi
echo "slug=$(echo $test | jq -r '.data.slug')" >> "$GITHUB_OUTPUT"
-- name: Uninstall SSH add-on
+- name: Uninstall SSH app
run: |
-test=$(docker exec hassio_cli ha addons uninstall core_ssh --no-progress --raw-json | jq -r '.result')
+test=$(docker exec hassio_cli ha apps uninstall core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
@@ -362,30 +408,23 @@ jobs:
exit 1
fi
-- name: Wait for Supervisor to come up
-  run: |
-    SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
-    ping="error"
-    while [ "$ping" != "ok" ]; do
-      ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
-      sleep 5
-    done
+- *wait_for_supervisor
-- name: Restore SSH add-on from backup
+- name: Restore SSH app from backup
run: |
-test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --addons core_ssh --no-progress --raw-json | jq -r '.result')
+test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --app core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
-test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
+test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
# Make sure its state is started
-test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
+test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
@@ -418,14 +457,14 @@ jobs:
FROZEN_VERSION: "2025.11.5"
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install Cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: ${{ env.COSIGN_VERSION }}

View File

@@ -1,19 +1,111 @@
name: Check PR
# yamllint disable-line rule:truthy
on:
pull_request:
branches: ["main"]
types: [labeled, unlabeled, synchronize]
types: [opened, edited, labeled, unlabeled, synchronize]
permissions:
contents: read
pull-requests: write
jobs:
sync-type-labels:
name: Sync type labels from PR body
runs-on: ubuntu-latest
outputs:
labels: ${{ steps.sync.outputs.labels }}
steps:
- id: sync
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
script: |
const pr = context.payload.pull_request;
const body = pr.body || "";
function isTypeChecked(text, keySubstring) {
for (const line of text.split("\n")) {
if (!line.includes(keySubstring)) continue;
const m = line.match(/^\s*-\s*\[\s*([ xX])\s*\]\s*/);
if (m) return m[1].toLowerCase() === "x";
}
return false;
}
const typeMappings = [
{ key: "Dependency upgrade", label: "dependencies" },
{
key: "Bugfix (non-breaking change which fixes an issue)",
label: "bugfix",
},
{
key: "New feature (which adds functionality to the supervisor)",
label: "new-feature",
},
{
key: "Breaking change (fix/feature causing existing functionality to break)",
label: "breaking-change",
},
{
key: "Code quality improvements to existing code or addition of tests",
label: "ci",
},
];
const originalLabels = new Set(pr.labels.map((l) => l.name));
const desiredLabels = new Set(originalLabels);
for (const { key, label } of typeMappings) {
if (isTypeChecked(body, key)) {
desiredLabels.add(label);
} else {
desiredLabels.delete(label);
}
}
const owner = context.repo.owner;
const repo = context.repo.repo;
const prNumber = pr.number;
for (const { label } of typeMappings) {
const wanted = desiredLabels.has(label);
const had = originalLabels.has(label);
if (wanted === had) continue;
try {
if (wanted) {
await github.rest.issues.addLabels({
owner,
repo,
issue_number: prNumber,
labels: [label],
});
} else {
await github.rest.issues.removeLabel({
owner,
repo,
issue_number: prNumber,
name: label,
});
}
} catch (e) {
core.warning(`Label API (${label}): ${e.message}`);
}
}
const labelsJson = JSON.stringify([...desiredLabels].sort());
core.setOutput("labels", labelsJson);
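The checkbox-parsing helper above can be exercised on its own under node; `body` below is a hypothetical PR body, not taken from any real pull request:

```javascript
// Same checkbox-matching logic as the workflow script above: a line counts
// as checked only if it is a "- [x]" list item containing the key substring.
function isTypeChecked(text, keySubstring) {
  for (const line of text.split("\n")) {
    if (!line.includes(keySubstring)) continue;
    const m = line.match(/^\s*-\s*\[\s*([ xX])\s*\]\s*/);
    if (m) return m[1].toLowerCase() === "x";
  }
  return false;
}

const body = "- [x] Bugfix (non-breaking change which fixes an issue)\n- [ ] New feature";
console.log(isTypeChecked(body, "Bugfix"));      // true
console.log(isTypeChecked(body, "New feature")); // false
```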
init:
name: Check labels
needs: sync-type-labels
runs-on: ubuntu-latest
steps:
- name: Check labels
env:
LABELS_JSON: ${{ needs.sync-type-labels.outputs.labels }}
run: |
labels=$(jq -r '.pull_request.labels[] | .name' ${{github.event_path }})
echo "$labels"
if [ "$labels" == "cla-signed" ]; then
echo "$LABELS_JSON" | jq -r '.[]'
if [ "$(echo "$LABELS_JSON" | jq -c .)" = '["cla-signed"]' ]; then
exit 1
fi

View File

@@ -8,7 +8,7 @@ on:
pull_request: ~
env:
-DEFAULT_PYTHON: "3.13"
+DEFAULT_PYTHON: "3.14.3"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
@@ -26,15 +26,15 @@ jobs:
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
id: python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -48,7 +48,7 @@ jobs:
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -68,15 +68,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -88,7 +88,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -111,15 +111,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -131,7 +131,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -154,7 +154,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,15 +169,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -189,7 +189,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -213,15 +213,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -233,7 +233,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -257,15 +257,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -293,9 +293,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -307,7 +307,7 @@ jobs:
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: >-
@@ -318,7 +318,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: .mypy_cache
key: >-
@@ -339,19 +339,19 @@ jobs:
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: "v2.5.3"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: coverage
path: .coverage
@@ -398,15 +398,15 @@ jobs:
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: |
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: coverage
path: coverage/
@@ -428,4 +428,4 @@ jobs:
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
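The workflow updates above all follow the same pattern: each third-party action is pinned to a full 40-character commit SHA, with the released tag kept only as a trailing comment. A minimal sketch of pulling both pieces back out of such a line (the `line` value is copied from the diff above; the `sed` patterns are illustrative, not part of the repository):

```shell
# Parse a pinned "uses:" line into its commit SHA and its tag comment.
line='uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4'

# The 40-hex-digit SHA after "@" is what GitHub Actions actually checks out.
sha=$(printf '%s\n' "$line" | sed -E 's/.*@([0-9a-f]{40}).*/\1/')

# The "# vX.Y.Z" comment is documentation only; update bots keep it in
# sync with the SHA when bumping the pin.
tag=$(printf '%s\n' "$line" | sed -E 's/.*# *//')

echo "$sha $tag"  # -> 668228422ae6a00e4ad889ee87cd7109ec5666a7 v5.0.4
```

Pinning to a SHA rather than a tag means a later force-move of the tag cannot silently change the code the workflow runs.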


@@ -11,7 +11,7 @@ jobs:
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
@@ -36,7 +36,7 @@ jobs:
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
uses: release-drafter/release-drafter@b1476f6e6eb133afa41ed8589daba6dc69b4d3f5 # v6.1.0
uses: release-drafter/release-drafter@139054aeaa9adc52ab36ddf67437541f039b88e2 # v7.1.1
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}


@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Sentry Release
uses: getsentry/action-release@dab6548b3c03c4717878099e43782cf5be654289 # v3.5.0
env:


@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30


@@ -1,82 +0,0 @@
name: Update frontend
on:
schedule: # once a day
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
check-version:
runs-on: ubuntu-latest
outputs:
skip: ${{ steps.check_version.outputs.skip || steps.check_existing_pr.outputs.skip }}
current_version: ${{ steps.check_version.outputs.current_version }}
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
with:
owner: home-assistant
repo: frontend
- name: Check if version is up to date
id: check_version
run: |
current_version="$(cat .ha-frontend-version)"
latest_version="${{ steps.latest_frontend_version.outputs.latest_tag }}"
echo "current_version=${current_version}" >> $GITHUB_OUTPUT
echo "LATEST_VERSION=${latest_version}" >> $GITHUB_ENV
if [[ ! "$current_version" < "$latest_version" ]]; then
echo "Frontend version is up to date"
echo "skip=true" >> $GITHUB_OUTPUT
fi
- name: Check if there is no open PR with this version
if: steps.check_version.outputs.skip != 'true'
id: check_existing_pr
env:
GH_TOKEN: ${{ github.token }}
run: |
PR=$(gh pr list --state open --base main --json title --search "Update frontend to version $LATEST_VERSION")
if [[ "$PR" != "[]" ]]; then
echo "Skipping - There is already a PR open for version $LATEST_VERSION"
echo "skip=true" >> $GITHUB_OUTPUT
fi
create-pr:
runs-on: ubuntu-latest
needs: check-version
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*
- name: Update version file
run: |
echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
- name: Download release assets
uses: robinraju/release-downloader@daf26c55d821e836577a15f77d86ddc078948b05 # v1.12
with:
repository: 'home-assistant/frontend'
tag: ${{ needs.check-version.outputs.latest_version }}
fileName: home_assistant_frontend_supervisor-${{ needs.check-version.outputs.latest_version }}.tar.gz
extract: true
out-file-path: supervisor/api/panel/
- name: Remove release assets archive
run: |
rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
- name: Create PR
uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
with:
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
branch: autoupdate-frontend
base: main
draft: true
sign-commits: true
title: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
body: >
Update frontend from ${{ needs.check-version.outputs.current_version }} to
[${{ needs.check-version.outputs.latest_version }}](https://github.com/home-assistant/frontend/releases/tag/${{ needs.check-version.outputs.latest_version }})
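The removed workflow's up-to-date check relies on bash's `[[ "$a" < "$b" ]]`, which compares strings lexicographically. That is sound for the frontend's fixed-width date-based tags (e.g. `20250925.1`), but would misorder variable-width numeric versions. A quick sketch of both cases (the version values here are made up for illustration):

```shell
# Lexicographic comparison works for zero-padded date tags:
current="20250925.1"; latest="20260101.0"
if [[ "$current" < "$latest" ]]; then
    echo "date tags: update available"
fi

# ...but fails for plain numeric versions, where "9" sorts after "10":
if [[ ! "9.0" < "10.0" ]]; then
    echo "semver-style: 9.0 not considered older than 10.0"
fi
```

Both branches print, so this pattern is only safe while the compared tags keep a fixed-width, sortable format.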


@@ -1 +0,0 @@
20250925.1


@@ -1,4 +1,4 @@
ARG BUILD_FROM
ARG BUILD_FROM=ghcr.io/home-assistant/base-python:3.14-alpine3.22-2026.03.1
FROM ${BUILD_FROM}
ENV \
@@ -22,7 +22,7 @@ RUN \
openssl \
yaml \
\
&& pip3 install uv==0.9.18
&& pip3 install uv==0.10.9
# Install requirements
RUN \
@@ -48,3 +48,12 @@ RUN \
WORKDIR /
COPY rootfs /
LABEL \
io.hass.type="supervisor" \
org.opencontainers.image.title="Home Assistant Supervisor" \
org.opencontainers.image.description="Container-based system for managing Home Assistant Core installation" \
org.opencontainers.image.authors="The Home Assistant Authors" \
org.opencontainers.image.url="https://www.home-assistant.io/" \
org.opencontainers.image.documentation="https://www.home-assistant.io/docs/" \
org.opencontainers.image.licenses="Apache License 2.0"


@@ -1,16 +0,0 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.12.2
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.12.2
cosign:
base_identity: https://github.com/home-assistant/docker-base/.*
identity: https://github.com/home-assistant/supervisor/.*
labels:
io.hass.type: supervisor
org.opencontainers.image.title: Home Assistant Supervisor
org.opencontainers.image.description: Container-based system for managing Home Assistant Core installation
org.opencontainers.image.source: https://github.com/home-assistant/supervisor
org.opencontainers.image.authors: The Home Assistant Authors
org.opencontainers.image.url: https://www.home-assistant.io/
org.opencontainers.image.documentation: https://www.home-assistant.io/docs/
org.opencontainers.image.licenses: Apache License 2.0


@@ -4,8 +4,11 @@ coverage:
status:
project:
default:
target: 40
threshold: 0.09
target: auto
threshold: 1
patch:
default:
target: 80
comment: false
github_checks:
annotations: false


@@ -1,5 +1,5 @@
[build-system]
requires = ["setuptools~=80.9.0", "wheel~=0.46.1"]
requires = ["setuptools~=82.0.0", "wheel~=0.46.1"]
build-backend = "setuptools.build_meta"
[project]
@@ -9,10 +9,10 @@ license = { text = "Apache-2.0" }
description = "Open-source private cloud os for Home-Assistant based on HassOS"
readme = "README.md"
authors = [
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
requires-python = ">=3.13.0"
requires-python = ">=3.14.0"
[project.urls]
"Homepage" = "https://www.home-assistant.io/"
@@ -31,7 +31,7 @@ include-package-data = true
include = ["supervisor*"]
[tool.pylint.MAIN]
py-version = "3.13"
py-version = "3.14"
# Use a conservative default here; 2 should speed up most setups and not hurt
# any too bad. Override on command line as appropriate.
jobs = 2
@@ -53,154 +53,154 @@ good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]
# too-few-* - same as too-many-*
# unused-argument - generic callbacks and setup methods create a lot of warnings
disable = [
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY203
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY203
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
]
[tool.pylint.REPORTS]
@@ -226,120 +226,120 @@ log_date_format = "%Y-%m-%d %H:%M:%S"
asyncio_default_fixture_loop_scope = "function"
asyncio_mode = "auto"
filterwarnings = [
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
]
markers = [
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
]
[tool.ruff]
lint.select = [
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"B904", # Use raise from to specify exception cause
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import conventions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY203", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"B904", # Use raise from to specify exception cause
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import conventions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY203", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
]
lint.ignore = [
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
# be ignored anymore without warnings.
# https://github.com/astral-sh/ruff/issues/7491
# "PLC1901", # Lots of false positives
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
# be ignored anymore without warnings.
# https://github.com/astral-sh/ruff/issues/7491
# "PLC1901", # Lots of false positives
# False positives https://github.com/astral-sh/ruff/issues/5386
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
"PLR0911", # Too many return statements ({returns} > {max_returns})
"PLR0912", # Too many branches ({branches} > {max_branches})
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
"PLR0915", # Too many statements ({statements} > {max_statements})
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# False positives https://github.com/astral-sh/ruff/issues/5386
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
"PLR0911", # Too many return statements ({returns} > {max_returns})
"PLR0912", # Too many branches ({branches} > {max_branches})
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
"PLR0915", # Too many statements ({statements} > {max_statements})
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",
"E111",
"E114",
"E117",
"D206",
"D300",
"Q000",
"Q001",
"Q002",
"Q003",
"COM812",
"COM819",
"ISC001",
"ISC002",
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",
"E111",
"E114",
"E117",
"D206",
"D300",
"Q000",
"Q001",
"Q002",
"Q003",
"COM812",
"COM819",
"ISC001",
"ISC002",
# Disabled because ruff does not understand type of __all__ generated by a function
"PLE0605",
# Disabled because ruff does not understand type of __all__ generated by a function
"PLE0605",
]
[tool.ruff.lint.flake8-import-conventions.extend-aliases]
@@ -354,11 +354,11 @@ fixture-parentheses = false
[tool.ruff.lint.isort]
force-sort-within-sections = true
section-order = [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
]
forced-separate = ["tests"]
known-first-party = ["supervisor", "tests"]
@@ -368,7 +368,7 @@ split-on-trailing-comma = false
[tool.ruff.lint.per-file-ignores]
# DBus Service Mocks must use typing and names understood by dbus-fast
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815"]
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815", "UP037"]
[tool.ruff.lint.mccabe]
max-complexity = 25


@@ -1,32 +1,29 @@
aiodns==3.6.1
aiodocker==0.24.0
aiohttp==3.13.3
aiodns==4.0.0
aiodocker==0.26.0
aiohttp==3.13.4
atomicwrites-homeassistant==1.4.1
attrs==25.4.0
attrs==26.1.0
awesomeversion==25.8.0
backports.zstd==1.3.0
blockbuster==1.5.26
brotli==1.2.0
ciso8601==2.3.3
colorlog==6.10.1
cpe==1.3.1
cryptography==46.0.3
debugpy==1.8.19
cryptography==46.0.6
debugpy==1.8.20
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.46
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.5
orjson==3.11.7
pulsectl==24.12.0
pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
securetar==2025.12.0
sentry-sdk==2.48.0
setuptools==80.9.0
securetar==2026.2.0
sentry-sdk==2.56.0
setuptools==82.0.1
voluptuous==0.16.0
dbus-fast==3.1.2
dbus-fast==4.0.0
zlib-fast==0.2.1


@@ -1,16 +1,14 @@
astroid==4.0.3
coverage==7.13.1
coverage==7.13.5
mypy==1.19.1
pre-commit==4.5.1
pylint==4.0.4
pylint==4.0.5
pytest-aiohttp==1.1.0
pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-cov==7.1.0
pytest-timeout==2.4.0
pytest==9.0.2
ruff==0.14.10
ruff==0.15.8
time-machine==3.2.0
types-docker==7.1.0.20260109
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.6.3


@@ -5,7 +5,6 @@ from collections.abc import Awaitable
from contextlib import suppress
from copy import deepcopy
from datetime import datetime
import errno
from functools import partial
from ipaddress import IPv4Address
import logging
@@ -15,12 +14,12 @@ import secrets
import shutil
import tarfile
from tempfile import TemporaryDirectory
from typing import Any, Final
from typing import Any, Final, cast
import aiohttp
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from deepmerge import Merger
from securetar import AddFileError, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -61,6 +60,7 @@ from ..const import (
from ..coresys import CoreSys
from ..docker.addon import DockerAddon
from ..docker.const import ContainerState
from ..docker.manager import ExecReturn
from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
@@ -70,13 +70,16 @@ from ..exceptions import (
AddonNotRunningError,
AddonNotSupportedError,
AddonNotSupportedWriteStdinError,
AddonPortConflict,
AddonPrePostBackupCommandReturnedError,
AddonsError,
AddonsJobError,
AddonUnknownError,
BackupInvalidError,
BackupRestoreUnknownError,
ConfigurationFileError,
DockerBuildError,
DockerContainerPortConflict,
DockerError,
HostAppArmorError,
StoreAddonNotFoundError,
@@ -85,7 +88,7 @@ from ..hardware.data import Device
from ..homeassistant.const import WSEvent
from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, UnhealthyReason
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..resolution.data import Issue
from ..store.addon import AddonStore
from ..utils import check_port
@@ -147,7 +150,7 @@ class Addon(AddonModel):
self._manual_stop: bool = False
self._listeners: list[EventListener] = []
self._startup_event = asyncio.Event()
self._startup_task: asyncio.Task | None = None
self._wait_for_startup_task: asyncio.Task | None = None
self._boot_failed_issue = Issue(
IssueType.BOOT_FAIL, ContextType.ADDON, reference=self.slug
)
@@ -187,18 +190,18 @@ class Addon(AddonModel):
self._startup_event.set()
# Dismiss boot failed issue if present and we started
if (
new_state == AddonState.STARTED
and self.boot_failed_issue in self.sys_resolution.issues
if new_state == AddonState.STARTED and (
issue := self.sys_resolution.get_issue_if_present(self.boot_failed_issue)
):
self.sys_resolution.dismiss_issue(self.boot_failed_issue)
self.sys_resolution.dismiss_issue(issue)
# Dismiss device access missing issue if present and we stopped
if (
new_state == AddonState.STOPPED
and self.device_access_missing_issue in self.sys_resolution.issues
if new_state == AddonState.STOPPED and (
issue := self.sys_resolution.get_issue_if_present(
self.device_access_missing_issue
)
):
self.sys_resolution.dismiss_issue(self.device_access_missing_issue)
self.sys_resolution.dismiss_issue(issue)
self.sys_homeassistant.websocket.supervisor_event_custom(
WSEvent.ADDON,
@@ -235,6 +238,19 @@ class Addon(AddonModel):
await self._check_ingress_port()
if (self.has_deprecated_arch and not self.has_supported_arch) or (
self.has_deprecated_machine and not self.has_supported_machine
):
self.sys_resolution.create_issue(
IssueType.DEPRECATED_ARCH_ADDON,
ContextType.ADDON,
reference=self.slug,
suggestions=[SuggestionType.EXECUTE_REMOVE],
)
with suppress(DockerError):
await self.instance.attach(version=self.version)
return
default_image = self._image(self.data)
try:
await self.instance.attach(version=self.version)
@@ -359,11 +375,10 @@ class Addon(AddonModel):
self.persist[ATTR_BOOT] = value
# Dismiss boot failed issue if present and boot at start disabled
if (
value == AddonBoot.MANUAL
and self._boot_failed_issue in self.sys_resolution.issues
if value == AddonBoot.MANUAL and (
issue := self.sys_resolution.get_issue_if_present(self._boot_failed_issue)
):
self.sys_resolution.dismiss_issue(self._boot_failed_issue)
self.sys_resolution.dismiss_issue(issue)
@property
def auto_update(self) -> bool:
@@ -744,11 +759,11 @@ class Addon(AddonModel):
)
async def unload(self) -> None:
"""Unload add-on and remove data."""
if self._startup_task:
# If we were waiting on startup, cancel that and let the task finish before proceeding
self._startup_task.cancel(f"Removing add-on {self.name} from system")
with suppress(asyncio.CancelledError):
await self._startup_task
# Wait for startup wait task to complete before removing data.
# The container remove/state change resolves _startup_event; this
# ensures _wait_for_startup finishes before we touch addon data.
if self._wait_for_startup_task:
await self._wait_for_startup_task
for listener in self._listeners:
self.sys_bus.remove_listener(listener)
@@ -923,6 +938,10 @@ class Addon(AddonModel):
await self.sys_addons.data.update(store)
await self._check_ingress_port()
# Reload ingress tokens in case addon gained ingress support
if self.with_ingress:
await self.sys_ingress.reload()
# Cleanup
with suppress(DockerError):
await self.instance.cleanup(
@@ -976,6 +995,11 @@ class Addon(AddonModel):
await self.sys_addons.data.update(self.addon_store)
await self._check_ingress_port()
# Reload ingress tokens in case addon gained ingress support
if self.with_ingress:
await self.sys_ingress.reload()
_LOGGER.info("Add-on '%s' successfully rebuilt", self.slug)
finally:
@@ -1002,10 +1026,7 @@ class Addon(AddonModel):
try:
await self.sys_run_in_executor(write_pulse_config)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error(
"Add-on %s can't write pulse/client.config: %s", self.slug, err
)
@@ -1084,8 +1105,7 @@ class Addon(AddonModel):
async def _wait_for_startup(self) -> None:
"""Wait for startup event to be set with timeout."""
try:
self._startup_task = self.sys_create_task(self._startup_event.wait())
await asyncio.wait_for(self._startup_task, STARTUP_TIMEOUT)
await asyncio.wait_for(self._startup_event.wait(), STARTUP_TIMEOUT)
except TimeoutError:
_LOGGER.warning(
"Timeout while waiting for addon %s to start, took more than %s seconds",
@@ -1095,7 +1115,8 @@ class Addon(AddonModel):
except asyncio.CancelledError as err:
_LOGGER.info("Wait for addon startup task cancelled due to: %s", err)
finally:
self._startup_task = None
if self._wait_for_startup_task is asyncio.current_task():
self._wait_for_startup_task = None
@Job(
name="addon_start",
@@ -1111,7 +1132,11 @@ class Addon(AddonModel):
"""
if await self.instance.is_running():
_LOGGER.warning("%s is already running!", self.slug)
return self.sys_create_task(self._wait_for_startup())
if not self._wait_for_startup_task or self._wait_for_startup_task.done():
self._wait_for_startup_task = self.sys_create_task(
self._wait_for_startup()
)
return self._wait_for_startup_task
# Access Token
self.persist[ATTR_ACCESS_TOKEN] = secrets.token_hex(56)
@@ -1140,12 +1165,19 @@ class Addon(AddonModel):
self._startup_event.clear()
try:
await self.instance.run()
except DockerContainerPortConflict as err:
raise AddonPortConflict(
_LOGGER.error,
name=self.slug,
port=cast(dict[str, Any], err.extra_fields)["port"],
) from err
except DockerError as err:
_LOGGER.error("Could not start container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonUnknownError(addon=self.slug) from err
return self.sys_create_task(self._wait_for_startup())
self._wait_for_startup_task = self.sys_create_task(self._wait_for_startup())
return self._wait_for_startup_task
@Job(
name="addon_stop",
@@ -1176,13 +1208,6 @@ class Addon(AddonModel):
await self.stop()
return await self.start()
def logs(self) -> Awaitable[bytes]:
"""Return add-ons log output.
Return a coroutine.
"""
return self.instance.logs()
def is_running(self) -> Awaitable[bool]:
"""Return True if Docker container is running.
@@ -1226,10 +1251,11 @@ class Addon(AddonModel):
async def _backup_command(self, command: str) -> None:
try:
command_return = await self.instance.run_inside(command)
command_return: ExecReturn = await self.instance.run_inside(command)
if command_return.exit_code != 0:
_LOGGER.debug(
"Pre-/Post backup command failed with: %s", command_return.output
"Pre-/Post backup command failed with: %s",
command_return.output.decode("utf-8", errors="replace"),
)
raise AddonPrePostBackupCommandReturnedError(
_LOGGER.error, addon=self.slug, exit_code=command_return.exit_code
@@ -1305,7 +1331,7 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
async def backup(self, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Backup state of an add-on.
Returns a Task that completes when addon has state 'started' (see start)
@@ -1313,65 +1339,59 @@ class Addon(AddonModel):
"""
def _addon_backup(
store_image: bool,
metadata: dict[str, Any],
apparmor_profile: str | None,
addon_config_used: bool,
temp_dir: TemporaryDirectory,
temp_path: Path,
):
"""Start the backup process."""
with TemporaryDirectory(dir=self.sys_config.path_tmp) as temp:
temp_path = Path(temp)
# Store local configs/state
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
_LOGGER.error("Can't save meta for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
# store local image
if store_image:
try:
self.instance.export_image(temp_path.joinpath("image.tar"))
except DockerError as err:
raise BackupRestoreUnknownError() from err
# Store local configs/state
# Store AppArmor Profile
if apparmor_profile:
profile_backup_file = temp_path.joinpath("apparmor.txt")
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
_LOGGER.error("Can't save meta for %s: %s", self.slug, err)
self.sys_host.apparmor.backup_profile(
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
_LOGGER.error(
"Can't backup AppArmor profile for %s: %s", self.slug, err
)
raise BackupRestoreUnknownError() from err
# Store AppArmor Profile
if apparmor_profile:
profile_backup_file = temp_path.joinpath("apparmor.txt")
try:
self.sys_host.apparmor.backup_profile(
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
raise BackupRestoreUnknownError() from err
# Write tarfile
with tar_file as backup:
# Backup metadata
backup.add(temp_dir.name, arcname=".")
# Write tarfile
with tar_file as backup:
# Backup metadata
backup.add(temp, arcname=".")
# Backup data
atomic_contents_add(
backup,
self.path_data,
file_filter=partial(
self._is_excluded_by_filter, self.path_data, "data"
),
arcname="data",
)
# Backup data
# Backup config (if used and existing, restore handles this gracefully)
if addon_config_used and self.path_config.is_dir():
atomic_contents_add(
backup,
self.path_data,
self.path_config,
file_filter=partial(
self._is_excluded_by_filter, self.path_data, "data"
self._is_excluded_by_filter, self.path_config, "config"
),
arcname="data",
arcname="config",
)
# Backup config (if used and existing, restore handles this gracefully)
if addon_config_used and self.path_config.is_dir():
atomic_contents_add(
backup,
self.path_config,
file_filter=partial(
self._is_excluded_by_filter, self.path_config, "config"
),
arcname="config",
)
wait_for_start: asyncio.Task | None = None
data = {
@@ -1385,22 +1405,35 @@ class Addon(AddonModel):
)
was_running = await self.begin_backup()
temp_dir = await self.sys_run_in_executor(
TemporaryDirectory, dir=self.sys_config.path_tmp
)
temp_path = Path(temp_dir.name)
_LOGGER.info("Building backup for add-on %s", self.slug)
try:
_LOGGER.info("Building backup for add-on %s", self.slug)
# store local image
if self.need_build:
await self.instance.export_image(temp_path.joinpath("image.tar"))
await self.sys_run_in_executor(
partial(
_addon_backup,
store_image=self.need_build,
metadata=data,
apparmor_profile=apparmor_profile,
addon_config_used=self.addon_config_used,
temp_dir=temp_dir,
temp_path=temp_path,
)
)
_LOGGER.info("Finish backup for addon %s", self.slug)
except DockerError as err:
_LOGGER.error("Can't export image for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
except (tarfile.TarError, OSError, AddFileError) as err:
_LOGGER.error("Can't write backup tarfile for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
finally:
await self.sys_run_in_executor(temp_dir.cleanup)
if was_running:
wait_for_start = await self.end_backup()
@@ -1411,7 +1444,7 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
async def restore(self, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Restore state of an add-on.
Returns a Task that completes when addon has state 'started' (see start)
@@ -1425,10 +1458,11 @@ class Addon(AddonModel):
tmp = TemporaryDirectory(dir=self.sys_config.path_tmp)
try:
with tar_file as backup:
# The tar filter rejects path traversal and absolute names,
# aborting restore of malicious backups with such exploits.
backup.extractall(
path=tmp.name,
members=secure_path(backup),
filter="fully_trusted",
filter="tar",
)
data = read_json_file(Path(tmp.name, "addon.json"))
@@ -1440,8 +1474,12 @@ class Addon(AddonModel):
try:
tmp, data = await self.sys_run_in_executor(_extract_tarfile)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't extract backup tarfile for {self.slug}: {err}",
_LOGGER.error,
) from err
except tarfile.TarError as err:
_LOGGER.error("Can't extract backup tarfile for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
except ConfigurationFileError as err:
raise AddonUnknownError(addon=self.slug) from err


@@ -6,7 +6,7 @@ import base64
from functools import cached_property
import json
import logging
from pathlib import Path
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any
from awesomeversion import AwesomeVersion
@@ -24,7 +24,7 @@ from ..const import (
CpuArch,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY, DockerMount, MountType
from ..docker.interface import MAP_ARCH
from ..exceptions import (
AddonBuildArchitectureNotSupportedError,
@@ -84,7 +84,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
def base_image(self) -> str:
"""Return base image for this add-on."""
if not self._data[ATTR_BUILD_FROM]:
return f"ghcr.io/home-assistant/{self.sys_arch.default}-base:latest"
return f"ghcr.io/home-assistant/{self.arch!s}-base:latest"
if isinstance(self._data[ATTR_BUILD_FROM], str):
return self._data[ATTR_BUILD_FROM]
@@ -220,7 +220,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
build_args = {
"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default,
"BUILD_ARCH": self.arch,
**self.additional_args,
}
@@ -232,25 +232,39 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
self.addon.path_location
)
volumes = {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
}
mounts = [
DockerMount(
type=MountType.BIND,
source=SOCKET_DOCKER.as_posix(),
target="/var/run/docker.sock",
read_only=False,
),
DockerMount(
type=MountType.BIND,
source=addon_extern_path.as_posix(),
target="/addon",
read_only=True,
),
]
# Mount Docker config with registry credentials if available
if docker_config_path:
docker_config_extern_path = self.sys_config.local_to_extern_path(
docker_config_path
)
volumes[docker_config_extern_path] = {
"bind": "/root/.docker/config.json",
"mode": "ro",
}
mounts.append(
DockerMount(
type=MountType.BIND,
source=docker_config_extern_path.as_posix(),
target="/root/.docker/config.json",
read_only=True,
)
)
return {
"command": build_cmd,
"volumes": volumes,
"working_dir": "/addon",
"mounts": mounts,
"working_dir": PurePath("/addon"),
}
def _fix_label(self, label_name: str) -> str:


@@ -4,10 +4,10 @@ import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import logging
import tarfile
from typing import Self, Union
from attr import evolve
from securetar import SecureTarFile
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
@@ -22,7 +22,7 @@ from ..exceptions import (
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..store.addon import AddonStore
from ..utils.sentry import async_capture_exception
from .addon import Addon
@@ -110,6 +110,17 @@ class AddonManager(CoreSysAttributes):
for addon in self.installed:
if addon.boot != AddonBoot.AUTO or addon.startup != stage:
continue
if (
addon.host_network
and UnhealthyReason.DOCKER_GATEWAY_UNPROTECTED
in self.sys_resolution.unhealthy
):
_LOGGER.warning(
"Skipping boot of add-on %s because gateway firewall"
" rules are not active",
addon.slug,
)
continue
tasks.append(addon)
# Evaluate add-ons which need to be started
@@ -334,9 +345,7 @@ class AddonManager(CoreSysAttributes):
],
on_condition=AddonsJobError,
)
async def restore(
self, slug: str, tar_file: tarfile.TarFile
) -> asyncio.Task | None:
async def restore(self, slug: str, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Restore state of an add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)


@@ -12,7 +12,7 @@ from typing import Any
from awesomeversion import AwesomeVersion, AwesomeVersionException
from ..const import (
ATTR_ADVANCED,
ARCH_DEPRECATED,
ATTR_APPARMOR,
ATTR_ARCH,
ATTR_AUDIO,
@@ -78,6 +78,7 @@ from ..const import (
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
MACHINE_DEPRECATED,
SECURITY_DEFAULT,
SECURITY_DISABLE,
SECURITY_PROFILE,
@@ -94,6 +95,7 @@ from ..exceptions import (
AddonNotSupportedError,
AddonNotSupportedHomeAssistantVersionError,
AddonNotSupportedMachineTypeError,
HassioArchNotFound,
)
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
@@ -252,8 +254,10 @@ class AddonModel(JobGroup, ABC):
@property
def advanced(self) -> bool:
"""Return advanced mode of add-on."""
return self.data[ATTR_ADVANCED]
"""Return False; advanced mode is deprecated and no longer supported."""
# Deprecated since Supervisor 2026.03.0; always returns False and can be
# removed once that version is the minimum supported.
return False
@property
def stage(self) -> AddonStage:
@@ -542,6 +546,35 @@ class AddonModel(JobGroup, ABC):
"""Return list of supported arch."""
return self.data[ATTR_ARCH]
@property
def has_deprecated_arch(self) -> bool:
"""Return True if add-on includes deprecated architectures."""
return any(arch in ARCH_DEPRECATED for arch in self.supported_arch)
@property
def has_supported_arch(self) -> bool:
"""Return True if add-on supports any architecture on this system."""
return self.sys_arch.is_supported(self.supported_arch)
@property
def has_deprecated_machine(self) -> bool:
"""Return True if add-on includes deprecated machine entries."""
return any(
machine.lstrip("!") in MACHINE_DEPRECATED
for machine in self.supported_machine
)
@property
def has_supported_machine(self) -> bool:
"""Return True if add-on supports this machine."""
if not (machine_types := self.supported_machine):
return True
return (
f"!{self.sys_machine}" not in machine_types
and self.sys_machine in machine_types
)
@property
def supported_machine(self) -> list[str]:
"""Return list of supported machine."""
@@ -550,10 +583,7 @@ class AddonModel(JobGroup, ABC):
@property
def arch(self) -> CpuArch:
"""Return architecture to use for the addon's image."""
if ATTR_IMAGE in self.data:
return self.sys_arch.match(self.data[ATTR_ARCH])
return self.sys_arch.default
return self.sys_arch.match(self.data[ATTR_ARCH])
@property
def image(self) -> str | None:
@@ -721,8 +751,12 @@ class AddonModel(JobGroup, ABC):
"""Generate image name from data."""
# Repository with Dockerhub images
if ATTR_IMAGE in config:
arch = self.sys_arch.match(config[ATTR_ARCH])
try:
arch = self.sys_arch.match(config[ATTR_ARCH])
except HassioArchNotFound:
arch = self.sys_arch.default
return config[ATTR_IMAGE].format(arch=arch)
# local build
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default}-addon-{config[ATTR_SLUG]}"
arch = self.sys_arch.match(config[ATTR_ARCH])
return f"{config[ATTR_REPOSITORY]}/{arch!s}-addon-{config[ATTR_SLUG]}"


@@ -37,8 +37,8 @@ RE_SCHEMA_ELEMENT = re.compile(
r"|device(?:\((?P<filter>subsystem=[a-z]+)\))?"
r"|str(?:\((?P<s_min>\d+)?,(?P<s_max>\d+)?\))?"
r"|password(?:\((?P<p_min>\d+)?,(?P<p_max>\d+)?\))?"
r"|int(?:\((?P<i_min>\d+)?,(?P<i_max>\d+)?\))?"
r"|float(?:\((?P<f_min>[\d\.]+)?,(?P<f_max>[\d\.]+)?\))?"
r"|int(?:\((?P<i_min>-?\d+)?,(?P<i_max>-?\d+)?\))?"
r"|float(?:\((?P<f_min>-?\d*\.?\d+)?,(?P<f_max>-?\d*\.?\d+)?\))?"
r"|match\((?P<match>.*)\)"
r"|list\((?P<list>.+)\)"
r")\??$"
@@ -169,6 +169,10 @@ class AddonOptions(CoreSysAttributes):
elif typ.startswith(_LIST):
return vol.In(match.group("list").split("|"))(str(value))
elif typ.startswith(_DEVICE):
if not isinstance(value, str):
raise vol.Invalid(
f"Expected a string for option '{key}' in {self._name} ({self._slug})"
)
try:
device = self.sys_hardware.get_by_path(Path(value))
except HardwareNotFound:

View File

@@ -9,7 +9,8 @@ import uuid
import voluptuous as vol
from ..const import (
ARCH_ALL,
ARCH_ALL_COMPAT,
ARCH_DEPRECATED,
ATTR_ACCESS_TOKEN,
ATTR_ADVANCED,
ATTR_APPARMOR,
@@ -97,6 +98,7 @@ from ..const import (
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
MACHINE_DEPRECATED,
ROLE_ALL,
ROLE_DEFAULT,
AddonBoot,
@@ -156,6 +158,8 @@ SCHEMA_ELEMENT = vol.Schema(
RE_MACHINE = re.compile(
r"^!?(?:"
r"|intel-nuc"
r"|khadas-vim3"
r"|generic-aarch64"
r"|generic-x86-64"
r"|odroid-c2"
r"|odroid-c4"
@@ -188,6 +192,15 @@ def _warn_addon_config(config: dict[str, Any]):
if not name:
raise vol.Invalid("Invalid Add-on config!")
if ATTR_ADVANCED in config:
# Deprecated since Supervisor 2026.03.0; this field is ignored and the
# warning can be removed once that version is the minimum supported.
_LOGGER.warning(
"Add-on '%s' uses deprecated 'advanced' field in config. "
"This field is ignored by the Supervisor. Please report this to the maintainer.",
name,
)
if config.get(ATTR_FULL_ACCESS, False) and (
config.get(ATTR_DEVICES)
or config.get(ATTR_UART)
@@ -207,6 +220,26 @@ def _warn_addon_config(config: dict[str, Any]):
name,
)
if deprecated_arches := [
arch for arch in config.get(ATTR_ARCH, []) if arch in ARCH_DEPRECATED
]:
_LOGGER.warning(
"Add-on config 'arch' uses deprecated values %s. Please report this to the maintainer of %s",
deprecated_arches,
name,
)
if deprecated_machines := [
machine
for machine in config.get(ATTR_MACHINE, [])
if machine.lstrip("!") in MACHINE_DEPRECATED
]:
_LOGGER.warning(
"Add-on config 'machine' uses deprecated values %s. Please report this to the maintainer of %s",
deprecated_machines,
name,
)
if ATTR_CODENOTARY in config:
_LOGGER.warning(
"Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
@@ -220,6 +253,8 @@ def _migrate_addon_config(protocol=False):
"""Migrate addon config."""
def _migrate(config: dict[str, Any]):
if not isinstance(config, dict):
raise vol.Invalid("Add-on config must be a dictionary!")
name = config.get(ATTR_NAME)
if not name:
raise vol.Invalid("Invalid Add-on config!")
@@ -349,7 +384,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Required(ATTR_VERSION): version_tag,
vol.Required(ATTR_SLUG): vol.Match(RE_SLUG_FIELD),
vol.Required(ATTR_DESCRIPTON): str,
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL)],
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL_COMPAT)],
vol.Optional(ATTR_MACHINE): vol.All([vol.Match(RE_MACHINE)], vol.Unique()),
vol.Optional(ATTR_URL): vol.Url(),
vol.Optional(ATTR_STARTUP, default=AddonStartup.APPLICATION): vol.Coerce(
@@ -462,7 +497,7 @@ SCHEMA_BUILD_CONFIG = vol.Schema(
{
vol.Optional(ATTR_BUILD_FROM, default=dict): vol.Any(
vol.Match(RE_DOCKER_IMAGE_BUILD),
vol.Schema({vol.In(ARCH_ALL): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
vol.Schema({vol.In(ARCH_ALL_COMPAT): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
),
vol.Optional(ATTR_SQUASH, default=False): vol.Boolean(),
vol.Optional(ATTR_ARGS, default=dict): vol.Schema({str: str}),


@@ -129,14 +129,23 @@ class RestAPI(CoreSysAttributes):
await self.start()
def _register_advanced_logs(self, path: str, syslog_identifier: str):
def _register_advanced_logs(
self,
path: str,
syslog_identifier: str,
default_verbose: bool = False,
):
"""Register logs endpoint for a given path, returning logs for single syslog identifier."""
self.webapp.add_routes(
[
web.get(
f"{path}/logs",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/follow",
@@ -144,6 +153,7 @@ class RestAPI(CoreSysAttributes):
self._api_host.advanced_logs,
identifier=syslog_identifier,
follow=True,
default_verbose=default_verbose,
),
),
web.get(
@@ -153,11 +163,16 @@ class RestAPI(CoreSysAttributes):
identifier=syslog_identifier,
latest=True,
no_colors=True,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}/follow",
@@ -165,6 +180,7 @@ class RestAPI(CoreSysAttributes):
self._api_host.advanced_logs,
identifier=syslog_identifier,
follow=True,
default_verbose=default_verbose,
),
),
]
@@ -177,10 +193,13 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/host/info", api_host.info),
web.get("/host/logs", api_host.advanced_logs),
web.get(
"/host/logs",
partial(api_host.advanced_logs, default_verbose=True),
),
web.get(
"/host/logs/follow",
partial(api_host.advanced_logs, follow=True),
partial(api_host.advanced_logs, follow=True, default_verbose=True),
),
web.get("/host/logs/identifiers", api_host.list_identifiers),
web.get("/host/logs/identifiers/{identifier}", api_host.advanced_logs),
@@ -189,10 +208,13 @@ class RestAPI(CoreSysAttributes):
partial(api_host.advanced_logs, follow=True),
),
web.get("/host/logs/boots", api_host.list_boots),
web.get("/host/logs/boots/{bootid}", api_host.advanced_logs),
web.get(
"/host/logs/boots/{bootid}",
partial(api_host.advanced_logs, default_verbose=True),
),
web.get(
"/host/logs/boots/{bootid}/follow",
partial(api_host.advanced_logs, follow=True),
partial(api_host.advanced_logs, follow=True, default_verbose=True),
),
web.get(
"/host/logs/boots/{bootid}/identifiers/{identifier}",
@@ -335,7 +357,9 @@ class RestAPI(CoreSysAttributes):
web.post("/multicast/restart", api_multicast.restart),
]
)
self._register_advanced_logs("/multicast", "hassio_multicast")
self._register_advanced_logs(
"/multicast", "hassio_multicast", default_verbose=True
)
def _register_hardware(self) -> None:
"""Register hardware functions."""
@@ -522,6 +546,7 @@ class RestAPI(CoreSysAttributes):
web.get("/core/api/stream", api_proxy.stream),
web.post("/core/api/{path:.+}", api_proxy.api),
web.get("/core/api/{path:.+}", api_proxy.api),
web.delete("/core/api/{path:.+}", api_proxy.api),
web.get("/core/api/", api_proxy.api),
]
)
@@ -694,7 +719,7 @@ class RestAPI(CoreSysAttributes):
]
)
self._register_advanced_logs("/dns", "hassio_dns")
self._register_advanced_logs("/dns", "hassio_dns", default_verbose=True)
def _register_audio(self) -> None:
"""Register Audio functions."""
@@ -717,7 +742,7 @@ class RestAPI(CoreSysAttributes):
]
)
self._register_advanced_logs("/audio", "hassio_audio")
self._register_advanced_logs("/audio", "hassio_audio", default_verbose=True)
def _register_mounts(self) -> None:
"""Register mounts endpoints."""


@@ -187,7 +187,7 @@ class APIAddons(CoreSysAttributes):
ATTR_NAME: addon.name,
ATTR_SLUG: addon.slug,
ATTR_DESCRIPTON: addon.description,
ATTR_ADVANCED: addon.advanced,
ATTR_ADVANCED: addon.advanced, # Deprecated 2026.03
ATTR_STAGE: addon.stage,
ATTR_VERSION: addon.version,
ATTR_VERSION_LATEST: addon.latest_version,
@@ -224,7 +224,7 @@ class APIAddons(CoreSysAttributes):
ATTR_DNS: addon.dns,
ATTR_DESCRIPTON: addon.description,
ATTR_LONG_DESCRIPTION: await addon.long_description(),
ATTR_ADVANCED: addon.advanced,
ATTR_ADVANCED: addon.advanced, # Deprecated 2026.03
ATTR_STAGE: addon.stage,
ATTR_REPOSITORY: addon.repository,
ATTR_VERSION_LATEST: addon.latest_version,


@@ -49,7 +49,10 @@ class APIAuth(CoreSysAttributes):
Return a coroutine.
"""
auth = BasicAuth.decode(request.headers[AUTHORIZATION])
try:
auth = BasicAuth.decode(request.headers[AUTHORIZATION])
except ValueError as err:
raise HTTPUnauthorized(headers=REALM_HEADER) from err
return self.sys_auth.check_login(addon, auth.login, auth.password)
def _process_dict(
@@ -127,14 +130,14 @@ class APIAuth(CoreSysAttributes):
return {
ATTR_USERS: [
{
ATTR_USERNAME: user[ATTR_USERNAME],
ATTR_NAME: user[ATTR_NAME],
ATTR_IS_OWNER: user[ATTR_IS_OWNER],
ATTR_IS_ACTIVE: user[ATTR_IS_ACTIVE],
ATTR_LOCAL_ONLY: user[ATTR_LOCAL_ONLY],
ATTR_GROUP_IDS: user[ATTR_GROUP_IDS],
ATTR_USERNAME: user.username,
ATTR_NAME: user.name,
ATTR_IS_OWNER: user.is_owner,
ATTR_IS_ACTIVE: user.is_active,
ATTR_LOCAL_ONLY: user.local_only,
ATTR_GROUP_IDS: user.group_ids,
}
for user in await self.sys_auth.list_users()
if user[ATTR_USERNAME]
if user.username
]
}


@@ -3,8 +3,7 @@
from __future__ import annotations
import asyncio
import errno
from io import IOBase
from io import BufferedWriter
import logging
from pathlib import Path
import re
@@ -44,12 +43,12 @@ from ..const import (
ATTR_TIMEOUT,
ATTR_TYPE,
ATTR_VERSION,
DEFAULT_CHUNK_SIZE,
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..mounts.const import MountUsage
from ..resolution.const import UnhealthyReason
from .const import (
ATTR_ADDITIONAL_LOCATIONS,
ATTR_BACKGROUND,
@@ -310,7 +309,7 @@ class APIBackups(CoreSysAttributes):
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
backup: Backup | None = await backup_task
if backup:
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
@@ -346,7 +345,7 @@ class APIBackups(CoreSysAttributes):
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
backup: Backup | None = await backup_task
if backup:
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
@@ -480,14 +479,14 @@ class APIBackups(CoreSysAttributes):
tmp_path = await self.sys_backups.get_upload_path_for_location(location)
temp_dir: TemporaryDirectory | None = None
backup_file_stream: IOBase | None = None
backup_file_stream: BufferedWriter | None = None
def open_backup_file() -> Path:
def open_backup_file() -> tuple[Path, BufferedWriter]:
nonlocal temp_dir, backup_file_stream
temp_dir = TemporaryDirectory(dir=tmp_path.as_posix())
tar_file = Path(temp_dir.name, "upload.tar")
backup_file_stream = tar_file.open("wb")
return tar_file
return (tar_file, backup_file_stream)
def close_backup_file() -> None:
if backup_file_stream:
@@ -503,12 +502,10 @@ class APIBackups(CoreSysAttributes):
if not isinstance(contents, BodyPartReader):
raise APIError("Improperly formatted upload, could not read backup")
tar_file = await self.sys_run_in_executor(open_backup_file)
while chunk := await contents.read_chunk(size=2**16):
await self.sys_run_in_executor(
cast(IOBase, backup_file_stream).write, chunk
)
await self.sys_run_in_executor(cast(IOBase, backup_file_stream).close)
tar_file, backup_writer = await self.sys_run_in_executor(open_backup_file)
while chunk := await contents.read_chunk(size=DEFAULT_CHUNK_SIZE):
await self.sys_run_in_executor(backup_writer.write, chunk)
await self.sys_run_in_executor(backup_writer.close)
backup = await asyncio.shield(
self.sys_backups.import_backup(
@@ -519,13 +516,8 @@ class APIBackups(CoreSysAttributes):
)
)
except OSError as err:
if err.errno == errno.EBADMSG and location in {
LOCATION_CLOUD_BACKUP,
None,
}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
if location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't write new backup file: %s", err)
return False

View File

@@ -208,9 +208,10 @@ class APIHost(CoreSysAttributes):
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
default_verbose: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs."""
log_formatter = LogFormatter.PLAIN
log_formatter = LogFormatter.VERBOSE if default_verbose else LogFormatter.PLAIN
params: dict[str, Any] = {}
if identifier:
params[PARAM_SYSLOG_IDENTIFIER] = identifier
@@ -218,8 +219,6 @@ class APIHost(CoreSysAttributes):
params[PARAM_SYSLOG_IDENTIFIER] = request.match_info[IDENTIFIER]
else:
params[PARAM_SYSLOG_IDENTIFIER] = self.sys_host.logs.default_identifiers
# host logs should always be verbose, no matter what Accept header is used
log_formatter = LogFormatter.VERBOSE
if BOOTID in request.match_info:
params[PARAM_BOOT_ID] = await self._get_boot_id(request.match_info[BOOTID])
@@ -240,7 +239,9 @@ class APIHost(CoreSysAttributes):
f"Cannot determine CONTAINER_LOG_EPOCH of {identifier}, latest logs not available."
) from err
if ACCEPT in request.headers and request.headers[ACCEPT] not in [
accept_header = request.headers.get(ACCEPT)
if accept_header and accept_header not in [
CONTENT_TYPE_TEXT,
CONTENT_TYPE_X_LOG,
"*/*",
@@ -250,7 +251,7 @@ class APIHost(CoreSysAttributes):
"supported for now."
)
if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
if "verbose" in request.query or accept_header == CONTENT_TYPE_X_LOG:
log_formatter = LogFormatter.VERBOSE
if "no_colors" in request.query:
@@ -326,10 +327,11 @@ class APIHost(CoreSysAttributes):
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
default_verbose: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs. Wrapped as standard API handler."""
return await self.advanced_logs_handler(
request, identifier, follow, latest, no_colors
request, identifier, follow, latest, no_colors, default_verbose
)
@api_process

View File

@@ -29,8 +29,8 @@ from ..const import (
HEADER_REMOTE_USER_NAME,
HEADER_TOKEN,
HEADER_TOKEN_OLD,
HomeAssistantUser,
IngressSessionData,
IngressSessionDataUser,
)
from ..coresys import CoreSysAttributes
from ..exceptions import HomeAssistantAPIError
@@ -39,6 +39,8 @@ from .utils import api_process, api_validate, require_home_assistant
_LOGGER: logging.Logger = logging.getLogger(__name__)
MAX_WEBSOCKET_MESSAGE_SIZE = 16 * 1024 * 1024 # 16 MiB
VALIDATE_SESSION_DATA = vol.Schema({ATTR_SESSION: str})
"""Expected optional payload of create session request"""
@@ -75,12 +77,6 @@ def status_code_must_be_empty_body(code: int) -> bool:
class APIIngress(CoreSysAttributes):
"""Ingress view to handle add-on webui routing."""
_list_of_users: list[IngressSessionDataUser]
def __init__(self) -> None:
"""Initialize APIIngress."""
self._list_of_users = []
def _extract_addon(self, request: web.Request) -> Addon:
"""Return addon, throw an exception it it doesn't exist."""
token = request.match_info["token"]
@@ -186,7 +182,10 @@ class APIIngress(CoreSysAttributes):
req_protocols = []
ws_server = web.WebSocketResponse(
protocols=req_protocols, autoclose=False, autoping=False
protocols=req_protocols,
autoclose=False,
autoping=False,
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
)
await ws_server.prepare(request)
@@ -207,6 +206,7 @@ class APIIngress(CoreSysAttributes):
protocols=req_protocols,
autoclose=False,
autoping=False,
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
) as ws_client:
# Proxy requests
await asyncio.wait(
@@ -306,20 +306,15 @@ class APIIngress(CoreSysAttributes):
return response
async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
async def _find_user_by_id(self, user_id: str) -> HomeAssistantUser | None:
"""Find user object by the user's ID."""
try:
list_of_users = await self.sys_homeassistant.get_users()
except (HomeAssistantAPIError, TypeError) as err:
_LOGGER.error(
"%s error occurred while requesting list of users: %s", type(err), err
)
users = await self.sys_homeassistant.list_users()
except HomeAssistantAPIError as err:
_LOGGER.warning("Could not fetch list of users: %s", err)
return None
if list_of_users is not None:
self._list_of_users = list_of_users
return next((user for user in self._list_of_users if user.id == user_id), None)
return next((user for user in users if user.id == user_id), None)
def _init_header(
@@ -332,8 +327,8 @@ def _init_header(
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
if session_data.user.username is not None:
headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
if session_data.user.display_name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
if session_data.user.name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.name
# filter flags
for name, value in request.headers.items():

View File

@@ -36,6 +36,7 @@ from ..const import (
ATTR_PRIMARY,
ATTR_PSK,
ATTR_READY,
ATTR_ROUTE_METRIC,
ATTR_SIGNAL,
ATTR_SSID,
ATTR_SUPERVISOR_INTERNET,
@@ -68,6 +69,7 @@ _SCHEMA_IPV4_CONFIG = vol.Schema(
vol.Optional(ATTR_ADDRESS): [vol.Coerce(IPv4Interface)],
vol.Optional(ATTR_METHOD): vol.Coerce(InterfaceMethod),
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv4Address),
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv4Address)],
}
)
@@ -79,6 +81,7 @@ _SCHEMA_IPV6_CONFIG = vol.Schema(
vol.Optional(ATTR_ADDR_GEN_MODE): vol.Coerce(InterfaceAddrGenMode),
vol.Optional(ATTR_IP6_PRIVACY): vol.Coerce(InterfaceIp6Privacy),
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv6Address),
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv6Address)],
}
)
@@ -113,6 +116,7 @@ def ip4config_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}
@@ -126,6 +130,7 @@ def ip6config_struct(config: IpConfig, setting: Ip6Setting) -> dict[str, Any]:
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}
@@ -201,7 +206,7 @@ class APINetwork(CoreSysAttributes):
raise APINotFound(f"Interface {name} does not exist") from None
@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
async def info(self, _: web.Request) -> dict[str, Any]:
"""Return network information."""
return {
ATTR_INTERFACES: [
@@ -242,6 +247,7 @@ class APINetwork(CoreSysAttributes):
method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_IPV6:
@@ -255,6 +261,7 @@ class APINetwork(CoreSysAttributes):
),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_WIFI:
@@ -275,7 +282,7 @@ class APINetwork(CoreSysAttributes):
await asyncio.shield(self.sys_host.network.apply_changes(interface))
@api_process
def reload(self, request: web.Request) -> Awaitable[None]:
def reload(self, _: web.Request) -> Awaitable[None]:
"""Reload network data."""
return asyncio.shield(
self.sys_host.network.update(force_connectivity_check=True)
@@ -325,7 +332,8 @@ class APINetwork(CoreSysAttributes):
ipv4_setting = IpSetting(
method=body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
address=body[ATTR_IPV4].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV4].get(ATTR_GATEWAY, None),
gateway=body[ATTR_IPV4].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV4].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
)
@@ -340,7 +348,8 @@ class APINetwork(CoreSysAttributes):
ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
),
address=body[ATTR_IPV6].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV6].get(ATTR_GATEWAY, None),
gateway=body[ATTR_IPV6].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV6].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
)

View File

@@ -21,7 +21,13 @@ from ..utils.logging import AddonLoggerAdapter
_LOGGER: logging.Logger = logging.getLogger(__name__)
FORWARD_HEADERS = ("X-Speech-Content",)
FORWARD_HEADERS = (
"X-Speech-Content",
"Accept",
"Last-Event-ID",
"Mcp-Session-Id",
"MCP-Protocol-Version",
)
HEADER_HA_ACCESS = "X-Ha-Access"
# Maximum message size for websocket messages from Home Assistant.
@@ -35,6 +41,38 @@ MAX_MESSAGE_SIZE_FROM_CORE = 64 * 1024 * 1024
class APIProxy(CoreSysAttributes):
"""API Proxy for Home Assistant."""
async def _stream_client_response(
self,
request: web.Request,
client: aiohttp.ClientResponse,
*,
content_type: str,
headers_to_copy: tuple[str, ...] = (),
) -> web.StreamResponse:
"""Stream an upstream aiohttp response to the caller.
Used for event streams (e.g. Home Assistant /api/stream) and for SSE endpoints
such as MCP (text/event-stream).
"""
response = web.StreamResponse(status=client.status)
response.content_type = content_type
for header in headers_to_copy:
if header in client.headers:
response.headers[header] = client.headers[header]
response.headers["X-Accel-Buffering"] = "no"
try:
await response.prepare(request)
async for data in client.content:
await response.write(data)
except (aiohttp.ClientError, aiohttp.ClientPayloadError):
# Client disconnected or upstream closed
pass
return response
def _check_access(self, request: web.Request):
"""Check the Supervisor token."""
if AUTHORIZATION in request.headers:
@@ -95,16 +133,11 @@ class APIProxy(CoreSysAttributes):
_LOGGER.info("Home Assistant EventStream start")
async with self._api_client(request, "stream", timeout=None) as client:
response = web.StreamResponse()
response.content_type = request.headers.get(CONTENT_TYPE, "")
try:
response.headers["X-Accel-Buffering"] = "no"
await response.prepare(request)
async for data in client.content:
await response.write(data)
except (aiohttp.ClientError, aiohttp.ClientPayloadError):
pass
response = await self._stream_client_response(
request,
client,
content_type=request.headers.get(CONTENT_TYPE, ""),
)
_LOGGER.info("Home Assistant EventStream close")
return response
@@ -118,10 +151,31 @@ class APIProxy(CoreSysAttributes):
# Normal request
path = request.match_info.get("path", "")
async with self._api_client(request, path) as client:
# Check if this is a streaming response (e.g., MCP SSE endpoints)
if client.content_type == "text/event-stream":
return await self._stream_client_response(
request,
client,
content_type=client.content_type,
headers_to_copy=(
"Cache-Control",
"Mcp-Session-Id",
),
)
# Non-streaming response
data = await client.read()
return web.Response(
response = web.Response(
body=data, status=client.status, content_type=client.content_type
)
# Copy selected headers from the upstream response
for header in (
"Cache-Control",
"Mcp-Session-Id",
):
if header in client.headers:
response.headers[header] = client.headers[header]
return response
async def _websocket_client(self) -> ClientWebSocketResponse:
"""Initialize a WebSocket API connection."""

View File

@@ -19,7 +19,6 @@ from ..const import (
ATTR_UNSUPPORTED,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APINotFound, ResolutionNotFound
from ..resolution.checks.base import CheckBase
from ..resolution.data import Issue, Suggestion
from .utils import api_process, api_validate
@@ -32,24 +31,17 @@ class APIResoulution(CoreSysAttributes):
def _extract_issue(self, request: web.Request) -> Issue:
"""Extract issue from request or raise."""
try:
return self.sys_resolution.get_issue(request.match_info["issue"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid issue") from None
return self.sys_resolution.get_issue_by_id(request.match_info["issue"])
def _extract_suggestion(self, request: web.Request) -> Suggestion:
"""Extract suggestion from request or raise."""
try:
return self.sys_resolution.get_suggestion(request.match_info["suggestion"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid suggestion") from None
return self.sys_resolution.get_suggestion_by_id(
request.match_info["suggestion"]
)
def _extract_check(self, request: web.Request) -> CheckBase:
"""Extract check from request or raise."""
try:
return self.sys_resolution.check.get(request.match_info["check"])
except ResolutionNotFound:
raise APINotFound("The supplied check slug is not available") from None
return self.sys_resolution.check.get(request.match_info["check"])
def _generate_suggestion_information(self, suggestion: Suggestion):
"""Generate suggestion information for response."""

View File

@@ -248,6 +248,7 @@ class APISupervisor(CoreSysAttributes):
return asyncio.shield(self.sys_supervisor.restart())
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
def logs(self, request: web.Request) -> Awaitable[bytes]:
async def logs(self, request: web.Request) -> bytes:
"""Return supervisor Docker logs."""
return self.sys_supervisor.logs()
logs = await self.sys_supervisor.logs()
return "\n".join(logs).encode(errors="replace")

View File

@@ -14,11 +14,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
ARCH_JSON: Path = Path(__file__).parent.joinpath("data/arch.json")
MAP_CPU: dict[str, CpuArch] = {
"armv7": CpuArch.ARMV7,
"armv6": CpuArch.ARMHF,
"armv8": CpuArch.AARCH64,
"aarch64": CpuArch.AARCH64,
"i686": CpuArch.I386,
"x86_64": CpuArch.AMD64,
}
@@ -64,11 +61,12 @@ class CpuArchManager(CoreSysAttributes):
if not self.sys_machine or self.sys_machine not in arch_data:
_LOGGER.warning("Can't detect the machine type!")
self._default_arch = native_support
self._supported_arch.append(self.default)
self._supported_arch = [self.default]
self._supported_set = {self.default}
return
# Use configs from arch.json
self._supported_arch.extend(CpuArch(a) for a in arch_data[self.sys_machine])
self._supported_arch = [CpuArch(a) for a in arch_data[self.sys_machine]]
self._default_arch = self.supported[0]
# Make sure native support is in supported list
@@ -85,7 +83,7 @@ class CpuArchManager(CoreSysAttributes):
"""Return best match for this CPU/Platform."""
for self_arch in self.supported:
if self_arch in arch_list:
return self_arch
return CpuArch(self_arch)
raise HassioArchNotFound()
def detect_cpu(self) -> CpuArch:
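The `match` hunk above returns the first entry of the platform's ordered supported list that also appears in the requested architectures. A minimal sketch of that preference-ordered lookup (`LookupError` stands in for `HassioArchNotFound`):

```python
def match_arch(supported: list[str], wanted: list[str]) -> str:
    """Return the first supported arch also present in wanted.

    supported is in platform preference order, so the first hit wins.
    """
    for arch in supported:
        if arch in wanted:
            return arch
    # The real code raises HassioArchNotFound here.
    raise LookupError("no supported arch found")
```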

View File

@@ -6,13 +6,12 @@ import logging
from typing import Any, TypedDict, cast
from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .const import ATTR_PASSWORD, ATTR_USERNAME, FILE_HASSIO_AUTH, HomeAssistantUser
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthListUsersNoneResponseError,
AuthPasswordResetError,
HomeAssistantAPIError,
HomeAssistantWSError,
@@ -157,22 +156,14 @@ class Auth(FileConfiguration, CoreSysAttributes):
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
async def list_users(self) -> list[HomeAssistantUser]:
"""List users on the Home Assistant instance."""
try:
users: (
list[dict[str, Any]] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
return await self.sys_homeassistant.list_users()
except HomeAssistantWSError as err:
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
if users is not None:
return users
raise AuthListUsersNoneResponseError(_LOGGER.error)
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:
"""Rehash a value."""

View File

@@ -12,13 +12,19 @@ import json
import logging
from pathlib import Path, PurePath
import tarfile
from tarfile import TarFile
from tempfile import TemporaryDirectory
import time
from typing import Any, Self, cast
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
from securetar import (
AddFileError,
InvalidPasswordError,
SecureTarArchive,
SecureTarFile,
SecureTarReadError,
atomic_contents_add,
)
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -35,6 +41,7 @@ from ..const import (
ATTR_HOMEASSISTANT,
ATTR_NAME,
ATTR_PROTECTED,
ATTR_REGISTRIES,
ATTR_REPOSITORIES,
ATTR_SIZE,
ATTR_SLUG,
@@ -51,15 +58,28 @@ from ..exceptions import (
BackupFileNotFoundError,
BackupInvalidError,
BackupPermissionError,
MountError,
)
from ..homeassistant.const import LANDINGPAGE
from ..jobs.const import JOB_GROUP_BACKUP
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..utils import remove_folder
from ..mounts.const import ATTR_DEFAULT_BACKUP_MOUNT, ATTR_MOUNTS
from ..mounts.mount import Mount
from ..mounts.validate import SCHEMA_MOUNTS_CONFIG
from ..utils import remove_folder, version_is_new_enough
from ..utils.dt import parse_datetime, utcnow
from ..utils.json import json_bytes
from ..utils.sentinel import DEFAULT
from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from ..validate import SCHEMA_DOCKER_CONFIG
from .const import (
BUF_SIZE,
CORE_SECURETAR_V3_MIN_VERSION,
LOCATION_CLOUD_BACKUP,
SECURETAR_CREATE_VERSION,
SECURETAR_V3_CREATE_VERSION,
BackupType,
)
from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
@@ -99,7 +119,7 @@ class Backup(JobGroup):
)
self._data: dict[str, Any] = data or {ATTR_SLUG: slug}
self._tmp: TemporaryDirectory | None = None
self._outer_secure_tarfile: SecureTarFile | None = None
self._outer_secure_tarfile: SecureTarArchive | None = None
self._password: str | None = None
self._locations: dict[str | None, BackupLocation] = {
location: BackupLocation(
@@ -170,21 +190,21 @@ class Backup(JobGroup):
self._data[ATTR_REPOSITORIES] = value
@property
def homeassistant_version(self) -> AwesomeVersion:
def homeassistant_version(self) -> AwesomeVersion | None:
"""Return backup Home Assistant version."""
if self.homeassistant is None:
return None
return self.homeassistant[ATTR_VERSION]
@property
def homeassistant_exclude_database(self) -> bool:
def homeassistant_exclude_database(self) -> bool | None:
"""Return whether database was excluded from Home Assistant backup."""
if self.homeassistant is None:
return None
return self.homeassistant[ATTR_EXCLUDE_DATABASE]
@property
def homeassistant(self) -> dict[str, Any]:
def homeassistant(self) -> dict[str, Any] | None:
"""Return backup Home Assistant data."""
return self._data[ATTR_HOMEASSISTANT]
@@ -198,16 +218,6 @@ class Backup(JobGroup):
"""Get extra metadata added by client."""
return self._data[ATTR_EXTRA]
@property
def docker(self) -> dict[str, Any]:
"""Return backup Docker config data."""
return self._data.get(ATTR_DOCKER, {})
@docker.setter
def docker(self, value: dict[str, Any]) -> None:
"""Set the Docker config data."""
self._data[ATTR_DOCKER] = value
@property
def location(self) -> str | None:
"""Return the location of the backup."""
@@ -324,7 +334,8 @@ class Backup(JobGroup):
# Add defaults
self._data = SCHEMA_BACKUP(self._data)
# Set password
# Set password - intentionally using truthiness check so that empty
# string is treated as no password, consistent with set_password().
if password:
self._password = password
self._data[ATTR_PROTECTED] = True
@@ -335,8 +346,13 @@ class Backup(JobGroup):
self._data[ATTR_COMPRESSED] = False
def set_password(self, password: str | None) -> None:
"""Set the password for an existing backup."""
self._password = password
"""Set the password for an existing backup.
Treat empty string as None to stay consistent with backup creation
and Supervisor behavior before #6402, independent of SecureTar
behavior in this regard.
"""
self._password = password or None
async def validate_backup(self, location: str | None) -> None:
"""Validate backup.
@@ -364,15 +380,17 @@ class Backup(JobGroup):
test_tar_file = backup.extractfile(test_tar_name)
try:
with SecureTarFile(
ending, # Not used
gzip=self.compressed,
mode="r",
fileobj=test_tar_file,
password=self._password,
):
# If we can read the tar file, the password is correct
return
except tarfile.ReadError as ex:
except (
tarfile.ReadError,
SecureTarReadError,
InvalidPasswordError,
) as ex:
raise BackupInvalidError(
f"Invalid password for backup {self.slug}", _LOGGER.error
) from ex
@@ -440,8 +458,17 @@ class Backup(JobGroup):
@asynccontextmanager
async def create(self) -> AsyncGenerator[None]:
"""Create new backup file."""
core_version = self.sys_homeassistant.version
if (
core_version is not None
and core_version != LANDINGPAGE
and version_is_new_enough(core_version, CORE_SECURETAR_V3_MIN_VERSION)
):
securetar_version = SECURETAR_V3_CREATE_VERSION
else:
securetar_version = SECURETAR_CREATE_VERSION
def _open_outer_tarfile() -> tuple[SecureTarFile, tarfile.TarFile]:
def _open_outer_tarfile() -> SecureTarArchive:
"""Create and open outer tarfile."""
if self.tarfile.is_file():
raise BackupFileExistError(
@@ -449,14 +476,15 @@ class Backup(JobGroup):
_LOGGER.error,
)
_outer_secure_tarfile = SecureTarFile(
_outer_secure_tarfile = SecureTarArchive(
self.tarfile,
"w",
gzip=False,
bufsize=BUF_SIZE,
create_version=securetar_version,
password=self._password,
)
try:
_outer_tarfile = _outer_secure_tarfile.open()
_outer_secure_tarfile.open()
except PermissionError as ex:
raise BackupPermissionError(
f"Cannot open backup file {self.tarfile.as_posix()}, permission error!",
@@ -468,11 +496,9 @@ class Backup(JobGroup):
_LOGGER.error,
) from ex
return _outer_secure_tarfile, _outer_tarfile
return _outer_secure_tarfile
outer_secure_tarfile, outer_tarfile = await self.sys_run_in_executor(
_open_outer_tarfile
)
outer_secure_tarfile = await self.sys_run_in_executor(_open_outer_tarfile)
self._outer_secure_tarfile = outer_secure_tarfile
def _close_outer_tarfile() -> int:
@@ -483,7 +509,7 @@ class Backup(JobGroup):
try:
yield
finally:
await self._create_cleanup(outer_tarfile)
await self._create_finalize(outer_secure_tarfile)
size_bytes = await self.sys_run_in_executor(_close_outer_tarfile)
self._locations[self.location].size_bytes = size_bytes
self._outer_secure_tarfile = None
@@ -512,12 +538,24 @@ class Backup(JobGroup):
)
tmp = TemporaryDirectory(dir=str(backup_tarfile.parent))
with tarfile.open(backup_tarfile, "r:") as tar:
tar.extractall(
path=tmp.name,
members=secure_path(tar),
filter="fully_trusted",
)
try:
with tarfile.open(backup_tarfile, "r:") as tar:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar.extractall(
path=tmp.name,
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
except tarfile.TarError as err:
raise BackupError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
return tmp
@@ -531,11 +569,11 @@ class Backup(JobGroup):
if self._tmp:
await self.sys_run_in_executor(self._tmp.cleanup)
async def _create_cleanup(self, outer_tarfile: TarFile) -> None:
"""Cleanup after backup creation.
async def _create_finalize(self, outer_archive: SecureTarArchive) -> None:
"""Finalize backup creation.
Separate method to be called from create to ensure
that cleanup is always performed, even if an exception is raised.
Separate method to be called from create to ensure that the backup is
finalized.
"""
# validate data
try:
@@ -554,7 +592,7 @@ class Backup(JobGroup):
tar_info = tarfile.TarInfo(name="./backup.json")
tar_info.size = len(raw_bytes)
tar_info.mtime = int(time.time())
outer_tarfile.addfile(tar_info, fileobj=fileobj)
outer_archive.tar.addfile(tar_info, fileobj=fileobj)
try:
await self.sys_run_in_executor(_add_backup_json)
@@ -581,10 +619,9 @@ class Backup(JobGroup):
tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar(
addon_file = self._outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
)
# Take backup
try:
@@ -634,7 +671,6 @@ class Backup(JobGroup):
tar_name = f"{addon_slug}.tar{'.gz' if self.compressed else ''}"
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
@@ -730,10 +766,9 @@ class Backup(JobGroup):
return False
with outer_secure_tarfile.create_inner_tar(
with outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
) as tar_file:
atomic_contents_add(
tar_file,
@@ -793,15 +828,21 @@ class Backup(JobGroup):
_LOGGER.info("Restore folder %s", name)
with SecureTarFile(
tar_name,
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
) as tar_file:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
path=origin_dir,
filter="tar",
)
_LOGGER.info("Restore folder %s done", name)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
) from err
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
@@ -854,16 +895,15 @@ class Backup(JobGroup):
tar_name = f"homeassistant.tar{'.gz' if self.compressed else ''}"
# Backup Home Assistant Core config directory
homeassistant_file = self._outer_secure_tarfile.create_inner_tar(
homeassistant_file = self._outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
)
await self.sys_homeassistant.backup(homeassistant_file, exclude_database)
# Store size
self.homeassistant[ATTR_SIZE] = await self.sys_run_in_executor(
self._data[ATTR_HOMEASSISTANT][ATTR_SIZE] = await self.sys_run_in_executor(
getattr, homeassistant_file, "size"
)
@@ -881,7 +921,6 @@ class Backup(JobGroup):
)
homeassistant_file = SecureTarFile(
tar_name,
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
@@ -920,3 +959,187 @@ class Backup(JobGroup):
return self.sys_store.update_repositories(
set(self.repositories), issue_on_error=True, replace=replace
)
@Job(name="backup_store_supervisor_config", cleanup=False)
async def store_supervisor_config(self) -> None:
"""Store supervisor configuration into backup as encrypted tar."""
if not self._outer_secure_tarfile:
raise RuntimeError(
"Cannot backup components without initializing backup tar"
)
registries = self.sys_docker.config.registries
if not self.sys_mounts.mounts and not registries:
return
mounts_data = {
ATTR_DEFAULT_BACKUP_MOUNT: (
self.sys_mounts.default_backup_mount.name
if self.sys_mounts.default_backup_mount
else None
),
ATTR_MOUNTS: [
mount.to_dict(skip_secrets=False) for mount in self.sys_mounts.mounts
],
}
docker_data = {ATTR_REGISTRIES: registries}
outer_secure_tarfile = self._outer_secure_tarfile
tar_name = f"supervisor.tar{'.gz' if self.compressed else ''}"
def _save() -> None:
"""Save supervisor config data to tar file."""
_LOGGER.info("Backing up supervisor configuration")
# Create JSON data
mounts_json = json.dumps(mounts_data).encode("utf-8")
docker_json = json.dumps(docker_data).encode("utf-8")
with outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
) as tar_file:
# Add mounts.json to tar
tarinfo = tarfile.TarInfo(name="mounts.json")
tarinfo.size = len(mounts_json)
tar_file.addfile(tarinfo, io.BytesIO(mounts_json))
# Add docker.json to tar
tarinfo = tarfile.TarInfo(name="docker.json")
tarinfo.size = len(docker_json)
tar_file.addfile(tarinfo, io.BytesIO(docker_json))
_LOGGER.info("Backup supervisor configuration done")
try:
await self.sys_run_in_executor(_save)
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't write supervisor config tarfile: {err!s}"
) from err
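The `_save()` body above adds `mounts.json` and `docker.json` as in-memory tar members via `TarInfo`, without touching disk. A self-contained sketch of that write pattern (the payload is hypothetical example data):

```python
import io
import json
import tarfile

# Hypothetical registry payload, shaped like the docker_data dict above.
payload = {"registries": {"registry.example.com": {"username": "user"}}}
raw = json.dumps(payload).encode("utf-8")

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    # TarInfo.size must match the byte length of the fileobj being added.
    info = tarfile.TarInfo(name="docker.json")
    info.size = len(raw)
    tar.addfile(info, io.BytesIO(raw))

# Read it back the way the restore side does.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    fobj = tar.extractfile("docker.json")
    restored = json.loads(fobj.read().decode("utf-8")) if fobj else None
```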
@Job(name="backup_restore_supervisor_config", cleanup=False)
async def restore_supervisor_config(self) -> tuple[bool, list[asyncio.Task]]:
"""Restore supervisor configuration from backup.
Returns tuple of (success, list of mount activation tasks).
The tasks should be awaited after the restore is complete to activate mounts.
"""
if not self._tmp:
raise RuntimeError("Cannot restore components without opening backup tar")
tar_name = Path(
self._tmp.name, f"supervisor.tar{'.gz' if self.compressed else ''}"
)
# Extract and parse supervisor data
def _load_supervisor_data() -> tuple[
dict[str, Any] | None, dict[str, Any] | None
]:
"""Load mounts and docker data from tar file."""
if not tar_name.exists():
_LOGGER.info("Supervisor tar file not found in backup")
return (None, None)
mounts_data = None
docker_data = None
with SecureTarFile(
tar_name,
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
) as tar_file:
try:
member = tar_file.getmember("mounts.json")
file_obj = tar_file.extractfile(member)
if file_obj:
mounts_data = json.loads(file_obj.read().decode("utf-8"))
except KeyError:
_LOGGER.debug("mounts.json not found in supervisor tar")
try:
member = tar_file.getmember("docker.json")
file_obj = tar_file.extractfile(member)
if file_obj:
docker_data = json.loads(file_obj.read().decode("utf-8"))
except KeyError:
_LOGGER.debug("docker.json not found in supervisor tar")
return (mounts_data, docker_data)
try:
mounts_data, docker_data = await self.sys_run_in_executor(
_load_supervisor_data
)
except OSError as err:
self.sys_resolution.check_oserror(err)
_LOGGER.warning("Failed to read supervisor tar from backup: %s", err)
return (False, [])
except (tarfile.TarError, json.JSONDecodeError) as err:
_LOGGER.warning("Failed to read supervisor config from backup: %s", err)
return (False, [])
if not mounts_data and not docker_data:
return (True, [])
success = True
mount_tasks: list[asyncio.Task] = []
# Restore mount configurations
if mounts_data:
try:
mounts_data = SCHEMA_MOUNTS_CONFIG(mounts_data)
except vol.Invalid as err:
_LOGGER.warning("Invalid mounts data in supervisor config: %s", err)
success = False
mounts_data = None
if mounts_data:
for mount_data in mounts_data.get(ATTR_MOUNTS, []):
mount_name = mount_data[ATTR_NAME]
try:
mount = Mount.from_dict(self.coresys, mount_data)
mount_tasks.append(await self.sys_mounts.restore_mount(mount))
_LOGGER.info("Restored mount configuration: %s", mount_name)
except (MountError, vol.Invalid, KeyError, OSError) as err:
_LOGGER.warning("Failed to restore mount %s: %s", mount_name, err)
success = False
# Restore default backup mount if not already set
default_mount_name = mounts_data.get(ATTR_DEFAULT_BACKUP_MOUNT)
if (
default_mount_name
and default_mount_name in self.sys_mounts
and self.sys_mounts.default_backup_mount is None
):
self.sys_mounts.default_backup_mount = self.sys_mounts.get(
default_mount_name
)
_LOGGER.info("Restored default backup mount: %s", default_mount_name)
# Save mount configuration to disk
await self.sys_mounts.save_data()
# Restore Docker registry configurations
if docker_data:
try:
docker_data = SCHEMA_DOCKER_CONFIG(docker_data)
except vol.Invalid as err:
_LOGGER.warning("Invalid docker data in supervisor config: %s", err)
success = False
docker_data = None
if docker_data:
registries = docker_data.get(ATTR_REGISTRIES, {})
if registries:
self.sys_docker.config.registries.update(registries)
await self.sys_docker.config.save_data()
_LOGGER.info(
"Restored %d docker registry configuration(s)", len(registries)
)
return (success, mount_tasks)



@@ -3,9 +3,14 @@
from enum import StrEnum
from typing import Literal
from awesomeversion import AwesomeVersion
from ..mounts.mount import Mount
BUF_SIZE = 2**20 * 4 # 4MB
SECURETAR_CREATE_VERSION = 2
SECURETAR_V3_CREATE_VERSION = 3
CORE_SECURETAR_V3_MIN_VERSION: AwesomeVersion = AwesomeVersion("2026.3.0")
DEFAULT_FREEZE_TIMEOUT = 600
LOCATION_CLOUD_BACKUP = ".cloud_backup"
@@ -27,6 +32,7 @@ class BackupJobStage(StrEnum):
FINISHING_FILE = "finishing_file"
FOLDERS = "folders"
HOME_ASSISTANT = "home_assistant"
SUPERVISOR_CONFIG = "supervisor_config"
COPY_ADDITONAL_LOCATIONS = "copy_additional_locations"
AWAIT_ADDON_RESTARTS = "await_addon_restarts"
@@ -40,4 +46,5 @@ class RestoreJobStage(StrEnum):
AWAIT_HOME_ASSISTANT_RESTART = "await_home_assistant_restart"
FOLDERS = "folders"
HOME_ASSISTANT = "home_assistant"
SUPERVISOR_CONFIG = "supervisor_config"
REMOVE_DELTA_ADDONS = "remove_delta_addons"


@@ -210,13 +210,11 @@ class BackupManager(FileConfiguration, JobGroup):
try:
return await self.sys_run_in_executor(find_backups)
except OSError as err:
if err.errno == errno.EBADMSG and path in {
if path in {
self.sys_config.path_backup,
self.sys_config.path_core_backup,
}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
return []
@@ -365,13 +363,8 @@ class BackupManager(FileConfiguration, JobGroup):
) from err
except OSError as err:
msg = f"Could delete backup at {backup_tarfile.as_posix()}: {err!s}"
if err.errno == errno.EBADMSG and location in {
None,
LOCATION_CLOUD_BACKUP,
}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
if location in {None, LOCATION_CLOUD_BACKUP}:
self.sys_resolution.check_oserror(err)
raise BackupError(msg, _LOGGER.error) from err
# If backup has been removed from all locations, remove it from cache
@@ -403,12 +396,10 @@ class BackupManager(FileConfiguration, JobGroup):
return (location_name, Path(path))
except OSError as err:
msg = f"Could not copy backup to {location_name} due to: {err!s}"
if err.errno == errno.EBADMSG and location in {
LOCATION_CLOUD_BACKUP,
None,
}:
raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
if location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.check_oserror(err)
if err.errno == errno.EBADMSG:
raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
raise BackupError(msg, _LOGGER.error) from err
@Job(name="backup_copy_to_additional_locations", cleanup=False)
@@ -468,10 +459,8 @@ class BackupManager(FileConfiguration, JobGroup):
try:
await self.sys_run_in_executor(backup.tarfile.rename, tar_file)
except OSError as err:
if err.errno == errno.EBADMSG and location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
if location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't move backup file to storage: %s", err)
return None
@@ -549,6 +538,10 @@ class BackupManager(FileConfiguration, JobGroup):
self._change_stage(BackupJobStage.FOLDERS, backup)
await backup.store_folders(folder_list)
# Backup supervisor configuration (mounts, etc.)
self._change_stage(BackupJobStage.SUPERVISOR_CONFIG, backup)
await backup.store_supervisor_config()
self._change_stage(BackupJobStage.FINISHING_FILE, backup)
except BackupError as err:
@@ -750,6 +743,14 @@ class BackupManager(FileConfiguration, JobGroup):
)
success = success and restore_success
# Restore supervisor configuration (mounts, etc.)
self._change_stage(RestoreJobStage.SUPERVISOR_CONFIG, backup)
(
mount_success,
mount_tasks,
) = await backup.restore_supervisor_config()
success = success and mount_success
# Wait for Home Assistant Core update/downgrade
if task_hass:
await task_hass
@@ -770,6 +771,14 @@ class BackupManager(FileConfiguration, JobGroup):
):
return False
# Wait for mount activations (failures don't affect restore success
# since config was already saved)
if mount_tasks:
results = await asyncio.gather(*mount_tasks, return_exceptions=True)
for result in results:
if isinstance(result, Exception):
_LOGGER.warning("Mount activation error: %s", result)
return success
finally:
# Leave Home Assistant alone if it wasn't part of the restore
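The mount activation tasks above are gathered with `return_exceptions=True`, so one failing mount is logged without cancelling the others or flipping the overall restore result. A standalone sketch of that pattern (`activate` and `main` are illustrative names):

```python
import asyncio


async def activate(name: str, fail: bool) -> str:
    """Pretend to activate a mount, optionally failing."""
    if fail:
        raise RuntimeError(f"mount {name} failed")
    return name


async def main() -> list[str]:
    tasks = [
        asyncio.create_task(activate("media", False)),
        asyncio.create_task(activate("share", True)),
    ]
    # return_exceptions=True: failures come back as values, not raised
    results = await asyncio.gather(*tasks, return_exceptions=True)
    activated = []
    for result in results:
        if isinstance(result, Exception):
            # Log and continue; a single failure doesn't abort the rest
            continue
        activated.append(result)
    return activated
```

Here `asyncio.run(main())` yields only the successfully activated mounts.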


@@ -14,7 +14,6 @@ from ..const import (
ATTR_CRYPTO,
ATTR_DATE,
ATTR_DAYS_UNTIL_STALE,
ATTR_DOCKER,
ATTR_EXCLUDE_DATABASE,
ATTR_EXTRA,
ATTR_FOLDERS,
@@ -35,7 +34,7 @@ from ..const import (
FOLDER_SSL,
)
from ..store.validate import repositories
from ..validate import SCHEMA_DOCKER_CONFIG, version_tag
from ..validate import version_tag
ALL_FOLDERS = [
FOLDER_SHARE,
@@ -114,7 +113,6 @@ SCHEMA_BACKUP = vol.Schema(
)
),
),
vol.Optional(ATTR_DOCKER, default=dict): SCHEMA_DOCKER_CONFIG,
vol.Optional(ATTR_FOLDERS, default=list): vol.All(
v1_folderlist, [vol.In(ALL_FOLDERS)], vol.Unique()
),


@@ -1,11 +1,12 @@
"""Constants file for Supervisor."""
from collections.abc import Mapping
from dataclasses import dataclass
from enum import StrEnum
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import NotRequired, Self, TypedDict
from typing import Any, NotRequired, Self, TypedDict
from aiohttp import __version__ as aiohttpversion
@@ -305,6 +306,7 @@ ATTR_REGISTRIES = "registries"
ATTR_REGISTRY = "registry"
ATTR_REPOSITORIES = "repositories"
ATTR_REPOSITORY = "repository"
ATTR_ROUTE_METRIC = "route_metric"
ATTR_SCHEMA = "schema"
ATTR_SECURITY = "security"
ATTR_SERIAL = "serial"
@@ -386,7 +388,20 @@ ARCH_AARCH64 = "aarch64"
ARCH_AMD64 = "amd64"
ARCH_I386 = "i386"
ARCH_ALL = [ARCH_ARMHF, ARCH_ARMV7, ARCH_AARCH64, ARCH_AMD64, ARCH_I386]
ARCH_ALL = [ARCH_AARCH64, ARCH_AMD64]
ARCH_DEPRECATED = [ARCH_ARMHF, ARCH_ARMV7, ARCH_I386]
ARCH_ALL_COMPAT = ARCH_ALL + ARCH_DEPRECATED
MACHINE_DEPRECATED = [
"odroid-xu",
"qemuarm",
"qemux86",
"raspberrypi",
"raspberrypi2",
"raspberrypi3",
"raspberrypi4",
"tinker",
]
REPOSITORY_CORE = "core"
REPOSITORY_LOCAL = "local"
@@ -411,6 +426,11 @@ ROLE_ADMIN = "admin"
ROLE_ALL = [ROLE_DEFAULT, ROLE_HOMEASSISTANT, ROLE_BACKUP, ROLE_MANAGER, ROLE_ADMIN]
OBSERVER_PORT = 4357
# Used for stream operations
DEFAULT_CHUNK_SIZE = 2**16 # 64KiB
class AddonBootConfig(StrEnum):
"""Boot mode config for the add-on."""
@@ -523,67 +543,81 @@ class BusEvent(StrEnum):
class CpuArch(StrEnum):
"""Supported CPU architectures."""
ARMV7 = "armv7"
ARMHF = "armhf"
AARCH64 = "aarch64"
I386 = "i386"
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
class HomeAssistantUser:
"""A Home Assistant Core user.
Incomplete model — Core's User object has additional fields
(credentials, refresh_tokens, etc.) that are not represented here.
Only fields used by the Supervisor are included.
"""
id: str
display_name: str | None = None
username: str | None = None
def to_dict(self) -> IngressSessionDataUserDict:
"""Get dictionary representation."""
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
name: str | None = None
is_owner: bool = False
is_active: bool = False
local_only: bool = False
system_generated: bool = False
group_ids: list[str] | None = None
@classmethod
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
# "displayname" is a legacy key from old ingress session data
name=data.get("name") or data.get("displayname"),
is_owner=data.get("is_owner", False),
is_active=data.get("is_active", False),
local_only=data.get("local_only", False),
system_generated=data.get("system_generated", False),
group_ids=data.get("group_ids"),
)
class IngressSessionDataUserDict(TypedDict):
"""Serialization format for user data stored in ingress sessions.
Legacy data may contain "displayname" instead of "name".
"""
id: str
username: NotRequired[str | None]
name: NotRequired[str | None]
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
"""Serialization format for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
"""Ingress session data attached to a session token."""
user: IngressSessionDataUser
user: HomeAssistantUser
def to_dict(self) -> IngressSessionDataDict:
"""Get dictionary representation."""
return IngressSessionDataDict(user=self.user.to_dict())
return IngressSessionDataDict(
user=IngressSessionDataUserDict(
id=self.user.id,
name=self.user.name,
username=self.user.username,
)
)
@classmethod
def from_dict(cls, data: IngressSessionDataDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
return cls(user=HomeAssistantUser.from_dict(data["user"]))
STARTING_STATES = [


@@ -16,6 +16,7 @@ from .const import (
CoreState,
)
from .coresys import CoreSys, CoreSysAttributes
from .dbus.const import StopUnitMode
from .exceptions import (
HassioError,
HomeAssistantCrashError,
@@ -423,18 +424,34 @@ class Core(CoreSysAttributes):
await self.sys_host.control.set_timezone(timezone)
# Calculate if system time is out of sync
delta = data.dt_utc - utcnow()
if delta <= timedelta(days=3) or self.sys_host.info.dt_synchronized:
delta = abs(data.dt_utc - utcnow())
if delta <= timedelta(hours=1) or self.sys_host.info.dt_synchronized:
return
_LOGGER.warning("System time/date shift over more than 3 days found!")
_LOGGER.warning("System time/date shift over more than 1 hour detected!")
if self.sys_host.info.use_ntp:
# Stop timesyncd if NTP is enabled, as set_time is blocked while it runs.
_LOGGER.info("Stopping systemd-timesyncd to allow manual time adjustment")
await self.sys_dbus.systemd.stop_unit(
"systemd-timesyncd.service", StopUnitMode.REPLACE
)
# Keep service disabled and create a repair issue
self.sys_resolution.create_issue(
IssueType.NTP_SYNC_FAILED,
ContextType.SYSTEM,
suggestions=[SuggestionType.ENABLE_NTP],
)
# We need to wait a bit for the service to stop.
await asyncio.sleep(1)
await self.sys_host.control.set_datetime(data.dt_utc)
await self.sys_supervisor.check_connectivity()
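The `abs()` change above makes the drift check symmetric, so clocks that are either ahead or behind by more than the (now one-hour) threshold trigger correction. Sketched standalone (`needs_adjustment` is an illustrative name):

```python
from datetime import datetime, timedelta, timezone


def needs_adjustment(
    host_utc: datetime,
    now_utc: datetime,
    threshold: timedelta = timedelta(hours=1),
) -> bool:
    """Return True when host time drifts past the threshold in either direction."""
    # abs() catches both forward and backward drift
    return abs(host_utc - now_utc) > threshold


NOW = datetime(2026, 1, 1, tzinfo=timezone.utc)
```

Without `abs()`, a clock running two hours *ahead* would produce a negative delta and silently pass the `<=` check.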
async def repair(self) -> None:
"""Repair system integrity."""
_LOGGER.info("Starting repair of Supervisor Environment")
await self.sys_run_in_executor(self.sys_docker.repair)
await self.sys_docker.repair()
# Fix plugins
await self.sys_plugins.repair()


@@ -628,9 +628,17 @@ class CoreSys:
context = callback(context)
return context
def create_task(self, coroutine: Coroutine) -> asyncio.Task:
def create_task(
self, coroutine: Coroutine, *, eager_start: bool | None = None
) -> asyncio.Task:
"""Create an async task."""
return self.loop.create_task(coroutine, context=self._create_context())
# eager_start kwarg works but wasn't added for mypy visibility until 3.14
# can remove the type ignore then
return self.loop.create_task(
coroutine,
context=self._create_context(),
eager_start=eager_start, # type: ignore
)
def call_later(
self,
@@ -847,9 +855,11 @@ class CoreSysAttributes:
"""Add a job to the executor pool."""
return self.coresys.run_in_executor(funct, *args, **kwargs)
def sys_create_task(self, coroutine: Coroutine) -> asyncio.Task:
def sys_create_task(
self, coroutine: Coroutine, *, eager_start: bool | None = None
) -> asyncio.Task:
"""Create an async task."""
return self.coresys.create_task(coroutine)
return self.coresys.create_task(coroutine, eager_start=eager_start)
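The `eager_start` plumbing above depends on runtime support in the event loop. On Python 3.12+, the same effect can be observed with `asyncio.eager_task_factory`: a coroutine that never awaits completes inside `create_task` itself, before control returns to the caller (`probe` and `main` are illustrative names):

```python
import asyncio


async def probe() -> str:
    # Never awaits, so an eager task finishes synchronously
    return "done"


async def main() -> bool:
    loop = asyncio.get_running_loop()
    # Requires Python 3.12+; make every create_task() start eagerly
    loop.set_task_factory(asyncio.eager_task_factory)
    task = loop.create_task(probe())
    # With eager start the task is already done right after creation
    return task.done()
```

With the default (lazy) factory, `task.done()` would be `False` at the same point, since the coroutine only starts on the next loop iteration.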
def sys_call_later(
self,


@@ -1,25 +1,17 @@
{
"raspberrypi": ["armhf"],
"raspberrypi2": ["armv7", "armhf"],
"raspberrypi3": ["armv7", "armhf"],
"raspberrypi3-64": ["aarch64", "armv7", "armhf"],
"raspberrypi4": ["armv7", "armhf"],
"raspberrypi4-64": ["aarch64", "armv7", "armhf"],
"raspberrypi5-64": ["aarch64", "armv7", "armhf"],
"yellow": ["aarch64", "armv7", "armhf"],
"green": ["aarch64", "armv7", "armhf"],
"tinker": ["armv7", "armhf"],
"odroid-c2": ["aarch64", "armv7", "armhf"],
"odroid-c4": ["aarch64", "armv7", "armhf"],
"odroid-m1": ["aarch64", "armv7", "armhf"],
"odroid-n2": ["aarch64", "armv7", "armhf"],
"odroid-xu": ["armv7", "armhf"],
"khadas-vim3": ["aarch64", "armv7", "armhf"],
"raspberrypi3-64": ["aarch64"],
"raspberrypi4-64": ["aarch64"],
"raspberrypi5-64": ["aarch64"],
"yellow": ["aarch64"],
"green": ["aarch64"],
"odroid-c2": ["aarch64"],
"odroid-c4": ["aarch64"],
"odroid-m1": ["aarch64"],
"odroid-n2": ["aarch64"],
"khadas-vim3": ["aarch64"],
"generic-aarch64": ["aarch64"],
"qemux86": ["i386"],
"qemux86-64": ["amd64", "i386"],
"qemuarm": ["armhf"],
"qemux86-64": ["amd64"],
"qemuarm-64": ["aarch64"],
"intel-nuc": ["amd64", "i386"],
"generic-x86-64": ["amd64", "i386"]
"intel-nuc": ["amd64"],
"generic-x86-64": ["amd64"]
}


@@ -3,10 +3,17 @@
"bluetoothd",
"bthelper",
"btuart",
"containerd",
"dbus-broker",
"dbus-broker-launch",
"docker",
"docker-prepare",
"dockerd",
"dropbear",
"fstrim",
"haos-data-disk-detach",
"haos-swapfile",
"haos-wipe",
"hassos-apparmor",
"hassos-config",
"hassos-expand",
@@ -17,7 +24,10 @@
"kernel",
"mount",
"os-agent",
"qemu-ga",
"rauc",
"raucdb-update",
"sm-notify",
"systemd",
"systemd-coredump",
"systemd-fsck",


@@ -3,6 +3,8 @@
from enum import IntEnum, StrEnum
from socket import AF_INET, AF_INET6
from .enum import DBusIntEnum, DBusStrEnum
DBUS_NAME_HAOS = "io.hass.os"
DBUS_NAME_HOSTNAME = "org.freedesktop.hostname1"
DBUS_NAME_LOGIND = "org.freedesktop.login1"
@@ -208,7 +210,7 @@ DBUS_ATTR_WWN = "WWN"
DBUS_ERR_SYSTEMD_NO_SUCH_UNIT = "org.freedesktop.systemd1.NoSuchUnit"
class RaucState(StrEnum):
class RaucState(DBusStrEnum):
"""Rauc slot states."""
GOOD = "good"
@@ -216,7 +218,7 @@ class RaucState(StrEnum):
ACTIVE = "active"
class InterfaceMethod(StrEnum):
class InterfaceMethod(DBusStrEnum):
"""Interface method simple."""
AUTO = "auto"
@@ -225,7 +227,7 @@ class InterfaceMethod(StrEnum):
LINK_LOCAL = "link-local"
class InterfaceAddrGenMode(IntEnum):
class InterfaceAddrGenMode(DBusIntEnum):
"""Interface addr_gen_mode."""
EUI64 = 0
@@ -234,7 +236,7 @@ class InterfaceAddrGenMode(IntEnum):
DEFAULT = 3
class InterfaceIp6Privacy(IntEnum):
class InterfaceIp6Privacy(DBusIntEnum):
"""Interface ip6_privacy."""
DEFAULT = -1
@@ -243,14 +245,14 @@ class InterfaceIp6Privacy(IntEnum):
ENABLED = 2
class ConnectionType(StrEnum):
class ConnectionType(DBusStrEnum):
"""Connection type."""
ETHERNET = "802-3-ethernet"
WIRELESS = "802-11-wireless"
class ConnectionState(IntEnum):
class ConnectionState(DBusIntEnum):
"""Connection states.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMActiveConnectionState
@@ -280,7 +282,7 @@ class ConnectionStateFlags(IntEnum):
EXTERNAL = 0x80
class ConnectivityState(IntEnum):
class ConnectivityState(DBusIntEnum):
"""Network connectvity.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMConnectivityState
@@ -293,7 +295,7 @@ class ConnectivityState(IntEnum):
CONNECTIVITY_FULL = 4
class DeviceType(IntEnum):
class DeviceType(DBusIntEnum):
"""Device types.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMDeviceType
@@ -302,15 +304,41 @@ class DeviceType(IntEnum):
UNKNOWN = 0
ETHERNET = 1
WIRELESS = 2
UNUSED1 = 3
UNUSED2 = 4
BLUETOOTH = 5
OLPC_MESH = 6
WIMAX = 7
MODEM = 8
INFINIBAND = 9
BOND = 10
VLAN = 11
ADSL = 12
BRIDGE = 13
GENERIC = 14
TEAM = 15
TUN = 16
IP_TUNNEL = 17
MAC_VLAN = 18
VXLAN = 19
VETH = 20
MACSEC = 21
DUMMY = 22
PPP = 23
OVS_INTERFACE = 24
OVS_PORT = 25
OVS_BRIDGE = 26
WPAN = 27
LOWPAN6 = 28
WIREGUARD = 29
WIFI_P2P = 30
VRF = 31
LOOPBACK = 32
HSR = 33
IPVLAN = 34
class WirelessMethodType(IntEnum):
class WirelessMethodType(DBusIntEnum):
"""Device Type."""
UNKNOWN = 0
@@ -327,7 +355,7 @@ class DNSAddressFamily(IntEnum):
INET6 = AF_INET6
class MulticastProtocolEnabled(StrEnum):
class MulticastProtocolEnabled(DBusStrEnum):
"""Multicast protocol enabled or resolve."""
YES = "yes"
@@ -335,7 +363,7 @@ class MulticastProtocolEnabled(StrEnum):
RESOLVE = "resolve"
class MulticastDnsValue(IntEnum):
class MulticastDnsValue(DBusIntEnum):
"""Connection MulticastDNS (mdns/llmnr) values."""
DEFAULT = -1
@@ -344,7 +372,7 @@ class MulticastDnsValue(IntEnum):
ANNOUNCE = 2
class DNSOverTLSEnabled(StrEnum):
class DNSOverTLSEnabled(DBusStrEnum):
"""DNS over TLS enabled."""
YES = "yes"
@@ -352,7 +380,7 @@ class DNSOverTLSEnabled(StrEnum):
OPPORTUNISTIC = "opportunistic"
class DNSSECValidation(StrEnum):
class DNSSECValidation(DBusStrEnum):
"""DNSSEC validation enforced."""
YES = "yes"
@@ -360,7 +388,7 @@ class DNSSECValidation(StrEnum):
ALLOW_DOWNGRADE = "allow-downgrade"
class DNSStubListenerEnabled(StrEnum):
class DNSStubListenerEnabled(DBusStrEnum):
"""DNS stub listener enabled."""
YES = "yes"
@@ -369,7 +397,7 @@ class DNSStubListenerEnabled(StrEnum):
UDP_ONLY = "udp"
class ResolvConfMode(StrEnum):
class ResolvConfMode(DBusStrEnum):
"""Resolv.conf management mode."""
FOREIGN = "foreign"
@@ -398,7 +426,7 @@ class StartUnitMode(StrEnum):
ISOLATE = "isolate"
class UnitActiveState(StrEnum):
class UnitActiveState(DBusStrEnum):
"""Active state of a systemd unit."""
ACTIVE = "active"

supervisor/dbus/enum.py Normal file

@@ -0,0 +1,56 @@
"""D-Bus tolerant enum base classes.
D-Bus services (systemd, NetworkManager, RAUC, UDisks2) can introduce new enum
values at any time via OS updates. Standard enum construction raises ValueError
for unknown values. These base classes use Python's _missing_ hook to create
pseudo-members for unknown values, preventing crashes while preserving the
original value for logging and debugging.
"""
from enum import IntEnum, StrEnum
import logging
from ..utils.sentry import fire_and_forget_capture_message
_LOGGER: logging.Logger = logging.getLogger(__name__)
_reported: set[tuple[str, str | int]] = set()
def _report_unknown_value(cls: type, value: str | int) -> None:
"""Log and report an unknown D-Bus enum value to Sentry."""
msg = f"Unknown {cls.__name__} value received from D-Bus: {value}"
_LOGGER.warning(msg)
key = (cls.__name__, value)
if key not in _reported:
_reported.add(key)
fire_and_forget_capture_message(msg)
class DBusStrEnum(StrEnum):
"""StrEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> DBusStrEnum | None:
if not isinstance(value, str):
return None
_report_unknown_value(cls, value)
obj = str.__new__(cls, value)
obj._name_ = value
obj._value_ = value
return obj
class DBusIntEnum(IntEnum):
"""IntEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> DBusIntEnum | None:
if not isinstance(value, int):
return None
_report_unknown_value(cls, value)
obj = int.__new__(cls, value)
obj._name_ = f"UNKNOWN_{value}"
obj._value_ = value
return obj


@@ -77,6 +77,7 @@ class IpProperties(ABC):
method: str | None
address_data: list[IpAddress] | None
gateway: str | None
route_metric: int | None
@dataclass(slots=True)


@@ -60,15 +60,7 @@ class NetworkInterface(DBusInterfaceProxy):
@dbus_property
def type(self) -> DeviceType:
"""Return interface type."""
try:
return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
except ValueError:
_LOGGER.debug(
"Unknown device type %s for %s, treating as UNKNOWN",
self.properties[DBUS_ATTR_DEVICE_TYPE],
self.object_path,
)
return DeviceType.UNKNOWN
return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
@property
@dbus_property


@@ -46,6 +46,15 @@ class IpConfiguration(DBusInterfaceProxy):
"""Primary interface of object to get property values from."""
return self._properties_interface
@property
@dbus_property
def address(self) -> list[IPv4Interface | IPv6Interface]:
"""Get address."""
return [
ip_interface(f"{address[ATTR_ADDRESS]}/{address[ATTR_PREFIX]}")
for address in self.properties[DBUS_ATTR_ADDRESS_DATA]
]
@property
@dbus_property
def gateway(self) -> IPv4Address | IPv6Address | None:
@@ -70,12 +79,3 @@ class IpConfiguration(DBusInterfaceProxy):
ip_address(bytes(nameserver))
for nameserver in self.properties[DBUS_ATTR_NAMESERVERS]
]
@property
@dbus_property
def address(self) -> list[IPv4Interface | IPv6Interface]:
"""Get address."""
return [
ip_interface(f"{address[ATTR_ADDRESS]}/{address[ATTR_PREFIX]}")
for address in self.properties[DBUS_ATTR_ADDRESS_DATA]
]
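The relocated `address` property combines NetworkManager `AddressData` entries (an address string plus a prefix length) into `ipaddress` interface objects, handling IPv4 and IPv6 uniformly. Standalone, with made-up sample data:

```python
from ipaddress import ip_interface

# AddressData roughly as NetworkManager reports it: address plus prefix length
address_data = [
    {"address": "192.168.1.10", "prefix": 24},
    {"address": "fd00::5", "prefix": 64},
]

# ip_interface() accepts "addr/prefix" and picks v4/v6 automatically
interfaces = [
    ip_interface(f"{entry['address']}/{entry['prefix']}") for entry in address_data
]
```

Each element carries both the host address and its network, so prefix information is not lost in the conversion.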


@@ -53,27 +53,25 @@ CONF_ATTR_802_WIRELESS_SECURITY_AUTH_ALG = "auth-alg"
CONF_ATTR_802_WIRELESS_SECURITY_KEY_MGMT = "key-mgmt"
CONF_ATTR_802_WIRELESS_SECURITY_PSK = "psk"
CONF_ATTR_IPV4_METHOD = "method"
CONF_ATTR_IPV4_ADDRESS_DATA = "address-data"
CONF_ATTR_IPV4_GATEWAY = "gateway"
CONF_ATTR_IPV4_DNS = "dns"
CONF_ATTR_IP_METHOD = "method"
CONF_ATTR_IP_ADDRESS_DATA = "address-data"
CONF_ATTR_IP_GATEWAY = "gateway"
CONF_ATTR_IP_ROUTE_METRIC = "route-metric"
CONF_ATTR_IP_DNS = "dns"
CONF_ATTR_IPV6_METHOD = "method"
CONF_ATTR_IPV6_ADDR_GEN_MODE = "addr-gen-mode"
CONF_ATTR_IPV6_PRIVACY = "ip6-privacy"
CONF_ATTR_IPV6_ADDRESS_DATA = "address-data"
CONF_ATTR_IPV6_GATEWAY = "gateway"
CONF_ATTR_IPV6_DNS = "dns"
IPV4_6_IGNORE_FIELDS = [
_IP_IGNORE_FIELDS = [
CONF_ATTR_IP_METHOD,
CONF_ATTR_IP_ADDRESS_DATA,
CONF_ATTR_IP_GATEWAY,
CONF_ATTR_IP_ROUTE_METRIC,
CONF_ATTR_IP_DNS,
CONF_ATTR_IPV6_ADDR_GEN_MODE,
CONF_ATTR_IPV6_PRIVACY,
"addresses",
"address-data",
"dns",
"dns-data",
"gateway",
"method",
"addr-gen-mode",
"ip6-privacy",
]
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -195,13 +193,13 @@ class NetworkSetting(DBusInterface):
new_settings,
settings,
CONF_ATTR_IPV4,
ignore_current_value=IPV4_6_IGNORE_FIELDS,
ignore_current_value=_IP_IGNORE_FIELDS,
)
_merge_settings_attribute(
new_settings,
settings,
CONF_ATTR_IPV6,
ignore_current_value=IPV4_6_IGNORE_FIELDS,
ignore_current_value=_IP_IGNORE_FIELDS,
)
_merge_settings_attribute(new_settings, settings, CONF_ATTR_MATCH)
@@ -291,26 +289,28 @@ class NetworkSetting(DBusInterface):
if CONF_ATTR_IPV4 in data:
address_data = None
if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_ADDRESS_DATA):
if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_ADDRESS_DATA):
address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
self._ipv4 = Ip4Properties(
method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_METHOD),
method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_METHOD),
address_data=address_data,
gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_GATEWAY),
dns=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_DNS),
gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_GATEWAY),
route_metric=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_ROUTE_METRIC),
dns=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_DNS),
)
if CONF_ATTR_IPV6 in data:
address_data = None
if ips := data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDRESS_DATA):
if ips := data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_ADDRESS_DATA):
address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
self._ipv6 = Ip6Properties(
method=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_METHOD),
method=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_METHOD),
addr_gen_mode=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDR_GEN_MODE),
ip6_privacy=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_PRIVACY),
address_data=address_data,
gateway=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_GATEWAY),
dns=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_DNS),
gateway=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_GATEWAY),
route_metric=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_ROUTE_METRIC),
dns=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_DNS),
)
if CONF_ATTR_MATCH in data:


@@ -41,17 +41,14 @@ from . import (
CONF_ATTR_CONNECTION_MDNS,
CONF_ATTR_CONNECTION_TYPE,
CONF_ATTR_CONNECTION_UUID,
CONF_ATTR_IP_ADDRESS_DATA,
CONF_ATTR_IP_DNS,
CONF_ATTR_IP_GATEWAY,
CONF_ATTR_IP_METHOD,
CONF_ATTR_IP_ROUTE_METRIC,
CONF_ATTR_IPV4,
CONF_ATTR_IPV4_ADDRESS_DATA,
CONF_ATTR_IPV4_DNS,
CONF_ATTR_IPV4_GATEWAY,
CONF_ATTR_IPV4_METHOD,
CONF_ATTR_IPV6,
CONF_ATTR_IPV6_ADDR_GEN_MODE,
CONF_ATTR_IPV6_ADDRESS_DATA,
CONF_ATTR_IPV6_DNS,
CONF_ATTR_IPV6_GATEWAY,
CONF_ATTR_IPV6_METHOD,
CONF_ATTR_IPV6_PRIVACY,
CONF_ATTR_MATCH,
CONF_ATTR_MATCH_PATH,
@@ -75,11 +72,11 @@ MULTICAST_DNS_MODE_VALUE_MAPPING = {
def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
ipv4 = {}
if not ipv4setting or ipv4setting.method == InterfaceMethod.AUTO:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "auto")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "auto")
elif ipv4setting.method == InterfaceMethod.DISABLED:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "disabled")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "disabled")
elif ipv4setting.method == InterfaceMethod.STATIC:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "manual")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "manual")
address_data = []
for address in ipv4setting.address:
@@ -90,26 +87,25 @@ def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
}
)
ipv4[CONF_ATTR_IPV4_ADDRESS_DATA] = Variant("aa{sv}", address_data)
ipv4[CONF_ATTR_IP_ADDRESS_DATA] = Variant("aa{sv}", address_data)
if ipv4setting.gateway:
ipv4[CONF_ATTR_IPV4_GATEWAY] = Variant("s", str(ipv4setting.gateway))
ipv4[CONF_ATTR_IP_GATEWAY] = Variant("s", str(ipv4setting.gateway))
else:
raise RuntimeError("Invalid IPv4 InterfaceMethod")
if (
ipv4setting
and ipv4setting.nameservers
and ipv4setting.method
in (
if ipv4setting:
if ipv4setting.route_metric is not None:
ipv4[CONF_ATTR_IP_ROUTE_METRIC] = Variant("i", ipv4setting.route_metric)
if ipv4setting.nameservers and ipv4setting.method in (
InterfaceMethod.AUTO,
InterfaceMethod.STATIC,
)
):
nameservers = ipv4setting.nameservers if ipv4setting else []
ipv4[CONF_ATTR_IPV4_DNS] = Variant(
"au",
[socket.htonl(int(ip_address)) for ip_address in nameservers],
)
):
nameservers = ipv4setting.nameservers if ipv4setting else []
ipv4[CONF_ATTR_IP_DNS] = Variant(
"au",
[socket.htonl(int(ip_address)) for ip_address in nameservers],
)
return ipv4
@@ -119,7 +115,7 @@ def _get_ipv6_connection_settings(
) -> dict:
ipv6 = {}
if not ipv6setting or ipv6setting.method == InterfaceMethod.AUTO:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "auto")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "auto")
if ipv6setting:
if ipv6setting.addr_gen_mode == InterfaceAddrGenMode.EUI64:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
@@ -158,9 +154,9 @@ def _get_ipv6_connection_settings(
"i", NMInterfaceIp6Privacy.DEFAULT.value
)
elif ipv6setting.method == InterfaceMethod.DISABLED:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "link-local")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "link-local")
elif ipv6setting.method == InterfaceMethod.STATIC:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "manual")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "manual")
address_data = []
for address in ipv6setting.address:
@@ -171,26 +167,26 @@ def _get_ipv6_connection_settings(
}
)
ipv6[CONF_ATTR_IPV6_ADDRESS_DATA] = Variant("aa{sv}", address_data)
ipv6[CONF_ATTR_IP_ADDRESS_DATA] = Variant("aa{sv}", address_data)
if ipv6setting.gateway:
ipv6[CONF_ATTR_IPV6_GATEWAY] = Variant("s", str(ipv6setting.gateway))
ipv6[CONF_ATTR_IP_GATEWAY] = Variant("s", str(ipv6setting.gateway))
else:
raise RuntimeError("Invalid IPv6 InterfaceMethod")
if (
ipv6setting
and ipv6setting.nameservers
and ipv6setting.method
in (
if ipv6setting:
if ipv6setting.route_metric is not None:
ipv6[CONF_ATTR_IP_ROUTE_METRIC] = Variant("i", ipv6setting.route_metric)
if ipv6setting.nameservers and ipv6setting.method in (
InterfaceMethod.AUTO,
InterfaceMethod.STATIC,
)
):
nameservers = ipv6setting.nameservers if ipv6setting else []
ipv6[CONF_ATTR_IPV6_DNS] = Variant(
"aay",
[ip_address.packed for ip_address in nameservers],
)
):
nameservers = ipv6setting.nameservers if ipv6setting else []
ipv6[CONF_ATTR_IP_DNS] = Variant(
"aay",
[ip_address.packed for ip_address in nameservers],
)
return ipv6


@@ -1,6 +1,5 @@
"""D-Bus interface for rauc."""
from ctypes import c_uint32, c_uint64
import logging
from typing import Any, NotRequired, TypedDict
@@ -33,13 +32,15 @@ SlotStatusDataType = TypedDict(
"state": str,
"device": str,
"bundle.compatible": NotRequired[str],
"bundle.hash": NotRequired[str],
"sha256": NotRequired[str],
"size": NotRequired[c_uint64],
"installed.count": NotRequired[c_uint32],
"size": NotRequired[int],
"installed.count": NotRequired[int],
"installed.transaction": NotRequired[str],
"bundle.version": NotRequired[str],
"installed.timestamp": NotRequired[str],
"status": NotRequired[str],
"activated.count": NotRequired[c_uint32],
"activated.count": NotRequired[int],
"activated.timestamp": NotRequired[str],
"boot-status": NotRequired[str],
"bootname": NotRequired[str],
@@ -117,7 +118,7 @@ class Rauc(DBusInterfaceProxy):
return self.connected_dbus.signal(DBUS_SIGNAL_RAUC_INSTALLER_COMPLETED)
@dbus_connected
async def mark(self, state: RaucState, slot_identifier: str) -> tuple[str, str]:
async def mark(self, state: RaucState, slot_identifier: str) -> list[str]:
"""Get slot status."""
return await self.connected_dbus.Installer.call("mark", state, slot_identifier)


@@ -103,19 +103,19 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def dns_over_tls(self) -> DNSOverTLSEnabled | None:
"""Return DNS over TLS enabled."""
return self.properties[DBUS_ATTR_DNS_OVER_TLS]
return DNSOverTLSEnabled(self.properties[DBUS_ATTR_DNS_OVER_TLS])
@property
@dbus_property
def dns_stub_listener(self) -> DNSStubListenerEnabled | None:
"""Return DNS stub listener enabled on port 53."""
return self.properties[DBUS_ATTR_DNS_STUB_LISTENER]
return DNSStubListenerEnabled(self.properties[DBUS_ATTR_DNS_STUB_LISTENER])
@property
@dbus_property
def dnssec(self) -> DNSSECValidation | None:
"""Return DNSSEC validation enforced."""
return self.properties[DBUS_ATTR_DNSSEC]
return DNSSECValidation(self.properties[DBUS_ATTR_DNSSEC])
@property
@dbus_property
@@ -159,7 +159,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def llmnr(self) -> MulticastProtocolEnabled | None:
"""Return LLMNR enabled."""
return self.properties[DBUS_ATTR_LLMNR]
return MulticastProtocolEnabled(self.properties[DBUS_ATTR_LLMNR])
@property
@dbus_property
@@ -171,13 +171,13 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def multicast_dns(self) -> MulticastProtocolEnabled | None:
"""Return MDNS enabled."""
return self.properties[DBUS_ATTR_MULTICAST_DNS]
return MulticastProtocolEnabled(self.properties[DBUS_ATTR_MULTICAST_DNS])
@property
@dbus_property
def resolv_conf_mode(self) -> ResolvConfMode | None:
"""Return how /etc/resolv.conf managed on host."""
return self.properties[DBUS_ATTR_RESOLV_CONF_MODE]
return ResolvConfMode(self.properties[DBUS_ATTR_RESOLV_CONF_MODE])
@property
@dbus_property

View File

@@ -2,6 +2,7 @@
from functools import wraps
import logging
from typing import NamedTuple
from dbus_fast import Variant
from dbus_fast.aio.message_bus import MessageBus
@@ -36,6 +37,14 @@ from .utils import dbus_connected
_LOGGER: logging.Logger = logging.getLogger(__name__)
class ExecStartEntry(NamedTuple):
"""Systemd ExecStart entry for transient units (D-Bus type signature 'sasb')."""
binary: str
argv: list[str]
ignore_failure: bool
def systemd_errors(func):
"""Wrap systemd dbus methods to handle its specific error types."""
@@ -103,6 +112,12 @@ class Systemd(DBusInterfaceProxy):
"No systemd support on the host. Host control has been disabled."
)
if self.is_connected:
try:
await self.connected_dbus.Manager.call("subscribe")
except DBusError:
_LOGGER.warning("Could not subscribe to systemd signals")
@property
@dbus_property
def startup_time(self) -> float:

View File
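The new ExecStartEntry NamedTuple models the D-Bus struct signature 'sasb' (string, array of strings, boolean) used for ExecStart on transient units. Because a NamedTuple iterates like a plain tuple, it can be marshalled positionally; a sketch of the shape (no D-Bus connection, paths are illustrative):

```python
from typing import NamedTuple


class ExecStartEntry(NamedTuple):
    """Systemd ExecStart entry for transient units (D-Bus type signature 'sasb')."""

    binary: str
    argv: list[str]
    ignore_failure: bool


entry = ExecStartEntry("/usr/bin/mount", ["mount", "/dev/sda1", "/mnt"], False)

# Iterates positionally as (s, as, b), matching the D-Bus struct layout
assert tuple(entry) == ("/usr/bin/mount", ["mount", "/dev/sda1", "/mnt"], False)
```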

@@ -8,12 +8,11 @@ from typing import Any
from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..utils.dt import get_time_zone, utc_from_timestamp
from ..utils.dt import get_time_zone
from .const import (
DBUS_ATTR_LOCAL_RTC,
DBUS_ATTR_NTP,
DBUS_ATTR_NTPSYNCHRONIZED,
DBUS_ATTR_TIMEUSEC,
DBUS_ATTR_TIMEZONE,
DBUS_IFACE_TIMEDATE,
DBUS_NAME_TIMEDATE,
@@ -65,12 +64,6 @@ class TimeDate(DBusInterfaceProxy):
"""Return if NTP is synchronized."""
return self.properties[DBUS_ATTR_NTPSYNCHRONIZED]
@property
@dbus_property
def dt_utc(self) -> datetime:
"""Return the system UTC time."""
return utc_from_timestamp(self.properties[DBUS_ATTR_TIMEUSEC] / 1000000)
@property
def timezone_tzinfo(self) -> tzinfo | None:
"""Return timezone as tzinfo object."""

View File

@@ -72,7 +72,7 @@ class UDisks2Block(DBusInterfaceProxy):
@staticmethod
async def new(
object_path: str, bus: MessageBus, *, sync_properties: bool = True
) -> "UDisks2Block":
) -> UDisks2Block:
"""Create and connect object."""
obj = UDisks2Block(object_path, sync_properties=sync_properties)
await obj.connect(bus)

View File

@@ -4,6 +4,8 @@ from enum import StrEnum
from dbus_fast import Variant
from ..enum import DBusStrEnum
UDISKS2_DEFAULT_OPTIONS = {"auth.no_user_interaction": Variant("b", True)}
@@ -31,7 +33,7 @@ class FormatType(StrEnum):
GPT = "gpt"
class PartitionTableType(StrEnum):
class PartitionTableType(DBusStrEnum):
"""Partition Table type."""
DOS = "dos"

View File

@@ -46,7 +46,7 @@ class DeviceSpecification:
partlabel: str | None = None
@staticmethod
def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
def from_dict(data: DeviceSpecificationDataType) -> DeviceSpecification:
"""Create DeviceSpecification from dict."""
return DeviceSpecification(
path=Path(data["path"]) if "path" in data else None,
@@ -108,7 +108,7 @@ class FormatOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: FormatOptionsDataType) -> "FormatOptions":
def from_dict(data: FormatOptionsDataType) -> FormatOptions:
"""Create FormatOptions from dict."""
return FormatOptions(
label=data.get("label"),
@@ -182,7 +182,7 @@ class MountOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: MountOptionsDataType) -> "MountOptions":
def from_dict(data: MountOptionsDataType) -> MountOptions:
"""Create MountOptions from dict."""
return MountOptions(
fstype=data.get("fstype"),
@@ -226,7 +226,7 @@ class UnmountOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: UnmountOptionsDataType) -> "UnmountOptions":
def from_dict(data: UnmountOptionsDataType) -> UnmountOptions:
"""Create MountOptions from dict."""
return UnmountOptions(
force=data.get("force"),
@@ -268,7 +268,7 @@ class CreatePartitionOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: CreatePartitionOptionsDataType) -> "CreatePartitionOptions":
def from_dict(data: CreatePartitionOptionsDataType) -> CreatePartitionOptions:
"""Create CreatePartitionOptions from dict."""
return CreatePartitionOptions(
partition_type=data.get("partition-type"),
@@ -310,7 +310,7 @@ class DeletePartitionOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: DeletePartitionOptionsDataType) -> "DeletePartitionOptions":
def from_dict(data: DeletePartitionOptionsDataType) -> DeletePartitionOptions:
"""Create DeletePartitionOptions from dict."""
return DeletePartitionOptions(
tear_down=data.get("tear-down"),

View File
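The UDisks2 data classes drop the quoted forward references ("DeviceSpecification", "MountOptions", etc.) because these modules use `from __future__ import annotations`: under PEP 563 all annotations are evaluated lazily, so a class can name itself in its own method signatures without string quoting. A minimal sketch:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class MountOptions:
    """Cut-down illustration of the pattern in the diff."""

    fstype: str | None = None

    @staticmethod
    def from_dict(data: dict) -> MountOptions:  # no quotes needed under PEP 563
        return MountOptions(fstype=data.get("fstype"))


opts = MountOptions.from_dict({"fstype": "ext4"})
assert opts.fstype == "ext4"
```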

@@ -51,7 +51,7 @@ class UDisks2Drive(DBusInterfaceProxy):
await self._reload_interfaces()
@staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2Drive":
async def new(object_path: str, bus: MessageBus) -> UDisks2Drive:
"""Create and connect object."""
obj = UDisks2Drive(object_path)
await obj.connect(bus)

View File

@@ -96,7 +96,7 @@ class UDisks2NVMeController(DBusInterfaceProxy):
super().__init__()
@staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2NVMeController":
async def new(object_path: str, bus: MessageBus) -> UDisks2NVMeController:
"""Create and connect object."""
obj = UDisks2NVMeController(object_path)
await obj.connect(bus)

View File

@@ -2,22 +2,17 @@
from __future__ import annotations
from collections.abc import Awaitable
from contextlib import suppress
from http import HTTPStatus
from ipaddress import IPv4Address
import logging
import os
from pathlib import Path
from socket import SocketIO
import tempfile
from typing import TYPE_CHECKING, Literal, cast
from typing import TYPE_CHECKING, Any, Literal, cast
import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
import docker.errors
import requests
from ..addons.build import AddonBuild
from ..addons.const import MappingType
@@ -134,7 +129,7 @@ class DockerAddon(DockerInterface):
def arch(self) -> str | None:
"""Return arch of Docker image."""
if self.addon.legacy:
return self.sys_arch.default
return str(self.sys_arch.default)
return super().arch
@property
@@ -690,34 +685,43 @@ class DockerAddon(DockerInterface):
await build_env.is_valid()
_LOGGER.info("Starting build for %s:%s", self.image, version)
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
def build_image() -> tuple[str, str]:
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
addon_image_tag = f"{image or self.addon.image}:{version!s}"
addon_image_tag = f"{image or self.addon.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = (
f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
)
docker_version = self.sys_docker.info.version
builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
builder_name = f"addon_builder_{self.addon.slug}"
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
try:
container = await self.sys_docker.containers.get(builder_name)
await container.delete(force=True, v=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise DockerBuildError(
f"Can't clean up existing builder container: {err!s}", _LOGGER.error
) from err
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
with suppress(docker.errors.NotFound):
self.sys_docker.containers_legacy.get(builder_name).remove(
force=True, v=True
)
# Generate Docker config with registry credentials for base image if needed
docker_config_content = build_env.get_docker_config_json()
temp_dir: tempfile.TemporaryDirectory | None = None
# Generate Docker config with registry credentials for base image if needed
docker_config_path: Path | None = None
docker_config_content = build_env.get_docker_config_json()
temp_dir: tempfile.TemporaryDirectory | None = None
try:
try:
def pre_build_setup() -> tuple[
tempfile.TemporaryDirectory | None, dict[str, Any]
]:
docker_config_path: Path | None = None
temp_dir: tempfile.TemporaryDirectory | None = None
if docker_config_content:
# Create temporary directory for docker config
temp_dir = tempfile.TemporaryDirectory(
@@ -732,54 +736,54 @@ class DockerAddon(DockerInterface):
docker_config_path,
)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(
return (
temp_dir,
build_env.get_docker_args(
version, addon_image_tag, docker_config_path
),
)
finally:
# Clean up temporary directory
if temp_dir:
temp_dir.cleanup()
logs = result.output.decode("utf-8")
temp_dir, build_args = await self.sys_run_in_executor(pre_build_setup)
if result.exit_code != 0:
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
return addon_image_tag, logs
try:
addon_image_tag, log = await self.sys_run_in_executor(build_image)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, log)
# Update meta data
self._meta = await self.sys_docker.images.inspect(addon_image_tag)
except (
docker.errors.DockerException,
requests.RequestException,
aiodocker.DockerError,
) as err:
result = await self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
tag=builder_version_tag,
name=builder_name,
**build_args,
)
except DockerError as err:
raise DockerBuildError(
f"Can't build {self.image}:{version}: {err!s}", _LOGGER.error
) from err
finally:
# Clean up temporary directory
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
logs = "\n".join(result.log)
if result.exit_code != 0:
raise DockerBuildError(
f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}",
_LOGGER.error,
)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, logs)
try:
# Update meta data
self._meta = await self.sys_docker.images.inspect(addon_image_tag)
except aiodocker.DockerError as err:
raise DockerBuildError(
f"Can't get image metadata for {addon_image_tag} after build: {err!s}"
) from err
_LOGGER.info("Build %s:%s done", self.image, version)
def export_image(self, tar_file: Path) -> None:
"""Export current images into a tar file.
Must be run in executor.
"""
async def export_image(self, tar_file: Path) -> None:
"""Export current images into a tar file."""
if not self.image:
raise RuntimeError("Cannot export without image!")
self.sys_docker.export_image(self.image, self.version, tar_file)
await self.sys_docker.export_image(self.image, self.version, tar_file)
@Job(
name="docker_addon_import_image",
@@ -826,34 +830,26 @@ class DockerAddon(DockerInterface):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
def write_stdin(self, data: bytes) -> Awaitable[None]:
async def write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin."""
return self.sys_run_in_executor(self._write_stdin, data)
def _write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.
Needs to run inside executor.
"""
try:
# Load needed docker objects
container = self.sys_docker.containers_legacy.get(self.name)
# attach_socket returns SocketIO for local Docker connections (Unix socket)
socket = cast(
SocketIO, container.attach_socket(params={"stdin": 1, "stream": 1})
)
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't attach to %s stdin: %s", self.name, err)
raise DockerError() from err
container = await self.sys_docker.containers.get(self.name)
socket = container.attach(stdin=True)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't attach to {self.name} stdin: {err!s}", _LOGGER.error
) from err
try:
# Write to stdin
data += b"\n"
os.write(socket.fileno(), data)
socket.close()
except OSError as err:
_LOGGER.error("Can't write to %s stdin: %s", self.name, err)
raise DockerError() from err
await socket.write_in(data + b"\n")
await socket.close()
# Seems to raise very generic exceptions like RuntimeError or AssertionError
# So we catch all exceptions and re-raise them as DockerError
except Exception as err:
raise DockerError(
f"Can't write to {self.name} stdin: {err!s}", _LOGGER.error
) from err
@Job(
name="docker_addon_stop",
@@ -878,11 +874,12 @@ class DockerAddon(DockerInterface):
await super().stop(remove_container)
# If there is a device access issue and the container is removed, clear it
if (
remove_container
and self.addon.device_access_missing_issue in self.sys_resolution.issues
if remove_container and (
issue := self.sys_resolution.get_issue_if_present(
self.addon.device_access_missing_issue
)
):
self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)
self.sys_resolution.dismiss_issue(issue)
@Job(
name="docker_addon_hardware_events",
@@ -899,15 +896,13 @@ class DockerAddon(DockerInterface):
return
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers_legacy.get, self.name
)
except docker.errors.NotFound:
if self._hw_listener:
self.sys_bus.remove_listener(self._hw_listener)
self._hw_listener = None
return
except (docker.errors.DockerException, requests.RequestException) as err:
docker_container = await self.sys_docker.containers.get(self.name)
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
if self._hw_listener:
self.sys_bus.remove_listener(self._hw_listener)
self._hw_listener = None
return
raise DockerError(
f"Can't process Hardware Event on {self.name}: {err!s}", _LOGGER.error
) from err

View File

@@ -2,13 +2,11 @@
from __future__ import annotations
from contextlib import suppress
from dataclasses import dataclass
from enum import Enum, StrEnum
from functools import total_ordering
from enum import StrEnum
from pathlib import PurePath
import re
from typing import Any, cast
from typing import Any
from ..const import MACHINE_ID
@@ -18,6 +16,10 @@ RE_RETRYING_DOWNLOAD_STATUS = re.compile(r"Retrying in \d+ seconds?")
# Docker's default registry is docker.io
DOCKER_HUB = "docker.io"
# Docker Hub API endpoint (used for direct registry API calls)
# While docker.io is the registry identifier, registry-1.docker.io is the actual API endpoint
DOCKER_HUB_API = "registry-1.docker.io"
# Legacy Docker Hub identifier for backward compatibility
DOCKER_HUB_LEGACY = "hub.docker.com"
@@ -81,57 +83,6 @@ class PropagationMode(StrEnum):
RSLAVE = "rslave"
@total_ordering
class PullImageLayerStage(Enum):
"""Job stages for pulling an image layer.
These are a subset of the statuses in a docker pull image log. They
are the standardized ones that are the most useful to us.
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
RETRYING_DOWNLOAD = 2, "Retrying download"
DOWNLOADING = 2, "Downloading"
VERIFYING_CHECKSUM = 3, "Verifying Checksum"
DOWNLOAD_COMPLETE = 4, "Download complete"
EXTRACTING = 5, "Extracting"
PULL_COMPLETE = 6, "Pull complete"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow StrEnum style comparisons on status."""
with suppress(AttributeError):
return self.status == cast(PullImageLayerStage, value).status
return self.status == value
def __lt__(self, other: object) -> bool:
"""Order instances."""
with suppress(AttributeError):
return self.order < cast(PullImageLayerStage, other).order
return False
def __hash__(self) -> int:
"""Hash instance."""
return hash(self.status)
@classmethod
def from_status(cls, status: str) -> PullImageLayerStage | None:
"""Return stage instance from pull log status."""
for i in cls:
if i.status == status:
return i
# This one includes the number of seconds until download so it's not constant
if RE_RETRYING_DOWNLOAD_STATUS.match(status):
return cls.RETRYING_DOWNLOAD
return None
@dataclass(slots=True, frozen=True)
class MountBindOptions:
"""Bind options for docker mount."""

View File

@@ -172,8 +172,8 @@ class DockerHomeAssistant(DockerInterface):
async def run(self, *, restore_job_id: str | None = None) -> None:
"""Run Docker image."""
environment = {
"SUPERVISOR": self.sys_docker.network.supervisor,
"HASSIO": self.sys_docker.network.supervisor,
"SUPERVISOR": str(self.sys_docker.network.supervisor),
"HASSIO": str(self.sys_docker.network.supervisor),
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.sys_homeassistant.supervisor_token,
ENV_TOKEN_OLD: self.sys_homeassistant.supervisor_token,
@@ -210,12 +210,11 @@ class DockerHomeAssistant(DockerInterface):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def execute_command(self, command: str) -> CommandReturn:
async def execute_command(self, command: list[str]) -> CommandReturn:
"""Create a temporary container and run command."""
return await self.sys_run_in_executor(
self.sys_docker.run_command,
return await self.sys_docker.run_command(
self.image,
version=self.sys_homeassistant.version,
tag=str(self.sys_homeassistant.version),
command=command,
privileged=True,
init=True,
@@ -226,19 +225,19 @@ class DockerHomeAssistant(DockerInterface):
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config",
read_only=False,
).to_dict(),
),
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl",
read_only=True,
).to_dict(),
),
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target="/share",
read_only=False,
).to_dict(),
),
],
environment={ENV_TIME: self.sys_timezone},
)

View File
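The DockerHomeAssistant diff wraps the supervisor network address in `str()` because container environment values must be plain strings; an `IPv4Address` object would not serialize for the Docker API as-is. A quick illustration (the address is the conventional supervisor IP, used here only as an example):

```python
from ipaddress import IPv4Address

supervisor_ip = IPv4Address("172.30.32.2")

# Environment values passed to the Docker API must be strings;
# stringify the address object before building the mapping.
environment = {
    "SUPERVISOR": str(supervisor_ip),
    "HASSIO": str(supervisor_ip),
}
assert environment["SUPERVISOR"] == "172.30.32.2"
```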

@@ -9,17 +9,14 @@ from contextlib import suppress
from http import HTTPStatus
import logging
from time import time
from typing import Any, cast
from typing import Any
from uuid import uuid4
import aiodocker
import aiohttp
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
import requests
from ..bus import EventListener
from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRY,
@@ -35,50 +32,58 @@ from ..exceptions import (
DockerError,
DockerHubRateLimitExceeded,
DockerJobError,
DockerLogOutOfOrder,
DockerNotFound,
DockerRequestError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import (
DOCKER_HUB,
DOCKER_HUB_LEGACY,
ContainerState,
PullImageLayerStage,
RestartPolicy,
)
from .manager import CommandReturn, PullLogEntry
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, ContainerState, RestartPolicy
from .manager import CommandReturn, ExecReturn, PullLogEntry
from .monitor import DockerContainerStateEvent
from .pull_progress import ImagePullProgress
from .stats import DockerStats
_LOGGER: logging.Logger = logging.getLogger(__name__)
MAP_ARCH: dict[CpuArch, str] = {
CpuArch.ARMV7: "linux/arm/v7",
CpuArch.ARMHF: "linux/arm/v6",
CpuArch.AARCH64: "linux/arm64",
CpuArch.I386: "linux/386",
CpuArch.AMD64: "linux/amd64",
}
def _container_state_from_model(docker_container: Container) -> ContainerState:
def _restart_policy_from_model(meta_host: dict[str, Any]) -> RestartPolicy | None:
"""Get restart policy from host config model."""
if "RestartPolicy" not in meta_host:
return None
name = meta_host["RestartPolicy"].get("Name")
if not name:
return RestartPolicy.NO
if name in RestartPolicy:
return RestartPolicy(name)
_LOGGER.warning("Unknown Docker restart policy '%s', treating as no", name)
return RestartPolicy.NO
def _container_state_from_model(container_metadata: dict[str, Any]) -> ContainerState:
"""Get container state from model."""
if docker_container.status == "running":
if "Health" in docker_container.attrs["State"]:
if "State" not in container_metadata:
return ContainerState.UNKNOWN
if container_metadata["State"]["Status"] == "running":
if "Health" in container_metadata["State"]:
return (
ContainerState.HEALTHY
if docker_container.attrs["State"]["Health"]["Status"] == "healthy"
if container_metadata["State"]["Health"]["Status"] == "healthy"
else ContainerState.UNHEALTHY
)
return ContainerState.RUNNING
if docker_container.attrs["State"]["ExitCode"] > 0:
if container_metadata["State"]["ExitCode"] > 0:
return ContainerState.FAILED
return ContainerState.STOPPED
@@ -163,11 +168,7 @@ class DockerInterface(JobGroup, ABC):
@property
def restart_policy(self) -> RestartPolicy | None:
"""Return restart policy of container."""
if "RestartPolicy" not in self.meta_host:
return None
policy = self.meta_host["RestartPolicy"].get("Name")
return policy if policy else RestartPolicy.NO
return _restart_policy_from_model(self.meta_host)
@property
def security_opt(self) -> list[str]:
@@ -181,18 +182,31 @@ class DockerInterface(JobGroup, ABC):
"""Healthcheck of instance if it has one."""
return self.meta_config.get("Healthcheck")
def _get_credentials(self, image: str) -> dict:
"""Return a dictionary with credentials for docker login."""
def _get_credentials(self, image: str) -> tuple[dict, str]:
"""Return credentials for docker login and the qualified image name.
Returns a tuple of (credentials_dict, qualified_image) where the image
is prefixed with the registry when needed. This ensures aiodocker sets
the correct ServerAddress in the X-Registry-Auth header, which Docker's
containerd image store requires to match the actual registry host.
"""
credentials = {}
registry = self.sys_docker.config.get_registry_for_image(image)
qualified_image = image
if registry:
stored = self.sys_docker.config.registries[registry]
credentials[ATTR_USERNAME] = stored[ATTR_USERNAME]
credentials[ATTR_PASSWORD] = stored[ATTR_PASSWORD]
# Don't include registry for Docker Hub (both official and legacy)
if registry not in (DOCKER_HUB, DOCKER_HUB_LEGACY):
credentials[ATTR_REGISTRY] = registry
credentials[ATTR_REGISTRY] = registry
# For Docker Hub images, the image name typically lacks a registry
# prefix (e.g. "homeassistant/foo" instead of "docker.io/homeassistant/foo").
# aiodocker derives ServerAddress from image.partition("/"), so without
# the prefix it would use the namespace ("homeassistant") as ServerAddress,
# which Docker's containerd resolver rejects as a host mismatch.
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
qualified_image = f"{DOCKER_HUB}/{image}"
_LOGGER.debug(
"Logging in to %s as %s",
@@ -200,160 +214,7 @@ class DockerInterface(JobGroup, ABC):
stored[ATTR_USERNAME],
)
return credentials
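The comment block above explains why Docker Hub images are given a `docker.io/` prefix before login: aiodocker derives the ServerAddress for the X-Registry-Auth header from the text before the first "/" of the image name. A quick illustration of that failure mode, using only string handling (an approximation of aiodocker's behavior, not its code):

```python
DOCKER_HUB = "docker.io"


def server_address(image: str) -> str:
    """Approximate how aiodocker derives ServerAddress for X-Registry-Auth."""
    return image.partition("/")[0]


# Unqualified Docker Hub image: the namespace is mistaken for a host
assert server_address("homeassistant/home-assistant") == "homeassistant"

# Prefixing with the registry fixes the derived host
qualified = f"{DOCKER_HUB}/homeassistant/home-assistant"
assert server_address(qualified) == "docker.io"
```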
def _process_pull_image_log( # noqa: C901
self, install_job_id: str, reference: PullLogEntry
) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
reference.job_id != install_job_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
):
return
# Pulling FS Layer is our marker for a layer that needs to be downloaded and extracted. Otherwise it already exists and we can ignore it
job: SupervisorJob | None = None
if stage == PullImageLayerStage.PULLING_FS_LAYER:
job = self.sys_jobs.new_job(
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
parent_id=install_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
if j.parent_id == install_job_id and j.reference == reference.id:
job = j
break
# There should no longer be any real risk of logs out of order.
# However tests with very small images have shown that sometimes Docker
# skips stages in log. So keeping this one as a safety check on null job
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
# For progress calculation we assume downloading is 70% of time, extracting is 30% and others stages negligible
progress = job.progress
match stage:
case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING:
if (
reference.progress_detail
and reference.progress_detail.current
and reference.progress_detail.total
):
progress = (
reference.progress_detail.current
/ reference.progress_detail.total
)
if stage == PullImageLayerStage.DOWNLOADING:
progress = 70 * progress
else:
progress = 70 + 30 * progress
case (
PullImageLayerStage.VERIFYING_CHECKSUM
| PullImageLayerStage.DOWNLOAD_COMPLETE
):
progress = 70
case PullImageLayerStage.PULL_COMPLETE:
progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0
# No real risk of getting things out of order in current implementation
# but keeping this one in case another change to these trips us up.
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
_LOGGER.debug,
)
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on later stages as the total bytes may be useful after that stage
# Enforce range to prevent float drift error
progress = max(0, min(progress, 100))
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
and reference.progress_detail.current is not None
and reference.progress_detail.total is not None
):
job.update(
progress=progress,
stage=stage.status,
extra={
"current": reference.progress_detail.current,
"total": reference.progress_detail.total,
},
)
else:
# If we reach DOWNLOAD_COMPLETE without ever having set extra (small layers that skip
# the downloading phase), set a minimal extra so aggregate progress calculation can proceed
extra = job.extra
if stage == PullImageLayerStage.DOWNLOAD_COMPLETE and not job.extra:
extra = {"current": 1, "total": 1}
job.update(
progress=progress,
stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None if stage == PullImageLayerStage.RETRYING_DOWNLOAD else extra,
)
# Once we have received a progress update for every child job, start to set status of the main one
install_job = self.sys_jobs.get_job(install_job_id)
layer_jobs = [
job
for job in self.sys_jobs.jobs
if job.parent_id == install_job.uuid
and job.name == "Pulling container image layer"
]
# First set the total bytes to be downloaded/extracted on the main job
if not install_job.extra:
total = 0
for job in layer_jobs:
if not job.extra:
return
total += job.extra["total"]
install_job.extra = {"total": total}
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra or not job.extra.get("total"):
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING:
stage = PullImageLayerStage.DOWNLOADING
elif (
stage == PullImageLayerStage.PULL_COMPLETE
and job_stage < PullImageLayerStage.PULL_COMPLETE
):
stage = PullImageLayerStage.EXTRACTING
# Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100
# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
if stage != install_job.stage or progress >= install_job.progress + 1:
install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
return credentials, qualified_image
@Job(
name="docker_interface_install",
@@ -374,34 +235,81 @@ class DockerInterface(JobGroup, ABC):
raise ValueError("Cannot pull without an image!")
image_arch = arch or self.sys_arch.supervisor
listener: EventListener | None = None
platform = MAP_ARCH[image_arch]
pull_progress = ImagePullProgress()
current_job = self.sys_jobs.current
# Try to fetch manifest for accurate size-based progress
# This is optional - if it fails, we fall back to count-based progress
try:
manifest = await self.sys_docker.manifest_fetcher.get_manifest(
image, str(version), platform=platform
)
if manifest:
pull_progress.set_manifest(manifest)
_LOGGER.debug(
"Using manifest for progress: %d layers, %d bytes",
manifest.layer_count,
manifest.total_size,
)
except (aiohttp.ClientError, TimeoutError) as err:
_LOGGER.warning("Could not fetch manifest for progress: %s", err)
async def process_pull_event(event: PullLogEntry) -> None:
"""Process pull event and update job progress."""
if event.job_id != current_job.uuid:
return
try:
# Process event through progress tracker
pull_progress.process_event(event)
# Update job if progress changed significantly (>= 1%)
should_update, progress = pull_progress.should_update_job()
if should_update:
stage = pull_progress.get_stage()
current_job.update(progress=progress, stage=stage)
except ValueError as err:
# Catch ValueError from progress tracking (e.g. "Cannot update a job
# that is done") which can occur under rare event combinations.
# Log with context and send to Sentry. Continue the pull anyway as
# progress updates are informational only.
_LOGGER.warning(
"Received an unprocessable update for pull progress (layer: %s, status: %s, progress: %s): %s",
event.id,
event.status,
event.progress,
err,
)
await async_capture_exception(err)
except Exception as err: # pylint: disable=broad-except
# Catch any other unexpected errors in progress tracking to prevent
# pull from failing. Progress updates are informational - the pull
# itself should continue. Send to Sentry for debugging.
_LOGGER.warning(
"Error updating pull progress (layer: %s, status: %s): %s",
event.id,
event.status,
err,
)
await async_capture_exception(err)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_event
)
_LOGGER.info("Downloading docker image %s with tag %s.", image, version)
try:
# Get credentials for private registries to pass to aiodocker
credentials = self._get_credentials(image) or None
curr_job_id = self.sys_jobs.current.uuid
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(curr_job_id, reference)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
await async_capture_exception(err)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_image_log
)
credentials, pull_image_name = self._get_credentials(image)
# Pull new image, passing credentials to aiodocker
docker_image = await self.sys_docker.pull_image(
self.sys_jobs.current.uuid,
image,
current_job.uuid,
pull_image_name,
str(version),
platform=MAP_ARCH[image_arch],
auth=credentials,
platform=platform,
auth=credentials or None,
)
# Tag latest
@@ -412,18 +320,6 @@ class DockerInterface(JobGroup, ABC):
await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except docker.errors.APIError as err:
if err.status_code == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
@@ -436,54 +332,42 @@ class DockerInterface(JobGroup, ABC):
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (
docker.errors.DockerException,
requests.RequestException,
) as err:
await async_capture_exception(err)
raise DockerError(
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
) from err
finally:
if listener:
self.sys_bus.remove_listener(listener)
self.sys_bus.remove_listener(listener)
self._meta = docker_image
async def exists(self) -> bool:
"""Return True if Docker image exists in local repository."""
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
await self.sys_docker.images.inspect(f"{self.image}:{self.version!s}")
return True
return False
async def _get_container(self) -> Container | None:
async def _get_container(self) -> dict[str, Any] | None:
"""Get docker container, returns None if not found."""
try:
return await self.sys_run_in_executor(
self.sys_docker.containers_legacy.get, self.name
)
except docker.errors.NotFound:
return None
except docker.errors.DockerException as err:
container = await self.sys_docker.containers.get(self.name)
return await container.show()
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return None
raise DockerAPIError(
f"Docker API error occurred while getting container information: {err!s}"
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Error communicating with Docker to get container information: {err!s}"
) from err
async def is_running(self) -> bool:
"""Return True if Docker is running."""
if docker_container := await self._get_container():
return docker_container.status == "running"
return False
return bool(
(container_metadata := await self._get_container())
and "State" in container_metadata
and container_metadata["State"]["Running"]
)
async def current_state(self) -> ContainerState:
"""Return current state of container."""
if docker_container := await self._get_container():
return _container_state_from_model(docker_container)
if container_metadata := await self._get_container():
return _container_state_from_model(container_metadata)
return ContainerState.UNKNOWN
@Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
@@ -491,14 +375,12 @@ class DockerInterface(JobGroup, ABC):
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None:
"""Attach to running Docker container."""
with suppress(docker.errors.DockerException, requests.RequestException):
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers_legacy.get, self.name
)
self._meta = docker_container.attrs
self.sys_docker.monitor.watch_container(docker_container)
with suppress(aiodocker.DockerError):
docker_container = await self.sys_docker.containers.get(self.name)
self._meta = await docker_container.show()
self.sys_docker.monitor.watch_container(self._meta)
state = _container_state_from_model(docker_container)
state = _container_state_from_model(self._meta)
if not (
skip_state_event_if_down
and state in [ContainerState.STOPPED, ContainerState.FAILED]
@@ -507,11 +389,11 @@ class DockerInterface(JobGroup, ABC):
self.sys_bus.fire_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
DockerContainerStateEvent(
self.name, state, cast(str, docker_container.id), int(time())
self.name, state, docker_container.id, int(time())
),
)
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
if not self._meta and self.image:
self._meta = await self.sys_docker.images.inspect(
f"{self.image}:{version!s}"
@@ -563,11 +445,8 @@ class DockerInterface(JobGroup, ABC):
async def stop(self, remove_container: bool = True) -> None:
"""Stop/remove Docker container."""
with suppress(DockerNotFound):
await self.sys_run_in_executor(
self.sys_docker.stop_container,
self.name,
self.timeout,
remove_container,
await self.sys_docker.stop_container(
self.name, self.timeout, remove_container
)
@Job(
@@ -577,7 +456,7 @@ class DockerInterface(JobGroup, ABC):
)
def start(self) -> Awaitable[None]:
"""Start Docker container."""
return self.sys_run_in_executor(self.sys_docker.start_container, self.name)
return self.sys_docker.start_container(self.name)
@Job(
name="docker_interface_remove",
@@ -617,7 +496,7 @@ class DockerInterface(JobGroup, ABC):
if self.image == expected_image:
try:
image = await self.sys_docker.images.inspect(image_name)
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get {image_name} for check due to: {err!s}",
_LOGGER.error,
@@ -670,14 +549,11 @@ class DockerInterface(JobGroup, ABC):
with suppress(DockerError):
await self.stop()
async def logs(self) -> bytes:
async def logs(self) -> list[str]:
"""Return Docker logs of container."""
with suppress(DockerError):
return await self.sys_run_in_executor(
self.sys_docker.container_logs, self.name
)
return b""
return await self.sys_docker.container_logs(self.name)
return []
@Job(name="docker_interface_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
async def cleanup(
@@ -703,9 +579,7 @@ class DockerInterface(JobGroup, ABC):
)
def restart(self) -> Awaitable[None]:
"""Restart docker container."""
return self.sys_run_in_executor(
self.sys_docker.restart_container, self.name, self.timeout
)
return self.sys_docker.restart_container(self.name, self.timeout)
@Job(
name="docker_interface_execute_command",
@@ -718,22 +592,12 @@ class DockerInterface(JobGroup, ABC):
async def stats(self) -> DockerStats:
"""Read and return stats from container."""
stats = await self.sys_run_in_executor(
self.sys_docker.container_stats, self.name
)
stats = await self.sys_docker.container_stats(self.name)
return DockerStats(stats)
async def is_failed(self) -> bool:
"""Return True if Docker is failing state."""
if not (docker_container := await self._get_container()):
return False
# container is not running
if docker_container.status != "exited":
return False
# Check return value
return int(docker_container.attrs["State"]["ExitCode"]) != 0
return await self.current_state() == ContainerState.FAILED
async def get_latest_version(self) -> AwesomeVersion:
"""Return latest version of local image."""
@@ -755,10 +619,6 @@ class DockerInterface(JobGroup, ABC):
raise DockerNotFound(
f"No version found for {self.image}", _LOGGER.info
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Communication issues with dockerd on Host: {err}", _LOGGER.warning
) from err
_LOGGER.info("Found %s versions: %s", self.image, available_version)
@@ -771,8 +631,6 @@ class DockerInterface(JobGroup, ABC):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
def run_inside(self, command: str) -> Awaitable[CommandReturn]:
def run_inside(self, command: str) -> Awaitable[ExecReturn]:
"""Execute a command inside Docker container."""
return self.sys_run_in_executor(
self.sys_docker.container_run_inside, self.name, command
)
return self.sys_docker.container_run_inside(self.name, command)

File diff suppressed because it is too large


@@ -0,0 +1,352 @@
"""Docker registry manifest fetcher.
Fetches image manifests directly from container registries to get layer sizes
before pulling an image. This enables accurate size-based progress tracking.
"""
from __future__ import annotations
from dataclasses import dataclass
import logging
import re
from typing import TYPE_CHECKING
import aiohttp
from supervisor.docker.utils import get_registry_from_image
from .const import DOCKER_HUB, DOCKER_HUB_API, DOCKER_HUB_LEGACY
if TYPE_CHECKING:
from ..coresys import CoreSys
_LOGGER = logging.getLogger(__name__)
# Media types for manifest requests
MANIFEST_MEDIA_TYPES = (
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
"application/vnd.oci.image.index.v1+json",
)
@dataclass
class ImageManifest:
"""Container image manifest with layer information."""
digest: str
total_size: int
layers: dict[str, int] # digest -> size in bytes
@property
def layer_count(self) -> int:
"""Return number of layers."""
return len(self.layers)
def parse_image_reference(image: str, tag: str) -> tuple[str, str, str]:
"""Parse image reference into (registry, repository, tag).
Examples:
ghcr.io/home-assistant/home-assistant:2025.1.0
-> (ghcr.io, home-assistant/home-assistant, 2025.1.0)
homeassistant/home-assistant:latest
-> (registry-1.docker.io, homeassistant/home-assistant, latest)
alpine:3.18
-> (registry-1.docker.io, library/alpine, 3.18)
"""
# Check if image has explicit registry host
registry = get_registry_from_image(image)
if registry:
repository = image[len(registry) + 1 :] # Remove "registry/" prefix
else:
registry = DOCKER_HUB
repository = image
# Docker Hub requires "library/" prefix for official images
if "/" not in repository:
repository = f"library/{repository}"
return registry, repository, tag
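The parsing rules above can be sketched in isolation. The real function delegates registry detection to `get_registry_from_image()`; here a simplified heuristic stands in for it (hypothetical, for illustration only): the first path segment is treated as a registry host if it contains `"."` or `":"` or is `"localhost"` — the same rule Docker's own reference parser applies. `DOCKER_HUB` is assumed to hold the value shown in the docstring examples:

```python
DOCKER_HUB = "registry-1.docker.io"  # assumed constant value

def parse_image_reference(image: str, tag: str) -> tuple[str, str, str]:
    """Split an image name into (registry, repository, tag)."""
    first, _, rest = image.partition("/")
    # Simplified stand-in for get_registry_from_image()
    if rest and ("." in first or ":" in first or first == "localhost"):
        registry, repository = first, rest
    else:
        registry, repository = DOCKER_HUB, image
        # Docker Hub official images live under the "library/" namespace
        if "/" not in repository:
            repository = f"library/{repository}"
    return registry, repository, tag
```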
class RegistryManifestFetcher:
"""Fetches manifests from container registries."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize the fetcher."""
self.coresys = coresys
@property
def _session(self) -> aiohttp.ClientSession:
"""Return the websession for HTTP requests."""
return self.coresys.websession
def _get_api_endpoint(self, registry: str) -> str:
"""Get the actual API endpoint for a registry.
Translates docker.io to registry-1.docker.io for Docker Hub.
This matches exactly what Docker itself does internally - see daemon/pkg/registry/config.go:49
where Docker hardcodes DefaultRegistryHost = "registry-1.docker.io" for registry operations.
Without this, requests to https://docker.io/v2/... redirect to https://www.docker.com/
"""
return DOCKER_HUB_API if registry == DOCKER_HUB else registry
def _get_credentials(self, registry: str) -> tuple[str, str] | None:
"""Get credentials for registry from Docker config.
Returns (username, password) tuple or None if no credentials.
"""
registries = self.coresys.docker.config.registries
# Map registry hostname to config key
# Docker Hub can be stored as "hub.docker.com" in config
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
if DOCKER_HUB in registries:
creds = registries[DOCKER_HUB]
return creds.get("username"), creds.get("password")
elif registry in registries:
creds = registries[registry]
return creds.get("username"), creds.get("password")
return None
async def _get_auth_token(
self,
registry: str,
repository: str,
) -> str | None:
"""Get authentication token for registry.
Uses the WWW-Authenticate header from a 401 response to discover
the token endpoint, then requests a token with appropriate scope.
"""
api_endpoint = self._get_api_endpoint(registry)
# First, make an unauthenticated request to get WWW-Authenticate header
manifest_url = f"https://{api_endpoint}/v2/{repository}/manifests/latest"
try:
async with self._session.get(manifest_url) as resp:
if resp.status == 200:
# No auth required
return None
if resp.status != 401:
_LOGGER.warning(
"Unexpected status %d from registry %s", resp.status, registry
)
return None
www_auth = resp.headers.get("WWW-Authenticate", "")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to connect to registry %s: %s", registry, err)
return None
# Parse WWW-Authenticate: Bearer realm="...",service="...",scope="..."
if not www_auth.startswith("Bearer "):
_LOGGER.warning("Unsupported auth type from %s: %s", registry, www_auth)
return None
params = {}
for match in re.finditer(r'(\w+)="([^"]*)"', www_auth):
params[match.group(1)] = match.group(2)
realm = params.get("realm")
service = params.get("service")
if not realm:
_LOGGER.warning("No realm in WWW-Authenticate from %s", registry)
return None
# Build token request URL
token_url = f"{realm}?scope=repository:{repository}:pull"
if service:
token_url += f"&service={service}"
# Check for credentials
auth = None
credentials = self._get_credentials(registry)
if credentials:
username, password = credentials
if username and password:
auth = aiohttp.BasicAuth(username, password)
_LOGGER.debug("Using credentials for %s", registry)
try:
async with self._session.get(token_url, auth=auth) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to get token from %s: %d", realm, resp.status
)
return None
data = await resp.json()
return data.get("token") or data.get("access_token")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to get auth token: %s", err)
return None
async def _fetch_manifest(
self,
registry: str,
repository: str,
reference: str,
token: str | None,
platform: str,
) -> dict | None:
"""Fetch manifest from registry.
If the manifest is a manifest list (multi-arch), fetches the
platform-specific manifest.
"""
api_endpoint = self._get_api_endpoint(registry)
manifest_url = f"https://{api_endpoint}/v2/{repository}/manifests/{reference}"
headers = {"Accept": ", ".join(MANIFEST_MEDIA_TYPES)}
if token:
headers["Authorization"] = f"Bearer {token}"
try:
async with self._session.get(manifest_url, headers=headers) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to fetch manifest for %s/%s:%s - %d",
registry,
repository,
reference,
resp.status,
)
return None
manifest = await resp.json()
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch manifest: %s", err)
return None
media_type = manifest.get("mediaType", "")
# Check if this is a manifest list (multi-arch image)
if "list" in media_type or "index" in media_type:
manifests = manifest.get("manifests", [])
if not manifests:
_LOGGER.warning("Empty manifest list for %s/%s", registry, repository)
return None
# Platform format is "linux/amd64", "linux/arm64", etc.
parts = platform.split("/")
if len(parts) < 2:
_LOGGER.warning("Invalid platform format: %s", platform)
return None
target_os, target_arch = parts[0], parts[1]
platform_manifest = None
for m in manifests:
plat = m.get("platform", {})
if (
plat.get("os") == target_os
and plat.get("architecture") == target_arch
):
platform_manifest = m
break
if not platform_manifest:
_LOGGER.warning(
"Platform %s/%s not found in manifest list for %s/%s, "
"cannot use manifest for progress tracking",
target_os,
target_arch,
registry,
repository,
)
return None
# Fetch the platform-specific manifest
return await self._fetch_manifest(
registry,
repository,
platform_manifest["digest"],
token,
platform,
)
return manifest
async def get_manifest(
self,
image: str,
tag: str,
platform: str,
) -> ImageManifest | None:
"""Fetch manifest and extract layer sizes.
Args:
image: Image name (e.g., "ghcr.io/home-assistant/home-assistant")
tag: Image tag (e.g., "2025.1.0")
platform: Target platform (e.g., "linux/amd64")
Returns:
ImageManifest with layer sizes, or None if fetch failed.
"""
registry, repository, tag = parse_image_reference(image, tag)
_LOGGER.debug(
"Fetching manifest for %s/%s:%s (platform=%s)",
registry,
repository,
tag,
platform,
)
# Get auth token
token = await self._get_auth_token(registry, repository)
# Fetch manifest
manifest = await self._fetch_manifest(
registry, repository, tag, token, platform
)
if not manifest:
return None
# Extract layer information
layers = manifest.get("layers", [])
if not layers:
_LOGGER.warning(
"No layers in manifest for %s/%s:%s", registry, repository, tag
)
return None
layer_sizes: dict[str, int] = {}
total_size = 0
for layer in layers:
digest = layer.get("digest", "")
size = layer.get("size", 0)
if digest and size:
# Store by short digest (first 12 chars after sha256:)
short_digest = (
digest.split(":")[1][:12] if ":" in digest else digest[:12]
)
layer_sizes[short_digest] = size
total_size += size
digest = manifest.get("config", {}).get("digest", "")
_LOGGER.debug(
"Manifest for %s/%s:%s - %d layers, %d bytes total",
registry,
repository,
tag,
len(layer_sizes),
total_size,
)
return ImageManifest(
digest=digest,
total_size=total_size,
layers=layer_sizes,
)
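The layer bookkeeping in `get_manifest` can be shown in isolation: sizes are keyed by the 12-character short digest, which is the layer ID format Docker reports in pull progress events. A sketch of that aggregation:

```python
def summarize_layers(layers: list[dict]) -> tuple[dict[str, int], int]:
    """Map short layer digests to sizes and sum the total bytes."""
    layer_sizes: dict[str, int] = {}
    total = 0
    for layer in layers:
        digest = layer.get("digest", "")
        size = layer.get("size", 0)
        if digest and size:
            # First 12 chars after "sha256:" match Docker's pull-event IDs
            short = digest.split(":")[1][:12] if ":" in digest else digest[:12]
            layer_sizes[short] = size
            total += size
    return layer_sizes, total
```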


@@ -1,21 +1,25 @@
"""Supervisor docker monitor based on events."""
from contextlib import suppress
import asyncio
from dataclasses import dataclass
import logging
from threading import Thread
from typing import Any
from docker.models.containers import Container
from docker.types.daemon import CancellableStream
import aiodocker
from aiodocker.channel import ChannelSubscriber
from ..const import BusEvent
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HassioError
from ..utils.sentry import async_capture_exception, capture_exception
from .const import LABEL_MANAGED, ContainerState
_LOGGER: logging.Logger = logging.getLogger(__name__)
STOP_MONITOR_TIMEOUT = 5.0
@dataclass
@dataclass(slots=True, frozen=True)
class DockerContainerStateEvent:
"""Event for docker container state change."""
@@ -25,71 +29,157 @@ class DockerContainerStateEvent:
time: int
class DockerMonitor(CoreSysAttributes, Thread):
@dataclass(slots=True, frozen=True)
class DockerEventCallbackTask:
"""Docker event and task spawned for it."""
data: DockerContainerStateEvent
task: asyncio.Task
class DockerMonitor(CoreSysAttributes):
"""Docker monitor for supervisor."""
def __init__(self, coresys: CoreSys):
def __init__(self, coresys: CoreSys, docker_client: aiodocker.Docker):
"""Initialize Docker monitor object."""
super().__init__()
self.coresys = coresys
self._events: CancellableStream | None = None
self.docker = docker_client
self._unlabeled_managed_containers: list[str] = []
self._monitor_task: asyncio.Task | None = None
self._await_task: asyncio.Task | None = None
self._event_tasks: asyncio.Queue[DockerEventCallbackTask | None]
def watch_container(self, container: Container):
def watch_container(self, container_metadata: dict[str, Any]):
"""If container is missing the managed label, add name to list."""
if LABEL_MANAGED not in container.labels and container.name:
self._unlabeled_managed_containers += [container.name]
labels: dict[str, str] = container_metadata.get("Config", {}).get("Labels", {})
name: str | None = container_metadata.get("Name")
if name:
name = name.lstrip("/")
if LABEL_MANAGED not in labels and name:
self._unlabeled_managed_containers += [name]
async def load(self):
"""Start docker events monitor."""
self._events = self.sys_docker.events
Thread.start(self)
events = self.docker.events.subscribe()
self._event_tasks = asyncio.Queue()
self._monitor_task = self.sys_create_task(self._run(events), eager_start=True)
self._await_task = self.sys_create_task(
self._await_event_tasks(), eager_start=True
)
_LOGGER.info("Started docker events monitor")
async def unload(self):
"""Stop docker events monitor."""
self._events.close()
with suppress(RuntimeError):
self.join(timeout=5)
await self.docker.events.stop()
tasks = [task for task in (self._monitor_task, self._await_task) if task]
if tasks:
_, pending = await asyncio.wait(tasks, timeout=STOP_MONITOR_TIMEOUT)
if pending:
_LOGGER.warning(
"Timeout stopping docker events monitor, cancelling %s pending task(s)",
len(pending),
)
for task in pending:
task.cancel()
await asyncio.gather(*pending, return_exceptions=True)
self._event_tasks.shutdown(immediate=True)
self._monitor_task = None
self._await_task = None
_LOGGER.info("Stopped docker events monitor")
def run(self) -> None:
async def _run(self, events: ChannelSubscriber) -> None:
"""Monitor and process docker events."""
if not self._events:
raise RuntimeError("Monitor has not been loaded!")
try:
while True:
event: dict[str, Any] | None = await events.get()
if event is None:
break
for event in self._events:
attributes: dict[str, str] = event.get("Actor", {}).get("Attributes", {})
if event["Type"] == "container" and (
LABEL_MANAGED in attributes
or attributes.get("name") in self._unlabeled_managed_containers
):
container_state: ContainerState | None = None
action: str = event["Action"]
if action == "start":
container_state = ContainerState.RUNNING
elif action == "die":
container_state = (
ContainerState.STOPPED
if int(event["Actor"]["Attributes"]["exitCode"]) == 0
else ContainerState.FAILED
try:
attributes: dict[str, str] = event.get("Actor", {}).get(
"Attributes", {}
)
elif action == "health_status: healthy":
container_state = ContainerState.HEALTHY
elif action == "health_status: unhealthy":
container_state = ContainerState.UNHEALTHY
if container_state:
self.sys_loop.call_soon_threadsafe(
self.sys_bus.fire_event,
BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
DockerContainerStateEvent(
name=attributes["name"],
state=container_state,
id=event["Actor"]["ID"],
time=event["time"],
),
if event["Type"] == "container" and (
LABEL_MANAGED in attributes
or attributes.get("name") in self._unlabeled_managed_containers
):
container_state: ContainerState | None = None
action: str = event["Action"]
if action == "start":
container_state = ContainerState.RUNNING
elif action == "die":
container_state = (
ContainerState.STOPPED
if int(event["Actor"]["Attributes"]["exitCode"]) == 0
else ContainerState.FAILED
)
elif action == "health_status: healthy":
container_state = ContainerState.HEALTHY
elif action == "health_status: unhealthy":
container_state = ContainerState.UNHEALTHY
if container_state:
state_event = DockerContainerStateEvent(
name=attributes["name"],
state=container_state,
id=event["Actor"]["ID"],
time=event["time"],
)
tasks = self.sys_bus.fire_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, state_event
)
await asyncio.gather(
*[
self._event_tasks.put(
DockerEventCallbackTask(state_event, task)
)
for task in tasks
]
)
# Broad exception here because one bad event cannot stop the monitor
# Log what went wrong and send it to sentry but continue monitoring
except Exception as err: # pylint: disable=broad-exception-caught
await async_capture_exception(err)
_LOGGER.error(
"Could not process docker event, container state may be inaccurate: %s %s",
event,
err,
)
# Can only get to this except if an error is raised while getting events from queue
# Shouldn't really happen but any errors raised there are catastrophic and end the monitor
# Log that the monitor broke and send the details to sentry to review
except Exception as err: # pylint: disable=broad-exception-caught
await async_capture_exception(err)
_LOGGER.error(
"Cannot get events from docker, monitor has crashed. Container "
"state information will be inaccurate: %s",
err,
)
finally:
await self._event_tasks.put(None)
async def _await_event_tasks(self):
"""Await event callback tasks to clean up and capture output."""
while (event := await self._event_tasks.get()) is not None:
try:
await event.task
# Exceptions which inherit from HassioError are already handled
# We can safely ignore these, we only track the unhandled ones here
except HassioError:
pass
except Exception as err: # pylint: disable=broad-exception-caught
capture_exception(err)
_LOGGER.error(
"Error encountered while processing docker container state event: %s %s %s",
event.task.get_name(),
event.data,
err,
)


@@ -1,20 +1,18 @@
"""Internal network manager for Supervisor."""
import asyncio
from contextlib import suppress
from http import HTTPStatus
from ipaddress import IPv4Address
import logging
from typing import Self, cast
from typing import Any, Self, cast
import docker
from docker.models.networks import Network
import requests
import aiodocker
from aiodocker.networks import DockerNetwork as AiodockerNetwork
from ..const import (
ATTR_AUDIO,
ATTR_CLI,
ATTR_DNS,
ATTR_ENABLE_IPV6,
ATTR_OBSERVER,
ATTR_SUPERVISOR,
DOCKER_IPV4_NETWORK_MASK,
@@ -31,44 +29,112 @@ from ..exceptions import DockerError
_LOGGER: logging.Logger = logging.getLogger(__name__)
DOCKER_ENABLEIPV6 = "EnableIPv6"
DOCKER_NETWORK_PARAMS = {
"name": DOCKER_NETWORK,
"driver": DOCKER_NETWORK_DRIVER,
"ipam": docker.types.IPAMConfig(
pool_configs=[
docker.types.IPAMPool(subnet=str(DOCKER_IPV6_NETWORK_MASK)),
docker.types.IPAMPool(
subnet=str(DOCKER_IPV4_NETWORK_MASK),
gateway=str(DOCKER_IPV4_NETWORK_MASK[1]),
iprange=str(DOCKER_IPV4_NETWORK_RANGE),
),
]
),
ATTR_ENABLE_IPV6: True,
"options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
DOCKER_OPTIONS = "Options"
DOCKER_ENABLE_IPV6_DEFAULT = True
DOCKER_NETWORK_PARAMS = {
"Name": DOCKER_NETWORK,
"Driver": DOCKER_NETWORK_DRIVER,
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": str(DOCKER_IPV6_NETWORK_MASK),
},
{
"Subnet": str(DOCKER_IPV4_NETWORK_MASK),
"Gateway": str(DOCKER_IPV4_NETWORK_MASK[1]),
"IPRange": str(DOCKER_IPV4_NETWORK_RANGE),
},
],
},
DOCKER_ENABLEIPV6: DOCKER_ENABLE_IPV6_DEFAULT,
DOCKER_OPTIONS: {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
class DockerNetwork:
"""Internal Supervisor Network.
"""Internal Supervisor Network."""
This class is not AsyncIO safe!
"""
def __init__(self, docker_client: docker.DockerClient):
def __init__(self, docker_client: aiodocker.Docker):
"""Initialize internal Supervisor network."""
self.docker: docker.DockerClient = docker_client
self._network: Network
self.docker: aiodocker.Docker = docker_client
self._network: AiodockerNetwork | None = None
self._network_meta: dict[str, Any] | None = None
async def post_init(
self, enable_ipv6: bool | None = None, mtu: int | None = None
) -> Self:
"""Post init actions that must be done in event loop."""
self._network = await asyncio.get_running_loop().run_in_executor(
None, self._get_network, enable_ipv6, mtu
try:
self._network = network = await self.docker.networks.get(DOCKER_NETWORK)
except aiodocker.DockerError as err:
# If network was not found, create it instead. Can skip further checks since it's new
if err.status == HTTPStatus.NOT_FOUND:
await self._create_supervisor_network(enable_ipv6, mtu)
return self
raise DockerError(
f"Could not get network from Docker: {err!s}", _LOGGER.error
) from err
# Cache metadata for network
await self.reload()
current_ipv6: bool = self.network_meta.get(DOCKER_ENABLEIPV6, False)
current_mtu_str: str | None = self.network_meta.get(DOCKER_OPTIONS, {}).get(
"com.docker.network.driver.mtu"
)
current_mtu = int(current_mtu_str) if current_mtu_str is not None else None
# Check if we have explicitly provided settings that differ from what is set
changes = []
if enable_ipv6 is not None and current_ipv6 != enable_ipv6:
changes.append("IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only")
if mtu is not None and current_mtu != mtu:
changes.append(f"MTU {mtu}")
if not changes:
return self
_LOGGER.info("Migrating Supervisor network to %s", ", ".join(changes))
# System is considered running if any containers besides Supervisor and Observer are found
# A reboot is required then, we won't disconnect those containers to remake network
containers: dict[str, dict[str, Any]] = self.network_meta.get("Containers", {})
system_running = containers and any(
container.get("Name") not in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
for container in containers.values()
)
if system_running:
_LOGGER.warning(
"System appears to be running, not applying Supervisor network change. "
"Reboot your system to apply the change."
)
return self
# Disconnect all containers in the network
for c_id, meta in containers.items():
try:
await network.disconnect({"Container": c_id, "Force": True})
except aiodocker.DockerError:
_LOGGER.warning(
"Cannot apply Supervisor network changes because container %s "
"could not be disconnected. Reboot your system to apply change.",
meta.get("Name"),
)
return self
# Remove the network
try:
await network.delete()
except aiodocker.DockerError:
_LOGGER.warning(
"Cannot apply Supervisor network changes because Supervisor network "
"could not be removed and recreated. Reboot your system to apply change."
)
return self
# Recreate it with correct settings
await self._create_supervisor_network(enable_ipv6, mtu)
return self
@property
@@ -77,14 +143,23 @@ class DockerNetwork:
return DOCKER_NETWORK
@property
def network(self) -> Network:
def network(self) -> AiodockerNetwork:
"""Return docker network."""
if not self._network:
raise RuntimeError("Network not set!")
return self._network
@property
def containers(self) -> list[str]:
"""Return of connected containers from network."""
return list(self.network.attrs.get("Containers", {}).keys())
def network_meta(self) -> dict[str, Any]:
"""Return docker network metadata."""
if not self._network_meta:
raise RuntimeError("Network metadata not set!")
return self._network_meta
@property
def containers(self) -> dict[str, dict[str, Any]]:
"""Return metadata of connected containers to network."""
return self.network_meta.get("Containers", {})
@property
def gateway(self) -> IPv4Address:
@@ -116,94 +191,37 @@ class DockerNetwork:
"""Return observer of the network."""
return DOCKER_IPV4_NETWORK_MASK[6]
def _get_network(
async def _create_supervisor_network(
self, enable_ipv6: bool | None = None, mtu: int | None = None
) -> Network:
"""Get supervisor network."""
try:
if network := self.docker.networks.get(DOCKER_NETWORK):
current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
current_mtu = network.attrs.get("Options", {}).get(
"com.docker.network.driver.mtu"
)
current_mtu = int(current_mtu) if current_mtu else None
# If the network exists and we don't have explicit settings,
# simply stick with what we have.
if (enable_ipv6 is None or current_ipv6 == enable_ipv6) and (
mtu is None or current_mtu == mtu
):
return network
# We have explicit settings which differ from the current state.
changes = []
if enable_ipv6 is not None and current_ipv6 != enable_ipv6:
changes.append(
"IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only"
)
if mtu is not None and current_mtu != mtu:
changes.append(f"MTU {mtu}")
if changes:
_LOGGER.info(
"Migrating Supervisor network to %s", ", ".join(changes)
)
if (containers := network.containers) and (
containers_all := all(
container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
for container in containers
)
):
for container in containers:
with suppress(
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
):
network.disconnect(container, force=True)
if not containers or containers_all:
try:
network.remove()
except docker.errors.APIError:
_LOGGER.warning("Failed to remove existing Supervisor network")
return network
else:
_LOGGER.warning(
"System appears to be running, "
"not applying Supervisor network change. "
"Reboot your system to apply the change."
)
return network
except docker.errors.NotFound:
_LOGGER.info("Can't find Supervisor network, creating a new network")
) -> None:
"""Create supervisor network."""
network_params = DOCKER_NETWORK_PARAMS.copy()
network_params[ATTR_ENABLE_IPV6] = (
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
)
if enable_ipv6 is not None:
network_params[DOCKER_ENABLEIPV6] = enable_ipv6
# Copy options and add MTU if specified
if mtu is not None:
options = cast(dict[str, str], network_params["options"]).copy()
options = cast(dict[str, str], network_params[DOCKER_OPTIONS]).copy()
options["com.docker.network.driver.mtu"] = str(mtu)
network_params["options"] = options
network_params[DOCKER_OPTIONS] = options
try:
self._network = self.docker.networks.create(**network_params) # type: ignore
except docker.errors.APIError as err:
self._network = await self.docker.networks.create(network_params)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't create Supervisor network: {err}", _LOGGER.error
) from err
await self.reload()
with suppress(DockerError):
self.attach_container_by_name(
await self.attach_container_by_name(
SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
)
with suppress(DockerError):
self.attach_container_by_name(
await self.attach_container_by_name(
OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
)
@@ -213,105 +231,90 @@ class DockerNetwork:
(ATTR_AUDIO, self.audio),
):
with suppress(DockerError):
self.attach_container_by_name(f"{DOCKER_PREFIX}_{name}", [name], ip)
await self.attach_container_by_name(
f"{DOCKER_PREFIX}_{name}", [name], ip
)
return self._network
async def reload(self) -> None:
"""Get and cache metadata for supervisor network."""
try:
self._network_meta = await self.network.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get network metadata from Docker: {err!s}", _LOGGER.error
) from err
def attach_container(
async def attach_container(
self,
container_id: str,
name: str,
name: str | None,
alias: list[str] | None = None,
ipv4: IPv4Address | None = None,
) -> None:
"""Attach container to Supervisor network."""
# Reload Network information
with suppress(DockerError):
    await self.reload()
# Check stale Network
if name and name in (val.get("Name") for val in self.containers.values()):
    await self.stale_cleanup(name)
# Attach Network
endpoint_config: dict[str, Any] = {}
if alias:
endpoint_config["Aliases"] = alias
if ipv4:
endpoint_config["IPAMConfig"] = {"IPv4Address": str(ipv4)}
try:
await self.network.connect(
{
"Container": container_id,
"EndpointConfig": endpoint_config,
}
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't connect {name or container_id} to Supervisor network: {err}",
_LOGGER.error,
) from err
async def attach_container_by_name(
    self, name: str, alias: list[str] | None = None, ipv4: IPv4Address | None = None
) -> None:
"""Attach container to Supervisor network."""
try:
    container = await self.docker.containers.get(name)
except aiodocker.DockerError as err:
raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err
if container.id not in self.containers:
    await self.attach_container(container.id, name, alias, ipv4)
async def detach_default_bridge(
    self, container_id: str, name: str | None = None
) -> None:
    """Detach default Docker bridge."""
try:
    default_network = await self.docker.networks.get(DOCKER_NETWORK_DRIVER)
    await default_network.disconnect({"Container": container_id})
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return
raise DockerError(
f"Can't disconnect {name or container_id} from default network: {err}",
_LOGGER.warning,
) from err
async def stale_cleanup(self, name: str) -> None:
"""Force remove a container from Network.
Fix: https://github.com/moby/moby/issues/23302
"""
try:
    await self.network.disconnect({"Container": name, "Force": True})
except aiodocker.DockerError as err:
raise DockerError(
f"Can't disconnect {name} from Supervisor network: {err}",
_LOGGER.warning,


@@ -2,7 +2,7 @@
import logging
from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME, OBSERVER_PORT
from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError
from ..jobs.const import JobConcurrency
@@ -48,10 +48,10 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
environment={
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.sys_plugins.observer.supervisor_token,
ENV_NETWORK_MASK: str(DOCKER_IPV4_NETWORK_MASK),
},
mounts=[MOUNT_DOCKER],
ports={"80/tcp": OBSERVER_PORT},
oom_score_adj=-300,
)
_LOGGER.info(


@@ -0,0 +1,368 @@
"""Image pull progress tracking."""
from __future__ import annotations
from contextlib import suppress
from dataclasses import dataclass, field
from enum import Enum
import logging
from typing import TYPE_CHECKING, cast
if TYPE_CHECKING:
from .manager import PullLogEntry
from .manifest import ImageManifest
_LOGGER = logging.getLogger(__name__)
# Progress weight distribution: 70% downloading, 30% extraction
DOWNLOAD_WEIGHT = 70.0
EXTRACT_WEIGHT = 30.0
class LayerPullStatus(Enum):
"""Status values for pulling an image layer.
These are a subset of the statuses in a docker pull image log.
The order field allows comparing which stage is further along.
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
WAITING = 1, "Waiting"
RETRYING = 2, "Retrying" # Matches "Retrying in N seconds"
DOWNLOADING = 3, "Downloading"
VERIFYING_CHECKSUM = 4, "Verifying Checksum"
DOWNLOAD_COMPLETE = 5, "Download complete"
EXTRACTING = 6, "Extracting"
PULL_COMPLETE = 7, "Pull complete"
ALREADY_EXISTS = 7, "Already exists"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow string comparisons on status."""
with suppress(AttributeError):
return self.status == cast(LayerPullStatus, value).status
return self.status == value
def __hash__(self) -> int:
"""Return hash based on status string."""
return hash(self.status)
def __lt__(self, other: object) -> bool:
"""Order instances by stage progression."""
with suppress(AttributeError):
return self.order < cast(LayerPullStatus, other).order
return False
@classmethod
def from_status(cls, status: str) -> LayerPullStatus | None:
"""Get enum from status string, or None if not recognized."""
# Handle "Retrying in N seconds" pattern
if status.startswith("Retrying in "):
return cls.RETRYING
for member in cls:
if member.status == status:
return member
return None
@dataclass
class LayerProgress:
"""Track progress of a single layer."""
layer_id: str
total_size: int = 0 # Size in bytes (from downloading, reused for extraction)
download_current: int = 0
extract_current: int = 0 # Extraction progress in bytes (overlay2 only)
download_complete: bool = False
extract_complete: bool = False
already_exists: bool = False # Layer was already locally available
def calculate_progress(self) -> float:
"""Calculate layer progress 0-100.
Progress is weighted: 70% download, 30% extraction.
For overlay2, we have byte-based extraction progress.
For containerd, extraction jumps from 70% to 100% on completion.
"""
if self.already_exists or self.extract_complete:
return 100.0
if self.download_complete:
# Check if we have extraction progress (overlay2)
if self.extract_current > 0 and self.total_size > 0:
extract_pct = min(1.0, self.extract_current / self.total_size)
return DOWNLOAD_WEIGHT + (extract_pct * EXTRACT_WEIGHT)
# No extraction progress yet - return 70%
return DOWNLOAD_WEIGHT
if self.total_size > 0:
download_pct = min(1.0, self.download_current / self.total_size)
return download_pct * DOWNLOAD_WEIGHT
return 0.0
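The per-layer weighting above can be sketched as a pure function: the download phase maps onto the first 70 points, byte-based extraction (overlay2) onto the remaining 30, and containerd layers jump straight from 70 to 100 on completion. A minimal standalone version:

```python
DOWNLOAD_WEIGHT = 70.0
EXTRACT_WEIGHT = 30.0


def layer_progress(
    total: int,
    downloaded: int,
    extracted: int,
    download_done: bool,
    extract_done: bool,
) -> float:
    """Single-layer progress 0-100: 70% download phase, 30% extraction."""
    if extract_done:
        return 100.0
    if download_done:
        if extracted > 0 and total > 0:
            return DOWNLOAD_WEIGHT + min(1.0, extracted / total) * EXTRACT_WEIGHT
        return DOWNLOAD_WEIGHT  # no byte-based extraction info (containerd)
    if total > 0:
        return min(1.0, downloaded / total) * DOWNLOAD_WEIGHT
    return 0.0
```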
@dataclass
class ImagePullProgress:
"""Track overall progress of pulling an image.
When manifest layer sizes are provided, uses size-weighted progress where
each layer contributes proportionally to its size. This gives accurate
progress based on actual bytes to download.
When manifest is not available, falls back to count-based progress where
each layer contributes equally.
Layers that already exist locally are excluded from the progress calculation.
"""
layers: dict[str, LayerProgress] = field(default_factory=dict)
_last_reported_progress: float = field(default=0.0, repr=False)
_seen_downloading: bool = field(default=False, repr=False)
_manifest_layer_sizes: dict[str, int] = field(default_factory=dict, repr=False)
_total_manifest_size: int = field(default=0, repr=False)
def set_manifest(self, manifest: ImageManifest) -> None:
"""Set manifest layer sizes for accurate size-based progress.
Should be called before processing pull events.
"""
self._manifest_layer_sizes = dict(manifest.layers)
self._total_manifest_size = manifest.total_size
_LOGGER.debug(
"Manifest set: %d layers, %d bytes total",
len(self._manifest_layer_sizes),
self._total_manifest_size,
)
def get_or_create_layer(self, layer_id: str) -> LayerProgress:
"""Get existing layer or create new one."""
if layer_id not in self.layers:
# If we have manifest sizes, pre-populate the layer's total_size
manifest_size = self._manifest_layer_sizes.get(layer_id, 0)
self.layers[layer_id] = LayerProgress(
layer_id=layer_id, total_size=manifest_size
)
return self.layers[layer_id]
def process_event(self, entry: PullLogEntry) -> None:
"""Process a pull log event and update layer state."""
# Skip events without layer ID or status
if not entry.id or not entry.status:
return
# Skip metadata events that aren't layer-specific
# "Pulling from X" has id=tag but isn't a layer
if entry.status.startswith("Pulling from "):
return
# Parse status to enum (returns None for unrecognized statuses)
status = LayerPullStatus.from_status(entry.status)
if status is None:
return
layer = self.get_or_create_layer(entry.id)
# Handle "Already exists" - layer is locally available
if status is LayerPullStatus.ALREADY_EXISTS:
layer.already_exists = True
layer.download_complete = True
layer.extract_complete = True
return
# Handle "Pulling fs layer" / "Waiting" - layer is being tracked
if status in (LayerPullStatus.PULLING_FS_LAYER, LayerPullStatus.WAITING):
return
# Handle "Downloading" - update download progress
if status is LayerPullStatus.DOWNLOADING:
# Mark that we've seen downloading - now we know layer count is complete
self._seen_downloading = True
if entry.progress_detail and entry.progress_detail.current is not None:
layer.download_current = entry.progress_detail.current
if entry.progress_detail and entry.progress_detail.total is not None:
# Only set total_size if not already set or if this is larger
# (handles case where total changes during download)
layer.total_size = max(layer.total_size, entry.progress_detail.total)
return
# Handle "Verifying Checksum" - download is essentially complete
if status is LayerPullStatus.VERIFYING_CHECKSUM:
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Download complete" - download phase done
if status is LayerPullStatus.DOWNLOAD_COMPLETE:
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
elif layer.total_size == 0:
# Small layer that skipped downloading phase
# Set minimal size so it doesn't distort weighted average
layer.total_size = 1
layer.download_current = 1
return
# Handle "Extracting" - extraction in progress
if status is LayerPullStatus.EXTRACTING:
# For overlay2: progressDetail has {current, total} in bytes
# For containerd: progressDetail has {current, units: "s"} (time elapsed)
# We can only use byte-based progress (overlay2)
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
# Check if this is byte-based extraction progress (overlay2)
# Overlay2 has {current, total} in bytes, no units field
# Containerd has {current, units: "s"} which is useless for progress
if (
entry.progress_detail
and entry.progress_detail.current is not None
and entry.progress_detail.units is None
):
# Use layer's total_size from downloading phase (doesn't change)
layer.extract_current = entry.progress_detail.current
_LOGGER.debug(
"Layer %s extracting: %d/%d (%.1f%%)",
layer.layer_id,
layer.extract_current,
layer.total_size,
(layer.extract_current / layer.total_size * 100)
if layer.total_size > 0
else 0,
)
return
# Handle "Pull complete" - layer is fully done
if status is LayerPullStatus.PULL_COMPLETE:
layer.download_complete = True
layer.extract_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Retrying in N seconds" - reset download progress
if status is LayerPullStatus.RETRYING:
layer.download_current = 0
layer.download_complete = False
return
def calculate_progress(self) -> float:
"""Calculate overall progress 0-100.
When manifest layer sizes are available, uses size-weighted progress
where each layer contributes proportionally to its size.
When manifest is not available, falls back to count-based progress
where each layer contributes equally.
Layers that already exist locally are excluded from the calculation.
Returns 0 until we've seen the first "Downloading" event, since Docker
reports "Already exists" and "Pulling fs layer" events before we know
the complete layer count.
"""
# Don't report progress until we've seen downloading start
# This ensures we know the full layer count before calculating progress
if not self._seen_downloading or not self.layers:
return 0.0
# Only count layers that need pulling (exclude already_exists)
layers_to_pull = [
layer for layer in self.layers.values() if not layer.already_exists
]
if not layers_to_pull:
# All layers already exist, nothing to download
return 100.0
# Use size-weighted progress if manifest sizes are available
if self._manifest_layer_sizes:
return min(100, self._calculate_size_weighted_progress(layers_to_pull))
# Fall back to count-based progress
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return min(100, total_progress / len(layers_to_pull))
def _calculate_size_weighted_progress(
self, layers_to_pull: list[LayerProgress]
) -> float:
"""Calculate size-weighted progress.
Each layer contributes to progress proportionally to its size.
Progress = sum(layer_progress * layer_size) / total_size
"""
# Calculate total size of layers that need pulling
total_size = sum(layer.total_size for layer in layers_to_pull)
if total_size == 0:
# No size info available, fall back to count-based
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return total_progress / len(layers_to_pull)
# Weight each layer's progress by its size
weighted_progress = 0.0
for layer in layers_to_pull:
if layer.total_size > 0:
layer_weight = layer.total_size / total_size
weighted_progress += layer.calculate_progress() * layer_weight
return weighted_progress
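The size-weighted aggregate is `sum(progress_i * size_i) / total_size`, falling back to a plain average when no sizes are known. In isolation (function name is illustrative):

```python
def overall_progress(layers: list[tuple[float, int]]) -> float:
    """Size-weighted aggregate of (progress 0-100, size in bytes) per layer.

    Layers with larger sizes contribute proportionally more; with no size
    info at all, every layer counts equally.
    """
    total = sum(size for _, size in layers)
    if total == 0:
        # No size info available: fall back to count-based average
        return sum(p for p, _ in layers) / len(layers) if layers else 0.0
    return sum(p * size for p, size in layers) / total
```

So a 900-byte layer that is done and a 100-byte layer that has not started report 90% overall, not 50%.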
def get_stage(self) -> str | None:
"""Get current stage based on layer states."""
if not self.layers:
return None
# Check if any layer is still downloading
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.download_complete:
return "Downloading"
# All downloads complete, check if extracting
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.extract_complete:
return "Extracting"
# All done
return "Pull complete"
def should_update_job(self, threshold: float = 1.0) -> tuple[bool, float]:
"""Check if job should be updated based on progress change.
Returns (should_update, current_progress).
Updates are triggered when progress changes by at least threshold%.
Progress is guaranteed to only increase (monotonic).
"""
current_progress = self.calculate_progress()
# Ensure monotonic progress - never report a decrease
# This can happen when new layers get size info and change the weighted average
if current_progress < self._last_reported_progress:
_LOGGER.debug(
"Progress decreased from %.1f%% to %.1f%%, keeping last reported",
self._last_reported_progress,
current_progress,
)
return False, self._last_reported_progress
if current_progress >= self._last_reported_progress + threshold:
_LOGGER.debug(
"Progress update: %.1f%% -> %.1f%% (delta: %.1f%%)",
self._last_reported_progress,
current_progress,
current_progress - self._last_reported_progress,
)
self._last_reported_progress = current_progress
return True, current_progress
return False, self._last_reported_progress
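The monotonic/threshold gating in `should_update_job` can be isolated into a small sketch: decreases (which can happen when late size info shifts the weighted average) are swallowed, and increases only surface once they cross the threshold. Class name here is illustrative:

```python
class MonotonicReporter:
    """Report progress only when it rises by at least `threshold` percent."""

    def __init__(self, threshold: float = 1.0) -> None:
        self.threshold = threshold
        self.last = 0.0

    def update(self, progress: float) -> tuple[bool, float]:
        """Return (should_report, value); never report a decrease."""
        if progress < self.last:
            return False, self.last
        if progress >= self.last + self.threshold:
            self.last = progress
            return True, progress
        return False, self.last
```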


@@ -1,15 +1,12 @@
"""Init file for Supervisor Docker object."""
import asyncio
from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import os
import aiodocker
from awesomeversion.awesomeversion import AwesomeVersion
from ..exceptions import DockerError
from ..jobs.const import JobConcurrency
@@ -53,13 +50,13 @@ class DockerSupervisor(DockerInterface):
) -> None:
"""Attach to running docker container."""
try:
    docker_container = await self.sys_docker.containers.get(self.name)
    self._meta = await docker_container.show()
except aiodocker.DockerError as err:
    raise DockerError(
        f"Could not get supervisor container metadata: {err!s}"
    ) from err
_LOGGER.info(
"Attaching to Supervisor %s with version %s",
self.image,
@@ -72,8 +69,7 @@ class DockerSupervisor(DockerInterface):
# Attach to network
_LOGGER.info("Connecting Supervisor to hassio-network")
await self.sys_docker.network.attach_container(
docker_container.id,
self.name,
alias=["supervisor"],
@@ -81,32 +77,32 @@ class DockerSupervisor(DockerInterface):
)
@Job(name="docker_supervisor_retag", concurrency=JobConcurrency.GROUP_QUEUE)
async def retag(self) -> None:
    """Retag latest image to version."""
    try:
        docker_container = await self.sys_docker.containers.get(self.name)
        container_metadata = await docker_container.show()
    except aiodocker.DockerError as err:
raise DockerError(
f"Could not get Supervisor container for retag: {err}", _LOGGER.error
) from err
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/containers.py#L41
metadata_image = container_metadata.get("ImageID", container_metadata["Image"])
if not self.image or not metadata_image:
raise DockerError(
"Could not locate image from container metadata for retag",
_LOGGER.error,
)
try:
await asyncio.gather(
self.sys_docker.images.tag(
metadata_image, self.image, tag=str(self.version)
),
self.sys_docker.images.tag(metadata_image, self.image, tag="latest"),
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't retag Supervisor version: {err}", _LOGGER.error
) from err
@@ -118,28 +114,38 @@ class DockerSupervisor(DockerInterface):
async def update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version."""
try:
    docker_container = await self.sys_docker.containers.get(self.name)
    container_metadata = await docker_container.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Can't get container to fix start tag: {err}", _LOGGER.error
) from err
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/containers.py#L41
metadata_image = container_metadata.get("ImageID", container_metadata["Image"])
if not metadata_image:
raise DockerError(
"Cannot locate image from container metadata to fix start tag",
_LOGGER.error,
)
try:
container_image, new_image = await asyncio.gather(
self.sys_docker.images.inspect(metadata_image),
self.sys_docker.images.inspect(f"{image}:{version!s}"),
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't get image metadata to fix start tag: {err}", _LOGGER.error
) from err
try:
# Find start tag
for tag in container_image["RepoTags"]:
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/images.py#L47
if tag == "<none>:<none>":
continue
start_image = tag.partition(":")[0]
start_tag = tag.partition(":")[2] or "latest"
@@ -148,12 +154,12 @@ class DockerSupervisor(DockerInterface):
continue
await asyncio.gather(
self.sys_docker.images.tag(
new_image["Id"], start_image, tag=start_tag
),
self.sys_docker.images.tag(
new_image["Id"], start_image, tag=version.string
),
)
except aiodocker.DockerError as err:
raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err


@@ -3,6 +3,8 @@
from collections.abc import Callable, Mapping
from typing import Any
from .const import OBSERVER_PORT
MESSAGE_CHECK_SUPERVISOR_LOGS = (
"Check supervisor logs for details (check with '{logs_command}')"
)
@@ -44,7 +46,7 @@ class HassioNotSupportedError(HassioError):
# API
class APIError(HassioError):
"""API errors."""
status = 400
@@ -289,6 +291,18 @@ class ObserverJobError(ObserverError, PluginJobError):
"""Raise on job error with observer plugin."""
class ObserverPortConflict(ObserverError, APIError):
"""Raise if observer cannot start due to a port conflict."""
error_key = "observer_port_conflict"
message_template = "Cannot start {observer} because port {port} is already in use"
extra_fields = {"observer": "observer", "port": OBSERVER_PORT}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(None, logger)
# Multicast
@@ -393,6 +407,20 @@ class AddonNotRunningError(AddonsError, APIError):
super().__init__(None, logger)
class AddonPortConflict(AddonsError, APIError):
"""Raise if addon cannot start due to a port conflict."""
error_key = "addon_port_conflict"
message_template = "Cannot start addon {name} because port {port} is already in use"
def __init__(
self, logger: Callable[..., None] | None = None, *, name: str, port: int
) -> None:
"""Raise & log."""
self.extra_fields = {"name": name, "port": port}
super().__init__(None, logger)
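The port-conflict exceptions render their user-facing text by filling `message_template` from `extra_fields`. The mechanism, reduced to plain string formatting (helper name is illustrative, not the repo's API):

```python
def render_message(template: str, extra_fields: dict[str, object]) -> str:
    """Fill an error message template from its extra_fields mapping."""
    return template.format(**extra_fields)
```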
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
@@ -592,18 +620,6 @@ class AuthListUsersError(AuthError, APIUnknownSupervisorError):
message_template = "Can't request listing users on Home Assistant"
class AuthListUsersNoneResponseError(AuthError, APIInternalServerError):
"""Auth error if listing users returned invalid None response."""
error_key = "auth_list_users_none_response_error"
message_template = "Home Assistant returned invalid response of `{none}` instead of a list of users. Check Home Assistant logs for details (check with `{logs_command}`)"
extra_fields = {"none": "None", "logs_command": "ha core logs"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
@@ -843,10 +859,6 @@ class DockerAPIError(DockerError):
"""Docker API error."""
class DockerRequestError(DockerError):
"""Dockerd OS issues."""
class DockerTrustError(DockerError):
"""Raise if images are not trusted."""
@@ -855,10 +867,6 @@ class DockerNotFound(DockerError):
"""Docker object doesn't exist."""
class DockerLogOutOfOrder(DockerError):
"""Raise when log from docker action was out of order."""
class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space."""
@@ -870,6 +878,22 @@ class DockerNoSpaceOnDevice(DockerError):
super().__init__(None, logger=logger)
class DockerContainerPortConflict(DockerError, APIError):
"""Raise if docker cannot start a container due to a port conflict."""
error_key = "docker_container_port_conflict"
message_template = (
"Cannot start container {name} because port {port} is already in use"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, name: str, port: int
) -> None:
"""Raise & log."""
self.extra_fields = {"name": name, "port": port}
super().__init__(None, logger)
class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise for docker hub rate limit exceeded error."""
@@ -936,6 +960,44 @@ class ResolutionFixupJobError(ResolutionFixupError, JobException):
"""Raise on job error."""
class ResolutionCheckNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if check does not exist."""
error_key = "resolution_check_not_found_error"
message_template = "Check '{check}' does not exist"
def __init__(
self, logger: Callable[..., None] | None = None, *, check: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"check": check}
super().__init__(None, logger)
class ResolutionIssueNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if issue does not exist."""
error_key = "resolution_issue_not_found_error"
message_template = "Issue {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
class ResolutionSuggestionNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if suggestion does not exist."""
error_key = "resolution_suggestion_not_found_error"
message_template = "Suggestion {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
# Store


@@ -13,7 +13,6 @@ from ..exceptions import (
DBusObjectError,
HardwareNotFound,
)
from ..resolution.const import UnhealthyReason
from .const import UdevSubsystem
from .data import Device
@@ -114,10 +113,8 @@ class HwDisk(CoreSysAttributes):
_LOGGER.warning("File not found: %s", child.as_posix())
continue
except OSError as err:
self.sys_resolution.check_oserror(err)
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
break
continue


@@ -135,6 +135,7 @@ class HomeAssistantAPI(CoreSysAttributes):
"""
url = f"{self.sys_homeassistant.api_url}/{path}"
headers = headers or {}
client_timeout = aiohttp.ClientTimeout(total=timeout)
# Passthrough content type
if content_type is not None:
@@ -144,10 +145,11 @@ class HomeAssistantAPI(CoreSysAttributes):
try:
await self.ensure_access_token()
headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}"
async with self.sys_websession.request(
method,
url,
data=data,
timeout=client_timeout,
json=json,
headers=headers,
params=params,
@@ -175,7 +177,10 @@ class HomeAssistantAPI(CoreSysAttributes):
async def get_config(self) -> dict[str, Any]:
"""Return Home Assistant config."""
config = await self._get_json("api/config")
if config is None or not isinstance(config, dict):
raise HomeAssistantAPIError("No config received from Home Assistant API")
return config
async def get_core_state(self) -> dict[str, Any]:
"""Return Home Assistant core state."""
@@ -219,3 +224,32 @@ class HomeAssistantAPI(CoreSysAttributes):
if state := await self.get_api_state():
return state.core_state == "RUNNING" or state.offline_db_migration
return False
async def check_frontend_available(self) -> bool:
"""Check if the frontend is accessible by fetching the root path.
Caller should make sure that Home Assistant Core is running before
calling this method.
Returns:
True if the frontend responds successfully, False otherwise.
"""
try:
async with self.make_request("get", "", timeout=30) as resp:
# Frontend should return HTML content
if resp.status == 200:
content_type = resp.headers.get(hdrs.CONTENT_TYPE, "")
if "text/html" in content_type:
_LOGGER.debug("Frontend is accessible and serving HTML")
return True
_LOGGER.warning(
"Frontend responded but with unexpected content type: %s",
content_type,
)
return False
_LOGGER.warning("Frontend returned status %s", resp.status)
return False
except HomeAssistantAPIError as err:
_LOGGER.debug("Cannot reach frontend: %s", err)
return False
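The frontend check boils down to one predicate: a 200 response whose content type contains `text/html`. Extracted as a testable sketch (function name is illustrative):

```python
def frontend_ok(status: int, content_type: str) -> bool:
    """Frontend is considered up when the root path returns HTML with 200."""
    return status == 200 and "text/html" in content_type
```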


@@ -13,7 +13,10 @@ from typing import Final
from awesomeversion import AwesomeVersion
from supervisor.utils import remove_colors
from ..bus import EventListener
from ..const import ATTR_HOMEASSISTANT, BusEvent, CoreState
from ..coresys import CoreSys
from ..docker.const import ContainerState
from ..docker.homeassistant import DockerHomeAssistant
@@ -33,7 +36,6 @@ from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrot
from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType
from ..utils import convert_to_ascii
from ..utils.sentry import async_capture_exception
from .const import (
LANDINGPAGE,
@@ -74,6 +76,7 @@ class HomeAssistantCore(JobGroup):
super().__init__(coresys, JOB_GROUP_HOME_ASSISTANT_CORE)
self.instance: DockerHomeAssistant = DockerHomeAssistant(coresys)
self._error_state: bool = False
self._watchdog_listener: EventListener | None = None
@property
def error_state(self) -> bool:
@@ -82,9 +85,12 @@ class HomeAssistantCore(JobGroup):
async def load(self) -> None:
"""Prepare Home Assistant object."""
self._watchdog_listener = self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self.watchdog_container
)
self.sys_bus.register_event(
BusEvent.SUPERVISOR_STATE_CHANGE, self._supervisor_state_changed
)
try:
# Evaluate Version if we lost this information
@@ -176,28 +182,53 @@ class HomeAssistantCore(JobGroup):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def install(self) -> None:
"""Install Home Assistant Core."""
_LOGGER.info("Home Assistant setup")
stop_progress_log = asyncio.Event()

async def _periodic_progress_log() -> None:
    """Log installation progress periodically for user visibility."""
    while not stop_progress_log.is_set():
        try:
            await asyncio.wait_for(stop_progress_log.wait(), timeout=15)
        except TimeoutError:
            if (job := self.instance.active_job) and job.progress:
                _LOGGER.info(
                    "Downloading Home Assistant Core image, %d%%",
                    int(job.progress),
                )
            else:
                _LOGGER.info("Home Assistant Core installation in progress")
progress_task = self.sys_create_task(_periodic_progress_log())
try:
while True:
# read homeassistant tag and install it
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
if to_version := self.sys_homeassistant.latest_version:
try:
await self.instance.update(
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = (
self.instance.version or to_version
)
break
except (DockerError, JobException):
pass
except Exception as err: # pylint: disable=broad-except
await async_capture_exception(err)
_LOGGER.warning(
"Error on Home Assistant installation. Retrying in 30sec"
)
await asyncio.sleep(30)
finally:
stop_progress_log.set()
await progress_task
_LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
@@ -303,12 +334,18 @@ class HomeAssistantCore(JobGroup):
except HomeAssistantError:
# The API stopped responding between the up checks and now
self._error_state = True
data = None
return
# Verify that the frontend is loaded
if data and "frontend" not in data.get("components", []):
if "frontend" not in data.get("components", []):
_LOGGER.error("API responds but frontend is not loaded")
self._error_state = True
# Check that the frontend is actually accessible
elif not await self.sys_homeassistant.api.check_frontend_available():
_LOGGER.error(
"Frontend component loaded but frontend is not accessible"
)
self._error_state = True
else:
return
@@ -321,12 +358,12 @@ class HomeAssistantCore(JobGroup):
# Make a copy of the current log file if it exists
logfile = self.sys_config.path_homeassistant / "home-assistant.log"
if logfile.exists():
if await self.sys_run_in_executor(logfile.exists):
rollback_log = (
self.sys_config.path_homeassistant / "home-assistant-rollback.log"
)
shutil.copy(logfile, rollback_log)
await self.sys_run_in_executor(shutil.copy, logfile, rollback_log)
_LOGGER.info(
"A backup of the logfile is stored in /config/home-assistant-rollback.log"
)
@@ -421,13 +458,6 @@ class HomeAssistantCore(JobGroup):
await self.instance.stop()
await self.start()
def logs(self) -> Awaitable[bytes]:
"""Get HomeAssistant docker logs.
Return a coroutine.
"""
return self.instance.logs()
async def stats(self) -> DockerStats:
"""Return stats of Home Assistant."""
try:
@@ -458,7 +488,15 @@ class HomeAssistantCore(JobGroup):
"""Run Home Assistant config check."""
try:
result = await self.instance.execute_command(
"python3 -m homeassistant -c /config --script check_config"
[
"python3",
"-m",
"homeassistant",
"-c",
"/config",
"--script",
"check_config",
]
)
except DockerError as err:
raise HomeAssistantError() from err
@@ -468,7 +506,7 @@ class HomeAssistantCore(JobGroup):
raise HomeAssistantError("Fatal error on config check!", _LOGGER.error)
# Convert output
log = convert_to_ascii(result.output)
log = remove_colors("\n".join(result.log))
_LOGGER.debug("Result config check: %s", log.strip())
# Parse output
@@ -550,6 +588,16 @@ class HomeAssistantCore(JobGroup):
if event.state in [ContainerState.FAILED, ContainerState.UNHEALTHY]:
await self._restart_after_problem(event.state)
async def _supervisor_state_changed(self, state: CoreState) -> None:
"""Handle supervisor state changes to disable watchdog during shutdown."""
if state in (CoreState.SHUTDOWN, CoreState.STOPPING, CoreState.CLOSE):
if self._watchdog_listener:
_LOGGER.debug(
"Unregistering Home Assistant watchdog due to system shutdown"
)
self.sys_bus.remove_listener(self._watchdog_listener)
self._watchdog_listener = None
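The change above keeps the `EventListener` handle returned by `register_event` so the watchdog can be unregistered cleanly on shutdown. A toy event bus sketching that register/handle/remove shape (this is an illustration of the pattern, not the Supervisor's `sys_bus` implementation):

```python
from collections.abc import Callable

class Bus:
    """Minimal event bus: register returns a handle usable for removal."""

    def __init__(self) -> None:
        self._listeners: dict[str, list[Callable[[str], None]]] = {}

    def register(self, event: str, cb: Callable[[str], None]) -> tuple[str, Callable[[str], None]]:
        self._listeners.setdefault(event, []).append(cb)
        return (event, cb)  # keep this handle to unregister later

    def remove(self, handle: tuple[str, Callable[[str], None]]) -> None:
        event, cb = handle
        self._listeners[event].remove(cb)

    def fire(self, event: str, data: str) -> None:
        for cb in self._listeners.get(event, []):
            cb(data)

seen: list[str] = []
bus = Bus()
handle = bus.register("container_state", seen.append)
bus.fire("container_state", "unhealthy")   # watchdog reacts
bus.remove(handle)                          # shutdown: stop reacting
bus.fire("container_state", "failed")      # ignored after removal
print(seen)  # → ['unhealthy']
```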
@Job(
name="home_assistant_core_restart_after_problem",
throttle_period=WATCHDOG_THROTTLE_PERIOD,


@@ -1,8 +1,6 @@
"""Home Assistant control object."""
import asyncio
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path, PurePath
@@ -13,7 +11,7 @@ from typing import Any
from uuid import UUID
from awesomeversion import AwesomeVersion, AwesomeVersionException
from securetar import AddFileError, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -35,11 +33,11 @@ from ..const import (
ATTR_WATCHDOG,
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
HomeAssistantUser,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
BackupInvalidError,
ConfigurationFileError,
HomeAssistantBackupError,
HomeAssistantError,
@@ -47,9 +45,7 @@ from ..exceptions import (
)
from ..hardware.const import PolicyGroup
from ..hardware.data import Device
from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils import remove_folder, remove_folder_with_excludes
from ..utils.common import FileConfiguration
from ..utils.json import read_json_file, write_json_file
@@ -75,6 +71,7 @@ HOMEASSISTANT_BACKUP_EXCLUDE = [
"backups/*.tar",
"tmp_backups/*.tar",
"tts/*",
".cache/*",
]
HOMEASSISTANT_BACKUP_EXCLUDE_DATABASE = [
"home-assistant_v?.db",
@@ -341,10 +338,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
try:
await self.sys_run_in_executor(write_pulse_config)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Home Assistant can't write pulse/client.config: %s", err)
else:
_LOGGER.info("Update pulse/client.config: %s", self.path_pulse)
@@ -359,15 +353,23 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
):
return
configuration: (
dict[str, Any] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "get_config"}
)
try:
configuration: (
dict[str, Any] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "get_config"}
)
except HomeAssistantWSError as err:
_LOGGER.warning(
"Can't get Home Assistant Core configuration: %s. Not sending hardware events to Home Assistant Core.",
err,
)
return
if not configuration or "usb" not in configuration.get("components", []):
return
self.sys_homeassistant.websocket.send_message({ATTR_TYPE: "usb/scan"})
self.sys_homeassistant.websocket.send_command({ATTR_TYPE: "usb/scan"})
@Job(name="home_assistant_module_begin_backup")
async def begin_backup(self) -> None:
@@ -409,7 +411,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
@Job(name="home_assistant_module_backup")
async def backup(
self, tar_file: tarfile.TarFile, exclude_database: bool = False
self, tar_file: SecureTarFile, exclude_database: bool = False
) -> None:
"""Backup Home Assistant Core config/directory."""
excludes = HOMEASSISTANT_BACKUP_EXCLUDE.copy()
@@ -469,7 +471,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
@Job(name="home_assistant_module_restore")
async def restore(
self, tar_file: tarfile.TarFile, exclude_database: bool = False
self, tar_file: SecureTarFile, exclude_database: bool | None = False
) -> None:
"""Restore Home Assistant Core config/ directory."""
@@ -486,11 +488,16 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
# extract backup
try:
with tar_file as backup:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
backup.extractall(
path=temp_path,
members=secure_path(backup),
filter="fully_trusted",
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Invalid tarfile {tar_file}: {err}", _LOGGER.error
) from err
except tarfile.TarError as err:
raise HomeAssistantError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
@@ -501,7 +508,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
temp_data = temp_path
_LOGGER.info("Restore Home Assistant Core config folder")
if exclude_database:
if exclude_database is True:
remove_folder_with_excludes(
self.sys_config.path_homeassistant,
excludes=HOMEASSISTANT_BACKUP_EXCLUDE_DATABASE,
@@ -561,21 +568,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
if attr in data:
self._data[attr] = data[attr]
@Job(
name="home_assistant_get_users",
throttle_period=timedelta(minutes=5),
internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
)
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[IngressSessionDataUserDict] | None
) = await self.sys_homeassistant.websocket.async_send_command(
async def list_users(self) -> list[HomeAssistantUser]:
"""Fetch list of all users from Home Assistant Core via WebSocket.
Raises HomeAssistantWSError on WebSocket connection/communication failure.
"""
raw: list[dict[str, Any]] = await self.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return []
return [HomeAssistantUser.from_dict(data) for data in raw]


@@ -30,12 +30,6 @@ from ..exceptions import (
from ..utils.json import json_dumps
from .const import CLOSING_STATES, WSEvent, WSType
MIN_VERSION = {
WSType.SUPERVISOR_EVENT: "2021.2.4",
WSType.BACKUP_START: "2022.1.0",
WSType.BACKUP_END: "2022.1.0",
}
_LOGGER: logging.Logger = logging.getLogger(__name__)
T = TypeVar("T")
@@ -46,7 +40,6 @@ class WSClient:
def __init__(
self,
loop: asyncio.BaseEventLoop,
ha_version: AwesomeVersion,
client: aiohttp.ClientWebSocketResponse,
):
@@ -54,7 +47,6 @@ class WSClient:
self.ha_version = ha_version
self._client = client
self._message_id: int = 0
self._loop = loop
self._futures: dict[int, asyncio.Future[T]] = {} # type: ignore
@property
@@ -73,20 +65,11 @@ class WSClient:
if not self._client.closed:
await self._client.close()
async def async_send_message(self, message: dict[str, Any]) -> None:
"""Send a websocket message, don't wait for response."""
self._message_id += 1
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message, dumps=json_dumps)
except ConnectionError as err:
raise HomeAssistantWSConnectionError(str(err)) from err
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a websocket message, and return the response."""
self._message_id += 1
message["id"] = self._message_id
self._futures[message["id"]] = self._loop.create_future()
self._futures[message["id"]] = asyncio.get_running_loop().create_future()
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message, dumps=json_dumps)
@@ -157,13 +140,13 @@ class WSClient:
@classmethod
async def connect_with_auth(
cls, session: aiohttp.ClientSession, loop, url: str, token: str
cls, session: aiohttp.ClientSession, url: str, token: str
) -> WSClient:
"""Create an authenticated websocket client."""
try:
client = await session.ws_connect(url, ssl=False)
except aiohttp.client_exceptions.ClientConnectorError:
raise HomeAssistantWSError("Can't connect") from None
raise HomeAssistantWSConnectionError("Can't connect") from None
hello_message = await client.receive_json()
@@ -176,7 +159,7 @@ class WSClient:
if auth_ok_message[ATTR_TYPE] != "auth_ok":
raise HomeAssistantAPIError("AUTH NOT OK")
return cls(loop, AwesomeVersion(hello_message["ha_version"]), client)
return cls(AwesomeVersion(hello_message["ha_version"]), client)
class HomeAssistantWebSocket(CoreSysAttributes):
@@ -193,7 +176,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
"""Process queue once supervisor is running."""
if reference == CoreState.RUNNING:
for msg in self._queue:
await self.async_send_message(msg)
await self._async_send_command(msg)
self._queue.clear()
@@ -207,7 +190,6 @@ class HomeAssistantWebSocket(CoreSysAttributes):
await self.sys_homeassistant.api.ensure_access_token()
client = await WSClient.connect_with_auth(
self.sys_websession,
self.sys_loop,
self.sys_homeassistant.ws_url,
cast(str, self.sys_homeassistant.api.access_token),
)
@@ -215,38 +197,27 @@ class HomeAssistantWebSocket(CoreSysAttributes):
self.sys_create_task(client.start_listener())
return client
async def _can_send(self, message: dict[str, Any]) -> bool:
"""Determine if we can use WebSocket messages."""
async def _ensure_connected(self) -> None:
"""Ensure WebSocket connection is ready.
Raises HomeAssistantWSConnectionError if unable to connect.
Raises HomeAssistantAuthError if authentication with Core fails.
"""
if self.sys_core.state in CLOSING_STATES:
return False
raise HomeAssistantWSConnectionError(
"WebSocket not available, system is shutting down"
)
connected = self._client and self._client.connected
# If we are already connected, we can avoid the check_api_state call
# since it makes a new socket connection and we already have one.
if not connected and not await self.sys_homeassistant.api.check_api_state():
# No core access, don't try.
return False
if not self._client:
self._client = await self._get_ws_client()
if not self._client.connected:
self._client = await self._get_ws_client()
message_type = message.get("type")
if (
message_type is not None
and message_type in MIN_VERSION
and self._client.ha_version < MIN_VERSION[message_type]
):
_LOGGER.info(
"WebSocket command %s is not supported until core-%s. Ignoring WebSocket message.",
message_type,
MIN_VERSION[message_type],
raise HomeAssistantWSConnectionError(
"Can't connect to Home Assistant Core WebSocket, the API is not reachable"
)
return False
return True
if not self._client or not self._client.connected:
self._client = await self._get_ws_client()
async def load(self) -> None:
"""Set up queue processor after startup completes."""
@@ -254,53 +225,61 @@ class HomeAssistantWebSocket(CoreSysAttributes):
BusEvent.SUPERVISOR_STATE_CHANGE, self._process_queue
)
async def async_send_message(self, message: dict[str, Any]) -> None:
"""Send a message with the WS client."""
# Only commands allowed during startup as those tell Home Assistant to do something.
# Messages may cause clients to make follow-up API calls so those wait.
async def _async_send_command(self, message: dict[str, Any]) -> None:
"""Send a fire-and-forget command via WebSocket.
Queues messages during startup. Silently handles connection errors.
"""
if self.sys_core.state in STARTING_STATES:
self._queue.append(message)
_LOGGER.debug("Queuing message until startup has completed: %s", message)
return
if not await self._can_send(message):
try:
await self._ensure_connected()
except HomeAssistantWSError as err:
_LOGGER.debug("Can't send WebSocket command: %s", err)
return
# _ensure_connected guarantees self._client is set
assert self._client
try:
if self._client:
await self._client.async_send_command(message)
except HomeAssistantWSConnectionError:
await self._client.async_send_command(message)
except HomeAssistantWSConnectionError as err:
_LOGGER.debug("Fire-and-forget WebSocket command failed: %s", err)
if self._client:
await self._client.close()
self._client = None
async def async_send_command(self, message: dict[str, Any]) -> T | None:
"""Send a command with the WS client and wait for the response."""
if not await self._can_send(message):
return None
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a command and return the response.
Raises HomeAssistantWSError on WebSocket connection or communication failure.
"""
await self._ensure_connected()
# _ensure_connected guarantees self._client is set
assert self._client
try:
if self._client:
return await self._client.async_send_command(message)
return await self._client.async_send_command(message)
except HomeAssistantWSConnectionError:
if self._client:
await self._client.close()
self._client = None
raise
return None
def send_message(self, message: dict[str, Any]) -> None:
"""Send a supervisor/event message."""
def send_command(self, message: dict[str, Any]) -> None:
"""Send a fire-and-forget command via WebSocket."""
if self.sys_core.state in CLOSING_STATES:
return
self.sys_create_task(self.async_send_message(message))
self.sys_create_task(self._async_send_command(message))
async def async_supervisor_event_custom(
self, event: WSEvent, extra_data: dict[str, Any] | None = None
) -> None:
"""Send a supervisor/event message to Home Assistant with custom data."""
try:
await self.async_send_message(
await self._async_send_command(
{
ATTR_TYPE: WSType.SUPERVISOR_EVENT,
ATTR_DATA: {


@@ -3,7 +3,6 @@
from __future__ import annotations
from contextlib import suppress
import errno
import logging
from pathlib import Path
import shutil
@@ -12,7 +11,7 @@ from awesomeversion import AwesomeVersion
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, HostAppArmorError
from ..resolution.const import UnhealthyReason, UnsupportedReason
from ..resolution.const import UnsupportedReason
from ..utils.apparmor import validate_profile
from .const import HostFeature
@@ -89,10 +88,7 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(shutil.copyfile, profile_file, dest_profile)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't copy {profile_file}: {err}", _LOGGER.error
) from err
@@ -116,10 +112,7 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(profile_file.unlink)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't remove profile: {err}", _LOGGER.error
) from err
@@ -134,10 +127,7 @@ class AppArmorControl(CoreSysAttributes):
try:
shutil.copy(profile_file, backup_file)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't backup profile {profile_name}: {err}", _LOGGER.error
) from err


@@ -64,6 +64,7 @@ class IpSetting:
method: InterfaceMethod
address: list[IPv4Interface | IPv6Interface]
gateway: IPv4Address | IPv6Address | None
route_metric: int | None
nameservers: list[IPv4Address | IPv6Address]
@@ -152,7 +153,7 @@ class Interface:
)
@staticmethod
def from_dbus_interface(inet: NetworkInterface) -> "Interface":
def from_dbus_interface(inet: NetworkInterface) -> Interface:
"""Coerce a dbus interface into normal Interface."""
if inet.settings and inet.settings.ipv4:
ipv4_setting = IpSetting(
@@ -166,6 +167,7 @@ class Interface:
gateway=IPv4Address(inet.settings.ipv4.gateway)
if inet.settings.ipv4.gateway
else None,
route_metric=inet.settings.ipv4.route_metric,
nameservers=[
IPv4Address(socket.ntohl(ip)) for ip in inet.settings.ipv4.dns
]
@@ -173,7 +175,7 @@ class Interface:
else [],
)
else:
ipv4_setting = IpSetting(InterfaceMethod.DISABLED, [], None, [])
ipv4_setting = IpSetting(InterfaceMethod.DISABLED, [], None, None, [])
if inet.settings and inet.settings.ipv6:
ipv6_setting = Ip6Setting(
@@ -193,12 +195,13 @@ class Interface:
gateway=IPv6Address(inet.settings.ipv6.gateway)
if inet.settings.ipv6.gateway
else None,
route_metric=inet.settings.ipv6.route_metric,
nameservers=[IPv6Address(bytes(ip)) for ip in inet.settings.ipv6.dns]
if inet.settings.ipv6.dns
else [],
)
else:
ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, [])
ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, None, [])
ipv4_ready = (
inet.connection is not None

supervisor/host/firewall.py (new file, 165 lines)

@@ -0,0 +1,165 @@
"""Firewall rules for the Supervisor network gateway."""
import asyncio
from contextlib import suppress
import logging
from dbus_fast import Variant
from ..const import DOCKER_IPV4_NETWORK_MASK, DOCKER_IPV6_NETWORK_MASK, DOCKER_NETWORK
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_ACTIVE_STATE,
DBUS_IFACE_SYSTEMD_UNIT,
StartUnitMode,
UnitActiveState,
)
from ..dbus.systemd import ExecStartEntry
from ..exceptions import DBusError
from ..resolution.const import UnhealthyReason
_LOGGER: logging.Logger = logging.getLogger(__name__)
FIREWALL_SERVICE = "supervisor-firewall-gateway.service"
FIREWALL_UNIT_TIMEOUT = 30
BIN_SH = "/bin/sh"
IPTABLES_CMD = "/usr/sbin/iptables"
IP6TABLES_CMD = "/usr/sbin/ip6tables"
TERMINAL_STATES = {UnitActiveState.INACTIVE, UnitActiveState.FAILED}
class FirewallManager(CoreSysAttributes):
"""Manage firewall rules to protect the Supervisor network gateway.
Adds iptables rules in the raw PREROUTING chain to drop traffic addressed
to the bridge gateway IP that does not originate from the bridge or
loopback interfaces.
"""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize firewall manager."""
self.coresys: CoreSys = coresys
@staticmethod
def _build_exec_start() -> list[ExecStartEntry]:
"""Build ExecStart entries for gateway firewall rules.
Each entry uses shell check-or-insert logic for idempotency.
We insert DROP first, then ACCEPT, using -I (insert at top).
The last inserted rule ends up first in the chain, so ACCEPT
for loopback ends up above the DROP for non-bridge interfaces.
"""
gateway_ipv4 = str(DOCKER_IPV4_NETWORK_MASK[1])
gateway_ipv6 = str(DOCKER_IPV6_NETWORK_MASK[1])
bridge = DOCKER_NETWORK
entries: list[ExecStartEntry] = []
for cmd, gateway in (
(IPTABLES_CMD, gateway_ipv4),
(IP6TABLES_CMD, gateway_ipv6),
):
# DROP packets to gateway from non-bridge, non-loopback interfaces
entries.append(
ExecStartEntry(
binary=BIN_SH,
argv=[
BIN_SH,
"-c",
f"{cmd} -t raw -C PREROUTING ! -i {bridge} -d {gateway}"
f" -j DROP 2>/dev/null"
f" || {cmd} -t raw -I PREROUTING ! -i {bridge} -d {gateway}"
f" -j DROP",
],
ignore_failure=False,
)
)
# ACCEPT loopback traffic to gateway (inserted last, ends up first)
entries.append(
ExecStartEntry(
binary=BIN_SH,
argv=[
BIN_SH,
"-c",
f"{cmd} -t raw -C PREROUTING -i lo -d {gateway}"
f" -j ACCEPT 2>/dev/null"
f" || {cmd} -t raw -I PREROUTING -i lo -d {gateway}"
f" -j ACCEPT",
],
ignore_failure=False,
)
)
return entries
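Each `ExecStart` entry relies on the iptables check flag for idempotency: `-C` exits non-zero when the rule is absent, so the `||` branch runs the `-I` insert at most once per boot. A standalone sketch of that command construction (table, chain, and rule values are illustrative):

```python
def check_or_insert(cmd: str, chain: str, rule: str) -> str:
    """Build an idempotent shell line: insert the rule only if the
    check (-C) reports it is not already present."""
    return (
        f"{cmd} -t raw -C {chain} {rule} 2>/dev/null"
        f" || {cmd} -t raw -I {chain} {rule}"
    )

line = check_or_insert("iptables", "PREROUTING", "-i lo -d 172.30.32.1 -j ACCEPT")
print(line)
```

Because `-I` inserts at the top of the chain, the last rule inserted ends up first; that is why the diff emits the DROP entry before the loopback ACCEPT.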
async def _apply_gateway_firewall_rules(self) -> bool:
"""Apply iptables rules to restrict access to the Docker gateway.
Returns True if the rules were successfully applied.
"""
if not self.sys_dbus.systemd.is_connected:
_LOGGER.error("Systemd not available, cannot apply gateway firewall rules")
return False
# Clean up any previous failed unit
with suppress(DBusError):
await self.sys_dbus.systemd.reset_failed_unit(FIREWALL_SERVICE)
properties: list[tuple[str, Variant]] = [
("Description", Variant("s", "Supervisor gateway firewall rules")),
("Type", Variant("s", "oneshot")),
("ExecStart", Variant("a(sasb)", self._build_exec_start())),
]
try:
await self.sys_dbus.systemd.start_transient_unit(
FIREWALL_SERVICE,
StartUnitMode.REPLACE,
properties,
)
except DBusError as err:
_LOGGER.error("Failed to apply gateway firewall rules: %s", err)
return False
# Wait for the oneshot unit to finish and verify it succeeded
try:
unit = await self.sys_dbus.systemd.get_unit(FIREWALL_SERVICE)
async with (
asyncio.timeout(FIREWALL_UNIT_TIMEOUT),
unit.properties_changed() as signal,
):
state = await unit.get_active_state()
while state not in TERMINAL_STATES:
props = await signal.wait_for_signal()
if (
props[0] == DBUS_IFACE_SYSTEMD_UNIT
and DBUS_ATTR_ACTIVE_STATE in props[1]
):
state = UnitActiveState(props[1][DBUS_ATTR_ACTIVE_STATE].value)
except (DBusError, TimeoutError) as err:
_LOGGER.error(
"Failed waiting for gateway firewall unit to complete: %s", err
)
return False
if state == UnitActiveState.FAILED:
_LOGGER.error(
"Gateway firewall unit failed, iptables rules may not be applied"
)
return False
return True
async def apply_gateway_firewall_rules(self) -> None:
"""Apply gateway firewall rules, marking unsupported on failure."""
if self.sys_dev:
_LOGGER.info("Skipping gateway firewall rules in development mode")
return
if await self._apply_gateway_firewall_rules():
_LOGGER.info("Gateway firewall rules applied")
else:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.DOCKER_GATEWAY_UNPROTECTED
)


@@ -1,7 +1,7 @@
"""Info control for host."""
import asyncio
from datetime import datetime, tzinfo
from datetime import UTC, datetime, tzinfo
import logging
from ..coresys import CoreSysAttributes
@@ -78,9 +78,9 @@ class InfoCenter(CoreSysAttributes):
return self.sys_dbus.timedate.timezone_tzinfo
@property
def dt_utc(self) -> datetime | None:
def dt_utc(self) -> datetime:
"""Return host UTC time."""
return self.sys_dbus.timedate.dt_utc
return datetime.now(UTC)
@property
def use_rtc(self) -> bool | None:


@@ -15,6 +15,7 @@ from ..hardware.data import Device
from .apparmor import AppArmorControl
from .const import HostFeature
from .control import SystemControl
from .firewall import FirewallManager
from .info import InfoCenter
from .logs import LogsControl
from .network import NetworkManager
@@ -33,6 +34,7 @@ class HostManager(CoreSysAttributes):
self._apparmor: AppArmorControl = AppArmorControl(coresys)
self._control: SystemControl = SystemControl(coresys)
self._firewall: FirewallManager = FirewallManager(coresys)
self._info: InfoCenter = InfoCenter(coresys)
self._services: ServiceManager = ServiceManager(coresys)
self._network: NetworkManager = NetworkManager(coresys)
@@ -54,6 +56,11 @@ class HostManager(CoreSysAttributes):
"""Return host control handler."""
return self._control
@property
def firewall(self) -> FirewallManager:
"""Return host firewall handler."""
return self._firewall
@property
def info(self) -> InfoCenter:
"""Return host info handler."""
@@ -168,6 +175,9 @@ class HostManager(CoreSysAttributes):
await self.network.load()
# Apply firewall rules to restrict access to the Docker gateway
await self.firewall.apply_gateway_firewall_rules()
# Register for events
self.sys_bus.register_event(BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events)
self.sys_bus.register_event(


@@ -457,6 +457,11 @@ class Job(CoreSysAttributes):
if plugin.need_update
]
):
if not coresys.sys_updater.auto_update:
raise JobConditionException(
f"'{method_name}' blocked from execution, plugin(s) {', '.join(plugin.slug for plugin in out_of_date)} are not up to date and auto-update is disabled"
)
errors = await asyncio.gather(
*[plugin.update() for plugin in out_of_date], return_exceptions=True
)


@@ -72,6 +72,9 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
"docker": coresys.docker.info.version,
"supervisor": coresys.supervisor.version,
},
"docker": {
"storage_driver": coresys.docker.info.storage,
},
"host": {
"machine": coresys.machine,
},
@@ -93,7 +96,7 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
"installed_addons": installed_addons,
},
"host": {
"arch": coresys.arch.default,
"arch": str(coresys.arch.default),
"board": coresys.os.board,
"deployment": coresys.host.info.deployment,
"disk_free_space": coresys.hardware.disk.get_disk_free_space(
@@ -111,6 +114,9 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
"docker": coresys.docker.info.version,
"supervisor": coresys.supervisor.version,
},
"docker": {
"storage_driver": coresys.docker.info.storage,
},
"resolution": {
"issues": [attr.asdict(issue) for issue in coresys.resolution.issues],
"suggestions": [


@@ -13,6 +13,7 @@ from ..exceptions import (
AddonsError,
BackupFileNotFoundError,
HomeAssistantError,
HomeAssistantWSError,
ObserverError,
SupervisorUpdateError,
)
@@ -30,13 +31,13 @@ HASS_WATCHDOG_REANIMATE_FAILURES = "HASS_WATCHDOG_REANIMATE_FAILURES"
HASS_WATCHDOG_MAX_API_ATTEMPTS = 2
HASS_WATCHDOG_MAX_REANIMATE_ATTEMPTS = 5
RUN_UPDATE_SUPERVISOR = 29100
RUN_UPDATE_SUPERVISOR = 86400 # 24h
RUN_UPDATE_ADDONS = 57600
RUN_UPDATE_CLI = 28100
RUN_UPDATE_DNS = 30100
RUN_UPDATE_AUDIO = 30200
RUN_UPDATE_MULTICAST = 30300
RUN_UPDATE_OBSERVER = 30400
RUN_UPDATE_CLI = 43200 # 12h, staggered +2min per plugin
RUN_UPDATE_DNS = 43320
RUN_UPDATE_AUDIO = 43440
RUN_UPDATE_MULTICAST = 43560
RUN_UPDATE_OBSERVER = 43680
RUN_RELOAD_ADDONS = 10800
RUN_RELOAD_BACKUPS = 72000
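The new plugin-update constants above stagger the checks by 2 minutes (120 s) from a 12 h base so the five plugins never all fire in the same scheduler tick. The arithmetic behind those values:

```python
BASE = 12 * 60 * 60   # 12h base interval, in seconds
STAGGER = 2 * 60      # +2 minutes per plugin

plugins = ["cli", "dns", "audio", "multicast", "observer"]
intervals = {name: BASE + i * STAGGER for i, name in enumerate(plugins)}
print(intervals)
# → {'cli': 43200, 'dns': 43320, 'audio': 43440, 'multicast': 43560, 'observer': 43680}
```

These reproduce `RUN_UPDATE_CLI` through `RUN_UPDATE_OBSERVER` exactly.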
@@ -52,7 +53,10 @@ RUN_WATCHDOG_OBSERVER_APPLICATION = 180
RUN_CORE_BACKUP_CLEANUP = 86200
PLUGIN_AUTO_UPDATE_CONDITIONS = PLUGIN_UPDATE_CONDITIONS + [JobCondition.RUNNING]
PLUGIN_AUTO_UPDATE_CONDITIONS = PLUGIN_UPDATE_CONDITIONS + [
JobCondition.AUTO_UPDATE,
JobCondition.RUNNING,
]
OLD_BACKUP_THRESHOLD = timedelta(days=2)
@@ -152,7 +156,13 @@ class Tasks(CoreSysAttributes):
"Sending update add-on WebSocket command to Home Assistant Core: %s",
message,
)
await self.sys_homeassistant.websocket.async_send_command(message)
try:
await self.sys_homeassistant.websocket.async_send_command(message)
except HomeAssistantWSError as err:
_LOGGER.warning(
"Could not send add-on update command to Home Assistant Core: %s",
err,
)
@Job(
name="tasks_update_supervisor",


@@ -319,3 +319,52 @@ class MountManager(FileConfiguration, CoreSysAttributes):
mount.to_dict(skip_secrets=False) for mount in self.mounts
]
await super().save_data()
async def restore_mount(self, mount: Mount) -> asyncio.Task:
"""Restore a mount from backup.
Adds mount to internal state without activating it.
Returns an asyncio.Task for activating the mount in the background.
If a mount with the same name exists, it is replaced.
"""
if mount.name in self._mounts:
_LOGGER.info(
"Mount '%s' already exists, replacing with backup config", mount.name
)
# Unmount existing if it's bound
if mount.name in self._bound_mounts:
await self._bound_mounts[mount.name].bind_mount.unmount()
del self._bound_mounts[mount.name]
old_mount = self._mounts[mount.name]
await old_mount.unmount()
self._mounts[mount.name] = mount
return self.sys_create_task(self._activate_restored_mount(mount))
async def _activate_restored_mount(self, mount: Mount) -> None:
"""Activate a restored mount. Logs errors but doesn't raise."""
if HostFeature.MOUNT not in self.sys_host.features:
_LOGGER.warning(
"Cannot activate mount %s, mounting not supported on system",
mount.name,
)
return
try:
_LOGGER.info("Activating restored mount: %s", mount.name)
await mount.load()
if mount.usage == MountUsage.MEDIA:
await self._bind_media(mount)
elif mount.usage == MountUsage.SHARE:
await self._bind_share(mount)
_LOGGER.info("Mount %s activated successfully", mount.name)
except MountError as err:
_LOGGER.warning(
"Failed to activate mount %s (config was restored, "
"mount may come online later): %s",
mount.name,
err,
)


@@ -59,7 +59,7 @@ class Mount(CoreSysAttributes, ABC):
)
@classmethod
def from_dict(cls, coresys: CoreSys, data: MountData) -> "Mount":
def from_dict(cls, coresys: CoreSys, data: MountData) -> Mount:
"""Make dictionary into mount object."""
if cls not in [Mount, NetworkMount]:
return cls(coresys, data)
@@ -215,10 +215,10 @@ class Mount(CoreSysAttributes, ABC):
await self._update_state(unit)
# If active, dismiss corresponding failed mount issue if found
if (
mounted := await self.is_mounted()
) and self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if (mounted := await self.is_mounted()) and (
issue := self.sys_resolution.get_issue_if_present(self.failed_issue)
):
self.sys_resolution.dismiss_issue(issue)
return mounted
@@ -361,8 +361,8 @@ class Mount(CoreSysAttributes, ABC):
await self._restart()
# If it is mounted now, dismiss corresponding issue if present
if self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if issue := self.sys_resolution.get_issue_if_present(self.failed_issue):
self.sys_resolution.dismiss_issue(issue)
async def _restart(self) -> None:
"""Restart mount unit to re-mount."""
@@ -562,7 +562,7 @@ class BindMount(Mount):
usage: MountUsage | None = None,
where: PurePath | None = None,
read_only: bool = False,
) -> "BindMount":
) -> BindMount:
"""Create a new bind mount instance."""
return BindMount(
coresys,


@@ -57,7 +57,7 @@ class Disk:
@staticmethod
def from_udisks2_drive(
drive: UDisks2Drive, drive_block_device: UDisks2Block
) -> "Disk":
) -> Disk:
"""Convert UDisks2Drive into a Disk object."""
return Disk(
vendor=drive.vendor,


@@ -2,7 +2,6 @@

 from dataclasses import dataclass
 from datetime import datetime
-import errno
 import logging
 from pathlib import Path, PurePath
 from typing import cast
@@ -23,7 +22,6 @@ from ..exceptions import (
 )
 from ..jobs.const import JobConcurrency, JobCondition
 from ..jobs.decorator import Job
-from ..resolution.const import UnhealthyReason
 from ..utils.sentry import async_capture_exception
 from .data_disk import DataDisk
@@ -52,7 +50,7 @@ class SlotStatus:
     parent: str | None = None

     @classmethod
-    def from_dict(cls, data: SlotStatusDataType) -> "SlotStatus":
+    def from_dict(cls, data: SlotStatusDataType) -> SlotStatus:
         """Create SlotStatus from dictionary."""
         return cls(
             class_=data["class"],
@@ -214,10 +212,7 @@ class OSManager(CoreSysAttributes):
             ) from err
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             raise HassOSUpdateError(
                 f"Can't write OTA file: {err!s}", _LOGGER.error
             ) from err

View File
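Several files in this diff collapse the same repeated `errno.EBADMSG` check into a single `sys_resolution.check_oserror(err)` call. A sketch of what such a helper consolidates (a hypothetical simplification; the real Supervisor method may inspect additional errno values):

```python
import errno


class Resolution:
    """Toy registry tracking unhealthy reasons."""

    def __init__(self) -> None:
        self.unhealthy: set[str] = set()

    def add_unhealthy_reason(self, reason: str) -> None:
        self.unhealthy.add(reason)

    def check_oserror(self, err: OSError) -> None:
        # Centralizes the errno check each call site used to repeat inline
        if err.errno == errno.EBADMSG:
            self.add_unhealthy_reason("oserror_bad_message")


resolution = Resolution()
try:
    raise OSError(errno.EBADMSG, "Bad message")
except OSError as err:
    resolution.check_oserror(err)

assert "oserror_bad_message" in resolution.unhealthy
```

Each `except OSError` block then shrinks from five lines to one, and any future errno-based health heuristics need only one edit.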

@@ -3,7 +3,6 @@
 Code: https://github.com/home-assistant/plugin-audio
 """
-import errno
 import logging
 from pathlib import Path, PurePath
 import shutil
@@ -26,7 +25,6 @@ from ..exceptions import (
 )
 from ..jobs.const import JobThrottle
 from ..jobs.decorator import Job
-from ..resolution.const import UnhealthyReason
 from ..utils.json import write_json_file
 from ..utils.sentry import async_capture_exception
 from .base import PluginBase
@@ -94,11 +92,7 @@ class PluginAudio(PluginBase):
                 )
             )
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             _LOGGER.error("Can't read pulse-client.tmpl: %s", err)

         await super().load()
@@ -113,10 +107,7 @@ class PluginAudio(PluginBase):
         try:
             await self.sys_run_in_executor(setup_default_asound)
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             _LOGGER.error("Can't create default asound: %s", err)

     @Job(
@Job(

View File

@@ -76,13 +76,6 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         """Return True if a task is in progress."""
         return self.instance.in_progress

-    def logs(self) -> Awaitable[bytes]:
-        """Get docker plugin logs.
-
-        Return Coroutine.
-        """
-        return self.instance.logs()
-
     def is_running(self) -> Awaitable[bool]:
         """Return True if Docker container is running.

View File

@@ -5,7 +5,6 @@ Code: https://github.com/home-assistant/plugin-dns
 """
 import asyncio
 from contextlib import suppress
-import errno
 from ipaddress import IPv4Address
 import logging
 from pathlib import Path
@@ -33,7 +32,7 @@ from ..exceptions import (
 )
 from ..jobs.const import JobThrottle
 from ..jobs.decorator import Job
-from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
+from ..resolution.const import ContextType, IssueType, SuggestionType
 from ..utils.json import write_json_file
 from ..utils.sentry import async_capture_exception
 from ..validate import dns_url
@@ -232,10 +231,7 @@ class PluginDns(PluginBase):
                 await self.sys_run_in_executor(RESOLV_TMPL.read_text, encoding="utf-8")
             )
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             _LOGGER.error("Can't read resolve.tmpl: %s", err)

         try:
@@ -243,10 +239,7 @@ class PluginDns(PluginBase):
                 await self.sys_run_in_executor(HOSTS_TMPL.read_text, encoding="utf-8")
             )
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             _LOGGER.error("Can't read hosts.tmpl: %s", err)

         await self._init_hosts()
@@ -368,7 +361,7 @@ class PluginDns(PluginBase):
         log = await self.instance.logs()

         # Check the log for loop plugin output
-        if b"plugin/loop: Loop" in log:
+        if any("plugin/loop: Loop" in line for line in log):
             _LOGGER.error("Detected a DNS loop in local Network!")
             self._loop = True
             self.sys_resolution.create_issue(
@@ -448,10 +441,7 @@ class PluginDns(PluginBase):
                 self.hosts.write_text, data, encoding="utf-8"
             )
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             raise CoreDNSError(f"Can't update hosts: {err}", _LOGGER.error) from err

     async def add_host(
@@ -533,10 +523,7 @@ class PluginDns(PluginBase):
         try:
             await self.sys_run_in_executor(resolv_conf.write_text, data)
         except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
+            self.sys_resolution.check_oserror(err)
             _LOGGER.warning("Can't write/update %s: %s", resolv_conf, err)
             return

View File
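The DNS loop check changes from a bytes substring test to scanning per-line strings, matching `logs()` now yielding decoded lines instead of one bytes blob. The detection itself is a one-liner (log content below is made up for illustration):

```python
def has_dns_loop(log_lines: list[str]) -> bool:
    """Return True if CoreDNS's loop plugin reported a forwarding loop."""
    # The loop plugin logs lines like "plugin/loop: Loop (...) detected ..."
    return any("plugin/loop: Loop" in line for line in log_lines)


clean = [
    "[INFO] CoreDNS-1.11",
    "[INFO] plugin/reload: Running configuration",
]
looping = clean + [
    "[FATAL] plugin/loop: Loop (127.0.0.1:53 -> :53) detected for zone \".\""
]

assert not has_dns_loop(clean)
assert has_dns_loop(looping)
```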

@@ -86,6 +86,13 @@ class PluginManager(CoreSysAttributes):
         if self.sys_supervisor.need_update:
             return

+        # Skip plugin auto-updates if auto updates are disabled
+        if not self.sys_updater.auto_update:
+            _LOGGER.debug(
+                "Skipping plugin auto-updates because Supervisor auto-update is disabled"
+            )
+            return
+
         # Check requirements
         for plugin in self.all_plugins:
             # Check if need an update

View File
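The manager.py hunk adds a second early-return guard before plugin updates run. Reduced to a pure function (a toy version with the two flags passed in explicitly, not the Supervisor's real API):

```python
import logging

_LOGGER = logging.getLogger(__name__)


def should_run_plugin_updates(need_supervisor_update: bool, auto_update: bool) -> bool:
    """Toy version of the guard chain added in the hunk above."""
    # Never update plugins while the Supervisor itself is outdated
    if need_supervisor_update:
        return False
    # New guard: respect the global auto-update setting
    if not auto_update:
        _LOGGER.debug(
            "Skipping plugin auto-updates because Supervisor auto-update is disabled"
        )
        return False
    return True


assert should_run_plugin_updates(False, True)
assert not should_run_plugin_updates(False, False)
assert not should_run_plugin_updates(True, True)
```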

@@ -15,9 +15,11 @@ from ..docker.const import ContainerState
 from ..docker.observer import DockerObserver
 from ..docker.stats import DockerStats
 from ..exceptions import (
+    DockerContainerPortConflict,
     DockerError,
     ObserverError,
     ObserverJobError,
+    ObserverPortConflict,
     ObserverUpdateError,
     PluginError,
 )
@@ -87,6 +89,8 @@ class PluginObserver(PluginBase):
         _LOGGER.info("Starting observer plugin")
         try:
             await self.instance.run()
+        except DockerContainerPortConflict as err:
+            raise ObserverPortConflict(_LOGGER.error) from err
         except DockerError as err:
             _LOGGER.error("Can't start observer plugin")
             raise ObserverError() from err

View File
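The observer hunk translates a specific Docker-layer failure into a plugin-layer exception before the generic `DockerError` handler can swallow it. The exception-translation pattern, sketched with stand-in exception classes (the real ones live in the Supervisor's `exceptions` module and have different constructors):

```python
class DockerContainerPortConflict(Exception):
    """Stand-in for the Docker-layer port conflict error."""


class ObserverPortConflict(Exception):
    """Stand-in for the plugin-layer error surfaced to callers."""


def start_observer(run) -> None:
    # Catch the more specific exception first; `raise ... from err`
    # preserves the original as __cause__ for debugging
    try:
        run()
    except DockerContainerPortConflict as err:
        raise ObserverPortConflict("observer port already in use") from err


def conflicting_run() -> None:
    raise DockerContainerPortConflict("port 4357 already allocated")


try:
    start_observer(conflicting_run)
except ObserverPortConflict as err:
    assert isinstance(err.__cause__, DockerContainerPortConflict)
```

Ordering matters: the specific `except` clause must precede the broader one, since Python takes the first matching handler.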

@@ -6,7 +6,7 @@ from typing import Any

 from ..const import ATTR_CHECKS
 from ..coresys import CoreSys, CoreSysAttributes
-from ..exceptions import ResolutionNotFound
+from ..exceptions import ResolutionCheckNotFound
 from ..utils.sentry import async_capture_exception
 from .checks.base import CheckBase
 from .validate import get_valid_modules
@@ -50,7 +50,7 @@ class ResolutionCheck(CoreSysAttributes):
         if slug in self._checks:
             return self._checks[slug]

-        raise ResolutionNotFound(f"Check with slug {slug} not found!")
+        raise ResolutionCheckNotFound(check=slug)

     async def check_system(self) -> None:
         """Check the system."""

View File
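The lookup now raises a check-specific exception that carries the slug as structured data instead of baking it into a message string. A sketch of that shape (the `check=` keyword and message format are assumptions based on the call site above, not the real class definition):

```python
class ResolutionCheckNotFound(Exception):
    """Stand-in: a not-found error that keeps the slug as an attribute."""

    def __init__(self, check: str) -> None:
        super().__init__(f"Check with slug {check} not found!")
        self.check = check


checks = {"free_space": object()}


def get_check(slug: str):
    """Mirror the lookup in the hunk above."""
    if slug in checks:
        return checks[slug]
    raise ResolutionCheckNotFound(check=slug)


try:
    get_check("no_such_check")
except ResolutionCheckNotFound as err:
    # Callers (e.g. API error handlers) can read the slug without
    # parsing the message string
    assert err.check == "no_such_check"
```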

@@ -1,72 +0,0 @@
-"""Helpers to check core security."""
-
-from enum import StrEnum
-from pathlib import Path
-
-from awesomeversion import AwesomeVersion, AwesomeVersionException
-
-from ...const import CoreState
-from ...coresys import CoreSys
-from ..const import ContextType, IssueType, SuggestionType
-from .base import CheckBase
-
-
-def setup(coresys: CoreSys) -> CheckBase:
-    """Check setup function."""
-    return CheckCoreSecurity(coresys)
-
-
-class SecurityReference(StrEnum):
-    """Version references."""
-
-    CUSTOM_COMPONENTS_BELOW_2021_1_5 = "custom_components_below_2021_1_5"
-
-
-class CheckCoreSecurity(CheckBase):
-    """CheckCoreSecurity class for check."""
-
-    async def run_check(self) -> None:
-        """Run check if not affected by issue."""
-        # Security issue < 2021.1.5 & Custom components
-        try:
-            if self.sys_homeassistant.version < AwesomeVersion("2021.1.5"):
-                if await self.sys_run_in_executor(self._custom_components_exists):
-                    self.sys_resolution.create_issue(
-                        IssueType.SECURITY,
-                        ContextType.CORE,
-                        reference=SecurityReference.CUSTOM_COMPONENTS_BELOW_2021_1_5,
-                        suggestions=[SuggestionType.EXECUTE_UPDATE],
-                    )
-        except (AwesomeVersionException, OSError):
-            return
-
-    async def approve_check(self, reference: str | None = None) -> bool:
-        """Approve check if it is affected by issue."""
-        try:
-            if self.sys_homeassistant.version >= AwesomeVersion("2021.1.5"):
-                return False
-        except AwesomeVersionException:
-            return True
-
-        return await self.sys_run_in_executor(self._custom_components_exists)
-
-    def _custom_components_exists(self) -> bool:
-        """Return true if custom components folder exists.
-
-        Must be run in executor.
-        """
-        return Path(self.sys_config.path_homeassistant, "custom_components").exists()
-
-    @property
-    def issue(self) -> IssueType:
-        """Return a IssueType enum."""
-        return IssueType.SECURITY
-
-    @property
-    def context(self) -> ContextType:
-        """Return a ContextType enum."""
-        return ContextType.CORE
-
-    @property
-    def states(self) -> list[CoreState]:
-        """Return a list of valid states when this check can run."""
-        return [CoreState.RUNNING, CoreState.STARTUP]

View File

@@ -0,0 +1,61 @@
+"""Helpers to check for add-ons using deprecated compatibility entries."""
+
+from ...const import AddonStage, CoreState
+from ...coresys import CoreSys
+from ..const import ContextType, IssueType, SuggestionType
+from .base import CheckBase
+
+
+def setup(coresys: CoreSys) -> CheckBase:
+    """Check setup function."""
+    return CheckDeprecatedArchAddon(coresys)
+
+
+class CheckDeprecatedArchAddon(CheckBase):
+    """CheckDeprecatedArchAddon class for check."""
+
+    async def run_check(self) -> None:
+        """Run check if not affected by issue."""
+        for addon in self.sys_addons.installed:
+            if addon.stage == AddonStage.DEPRECATED:
+                continue
+
+            if (addon.has_deprecated_arch and not addon.has_supported_arch) or (
+                addon.has_deprecated_machine and not addon.has_supported_machine
+            ):
+                self.sys_resolution.create_issue(
+                    IssueType.DEPRECATED_ARCH_ADDON,
+                    ContextType.ADDON,
+                    reference=addon.slug,
+                    suggestions=[SuggestionType.EXECUTE_REMOVE],
+                )
+
+    async def approve_check(self, reference: str | None = None) -> bool:
+        """Approve check if it is affected by issue."""
+        if not reference:
+            return False
+
+        addon = self.sys_addons.get_local_only(reference)
+        return (
+            addon is not None
+            and addon.stage != AddonStage.DEPRECATED
+            and (
+                (addon.has_deprecated_arch and not addon.has_supported_arch)
+                or (addon.has_deprecated_machine and not addon.has_supported_machine)
+            )
+        )
+
+    @property
+    def issue(self) -> IssueType:
+        """Return a IssueType enum."""
+        return IssueType.DEPRECATED_ARCH_ADDON
+
+    @property
+    def context(self) -> ContextType:
+        """Return a ContextType enum."""
+        return ContextType.ADDON
+
+    @property
+    def states(self) -> list[CoreState]:
+        """Return a list of valid states when this check can run."""
+        return [CoreState.SETUP, CoreState.RUNNING]

View File
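The new check's core condition appears twice, in `run_check` and `approve_check`. It can be factored into a pure predicate, sketched here over a simplified add-on record (the dataclass fields mimic the properties used above; the real `Addon` class is far richer):

```python
from dataclasses import dataclass


@dataclass
class Addon:
    """Simplified add-on flags used by the deprecated-arch check."""

    stage: str
    has_deprecated_arch: bool = False
    has_supported_arch: bool = True
    has_deprecated_machine: bool = False
    has_supported_machine: bool = True


def is_affected(addon: Addon) -> bool:
    """Mirror the condition in run_check/approve_check above."""
    # Add-ons already marked deprecated are skipped entirely
    if addon.stage == "deprecated":
        return False
    # Affected only when every remaining arch (or machine) entry is deprecated
    return (addon.has_deprecated_arch and not addon.has_supported_arch) or (
        addon.has_deprecated_machine and not addon.has_supported_machine
    )


assert is_affected(Addon("stable", has_deprecated_arch=True, has_supported_arch=False))
assert not is_affected(
    Addon("deprecated", has_deprecated_arch=True, has_supported_arch=False)
)
assert not is_affected(Addon("stable"))
```

Note the asymmetry: having a deprecated arch alone is fine as long as at least one supported arch remains.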

@@ -22,7 +22,8 @@ async def check_server(
     """Check a DNS server and report issues."""
     ip_addr = server[6:] if server.startswith("dns://") else server
     async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
-        await resolver.query(DNS_CHECK_HOST, qtype)
+        # following call should be changed to resolver.query() in aiodns 5.x
+        await resolver.query_dns(DNS_CHECK_HOST, qtype)


 def setup(coresys: CoreSys) -> CheckBase:

View File

@@ -65,6 +65,7 @@ class UnhealthyReason(StrEnum):
     """Reasons for unsupported status."""

     DOCKER = "docker"
+    DOCKER_GATEWAY_UNPROTECTED = "docker_gateway_unprotected"
     DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
     OSERROR_BAD_MESSAGE = "oserror_bad_message"
     PRIVILEGED = "privileged"
@@ -81,6 +82,7 @@ class IssueType(StrEnum):
     CORRUPT_REPOSITORY = "corrupt_repository"
     CORRUPT_FILESYSTEM = "corrupt_filesystem"
     DEPRECATED_ADDON = "deprecated_addon"
+    DEPRECATED_ARCH_ADDON = "deprecated_arch_addon"
     DETACHED_ADDON_MISSING = "detached_addon_missing"
     DETACHED_ADDON_REMOVED = "detached_addon_removed"
     DEVICE_ACCESS_MISSING = "device_access_missing"
@@ -99,6 +101,7 @@ class IssueType(StrEnum):
     MOUNT_FAILED = "mount_failed"
     MULTIPLE_DATA_DISKS = "multiple_data_disks"
     NO_CURRENT_BACKUP = "no_current_backup"
+    NTP_SYNC_FAILED = "ntp_sync_failed"
     PWNED = "pwned"
     REBOOT_REQUIRED = "reboot_required"
     SECURITY = "security"
@@ -113,6 +116,7 @@ class SuggestionType(StrEnum):
     CLEAR_FULL_BACKUP = "clear_full_backup"
     CREATE_FULL_BACKUP = "create_full_backup"
     DISABLE_BOOT = "disable_boot"
+    ENABLE_NTP = "enable_ntp"
     EXECUTE_REBOOT = "execute_reboot"
     EXECUTE_REBUILD = "execute_rebuild"
     EXECUTE_RELOAD = "execute_reload"

View File
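These enums are `StrEnum` subclasses, which is what lets references like `deprecated_arch_addon` travel as plain slugs in API payloads and issue records. A trimmed-down illustration with just the members added in the hunks above (the fallback class is only for interpreters older than Python 3.11):

```python
try:
    from enum import StrEnum  # Python 3.11+
except ImportError:  # fallback for older interpreters
    from enum import Enum

    class StrEnum(str, Enum):
        """Minimal stand-in so members compare equal to plain strings."""


class IssueType(StrEnum):
    """Trimmed-down version of the enum extended above."""

    DEPRECATED_ARCH_ADDON = "deprecated_arch_addon"
    NTP_SYNC_FAILED = "ntp_sync_failed"


# StrEnum members compare equal to their string values...
assert IssueType.NTP_SYNC_FAILED == "ntp_sync_failed"
# ...and can be looked up from the raw slug coming off the wire
assert IssueType("deprecated_arch_addon") is IssueType.DEPRECATED_ARCH_ADDON
```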

@@ -1,9 +1,9 @@
 """Evaluation class for container."""

+import asyncio
 import logging

-from docker.errors import DockerException
-from requests import RequestException
+import aiodocker

 from ...const import CoreState
 from ...coresys import CoreSys
@@ -73,10 +72,9 @@ class EvaluateContainer(EvaluateBase):
         self._images.clear()

         try:
-            containers = await self.sys_run_in_executor(
-                self.sys_docker.containers_legacy.list
-            )
-        except (DockerException, RequestException) as err:
+            containers = await self.sys_docker.containers.list()
+            containers_metadata = await asyncio.gather(*[c.show() for c in containers])
+        except aiodocker.DockerError as err:
             _LOGGER.error("Corrupt docker overlayfs detect: %s", err)
             self.sys_resolution.create_issue(
                 IssueType.CORRUPT_DOCKER,
@@ -87,8 +86,8 @@ class EvaluateContainer(EvaluateBase):
         images = {
             image
-            for container in containers
-            if (config := container.attrs.get("Config")) is not None
+            for container in containers_metadata
+            if (config := container.get("Config")) is not None
             and (image := config.get("Image")) is not None
         }

         for image in images:
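With the move to aiodocker, each container's metadata arrives as a plain dict (from `show()`) rather than a docker-py object with an `.attrs` attribute. The image extraction itself is a pure transformation over those dicts and can be sketched independently (image names below are illustrative):

```python
def extract_images(containers_metadata: list[dict]) -> set[str]:
    """Collect unique image names, mirroring the set comprehension above."""
    return {
        image
        for container in containers_metadata
        # Walrus assignments skip containers with missing/corrupt metadata
        if (config := container.get("Config")) is not None
        and (image := config.get("Image")) is not None
    }


metadata = [
    {"Config": {"Image": "ghcr.io/home-assistant/amd64-hassio-dns:2026.01.0"}},
    {"Config": {"Image": "ghcr.io/home-assistant/amd64-hassio-dns:2026.01.0"}},
    {"Config": None},  # partial metadata is skipped, not an error
    {},
]

assert extract_images(metadata) == {
    "ghcr.io/home-assistant/amd64-hassio-dns:2026.01.0"
}
```

Using a set both deduplicates repeated images and makes the later per-image loop run once per image instead of once per container.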

Some files were not shown because too many files have changed in this diff.