Compare commits

..

185 Commits

Author SHA1 Message Date
Jan Čermák
0e5bd48b73 Use explicit Python fix version in GH actions
Specify Python 3.14.3 explicitly, as the setup-python action otherwise
defaults to 3.14.2 instead of 3.14.3, leading to different versions in CI
and in production.
2026-02-23 14:24:53 +01:00
Jan Čermák
d57b5e0166 Update wheels ABI in the wheels builder to cp314 2026-02-23 14:12:10 +01:00
Jan Čermák
662a7ae6e6 Use Python 3.14(.3) in CI and base image
Update base image to the latest tag using Python 3.14.3 and update Python
version in CI workflows to 3.14.

With Python 3.14, backports.zstd is no longer necessary as it's now available
in the standard library.
2026-02-23 14:08:11 +01:00
dependabot[bot]
c79e58d584 Bump pylint from 4.0.4 to 4.0.5 (#6584)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.4...v4.0.5)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-23 10:09:38 +01:00
Stefan Agner
6070d54860 Harden backup tar extraction with Python tar_filter (#6559)
* Harden backup tar extraction with Python data filter

Replace filter="fully_trusted" with a custom backup_data_filter that
wraps tarfile.data_filter. This adds protection against symlink attacks
(absolute targets, destination escapes), device node injection, and
path traversal, while resetting uid/gid and sanitizing permissions.

Unlike using data_filter directly, the custom filter skips problematic
entries with a warning instead of aborting the entire extraction. This
ensures existing backups containing absolute symlinks (e.g. in shared
folders) still restore successfully with the dangerous entries omitted.

Also removes the now-redundant secure_path member filtering, as
data_filter is a strict superset of its protections. Fixes a standalone
bug in _folder_restore which had no member filtering at all.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify security tests to test backup_data_filter directly

Test the public backup_data_filter function with plain tarfile
extraction instead of going through Backup internals. Removes
protected-access pylint warnings and unnecessary coresys setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Switch to tar filter instead of custom data filter wrapper

Replace backup_data_filter (which wrapped data_filter and skipped
problematic entries) with the built-in tar filter. The tar filter
rejects path traversal and absolute names while preserving uid/gid
and file permissions, which is important for add-ons running as
non-root users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Erik Montnemery <erik@montnemery.com>

* Use BackupInvalidError instead of BackupError for tarfile.TarError

Make sure FilterErrors lead to BackupInvalidError instead of BackupError,
as they are not related to the backup process itself but rather to the
integrity of the backup data.

* Improve test coverage and use pytest.raises

* Only make FilterError a BackupInvalidError

* Add test case for FilterError during Home Assistant Core restore

* Add test cases for Add-ons

* Fix pylint warnings

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Erik Montnemery <erik@montnemery.com>
2026-02-23 10:09:19 +01:00
dependabot[bot]
03e110cb86 Bump ruff from 0.15.1 to 0.15.2 (#6583)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.1 to 0.15.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.1...0.15.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 10:12:25 +01:00
Mike Degatano
4a1c816b92 Finish dockerpy to aiodocker migration (#6578) 2026-02-18 08:49:15 +01:00
Mike Degatano
b70f44bf1f Bump aiodocker from 0.25.0 to 0.26.0 (#6577) 2026-02-17 14:26:01 -05:00
Stefan Agner
c981b3b4c2 Extend and improve release drafter config (#6576)
* Extend and improve release drafter config

Extend the release drafter config with more types (labels) and order
them by priority. Inspired by conventional commits, in particular
the list documented at (including the order):
https://github.com/pvdlg/conventional-changelog-metahub?tab=readme-ov-file#commit-types

Additionally, we kept the "breaking-change" and "dependencies" labels.

* Add revert to the list of labels
2026-02-17 19:32:25 +01:00
Stefan Agner
f2d0ceab33 Add missing WIFI_P2P device type to NetworkManager enum (#6574)
Add the missing WIFI_P2P (30) entry to the DeviceType NetworkManager
enum. Without it, systems with a Wi-Fi P2P interface log a warning:

  Unknown DeviceType value received from D-Bus: 30

Closes #6573

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 13:13:02 -05:00
Stefan Agner
3147d080a2 Unify Core user handling with HomeAssistantUser model (#6558)
* Unify Core user listing with HomeAssistantUser model

Replace the ingress-specific IngressSessionDataUser with a general
HomeAssistantUser dataclass that models the Core config/auth/list WS
response. This deduplicates the WS call (previously in both auth.py
and module.py) into a single HomeAssistant.list_users() method.

- Add HomeAssistantUser dataclass with fields matching Core's user API
- Remove get_users() and its unnecessary 5-minute Job throttle
- Auth and ingress consumers both use HomeAssistant.list_users()
- Auth API endpoint uses typed attribute access instead of dict keys
- Migrate session serialization from legacy "displayname" to "name"
- Accept both keys in schema/deserialization for backwards compat
- Add test for loading persisted sessions with legacy displayname key

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Tighten list_users() to trust Core's auth/list contract

Core's config/auth/list WS command always returns a list, never None.
Replace the silent `if not raw: return []` (which also swallowed empty
lists) with an assert, remove the dead AuthListUsersNoneResponseError
exception class, and document the HomeAssistantWSError contract in the
docstring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove | None from async_send_command return type

The WebSocket result is always set from data["result"] in _receive_json,
never explicitly to None. Remove the misleading | None from the return
type of both WSClient and HomeAssistantWebSocket async_send_command, and
drop the now-unnecessary assert in list_users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use HomeAssistantWSConnectionError in _ensure_connected

_ensure_connected and connect_with_auth raise on connection-level
failures, so use the more specific HomeAssistantWSConnectionError
instead of the broad HomeAssistantWSError. This allows callers to
distinguish connection errors from Core API errors (e.g. unsuccessful
WebSocket command responses). Also document that _ensure_connected can
propagate HomeAssistantAuthError from ensure_access_token.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove user list cache from _find_user_by_id

Drop the _list_of_users cache to avoid stale auth data in ingress
session creation. The method now fetches users fresh each time and
returns None on any API error instead of serving potentially outdated
cached results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 18:31:08 +01:00
dependabot[bot]
09a4e9d5a2 Bump actions/stale from 10.1.1 to 10.2.0 (#6571)
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.1 to 10.2.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](997185467f...b5d41d4e1d)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:36:56 +01:00
dependabot[bot]
d93e728918 Bump sentry-sdk from 2.52.0 to 2.53.0 (#6572)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.52.0 to 2.53.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.52.0...2.53.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:34:12 +01:00
c0ffeeca7
27c6af4b4b App store strings: rename add-on to app (#6569) 2026-02-16 09:20:53 +01:00
Stefan Agner
00f2578d61 Add missing BRIDGE device type to NetworkManager enum (#6567)
NMDeviceType 13 (NM_DEVICE_TYPE_BRIDGE) was not listed in the
DeviceType enum, causing a warning when NetworkManager reported
a bridge interface.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 10:25:15 -05:00
Stefan Agner
50e6c88237 Add periodic progress logging during initial Core installation (#6562)
* Add periodic progress logging during initial Core installation

Log installation progress every 15 seconds while downloading the
Home Assistant Core image during initial setup (landing page to core
transition). Uses asyncio.Event with wait_for timeout to produce
time-based logs independent of Docker pull events, ensuring visibility
even when the network stalls.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add test coverage

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2026-02-13 14:17:35 +01:00
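The asyncio.Event/wait_for pattern described above can be sketched like this (the interval and message text are illustrative; the real code logs every 15 seconds during the image pull):

```python
import asyncio


async def log_progress(done: asyncio.Event, interval: float, log: list[str]) -> None:
    """Emit a progress line every `interval` seconds until `done` is set."""
    while True:
        try:
            # Time-based, independent of Docker pull events: a stalled
            # network still produces periodic log lines.
            await asyncio.wait_for(done.wait(), timeout=interval)
            return  # event set: installation finished
        except asyncio.TimeoutError:
            log.append("Still downloading Home Assistant image...")


async def main() -> list[str]:
    done = asyncio.Event()
    log: list[str] = []
    logger = asyncio.create_task(log_progress(done, 0.05, log))
    await asyncio.sleep(0.18)  # simulate a slow image pull
    done.set()
    await logger
    return log


messages = asyncio.run(main())
print(len(messages))
```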
dependabot[bot]
0cce2dad3c Bump ruff from 0.15.0 to 0.15.1 (#6565)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.0 to 0.15.1.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.0...0.15.1)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-13 08:59:59 +01:00
Stefan Agner
8dd42cb7a0 Fix getting Supervisor IP address in testing (#6564)
* Fix getting Supervisor IP address in testing

Newer Docker versions (probably newer than 29.x) no longer expose a global
IPAddress attribute under .NetworkSettings. Instead there is a per-network
map under Networks; in our case the hassio network holds the relevant IP
address. These network-specific maps already existed before, hence the new
inspect format works for old as well as new Docker versions.

While at it, also adjust the test fixture.

* Actively wait for hassio IPAddress to become valid
2026-02-13 08:12:19 +01:00
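Reading the network-specific map can be sketched as follows; the `inspect_new` dict is a minimal illustrative fixture, not real inspect output:

```python
def supervisor_ip(inspect: dict) -> str:
    """Read the IP from the per-network map, present on old and new Docker."""
    return inspect["NetworkSettings"]["Networks"]["hassio"]["IPAddress"]


inspect_new = {
    "NetworkSettings": {
        # Newer Docker versions have no global IPAddress key here.
        "Networks": {"hassio": {"IPAddress": "172.30.32.2"}},
    }
}
print(supervisor_ip(inspect_new))
```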
Mike Degatano
590674ba7c Remove blocking I/O added to import_image (#6557)
* Remove blocking I/O added to import_image

* Add scanned modules to extra blockbuster functions

* Use same cast avoidance approach in export_image

* Remove unnecessary local image_writer variable

* Remove unnecessary local image_tar_stream variable

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-12 17:37:15 +01:00
Stefan Agner
da800b8889 Simplify HomeAssistantWebSocket and raise on connection errors (#6553)
* Raise HomeAssistantWSError when Core WebSocket is unreachable

Previously, async_send_command silently returned None when Home Assistant
Core was not reachable, leading to misleading error messages downstream
(e.g. "returned invalid response of None instead of a list of users").

Refactor _can_send to _ensure_connected which now raises
HomeAssistantWSError on connection failures while still returning False
for silent-skip cases (shutdown, unsupported version). async_send_message
catches the exception to preserve fire-and-forget behavior.

Update callers that don't handle HomeAssistantWSError: _hardware_events
and addon auto-update in tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify HomeAssistantWebSocket command/message distinction

The WebSocket layer had a confusing split between "messages" (fire-and-forget)
and "commands" (request/response) that didn't reflect Home Assistant Core's
architecture where everything is just a WS command.

- Remove dead WSClient.async_send_message (never called)
- Rename async_send_message → _async_send_command (private, fire-and-forget)
- Rename send_message → send_command (sync wrapper)
- Simplify _ensure_connected: drop message param, always raise on failure
- Simplify async_send_command: always raise on connection errors
- Remove MIN_VERSION gating (minimum supported Core is now 2024.2+)
- Remove begin_backup/end_backup version guards for Core < 2022.1.0
- Add debug logging for silently ignored connection errors

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Wait for Core to come up before backup

This is crucial since the WebSocket command to Core now fails with the
new error handling if Core is not running yet.

* Wait for Core install job instead

* Use CLI to fetch jobs instead of Supervisor API

The Supervisor API needs authentication token, which we have not
available at this point in the workflow. Instead of fetching the token,
we can use the CLI, which is available in the container.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 09:20:23 +01:00
Stefan Agner
7ae14b09a7 Reload ingress tokens on addon update and rebuild (#6556)
When an addon updates from having no ingress to having ingress, the
ingress token map was never rebuilt. Both update() and rebuild() called
_check_ingress_port() to assign a dynamic port but skipped the
sys_ingress.reload() call that registers the token. This caused
Ingress.get() to return None, resulting in a 503 error.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 15:20:08 -05:00
Jan Čermák
cc2da7284a Adjust tests in builder workflow to use apps instead of addons in CLI (#6554)
The CLI calls in the tests are still using deprecated add-ons terminology,
causing deprecation warnings. Change the commands and flags to the new ones.
2026-02-11 17:10:25 +01:00
Stefan Agner
6877a8b210 Add D-Bus tolerant enum base classes to prevent crashes on unknown values (#6545)
* Add D-Bus tolerant enum base classes to prevent crashes on unknown values

D-Bus services (systemd, NetworkManager, RAUC, UDisks2) can introduce
new enum values at any time via OS updates. Standard Python enum
construction raises ValueError for unknown values, which would crash
the Supervisor.

Introduce DBusStrEnum and DBusIntEnum base classes that use Python's
_missing_ hook to create pseudo-members for unknown values. These
pseudo-members pass isinstance checks (satisfying typeguard), preserve
the original value, don't pollute __members__, and report unknown
values to Sentry (deduplicated per class+value) for observability.

Migrate 17 D-Bus enums in dbus/const.py and udisks2/const.py to the
new base classes. Enums only sent TO D-Bus (StopUnitMode, StartUnitMode,
etc.) are left unchanged. Remove the manual try/except workaround in
NetworkInterface.type now that DBusIntEnum handles it automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add explicit enum conversions for systemd-resolved D-Bus properties

The resolved properties (dns_over_tls, dns_stub_listener, dnssec, llmnr,
multicast_dns, resolv_conf_mode) were returning raw string values from
D-Bus without converting to their declared enum types. This would fail
runtime type checking with typeguard.

Now safe to add explicit conversions since these enums use DBusStrEnum,
which tolerates unknown values from D-Bus without crashing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Avoid blocking I/O in D-Bus enum Sentry reporting

Move sentry_sdk.capture_message out of the event loop by adding a
fire_and_forget_capture_message helper that offloads the call to the
executor when a running loop is detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Handle exceptions when reporting message to Sentry

* Narrow typing of reported values

Use str/int explicitly since that is what the two existing Enum classes
can actually report.

* Adjust test style

* Apply suggestions from code review

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 15:53:19 +01:00
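A minimal sketch of the `_missing_`-hook pattern described above; class and member names are illustrative, and the Sentry reporting and pseudo-member caching are omitted:

```python
from enum import IntEnum


class DBusIntEnum(IntEnum):
    """Int enum tolerant of unknown values received from D-Bus."""

    @classmethod
    def _missing_(cls, value):
        if not isinstance(value, int):
            return None
        # Build a pseudo-member: it passes isinstance checks and keeps the
        # original value, but is never registered in __members__.
        member = int.__new__(cls, value)
        member._name_ = f"UNKNOWN_{value}"
        member._value_ = value
        return member


class DeviceType(DBusIntEnum):
    ETHERNET = 1
    WIFI = 2


# A value an OS update might introduce, unknown to this Supervisor build.
unknown = DeviceType(30)
print(isinstance(unknown, DeviceType), int(unknown))
```

Standard enum construction would raise `ValueError` here; the hook turns that into a tolerated pseudo-member instead of a crash.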
Stefan Agner
4b9f62b14b Add test style guideline to copilot instructions (#6552)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 11:54:59 +01:00
Stefan Agner
4dd58342b8 Fix RestartPolicy type annotation for runtime type checking (#6546)
* Fix RestartPolicy type annotation for runtime type checking

The restart_policy property returned a plain str from the Docker API
instead of a RestartPolicy instance, causing TypeCheckError with
typeguard. Use explicit mapping via _restart_policy_from_model(),
consistent with the existing _container_state_from_model() pattern,
to always return a proper RestartPolicy enum member. Unknown values
from Docker are logged and default to RestartPolicy.NO.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Drop unnecessary _RESTART_POLICY_MAP

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:17:52 +01:00
Mike Degatano
825ff415e0 Migrate export image to aiodocker (#6534)
* Migrate export image to aiodocker

* Remove aiofiles and just use executor

* Fixes from feedback
2026-02-11 10:03:50 +01:00
Stefan Agner
7e91cfe01c Reformat pyproject.toml using taplo (#6550)
Reformat pyproject.toml using taplo. This aligns formatting with Home
Assistant Core.
2026-02-11 09:56:00 +01:00
dependabot[bot]
327a2fe6b1 Bump cryptography from 46.0.4 to 46.0.5 (#6551)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.4 to 46.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.4...46.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 00:18:13 -05:00
dependabot[bot]
28b7cbe16b Bump coverage from 7.13.3 to 7.13.4 (#6548)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.3 to 7.13.4.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.3...7.13.4)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-10 10:29:32 +01:00
Stefan Agner
3f1b3bb41f Remove deprecated loop parameter from WSClient (#6540)
The explicit event loop parameter passed to WSClient has been deprecated
since Python 3.8. Replace self._loop.create_future() with
asyncio.get_running_loop().create_future() and remove the loop parameter
from __init__, connect_with_auth, and its call site.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:44:25 +01:00
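The change can be sketched with a minimal illustrative client (not the actual WSClient):

```python
import asyncio


class WSClient:
    """Sketch: create response futures from the running loop, no stored loop."""

    def __init__(self) -> None:
        self._futures: dict[int, asyncio.Future] = {}

    def register(self, msg_id: int) -> asyncio.Future:
        # No deprecated loop parameter: ask the running loop directly.
        fut = asyncio.get_running_loop().create_future()
        self._futures[msg_id] = fut
        return fut

    def resolve(self, msg_id: int, result: object) -> None:
        self._futures.pop(msg_id).set_result(result)


async def roundtrip() -> object:
    client = WSClient()
    fut = client.register(1)
    client.resolve(1, {"ok": True})
    return await fut


result = asyncio.run(roundtrip())
print(result)
```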
Stefan Agner
6b974a5b88 Validate device option type before path conversion in addon options (#6542)
Add a type check for device options in AddonOptions._single_validate
to ensure the value is a string before passing it to Path(). When a
non-string value (e.g. a dict) is provided for a device option, this
now raises a proper vol.Invalid error instead of an unhandled TypeError.

Fixes SUPERVISOR-175H

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:44:10 +01:00
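The type check can be sketched as follows, using a plain `ValueError` in place of `vol.Invalid` to stay dependency-free (the function name is illustrative):

```python
from pathlib import Path


def validate_device_option(value: object) -> str:
    """Reject non-string device values before Path() conversion."""
    if not isinstance(value, str):
        # Without this guard, Path(value) on e.g. a dict raises an
        # unhandled TypeError instead of a proper validation error.
        raise ValueError(
            f"expected a device path string, got {type(value).__name__}"
        )
    return str(Path(value))


try:
    validate_device_option({"path": "/dev/ttyUSB0"})
    error = None
except ValueError as err:
    error = str(err)
print(error)
```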
Stefan Agner
66228f976d Use session.request() instead of getattr dispatch in HomeAssistantAPI (#6541)
Replace the dynamic `getattr(self.sys_websession, method)(...)` pattern
with the explicit `self.sys_websession.request(method, ...)` call. This
is type-safe and avoids runtime failures from typos in method names.

Also wrap the timeout parameter in `aiohttp.ClientTimeout` for
consistency with the typed `request()` signature.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-10 09:43:55 +01:00
Stefan Agner
74da5cdaf7 Fix builder workflow for workflow_dispatch events (#6547)
The retrieve-changed-files action only supports pull_request and push
events. Restrict the "Get changed files" step to those event types so
manual workflow_dispatch runs no longer fail. Also always build wheels
on manual dispatches since there are no changed files to compare against.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 23:12:46 +01:00
dependabot[bot]
b2baad7c28 Bump dbus-fast from 3.1.2 to 4.0.0 (#6515)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 3.1.2 to 4.0.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v3.1.2...v4.0.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 17:14:22 +01:00
Stefan Agner
db0bfa952f Remove frontend auto-update workflow (#6539)
Remove the automated frontend update workflow and version tracking file
as the frontend repository no longer builds supervisor-specific assets.
Frontend updates will now follow a different distribution mechanism.

Related to home-assistant/frontend#29132

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-09 11:38:56 +01:00
dependabot[bot]
b069358b93 Bump setuptools from 80.10.2 to 82.0.0 (#6537)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.10.2 to 82.0.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.10.2...v82.0.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 82.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 10:53:48 +01:00
Stefan Agner
c3b9b9535c Fix RAUC D-Bus type annotations for runtime type checking (#6532)
Replace ctypes integer types (c_uint32, c_uint64) with standard Python int
in SlotStatusDataType to satisfy typeguard runtime type checking. D-Bus
returns standard Python integers, not ctypes objects.

Also fix the mark() method return type from tuple[str, str] to list[str] to
match the actual D-Bus return value, and add missing optional fields
"bundle.hash" and "installed.transaction" to SlotStatusDataType.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-05 13:00:35 +01:00
Stefan Agner
0cd668ec77 Fix typeguard errors by explicitly converting IP addresses to strings (#6531)
* Fix environment variable type errors by converting IP addresses to strings

Environment variables must be strings, but IPv4Address and IPv4Network
objects were being passed directly to container environment dictionaries,
causing typeguard validation errors.

Changes:
- Convert IPv4Address objects to strings in homeassistant.py for
  SUPERVISOR and HASSIO environment variables
- Convert IPv4Network object to string in observer.py for
  NETWORK_MASK environment variable
- Update tests to expect string values instead of IP objects in
  environment dictionaries
- Remove unused ip_network import from test_observer.py

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Use explicit string conversion for extra_hosts IP addresses

Use the !s format specifier in the f-string to explicitly convert
IPv4Address objects to strings when building the ExtraHosts list.
While f-strings implicitly convert objects to strings, using !s makes
the conversion explicit and consistent with the environment variable
fixes in the previous commit.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-05 11:00:43 +01:00
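A minimal sketch of the conversions (the addresses are illustrative):

```python
from ipaddress import IPv4Address, IPv4Network

supervisor_ip = IPv4Address("172.30.32.2")
network = IPv4Network("172.30.32.0/23")

# Environment values must be plain strings; passing IPv4Address/IPv4Network
# objects directly trips runtime type checkers like typeguard.
environment = {
    "SUPERVISOR": str(supervisor_ip),
    "NETWORK_MASK": str(network),
}

# The !s conversion makes the str() call explicit inside f-strings too.
extra_hosts = [f"supervisor:{supervisor_ip!s}"]
print(environment["NETWORK_MASK"], extra_hosts[0])
```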
dependabot[bot]
d1cbb57c34 Bump sentry-sdk from 2.51.0 to 2.52.0 (#6530)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.51.0 to 2.52.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.51.0...2.52.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.52.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-05 09:28:13 +01:00
Stefan Agner
3d4849a3a2 Include Docker storage driver in Sentry reports (#6529)
Add the Docker storage driver (e.g., overlay2, vfs) to the context
information sent with Sentry error reports. This helps correlate
issues with specific storage backends and improves debugging of
Docker-related problems.

The storage driver is now included in both SETUP and RUNNING state
error reports under contexts.docker.storage_driver.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-05 09:27:51 +01:00
Tom Quist
4d8d44721d Fix MCP API proxy support for streaming and headers (#6461)
* Fix MCP API proxy support for streaming and headers

This commit fixes two issues with using the Core endpoint core/api/mcp
through the API proxy:

1. **Streaming support**: The proxy now detects text/event-stream responses
   and properly streams them instead of buffering all data. This is required
   for MCP's Server-Sent Events (SSE) transport.

2. **Header forwarding**: Added MCP-required headers to the forwarded headers:
   - Accept: Required for content negotiation
   - Last-Event-ID: Required for resuming broken SSE connections
   - Mcp-Session-Id: Required for session management across requests

The proxy now also preserves MCP-related response headers (Mcp-Session-Id)
and sets X-Accel-Buffering to "no" for streaming responses to prevent
buffering by intermediate proxies.

Tests added to verify:
- MCP headers are properly forwarded to Home Assistant
- Streaming responses (text/event-stream) are handled correctly
- Response headers are preserved

* Refactor: reuse stream logic for SSE responses (#3)

* Fix ruff format + cover streaming payload error

* Fix merge error

* Address review comments (headers / streaming proxy) (#4)

* Address review: header handling for streaming/non-streaming

* Forward MCP-Protocol-Version and Origin headers

* Do not forward Origin header through API proxy (#5)

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-04 17:28:11 +01:00
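A dependency-free sketch of the SSE detection and header allowlisting described above; the header set is assembled from the commit message, and the real proxy handles more than this:

```python
# MCP-relevant request headers to forward (lowercased for comparison).
FORWARD_REQUEST_HEADERS = frozenset(
    {"accept", "last-event-id", "mcp-session-id", "mcp-protocol-version"}
)


def is_sse(response_headers: dict[str, str]) -> bool:
    """Detect a Server-Sent Events response by its media type."""
    ctype = response_headers.get("Content-Type", "")
    return ctype.split(";", 1)[0].strip().lower() == "text/event-stream"


def filter_request_headers(headers: dict[str, str]) -> dict[str, str]:
    """Keep only the allowlisted headers; Origin is deliberately dropped."""
    return {k: v for k, v in headers.items() if k.lower() in FORWARD_REQUEST_HEADERS}


streaming = is_sse({"Content-Type": "text/event-stream; charset=utf-8"})
forwarded = filter_request_headers(
    {"Mcp-Session-Id": "abc", "Origin": "https://example.com"}
)
print(streaming, forwarded)
```

When `is_sse` is true, the proxy streams chunks instead of buffering the body, and sets `X-Accel-Buffering: no` on the response.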
Stefan Agner
a849050369 Improve CpuArch type safety with explicit conversions (#6524)
The CpuArch enum was being used inconsistently throughout the codebase,
with some code expecting enum values and other code expecting strings.
This caused type checking issues and potential runtime errors.

Changes:
- Fix match_base() to return CpuArch enum instead of str
- Add explicit string conversions using !s formatting where arch values
  are used in f-strings (build.py, model.py)
- Convert CpuArch to str explicitly in contexts requiring strings
  (docker/addon.py, misc/filter.py)
- Update all tests to use CpuArch enum values instead of strings
- Update test mocks to return CpuArch enum values

This ensures type consistency and improves MyPy type checking accuracy
across the architecture detection and management code.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 11:34:23 +01:00
dependabot[bot]
2ee7e22bbd Bump coverage from 7.13.2 to 7.13.3 (#6528)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.2 to 7.13.3.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.2...7.13.3)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-04 11:29:35 +01:00
dependabot[bot]
9f5def5fb7 Bump ruff from 0.14.14 to 0.15.0 (#6527)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.14 to 0.15.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.14...0.15.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-04 11:29:23 +01:00
Stefan Agner
df03b8fb68 Fix type annotations in backup and restore methods (#6523)
Update type hints throughout backup/restore code to match actual types:
- Change tarfile.TarFile to SecureTarFile for backup/restore methods
- Add None union types for Backup properties that can return None
- Fix exclude_database parameter to accept None in restore method
- Update API backup methods to handle None return from backup tasks
- Fix condition check for exclude_database to explicitly compare with True
- Add assertion to help type checker with indexed assignment

These changes improve type safety and resolve type checking issues
discovered by runtime type validation.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 11:18:06 +01:00
Stefan Agner
d1a576e711 Fix Docker Hub manifest fetching by using correct registry API endpoint (#6525)
The manifest fetcher was using docker.io as the registry API endpoint,
but Docker Hub's actual registry API is at registry-1.docker.io. When
trying to access https://docker.io/v2/..., requests were being redirected
to https://www.docker.com/ (the marketing site), which returned HTML
instead of JSON, causing manifest fetching to fail.

This matches exactly what Docker itself does internally - see
daemon/pkg/registry/config.go:49 where Docker hardcodes
DefaultRegistryHost = "registry-1.docker.io" for registry operations.

Changes:
- Add DOCKER_HUB_API constant for the actual API endpoint
- Add _get_api_endpoint() helper to translate docker.io to
  registry-1.docker.io for HTTP API calls
- Update _get_auth_token() and _fetch_manifest() to use the API endpoint
- Keep docker.io as the registry identifier for naming and credentials
- Add tests to verify the API endpoint translation

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 19:03:47 +01:00
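A minimal sketch of the endpoint translation this commit describes. The constant names mirror those mentioned in the commit message; the real Supervisor helper (`_get_api_endpoint()`) may differ in detail.

```python
# docker.io redirects plain HTTP requests to the marketing site, so
# manifest and token requests must go to registry-1.docker.io instead.
DOCKER_HUB = "docker.io"
DOCKER_HUB_API = "registry-1.docker.io"

def get_api_endpoint(registry: str) -> str:
    """Translate the docker.io registry identifier to the real API host."""
    return DOCKER_HUB_API if registry == DOCKER_HUB else registry

# docker.io stays the identifier for naming and credential lookup, but
# HTTP API calls use the translated host:
manifest_url = (
    f"https://{get_api_endpoint('docker.io')}/v2/library/alpine/manifests/latest"
)
```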
Mike Degatano
a122b5f1e9 Migrate info, events and container logs to aiodocker (#6514)
* Migrate info and events to aiodocker

* Migrate container logs to aiodocker

* Fix dns plugin loop test

* Fix mocking for docker info

* Fixes from feedback

* Harden monitor error handling

* Delete failing tests that were not useful
2026-02-03 18:36:41 +01:00
Wendelin
c2de83e80d Add .cache to backup exclusion list (#6483)
* Add frontend_development_pr to backup exclusion list

* update frontend dev pr folder name to frontend_development_artifacts

* Update backup exclusion list to replace frontend_development_artifacts with .cache
2026-02-03 15:33:13 +01:00
Stefan Agner
6806c1d58a Fix Docker exec exit code handling by using detach=False (#6520)
* Fix Docker exec exit code handling by using detach=False

When executing commands inside containers using `container_run_inside()`,
the exec metadata did not contain a valid exit code because `detach=True`
starts the exec in the background and returns immediately before completion.

Root cause: With `detach=True`, Docker's exec start() returns an awaitable
that yields output bytes. However, the await only waits for the HTTP/REST
call to complete, NOT for the actual exec command to finish. The command
continues running in the background after the HTTP response is received.
Calling `inspect()` immediately after returns `ExitCode: None` because
the exec hasn't completed yet.

Solution: Use `detach=False` which returns a Stream object that:
- Automatically waits for exec completion by reading from the stream
- Provides actual command output (not just empty bytes)
- Makes exit code immediately available after stream closes
- No polling needed

Changes:
- Switch from `detach=True` to `detach=False` in container_run_inside()
- Read output from stream using async context manager
- Add defensive validation to ensure ExitCode is never None
- Update tests to mock the Stream interface using AsyncMock
- Add debug log showing exit code after command execution

Fixes #6518

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Address review feedback

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 13:36:24 +01:00
Jan Čermák
a2ee2223fa Instruct agents to use relative imports within supervisor package (#6522)
Agents like Claude tend to use absolute imports, even though it's a pattern
we're trying to avoid. Adjust the instructions to discourage this.
2026-02-03 13:36:12 +01:00
Stefan Agner
7ad9a911e8 Add DELETE method support to /core/api proxy (#6521)
The Supervisor's /core/api proxy previously only supported GET and POST
methods, returning 405 Method Not Allowed for DELETE requests. This
prevented addons from calling Home Assistant Core REST API endpoints
that require DELETE methods, such as deleting automations, scripts,
or scenes.

The underlying proxy implementation already supported passing through
any HTTP method via request.method.lower(), so only the route
registration was needed.

Fixes #6509

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 11:51:59 +01:00
Stefan Agner
05a58d4768 Add exception handling for pull progress tracking errors (#6516)
* Add exception handling for pull progress tracking errors

Wrap progress event processing in try-except blocks to prevent image
pulls from failing due to progress tracking issues. This ensures that
progress updates, which are purely informational, never abort the
actual Docker pull operation.

Catches two categories of exceptions:
- ValueError: Includes "Cannot update a job that is done" errors that
  can occur under rare event combinations (similar to #6513)
- All other exceptions: Defensive catch-all for any unexpected errors
  in the progress tracking logic

All exceptions are logged with full context (layer ID, status, progress)
and sent to Sentry for tracking and debugging. The pull continues
successfully in all cases.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2026-02-03 11:12:37 +01:00
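A hedged sketch of the pattern this commit describes: progress updates are purely informational, so any error they raise is logged (and, in Supervisor, sent to Sentry) instead of aborting the pull. The function and event shapes here are illustrative, not the Supervisor API.

```python
import logging

_LOGGER = logging.getLogger(__name__)

def process_pull_event(progress, event: dict) -> None:
    """Feed one pull event into progress tracking, never letting it fail."""
    try:
        progress.update(event)
    except ValueError as err:
        # e.g. "Cannot update a job that is done" under rare event orders
        _LOGGER.warning(
            "Progress update failed for layer %s (%s): %s",
            event.get("id"), event.get("status"), err,
        )
    except Exception:  # pylint: disable=broad-except
        # Defensive catch-all: tracking must never break the pull itself
        _LOGGER.exception("Unexpected error in pull progress tracking")
```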
dependabot[bot]
c89d28ae11 Bump orjson from 3.11.6 to 3.11.7 (#6517)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.6 to 3.11.7.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.6...3.11.7)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-03 10:51:51 +01:00
Stefan Agner
79f9afb4c2 Fix port conflict tests for aiodocker 0.25.0 compatibility (#6519)
The aiodocker 0.25.0 upgrade (PR #6448) changed how DockerError handles
the message parameter. The library now extracts the message string from
Docker API JSON responses before passing it to DockerError, rather than
passing the entire dict.

The port conflict detection tests were written before this change and
incorrectly passed dicts to DockerError. This caused TypeErrors when
the port conflict detection code tried to match err.message with a
regex, expecting a string but receiving a dict.

Update both test_addon_start_port_conflict_error and
test_observer_start_port_conflict to pass message strings directly,
matching the real aiodocker 0.25.0 behavior.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-03 10:34:47 +01:00
Mike Degatano
11b754102c Map port conflict on start error into a known error (#6445)
* Map port conflict on start error into a known error

* Apply suggestions from code review

* Run ruff format

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-02 17:16:31 +01:00
Stefan Agner
6957341c3e Refactor Docker pull progress with registry manifest fetcher (#6379)
* Use count-based progress for Docker image pulls

Refactor Docker image pull progress to use a simpler count-based approach
where each layer contributes equally (100% / total_layers) regardless of
size. This replaces the previous size-weighted calculation that was
susceptible to progress regression.

The core issue was that Docker rate-limits concurrent downloads (~3 at a
time) and reports layer sizes only when downloading starts. With size-
weighted progress, large layers appearing late would cause progress to
drop dramatically (e.g., 59% -> 29%) as the total size increased.

The new approach:
- Each layer contributes equally to overall progress
- Per-layer progress: 70% download weight, 30% extraction weight
- Progress only starts after first "Downloading" event (when layer
  count is known)
- Always caps at 99% - job completion handles final 100%

This simplifies the code by moving progress tracking to a dedicated
module (pull_progress.py) and removing complex size-based scaling logic
that tried to account for unknown layer sizes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Exclude already-existing layers from pull progress calculation

Layers that already exist locally should not count towards download
progress since there's nothing to download for them. Only layers that
need pulling are included in the progress calculation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add registry manifest fetcher for size-based pull progress

Fetch image manifests directly from container registries before pulling
to get accurate layer sizes upfront. This enables size-weighted progress
tracking where each layer contributes proportionally to its byte size,
rather than equal weight per layer.

Key changes:
- Add RegistryManifestFetcher that handles auth discovery via
  WWW-Authenticate headers, token fetching with optional credentials,
  and multi-arch manifest list resolution
- Update ImagePullProgress to accept manifest layer sizes via
  set_manifest() and calculate size-weighted progress
- Fall back to count-based progress when manifest fetch fails
- Pre-populate layer sizes from manifest when creating layer trackers

The manifest fetcher supports ghcr.io, Docker Hub, and private
registries by using credentials from Docker config when available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Clamp progress to 100 to prevent floating point precision issues

Floating point arithmetic in weighted progress calculations can produce
values slightly above 100 (e.g., 100.00000000000001). This causes
validation errors when the progress value is checked.

Add min(100, ...) clamping to both size-weighted and count-based
progress calculations to ensure the result never exceeds 100.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use sys_websession for manifest fetcher instead of creating new session

Reuse the existing CoreSys websession for registry manifest requests
instead of creating a new aiohttp session. This improves performance
and follows the established pattern used throughout the codebase.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Make platform parameter required and warn on missing platform

- Make platform a required parameter in get_manifest() and _fetch_manifest()
  since it's always provided by the calling code
- Return None and log warning when requested platform is not found in
  multi-arch manifest list, instead of falling back to first manifest
  which could be the wrong architecture

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Log manifest fetch failures at warning level

Users will notice degraded progress tracking when manifest fetch fails,
so log at warning level to help diagnose issues.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add pylint disable comments for protected access in manifest tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Separate download_current and total_size updates in pull progress

Update download_current and total_size independently in the DOWNLOADING
handler. This ensures download_current is updated even when total is
not yet available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Reject invalid platform format in manifest selection

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-02 15:56:24 +01:00
Stefan Agner
77f3da7014 Disable Home Assistant watchdog during system shutdown (#6512)
During system shutdown (reboot/poweroff), the watchdog was incorrectly
detecting the Home Assistant Core container as failed and attempting to
restart it. This occurred because Docker was stopping all containers in
parallel with Supervisor's own shutdown sequence, causing the watchdog
to trigger while add-ons were still being stopped.

This led to an abrupt termination of Core before it could cleanly shut
down its SQLite database, resulting in a warning on the next startup:
"The system could not validate that the sqlite3 database was shutdown
cleanly".

The fix registers a supervisor state change listener that unregisters
the watchdog when entering any shutdown state (SHUTDOWN, STOPPING, or
CLOSE). This prevents restart attempts during both user-initiated
reboots (via API) and external shutdown signals (Docker SIGTERM,
console reboot commands).

Since SHUTDOWN, STOPPING, and CLOSE are terminal states with no reverse
transition back to RUNNING, no re-registration logic is needed.

Fixes #6511

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-31 17:01:05 +01:00
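A minimal sketch of the shutdown-state listener described above. The state names follow the commit message; the listener/watchdog wiring is illustrative, not the actual Supervisor code.

```python
from enum import Enum

class CoreState(Enum):
    RUNNING = "running"
    SHUTDOWN = "shutdown"
    STOPPING = "stopping"
    CLOSE = "close"

SHUTDOWN_STATES = {CoreState.SHUTDOWN, CoreState.STOPPING, CoreState.CLOSE}

class Watchdog:
    def __init__(self) -> None:
        self.registered = True

    def on_state_change(self, new_state: CoreState) -> None:
        # SHUTDOWN/STOPPING/CLOSE are terminal states with no transition
        # back to RUNNING, so no re-registration logic is needed.
        if new_state in SHUTDOWN_STATES:
            self.registered = False
```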
Stefan Agner
96d0593af2 Handle order issues in job progress updates during image pulls (#6513)
Catch ValueError exceptions with "Cannot update a job that is done"
during image pull progress updates. This error occurs intermittently
when progress events arrive after a job has completed. It is not clear
why this happens; perhaps the job gets prematurely marked as done, or
the pull events arrive in a different order than expected.

Rather than failing the entire pull operation, we now:
- Log a warning with context (layer ID, status, progress)
- Send the error to Sentry for tracking and investigation
- Continue with the pull operation

This prevents pull failures while gathering information to help
identify and fix the root cause of the race condition.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-30 17:06:53 -05:00
Stefan Agner
3db60170aa Fix flaky test_group_throttle_rate_limit race condition (#6504)
The test was failing intermittently in CI because concurrent async
operations in asyncio.gather() were getting slightly different
timestamps (microseconds apart) despite being inside a time_machine
context.

When test2.execute() calls were timestamped at start+2ms due to async
scheduling delays, they weren't cleaned up in the final test block
(cutoff = start+1ms), causing a false rate limit error.

Fix by using tick=False to completely freeze time during the gather,
ensuring all 4 calls get the exact same timestamp.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-30 17:17:50 +01:00
Mike Degatano
a5c3781f9d Migrate network interactions to aiodocker (#6505) 2026-01-30 15:34:12 +01:00
dependabot[bot]
2a4890e2b0 Bump aiodocker from 0.24.0 to 0.25.0 (#6448)
* Bump aiodocker from 0.24.0 to 0.25.0

Bumps [aiodocker](https://github.com/aio-libs/aiodocker) from 0.24.0 to 0.25.0.
- [Release notes](https://github.com/aio-libs/aiodocker/releases)
- [Changelog](https://github.com/aio-libs/aiodocker/blob/main/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiodocker/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: aiodocker
  dependency-version: 0.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update to new timeout configuration

* Fix pytest failure

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-01-30 09:39:06 +01:00
dependabot[bot]
8fa55bac9e Bump debugpy from 1.8.19 to 1.8.20 (#6508)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.19 to 1.8.20.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.19...v1.8.20)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.20
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:39:39 +01:00
dependabot[bot]
72003346f4 Bump actions/cache from 5.0.2 to 5.0.3 (#6507)
Bumps [actions/cache](https://github.com/actions/cache) from 5.0.2 to 5.0.3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](8b402f58fb...cdf6c1fa76)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:38:54 +01:00
dependabot[bot]
51c447a1e8 Bump orjson from 3.11.5 to 3.11.6 (#6506)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.5 to 3.11.6.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.5...3.11.6)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 08:38:29 +01:00
dependabot[bot]
a3f5675c96 Bump docker/login-action from 3.6.0 to 3.7.0 (#6502)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.6.0 to 3.7.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](5e57cd1181...c94ce9fb46)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-29 09:29:56 +01:00
dependabot[bot]
f7e4f6a1b2 Bump sentry-sdk from 2.50.0 to 2.51.0 (#6503)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.50.0 to 2.51.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.50.0...2.51.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.51.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-29 09:27:13 +01:00
Stefan Agner
a2db716a5f Check frontend availability after Home Assistant Core updates (#6311)
* Check frontend availability after Home Assistant Core updates

Add verification that the frontend is actually accessible at "/" after core
updates to ensure the web interface is serving properly, not just that the
API endpoints respond.

Previously, the update verification only checked API endpoints and whether
the frontend component was loaded. This could miss cases where the API is
responsive but the frontend fails to serve the UI.

Changes:
- Add check_frontend_available() method to HomeAssistantAPI that fetches
  the root path and verifies it returns HTML content
- Integrate frontend check into core update verification flow after
  confirming the frontend component is loaded
- Trigger automatic rollback if frontend is inaccessible after update
- Fix blocking I/O calls in rollback log file handling to use async
  executor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Avoid checking frontend if config data is None

* Improve pytest tests

* Make sure Core returns a valid config

* Remove Core version check in frontend availability test

The call site already makes sure that an actual Home Assistant Core
instance is running before calling the frontend availability test, so
the version check is redundant. Simplify the code by removing it and
update tests accordingly.

* Add test coverage for get_config

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-29 09:06:45 +01:00
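The frontend check added here, reduced to its decision logic so it runs without a live Home Assistant instance: the update is only considered healthy if "/" answers 200 with HTML. The helper name and exact criteria are assumptions based on the commit message.

```python
def frontend_available(status: int, content_type: str) -> bool:
    """Decide whether "/" served the frontend UI rather than an error
    page or a bare API response."""
    return status == 200 and content_type.startswith("text/html")
```

In the real flow, status and content type would come from an HTTP GET of the root path after the update, with a rollback triggered on a False result.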
David Rapan
641b205ee7 Add configurable interface route metric (#6447)
* Add route_metric attribute to IpProperties class

Signed-off-by: David Rapan <david@rapan.cz>

* Refactor dbus setting IP constants

Signed-off-by: David Rapan <david@rapan.cz>

* Add route metric

Signed-off-by: David Rapan <david@rapan.cz>

* Merge test_api_network_interface_info

Signed-off-by: David Rapan <david@rapan.cz>

* Add test case for route metric update

Signed-off-by: David Rapan <david@rapan.cz>

---------

Signed-off-by: David Rapan <david@rapan.cz>
2026-01-28 13:08:36 +01:00
AlCalzone
de02bc991a fix: pull missing images before running (#6500)
* fix: pull missing images before running

* add tests for auto-pull behavior
2026-01-28 13:08:03 +01:00
dependabot[bot]
80cf00f195 Bump cryptography from 46.0.3 to 46.0.4 (#6501)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.3 to 46.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.3...46.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-28 10:39:42 +01:00
AlCalzone
df8201ca33 Update get_docker_args() to return mounts not volumes (#6499)
* Update `get_docker_args()` to return `mounts` not `volumes`

* fix more mocks to return PurePaths
2026-01-27 15:00:33 -05:00
Mike Degatano
909a2dda2f Migrate (almost) all docker container interactions to aiodocker (#6489)
* Migrate all docker container interactions to aiodocker

* Remove containers_legacy since it's no longer used

* Add back remove color logic

* Revert accidental invert of conditional in setup_network

* Fix typos found by copilot

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Revert "Apply suggestions from code review"

This reverts commit 0a475433ea.

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-27 12:42:17 +01:00
Stefan Agner
515114fa69 Improve CI workflow wait logic and reduce duplication (#6496)
* Improve Supervisor startup wait logic in CI workflow

The 'Wait for Supervisor to come up' step was failing intermittently when
the Supervisor API wasn't immediately available. The original script relied
on bash's lenient error handling in command substitution, which could fail
unpredictably.

Changes:
- Use curl -f flag to properly handle HTTP errors
- Use jq -e for robust JSON validation and exit code handling
- Add explicit 5-minute timeout with elapsed time tracking
- Reduce log noise by only reporting progress every 15 seconds
- Add comprehensive error diagnostics on timeout:
  * Show last API response received
  * Dump last 50 lines of Supervisor logs
- Show startup time on success for performance monitoring

This makes the CI workflow more reliable and easier to debug when the
Supervisor fails to start.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Use YAML anchor to deduplicate wait step in CI workflow

The 'Wait for Supervisor to come up' step appears twice in the
run_supervisor job - once after starting and once after restarting.
Use a YAML anchor to define the step once and reference it on the
second occurrence.

This reduces duplication by 28 lines and makes future maintenance
easier by ensuring both wait steps remain identical.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-27 09:31:45 +01:00
dependabot[bot]
a0594c8a1f Bump coverage from 7.13.1 to 7.13.2 (#6494)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 08:28:41 +01:00
dependabot[bot]
df96fb711a Bump setuptools from 80.10.1 to 80.10.2 (#6495)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 08:28:27 +01:00
dependabot[bot]
d58d5769d4 Bump release-drafter/release-drafter from 6.1.1 to 6.2.0 (#6490)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-23 17:23:13 +01:00
dependabot[bot]
c4eda35184 Bump ruff from 0.14.13 to 0.14.14 (#6492)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-23 16:48:59 +01:00
dependabot[bot]
07a8350c40 Bump actions/checkout from 6.0.1 to 6.0.2 (#6491) 2026-01-23 09:28:51 +01:00
Jan Čermák
a8e5c4f1f2 Add missing syslog identifiers to default list for Host logs (#6485)
Add more syslog identifiers (most importantly containerd), extracted from real
systems, that were missing from the list. This should make the host logs contain
the same events as journalctl logs, minus audit logs and Docker container logs.
2026-01-22 10:05:52 +01:00
dependabot[bot]
cbaef62d67 Bump setuptools from 80.9.0 to 80.10.1 (#6488)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.9.0 to 80.10.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.9.0...v80.10.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.10.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:29:19 +01:00
dependabot[bot]
16cde8365f Bump peter-evans/create-pull-request from 8.0.0 to 8.1.0 (#6487)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 8.0.0 to 8.1.0.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](98357b18bf...c0f553fe54)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 8.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:28:53 +01:00
dependabot[bot]
afc0f37fef Bump actions/setup-python from 6.1.0 to 6.2.0 (#6486)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 6.1.0 to 6.2.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](83679a892e...a309ff8b42)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 08:28:22 +01:00
dependabot[bot]
7b0ea51ef6 Bump sentry-sdk from 2.49.0 to 2.50.0 (#6484)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.49.0 to 2.50.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.49.0...2.50.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.50.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 12:27:57 +01:00
dependabot[bot]
94daaf4e52 Bump release-drafter/release-drafter from 6.1.0 to 6.1.1 (#6482)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.1.0 to 6.1.1.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](b1476f6e6e...267d2e0268)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 6.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 12:27:11 +01:00
dependabot[bot]
2edd8d0407 Bump actions/cache from 5.0.1 to 5.0.2 (#6481)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 10:13:01 +01:00
dependabot[bot]
308589e1de Bump ruff from 0.14.11 to 0.14.13 (#6480)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 09:44:42 +01:00
Jan Čermák
ec0f7c2b9c Use query_dns instead of deprecated query after aiohttp 4.x update (#6478) 2026-01-14 15:22:12 +01:00
Jan Čermák
753021d4d5 Fix 'DockerMount is not JSON serializable' in DockerAPI.run_command (#6477) 2026-01-14 15:21:11 +01:00
Erich Gubler
3e3db696d3 Fix typo in log message when running commands in a container (#6472) 2026-01-14 13:28:15 +01:00
dependabot[bot]
4d708d34c8 Bump aiodns from 3.6.1 to 4.0.0 (#6473)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 22:43:50 +01:00
dependabot[bot]
e7a0559692 Bump types-requests from 2.32.4.20250913 to 2.32.4.20260107 (#6464)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 22:00:46 +01:00
dependabot[bot]
9ad1bf0f1a Bump sentry-sdk from 2.48.0 to 2.49.0 (#6467)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 22:51:15 +01:00
dependabot[bot]
0f30e2cb43 Bump ruff from 0.14.10 to 0.14.11 (#6468)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 22:48:34 +01:00
dependabot[bot]
1def9cc60e Bump types-docker from 7.1.0.20251202 to 7.1.0.20260109 (#6466)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 17:59:11 +01:00
dependabot[bot]
1f9cbb63ac Bump urllib3 from 2.6.2 to 2.6.3 (#6465)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.6.2 to 2.6.3.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.6.2...2.6.3)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.6.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-08 11:30:13 +01:00
Mike Degatano
1d1a8cdad3 Add API to force repository repair (#6439)
* Add API to force repository repair

* Fix inheritance for error

* Fix absolute import
2026-01-06 16:01:48 +01:00
Mike Degatano
5ebd200b1e Don't remove folder on Home Assistant restore (#6443) 2026-01-05 10:20:46 +01:00
Mike Degatano
1b8f51d5c7 Fix accidental absolute imports (#6446) 2026-01-05 10:19:02 +01:00
dependabot[bot]
6a29f92212 Bump astroid from 4.0.2 to 4.0.3 (#6459)
Bumps [astroid](https://github.com/pylint-dev/astroid) from 4.0.2 to 4.0.3.
- [Release notes](https://github.com/pylint-dev/astroid/releases)
- [Changelog](https://github.com/pylint-dev/astroid/blob/main/ChangeLog)
- [Commits](https://github.com/pylint-dev/astroid/compare/v4.0.2...v4.0.3)

---
updated-dependencies:
- dependency-name: astroid
  dependency-version: 4.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-05 10:18:06 +01:00
dependabot[bot]
61052f78df Bump getsentry/action-release from 3.4.0 to 3.5.0 (#6458)
Bumps [getsentry/action-release](https://github.com/getsentry/action-release) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/getsentry/action-release/releases)
- [Changelog](https://github.com/getsentry/action-release/blob/master/CHANGELOG.md)
- [Commits](128c5058bb...dab6548b3c)

---
updated-dependencies:
- dependency-name: getsentry/action-release
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-05 10:16:51 +01:00
dependabot[bot]
0c5f48e4af Bump aiohttp from 3.13.2 to 3.13.3 (#6456)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-03 08:26:28 -10:00
dependabot[bot]
4c7a0d5477 Bump gitpython from 3.1.45 to 3.1.46 (#6455) 2026-01-02 10:03:12 +01:00
dependabot[bot]
f384b9ce86 Bump backports-zstd from 1.2.0 to 1.3.0 (#6454) 2025-12-30 08:51:03 +01:00
dependabot[bot]
10327e73c9 Bump coverage from 7.13.0 to 7.13.1 (#6452)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-29 11:44:21 +01:00
dependabot[bot]
d4b1aa82ab Bump time-machine from 3.1.0 to 3.2.0 (#6438)
Bumps [time-machine](https://github.com/adamchainz/time-machine) from 3.1.0 to 3.2.0.
- [Changelog](https://github.com/adamchainz/time-machine/blob/main/docs/changelog.rst)
- [Commits](https://github.com/adamchainz/time-machine/compare/3.1.0...3.2.0)

---
updated-dependencies:
- dependency-name: time-machine
  dependency-version: 3.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-20 09:08:06 -05:00
Mike Degatano
7e39226f42 Remove cosign from container (#6442) 2025-12-20 08:56:36 -05:00
dependabot[bot]
1196343620 Bump voluptuous from 0.15.2 to 0.16.0 (#6440)
Bumps [voluptuous](https://github.com/alecthomas/voluptuous) from 0.15.2 to 0.16.0.
- [Release notes](https://github.com/alecthomas/voluptuous/releases)
- [Changelog](https://github.com/alecthomas/voluptuous/blob/master/CHANGELOG.md)
- [Commits](https://github.com/alecthomas/voluptuous/compare/0.15.2...0.16.0)

---
updated-dependencies:
- dependency-name: voluptuous
  dependency-version: 0.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-19 12:15:57 +01:00
dependabot[bot]
8a89beb85c Bump ruff from 0.14.9 to 0.14.10 (#6441)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.9 to 0.14.10.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.9...0.14.10)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-19 12:15:47 +01:00
Jan Čermák
75cf60f0d6 Bump uv to v0.9.18 (#6436)
Bump to latest version, full changelog at:
https://github.com/astral-sh/uv/blob/0.9.18/CHANGELOG.md
2025-12-18 16:21:59 +01:00
Jan Čermák
4a70cb0f4e Bump base Docker image to 2025.12.2 (#6437)
This mainly bumps Python to v3.13.11. For all changes, see:
- https://github.com/home-assistant/docker-base/releases/tag/2025.12.2
- https://github.com/home-assistant/docker-base/releases/tag/2025.12.1
- https://github.com/home-assistant/docker-base/releases/tag/2025.12.0
- https://github.com/home-assistant/docker-base/releases/tag/2025.11.3
- https://github.com/home-assistant/docker-base/releases/tag/2025.11.2
2025-12-18 16:21:43 +01:00
Jan Čermák
4b1a82562c Fix missing metadata of stopped add-ons after aiodocker migration (#6435)
After the aiodocker migration in #6415 some add-ons may have been missing IP
addresses because the metadata was retrieved for the container before it was
started and its network initialized. This manifested as some containers being
unreachable through ingress (e.g. 'Ingress error: Cannot connect to host
0.0.0.0:8099'), especially if they had been started manually after Supervisor
startup.

To fix it, simply move retrieval of the container metadata (which is then
persisted in DockerInterface._meta) to the end of the run method, where all
attributes should have correct values. This is similar to the flow before the
refactoring, where Container.reload() was called to update the metadata.
2025-12-17 23:01:28 +01:00
dependabot[bot]
7bb361304f Bump sentry-sdk from 2.47.0 to 2.48.0 (#6433)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.47.0 to 2.48.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.47.0...2.48.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.48.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-17 19:43:25 +01:00
dependabot[bot]
60e2f00388 Bump pre-commit from 4.5.0 to 4.5.1 (#6434)
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-version: 4.5.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-17 19:42:50 +01:00
dependabot[bot]
07c0d538d1 Bump debugpy from 1.8.18 to 1.8.19 (#6431)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.18 to 1.8.19.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.18...v1.8.19)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-16 16:14:50 +01:00
dependabot[bot]
29fdce5d79 Bump actions/download-artifact from 6.0.0 to 7.0.0 (#6429)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6.0.0 to 7.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](018cc2cf5b...37930b1c2a)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 10:00:35 +01:00
dependabot[bot]
50542f526c Bump actions/upload-artifact from 5.0.0 to 6.0.0 (#6428)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](330a01c490...b7c566a772)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 10:00:26 +01:00
dependabot[bot]
e08b777814 Bump mypy from 1.19.0 to 1.19.1 (#6430)
Bumps [mypy](https://github.com/python/mypy) from 1.19.0 to 1.19.1.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.19.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 10:00:17 +01:00
dependabot[bot]
f48f2fa21b Bump actions/cache from 4.3.0 to 5.0.1 (#6427)
Bumps [actions/cache](https://github.com/actions/cache) from 4.3.0 to 5.0.1.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](0057852bfa...9255dc7a25)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 10:00:03 +01:00
dependabot[bot]
ba8b8a5a26 Bump dessant/lock-threads from 5.0.1 to 6.0.0 (#6426)
Bumps [dessant/lock-threads](https://github.com/dessant/lock-threads) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/dessant/lock-threads/releases)
- [Changelog](https://github.com/dessant/lock-threads/blob/main/CHANGELOG.md)
- [Commits](1bf7ec2505...7266a7ce5c)

---
updated-dependencies:
- dependency-name: dessant/lock-threads
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 09:59:55 +01:00
dependabot[bot]
a9964e9906 Bump ruff from 0.14.8 to 0.14.9 (#6422)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.8 to 0.14.9.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.8...0.14.9)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 09:59:50 +01:00
dependabot[bot]
272670878a Bump urllib3 from 2.6.1 to 2.6.2 (#6421)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.6.1 to 2.6.2.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.6.1...2.6.2)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.6.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 09:59:21 +01:00
Mike Degatano
d23bc291d5 Migrate create container to aiodocker (#6415)
* Migrate create container to aiodocker

* Fix extra hosts transformation

* Env not Environment

* Fix tests

* Fixes from feedback

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-12-15 09:57:30 +01:00
dependabot[bot]
4fc6acfceb Bump debugpy from 1.8.17 to 1.8.18 (#6418)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.17 to 1.8.18.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.17...v1.8.18)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 09:55:28 +01:00
dependabot[bot]
4df0db9df4 Bump aiodns from 3.6.0 to 3.6.1 (#6423)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.6.0 to 3.6.1.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/aio-libs/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/v3.6.0...v3.6.1)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-version: 3.6.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-11 23:42:40 +01:00
dependabot[bot]
27c53048f6 Bump codecov/codecov-action from 5.5.1 to 5.5.2 (#6416) 2025-12-10 09:06:35 +01:00
dependabot[bot]
88ab5e9196 Bump peter-evans/create-pull-request from 7.0.11 to 8.0.0 (#6417) 2025-12-10 07:46:19 +01:00
dependabot[bot]
b7a7475d47 Bump coverage from 7.12.0 to 7.13.0 (#6414) 2025-12-09 07:24:07 +01:00
dependabot[bot]
5fe6b934e2 Bump urllib3 from 2.6.0 to 2.6.1 (#6413) 2025-12-09 07:14:39 +01:00
Hendrik Bergunde
a2d301ed27 Increase timeout waiting for Core API to work around 2025.12.x issues (#6404)
* Fix too-short timeouts for Synology NAS

With Home Assistant Core 2025.12.x updates available, the STARTUP_API_RESPONSE_TIMEOUT that the Supervisor is willing to wait (before assuming a startup failure and rolling back the entire Core update) seems to be too low on not-so-beefy hosts. The problem has been seen on Synology NAS machines running Home Assistant on the side (as in my case). I doubled the timeout from 3 to 6 minutes and the upgrade to Core 2025.12.1 works on my Synology DS723+. My update took 4min 56s -- hence the timeout increase was proven necessary.

* Fix tests for increased API Timeout

* Increase the timeout to 10 minutes

* Increase the timeout in tests

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-12-08 11:05:57 -05:00
Jan Čermák
cdef1831ba Add option to Core settings to enable duplicated logs (#6400)
Introduce a new option `duplicate_log_file` to the HA Core configuration that will
set an environment variable `HA_DUPLICATE_LOG_FILE=1` for the Core container if
enabled. This will serve as a flag for Core to enable the legacy log file
alongside the standard logging, which is handled by the Systemd Journal.
2025-12-08 16:35:56 +01:00
dependabot[bot]
b79130816b Bump aiodns from 3.5.0 to 3.6.0 (#6408) 2025-12-08 08:24:12 +01:00
dependabot[bot]
923bc2ba87 Bump backports-zstd from 1.1.0 to 1.2.0 (#6410)
Bumps [backports-zstd](https://github.com/rogdham/backports.zstd) from 1.1.0 to 1.2.0.
- [Changelog](https://github.com/Rogdham/backports.zstd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rogdham/backports.zstd/compare/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: backports-zstd
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:22:07 +01:00
dependabot[bot]
0f6b211151 Bump pytest from 9.0.1 to 9.0.2 (#6409)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 9.0.1 to 9.0.2.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/9.0.1...9.0.2)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:21:53 +01:00
dependabot[bot]
054c6d0365 Bump peter-evans/create-pull-request from 7.0.9 to 7.0.11 (#6406)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.9 to 7.0.11.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](84ae59a2cd...22a9089034)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:21:33 +01:00
dependabot[bot]
d920bde7e4 Bump orjson from 3.11.4 to 3.11.5 (#6407) 2025-12-08 07:24:53 +01:00
Stefan Agner
9862499751 Handle missing origin remote in git store pull operation (#6398)
Add AttributeError to the exception handler in the git pull operation.
This catches the case where a repository exists but has no 'origin'
remote configured, which can happen if the remote was renamed or
deleted by the user or due to repository corruption.

When this error occurs, it now creates a CORRUPT_REPOSITORY issue with
an EXECUTE_RESET suggestion, triggering the auto-fix mechanism to
re-clone the repository.

Fixes SUPERVISOR-69Z
Fixes SUPERVISOR-172C

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-07 00:38:38 +01:00
dependabot[bot]
287a58e004 Bump securetar from 2025.2.1 to 2025.12.0 (#6402)
* Bump securetar from 2025.2.1 to 2025.12.0

Bumps [securetar](https://github.com/pvizeli/securetar) from 2025.2.1 to 2025.12.0.
- [Release notes](https://github.com/pvizeli/securetar/releases)
- [Commits](https://github.com/pvizeli/securetar/compare/2025.2.1...2025.12.0)

---
updated-dependencies:
- dependency-name: securetar
  dependency-version: 2025.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Remove key derivation function from Supervisor

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-12-07 00:35:47 +01:00
dependabot[bot]
2993a23711 Bump urllib3 from 2.5.0 to 2.6.0 (#6401)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 22:29:34 +01:00
dependabot[bot]
3cae17cb79 Bump blockbuster from 1.5.25 to 1.5.26 (#6403)
Bumps [blockbuster](https://github.com/cbornet/blockbuster) from 1.5.25 to 1.5.26.
- [Release notes](https://github.com/cbornet/blockbuster/releases)
- [Commits](https://github.com/cbornet/blockbuster/compare/v1.5.25...v1.5.26)

---
updated-dependencies:
- dependency-name: blockbuster
  dependency-version: 1.5.26
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 17:06:24 +01:00
Jan Čermák
cd4e7f2530 Remove the option to revert to overlay2 driver (#6399)
OS Agent will no longer support migrating to the overlay2 driver due to reasons
explained in home-assistant/os-agent#245. Remove it from the Docker API as
well.
2025-12-05 14:45:56 +01:00
Stefan Agner
5d02b09a0d Fix addon options reset to defaults (#6397)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 13:53:51 +01:00
Stefan Agner
6f12d2cb6f Fix type annotations in addon options validation (#6392)
* Fix type annotations in addon options validation

The type annotations for validation methods in AddonOptions and
UiOptions were overly restrictive and did not match runtime behavior:

- _nested_validate_list and _nested_validate_dict receive user input
  that could be any type, with runtime isinstance checks to validate.
  Changed parameter types from list[Any]/dict[Any, Any] to Any.

- _ui_schema_element handles str, list, and dict values depending on
  the schema structure. Changed from str to the union type.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix type annotations in addon options validation

Add missing type annotations to AddonOptions and UiOptions classes:
- Add parameter and return type to AddonOptions.__call__
- Add explicit type annotation to UiOptions.coresys attribute
- Add return type to UiOptions._ui_schema_element method

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 12:25:38 +01:00
dependabot[bot]
f0db82d715 Bump ruff from 0.14.7 to 0.14.8 (#6396)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.7 to 0.14.8.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.7...0.14.8)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 12:25:24 +01:00
dependabot[bot]
4d9e2838fe Bump sentry-sdk from 2.46.0 to 2.47.0 (#6393) 2025-12-04 07:23:23 +01:00
Stefan Agner
382f0e8aef Disable timeout for Docker image pull operations (#6391)
* Disable timeout for Docker image pull operations

The aiodocker migration introduced a regression where image pulls could
timeout during slow downloads. The session-level timeout (900s total)
was being applied to pull operations, but docker-py explicitly sets
timeout=None for pulls, allowing them to run indefinitely.

When aiodocker receives timeout=None, it converts it to
ClientTimeout(total=None), which aiohttp treats as "no timeout"
(returns TimerNoop instead of enforcing a timeout).

This fixes TimeoutError exceptions that could occur during installation
on systems with slow network connections or when pulling large images.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix pytests

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 21:52:46 +01:00
Stefan Agner
3b3db2a9bc Fix type annotations in API modules (#6389)
* Fix incorrect type annotations in API modules

Correct several type annotation issues found during typeguard testing:

- Fix `options_config` return type from `None` to `dict[str, Any]`
  (method returns validation result dict)
- Fix `uninstall` return type from `Awaitable[None]` to `None` and
  remove unnecessary return statement (async methods already return
  awaitables)
- Fix `stats` return type from `dict[Any, str]` to `dict[str, Any]`
  (type arguments were reversed)
- Fix `stop` return type from `Awaitable[None]` to `None` (async
  method shouldn't declare Awaitable return type)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add missing type annotations to API methods

Add explicit return type annotations and request parameter types to
API endpoint methods that were missing them:

- backups.py: Add types to reload, download, upload methods
- docker.py: Add types to info, create_registry, remove_registry
- host.py: Add types to info, options, reboot, shutdown, reload,
  services, list_boots, list_identifiers, disk_usage; fix overly
  generic dict type
- services.py: Add types to list_services, set_service, get_service,
  del_service; add required imports
- store.py: Add types to add_repository, remove_repository
- supervisor.py: Add type to ping method

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 21:52:25 +01:00
Stefan Agner
7895bc9007 Fix return type hints for middleware methods (#6388)
* Fix return type hints for middleware methods

Adjust type hints in SecurityMiddleware to use StreamResponse instead
of Response. This correctly reflects that middleware handlers can return
any StreamResponse subclass, including FileResponse and other streaming
responses.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Improve type annotations in SecurityMiddleware

Add proper type parameters to improve type safety:
- Use Callable[[Request], Awaitable[StreamResponse]] for middleware
  handlers instead of bare Callable
- Add type parameter to re.Pattern[str] for ADDONS_ROLE_ACCESS

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 21:51:56 +01:00
Mike Degatano
81b7e54b18 Remove unknown errors from addons and auth (#6303)
* Remove unknown errors from addons

* Remove customized unknown error types

* Fix docker ratelimit exception and tests

* Fix stats test and add more for known errors

* Add defined error for when build fails

* Fixes from feedback

* Fix mypy issues

* Fix test failure due to rename

* Change auth reset error message
2025-12-03 18:11:51 +01:00
Stefan Agner
d203f20b7f Fix type annotations in AddonModel (#6387)
* Fix type annotations in AddonModel

Correct return type annotations for three properties in AddonModel
that were inconsistent with their actual return values:

- panel_admin: str -> bool
- with_tmpfs: str | None -> bool
- homeassistant_version: str | None -> AwesomeVersion | None

Based on the add-on schema _SCHEMA_ADDON_CONFIG in
supervisor/addons/validate.py.

Found while enabling typeguard for local testing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix docstrings

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 17:45:44 +01:00
Stefan Agner
fea8159ccf Fix typing issues in NetworkManager D-Bus integration (#6385)
* Fix typing for IPv6 addr-gen-mode and ip6-privacy settings

* Fix ConnectionStateType typing

* Rename ConnectionStateType to ConnectionState

The extra type suffix is unnecessary.

* Apply suggestions from code review

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-12-03 16:28:43 +01:00
Jan Čermák
aeb8e59da4 Move wheels build to the build job, use ARM runner for aarch64 build (#6384)
* Move wheels build to the build job, use ARM runner for aarch64 build

There is a problem that when wheels are not built, the dependent jobs are
skipped. This would require explicitly using `!cancelled() && !failure()` for
all jobs that depend on the build job. To avoid that, move the wheels build to
the build job. This means that we need to run it on a native ARM runner for
aarch64, but this isn't an issue as we'd like to do that anyway. Also renamed
the rather cryptic "requirements" output to "build_wheels", as that's what it
signifies.

* Remove explicit "shell: bash"

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-03 14:36:48 +01:00
dependabot[bot]
bee0a4482e Bump actions/checkout from 6.0.0 to 6.0.1 (#6382)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 10:40:03 +01:00
dependabot[bot]
37cc078144 Bump actions/stale from 10.1.0 to 10.1.1 (#6383)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 10:37:24 +01:00
Stefan Agner
20f993e891 Avoid getting changed files for releases (#6381)
The changed files GitHub Action is not available for release events, so
we skip that step and directly set the output to false for releases.
This restores how releases worked before #6374.
2025-12-02 20:23:37 +01:00
Stefan Agner
d220fa801f Await aiodocker import_image coroutine (#6378)
The aiodocker images.import_image() method returns a coroutine that
needs to be awaited, but the code was iterating over it directly,
causing "TypeError: 'coroutine' object is not iterable".

Fixes SUPERVISOR-13D9

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:11:06 -05:00
Stefan Agner
abeee95eb1 Fix blocking I/O in git repository pull operation (#6380)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 19:03:28 +01:00
Stefan Agner
50d31202ae Use Docker's official registry domain detection logic (#6360)
* Use Docker's official registry domain detection logic

Replace the custom IMAGE_WITH_HOST regex with a proper implementation
based on Docker's reference parser (vendor/github.com/distribution/
reference/normalize.go).

Changes:
- Change DOCKER_HUB from "hub.docker.com" to "docker.io" (official default)
- Add DOCKER_HUB_LEGACY for backward compatibility with "hub.docker.com"
- Add IMAGE_DOMAIN_REGEX and get_domain() function that properly detects:
  - localhost (with optional port)
  - Domains with "." (e.g., ghcr.io, 127.0.0.1)
  - Domains with ":" port (e.g., myregistry:5000)
  - IPv6 addresses (e.g., [::1]:5000)
- Update credential handling to support both docker.io and hub.docker.com
- Add comprehensive tests for domain detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Refactor Docker domain detection to utils module

Move get_domain function to supervisor/docker/utils.py and rename it
to get_domain_from_image for consistency with get_registry_for_image.
Use named group in the regex for better readability.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Rename domain to registry for consistency

Use consistent "registry" terminology throughout the codebase:
- Rename get_domain_from_image to get_registry_from_image
- Rename IMAGE_DOMAIN_REGEX to IMAGE_REGISTRY_REGEX
- Update named group from "domain" to "registry"
- Update all related comments and variable names

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:30:03 +01:00
Jan Čermák
bac072a985 Use unpublished local wheels during PR builds (#6374)
* Use unpublished local wheels during PR builds

Refactor wheel building to use the new `local-wheels-repo-path` and move wheels
building into a separate CI job. Wheels are only published on published (i.e.
release or merged dev), for PR builds they are passed as artifacts to the build
job instead.

* Address review comments

* Add trailing slash for wheels folder
* Always run the changed_files check to ensure build_wheels runs on publish
* Use full path for workflow and escape dots in changed files regexp
2025-12-02 14:08:07 +01:00
dependabot[bot]
2fc6a7dcab Bump types-docker from 7.1.0.20251129 to 7.1.0.20251202 (#6376) 2025-12-02 07:36:51 +01:00
Stefan Agner
fa490210cd Improve CpuArch type safety across codebase (#6372)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 19:56:05 +01:00
Jan Čermák
ba82eb0620 Clean up Dockerfile after dropping deprecated architectures (#6373)
Clean up unnecessary arguments that were needed for deprecated architectures,
bind-mount requirements file to reduce image bloat.
2025-12-01 19:43:19 +01:00
dependabot[bot]
11e3fa0bb7 Bump mypy from 1.18.2 to 1.19.0 (#6366)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-12-01 16:38:13 +01:00
dependabot[bot]
9466111d56 Bump types-docker from 7.1.0.20251127 to 7.1.0.20251129 (#6369)
* Bump types-docker from 7.1.0.20251127 to 7.1.0.20251129

Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20251127 to 7.1.0.20251129.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251129
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fix type errors for types-docker 7.1.0.20251129

- Cast stats() return to dict[str, Any] when stream=False since the
  type stubs return Iterator | dict but we know it's dict when not
  streaming
- Cast attach_socket() return to SocketIO for local Docker connections
  via Unix socket, as the type stubs include types for SSH and other
  connection methods
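The first of those casts can be sketched like this (an illustrative helper, not the actual Supervisor code):

```python
from typing import Any, cast


def stats_snapshot(container) -> dict[str, Any]:
    # With stream=False the Docker SDK returns a single dict, but the type
    # stubs declare an Iterator-or-dict union; cast() narrows the type for
    # mypy without any runtime cost.
    return cast(dict[str, Any], container.stats(stream=False))
```

`cast()` is purely a static-analysis hint, which is exactly what is wanted here: the runtime value is already a dict, only the stubs are too broad.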

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 15:08:39 +01:00
Stefan Agner
5ec3bea0dd Remove UP038 from ruff ignore list (#6370)
The UP038 rule was removed from ruff in version 0.13.0, causing a warning
when running ruff. Remove it from the ignore list to eliminate the warning.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 13:57:41 +01:00
dependabot[bot]
72159a0ae2 Bump pylint from 4.0.3 to 4.0.4 (#6368)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:31 +01:00
dependabot[bot]
0a7b26187d Bump ruff from 0.14.6 to 0.14.7 (#6367)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.6 to 0.14.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.6...0.14.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:14 +01:00
dependabot[bot]
2dc1f9224e Bump home-assistant/builder from 2025.09.0 to 2025.11.0 (#6363)
Bumps [home-assistant/builder](https://github.com/home-assistant/builder) from 2025.09.0 to 2025.11.0.
- [Release notes](https://github.com/home-assistant/builder/releases)
- [Commits](https://github.com/home-assistant/builder/compare/2025.09.0...2025.11.0)

---
updated-dependencies:
- dependency-name: home-assistant/builder
  dependency-version: 2025.11.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:40:38 +01:00
Mike Degatano
6302c7d394 Fix progress when using containerd snapshotter (#6357)
* Fix progress when using containerd snapshotter

* Add test for tiny image download under containerd-snapshotter

* Fix API tests after progress allocation change

* Fix test for auth changes

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 16:26:22 +01:00
Jan Čermák
f55fd891e9 Add API endpoint for migrating Docker storage driver (#6361)
Implement the Supervisor API for home-assistant/os-agent#238, adding the
possibility to schedule a migration to either the containerd overlayfs driver
or the graph overlay2 driver the next time the device is rebooted. While it
technically lives in the D-Bus OS interface, in Supervisor's abstraction it
makes more sense to put it under the `/docker` endpoints.
2025-11-27 16:02:39 +01:00
Stefan Agner
8a251e0324 Pass registry credentials to add-on build for private base images (#6356)
* Pass registry credentials to add-on build for private base images

When building add-ons that use a base image from a private registry,
the build would fail because credentials configured via the Supervisor
API were not passed to the Docker-in-Docker build container.

This fix:
- Adds get_docker_config_json() to generate a Docker config.json with
  registry credentials for the base image
- Creates a temporary config file and mounts it into the build container
  at /root/.docker/config.json so BuildKit can authenticate when pulling
  the base image
- Cleans up the temporary file after build completes
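The generated file follows Docker's standard `config.json` layout, where credentials are stored as base64 of `username:password` under an `auths` key. A minimal sketch (function name is illustrative; the real `get_docker_config_json()` pulls credentials from the Supervisor's stored registry settings):

```python
import base64
import json


def docker_config_json(registry: str, username: str, password: str) -> str:
    """Render a Docker config.json granting BuildKit access to one registry."""
    # Docker stores credentials as base64("username:password") under "auths".
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {registry: {"auth": token}}})
```

Mounting the result at `/root/.docker/config.json` inside the build container is enough for BuildKit to authenticate when pulling the private base image.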

Fixes #6354

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix pylint errors

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Refactor registry credential extraction into shared helper

Extract duplicate logic for determining which registry matches an image
into a shared `get_registry_for_image()` method in `DockerConfig`. This
method is now used by both `DockerInterface._get_credentials()` and
`AddonBuild.get_docker_config_json()`.

Move `DOCKER_HUB` and `IMAGE_WITH_HOST` constants to `docker/const.py`
to avoid circular imports between manager.py and interface.py.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* ruff format

* Document raises

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 11:10:17 +01:00
dependabot[bot]
62b7b8c399 Bump types-docker from 7.1.0.20251125 to 7.1.0.20251127 (#6358) 2025-11-27 07:22:43 +01:00
Stefan Agner
3c87704802 Handle update errors in automatic Supervisor update task (#6328)
Wrap the Supervisor auto-update call with suppress(SupervisorUpdateError)
to prevent unhandled exceptions from propagating. When an automatic update
fails, errors are already logged by the exception handlers, and there's no
meaningful recovery action the scheduler task can take.
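The pattern is a thin `contextlib.suppress` wrapper around the update call. A self-contained sketch (the exception class and update function here are stand-ins for the real Supervisor code):

```python
import asyncio
from contextlib import suppress


class SupervisorUpdateError(Exception):
    """Stand-in for the real Supervisor exception class."""


async def update_supervisor() -> None:
    raise SupervisorUpdateError("update channel unreachable")


async def scheduled_update() -> None:
    # The failure is already logged at the raise site; the scheduler task
    # simply swallows it instead of crashing with an unhandled exception.
    with suppress(SupervisorUpdateError):
        await update_supervisor()
```

`asyncio.run(scheduled_update())` completes quietly even though the inner call raised.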

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 14:11:51 -05:00
Stefan Agner
ae7700f52c Fix private registry authentication for aiodocker image pulls (#6355)
* Fix private registry authentication for aiodocker image pulls

After PR #6252 migrated image pulling from dockerpy to aiodocker,
private registry authentication stopped working. The old _docker_login()
method stored credentials in ~/.docker/config.json via dockerpy, but
aiodocker doesn't read that file - it requires credentials passed
explicitly via the auth parameter.

Changes:
- Remove unused _docker_login() method (dockerpy login was ineffective)
- Pass credentials directly to pull_image() via new auth parameter
- Add auth parameter to DockerAPI.pull_image() method
- Add unit tests for Docker Hub and custom registry authentication

Fixes #6345

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Ignore protected access in test

* Fix plug-in pull test

* Fix HA core tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 17:37:24 +01:00
Stefan Agner
e06e792e74 Fix type annotations for sentinel values in job manager (#6349)
Add `type[DEFAULT]` to type annotations for parameters that use the
DEFAULT sentinel value. This fixes runtime type checking failures with
typeguard when sentinel values are passed as arguments.

Use explicit type casts and restructured parameter passing to satisfy
mypy's type narrowing requirements. The sentinel pattern allows
distinguishing between "parameter not provided" and "parameter
explicitly set to None", which is critical for job management logic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 09:17:17 +01:00
dependabot[bot]
5f55ab8de4 Bump home-assistant/wheels from 2025.10.0 to 2025.11.0 (#6352) 2025-11-26 07:56:32 +01:00
Stefan Agner
ca521c24cb Fix typeguard error in API decorator wrapper functions (#6350)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 19:04:31 +01:00
dependabot[bot]
6042694d84 Bump dbus-fast from 2.45.1 to 3.1.2 (#6317)
* Bump dbus-fast from 2.45.1 to 3.1.2

Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.45.1 to 3.1.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.45.1...v3.1.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 3.1.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update unit tests for dbus-fast 3.1.2 changes

* Fix type annotations

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-11-25 16:25:06 +01:00
Stefan Agner
2b2aedae60 Fix D-Bus type annotation issues (#6348)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 14:47:48 +01:00
Jan Čermák
4b4afd081b Drop build for deprecated architectures and re-tag legacy build instead (#6347)
To ensure that e.g. airgapped devices running on deprecated archs can still
update the Supervisor when they come online, the version of Supervisor in the
version file must stay available for all architectures. Since the base images
will no longer exist for those archs, and to avoid the need to build from
current source, add a job that pulls the last available image, changes the
label in the metadata, and publishes it under the new tag. That way we get a
new image with a different SHA (compared to a plain re-tag), so the GHCR
metrics should reflect how many devices still pull these old images.
2025-11-25 12:42:01 +01:00
Stefan Agner
a3dca10fd8 Fix blocking I/O call in DBusManager.load() (#6346)
Wrap SOCKET_DBUS.exists() call in sys_run_in_executor to avoid blocking
os.stat() call in async context. This follows the same pattern already
used in supervisor/resolution/evaluations/dbus.py.

Fixes SUPERVISOR-11HC
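The fix boils down to handing the stat-backed check to the default executor. A minimal sketch (the socket path is illustrative):

```python
import asyncio
from pathlib import Path

SOCKET_DBUS = Path("/run/dbus/system_bus_socket")  # illustrative path


async def dbus_socket_available() -> bool:
    # Path.exists() performs a blocking os.stat(); run it in the default
    # executor so the event loop stays responsive.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, SOCKET_DBUS.exists)
```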

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 12:07:35 +01:00
Stefan Agner
d73682ee8a Fix blocking I/O in DockerInfo cpu realtime check (#6344)
The support_cpu_realtime property was performing blocking filesystem I/O
(Path.exists()) in async context, causing BlockingError e.g. when the
audio plugin started.

Changes:
- Convert support_cpu_realtime from property to dataclass field
- Make DockerInfo.new() async to properly handle I/O operations
- Run Path.exists() check in executor thread during initialization
- Store result as immutable field to avoid repeated filesystem access

Fixes SUPERVISOR-15WC

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 11:34:01 +01:00
Stefan Agner
032fa4cdc4 Add comment to explicit "used" calculation for disk usage API (#6340)
* Add explicit used calculation for disk usage API

Added explicit calculation for used disk space along with a comment
to clarify the reasoning behind the calculation method.

* Address review feedback
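The commit doesn't quote the comment, but the usual shape of such an explicit calculation looks like this (the function name and the stated reasoning are assumptions, not taken from the source):

```python
import shutil


def disk_usage_payload(path: str) -> dict[str, int]:
    total, _used, free = shutil.disk_usage(path)
    # Compute "used" explicitly as total - free instead of reporting the
    # tuple's used value, so that used + free always sums to total in the
    # API response.
    return {"total": total, "free": free, "used": total - free}
```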
2025-11-25 11:00:46 +02:00
dependabot[bot]
7244e447ab Bump actions/setup-python from 6.0.0 to 6.1.0 (#6341)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:29 +01:00
dependabot[bot]
603ba57846 Bump types-docker from 7.1.0.20251009 to 7.1.0.20251125 (#6342)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:06 +01:00
dependabot[bot]
0ff12abdf4 Bump sentry-sdk from 2.45.0 to 2.46.0 (#6343)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:19:32 +01:00
186 changed files with 10339 additions and 4335 deletions


@@ -1,6 +1,7 @@
# General files
.git
.github
.gitkeep
.devcontainer
.vscode


@@ -91,8 +91,8 @@ availability.
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- **Compatibility**: Python 3.14+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
@@ -233,6 +233,8 @@ async def backup_full(self, request: web.Request) -> dict[str, Any]:
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
- **Style**: Use plain `test_` functions, not `Test*` classes — test classes are
considered legacy style in this project
### Error Handling
@@ -276,12 +278,14 @@ Always run the pre-commit hooks at the end of code editing.
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
- Use relative imports within the `supervisor/` package (e.g., `from ..docker.manager import ExecReturn`)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
- Absolute imports within the `supervisor/` package (e.g., `from supervisor.docker.manager import ...`) - use relative imports instead
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and


@@ -5,45 +5,53 @@ categories:
- title: ":boom: Breaking Changes"
label: "breaking-change"
- title: ":wrench: Build"
label: "build"
- title: ":boar: Chore"
label: "chore"
- title: ":sparkles: New Features"
label: "new-feature"
- title: ":zap: Performance"
label: "performance"
- title: ":recycle: Refactor"
label: "refactor"
- title: ":green_heart: CI"
label: "ci"
- title: ":bug: Bug Fixes"
label: "bugfix"
- title: ":white_check_mark: Test"
- title: ":gem: Style"
label: "style"
- title: ":package: Refactor"
label: "refactor"
- title: ":rocket: Performance"
label: "performance"
- title: ":rotating_light: Test"
label: "test"
- title: ":hammer_and_wrench: Build"
label: "build"
- title: ":gear: CI"
label: "ci"
- title: ":recycle: Chore"
label: "chore"
- title: ":wastebasket: Revert"
label: "revert"
- title: ":arrow_up: Dependency Updates"
label: "dependencies"
collapse-after: 1
include-labels:
- "breaking-change"
- "build"
- "chore"
- "performance"
- "refactor"
- "new-feature"
- "bugfix"
- "dependencies"
- "style"
- "refactor"
- "performance"
- "test"
- "build"
- "ci"
- "chore"
- "revert"
- "dependencies"
template: |


@@ -33,7 +33,10 @@ on:
- setup.py
env:
DEFAULT_PYTHON: "3.13"
DEFAULT_PYTHON: "3.14.3"
COSIGN_VERSION: "v2.5.3"
CRANE_VERSION: "v0.20.7"
CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
@@ -50,10 +53,10 @@ jobs:
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.version.outputs.channel }}
publish: ${{ steps.version.outputs.publish }}
requirements: ${{ steps.requirements.outputs.changed }}
build_wheels: ${{ steps.requirements.outputs.build_wheels }}
steps:
- name: Checkout the repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
@@ -69,20 +72,28 @@ jobs:
- name: Get changed files
id: changed_files
if: steps.version.outputs.publish == 'false'
if: github.event_name == 'pull_request' || github.event_name == 'push'
uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
- name: Check if requirements files changed
id: requirements
run: |
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
# No wheels build necessary for releases
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
# Always build wheels for manual dispatches
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|build\.yaml|\.github/workflows/builder\.yml) ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
else
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
fi
build:
name: Build ${{ matrix.arch }} supervisor
needs: init
runs-on: ubuntu-latest
runs-on: ${{ matrix.runs-on }}
permissions:
contents: read
id-token: write
@@ -90,34 +101,66 @@ jobs:
strategy:
matrix:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
include:
- runs-on: ubuntu-24.04
- runs-on: ubuntu-24.04-arm
arch: aarch64
env:
WHEELS_ABI: cp314
WHEELS_TAG: musllinux_1_2
WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
WHEELS_SKIP_BINARY: aiohttp
steps:
- name: Checkout the repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Write env-file
if: needs.init.outputs.requirements == 'true'
- name: Write env-file for wheels build
if: needs.init.outputs.build_wheels == 'true'
run: |
(
# Fix out of memory issues with rust
echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
) > .env_file
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2025.10.0
- name: Build and publish wheels
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'true'
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
with:
abi: cp313
tag: musllinux_1_2
arch: ${{ matrix.arch }}
wheels-key: ${{ secrets.WHEELS_KEY }}
apk: "libffi-dev;openssl-dev;yaml-dev"
skip-binary: aiohttp
abi: ${{ env.WHEELS_ABI }}
tag: ${{ env.WHEELS_TAG }}
arch: ${{ matrix.arch }}
apk: ${{ env.WHEELS_APK_DEPS }}
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
env-file: true
requirements: "requirements.txt"
- name: Build local wheels
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
with:
wheels-host: ""
wheels-user: ""
wheels-key: ""
local-wheels-repo-path: "wheels/"
abi: ${{ env.WHEELS_ABI }}
tag: ${{ env.WHEELS_TAG }}
arch: ${{ matrix.arch }}
apk: ${{ env.WHEELS_APK_DEPS }}
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
env-file: true
requirements: "requirements.txt"
- name: Upload local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: wheels-${{ matrix.arch }}
path: wheels
retention-days: 1
- name: Set version
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version@master
@@ -126,7 +169,7 @@ jobs:
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
@@ -134,7 +177,7 @@ jobs:
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"
cosign-release: ${{ env.COSIGN_VERSION }}
- name: Install dirhash and calc hash
if: needs.init.outputs.publish == 'true'
@@ -150,7 +193,7 @@ jobs:
- name: Login to GitHub Container Registry
if: needs.init.outputs.publish == 'true'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -162,8 +205,9 @@ jobs:
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
image: ${{ matrix.arch }}
args: |
$BUILD_ARGS \
--${{ matrix.arch }} \
@@ -173,12 +217,12 @@ jobs:
version:
name: Update version
needs: ["init", "run_supervisor"]
needs: ["init", "run_supervisor", "retag_deprecated"]
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Initialize git
if: needs.init.outputs.publish == 'true'
@@ -203,12 +247,19 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Download local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: wheels-amd64
path: wheels
# home-assistant/builder doesn't support sha pinning
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
args: |
--test \
@@ -242,14 +293,68 @@ jobs:
- name: Start the Supervisor
run: docker start hassio_supervisor
- name: Wait for Supervisor to come up
- &wait_for_supervisor
name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
ping="error"
while [ "$ping" != "ok" ]; do
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
sleep 5
until SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.Networks.hassio.IPAddress}}' hassio_supervisor 2>/dev/null) && \
[ -n "$SUPERVISOR" ] && [ "$SUPERVISOR" != "<no value>" ]; do
echo "Waiting for network configuration..."
sleep 1
done
echo "Waiting for Supervisor API at http://${SUPERVISOR}/supervisor/ping"
timeout=300
elapsed=0
while [ $elapsed -lt $timeout ]; do
if response=$(curl -sSf "http://${SUPERVISOR}/supervisor/ping" 2>/dev/null); then
if echo "$response" | jq -e '.result == "ok"' >/dev/null 2>&1; then
echo "Supervisor is up! (took ${elapsed}s)"
exit 0
fi
fi
if [ $((elapsed % 15)) -eq 0 ]; then
echo "Still waiting... (${elapsed}s/${timeout}s)"
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "ERROR: Supervisor failed to start within ${timeout}s"
echo "Last response: $response"
echo "Checking supervisor logs..."
docker logs --tail 50 hassio_supervisor
exit 1
# Wait for Core to come up so subsequent steps (backup, addon install) succeed.
# On first startup, Supervisor installs Core via the "home_assistant_core_install"
# job (which pulls the image and then starts Core). Jobs with cleanup=True are
# removed from the jobs list once done, so we poll until it's gone.
- name: Wait for Core to be started
run: |
echo "Waiting for Home Assistant Core to be installed and started..."
timeout=300
elapsed=0
while [ $elapsed -lt $timeout ]; do
jobs=$(docker exec hassio_cli ha jobs info --no-progress --raw-json | jq -r '.data.jobs[] | select(.name == "home_assistant_core_install" and .done == false) | .name' 2>/dev/null)
if [ -z "$jobs" ]; then
echo "Home Assistant Core install/start complete (took ${elapsed}s)"
exit 0
fi
if [ $((elapsed % 15)) -eq 0 ]; then
echo "Core still installing... (${elapsed}s/${timeout}s)"
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "ERROR: Home Assistant Core failed to install/start within ${timeout}s"
docker logs --tail 50 hassio_supervisor
exit 1
- name: Check the Supervisor
run: |
@@ -265,28 +370,28 @@ jobs:
exit 1
fi
- name: Check the Store / Addon
- name: Check the Store / App
run: |
echo "Install Core SSH Add-on"
test=$(docker exec hassio_cli ha addons install core_ssh --no-progress --raw-json | jq -r '.result')
echo "Install Core SSH app"
test=$(docker exec hassio_cli ha apps install core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
echo "Start Core SSH Add-on"
test=$(docker exec hassio_cli ha addons start core_ssh --no-progress --raw-json | jq -r '.result')
echo "Start Core SSH app"
test=$(docker exec hassio_cli ha apps start core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure its state is started
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
@@ -300,9 +405,9 @@ jobs:
fi
echo "slug=$(echo $test | jq -r '.data.slug')" >> "$GITHUB_OUTPUT"
- name: Uninstall SSH add-on
- name: Uninstall SSH app
run: |
test=$(docker exec hassio_cli ha addons uninstall core_ssh --no-progress --raw-json | jq -r '.result')
test=$(docker exec hassio_cli ha apps uninstall core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
@@ -314,30 +419,23 @@ jobs:
exit 1
fi
- name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
ping="error"
while [ "$ping" != "ok" ]; do
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
sleep 5
done
- *wait_for_supervisor
- name: Restore SSH add-on from backup
- name: Restore SSH app from backup
run: |
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --addons core_ssh --no-progress --raw-json | jq -r '.result')
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --app core_ssh --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
# Make sure it actually installed
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
if [[ "$test" == "null" ]]; then
exit 1
fi
# Make sure its state is started
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
if [ "$test" != "started" ]; then
exit 1
fi
@@ -352,3 +450,50 @@ jobs:
- name: Get supervisor logs on failure
if: ${{ cancelled() || failure() }}
run: docker logs hassio_supervisor
retag_deprecated:
needs: ["build", "init"]
name: Re-tag deprecated ${{ matrix.arch }} images
if: needs.init.outputs.publish == 'true'
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
packages: write
strategy:
matrix:
arch: ["armhf", "armv7", "i386"]
env:
# Last available release for deprecated architectures
FROZEN_VERSION: "2025.11.5"
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install Cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: ${{ env.COSIGN_VERSION }}
- name: Install crane
run: |
curl -sLO https://github.com/google/go-containerregistry/releases/download/${{ env.CRANE_VERSION }}/go-containerregistry_Linux_x86_64.tar.gz
echo "${{ env.CRANE_SHA256 }} go-containerregistry_Linux_x86_64.tar.gz" | sha256sum -c -
tar xzf go-containerregistry_Linux_x86_64.tar.gz crane
sudo mv crane /usr/local/bin/
- name: Re-tag deprecated image with updated version label
run: |
crane auth login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.GITHUB_TOKEN }}
crane mutate \
--label io.hass.version=${{ needs.init.outputs.version }} \
--tag ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }} \
ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ env.FROZEN_VERSION }}
- name: Sign image with Cosign
run: |
cosign sign --yes ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }}


@@ -8,7 +8,7 @@ on:
pull_request: ~
env:
DEFAULT_PYTHON: "3.13"
DEFAULT_PYTHON: "3.14.3"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
@@ -26,15 +26,15 @@ jobs:
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -48,7 +48,7 @@ jobs:
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -68,15 +68,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -88,7 +88,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -111,15 +111,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -131,7 +131,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -154,7 +154,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,15 +169,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -189,7 +189,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -213,15 +213,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -233,7 +233,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -257,15 +257,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -293,9 +293,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -307,7 +307,7 @@ jobs:
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: >-
@@ -318,7 +318,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: .mypy_cache
key: >-
@@ -339,9 +339,9 @@ jobs:
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -351,7 +351,7 @@ jobs:
cosign-release: "v2.5.3"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coverage
path: .coverage
@@ -398,15 +398,15 @@ jobs:
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: venv
key: |
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
name: coverage
path: coverage/
@@ -428,4 +428,4 @@ jobs:
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2


@@ -9,7 +9,7 @@ jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@1bf7ec25051fe7c00bdd17e6a7cf3d7bfb7dc771 # v5.0.1
- uses: dessant/lock-threads@7266a7ce5c1df01b1c6db85bf8cd86c737dadbe7 # v6.0.0
with:
github-token: ${{ github.token }}
issue-inactive-days: "30"


@@ -11,7 +11,7 @@ jobs:
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
@@ -36,7 +36,7 @@ jobs:
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
uses: release-drafter/release-drafter@b1476f6e6eb133afa41ed8589daba6dc69b4d3f5 # v6.1.0
uses: release-drafter/release-drafter@6db134d15f3909ccc9eefd369f02bd1e9cffdf97 # v6.2.0
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}


@@ -10,9 +10,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Sentry Release
uses: getsentry/action-release@128c5058bbbe93c8e02147fe0a9c713f166259a6 # v3.4.0
uses: getsentry/action-release@dab6548b3c03c4717878099e43782cf5be654289 # v3.5.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}


@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30


@@ -1,82 +0,0 @@
name: Update frontend
on:
schedule: # once a day
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
check-version:
runs-on: ubuntu-latest
outputs:
skip: ${{ steps.check_version.outputs.skip || steps.check_existing_pr.outputs.skip }}
current_version: ${{ steps.check_version.outputs.current_version }}
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
with:
owner: home-assistant
repo: frontend
- name: Check if version is up to date
id: check_version
run: |
current_version="$(cat .ha-frontend-version)"
latest_version="${{ steps.latest_frontend_version.outputs.latest_tag }}"
echo "current_version=${current_version}" >> $GITHUB_OUTPUT
echo "LATEST_VERSION=${latest_version}" >> $GITHUB_ENV
if [[ ! "$current_version" < "$latest_version" ]]; then
echo "Frontend version is up to date"
echo "skip=true" >> $GITHUB_OUTPUT
fi
- name: Check if there is no open PR with this version
if: steps.check_version.outputs.skip != 'true'
id: check_existing_pr
env:
GH_TOKEN: ${{ github.token }}
run: |
PR=$(gh pr list --state open --base main --json title --search "Update frontend to version $LATEST_VERSION")
if [[ "$PR" != "[]" ]]; then
echo "Skipping - There is already a PR open for version $LATEST_VERSION"
echo "skip=true" >> $GITHUB_OUTPUT
fi
create-pr:
runs-on: ubuntu-latest
needs: check-version
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*
- name: Update version file
run: |
echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
- name: Download release assets
uses: robinraju/release-downloader@daf26c55d821e836577a15f77d86ddc078948b05 # v1.12
with:
repository: 'home-assistant/frontend'
tag: ${{ needs.check-version.outputs.latest_version }}
fileName: home_assistant_frontend_supervisor-${{ needs.check-version.outputs.latest_version }}.tar.gz
extract: true
out-file-path: supervisor/api/panel/
- name: Remove release assets archive
run: |
rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
- name: Create PR
uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
with:
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
branch: autoupdate-frontend
base: main
draft: true
sign-commits: true
title: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
body: >
Update frontend from ${{ needs.check-version.outputs.current_version }} to
[${{ needs.check-version.outputs.latest_version }}](https://github.com/home-assistant/frontend/releases/tag/${{ needs.check-version.outputs.latest_version }})

.gitignore vendored

@@ -24,6 +24,9 @@ var/
.installed.cfg
*.egg
# Local wheels
wheels/**/*.whl
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -102,4 +105,4 @@ ENV/
/.dmypy.json
# Mac
.DS_Store
.DS_Store


@@ -1 +0,0 @@
20250925.1


@@ -7,11 +7,6 @@ ENV \
CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 \
UV_SYSTEM_PYTHON=true
ARG \
COSIGN_VERSION \
BUILD_ARCH \
QEMU_CPU
# Install base
WORKDIR /usr/src
RUN \
@@ -27,20 +22,22 @@ RUN \
openssl \
yaml \
\
&& curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
&& chmod a+x /usr/bin/cosign \
&& pip3 install uv==0.8.9
&& pip3 install uv==0.9.18
# Install requirements
COPY requirements.txt .
RUN \
if [ "${BUILD_ARCH}" = "i386" ]; then \
setarch="linux32"; \
--mount=type=bind,source=./requirements.txt,target=/usr/src/requirements.txt \
--mount=type=bind,source=./wheels,target=/usr/src/wheels \
if ls /usr/src/wheels/musllinux/* >/dev/null 2>&1; then \
LOCAL_WHEELS=/usr/src/wheels/musllinux; \
echo "Using local wheels from: $LOCAL_WHEELS"; \
else \
setarch=""; \
fi \
&& ${setarch} uv pip install --compile-bytecode --no-cache --no-build -r requirements.txt \
&& rm -f requirements.txt
LOCAL_WHEELS=; \
echo "No local wheels found"; \
fi && \
uv pip install --compile-bytecode --no-cache --no-build \
-r requirements.txt \
${LOCAL_WHEELS:+--find-links $LOCAL_WHEELS}
# Install Home Assistant Supervisor
COPY . supervisor


@@ -1,15 +1,10 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.11.1
armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22-2025.11.1
armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22-2025.11.1
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.11.1
i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22-2025.11.1
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.14-alpine3.22-2026.02.0
amd64: ghcr.io/home-assistant/amd64-base-python:3.14-alpine3.22-2026.02.0
cosign:
base_identity: https://github.com/home-assistant/docker-base/.*
identity: https://github.com/home-assistant/supervisor/.*
args:
COSIGN_VERSION: 2.5.3
labels:
io.hass.type: supervisor
org.opencontainers.image.title: Home Assistant Supervisor


@@ -1,5 +1,5 @@
[build-system]
requires = ["setuptools~=80.9.0", "wheel~=0.46.1"]
requires = ["setuptools~=82.0.0", "wheel~=0.46.1"]
build-backend = "setuptools.build_meta"
[project]
@@ -9,7 +9,7 @@ license = { text = "Apache-2.0" }
description = "Open-source private cloud os for Home-Assistant based on HassOS"
readme = "README.md"
authors = [
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
requires-python = ">=3.13.0"
@@ -53,154 +53,154 @@ good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]
# too-few-* - same as too-many-*
# unused-argument - generic callbacks and setup methods create a lot of warnings
disable = [
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY203
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY203
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
]
[tool.pylint.REPORTS]
@@ -226,122 +226,120 @@ log_date_format = "%Y-%m-%d %H:%M:%S"
asyncio_default_fixture_loop_scope = "function"
asyncio_mode = "auto"
filterwarnings = [
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
]
markers = [
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
]
[tool.ruff]
lint.select = [
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"B904", # Use raise from to specify exception cause
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import concentions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY203", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"B904", # Use raise from to specify exception cause
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import concentions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY203", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
]
lint.ignore = [
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
# be ignored anymore without warnings.
# https://github.com/astral-sh/ruff/issues/7491
# "PLC1901", # Lots of false positives
# False positives https://github.com/astral-sh/ruff/issues/5386
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
"PLR0911", # Too many return statements ({returns} > {max_returns})
"PLR0912", # Too many branches ({branches} > {max_branches})
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
"PLR0915", # Too many statements ({statements} > {max_statements})
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
"UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",
"E111",
"E114",
"E117",
"D206",
"D300",
"Q000",
"Q001",
"Q002",
"Q003",
"COM812",
"COM819",
"ISC001",
"ISC002",
# Disabled because ruff does not understand type of __all__ generated by a function
"PLE0605",
]
[tool.ruff.lint.flake8-import-conventions.extend-aliases]
@@ -356,11 +354,11 @@ fixture-parentheses = false
[tool.ruff.lint.isort]
force-sort-within-sections = true
section-order = [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
]
forced-separate = ["tests"]
known-first-party = ["supervisor", "tests"]


@@ -1,32 +1,30 @@
aiodns==3.5.0
aiodocker==0.24.0
aiohttp==3.13.2
aiodns==4.0.0
aiodocker==0.26.0
aiohttp==3.13.3
atomicwrites-homeassistant==1.4.1
attrs==25.4.0
awesomeversion==25.8.0
backports.zstd==1.1.0
blockbuster==1.5.25
blockbuster==1.5.26
brotli==1.2.0
ciso8601==2.3.3
colorlog==6.10.1
cpe==1.3.1
cryptography==46.0.3
debugpy==1.8.17
cryptography==46.0.5
debugpy==1.8.20
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.45
gitpython==3.1.46
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.4
orjson==3.11.7
pulsectl==24.12.0
pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
securetar==2025.2.1
sentry-sdk==2.45.0
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.45.1
securetar==2025.12.0
sentry-sdk==2.53.0
setuptools==82.0.0
voluptuous==0.16.0
dbus-fast==4.0.0
zlib-fast==0.2.1


@@ -1,16 +1,15 @@
astroid==4.0.2
coverage==7.12.0
mypy==1.18.2
pre-commit==4.5.0
pylint==4.0.3
astroid==4.0.3
coverage==7.13.4
mypy==1.19.1
pre-commit==4.5.1
pylint==4.0.5
pytest-aiohttp==1.1.0
pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==9.0.1
ruff==0.14.6
time-machine==3.1.0
types-docker==7.1.0.20251009
pytest==9.0.2
ruff==0.15.2
time-machine==3.2.0
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.5.0
types-requests==2.32.4.20260107
urllib3==2.6.3


@@ -15,17 +15,15 @@ import secrets
import shutil
import tarfile
from tempfile import TemporaryDirectory
from typing import Any, Final
from typing import Any, Final, cast
import aiohttp
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from deepmerge import Merger
from securetar import AddFileError, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
from supervisor.utils.dt import utc_from_timestamp
from ..bus import EventListener
from ..const import (
ATTR_ACCESS_TOKEN,
@@ -63,16 +61,29 @@ from ..const import (
from ..coresys import CoreSys
from ..docker.addon import DockerAddon
from ..docker.const import ContainerState
from ..docker.manager import ExecReturn
from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
AddonConfigurationError,
AddonBackupMetadataInvalidError,
AddonBuildFailedUnknownError,
AddonConfigurationInvalidError,
AddonNotRunningError,
AddonNotSupportedError,
AddonNotSupportedWriteStdinError,
AddonPortConflict,
AddonPrePostBackupCommandReturnedError,
AddonsError,
AddonsJobError,
AddonUnknownError,
BackupInvalidError,
BackupRestoreUnknownError,
ConfigurationFileError,
DockerBuildError,
DockerContainerPortConflict,
DockerError,
HostAppArmorError,
StoreAddonNotFoundError,
)
from ..hardware.data import Device
from ..homeassistant.const import WSEvent
@@ -83,6 +94,7 @@ from ..resolution.data import Issue
from ..store.addon import AddonStore
from ..utils import check_port
from ..utils.apparmor import adjust_profile
from ..utils.dt import utc_from_timestamp
from ..utils.json import read_json_file, write_json_file
from ..utils.sentry import async_capture_exception
from .const import (
@@ -235,7 +247,7 @@ class Addon(AddonModel):
await self.instance.check_image(self.version, default_image, self.arch)
except DockerError:
_LOGGER.info("No %s addon Docker image %s found", self.slug, self.image)
with suppress(DockerError):
with suppress(DockerError, AddonNotSupportedError):
await self.instance.install(self.version, default_image, arch=self.arch)
self.persist[ATTR_IMAGE] = default_image
@@ -718,18 +730,16 @@ class Addon(AddonModel):
options = self.schema.validate(self.options)
await self.sys_run_in_executor(write_json_file, self.path_options, options)
except vol.Invalid as ex:
_LOGGER.error(
"Add-on %s has invalid options: %s",
self.slug,
humanize_error(self.options, ex),
)
except ConfigurationFileError:
raise AddonConfigurationInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(self.options, ex),
) from None
except ConfigurationFileError as err:
_LOGGER.error("Add-on %s can't write options", self.slug)
else:
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
return
raise AddonUnknownError(addon=self.slug) from err
raise AddonConfigurationError()
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
@Job(
name="addon_unload",
@@ -772,7 +782,7 @@ class Addon(AddonModel):
async def install(self) -> None:
"""Install and setup this addon."""
if not self.addon_store:
raise AddonsError("Missing from store, cannot install!")
raise StoreAddonNotFoundError(addon=self.slug)
await self.sys_addons.data.install(self.addon_store)
@@ -793,9 +803,17 @@ class Addon(AddonModel):
await self.instance.install(
self.latest_version, self.addon_store.image, arch=self.arch
)
except DockerError as err:
except AddonsError:
await self.sys_addons.data.uninstall(self)
raise AddonsError() from err
raise
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonUnknownError(addon=self.slug) from err
# Finish initialization and set up listeners
await self.load()
@@ -819,7 +837,8 @@ class Addon(AddonModel):
try:
await self.instance.remove(remove_image=remove_image)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
self.state = AddonState.UNKNOWN
@@ -884,7 +903,7 @@ class Addon(AddonModel):
if it was running. Else nothing is returned.
"""
if not self.addon_store:
raise AddonsError("Missing from store, cannot update!")
raise StoreAddonNotFoundError(addon=self.slug)
old_image = self.image
# Cache data to prevent races with other updates to global
@@ -892,8 +911,12 @@ class Addon(AddonModel):
try:
await self.instance.update(store.version, store.image, arch=self.arch)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
# Stop the addon if running
if (last_state := self.state) in {AddonState.STARTED, AddonState.STARTUP}:
@@ -904,6 +927,10 @@ class Addon(AddonModel):
await self.sys_addons.data.update(store)
await self._check_ingress_port()
# Reload ingress tokens in case addon gained ingress support
if self.with_ingress:
await self.sys_ingress.reload()
# Cleanup
with suppress(DockerError):
await self.instance.cleanup(
@@ -935,17 +962,33 @@ class Addon(AddonModel):
"""
last_state: AddonState = self.state
try:
# remove docker container but not addon config
# remove docker container and image but not addon config
try:
await self.instance.remove()
await self.instance.install(self.version)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
try:
await self.instance.install(self.version)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error(
"Could not pull image to update addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
if self.addon_store:
await self.sys_addons.data.update(self.addon_store)
await self._check_ingress_port()
# Reload ingress tokens in case addon gained ingress support
if self.with_ingress:
await self.sys_ingress.reload()
_LOGGER.info("Add-on '%s' successfully rebuilt", self.slug)
finally:
@@ -1110,9 +1153,16 @@ class Addon(AddonModel):
self._startup_event.clear()
try:
await self.instance.run()
except DockerContainerPortConflict as err:
raise AddonPortConflict(
_LOGGER.error,
name=self.slug,
port=cast(dict[str, Any], err.extra_fields)["port"],
) from err
except DockerError as err:
_LOGGER.error("Could not start container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
return self.sys_create_task(self._wait_for_startup())
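The port-conflict handler above pulls the conflicting port out of `err.extra_fields` through `typing.cast`. A minimal sketch of that pattern (the function name and data are illustrative, not from the Supervisor code base):

```python
from typing import Any, cast

def conflict_port(extra_fields: object) -> int:
    # cast() only informs the type checker; at runtime it returns its
    # argument unchanged, so the lookup below happens on the real dict.
    fields = cast(dict[str, Any], extra_fields)
    return fields["port"]

print(conflict_port({"port": 8123}))
```

Because `cast` has no runtime effect, a missing `"port"` key would still raise `KeyError` here, which is why the surrounding code only applies it after catching the specific `DockerContainerPortConflict` error.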
@@ -1127,8 +1177,9 @@ class Addon(AddonModel):
try:
await self.instance.stop()
except DockerError as err:
_LOGGER.error("Could not stop container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_restart",
@@ -1144,13 +1195,6 @@ class Addon(AddonModel):
await self.stop()
return await self.start()
def logs(self) -> Awaitable[bytes]:
"""Return add-ons log output.
Return a coroutine.
"""
return self.instance.logs()
def is_running(self) -> Awaitable[bool]:
"""Return True if Docker container is running.
@@ -1161,9 +1205,15 @@ class Addon(AddonModel):
async def stats(self) -> DockerStats:
"""Return stats of container."""
try:
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
return await self.instance.stats()
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not get stats of container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_write_stdin",
@@ -1173,31 +1223,35 @@ class Addon(AddonModel):
async def write_stdin(self, data) -> None:
"""Write data to add-on stdin."""
if not self.with_stdin:
raise AddonNotSupportedError(
f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
)
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=self.slug)
try:
return await self.instance.write_stdin(data)
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
await self.instance.write_stdin(data)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not write stdin to container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
async def _backup_command(self, command: str) -> None:
try:
command_return = await self.instance.run_inside(command)
command_return: ExecReturn = await self.instance.run_inside(command)
if command_return.exit_code != 0:
_LOGGER.debug(
"Pre-/Post backup command failed with: %s", command_return.output
"Pre-/Post backup command failed with: %s",
command_return.output.decode("utf-8", errors="replace"),
)
raise AddonsError(
f"Pre-/Post backup command returned error code: {command_return.exit_code}",
_LOGGER.error,
raise AddonPrePostBackupCommandReturnedError(
_LOGGER.error, addon=self.slug, exit_code=command_return.exit_code
)
except DockerError as err:
raise AddonsError(
f"Failed running pre-/post backup command {command}: {str(err)}",
_LOGGER.error,
) from err
_LOGGER.error(
"Failed running pre-/post backup command %s: %s", command, err
)
raise AddonUnknownError(addon=self.slug) from err
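The pre-/post-backup command handler now decodes the raw exec output with `errors="replace"` before logging it. A short sketch of why that matters for container output, which is not guaranteed to be valid UTF-8:

```python
# Container exec output arrives as raw bytes; errors="replace"
# substitutes U+FFFD for undecodable bytes instead of raising
# UnicodeDecodeError mid-logging.
raw_output = b"disk full \xff aborting"
text = raw_output.decode("utf-8", errors="replace")
print(text)
```

Without `errors="replace"`, a single invalid byte in the command output would turn a debug log line into a new exception.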
@Job(
name="addon_begin_backup",
@@ -1264,7 +1318,7 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
async def backup(self, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Backup state of an add-on.
Returns a Task that completes when addon has state 'started' (see start)
@@ -1272,68 +1326,59 @@ class Addon(AddonModel):
"""
def _addon_backup(
store_image: bool,
metadata: dict[str, Any],
apparmor_profile: str | None,
addon_config_used: bool,
temp_dir: TemporaryDirectory,
temp_path: Path,
):
"""Start the backup process."""
with TemporaryDirectory(dir=self.sys_config.path_tmp) as temp:
temp_path = Path(temp)
# Store local configs/state
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
_LOGGER.error("Can't save meta for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
# store local image
if store_image:
try:
self.instance.export_image(temp_path.joinpath("image.tar"))
except DockerError as err:
raise AddonsError() from err
# Store local configs/state
# Store AppArmor Profile
if apparmor_profile:
profile_backup_file = temp_path.joinpath("apparmor.txt")
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
raise AddonsError(
f"Can't save meta for {self.slug}", _LOGGER.error
) from err
self.sys_host.apparmor.backup_profile(
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
_LOGGER.error(
"Can't backup AppArmor profile for %s: %s", self.slug, err
)
raise BackupRestoreUnknownError() from err
# Store AppArmor Profile
if apparmor_profile:
profile_backup_file = temp_path.joinpath("apparmor.txt")
try:
self.sys_host.apparmor.backup_profile(
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
raise AddonsError(
"Can't backup AppArmor profile", _LOGGER.error
) from err
# Write tarfile
with tar_file as backup:
# Backup metadata
backup.add(temp_dir.name, arcname=".")
# Write tarfile
with tar_file as backup:
# Backup metadata
backup.add(temp, arcname=".")
# Backup data
atomic_contents_add(
backup,
self.path_data,
file_filter=partial(
self._is_excluded_by_filter, self.path_data, "data"
),
arcname="data",
)
# Backup data
# Backup config (if used and existing, restore handles this gracefully)
if addon_config_used and self.path_config.is_dir():
atomic_contents_add(
backup,
self.path_data,
self.path_config,
file_filter=partial(
self._is_excluded_by_filter, self.path_data, "data"
self._is_excluded_by_filter, self.path_config, "config"
),
arcname="data",
arcname="config",
)
# Backup config (if used and existing, restore handles this gracefully)
if addon_config_used and self.path_config.is_dir():
atomic_contents_add(
backup,
self.path_config,
file_filter=partial(
self._is_excluded_by_filter, self.path_config, "config"
),
arcname="config",
)
wait_for_start: asyncio.Task | None = None
data = {
@@ -1347,21 +1392,35 @@ class Addon(AddonModel):
)
was_running = await self.begin_backup()
temp_dir = await self.sys_run_in_executor(
TemporaryDirectory, dir=self.sys_config.path_tmp
)
temp_path = Path(temp_dir.name)
_LOGGER.info("Building backup for add-on %s", self.slug)
try:
_LOGGER.info("Building backup for add-on %s", self.slug)
# store local image
if self.need_build:
await self.instance.export_image(temp_path.joinpath("image.tar"))
await self.sys_run_in_executor(
partial(
_addon_backup,
store_image=self.need_build,
metadata=data,
apparmor_profile=apparmor_profile,
addon_config_used=self.addon_config_used,
temp_dir=temp_dir,
temp_path=temp_path,
)
)
_LOGGER.info("Finish backup for addon %s", self.slug)
except DockerError as err:
_LOGGER.error("Can't export image for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
except (tarfile.TarError, OSError, AddFileError) as err:
raise AddonsError(f"Can't write tarfile: {err}", _LOGGER.error) from err
_LOGGER.error("Can't write backup tarfile for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
finally:
await self.sys_run_in_executor(temp_dir.cleanup)
if was_running:
wait_for_start = await self.end_backup()
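The backup path above creates the `TemporaryDirectory` via `sys_run_in_executor` and cleans it up the same way in the `finally` block, keeping blocking filesystem work off the event loop. A self-contained sketch of that pattern with plain asyncio (names are illustrative):

```python
import asyncio
import tempfile
from functools import partial

async def make_backup_dir() -> None:
    loop = asyncio.get_running_loop()
    # TemporaryDirectory() touches the filesystem, so create and clean
    # it up in a worker thread rather than on the event loop.
    temp_dir = await loop.run_in_executor(
        None, partial(tempfile.TemporaryDirectory, prefix="addon-backup-")
    )
    try:
        print("staging in", temp_dir.name)
    finally:
        await loop.run_in_executor(None, temp_dir.cleanup)

asyncio.run(make_backup_dir())
```

The `finally` mirrors the diff: cleanup runs even when the tar write or image export in between fails.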
@@ -1372,7 +1431,7 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
async def restore(self, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Restore state of an add-on.
Returns a Task that completes when addon has state 'started' (see start)
@@ -1386,10 +1445,11 @@ class Addon(AddonModel):
tmp = TemporaryDirectory(dir=self.sys_config.path_tmp)
try:
with tar_file as backup:
# The tar filter rejects path traversal and absolute names,
# aborting restore of malicious backups with such exploits.
backup.extractall(
path=tmp.name,
members=secure_path(backup),
filter="fully_trusted",
filter="tar",
)
data = read_json_file(Path(tmp.name, "addon.json"))
@@ -1401,29 +1461,29 @@ class Addon(AddonModel):
try:
tmp, data = await self.sys_run_in_executor(_extract_tarfile)
except tarfile.TarError as err:
raise AddonsError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't extract backup tarfile for {self.slug}: {err}",
_LOGGER.error,
) from err
except tarfile.TarError as err:
raise BackupRestoreUnknownError() from err
except ConfigurationFileError as err:
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
try:
# Validate
try:
data = SCHEMA_ADDON_BACKUP(data)
except vol.Invalid as err:
raise AddonsError(
f"Can't validate {self.slug}, backup data: {humanize_error(data, err)}",
raise AddonBackupMetadataInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(data, err),
) from err
# If available
if not self._available(data[ATTR_SYSTEM]):
raise AddonNotSupportedError(
f"Add-on {self.slug} is not available for this platform",
_LOGGER.error,
)
# Validate availability. Raises if not
self._validate_availability(data[ATTR_SYSTEM], logger=_LOGGER.error)
# Restore local add-on information
_LOGGER.info("Restore config for addon %s", self.slug)
@@ -1482,9 +1542,10 @@ class Addon(AddonModel):
try:
await self.sys_run_in_executor(_restore_data)
except shutil.Error as err:
raise AddonsError(
f"Can't restore origin data: {err}", _LOGGER.error
) from err
_LOGGER.error(
"Can't restore origin data for %s: %s", self.slug, err
)
raise BackupRestoreUnknownError() from err
# Restore AppArmor
profile_file = Path(tmp.name, "apparmor.txt")
@@ -1495,10 +1556,11 @@ class Addon(AddonModel):
)
except HostAppArmorError as err:
_LOGGER.error(
"Can't restore AppArmor profile for add-on %s",
"Can't restore AppArmor profile for add-on %s: %s",
self.slug,
err,
)
raise AddonsError() from err
raise BackupRestoreUnknownError() from err
finally:
# Is add-on loaded


@@ -2,8 +2,11 @@
from __future__ import annotations
import base64
from functools import cached_property
from pathlib import Path
import json
import logging
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any
from awesomeversion import AwesomeVersion
@@ -12,20 +15,31 @@ from ..const import (
ATTR_ARGS,
ATTR_BUILD_FROM,
ATTR_LABELS,
ATTR_PASSWORD,
ATTR_SQUASH,
ATTR_USERNAME,
FILE_SUFFIX_CONFIGURATION,
META_ADDON,
SOCKET_DOCKER,
CpuArch,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY, DockerMount, MountType
from ..docker.interface import MAP_ARCH
from ..exceptions import ConfigurationFileError, HassioArchNotFound
from ..exceptions import (
AddonBuildArchitectureNotSupportedError,
AddonBuildDockerfileMissingError,
ConfigurationFileError,
HassioArchNotFound,
)
from ..utils.common import FileConfiguration, find_one_filetype
from .validate import SCHEMA_BUILD_CONFIG
if TYPE_CHECKING:
from .manager import AnyAddon
_LOGGER: logging.Logger = logging.getLogger(__name__)
class AddonBuild(FileConfiguration, CoreSysAttributes):
"""Handle build options for add-ons."""
@@ -62,7 +76,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
raise RuntimeError()
@cached_property
def arch(self) -> str:
def arch(self) -> CpuArch:
"""Return arch of the add-on."""
return self.sys_arch.match([self.addon.arch])
@@ -70,7 +84,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
def base_image(self) -> str:
"""Return base image for this add-on."""
if not self._data[ATTR_BUILD_FROM]:
return f"ghcr.io/home-assistant/{self.sys_arch.default}-base:latest"
return f"ghcr.io/home-assistant/{self.sys_arch.default!s}-base:latest"
if isinstance(self._data[ATTR_BUILD_FROM], str):
return self._data[ATTR_BUILD_FROM]
@@ -106,7 +120,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
return self.addon.path_location.joinpath("Dockerfile")
async def is_valid(self) -> bool:
async def is_valid(self) -> None:
"""Return true if the build env is valid."""
def build_is_valid() -> bool:
@@ -118,12 +132,58 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
)
try:
return await self.sys_run_in_executor(build_is_valid)
if not await self.sys_run_in_executor(build_is_valid):
raise AddonBuildDockerfileMissingError(
_LOGGER.error, addon=self.addon.slug
)
except HassioArchNotFound:
return False
raise AddonBuildArchitectureNotSupportedError(
_LOGGER.error,
addon=self.addon.slug,
addon_arch_list=self.addon.supported_arch,
system_arch_list=[arch.value for arch in self.sys_arch.supported],
) from None
def get_docker_config_json(self) -> str | None:
"""Generate Docker config.json content with registry credentials for base image.
Returns a JSON string with registry credentials for the base image's registry,
or None if no matching registry is configured.
Raises:
HassioArchNotFound: If the add-on is not supported on the current architecture.
"""
# Early return before accessing base_image to avoid unnecessary arch lookup
if not self.sys_docker.config.registries:
return None
registry = self.sys_docker.config.get_registry_for_image(self.base_image)
if not registry:
return None
stored = self.sys_docker.config.registries[registry]
username = stored[ATTR_USERNAME]
password = stored[ATTR_PASSWORD]
# Docker config.json uses base64-encoded "username:password" for auth
auth_string = base64.b64encode(f"{username}:{password}".encode()).decode()
# Use the actual registry URL for the key
# Docker Hub uses "https://index.docker.io/v1/" as the key
# Support both docker.io (official) and hub.docker.com (legacy)
registry_key = (
"https://index.docker.io/v1/"
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY)
else registry
)
config = {"auths": {registry_key: {"auth": auth_string}}}
return json.dumps(config)
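`get_docker_config_json` builds the same structure Docker's `config.json` uses: a base64-encoded `username:password` pair keyed by registry URL, with Docker Hub keyed by its legacy v1 index URL. A condensed, standalone sketch (function name hypothetical):

```python
import base64
import json

def docker_auth_entry(registry: str, username: str, password: str) -> str:
    # Docker's config.json stores credentials as base64("user:pass");
    # Docker Hub entries are keyed by the legacy v1 index URL.
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    key = "https://index.docker.io/v1/" if registry == "docker.io" else registry
    return json.dumps({"auths": {key: {"auth": auth}}})

print(docker_auth_entry("ghcr.io", "builder", "s3cret"))
```

Note the encoding is reversible — it is transport framing, not protection — which is why the diff mounts the generated file read-only into the build container rather than baking it into an image layer.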
def get_docker_args(
self, version: AwesomeVersion, image_tag: str
self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
@@ -172,13 +232,39 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
self.addon.path_location
)
mounts = [
DockerMount(
type=MountType.BIND,
source=SOCKET_DOCKER.as_posix(),
target="/var/run/docker.sock",
read_only=False,
),
DockerMount(
type=MountType.BIND,
source=addon_extern_path.as_posix(),
target="/addon",
read_only=True,
),
]
# Mount Docker config with registry credentials if available
if docker_config_path:
docker_config_extern_path = self.sys_config.local_to_extern_path(
docker_config_path
)
mounts.append(
DockerMount(
type=MountType.BIND,
source=docker_config_extern_path.as_posix(),
target="/root/.docker/config.json",
read_only=True,
)
)
return {
"command": build_cmd,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"working_dir": "/addon",
"mounts": mounts,
"working_dir": PurePath("/addon"),
}
def _fix_label(self, label_name: str) -> str:


@@ -4,10 +4,10 @@ import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import logging
import tarfile
from typing import Self, Union
from attr import evolve
from securetar import SecureTarFile
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
@@ -334,9 +334,7 @@ class AddonManager(CoreSysAttributes):
],
on_condition=AddonsJobError,
)
async def restore(
self, slug: str, tar_file: tarfile.TarFile
) -> asyncio.Task | None:
async def restore(self, slug: str, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Restore state of an add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)


@@ -11,8 +11,6 @@ from typing import Any
from awesomeversion import AwesomeVersion, AwesomeVersionException
from supervisor.utils.dt import utc_from_timestamp
from ..const import (
ATTR_ADVANCED,
ATTR_APPARMOR,
@@ -87,6 +85,7 @@ from ..const import (
AddonBootConfig,
AddonStage,
AddonStartup,
CpuArch,
)
from ..coresys import CoreSys
from ..docker.const import Capabilities
@@ -99,6 +98,7 @@ from ..exceptions import (
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
from ..utils import version_is_new_enough
from ..utils.dt import utc_from_timestamp
from .configuration import FolderMapping
from .const import (
ATTR_BACKUP,
@@ -315,12 +315,12 @@ class AddonModel(JobGroup, ABC):
@property
def panel_title(self) -> str:
"""Return panel icon for Ingress frame."""
"""Return panel title for Ingress frame."""
return self.data.get(ATTR_PANEL_TITLE, self.name)
@property
def panel_admin(self) -> str:
"""Return panel icon for Ingress frame."""
def panel_admin(self) -> bool:
"""Return if panel is only available for admin users."""
return self.data[ATTR_PANEL_ADMIN]
@property
@@ -488,7 +488,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_DEVICETREE]
@property
def with_tmpfs(self) -> str | None:
def with_tmpfs(self) -> bool:
"""Return if tmp is in memory of add-on."""
return self.data[ATTR_TMPFS]
@@ -508,7 +508,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_VIDEO]
@property
def homeassistant_version(self) -> str | None:
def homeassistant_version(self) -> AwesomeVersion | None:
"""Return min Home Assistant version they needed by Add-on."""
return self.data.get(ATTR_HOMEASSISTANT)
@@ -548,7 +548,7 @@ class AddonModel(JobGroup, ABC):
return self.data.get(ATTR_MACHINE, [])
@property
def arch(self) -> str:
def arch(self) -> CpuArch:
"""Return architecture to use for the addon's image."""
if ATTR_IMAGE in self.data:
return self.sys_arch.match(self.data[ATTR_ARCH])
@@ -725,4 +725,4 @@ class AddonModel(JobGroup, ABC):
return config[ATTR_IMAGE].format(arch=arch)
# local build
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default}-addon-{config[ATTR_SLUG]}"
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default!s}-addon-{config[ATTR_SLUG]}"


@@ -75,7 +75,7 @@ class AddonOptions(CoreSysAttributes):
"""Create a schema for add-on options."""
return vol.Schema(vol.All(dict, self))
def __call__(self, struct):
def __call__(self, struct: dict[str, Any]) -> dict[str, Any]:
"""Create schema validator for add-ons options."""
options = {}
@@ -169,6 +169,10 @@ class AddonOptions(CoreSysAttributes):
elif typ.startswith(_LIST):
return vol.In(match.group("list").split("|"))(str(value))
elif typ.startswith(_DEVICE):
if not isinstance(value, str):
raise vol.Invalid(
f"Expected a string for option '{key}' in {self._name} ({self._slug})"
)
try:
device = self.sys_hardware.get_by_path(Path(value))
except HardwareNotFound:
@@ -193,9 +197,7 @@ class AddonOptions(CoreSysAttributes):
f"Fatal error for option '{key}' with type '{typ}' in {self._name} ({self._slug})"
) from None
def _nested_validate_list(
self, typ: Any, data_list: list[Any], key: str
) -> list[Any]:
def _nested_validate_list(self, typ: Any, data_list: Any, key: str) -> list[Any]:
"""Validate nested items."""
options = []
@@ -213,7 +215,7 @@ class AddonOptions(CoreSysAttributes):
return options
def _nested_validate_dict(
self, typ: dict[Any, Any], data_dict: dict[Any, Any], key: str
self, typ: dict[Any, Any], data_dict: Any, key: str
) -> dict[Any, Any]:
"""Validate nested items."""
options = {}
@@ -264,7 +266,7 @@ class UiOptions(CoreSysAttributes):
def __init__(self, coresys: CoreSys) -> None:
"""Initialize UI option render."""
self.coresys = coresys
self.coresys: CoreSys = coresys
def __call__(self, raw_schema: dict[str, Any]) -> list[dict[str, Any]]:
"""Generate UI schema."""
@@ -279,10 +281,10 @@ class UiOptions(CoreSysAttributes):
def _ui_schema_element(
self,
ui_schema: list[dict[str, Any]],
value: str,
value: str | list[Any] | dict[str, Any],
key: str,
multiple: bool = False,
):
) -> None:
if isinstance(value, list):
# nested value list
assert not multiple


@@ -522,6 +522,7 @@ class RestAPI(CoreSysAttributes):
web.get("/core/api/stream", api_proxy.stream),
web.post("/core/api/{path:.+}", api_proxy.api),
web.get("/core/api/{path:.+}", api_proxy.api),
web.delete("/core/api/{path:.+}", api_proxy.api),
web.get("/core/api/", api_proxy.api),
]
)
@@ -782,6 +783,10 @@ class RestAPI(CoreSysAttributes):
web.delete(
"/store/repositories/{repository}", api_store.remove_repository
),
web.post(
"/store/repositories/{repository}/repair",
api_store.repositories_repository_repair,
),
]
)
@@ -813,6 +818,10 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/docker/info", api_docker.info),
web.post(
"/docker/migrate-storage-driver",
api_docker.migrate_docker_storage_driver,
),
web.post("/docker/options", api_docker.options),
web.get("/docker/registries", api_docker.registries),
web.post("/docker/registries", api_docker.create_registry),

View File

@@ -100,6 +100,9 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..docker.stats import DockerStats
from ..exceptions import (
AddonBootConfigCannotChangeError,
AddonConfigurationInvalidError,
AddonNotSupportedWriteStdinError,
APIAddonNotInstalled,
APIError,
APIForbidden,
@@ -125,6 +128,7 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
vol.Optional(ATTR_INGRESS_PANEL): vol.Boolean(),
vol.Optional(ATTR_WATCHDOG): vol.Boolean(),
vol.Optional(ATTR_OPTIONS): vol.Maybe(dict),
}
)
@@ -300,19 +304,24 @@ class APIAddons(CoreSysAttributes):
# Update secrets for validation
await self.sys_homeassistant.secrets.reload()
# Extend schema with add-on specific validation
addon_schema = SCHEMA_OPTIONS.extend(
{vol.Optional(ATTR_OPTIONS): vol.Maybe(addon.schema)}
)
# Validate/Process Body
body = await api_validate(addon_schema, request)
body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_OPTIONS in body:
addon.options = body[ATTR_OPTIONS]
# None resets options to defaults, otherwise validate the options
if body[ATTR_OPTIONS] is None:
addon.options = None
else:
try:
addon.options = addon.schema(body[ATTR_OPTIONS])
except vol.Invalid as ex:
raise AddonConfigurationInvalidError(
addon=addon.slug,
validation_error=humanize_error(body[ATTR_OPTIONS], ex),
) from None
if ATTR_BOOT in body:
if addon.boot_config == AddonBootConfig.MANUAL_ONLY:
raise APIError(
f"Addon {addon.slug} boot option is set to {addon.boot_config} so it cannot be changed"
raise AddonBootConfigCannotChangeError(
addon=addon.slug, boot_config=addon.boot_config.value
)
addon.boot = body[ATTR_BOOT]
if ATTR_AUTO_UPDATE in body:
@@ -385,7 +394,7 @@ class APIAddons(CoreSysAttributes):
return data
@api_process
async def options_config(self, request: web.Request) -> None:
async def options_config(self, request: web.Request) -> dict[str, Any]:
"""Validate user options for add-on."""
slug: str = request.match_info["addon"]
if slug != "self":
@@ -430,11 +439,11 @@ class APIAddons(CoreSysAttributes):
}
@api_process
async def uninstall(self, request: web.Request) -> Awaitable[None]:
async def uninstall(self, request: web.Request) -> None:
"""Uninstall add-on."""
addon = self.get_addon_for_request(request)
body: dict[str, Any] = await api_validate(SCHEMA_UNINSTALL, request)
return await asyncio.shield(
await asyncio.shield(
self.sys_addons.uninstall(
addon.slug, remove_config=body[ATTR_REMOVE_CONFIG]
)
@@ -476,7 +485,7 @@ class APIAddons(CoreSysAttributes):
"""Write to stdin of add-on."""
addon = self.get_addon_for_request(request)
if not addon.with_stdin:
raise APIError(f"STDIN not supported the {addon.slug} add-on")
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=addon.slug)
data = await request.read()
await asyncio.shield(addon.write_stdin(data))

View File

@@ -15,7 +15,7 @@ import voluptuous as vol
from ..addons.addon import Addon
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
from .const import (
ATTR_GROUP_IDS,
ATTR_IS_ACTIVE,
@@ -69,7 +69,9 @@ class APIAuth(CoreSysAttributes):
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
raise AuthInvalidNonStringValueError(
_LOGGER.error, headers=REALM_HEADER
) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)
@@ -125,14 +127,14 @@ class APIAuth(CoreSysAttributes):
return {
ATTR_USERS: [
{
ATTR_USERNAME: user[ATTR_USERNAME],
ATTR_NAME: user[ATTR_NAME],
ATTR_IS_OWNER: user[ATTR_IS_OWNER],
ATTR_IS_ACTIVE: user[ATTR_IS_ACTIVE],
ATTR_LOCAL_ONLY: user[ATTR_LOCAL_ONLY],
ATTR_GROUP_IDS: user[ATTR_GROUP_IDS],
ATTR_USERNAME: user.username,
ATTR_NAME: user.name,
ATTR_IS_OWNER: user.is_owner,
ATTR_IS_ACTIVE: user.is_active,
ATTR_LOCAL_ONLY: user.local_only,
ATTR_GROUP_IDS: user.group_ids,
}
for user in await self.sys_auth.list_users()
if user[ATTR_USERNAME]
if user.username
]
}

View File

@@ -4,7 +4,7 @@ from __future__ import annotations
import asyncio
import errno
from io import IOBase
from io import BufferedWriter
import logging
from pathlib import Path
import re
@@ -44,6 +44,7 @@ from ..const import (
ATTR_TIMEOUT,
ATTR_TYPE,
ATTR_VERSION,
DEFAULT_CHUNK_SIZE,
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
@@ -211,7 +212,7 @@ class APIBackups(CoreSysAttributes):
await self.sys_backups.save_data()
@api_process
async def reload(self, _):
async def reload(self, _: web.Request) -> bool:
"""Reload backup list."""
await asyncio.shield(self.sys_backups.reload())
return True
@@ -310,7 +311,7 @@ class APIBackups(CoreSysAttributes):
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
backup: Backup | None = await backup_task
if backup:
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
@@ -346,7 +347,7 @@ class APIBackups(CoreSysAttributes):
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
backup: Backup | None = await backup_task
if backup:
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
@@ -421,7 +422,7 @@ class APIBackups(CoreSysAttributes):
await self.sys_backups.remove(backup, locations=locations)
@api_process
async def download(self, request: web.Request):
async def download(self, request: web.Request) -> web.StreamResponse:
"""Download a backup file."""
backup = self._extract_slug(request)
# Query will give us '' for /backups, convert value to None
@@ -451,7 +452,7 @@ class APIBackups(CoreSysAttributes):
return response
@api_process
async def upload(self, request: web.Request):
async def upload(self, request: web.Request) -> dict[str, str] | bool:
"""Upload a backup file."""
location: LOCATION_TYPE = None
locations: list[LOCATION_TYPE] | None = None
@@ -480,14 +481,14 @@ class APIBackups(CoreSysAttributes):
tmp_path = await self.sys_backups.get_upload_path_for_location(location)
temp_dir: TemporaryDirectory | None = None
backup_file_stream: IOBase | None = None
backup_file_stream: BufferedWriter | None = None
def open_backup_file() -> Path:
def open_backup_file() -> tuple[Path, BufferedWriter]:
nonlocal temp_dir, backup_file_stream
temp_dir = TemporaryDirectory(dir=tmp_path.as_posix())
tar_file = Path(temp_dir.name, "upload.tar")
backup_file_stream = tar_file.open("wb")
return tar_file
return (tar_file, backup_file_stream)
def close_backup_file() -> None:
if backup_file_stream:
@@ -503,12 +504,10 @@ class APIBackups(CoreSysAttributes):
if not isinstance(contents, BodyPartReader):
raise APIError("Improperly formatted upload, could not read backup")
tar_file = await self.sys_run_in_executor(open_backup_file)
while chunk := await contents.read_chunk(size=2**16):
await self.sys_run_in_executor(
cast(IOBase, backup_file_stream).write, chunk
)
await self.sys_run_in_executor(cast(IOBase, backup_file_stream).close)
tar_file, backup_writer = await self.sys_run_in_executor(open_backup_file)
while chunk := await contents.read_chunk(size=DEFAULT_CHUNK_SIZE):
await self.sys_run_in_executor(backup_writer.write, chunk)
await self.sys_run_in_executor(backup_writer.close)
backup = await asyncio.shield(
self.sys_backups.import_backup(

View File

@@ -4,10 +4,9 @@ import logging
from typing import Any
from aiohttp import web
from awesomeversion import AwesomeVersion
import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from ..const import (
ATTR_ENABLE_IPV6,
ATTR_HOSTNAME,
@@ -16,11 +15,13 @@ from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRIES,
ATTR_STORAGE,
ATTR_STORAGE_DRIVER,
ATTR_USERNAME,
ATTR_VERSION,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APINotFound
from ..resolution.const import ContextType, IssueType, SuggestionType
from .utils import api_process, api_validate
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -42,12 +43,18 @@ SCHEMA_OPTIONS = vol.Schema(
}
)
SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
{
vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs"]),
}
)
class APIDocker(CoreSysAttributes):
"""Handle RESTful API for Docker configuration."""
@api_process
async def info(self, request: web.Request):
async def info(self, request: web.Request) -> dict[str, Any]:
"""Get docker info."""
data_registries = {}
for hostname, registry in self.sys_docker.config.registries.items():
@@ -105,7 +112,7 @@ class APIDocker(CoreSysAttributes):
return {ATTR_REGISTRIES: data_registries}
@api_process
async def create_registry(self, request: web.Request):
async def create_registry(self, request: web.Request) -> None:
"""Create a new docker registry."""
body = await api_validate(SCHEMA_DOCKER_REGISTRY, request)
@@ -115,7 +122,7 @@ class APIDocker(CoreSysAttributes):
await self.sys_docker.config.save_data()
@api_process
async def remove_registry(self, request: web.Request):
async def remove_registry(self, request: web.Request) -> None:
"""Delete a docker registry."""
hostname = request.match_info.get(ATTR_HOSTNAME)
if hostname not in self.sys_docker.config.registries:
@@ -123,3 +130,27 @@ class APIDocker(CoreSysAttributes):
del self.sys_docker.config.registries[hostname]
await self.sys_docker.config.save_data()
@api_process
async def migrate_docker_storage_driver(self, request: web.Request) -> None:
"""Migrate Docker storage driver."""
if (
not self.coresys.os.available
or not self.coresys.os.version
or self.coresys.os.version < AwesomeVersion("17.0.dev0")
):
raise APINotFound(
"Home Assistant OS 17.0 or newer required for Docker storage driver migration"
)
body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
await self.sys_dbus.agent.system.migrate_docker_storage_driver(
body[ATTR_STORAGE_DRIVER]
)
_LOGGER.info("Host system reboot required to apply Docker storage migration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)
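The endpoint above gates on Home Assistant OS 17.0 or newer via AwesomeVersion, which understands pre-release tags like `17.0.dev0`. A rough stdlib approximation of the gate — it compares only the major version, so it is deliberately not equivalent for pre-release ordering:

```python
def migration_supported(os_available, os_version):
    """Return True when the host looks new enough for the storage migration."""
    if not os_available or not os_version:
        return False
    major, _, _ = os_version.partition(".")
    return major.isdigit() and int(major) >= 17
```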

View File

@@ -18,6 +18,7 @@ from ..const import (
ATTR_BLK_WRITE,
ATTR_BOOT,
ATTR_CPU_PERCENT,
ATTR_DUPLICATE_LOG_FILE,
ATTR_IMAGE,
ATTR_IP_ADDRESS,
ATTR_JOB_ID,
@@ -55,6 +56,7 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE): vol.Boolean(),
vol.Optional(ATTR_DUPLICATE_LOG_FILE): vol.Boolean(),
}
)
@@ -112,6 +114,7 @@ class APIHomeAssistant(CoreSysAttributes):
ATTR_AUDIO_INPUT: self.sys_homeassistant.audio_input,
ATTR_AUDIO_OUTPUT: self.sys_homeassistant.audio_output,
ATTR_BACKUPS_EXCLUDE_DATABASE: self.sys_homeassistant.backups_exclude_database,
ATTR_DUPLICATE_LOG_FILE: self.sys_homeassistant.duplicate_log_file,
}
@api_process
@@ -151,10 +154,13 @@ class APIHomeAssistant(CoreSysAttributes):
ATTR_BACKUPS_EXCLUDE_DATABASE
]
if ATTR_DUPLICATE_LOG_FILE in body:
self.sys_homeassistant.duplicate_log_file = body[ATTR_DUPLICATE_LOG_FILE]
await self.sys_homeassistant.save_data()
@api_process
async def stats(self, request: web.Request) -> dict[Any, str]:
async def stats(self, request: web.Request) -> dict[str, Any]:
"""Return resource information."""
stats = await self.sys_homeassistant.core.stats()
if not stats:
@@ -191,7 +197,7 @@ class APIHomeAssistant(CoreSysAttributes):
return await update_task
@api_process
async def stop(self, request: web.Request) -> Awaitable[None]:
async def stop(self, request: web.Request) -> None:
"""Stop Home Assistant."""
body = await api_validate(SCHEMA_STOP, request)
await self._check_offline_migration(force=body[ATTR_FORCE])

View File

@@ -1,6 +1,7 @@
"""Init file for Supervisor host RESTful API."""
import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import json
import logging
@@ -99,7 +100,7 @@ class APIHost(CoreSysAttributes):
)
@api_process
async def info(self, request):
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return host information."""
return {
ATTR_AGENT_VERSION: self.sys_dbus.agent.version,
@@ -128,7 +129,7 @@ class APIHost(CoreSysAttributes):
}
@api_process
async def options(self, request):
async def options(self, request: web.Request) -> None:
"""Edit host settings."""
body = await api_validate(SCHEMA_OPTIONS, request)
@@ -139,7 +140,7 @@ class APIHost(CoreSysAttributes):
)
@api_process
async def reboot(self, request):
async def reboot(self, request: web.Request) -> None:
"""Reboot host."""
body = await api_validate(SCHEMA_SHUTDOWN, request)
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
@@ -147,7 +148,7 @@ class APIHost(CoreSysAttributes):
return await asyncio.shield(self.sys_host.control.reboot())
@api_process
async def shutdown(self, request):
async def shutdown(self, request: web.Request) -> None:
"""Poweroff host."""
body = await api_validate(SCHEMA_SHUTDOWN, request)
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
@@ -155,12 +156,12 @@ class APIHost(CoreSysAttributes):
return await asyncio.shield(self.sys_host.control.shutdown())
@api_process
def reload(self, request):
def reload(self, request: web.Request) -> Awaitable[None]:
"""Reload host data."""
return asyncio.shield(self.sys_host.reload())
@api_process
async def services(self, request):
async def services(self, request: web.Request) -> dict[str, Any]:
"""Return list of available services."""
services = []
for unit in self.sys_host.services:
@@ -175,7 +176,7 @@ class APIHost(CoreSysAttributes):
return {ATTR_SERVICES: services}
@api_process
async def list_boots(self, _: web.Request):
async def list_boots(self, _: web.Request) -> dict[str, Any]:
"""Return a list of boot IDs."""
boot_ids = await self.sys_host.logs.get_boot_ids()
return {
@@ -186,7 +187,7 @@ class APIHost(CoreSysAttributes):
}
@api_process
async def list_identifiers(self, _: web.Request):
async def list_identifiers(self, _: web.Request) -> dict[str, list[str]]:
"""Return a list of syslog identifiers."""
return {ATTR_IDENTIFIERS: await self.sys_host.logs.get_identifiers()}
@@ -332,7 +333,7 @@ class APIHost(CoreSysAttributes):
)
@api_process
async def disk_usage(self, request: web.Request) -> dict:
async def disk_usage(self, request: web.Request) -> dict[str, Any]:
"""Return a breakdown of storage usage for the system."""
max_depth = request.query.get(ATTR_MAX_DEPTH, 1)
@@ -347,6 +348,10 @@ class APIHost(CoreSysAttributes):
disk.disk_usage, self.sys_config.path_supervisor
)
# Calculating used space by subtracting free space from the total ensures
# that reserved space is included in the used-space reporting.
used = total - free
known_paths = await self.sys_run_in_executor(
disk.get_dir_sizes,
{
@@ -365,13 +370,12 @@ class APIHost(CoreSysAttributes):
"id": "root",
"label": "Root",
"total_bytes": total,
"used_bytes": total - free,
"used_bytes": used,
"children": [
{
"id": "system",
"label": "System",
"used_bytes": total
- free
"used_bytes": used
- sum(path["used_bytes"] for path in known_paths),
},
*known_paths,

View File

@@ -29,8 +29,8 @@ from ..const import (
HEADER_REMOTE_USER_NAME,
HEADER_TOKEN,
HEADER_TOKEN_OLD,
HomeAssistantUser,
IngressSessionData,
IngressSessionDataUser,
)
from ..coresys import CoreSysAttributes
from ..exceptions import HomeAssistantAPIError
@@ -75,12 +75,6 @@ def status_code_must_be_empty_body(code: int) -> bool:
class APIIngress(CoreSysAttributes):
"""Ingress view to handle add-on webui routing."""
_list_of_users: list[IngressSessionDataUser]
def __init__(self) -> None:
"""Initialize APIIngress."""
self._list_of_users = []
def _extract_addon(self, request: web.Request) -> Addon:
"""Return addon, throw an exception it it doesn't exist."""
token = request.match_info["token"]
@@ -306,20 +300,15 @@ class APIIngress(CoreSysAttributes):
return response
async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
async def _find_user_by_id(self, user_id: str) -> HomeAssistantUser | None:
"""Find user object by the user's ID."""
try:
list_of_users = await self.sys_homeassistant.get_users()
except (HomeAssistantAPIError, TypeError) as err:
_LOGGER.error(
"%s error occurred while requesting list of users: %s", type(err), err
)
users = await self.sys_homeassistant.list_users()
except HomeAssistantAPIError as err:
_LOGGER.warning("Could not fetch list of users: %s", err)
return None
if list_of_users is not None:
self._list_of_users = list_of_users
return next((user for user in self._list_of_users if user.id == user_id), None)
return next((user for user in users if user.id == user_id), None)
def _init_header(
@@ -332,8 +321,8 @@ def _init_header(
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
if session_data.user.username is not None:
headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
if session_data.user.display_name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
if session_data.user.name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.name
# filter flags
for name, value in request.headers.items():

View File

@@ -1,17 +1,15 @@
"""Handle security part of this API."""
from collections.abc import Callable
from collections.abc import Awaitable, Callable
import logging
import re
from typing import Final
from urllib.parse import unquote
from aiohttp.web import Request, Response, middleware
from aiohttp.web import Request, StreamResponse, middleware
from aiohttp.web_exceptions import HTTPBadRequest, HTTPForbidden, HTTPUnauthorized
from awesomeversion import AwesomeVersion
from supervisor.homeassistant.const import LANDINGPAGE
from ...addons.const import RE_SLUG
from ...const import (
REQUEST_FROM,
@@ -23,6 +21,7 @@ from ...const import (
VALID_API_STATES,
)
from ...coresys import CoreSys, CoreSysAttributes
from ...homeassistant.const import LANDINGPAGE
from ...utils import version_is_new_enough
from ..utils import api_return_error, extract_supervisor_token
@@ -89,7 +88,7 @@ CORE_ONLY_PATHS: Final = re.compile(
)
# Policy role add-on API access
ADDONS_ROLE_ACCESS: dict[str, re.Pattern] = {
ADDONS_ROLE_ACCESS: dict[str, re.Pattern[str]] = {
ROLE_DEFAULT: re.compile(
r"^(?:"
r"|/.+/info"
@@ -180,7 +179,9 @@ class SecurityMiddleware(CoreSysAttributes):
return unquoted
@middleware
async def block_bad_requests(self, request: Request, handler: Callable) -> Response:
async def block_bad_requests(
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
) -> StreamResponse:
"""Process request and tblock commonly known exploit attempts."""
if FILTERS.search(self._recursive_unquote(request.path)):
_LOGGER.warning(
@@ -198,7 +199,9 @@ class SecurityMiddleware(CoreSysAttributes):
return await handler(request)
@middleware
async def system_validation(self, request: Request, handler: Callable) -> Response:
async def system_validation(
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
) -> StreamResponse:
"""Check if core is ready to response."""
if self.sys_core.state not in VALID_API_STATES:
return api_return_error(
@@ -208,7 +211,9 @@ class SecurityMiddleware(CoreSysAttributes):
return await handler(request)
@middleware
async def token_validation(self, request: Request, handler: Callable) -> Response:
async def token_validation(
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
) -> StreamResponse:
"""Check security access of this layer."""
request_from: CoreSysAttributes | None = None
supervisor_token = extract_supervisor_token(request)
@@ -279,7 +284,9 @@ class SecurityMiddleware(CoreSysAttributes):
raise HTTPForbidden()
@middleware
async def core_proxy(self, request: Request, handler: Callable) -> Response:
async def core_proxy(
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
) -> StreamResponse:
"""Validate user from Core API proxy."""
if (
request[REQUEST_FROM] != self.sys_homeassistant
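`block_bad_requests` above filters `self._recursive_unquote(request.path)`, i.e. it percent-decodes repeatedly before matching exploit patterns, so a double-encoded `..` cannot slip past the filter. A minimal stdlib sketch of that idea (not Supervisor's exact implementation):

```python
from urllib.parse import unquote

def recursive_unquote(value):
    """Percent-decode until the value is stable, defeating double encoding."""
    while (unquoted := unquote(value)) != value:
        value = unquoted
    return value
```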

View File

@@ -36,6 +36,7 @@ from ..const import (
ATTR_PRIMARY,
ATTR_PSK,
ATTR_READY,
ATTR_ROUTE_METRIC,
ATTR_SIGNAL,
ATTR_SSID,
ATTR_SUPERVISOR_INTERNET,
@@ -68,6 +69,7 @@ _SCHEMA_IPV4_CONFIG = vol.Schema(
vol.Optional(ATTR_ADDRESS): [vol.Coerce(IPv4Interface)],
vol.Optional(ATTR_METHOD): vol.Coerce(InterfaceMethod),
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv4Address),
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv4Address)],
}
)
@@ -79,6 +81,7 @@ _SCHEMA_IPV6_CONFIG = vol.Schema(
vol.Optional(ATTR_ADDR_GEN_MODE): vol.Coerce(InterfaceAddrGenMode),
vol.Optional(ATTR_IP6_PRIVACY): vol.Coerce(InterfaceIp6Privacy),
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv6Address),
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv6Address)],
}
)
@@ -113,6 +116,7 @@ def ip4config_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}
@@ -126,6 +130,7 @@ def ip6config_struct(config: IpConfig, setting: Ip6Setting) -> dict[str, Any]:
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}
@@ -201,7 +206,7 @@ class APINetwork(CoreSysAttributes):
raise APINotFound(f"Interface {name} does not exist") from None
@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
async def info(self, _: web.Request) -> dict[str, Any]:
"""Return network information."""
return {
ATTR_INTERFACES: [
@@ -242,6 +247,7 @@ class APINetwork(CoreSysAttributes):
method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_IPV6:
@@ -255,6 +261,7 @@ class APINetwork(CoreSysAttributes):
),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_WIFI:
@@ -275,7 +282,7 @@ class APINetwork(CoreSysAttributes):
await asyncio.shield(self.sys_host.network.apply_changes(interface))
@api_process
def reload(self, request: web.Request) -> Awaitable[None]:
def reload(self, _: web.Request) -> Awaitable[None]:
"""Reload network data."""
return asyncio.shield(
self.sys_host.network.update(force_connectivity_check=True)
@@ -325,7 +332,8 @@ class APINetwork(CoreSysAttributes):
ipv4_setting = IpSetting(
method=body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
address=body[ATTR_IPV4].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV4].get(ATTR_GATEWAY, None),
gateway=body[ATTR_IPV4].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV4].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
)
@@ -340,7 +348,8 @@ class APINetwork(CoreSysAttributes):
ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
),
address=body[ATTR_IPV6].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV6].get(ATTR_GATEWAY, None),
gateway=body[ATTR_IPV6].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV6].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
)

View File

@@ -13,16 +13,21 @@ from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE
from aiohttp.http_websocket import WSMsgType
from aiohttp.web_exceptions import HTTPBadGateway, HTTPUnauthorized
from supervisor.utils.logging import AddonLoggerAdapter
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, HomeAssistantAPIError, HomeAssistantAuthError
from ..utils.json import json_dumps
from ..utils.logging import AddonLoggerAdapter
_LOGGER: logging.Logger = logging.getLogger(__name__)
FORWARD_HEADERS = ("X-Speech-Content",)
FORWARD_HEADERS = (
"X-Speech-Content",
"Accept",
"Last-Event-ID",
"Mcp-Session-Id",
"MCP-Protocol-Version",
)
HEADER_HA_ACCESS = "X-Ha-Access"
# Maximum message size for websocket messages from Home Assistant.
@@ -36,6 +41,38 @@ MAX_MESSAGE_SIZE_FROM_CORE = 64 * 1024 * 1024
class APIProxy(CoreSysAttributes):
"""API Proxy for Home Assistant."""
async def _stream_client_response(
self,
request: web.Request,
client: aiohttp.ClientResponse,
*,
content_type: str,
headers_to_copy: tuple[str, ...] = (),
) -> web.StreamResponse:
"""Stream an upstream aiohttp response to the caller.
Used for event streams (e.g. Home Assistant /api/stream) and for SSE endpoints
such as MCP (text/event-stream).
"""
response = web.StreamResponse(status=client.status)
response.content_type = content_type
for header in headers_to_copy:
if header in client.headers:
response.headers[header] = client.headers[header]
response.headers["X-Accel-Buffering"] = "no"
try:
await response.prepare(request)
async for data in client.content:
await response.write(data)
except (aiohttp.ClientError, aiohttp.ClientPayloadError):
# Client disconnected or upstream closed
pass
return response
def _check_access(self, request: web.Request):
"""Check the Supervisor token."""
if AUTHORIZATION in request.headers:
@@ -96,16 +133,11 @@ class APIProxy(CoreSysAttributes):
_LOGGER.info("Home Assistant EventStream start")
async with self._api_client(request, "stream", timeout=None) as client:
response = web.StreamResponse()
response.content_type = request.headers.get(CONTENT_TYPE, "")
try:
response.headers["X-Accel-Buffering"] = "no"
await response.prepare(request)
async for data in client.content:
await response.write(data)
except (aiohttp.ClientError, aiohttp.ClientPayloadError):
pass
response = await self._stream_client_response(
request,
client,
content_type=request.headers.get(CONTENT_TYPE, ""),
)
_LOGGER.info("Home Assistant EventStream close")
return response
@@ -119,10 +151,31 @@ class APIProxy(CoreSysAttributes):
# Normal request
path = request.match_info.get("path", "")
async with self._api_client(request, path) as client:
# Check if this is a streaming response (e.g., MCP SSE endpoints)
if client.content_type == "text/event-stream":
return await self._stream_client_response(
request,
client,
content_type=client.content_type,
headers_to_copy=(
"Cache-Control",
"Mcp-Session-Id",
),
)
# Non-streaming response
data = await client.read()
return web.Response(
response = web.Response(
body=data, status=client.status, content_type=client.content_type
)
# Copy selected headers from the upstream response
for header in (
"Cache-Control",
"Mcp-Session-Id",
):
if header in client.headers:
response.headers[header] = client.headers[header]
return response
async def _websocket_client(self) -> ClientWebSocketResponse:
"""Initialize a WebSocket API connection."""

View File

@@ -5,10 +5,9 @@ from typing import Any
from aiohttp import web
import voluptuous as vol
from supervisor.exceptions import APIGone
from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED
from ..coresys import CoreSysAttributes
from ..exceptions import APIGone
from .utils import api_process, api_validate
# pylint: disable=no-value-for-parameter

View File

@@ -1,5 +1,9 @@
"""Init file for Supervisor network RESTful API."""
from typing import Any
from aiohttp import web
from ..const import (
ATTR_AVAILABLE,
ATTR_PROVIDERS,
@@ -25,7 +29,7 @@ class APIServices(CoreSysAttributes):
return service
@api_process
async def list_services(self, request):
async def list_services(self, request: web.Request) -> dict[str, Any]:
"""Show register services."""
services = []
for service in self.sys_services.list_services:
@@ -40,7 +44,7 @@ class APIServices(CoreSysAttributes):
return {ATTR_SERVICES: services}
@api_process
async def set_service(self, request):
async def set_service(self, request: web.Request) -> None:
"""Write data into a service."""
service = self._extract_service(request)
body = await api_validate(service.schema, request)
@@ -50,7 +54,7 @@ class APIServices(CoreSysAttributes):
await service.set_service_data(addon, body)
@api_process
async def get_service(self, request):
async def get_service(self, request: web.Request) -> dict[str, Any]:
"""Read data into a service."""
service = self._extract_service(request)
@@ -62,7 +66,7 @@ class APIServices(CoreSysAttributes):
return service.get_service_data()
@api_process
async def del_service(self, request):
async def del_service(self, request: web.Request) -> None:
"""Delete data into a service."""
service = self._extract_service(request)
addon = request[REQUEST_FROM]

View File

@@ -53,7 +53,8 @@ from ..const import (
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..exceptions import APIError, APIForbidden, APINotFound, StoreAddonNotFoundError
from ..resolution.const import ContextType, SuggestionType
from ..store.addon import AddonStore
from ..store.repository import Repository
from ..store.validate import validate_repository
@@ -104,7 +105,7 @@ class APIStore(CoreSysAttributes):
addon_slug: str = request.match_info["addon"]
if not (addon := self.sys_addons.get(addon_slug)):
raise APINotFound(f"Addon {addon_slug} does not exist")
raise StoreAddonNotFoundError(addon=addon_slug)
if installed and not addon.is_installed:
raise APIError(f"Addon {addon_slug} is not installed")
@@ -112,7 +113,7 @@ class APIStore(CoreSysAttributes):
if not installed and addon.is_installed:
addon = cast(Addon, addon)
if not addon.addon_store:
raise APINotFound(f"Addon {addon_slug} does not exist in the store")
raise StoreAddonNotFoundError(addon=addon_slug)
return addon.addon_store
return addon
@@ -349,13 +350,30 @@ class APIStore(CoreSysAttributes):
return self._generate_repository_information(repository)
@api_process
async def add_repository(self, request: web.Request):
async def add_repository(self, request: web.Request) -> None:
"""Add repository to the store."""
body = await api_validate(SCHEMA_ADD_REPOSITORY, request)
await asyncio.shield(self.sys_store.add_repository(body[ATTR_REPOSITORY]))
@api_process
async def remove_repository(self, request: web.Request):
async def remove_repository(self, request: web.Request) -> None:
"""Remove repository from the store."""
repository: Repository = self._extract_repository(request)
await asyncio.shield(self.sys_store.remove_repository(repository))
@api_process
async def repositories_repository_repair(self, request: web.Request) -> None:
"""Repair repository."""
repository: Repository = self._extract_repository(request)
await asyncio.shield(repository.reset())
# If we have an execute reset suggestion on this repository, dismiss it and the issue
for suggestion in self.sys_resolution.suggestions:
if (
suggestion.type == SuggestionType.EXECUTE_RESET
and suggestion.context == ContextType.STORE
and suggestion.reference == repository.slug
):
for issue in self.sys_resolution.issues_for_suggestion(suggestion):
self.sys_resolution.dismiss_issue(issue)
return

View File

@@ -80,7 +80,7 @@ class APISupervisor(CoreSysAttributes):
"""Handle RESTful API for Supervisor functions."""
@api_process
async def ping(self, request):
async def ping(self, request: web.Request) -> bool:
"""Return ok for signal that the API is ready."""
return True
@@ -248,6 +248,7 @@ class APISupervisor(CoreSysAttributes):
return asyncio.shield(self.sys_supervisor.restart())
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
def logs(self, request: web.Request) -> Awaitable[bytes]:
async def logs(self, request: web.Request) -> bytes:
"""Return supervisor Docker logs."""
return self.sys_supervisor.logs()
logs = await self.sys_supervisor.logs()
return "\n".join(logs).encode(errors="replace")

View File

@@ -1,7 +1,7 @@
"""Init file for Supervisor util for RESTful API."""
import asyncio
from collections.abc import Callable
from collections.abc import Callable, Mapping
import json
from typing import Any, cast
@@ -26,7 +26,7 @@ from ..const import (
RESULT_OK,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIError, BackupFileNotFoundError, DockerAPIError, HassioError
from ..exceptions import APIError, DockerAPIError, HassioError
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
@@ -63,16 +63,14 @@ def json_loads(data: Any) -> dict[str, Any]:
def api_process(method):
"""Wrap function with true/false calls to rest api."""
async def wrap_api(
api: CoreSysAttributes, *args, **kwargs
) -> web.Response | web.StreamResponse:
async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
"""Return API information."""
try:
answer = await method(api, *args, **kwargs)
except BackupFileNotFoundError as err:
return api_return_error(err, status=404)
answer = await method(*args, **kwargs)
except APIError as err:
return api_return_error(err, status=err.status, job_id=err.job_id)
return api_return_error(
err, status=err.status, job_id=err.job_id, headers=err.headers
)
except HassioError as err:
return api_return_error(err)
@@ -109,12 +107,10 @@ def api_process_raw(content, *, error_type=None):
def wrap_method(method):
"""Wrap function with raw output to rest api."""
async def wrap_api(
api: CoreSysAttributes, *args, **kwargs
) -> web.Response | web.StreamResponse:
async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
"""Return api information."""
try:
msg_data = await method(api, *args, **kwargs)
msg_data = await method(*args, **kwargs)
except APIError as err:
return api_return_error(
err,
@@ -143,6 +139,7 @@ def api_return_error(
error_type: str | None = None,
status: int = 400,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> web.Response:
"""Return an API error message."""
@@ -155,10 +152,15 @@ def api_return_error(
match error_type:
case const.CONTENT_TYPE_TEXT:
return web.Response(body=message, content_type=error_type, status=status)
return web.Response(
body=message, content_type=error_type, status=status, headers=headers
)
case const.CONTENT_TYPE_BINARY:
return web.Response(
body=message.encode(), content_type=error_type, status=status
body=message.encode(),
content_type=error_type,
status=status,
headers=headers,
)
case _:
result: dict[str, Any] = {
@@ -176,6 +178,7 @@ def api_return_error(
result,
status=status,
dumps=json_dumps,
headers=headers,
)

View File

@@ -4,6 +4,7 @@ import logging
from pathlib import Path
import platform
from .const import CpuArch
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import ConfigurationFileError, HassioArchNotFound
from .utils.json import read_json_file
@@ -12,38 +13,40 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
ARCH_JSON: Path = Path(__file__).parent.joinpath("data/arch.json")
MAP_CPU = {
"armv7": "armv7",
"armv6": "armhf",
"armv8": "aarch64",
"aarch64": "aarch64",
"i686": "i386",
"x86_64": "amd64",
MAP_CPU: dict[str, CpuArch] = {
"armv7": CpuArch.ARMV7,
"armv6": CpuArch.ARMHF,
"armv8": CpuArch.AARCH64,
"aarch64": CpuArch.AARCH64,
"i686": CpuArch.I386,
"x86_64": CpuArch.AMD64,
}
class CpuArch(CoreSysAttributes):
class CpuArchManager(CoreSysAttributes):
"""Manage available architectures."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize CPU Architecture handler."""
self.coresys = coresys
self._supported_arch: list[str] = []
self._supported_set: set[str] = set()
self._default_arch: str
self._supported_arch: list[CpuArch] = []
self._supported_set: set[CpuArch] = set()
self._default_arch: CpuArch
@property
def default(self) -> str:
def default(self) -> CpuArch:
"""Return system default arch."""
return self._default_arch
@property
def supervisor(self) -> str:
def supervisor(self) -> CpuArch:
"""Return supervisor arch."""
return self.sys_supervisor.arch or self._default_arch
if self.sys_supervisor.arch:
return CpuArch(self.sys_supervisor.arch)
return self._default_arch
@property
def supported(self) -> list[str]:
def supported(self) -> list[CpuArch]:
"""Return support arch by CPU/Machine."""
return self._supported_arch
@@ -65,7 +68,7 @@ class CpuArch(CoreSysAttributes):
return
# Use configs from arch.json
self._supported_arch.extend(arch_data[self.sys_machine])
self._supported_arch.extend(CpuArch(a) for a in arch_data[self.sys_machine])
self._default_arch = self.supported[0]
# Make sure native support is in supported list
@@ -78,14 +81,14 @@ class CpuArch(CoreSysAttributes):
"""Return True if there is a supported arch by this platform."""
return not self._supported_set.isdisjoint(arch_list)
def match(self, arch_list: list[str]) -> str:
def match(self, arch_list: list[str]) -> CpuArch:
"""Return best match for this CPU/Platform."""
for self_arch in self.supported:
if self_arch in arch_list:
return self_arch
return CpuArch(self_arch)
raise HassioArchNotFound()
def detect_cpu(self) -> str:
def detect_cpu(self) -> CpuArch:
"""Return the arch type of local CPU."""
cpu = platform.machine()
for check, value in MAP_CPU.items():
@@ -96,9 +99,10 @@ class CpuArch(CoreSysAttributes):
"Unknown CPU architecture %s, falling back to Supervisor architecture.",
cpu,
)
return self.sys_supervisor.arch
return CpuArch(self.sys_supervisor.arch)
_LOGGER.warning(
"Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
cpu,
)
return cpu
# Return the cpu string as-is, wrapped in CpuArch (may fail if invalid)
return CpuArch(cpu)

View File

@@ -6,10 +6,11 @@ import logging
from typing import Any, TypedDict, cast
from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .const import ATTR_PASSWORD, ATTR_USERNAME, FILE_HASSIO_AUTH, HomeAssistantUser
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthError,
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthPasswordResetError,
HomeAssistantAPIError,
@@ -83,10 +84,8 @@ class Auth(FileConfiguration, CoreSysAttributes):
self, addon: Addon, username: str | None, password: str | None
) -> bool:
"""Check username login."""
if password is None:
raise AuthError("None as password is not supported!", _LOGGER.error)
if username is None:
raise AuthError("None as username is not supported!", _LOGGER.error)
if username is None or password is None:
raise AuthInvalidNonStringValueError(_LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
@@ -137,7 +136,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
finally:
self._running.pop(username, None)
raise AuthError()
raise AuthHomeAssistantAPIValidationError()
async def change_password(self, username: str, password: str) -> None:
"""Change user password login."""
@@ -155,26 +154,15 @@ class Auth(FileConfiguration, CoreSysAttributes):
except HomeAssistantAPIError as err:
_LOGGER.error("Can't request password reset on Home Assistant: %s", err)
raise AuthPasswordResetError()
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
async def list_users(self) -> list[HomeAssistantUser]:
"""List users on the Home Assistant instance."""
try:
users: (
list[dict[str, Any]] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
return await self.sys_homeassistant.list_users()
except HomeAssistantWSError as err:
raise AuthListUsersError(
f"Can't request listing users on Home Assistant: {err}", _LOGGER.error
) from err
if users is not None:
return users
raise AuthListUsersError(
"Can't request listing users on Home Assistant!", _LOGGER.error
)
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:

View File

@@ -18,7 +18,7 @@ import time
from typing import Any, Self, cast
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -60,7 +60,6 @@ from ..utils.dt import parse_datetime, utcnow
from ..utils.json import json_bytes
from ..utils.sentinel import DEFAULT
from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from .utils import password_to_key
from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
@@ -101,7 +100,7 @@ class Backup(JobGroup):
self._data: dict[str, Any] = data or {ATTR_SLUG: slug}
self._tmp: TemporaryDirectory | None = None
self._outer_secure_tarfile: SecureTarFile | None = None
self._key: bytes | None = None
self._password: str | None = None
self._locations: dict[str | None, BackupLocation] = {
location: BackupLocation(
path=tar_file,
@@ -171,21 +170,21 @@ class Backup(JobGroup):
self._data[ATTR_REPOSITORIES] = value
@property
def homeassistant_version(self) -> AwesomeVersion:
def homeassistant_version(self) -> AwesomeVersion | None:
"""Return backup Home Assistant version."""
if self.homeassistant is None:
return None
return self.homeassistant[ATTR_VERSION]
@property
def homeassistant_exclude_database(self) -> bool:
def homeassistant_exclude_database(self) -> bool | None:
"""Return whether database was excluded from Home Assistant backup."""
if self.homeassistant is None:
return None
return self.homeassistant[ATTR_EXCLUDE_DATABASE]
@property
def homeassistant(self) -> dict[str, Any]:
def homeassistant(self) -> dict[str, Any] | None:
"""Return backup Home Assistant data."""
return self._data[ATTR_HOMEASSISTANT]
@@ -327,7 +326,7 @@ class Backup(JobGroup):
# Set password
if password:
self._init_password(password)
self._password = password
self._data[ATTR_PROTECTED] = True
self._data[ATTR_CRYPTO] = CRYPTO_AES128
self._locations[self.location].protected = True
@@ -337,14 +336,7 @@ class Backup(JobGroup):
def set_password(self, password: str | None) -> None:
"""Set the password for an existing backup."""
if password:
self._init_password(password)
else:
self._key = None
def _init_password(self, password: str) -> None:
"""Create key from password."""
self._key = password_to_key(password)
self._password = password
async def validate_backup(self, location: str | None) -> None:
"""Validate backup.
@@ -374,9 +366,9 @@ class Backup(JobGroup):
with SecureTarFile(
ending, # Not used
gzip=self.compressed,
key=self._key,
mode="r",
fileobj=test_tar_file,
password=self._password,
):
# If we can read the tar file, the password is correct
return
@@ -520,12 +512,24 @@ class Backup(JobGroup):
)
tmp = TemporaryDirectory(dir=str(backup_tarfile.parent))
with tarfile.open(backup_tarfile, "r:") as tar:
tar.extractall(
path=tmp.name,
members=secure_path(tar),
filter="fully_trusted",
)
try:
with tarfile.open(backup_tarfile, "r:") as tar:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar.extractall(
path=tmp.name,
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
except tarfile.TarError as err:
raise BackupError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
return tmp
@@ -592,7 +596,7 @@ class Backup(JobGroup):
addon_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
password=self._password,
)
# Take backup
try:
@@ -628,9 +632,6 @@ class Backup(JobGroup):
if start_task := await self._addon_save(addon):
start_tasks.append(start_task)
except BackupError as err:
err = BackupError(
f"Can't backup add-on {addon.slug}: {str(err)}", _LOGGER.error
)
self.sys_jobs.current.capture_error(err)
return start_tasks
@@ -646,9 +647,9 @@ class Backup(JobGroup):
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"r",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
)
# If exists inside backup
@@ -744,7 +745,7 @@ class Backup(JobGroup):
with outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
password=self._password,
) as tar_file:
atomic_contents_add(
tar_file,
@@ -805,14 +806,21 @@ class Backup(JobGroup):
with SecureTarFile(
tar_name,
"r",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
) as tar_file:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
path=origin_dir,
filter="tar",
)
_LOGGER.info("Restore folder %s done", name)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
) from err
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
@@ -868,13 +876,13 @@ class Backup(JobGroup):
homeassistant_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
password=self._password,
)
await self.sys_homeassistant.backup(homeassistant_file, exclude_database)
# Store size
self.homeassistant[ATTR_SIZE] = await self.sys_run_in_executor(
self._data[ATTR_HOMEASSISTANT][ATTR_SIZE] = await self.sys_run_in_executor(
getattr, homeassistant_file, "size"
)
@@ -891,7 +899,11 @@ class Backup(JobGroup):
self._tmp.name, f"homeassistant.tar{'.gz' if self.compressed else ''}"
)
homeassistant_file = SecureTarFile(
tar_name, "r", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
tar_name,
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
)
await self.sys_homeassistant.restore(

View File

@@ -6,21 +6,6 @@ import re
RE_DIGITS = re.compile(r"\d+")
def password_to_key(password: str) -> bytes:
"""Generate a AES Key from password."""
key: bytes = password.encode()
for _ in range(100):
key = hashlib.sha256(key).digest()
return key[:16]
def key_to_iv(key: bytes) -> bytes:
"""Generate an iv from Key."""
for _ in range(100):
key = hashlib.sha256(key).digest()
return key[:16]
def create_slug(name: str, date_str: str) -> str:
"""Generate a hash from repository."""
key = f"{date_str} - {name}".lower().encode()

View File

@@ -13,7 +13,7 @@ from colorlog import ColoredFormatter
from .addons.manager import AddonManager
from .api import RestAPI
from .arch import CpuArch
from .arch import CpuArchManager
from .auth import Auth
from .backups.manager import BackupManager
from .bus import Bus
@@ -71,7 +71,7 @@ async def initialize_coresys() -> CoreSys:
coresys.jobs = await JobManager(coresys).load_config()
coresys.core = await Core(coresys).post_init()
coresys.plugins = await PluginManager(coresys).load_config()
coresys.arch = CpuArch(coresys)
coresys.arch = CpuArchManager(coresys)
coresys.auth = await Auth(coresys).load_config()
coresys.updater = await Updater(coresys).load_config()
coresys.api = RestAPI(coresys)

View File

@@ -1,11 +1,12 @@
"""Constants file for Supervisor."""
from collections.abc import Mapping
from dataclasses import dataclass
from enum import StrEnum
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import NotRequired, Self, TypedDict
from typing import Any, NotRequired, Self, TypedDict
from aiohttp import __version__ as aiohttpversion
@@ -179,6 +180,7 @@ ATTR_DOCKER = "docker"
ATTR_DOCKER_API = "docker_api"
ATTR_DOCUMENTATION = "documentation"
ATTR_DOMAINS = "domains"
ATTR_DUPLICATE_LOG_FILE = "duplicate_log_file"
ATTR_ENABLE = "enable"
ATTR_ENABLE_IPV6 = "enable_ipv6"
ATTR_ENABLED = "enabled"
@@ -304,6 +306,7 @@ ATTR_REGISTRIES = "registries"
ATTR_REGISTRY = "registry"
ATTR_REPOSITORIES = "repositories"
ATTR_REPOSITORY = "repository"
ATTR_ROUTE_METRIC = "route_metric"
ATTR_SCHEMA = "schema"
ATTR_SECURITY = "security"
ATTR_SERIAL = "serial"
@@ -328,6 +331,7 @@ ATTR_STATE = "state"
ATTR_STATIC = "static"
ATTR_STDIN = "stdin"
ATTR_STORAGE = "storage"
ATTR_STORAGE_DRIVER = "storage_driver"
ATTR_SUGGESTIONS = "suggestions"
ATTR_SUPERVISOR = "supervisor"
ATTR_SUPERVISOR_INTERNET = "supervisor_internet"
@@ -409,6 +413,11 @@ ROLE_ADMIN = "admin"
ROLE_ALL = [ROLE_DEFAULT, ROLE_HOMEASSISTANT, ROLE_BACKUP, ROLE_MANAGER, ROLE_ADMIN]
OBSERVER_PORT = 4357
# Used for stream operations
DEFAULT_CHUNK_SIZE = 2**16 # 64KiB
class AddonBootConfig(StrEnum):
"""Boot mode config for the add-on."""
@@ -528,60 +537,77 @@ class CpuArch(StrEnum):
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
class HomeAssistantUser:
"""A Home Assistant Core user.
Incomplete model — Core's User object has additional fields
(credentials, refresh_tokens, etc.) that are not represented here.
Only fields used by the Supervisor are included.
"""
id: str
display_name: str | None = None
username: str | None = None
def to_dict(self) -> IngressSessionDataUserDict:
"""Get dictionary representation."""
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
name: str | None = None
is_owner: bool = False
is_active: bool = False
local_only: bool = False
system_generated: bool = False
group_ids: list[str] | None = None
@classmethod
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
# "displayname" is a legacy key from old ingress session data
name=data.get("name") or data.get("displayname"),
is_owner=data.get("is_owner", False),
is_active=data.get("is_active", False),
local_only=data.get("local_only", False),
system_generated=data.get("system_generated", False),
group_ids=data.get("group_ids"),
)
class IngressSessionDataUserDict(TypedDict):
"""Serialization format for user data stored in ingress sessions.
Legacy data may contain "displayname" instead of "name".
"""
id: str
username: NotRequired[str | None]
name: NotRequired[str | None]
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
"""Serialization format for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
"""Ingress session data attached to a session token."""
user: IngressSessionDataUser
user: HomeAssistantUser
def to_dict(self) -> IngressSessionDataDict:
"""Get dictionary representation."""
return IngressSessionDataDict(user=self.user.to_dict())
return IngressSessionDataDict(
user=IngressSessionDataUserDict(
id=self.user.id,
name=self.user.name,
username=self.user.username,
)
)
@classmethod
def from_dict(cls, data: IngressSessionDataDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
return cls(user=HomeAssistantUser.from_dict(data["user"]))
STARTING_STATES = [

View File

@@ -434,7 +434,7 @@ class Core(CoreSysAttributes):
async def repair(self) -> None:
"""Repair system integrity."""
_LOGGER.info("Starting repair of Supervisor Environment")
await self.sys_run_in_executor(self.sys_docker.repair)
await self.sys_docker.repair()
# Fix plugins
await self.sys_plugins.repair()

View File

@@ -29,7 +29,7 @@ from .const import (
if TYPE_CHECKING:
from .addons.manager import AddonManager
from .api import RestAPI
from .arch import CpuArch
from .arch import CpuArchManager
from .auth import Auth
from .backups.manager import BackupManager
from .bus import Bus
@@ -78,7 +78,7 @@ class CoreSys:
# Internal objects pointers
self._docker: DockerAPI | None = None
self._core: Core | None = None
self._arch: CpuArch | None = None
self._arch: CpuArchManager | None = None
self._auth: Auth | None = None
self._homeassistant: HomeAssistant | None = None
self._supervisor: Supervisor | None = None
@@ -266,17 +266,17 @@ class CoreSys:
self._plugins = value
@property
def arch(self) -> CpuArch:
"""Return CpuArch object."""
def arch(self) -> CpuArchManager:
"""Return CpuArchManager object."""
if self._arch is None:
raise RuntimeError("CpuArch not set!")
raise RuntimeError("CpuArchManager not set!")
return self._arch
@arch.setter
def arch(self, value: CpuArch) -> None:
"""Set a CpuArch object."""
def arch(self, value: CpuArchManager) -> None:
"""Set a CpuArchManager object."""
if self._arch:
raise RuntimeError("CpuArch already set!")
raise RuntimeError("CpuArchManager already set!")
self._arch = value
@property
@@ -628,9 +628,17 @@ class CoreSys:
context = callback(context)
return context
def create_task(self, coroutine: Coroutine) -> asyncio.Task:
def create_task(
self, coroutine: Coroutine, *, eager_start: bool | None = None
) -> asyncio.Task:
"""Create an async task."""
return self.loop.create_task(coroutine, context=self._create_context())
# eager_start kwarg works but wasn't added for mypy visibility until 3.14
# can remove the type ignore then
return self.loop.create_task(
coroutine,
context=self._create_context(),
eager_start=eager_start, # type: ignore
)
def call_later(
self,
@@ -733,8 +741,8 @@ class CoreSysAttributes:
return self.coresys.plugins
@property
def sys_arch(self) -> CpuArch:
"""Return CpuArch object."""
def sys_arch(self) -> CpuArchManager:
"""Return CpuArchManager object."""
return self.coresys.arch
@property
@@ -847,9 +855,11 @@ class CoreSysAttributes:
"""Add a job to the executor pool."""
return self.coresys.run_in_executor(funct, *args, **kwargs)
def sys_create_task(self, coroutine: Coroutine) -> asyncio.Task:
def sys_create_task(
self, coroutine: Coroutine, *, eager_start: bool | None = None
) -> asyncio.Task:
"""Create an async task."""
return self.coresys.create_task(coroutine)
return self.coresys.create_task(coroutine, eager_start=eager_start)
def sys_call_later(
self,

View File

@@ -3,10 +3,17 @@
"bluetoothd",
"bthelper",
"btuart",
"containerd",
"dbus-broker",
"dbus-broker-launch",
"docker",
"docker-prepare",
"dockerd",
"dropbear",
"fstrim",
"haos-data-disk-detach",
"haos-swapfile",
"haos-wipe",
"hassos-apparmor",
"hassos-config",
"hassos-expand",
@@ -17,7 +24,10 @@
"kernel",
"mount",
"os-agent",
"qemu-ga",
"rauc",
"raucdb-update",
"sm-notify",
"systemd",
"systemd-coredump",
"systemd-fsck",

View File

@@ -2,8 +2,7 @@
from typing import Any
from supervisor.dbus.utils import dbus_connected
from ...utils import dbus_connected
from .const import BOARD_NAME_SUPERVISED
from .interface import BoardProxy

View File

@@ -15,3 +15,8 @@ class System(DBusInterface):
async def schedule_wipe_device(self) -> bool:
"""Schedule a factory reset on next system boot."""
return await self.connected_dbus.System.call("schedule_wipe_device")
@dbus_connected
async def migrate_docker_storage_driver(self, backend: str) -> None:
"""Migrate Docker storage driver."""
await self.connected_dbus.System.call("migrate_docker_storage_driver", backend)

View File

@@ -3,6 +3,8 @@
from enum import IntEnum, StrEnum
from socket import AF_INET, AF_INET6
from .enum import DBusIntEnum, DBusStrEnum
DBUS_NAME_HAOS = "io.hass.os"
DBUS_NAME_HOSTNAME = "org.freedesktop.hostname1"
DBUS_NAME_LOGIND = "org.freedesktop.login1"
@@ -208,7 +210,7 @@ DBUS_ATTR_WWN = "WWN"
DBUS_ERR_SYSTEMD_NO_SUCH_UNIT = "org.freedesktop.systemd1.NoSuchUnit"
class RaucState(StrEnum):
class RaucState(DBusStrEnum):
"""Rauc slot states."""
GOOD = "good"
@@ -216,7 +218,7 @@ class RaucState(StrEnum):
ACTIVE = "active"
class InterfaceMethod(StrEnum):
class InterfaceMethod(DBusStrEnum):
"""Interface method simple."""
AUTO = "auto"
@@ -225,7 +227,7 @@ class InterfaceMethod(StrEnum):
LINK_LOCAL = "link-local"
class InterfaceAddrGenMode(IntEnum):
class InterfaceAddrGenMode(DBusIntEnum):
"""Interface addr_gen_mode."""
EUI64 = 0
@@ -234,7 +236,7 @@ class InterfaceAddrGenMode(IntEnum):
DEFAULT = 3
class InterfaceIp6Privacy(IntEnum):
class InterfaceIp6Privacy(DBusIntEnum):
"""Interface ip6_privacy."""
DEFAULT = -1
@@ -243,14 +245,14 @@ class InterfaceIp6Privacy(IntEnum):
ENABLED = 2
class ConnectionType(StrEnum):
class ConnectionType(DBusStrEnum):
"""Connection type."""
ETHERNET = "802-3-ethernet"
WIRELESS = "802-11-wireless"
class ConnectionStateType(IntEnum):
class ConnectionState(DBusIntEnum):
"""Connection states.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMActiveConnectionState
@@ -280,7 +282,7 @@ class ConnectionStateFlags(IntEnum):
EXTERNAL = 0x80
class ConnectivityState(IntEnum):
class ConnectivityState(DBusIntEnum):
"""Network connectivity.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMConnectivityState
@@ -293,7 +295,7 @@ class ConnectivityState(IntEnum):
CONNECTIVITY_FULL = 4
class DeviceType(IntEnum):
class DeviceType(DBusIntEnum):
"""Device types.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMDeviceType
@@ -304,13 +306,15 @@ class DeviceType(IntEnum):
WIRELESS = 2
BLUETOOTH = 5
VLAN = 11
BRIDGE = 13
TUN = 16
VETH = 20
WIREGUARD = 29
WIFI_P2P = 30
LOOPBACK = 32
class WirelessMethodType(IntEnum):
class WirelessMethodType(DBusIntEnum):
"""Device Type."""
UNKNOWN = 0
@@ -327,7 +331,7 @@ class DNSAddressFamily(IntEnum):
INET6 = AF_INET6
class MulticastProtocolEnabled(StrEnum):
class MulticastProtocolEnabled(DBusStrEnum):
"""Multicast protocol enabled or resolve."""
YES = "yes"
@@ -335,7 +339,7 @@ class MulticastProtocolEnabled(StrEnum):
RESOLVE = "resolve"
class MulticastDnsValue(IntEnum):
class MulticastDnsValue(DBusIntEnum):
"""Connection MulticastDNS (mdns/llmnr) values."""
DEFAULT = -1
@@ -344,7 +348,7 @@ class MulticastDnsValue(IntEnum):
ANNOUNCE = 2
class DNSOverTLSEnabled(StrEnum):
class DNSOverTLSEnabled(DBusStrEnum):
"""DNS over TLS enabled."""
YES = "yes"
@@ -352,7 +356,7 @@ class DNSOverTLSEnabled(StrEnum):
OPPORTUNISTIC = "opportunistic"
class DNSSECValidation(StrEnum):
class DNSSECValidation(DBusStrEnum):
"""DNSSEC validation enforced."""
YES = "yes"
@@ -360,7 +364,7 @@ class DNSSECValidation(StrEnum):
ALLOW_DOWNGRADE = "allow-downgrade"
class DNSStubListenerEnabled(StrEnum):
class DNSStubListenerEnabled(DBusStrEnum):
"""DNS stub listener enabled."""
YES = "yes"
@@ -369,7 +373,7 @@ class DNSStubListenerEnabled(StrEnum):
UDP_ONLY = "udp"
class ResolvConfMode(StrEnum):
class ResolvConfMode(DBusStrEnum):
"""Resolv.conf management mode."""
FOREIGN = "foreign"
@@ -398,7 +402,7 @@ class StartUnitMode(StrEnum):
ISOLATE = "isolate"
class UnitActiveState(StrEnum):
class UnitActiveState(DBusStrEnum):
"""Active state of a systemd unit."""
ACTIVE = "active"

supervisor/dbus/enum.py (new file, 56 lines)
View File

@@ -0,0 +1,56 @@
"""D-Bus tolerant enum base classes.
D-Bus services (systemd, NetworkManager, RAUC, UDisks2) can introduce new enum
values at any time via OS updates. Standard enum construction raises ValueError
for unknown values. These base classes use Python's _missing_ hook to create
pseudo-members for unknown values, preventing crashes while preserving the
original value for logging and debugging.
"""
from enum import IntEnum, StrEnum
import logging
from ..utils.sentry import fire_and_forget_capture_message
_LOGGER: logging.Logger = logging.getLogger(__name__)
_reported: set[tuple[str, str | int]] = set()
def _report_unknown_value(cls: type, value: str | int) -> None:
"""Log and report an unknown D-Bus enum value to Sentry."""
msg = f"Unknown {cls.__name__} value received from D-Bus: {value}"
_LOGGER.warning(msg)
key = (cls.__name__, value)
if key not in _reported:
_reported.add(key)
fire_and_forget_capture_message(msg)
class DBusStrEnum(StrEnum):
"""StrEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> "DBusStrEnum | None":
if not isinstance(value, str):
return None
_report_unknown_value(cls, value)
obj = str.__new__(cls, value)
obj._name_ = value
obj._value_ = value
return obj
class DBusIntEnum(IntEnum):
"""IntEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> "DBusIntEnum | None":
if not isinstance(value, int):
return None
_report_unknown_value(cls, value)
obj = int.__new__(cls, value)
obj._name_ = f"UNKNOWN_{value}"
obj._value_ = value
return obj
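The `_missing_` hook used above can be demonstrated standalone. A minimal sketch of the tolerant-enum pattern without the logging and Sentry reporting, with illustrative names rather than the Supervisor's:

```python
from enum import IntEnum


class TolerantIntEnum(IntEnum):
    """Sketch of DBusIntEnum: unknown int values become pseudo-members."""

    @classmethod
    def _missing_(cls, value: object):
        if not isinstance(value, int):
            return None  # non-int input still raises ValueError
        # Build a pseudo-member so lookup never crashes on new D-Bus values
        obj = int.__new__(cls, value)
        obj._name_ = f"UNKNOWN_{value}"
        obj._value_ = value
        return obj


class ConnectionState(TolerantIntEnum):
    UNKNOWN = 0
    ACTIVATED = 2


state = ConnectionState(99)  # a value introduced by a newer NetworkManager
```

Known values resolve to their real members as usual; unknown ones keep the original integer for logging and comparison instead of raising `ValueError`.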

View File

@@ -7,8 +7,7 @@ from typing import Any
from dbus_fast.aio.message_bus import MessageBus
from supervisor.exceptions import DBusInterfaceError, DBusNotConnectedError
from ..exceptions import DBusInterfaceError, DBusNotConnectedError
from ..utils.dbus import DBus
from .utils import dbus_connected

View File

@@ -115,7 +115,7 @@ class DBusManager(CoreSysAttributes):
async def load(self) -> None:
"""Connect interfaces to D-Bus."""
if not SOCKET_DBUS.exists():
if not await self.sys_run_in_executor(SOCKET_DBUS.exists):
_LOGGER.error(
"No D-Bus support on Host. Disabled any kind of host control!"
)

View File

@@ -77,6 +77,7 @@ class IpProperties(ABC):
method: str | None
address_data: list[IpAddress] | None
gateway: str | None
route_metric: int | None
@dataclass(slots=True)
@@ -90,8 +91,8 @@ class Ip4Properties(IpProperties):
class Ip6Properties(IpProperties):
"""IPv6 properties object for Network Manager."""
addr_gen_mode: int
ip6_privacy: int
addr_gen_mode: int | None
ip6_privacy: int | None
dns: list[bytes] | None

View File

@@ -2,8 +2,6 @@
from typing import Any
from supervisor.dbus.network.setting import NetworkSetting
from ..const import (
DBUS_ATTR_CONNECTION,
DBUS_ATTR_ID,
@@ -16,12 +14,13 @@ from ..const import (
DBUS_IFACE_CONNECTION_ACTIVE,
DBUS_NAME_NM,
DBUS_OBJECT_BASE,
ConnectionState,
ConnectionStateFlags,
ConnectionStateType,
)
from ..interface import DBusInterfaceProxy, dbus_property
from ..utils import dbus_connected
from .ip_configuration import IpConfiguration
from .setting import NetworkSetting
class NetworkConnection(DBusInterfaceProxy):
@@ -67,9 +66,9 @@ class NetworkConnection(DBusInterfaceProxy):
@property
@dbus_property
def state(self) -> ConnectionStateType:
def state(self) -> ConnectionState:
"""Return the state of the connection."""
return ConnectionStateType(self.properties[DBUS_ATTR_STATE])
return ConnectionState(self.properties[DBUS_ATTR_STATE])
@property
def state_flags(self) -> set[ConnectionStateFlags]:

View File

@@ -60,15 +60,7 @@ class NetworkInterface(DBusInterfaceProxy):
@dbus_property
def type(self) -> DeviceType:
"""Return interface type."""
try:
return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
except ValueError:
_LOGGER.debug(
"Unknown device type %s for %s, treating as UNKNOWN",
self.properties[DBUS_ATTR_DEVICE_TYPE],
self.object_path,
)
return DeviceType.UNKNOWN
return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
@property
@dbus_property

View File

@@ -46,6 +46,15 @@ class IpConfiguration(DBusInterfaceProxy):
"""Primary interface of object to get property values from."""
return self._properties_interface
@property
@dbus_property
def address(self) -> list[IPv4Interface | IPv6Interface]:
"""Get address."""
return [
ip_interface(f"{address[ATTR_ADDRESS]}/{address[ATTR_PREFIX]}")
for address in self.properties[DBUS_ATTR_ADDRESS_DATA]
]
@property
@dbus_property
def gateway(self) -> IPv4Address | IPv6Address | None:
@@ -70,12 +79,3 @@ class IpConfiguration(DBusInterfaceProxy):
ip_address(bytes(nameserver))
for nameserver in self.properties[DBUS_ATTR_NAMESERVERS]
]
@property
@dbus_property
def address(self) -> list[IPv4Interface | IPv6Interface]:
"""Get address."""
return [
ip_interface(f"{address[ATTR_ADDRESS]}/{address[ATTR_PREFIX]}")
for address in self.properties[DBUS_ATTR_ADDRESS_DATA]
]

View File
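The moved `address` property combines NetworkManager's `AddressData` entries into `ip_interface` objects. A runnable sketch of that comprehension, with a hypothetical sample payload (the `address`/`prefix` key names follow the diff above):

```python
from ipaddress import IPv4Interface, IPv6Interface, ip_interface

# Hypothetical AddressData property as NetworkManager reports it:
# each entry carries an address string and a prefix length.
address_data = [
    {"address": "192.168.2.148", "prefix": 24},
    {"address": "fe80::1", "prefix": 64},
]

# Same comprehension as the property: join address and prefix into a
# CIDR string; ip_interface() picks IPv4 or IPv6 automatically.
addresses = [
    ip_interface(f"{entry['address']}/{entry['prefix']}") for entry in address_data
]
```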

@@ -53,27 +53,25 @@ CONF_ATTR_802_WIRELESS_SECURITY_AUTH_ALG = "auth-alg"
CONF_ATTR_802_WIRELESS_SECURITY_KEY_MGMT = "key-mgmt"
CONF_ATTR_802_WIRELESS_SECURITY_PSK = "psk"
CONF_ATTR_IPV4_METHOD = "method"
CONF_ATTR_IPV4_ADDRESS_DATA = "address-data"
CONF_ATTR_IPV4_GATEWAY = "gateway"
CONF_ATTR_IPV4_DNS = "dns"
CONF_ATTR_IP_METHOD = "method"
CONF_ATTR_IP_ADDRESS_DATA = "address-data"
CONF_ATTR_IP_GATEWAY = "gateway"
CONF_ATTR_IP_ROUTE_METRIC = "route-metric"
CONF_ATTR_IP_DNS = "dns"
CONF_ATTR_IPV6_METHOD = "method"
CONF_ATTR_IPV6_ADDR_GEN_MODE = "addr-gen-mode"
CONF_ATTR_IPV6_PRIVACY = "ip6-privacy"
CONF_ATTR_IPV6_ADDRESS_DATA = "address-data"
CONF_ATTR_IPV6_GATEWAY = "gateway"
CONF_ATTR_IPV6_DNS = "dns"
IPV4_6_IGNORE_FIELDS = [
_IP_IGNORE_FIELDS = [
CONF_ATTR_IP_METHOD,
CONF_ATTR_IP_ADDRESS_DATA,
CONF_ATTR_IP_GATEWAY,
CONF_ATTR_IP_ROUTE_METRIC,
CONF_ATTR_IP_DNS,
CONF_ATTR_IPV6_ADDR_GEN_MODE,
CONF_ATTR_IPV6_PRIVACY,
"addresses",
"address-data",
"dns",
"dns-data",
"gateway",
"method",
"addr-gen-mode",
"ip6-privacy",
]
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -195,13 +193,13 @@ class NetworkSetting(DBusInterface):
new_settings,
settings,
CONF_ATTR_IPV4,
ignore_current_value=IPV4_6_IGNORE_FIELDS,
ignore_current_value=_IP_IGNORE_FIELDS,
)
_merge_settings_attribute(
new_settings,
settings,
CONF_ATTR_IPV6,
ignore_current_value=IPV4_6_IGNORE_FIELDS,
ignore_current_value=_IP_IGNORE_FIELDS,
)
_merge_settings_attribute(new_settings, settings, CONF_ATTR_MATCH)
@@ -291,26 +289,28 @@ class NetworkSetting(DBusInterface):
if CONF_ATTR_IPV4 in data:
address_data = None
if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_ADDRESS_DATA):
if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_ADDRESS_DATA):
address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
self._ipv4 = Ip4Properties(
method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_METHOD),
method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_METHOD),
address_data=address_data,
gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_GATEWAY),
dns=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_DNS),
gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_GATEWAY),
route_metric=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_ROUTE_METRIC),
dns=data[CONF_ATTR_IPV4].get(CONF_ATTR_IP_DNS),
)
if CONF_ATTR_IPV6 in data:
address_data = None
if ips := data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDRESS_DATA):
if ips := data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_ADDRESS_DATA):
address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
self._ipv6 = Ip6Properties(
method=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_METHOD),
method=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_METHOD),
addr_gen_mode=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDR_GEN_MODE),
ip6_privacy=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_PRIVACY),
address_data=address_data,
gateway=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_GATEWAY),
dns=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_DNS),
gateway=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_GATEWAY),
route_metric=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_ROUTE_METRIC),
dns=data[CONF_ATTR_IPV6].get(CONF_ATTR_IP_DNS),
)
if CONF_ATTR_MATCH in data:

View File

@@ -16,7 +16,11 @@ from ....host.const import (
InterfaceType,
MulticastDnsMode,
)
from ...const import MulticastDnsValue
from ...const import (
InterfaceAddrGenMode as NMInterfaceAddrGenMode,
InterfaceIp6Privacy as NMInterfaceIp6Privacy,
MulticastDnsValue,
)
from .. import NetworkManager
from . import (
CONF_ATTR_802_ETHERNET,
@@ -37,17 +41,14 @@ from . import (
CONF_ATTR_CONNECTION_MDNS,
CONF_ATTR_CONNECTION_TYPE,
CONF_ATTR_CONNECTION_UUID,
CONF_ATTR_IP_ADDRESS_DATA,
CONF_ATTR_IP_DNS,
CONF_ATTR_IP_GATEWAY,
CONF_ATTR_IP_METHOD,
CONF_ATTR_IP_ROUTE_METRIC,
CONF_ATTR_IPV4,
CONF_ATTR_IPV4_ADDRESS_DATA,
CONF_ATTR_IPV4_DNS,
CONF_ATTR_IPV4_GATEWAY,
CONF_ATTR_IPV4_METHOD,
CONF_ATTR_IPV6,
CONF_ATTR_IPV6_ADDR_GEN_MODE,
CONF_ATTR_IPV6_ADDRESS_DATA,
CONF_ATTR_IPV6_DNS,
CONF_ATTR_IPV6_GATEWAY,
CONF_ATTR_IPV6_METHOD,
CONF_ATTR_IPV6_PRIVACY,
CONF_ATTR_MATCH,
CONF_ATTR_MATCH_PATH,
@@ -71,11 +72,11 @@ MULTICAST_DNS_MODE_VALUE_MAPPING = {
def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
ipv4 = {}
if not ipv4setting or ipv4setting.method == InterfaceMethod.AUTO:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "auto")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "auto")
elif ipv4setting.method == InterfaceMethod.DISABLED:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "disabled")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "disabled")
elif ipv4setting.method == InterfaceMethod.STATIC:
ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "manual")
ipv4[CONF_ATTR_IP_METHOD] = Variant("s", "manual")
address_data = []
for address in ipv4setting.address:
@@ -86,26 +87,25 @@ def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
}
)
ipv4[CONF_ATTR_IPV4_ADDRESS_DATA] = Variant("aa{sv}", address_data)
ipv4[CONF_ATTR_IP_ADDRESS_DATA] = Variant("aa{sv}", address_data)
if ipv4setting.gateway:
ipv4[CONF_ATTR_IPV4_GATEWAY] = Variant("s", str(ipv4setting.gateway))
ipv4[CONF_ATTR_IP_GATEWAY] = Variant("s", str(ipv4setting.gateway))
else:
raise RuntimeError("Invalid IPv4 InterfaceMethod")
if (
ipv4setting
and ipv4setting.nameservers
and ipv4setting.method
in (
if ipv4setting:
if ipv4setting.route_metric is not None:
ipv4[CONF_ATTR_IP_ROUTE_METRIC] = Variant("i", ipv4setting.route_metric)
if ipv4setting.nameservers and ipv4setting.method in (
InterfaceMethod.AUTO,
InterfaceMethod.STATIC,
)
):
nameservers = ipv4setting.nameservers if ipv4setting else []
ipv4[CONF_ATTR_IPV4_DNS] = Variant(
"au",
[socket.htonl(int(ip_address)) for ip_address in nameservers],
)
):
nameservers = ipv4setting.nameservers if ipv4setting else []
ipv4[CONF_ATTR_IP_DNS] = Variant(
"au",
[socket.htonl(int(ip_address)) for ip_address in nameservers],
)
return ipv4
@@ -115,31 +115,48 @@ def _get_ipv6_connection_settings(
) -> dict:
ipv6 = {}
if not ipv6setting or ipv6setting.method == InterfaceMethod.AUTO:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "auto")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "auto")
if ipv6setting:
if ipv6setting.addr_gen_mode == InterfaceAddrGenMode.EUI64:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 0)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.EUI64.value
)
elif (
not support_addr_gen_mode_defaults
or ipv6setting.addr_gen_mode == InterfaceAddrGenMode.STABLE_PRIVACY
):
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 1)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.STABLE_PRIVACY.value
)
elif ipv6setting.addr_gen_mode == InterfaceAddrGenMode.DEFAULT_OR_EUI64:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 2)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value
)
else:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 3)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.DEFAULT.value
)
if ipv6setting.ip6_privacy == InterfaceIp6Privacy.DISABLED:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 0)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.DISABLED.value
)
elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 1)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value
)
elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 2)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.ENABLED.value
)
else:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", -1)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.DEFAULT.value
)
elif ipv6setting.method == InterfaceMethod.DISABLED:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "link-local")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "link-local")
elif ipv6setting.method == InterfaceMethod.STATIC:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "manual")
ipv6[CONF_ATTR_IP_METHOD] = Variant("s", "manual")
address_data = []
for address in ipv6setting.address:
@@ -150,26 +167,26 @@ def _get_ipv6_connection_settings(
}
)
ipv6[CONF_ATTR_IPV6_ADDRESS_DATA] = Variant("aa{sv}", address_data)
ipv6[CONF_ATTR_IP_ADDRESS_DATA] = Variant("aa{sv}", address_data)
if ipv6setting.gateway:
ipv6[CONF_ATTR_IPV6_GATEWAY] = Variant("s", str(ipv6setting.gateway))
ipv6[CONF_ATTR_IP_GATEWAY] = Variant("s", str(ipv6setting.gateway))
else:
raise RuntimeError("Invalid IPv6 InterfaceMethod")
if (
ipv6setting
and ipv6setting.nameservers
and ipv6setting.method
in (
if ipv6setting:
if ipv6setting.route_metric is not None:
ipv6[CONF_ATTR_IP_ROUTE_METRIC] = Variant("i", ipv6setting.route_metric)
if ipv6setting.nameservers and ipv6setting.method in (
InterfaceMethod.AUTO,
InterfaceMethod.STATIC,
)
):
nameservers = ipv6setting.nameservers if ipv6setting else []
ipv6[CONF_ATTR_IPV6_DNS] = Variant(
"aay",
[ip_address.packed for ip_address in nameservers],
)
):
nameservers = ipv6setting.nameservers if ipv6setting else []
ipv6[CONF_ATTR_IP_DNS] = Variant(
"aay",
[ip_address.packed for ip_address in nameservers],
)
return ipv6

View File
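The hunk above replaces the magic `Variant("i", 0)`-style literals with named NM enum values. A hedged sketch of what those enums encode; the integer values mirror the literals removed in this diff, but the real enums live in Supervisor's D-Bus const module:

```python
from enum import IntEnum


class NMInterfaceAddrGenMode(IntEnum):
    """NM ipv6.addr-gen-mode values (as replaced in the diff above)."""

    EUI64 = 0
    STABLE_PRIVACY = 1
    DEFAULT_OR_EUI64 = 2
    DEFAULT = 3


class NMInterfaceIp6Privacy(IntEnum):
    """NM ipv6.ip6-privacy values (as replaced in the diff above)."""

    DISABLED = 0
    ENABLED_PREFER_PUBLIC = 1
    ENABLED = 2
    DEFAULT = -1


# The settings dict then carries e.g. Variant("i", mode.value) instead
# of a bare integer literal.
mode = NMInterfaceAddrGenMode.STABLE_PRIVACY
```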

@@ -1,6 +1,5 @@
"""D-Bus interface for rauc."""
from ctypes import c_uint32, c_uint64
import logging
from typing import Any, NotRequired, TypedDict
@@ -33,13 +32,15 @@ SlotStatusDataType = TypedDict(
"state": str,
"device": str,
"bundle.compatible": NotRequired[str],
"bundle.hash": NotRequired[str],
"sha256": NotRequired[str],
"size": NotRequired[c_uint64],
"installed.count": NotRequired[c_uint32],
"size": NotRequired[int],
"installed.count": NotRequired[int],
"installed.transaction": NotRequired[str],
"bundle.version": NotRequired[str],
"installed.timestamp": NotRequired[str],
"status": NotRequired[str],
"activated.count": NotRequired[c_uint32],
"activated.count": NotRequired[int],
"activated.timestamp": NotRequired[str],
"boot-status": NotRequired[str],
"bootname": NotRequired[str],
@@ -117,7 +118,7 @@ class Rauc(DBusInterfaceProxy):
return self.connected_dbus.signal(DBUS_SIGNAL_RAUC_INSTALLER_COMPLETED)
@dbus_connected
async def mark(self, state: RaucState, slot_identifier: str) -> tuple[str, str]:
async def mark(self, state: RaucState, slot_identifier: str) -> list[str]:
"""Mark the given slot with the given state."""
return await self.connected_dbus.Installer.call("mark", state, slot_identifier)

View File

@@ -75,7 +75,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def current_dns_server(
self,
) -> list[tuple[int, DNSAddressFamily, bytes]] | None:
) -> tuple[int, DNSAddressFamily, bytes] | None:
"""Return current DNS server."""
return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER]
@@ -83,7 +83,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def current_dns_server_ex(
self,
) -> list[tuple[int, DNSAddressFamily, bytes, int, str]] | None:
) -> tuple[int, DNSAddressFamily, bytes, int, str] | None:
"""Return current DNS server including port and server name."""
return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER_EX]
@@ -103,19 +103,19 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def dns_over_tls(self) -> DNSOverTLSEnabled | None:
"""Return DNS over TLS enabled."""
return self.properties[DBUS_ATTR_DNS_OVER_TLS]
return DNSOverTLSEnabled(self.properties[DBUS_ATTR_DNS_OVER_TLS])
@property
@dbus_property
def dns_stub_listener(self) -> DNSStubListenerEnabled | None:
"""Return DNS stub listener enabled on port 53."""
return self.properties[DBUS_ATTR_DNS_STUB_LISTENER]
return DNSStubListenerEnabled(self.properties[DBUS_ATTR_DNS_STUB_LISTENER])
@property
@dbus_property
def dnssec(self) -> DNSSECValidation | None:
"""Return DNSSEC validation enforced."""
return self.properties[DBUS_ATTR_DNSSEC]
return DNSSECValidation(self.properties[DBUS_ATTR_DNSSEC])
@property
@dbus_property
@@ -159,7 +159,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def llmnr(self) -> MulticastProtocolEnabled | None:
"""Return LLMNR enabled."""
return self.properties[DBUS_ATTR_LLMNR]
return MulticastProtocolEnabled(self.properties[DBUS_ATTR_LLMNR])
@property
@dbus_property
@@ -171,13 +171,13 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def multicast_dns(self) -> MulticastProtocolEnabled | None:
"""Return MDNS enabled."""
return self.properties[DBUS_ATTR_MULTICAST_DNS]
return MulticastProtocolEnabled(self.properties[DBUS_ATTR_MULTICAST_DNS])
@property
@dbus_property
def resolv_conf_mode(self) -> ResolvConfMode | None:
"""Return how /etc/resolv.conf is managed on the host."""
return self.properties[DBUS_ATTR_RESOLV_CONF_MODE]
return ResolvConfMode(self.properties[DBUS_ATTR_RESOLV_CONF_MODE])
@property
@dbus_property

View File

@@ -70,7 +70,7 @@ class SystemdUnit(DBusInterface):
@dbus_connected
async def get_active_state(self) -> UnitActiveState:
"""Get active state of the unit."""
return await self.connected_dbus.Unit.get("active_state")
return UnitActiveState(await self.connected_dbus.Unit.get("active_state"))
@dbus_connected
def properties_changed(self) -> DBusSignalWrapper:

View File

@@ -4,6 +4,8 @@ from enum import StrEnum
from dbus_fast import Variant
from ..enum import DBusStrEnum
UDISKS2_DEFAULT_OPTIONS = {"auth.no_user_interaction": Variant("b", True)}
@@ -31,7 +33,7 @@ class FormatType(StrEnum):
GPT = "gpt"
class PartitionTableType(StrEnum):
class PartitionTableType(DBusStrEnum):
"""Partition Table type."""
DOS = "dos"

View File

@@ -9,7 +9,7 @@ from dbus_fast import Variant
from .const import EncryptType, EraseMode
def udisks2_bytes_to_path(path_bytes: bytearray) -> Path:
def udisks2_bytes_to_path(path_bytes: bytes) -> Path:
"""Convert bytes to a path object, dropping the trailing null character."""
if path_bytes and path_bytes[-1] == 0:
return Path(path_bytes[:-1].decode())
@@ -73,7 +73,7 @@ FormatOptionsDataType = TypedDict(
{
"label": NotRequired[str],
"take-ownership": NotRequired[bool],
"encrypt.passphrase": NotRequired[bytearray],
"encrypt.passphrase": NotRequired[bytes],
"encrypt.type": NotRequired[str],
"erase": NotRequired[str],
"update-partition-type": NotRequired[bool],

View File
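The type-hint change above narrows the helper's input to `bytes`. A runnable sketch of its behavior; the hunk truncates the fallback branch, so the final `return` here is an assumption for paths without a trailing NUL:

```python
from pathlib import Path


def udisks2_bytes_to_path(path_bytes: bytes) -> Path:
    """Convert bytes to a path object, dropping the trailing null byte."""
    # UDisks2 reports device paths as NUL-terminated byte strings.
    if path_bytes and path_bytes[-1] == 0:
        return Path(path_bytes[:-1].decode())
    # Assumed fallback: no terminator present, decode as-is.
    return Path(path_bytes.decode())


p = udisks2_bytes_to_path(b"/dev/sda1\x00")
```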

@@ -3,19 +3,16 @@
from __future__ import annotations
from contextlib import suppress
from http import HTTPStatus
from ipaddress import IPv4Address
import logging
import os
from pathlib import Path
from typing import TYPE_CHECKING, cast
import tempfile
from typing import TYPE_CHECKING, Any, Literal, cast
import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
import docker.errors
from docker.types import Mount
import requests
from ..addons.build import AddonBuild
from ..addons.const import MappingType
@@ -33,6 +30,7 @@ from ..coresys import CoreSys
from ..exceptions import (
CoreDNSError,
DBusError,
DockerBuildError,
DockerError,
DockerJobError,
DockerNotFound,
@@ -64,8 +62,11 @@ from .const import (
PATH_SHARE,
PATH_SSL,
Capabilities,
DockerMount,
MountBindOptions,
MountType,
PropagationMode,
Ulimit,
)
from .interface import DockerInterface
@@ -128,7 +129,7 @@ class DockerAddon(DockerInterface):
def arch(self) -> str | None:
"""Return arch of Docker image."""
if self.addon.legacy:
return self.sys_arch.default
return str(self.sys_arch.default)
return super().arch
@property
@@ -268,7 +269,7 @@ class DockerAddon(DockerInterface):
}
@property
def network_mode(self) -> str | None:
def network_mode(self) -> Literal["host"] | None:
"""Return network mode for add-on."""
if self.addon.host_network:
return "host"
@@ -307,28 +308,28 @@ class DockerAddon(DockerInterface):
return None
@property
def ulimits(self) -> list[docker.types.Ulimit] | None:
def ulimits(self) -> list[Ulimit] | None:
"""Generate ulimits for add-on."""
limits: list[docker.types.Ulimit] = []
limits: list[Ulimit] = []
# Need schedule functions
if self.addon.with_realtime:
limits.append(docker.types.Ulimit(name="rtprio", soft=90, hard=99))
limits.append(Ulimit(name="rtprio", soft=90, hard=99))
# Set available memory for memlock to 128MB
mem = 128 * 1024 * 1024
limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))
limits.append(Ulimit(name="memlock", soft=mem, hard=mem))
# Add configurable ulimits from add-on config
for name, config in self.addon.ulimits.items():
if isinstance(config, int):
# Simple format: both soft and hard limits are the same
limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
limits.append(Ulimit(name=name, soft=config, hard=config))
elif isinstance(config, dict):
# Detailed format: both soft and hard limits are mandatory
soft = config["soft"]
hard = config["hard"]
limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))
limits.append(Ulimit(name=name, soft=soft, hard=hard))
# Return None if no ulimits are present
if limits:
@@ -347,7 +348,7 @@ class DockerAddon(DockerInterface):
return None
@property
def mounts(self) -> list[Mount]:
def mounts(self) -> list[DockerMount]:
"""Return mounts for container."""
addon_mapping = self.addon.map_volumes
@@ -357,8 +358,8 @@ class DockerAddon(DockerInterface):
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_data.as_posix(),
target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
read_only=False,
@@ -368,8 +369,8 @@ class DockerAddon(DockerInterface):
# setup config mappings
if MappingType.CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.CONFIG].path
or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
@@ -381,8 +382,8 @@ class DockerAddon(DockerInterface):
# Map addon's public config folder if not using deprecated config option
if self.addon.addon_config_used:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_config.as_posix(),
target=addon_mapping[MappingType.ADDON_CONFIG].path
or PATH_PUBLIC_CONFIG.as_posix(),
@@ -393,8 +394,8 @@ class DockerAddon(DockerInterface):
# Map Home Assistant config in new way
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or PATH_HOMEASSISTANT_CONFIG.as_posix(),
@@ -406,8 +407,8 @@ class DockerAddon(DockerInterface):
if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_addon_configs.as_posix(),
target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
or PATH_ALL_ADDON_CONFIGS.as_posix(),
@@ -417,8 +418,8 @@ class DockerAddon(DockerInterface):
if MappingType.SSL in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
read_only=addon_mapping[MappingType.SSL].read_only,
@@ -427,8 +428,8 @@ class DockerAddon(DockerInterface):
if MappingType.ADDONS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_addons_local.as_posix(),
target=addon_mapping[MappingType.ADDONS].path
or PATH_LOCAL_ADDONS.as_posix(),
@@ -438,8 +439,8 @@ class DockerAddon(DockerInterface):
if MappingType.BACKUP in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_backup.as_posix(),
target=addon_mapping[MappingType.BACKUP].path
or PATH_BACKUP.as_posix(),
@@ -449,25 +450,25 @@ class DockerAddon(DockerInterface):
if MappingType.SHARE in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target=addon_mapping[MappingType.SHARE].path
or PATH_SHARE.as_posix(),
read_only=addon_mapping[MappingType.SHARE].read_only,
propagation=PropagationMode.RSLAVE,
bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
)
)
if MappingType.MEDIA in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target=addon_mapping[MappingType.MEDIA].path
or PATH_MEDIA.as_posix(),
read_only=addon_mapping[MappingType.MEDIA].read_only,
propagation=PropagationMode.RSLAVE,
bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
)
)
@@ -479,8 +480,8 @@ class DockerAddon(DockerInterface):
if not Path(gpio_path).exists():
continue
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=gpio_path,
target=gpio_path,
read_only=False,
@@ -490,8 +491,8 @@ class DockerAddon(DockerInterface):
# DeviceTree support
if self.addon.with_devicetree:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source="/sys/firmware/devicetree/base",
target="/device-tree",
read_only=True,
@@ -505,8 +506,8 @@ class DockerAddon(DockerInterface):
# Kernel Modules support
if self.addon.with_kernel_modules:
mounts.append(
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source="/lib/modules",
target="/lib/modules",
read_only=True,
@@ -524,20 +525,20 @@ class DockerAddon(DockerInterface):
# Configuration Audio
if self.addon.with_audio:
mounts += [
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@@ -547,14 +548,14 @@ class DockerAddon(DockerInterface):
# System Journal access
if self.addon.with_journald:
mounts += [
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
read_only=True,
@@ -680,72 +681,109 @@ class DockerAddon(DockerInterface):
async def _build(self, version: AwesomeVersion, image: str | None = None) -> None:
"""Build a Docker container."""
build_env = await AddonBuild(self.coresys, self.addon).load_config()
if not await build_env.is_valid():
_LOGGER.error("Invalid build environment, can't build this add-on!")
raise DockerError()
# Check if the build environment is valid, raises if not
await build_env.is_valid()
_LOGGER.info("Starting build for %s:%s", self.image, version)
def build_image():
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
addon_image_tag = f"{image or self.addon.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
with suppress(docker.errors.NotFound):
self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(version, addon_image_tag),
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
logs = result.output.decode("utf-8")
addon_image_tag = f"{image or self.addon.image}:{version!s}"
if result.exit_code != 0:
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
docker_version = self.sys_docker.info.version
builder_version_tag = (
f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
)
return addon_image_tag, logs
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
try:
container = await self.sys_docker.containers.get(builder_name)
await container.delete(force=True, v=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise DockerBuildError(
f"Can't clean up existing builder container: {err!s}", _LOGGER.error
) from err
# Generate Docker config with registry credentials for base image if needed
docker_config_content = build_env.get_docker_config_json()
temp_dir: tempfile.TemporaryDirectory | None = None
try:
addon_image_tag, log = await self.sys_run_in_executor(build_image)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, log)
def pre_build_setup() -> tuple[
tempfile.TemporaryDirectory | None, dict[str, Any]
]:
docker_config_path: Path | None = None
temp_dir: tempfile.TemporaryDirectory | None = None
if docker_config_content:
# Create temporary directory for docker config
temp_dir = tempfile.TemporaryDirectory(
prefix="hassio_build_", dir=self.sys_config.path_tmp
)
docker_config_path = Path(temp_dir.name) / "config.json"
docker_config_path.write_text(
docker_config_content, encoding="utf-8"
)
_LOGGER.debug(
"Created temporary Docker config for build at %s",
docker_config_path,
)
return (
temp_dir,
build_env.get_docker_args(
version, addon_image_tag, docker_config_path
),
)
temp_dir, build_args = await self.sys_run_in_executor(pre_build_setup)
result = await self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
tag=builder_version_tag,
name=builder_name,
**build_args,
)
except DockerError as err:
raise DockerBuildError(
f"Can't build {self.image}:{version}: {err!s}", _LOGGER.error
) from err
finally:
# Clean up temporary directory
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
logs = "\n".join(result.log)
if result.exit_code != 0:
raise DockerBuildError(
f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}",
_LOGGER.error,
)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, logs)
try:
# Update meta data
self._meta = await self.sys_docker.images.inspect(addon_image_tag)
except (
docker.errors.DockerException,
requests.RequestException,
aiodocker.DockerError,
) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
raise DockerError() from err
except aiodocker.DockerError as err:
raise DockerBuildError(
f"Can't get image metadata for {addon_image_tag} after build: {err!s}"
) from err
_LOGGER.info("Build %s:%s done", self.image, version)
def export_image(self, tar_file: Path) -> None:
"""Export current images into a tar file.
Must be run in executor.
"""
async def export_image(self, tar_file: Path) -> None:
"""Export current images into a tar file."""
if not self.image:
raise RuntimeError("Cannot export without image!")
self.sys_docker.export_image(self.image, self.version, tar_file)
await self.sys_docker.export_image(self.image, self.version, tar_file)
@Job(
name="docker_addon_import_image",
@@ -794,32 +832,24 @@ class DockerAddon(DockerInterface):
)
async def write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin."""
if not await self.is_running():
raise DockerError()
await self.sys_run_in_executor(self._write_stdin, data)
def _write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.
Need run inside executor.
"""
try:
# Load needed docker objects
container = self.sys_docker.containers.get(self.name)
socket = container.attach_socket(params={"stdin": 1, "stream": 1})
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't attach to %s stdin: %s", self.name, err)
raise DockerError() from err
container = await self.sys_docker.containers.get(self.name)
socket = container.attach(stdin=True)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't attach to {self.name} stdin: {err!s}", _LOGGER.error
) from err
try:
# Write to stdin
data += b"\n"
os.write(socket.fileno(), data)
socket.close()
except OSError as err:
_LOGGER.error("Can't write to %s stdin: %s", self.name, err)
raise DockerError() from err
await socket.write_in(data + b"\n")
await socket.close()
# Seems to raise very generic exceptions like RuntimeError or AssertionError
# So we catch all exceptions and re-raise them as DockerError
except Exception as err:
raise DockerError(
f"Can't write to {self.name} stdin: {err!s}", _LOGGER.error
) from err
@Job(
name="docker_addon_stop",
@@ -865,15 +895,13 @@ class DockerAddon(DockerInterface):
return
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
if self._hw_listener:
self.sys_bus.remove_listener(self._hw_listener)
self._hw_listener = None
return
except (docker.errors.DockerException, requests.RequestException) as err:
docker_container = await self.sys_docker.containers.get(self.name)
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
if self._hw_listener:
self.sys_bus.remove_listener(self._hw_listener)
self._hw_listener = None
return
raise DockerError(
f"Can't process Hardware Event on {self.name}: {err!s}", _LOGGER.error
) from err

View File

@@ -2,9 +2,6 @@
import logging
import docker
from docker.types import Mount
from ..const import DOCKER_CPU_RUNTIME_ALLOCATION
from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError
@@ -19,7 +16,9 @@ from .const import (
MOUNT_UDEV,
PATH_PRIVATE_DATA,
Capabilities,
DockerMount,
MountType,
Ulimit,
)
from .interface import DockerInterface
@@ -42,12 +41,12 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
return AUDIO_DOCKER_NAME
@property
def mounts(self) -> list[Mount]:
def mounts(self) -> list[DockerMount]:
"""Return mounts for container."""
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_audio.as_posix(),
target=PATH_PRIVATE_DATA.as_posix(),
read_only=False,
@@ -75,10 +74,10 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
return [Capabilities.SYS_NICE, Capabilities.SYS_RESOURCE]
@property
def ulimits(self) -> list[docker.types.Ulimit]:
def ulimits(self) -> list[Ulimit]:
"""Generate ulimits for audio."""
# Pulseaudio by default tries to use real-time scheduling with priority of 5.
return [docker.types.Ulimit(name="rtprio", soft=10, hard=10)]
return [Ulimit(name="rtprio", soft=10, hard=10)]
@property
def cpu_rt_runtime(self) -> int | None:

View File

@@ -2,19 +2,27 @@
from __future__ import annotations
from contextlib import suppress
from enum import Enum, StrEnum
from functools import total_ordering
from dataclasses import dataclass
from enum import StrEnum
from pathlib import PurePath
import re
from typing import cast
from docker.types import Mount
from typing import Any
from ..const import MACHINE_ID
RE_RETRYING_DOWNLOAD_STATUS = re.compile(r"Retrying in \d+ seconds?")
# Docker Hub registry identifier (official default)
# Docker's default registry is docker.io
DOCKER_HUB = "docker.io"
# Docker Hub API endpoint (used for direct registry API calls)
# While docker.io is the registry identifier, registry-1.docker.io is the actual API endpoint
DOCKER_HUB_API = "registry-1.docker.io"
# Legacy Docker Hub identifier for backward compatibility
DOCKER_HUB_LEGACY = "hub.docker.com"
class Capabilities(StrEnum):
"""Linux Capabilities."""
@@ -75,84 +83,94 @@ class PropagationMode(StrEnum):
RSLAVE = "rslave"
@total_ordering
class PullImageLayerStage(Enum):
"""Job stages for pulling an image layer.
@dataclass(slots=True, frozen=True)
class MountBindOptions:
"""Bind options for docker mount."""
These are a subset of the statuses in a docker pull image log. They
are the standardized ones that are the most useful to us.
"""
propagation: PropagationMode | None = None
read_only_non_recursive: bool | None = None
PULLING_FS_LAYER = 1, "Pulling fs layer"
RETRYING_DOWNLOAD = 2, "Retrying download"
DOWNLOADING = 2, "Downloading"
VERIFYING_CHECKSUM = 3, "Verifying Checksum"
DOWNLOAD_COMPLETE = 4, "Download complete"
EXTRACTING = 5, "Extracting"
PULL_COMPLETE = 6, "Pull complete"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow StrEnum style comparisons on status."""
with suppress(AttributeError):
return self.status == cast(PullImageLayerStage, value).status
return self.status == value
def __lt__(self, other: object) -> bool:
"""Order instances."""
with suppress(AttributeError):
return self.order < cast(PullImageLayerStage, other).order
return False
def __hash__(self) -> int:
"""Hash instance."""
return hash(self.status)
@classmethod
def from_status(cls, status: str) -> PullImageLayerStage | None:
"""Return stage instance from pull log status."""
for i in cls:
if i.status == status:
return i
# This one includes number of seconds until download so its not constant
if RE_RETRYING_DOWNLOAD_STATUS.match(status):
return cls.RETRYING_DOWNLOAD
return None
def to_dict(self) -> dict[str, Any]:
"""To dictionary representation."""
out: dict[str, Any] = {}
if self.propagation:
out["Propagation"] = self.propagation.value
if self.read_only_non_recursive is not None:
out["ReadOnlyNonRecursive"] = self.read_only_non_recursive
return out
@dataclass(slots=True, frozen=True)
class DockerMount:
"""A docker mount."""
type: MountType
source: str
target: str
read_only: bool
bind_options: MountBindOptions | None = None
def to_dict(self) -> dict[str, Any]:
"""To dictionary representation."""
out: dict[str, Any] = {
"Type": self.type.value,
"Source": self.source,
"Target": self.target,
"ReadOnly": self.read_only,
}
if self.bind_options:
out["BindOptions"] = self.bind_options.to_dict()
return out
@dataclass(slots=True, frozen=True)
class Ulimit:
"""A linux user limit."""
name: str
soft: int
hard: int
def to_dict(self) -> dict[str, str | int]:
"""To dictionary representation."""
return {
"Name": self.name,
"Soft": self.soft,
"Hard": self.hard,
}
ENV_DUPLICATE_LOG_FILE = "HA_DUPLICATE_LOG_FILE"
ENV_TIME = "TZ"
ENV_TOKEN = "SUPERVISOR_TOKEN"
ENV_TOKEN_OLD = "HASSIO_TOKEN"
LABEL_MANAGED = "supervisor_managed"
MOUNT_DBUS = Mount(
type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
MOUNT_DBUS = DockerMount(
type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
)
MOUNT_DEV = Mount(
type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
MOUNT_DEV = DockerMount(
type=MountType.BIND,
source="/dev",
target="/dev",
read_only=True,
bind_options=MountBindOptions(read_only_non_recursive=True),
)
MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
MOUNT_DOCKER = Mount(
type=MountType.BIND.value,
MOUNT_DOCKER = DockerMount(
type=MountType.BIND,
source="/run/docker.sock",
target="/run/docker.sock",
read_only=True,
)
MOUNT_MACHINE_ID = Mount(
type=MountType.BIND.value,
MOUNT_MACHINE_ID = DockerMount(
type=MountType.BIND,
source=MACHINE_ID.as_posix(),
target=MACHINE_ID.as_posix(),
read_only=True,
)
MOUNT_UDEV = Mount(
type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
MOUNT_UDEV = DockerMount(
type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
)
PATH_PRIVATE_DATA = PurePath("/data")


@@ -2,13 +2,11 @@
import logging
from docker.types import Mount
from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job
from .const import ENV_TIME, MOUNT_DBUS, MountType
from .const import ENV_TIME, MOUNT_DBUS, DockerMount, MountType
from .interface import DockerInterface
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -47,8 +45,8 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
security_opt=self.security_opt,
environment={ENV_TIME: self.sys_timezone},
mounts=[
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_dns.as_posix(),
target="/config",
read_only=False,


@@ -5,7 +5,6 @@ import logging
import re
from awesomeversion import AwesomeVersion
from docker.types import Mount
from ..const import LABEL_MACHINE
from ..exceptions import DockerJobError
@@ -14,6 +13,7 @@ from ..homeassistant.const import LANDINGPAGE
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job
from .const import (
ENV_DUPLICATE_LOG_FILE,
ENV_TIME,
ENV_TOKEN,
ENV_TOKEN_OLD,
@@ -25,6 +25,8 @@ from .const import (
PATH_PUBLIC_CONFIG,
PATH_SHARE,
PATH_SSL,
DockerMount,
MountBindOptions,
MountType,
PropagationMode,
)
@@ -90,15 +92,15 @@ class DockerHomeAssistant(DockerInterface):
)
@property
def mounts(self) -> list[Mount]:
def mounts(self) -> list[DockerMount]:
"""Return mounts for container."""
mounts = [
MOUNT_DEV,
MOUNT_DBUS,
MOUNT_UDEV,
# HA config folder
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=PATH_PUBLIC_CONFIG.as_posix(),
read_only=False,
@@ -110,41 +112,45 @@ class DockerHomeAssistant(DockerInterface):
mounts.extend(
[
# All other folders
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target=PATH_SSL.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target=PATH_SHARE.as_posix(),
read_only=False,
propagation=PropagationMode.RSLAVE.value,
bind_options=MountBindOptions(
propagation=PropagationMode.RSLAVE
),
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target=PATH_MEDIA.as_posix(),
read_only=False,
propagation=PropagationMode.RSLAVE.value,
bind_options=MountBindOptions(
propagation=PropagationMode.RSLAVE
),
),
# Configuration audio
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_homeassistant.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@@ -166,14 +172,16 @@ class DockerHomeAssistant(DockerInterface):
async def run(self, *, restore_job_id: str | None = None) -> None:
"""Run Docker image."""
environment = {
"SUPERVISOR": self.sys_docker.network.supervisor,
"HASSIO": self.sys_docker.network.supervisor,
"SUPERVISOR": str(self.sys_docker.network.supervisor),
"HASSIO": str(self.sys_docker.network.supervisor),
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.sys_homeassistant.supervisor_token,
ENV_TOKEN_OLD: self.sys_homeassistant.supervisor_token,
}
if restore_job_id:
environment[ENV_RESTORE_JOB_ID] = restore_job_id
if self.sys_homeassistant.duplicate_log_file:
environment[ENV_DUPLICATE_LOG_FILE] = "1"
await self._run(
tag=(self.sys_homeassistant.version),
name=self.name,
@@ -202,31 +210,30 @@ class DockerHomeAssistant(DockerInterface):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def execute_command(self, command: str) -> CommandReturn:
async def execute_command(self, command: list[str]) -> CommandReturn:
"""Create a temporary container and run command."""
return await self.sys_run_in_executor(
self.sys_docker.run_command,
return await self.sys_docker.run_command(
self.image,
version=self.sys_homeassistant.version,
tag=str(self.sys_homeassistant.version),
command=command,
privileged=True,
init=True,
entrypoint=[],
mounts=[
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config",
read_only=False,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl",
read_only=True,
),
Mount(
type=MountType.BIND.value,
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target="/share",
read_only=False,


@@ -8,19 +8,16 @@ from collections.abc import Awaitable
from contextlib import suppress
from http import HTTPStatus
import logging
import re
from time import time
from typing import Any, cast
from typing import Any
from uuid import uuid4
import aiodocker
import aiohttp
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
import requests
from ..bus import EventListener
from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRY,
@@ -36,27 +33,23 @@ from ..exceptions import (
DockerError,
DockerHubRateLimitExceeded,
DockerJobError,
DockerLogOutOfOrder,
DockerNotFound,
DockerRequestError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import ContainerState, PullImageLayerStage, RestartPolicy
from .manager import CommandReturn, PullLogEntry
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, ContainerState, RestartPolicy
from .manager import CommandReturn, ExecReturn, PullLogEntry
from .monitor import DockerContainerStateEvent
from .pull_progress import ImagePullProgress
from .stats import DockerStats
_LOGGER: logging.Logger = logging.getLogger(__name__)
IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")
DOCKER_HUB = "hub.docker.com"
MAP_ARCH: dict[CpuArch | str, str] = {
MAP_ARCH: dict[CpuArch, str] = {
CpuArch.ARMV7: "linux/arm/v7",
CpuArch.ARMHF: "linux/arm/v6",
CpuArch.AARCH64: "linux/arm64",
@@ -65,18 +58,37 @@ MAP_ARCH: dict[CpuArch | str, str] = {
}
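`IMAGE_WITH_HOST` above decides whether an image reference carries an explicit registry host (a dotted hostname before the first `/`) or defaults to Docker Hub. A quick sketch of how that pattern classifies references:

```python
"""Sketch: classifying image references with the IMAGE_WITH_HOST pattern."""
import re

# Same pattern as IMAGE_WITH_HOST above: a dotted hostname before the first "/"
IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")

for ref in ("ghcr.io/home-assistant/supervisor", "homeassistant/home-assistant", "alpine"):
    m = IMAGE_WITH_HOST.match(ref)
    # Only references with a dotted host match; plain repos fall back to Docker Hub
    print(ref, "->", m.group(1) if m else None)
```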
def _container_state_from_model(docker_container: Container) -> ContainerState:
def _restart_policy_from_model(meta_host: dict[str, Any]) -> RestartPolicy | None:
"""Get restart policy from host config model."""
if "RestartPolicy" not in meta_host:
return None
name = meta_host["RestartPolicy"].get("Name")
if not name:
return RestartPolicy.NO
if name in RestartPolicy:
return RestartPolicy(name)
_LOGGER.warning("Unknown Docker restart policy '%s', treating as no", name)
return RestartPolicy.NO
def _container_state_from_model(container_metadata: dict[str, Any]) -> ContainerState:
"""Get container state from model."""
if docker_container.status == "running":
if "Health" in docker_container.attrs["State"]:
if "State" not in container_metadata:
return ContainerState.UNKNOWN
if container_metadata["State"]["Status"] == "running":
if "Health" in container_metadata["State"]:
return (
ContainerState.HEALTHY
if docker_container.attrs["State"]["Health"]["Status"] == "healthy"
if container_metadata["State"]["Health"]["Status"] == "healthy"
else ContainerState.UNHEALTHY
)
return ContainerState.RUNNING
if docker_container.attrs["State"]["ExitCode"] > 0:
if container_metadata["State"]["ExitCode"] > 0:
return ContainerState.FAILED
return ContainerState.STOPPED
@@ -161,11 +173,7 @@ class DockerInterface(JobGroup, ABC):
@property
def restart_policy(self) -> RestartPolicy | None:
"""Return restart policy of container."""
if "RestartPolicy" not in self.meta_host:
return None
policy = self.meta_host["RestartPolicy"].get("Name")
return policy if policy else RestartPolicy.NO
return _restart_policy_from_model(self.meta_host)
@property
def security_opt(self) -> list[str]:
@@ -180,25 +188,17 @@ class DockerInterface(JobGroup, ABC):
return self.meta_config.get("Healthcheck")
def _get_credentials(self, image: str) -> dict:
"""Return a dictionay with credentials for docker login."""
registry = None
"""Return a dictionary with credentials for docker login."""
credentials = {}
matcher = IMAGE_WITH_HOST.match(image)
# Custom registry
if matcher:
if matcher.group(1) in self.sys_docker.config.registries:
registry = matcher.group(1)
credentials[ATTR_REGISTRY] = registry
# If no match assume "dockerhub" as registry
elif DOCKER_HUB in self.sys_docker.config.registries:
registry = DOCKER_HUB
registry = self.sys_docker.config.get_registry_for_image(image)
if registry:
stored = self.sys_docker.config.registries[registry]
credentials[ATTR_USERNAME] = stored[ATTR_USERNAME]
credentials[ATTR_PASSWORD] = stored[ATTR_PASSWORD]
# Don't include registry for Docker Hub (both official and legacy)
if registry not in (DOCKER_HUB, DOCKER_HUB_LEGACY):
credentials[ATTR_REGISTRY] = registry
_LOGGER.debug(
"Logging in to %s as %s",
@@ -208,178 +208,6 @@ class DockerInterface(JobGroup, ABC):
return credentials
async def _docker_login(self, image: str) -> None:
"""Try to log in to the registry if there are credentials available."""
if not self.sys_docker.config.registries:
return
credentials = self._get_credentials(image)
if not credentials:
return
await self.sys_run_in_executor(self.sys_docker.dockerpy.login, **credentials)
def _process_pull_image_log( # noqa: C901
self, install_job_id: str, reference: PullLogEntry
) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
reference.job_id != install_job_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
):
return
# Pulling FS Layer is our marker for a layer that needs to be downloaded and extracted. Otherwise it already exists and we can ignore
job: SupervisorJob | None = None
if stage == PullImageLayerStage.PULLING_FS_LAYER:
job = self.sys_jobs.new_job(
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
parent_id=install_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
if j.parent_id == install_job_id and j.reference == reference.id:
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
# Hopefully these come in order but if they sometimes get out of sync, avoid accidentally going backwards
# If it happens a lot though we may need to reconsider the value of this feature
if job.done:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was done, skipping",
_LOGGER.debug,
)
if job.stage and stage < PullImageLayerStage.from_status(job.stage):
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was already on stage {job.stage}, skipping",
_LOGGER.debug,
)
# For progress calcuation we assume downloading and extracting are each 50% of the time and others stages negligible
progress = job.progress
match stage:
case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING:
if (
reference.progress_detail
and reference.progress_detail.current
and reference.progress_detail.total
):
progress = 50 * (
reference.progress_detail.current
/ reference.progress_detail.total
)
if stage == PullImageLayerStage.EXTRACTING:
progress += 50
case (
PullImageLayerStage.VERIFYING_CHECKSUM
| PullImageLayerStage.DOWNLOAD_COMPLETE
):
progress = 50
case PullImageLayerStage.PULL_COMPLETE:
progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
_LOGGER.debug,
)
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on later stages as the total bytes may be useful after that stage
# Enforce range to prevent float drift error
progress = max(0, min(progress, 100))
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
and reference.progress_detail.current is not None
and reference.progress_detail.total is not None
):
job.update(
progress=progress,
stage=stage.status,
extra={
"current": reference.progress_detail.current,
"total": reference.progress_detail.total,
},
)
else:
# If we reach DOWNLOAD_COMPLETE without ever having set extra (small layers that skip
# the downloading phase), set a minimal extra so aggregate progress calculation can proceed
extra = job.extra
if stage == PullImageLayerStage.DOWNLOAD_COMPLETE and not job.extra:
extra = {"current": 1, "total": 1}
job.update(
progress=progress,
stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None if stage == PullImageLayerStage.RETRYING_DOWNLOAD else extra,
)
# Once we have received a progress update for every child job, start to set status of the main one
install_job = self.sys_jobs.get_job(install_job_id)
layer_jobs = [
job
for job in self.sys_jobs.jobs
if job.parent_id == install_job.uuid
and job.name == "Pulling container image layer"
]
# First set the total bytes to be downloaded/extracted on the main job
if not install_job.extra:
total = 0
for job in layer_jobs:
if not job.extra:
return
total += job.extra["total"]
install_job.extra = {"total": total}
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra:
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING:
stage = PullImageLayerStage.DOWNLOADING
elif (
stage == PullImageLayerStage.PULL_COMPLETE
and job_stage < PullImageLayerStage.PULL_COMPLETE
):
stage = PullImageLayerStage.EXTRACTING
# Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100
# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
if stage != install_job.stage or progress >= install_job.progress + 1:
install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
@Job(
name="docker_interface_install",
on_condition=DockerJobError,
@@ -398,35 +226,82 @@ class DockerInterface(JobGroup, ABC):
if not image:
raise ValueError("Cannot pull without an image!")
image_arch = str(arch) if arch else self.sys_arch.supervisor
listener: EventListener | None = None
image_arch = arch or self.sys_arch.supervisor
platform = MAP_ARCH[image_arch]
pull_progress = ImagePullProgress()
current_job = self.sys_jobs.current
# Try to fetch manifest for accurate size-based progress
# This is optional - if it fails, we fall back to count-based progress
try:
manifest = await self.sys_docker.manifest_fetcher.get_manifest(
image, str(version), platform=platform
)
if manifest:
pull_progress.set_manifest(manifest)
_LOGGER.debug(
"Using manifest for progress: %d layers, %d bytes",
manifest.layer_count,
manifest.total_size,
)
except (aiohttp.ClientError, TimeoutError) as err:
_LOGGER.warning("Could not fetch manifest for progress: %s", err)
async def process_pull_event(event: PullLogEntry) -> None:
"""Process pull event and update job progress."""
if event.job_id != current_job.uuid:
return
try:
# Process event through progress tracker
pull_progress.process_event(event)
# Update job if progress changed significantly (>= 1%)
should_update, progress = pull_progress.should_update_job()
if should_update:
stage = pull_progress.get_stage()
current_job.update(progress=progress, stage=stage)
except ValueError as err:
# Catch ValueError from progress tracking (e.g. "Cannot update a job
# that is done") which can occur under rare event combinations.
# Log with context and send to Sentry. Continue the pull anyway as
# progress updates are informational only.
_LOGGER.warning(
"Received an unprocessable update for pull progress (layer: %s, status: %s, progress: %s): %s",
event.id,
event.status,
event.progress,
err,
)
await async_capture_exception(err)
except Exception as err: # pylint: disable=broad-except
# Catch any other unexpected errors in progress tracking to prevent
# pull from failing. Progress updates are informational - the pull
# itself should continue. Send to Sentry for debugging.
_LOGGER.warning(
"Error updating pull progress (layer: %s, status: %s): %s",
event.id,
event.status,
err,
)
await async_capture_exception(err)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_event
)
_LOGGER.info("Downloading docker image %s with tag %s.", image, version)
try:
if self.sys_docker.config.registries:
# Try login if we have defined credentials
await self._docker_login(image)
# Get credentials for private registries to pass to aiodocker
credentials = self._get_credentials(image) or None
curr_job_id = self.sys_jobs.current.uuid
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(curr_job_id, reference)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
await async_capture_exception(err)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_image_log
)
# Pull new image
# Pull new image, passing credentials to aiodocker
docker_image = await self.sys_docker.pull_image(
self.sys_jobs.current.uuid,
current_job.uuid,
image,
str(version),
platform=MAP_ARCH[image_arch],
platform=platform,
auth=credentials,
)
# Tag latest
@@ -437,18 +312,6 @@ class DockerInterface(JobGroup, ABC):
await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except docker.errors.APIError as err:
if err.status_code == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
@@ -461,17 +324,8 @@ class DockerInterface(JobGroup, ABC):
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (
docker.errors.DockerException,
requests.RequestException,
) as err:
await async_capture_exception(err)
raise DockerError(
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
) from err
finally:
if listener:
self.sys_bus.remove_listener(listener)
self.sys_bus.remove_listener(listener)
self._meta = docker_image
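The aggregation that `ImagePullProgress` now encapsulates follows the same idea as the removed inline code: each layer contributes to overall progress in proportion to its byte size, clamped to guard against float drift. A hypothetical minimal sketch of that weighting (not the actual `pull_progress.py` implementation):

```python
"""Sketch: size-weighted aggregate pull progress across image layers."""


def aggregate_progress(layers: list[tuple[float, int]]) -> float:
    """layers: (per-layer progress 0..100, layer size in bytes)."""
    total = sum(size for _, size in layers)
    if total == 0:
        return 0.0
    progress = sum(pct * (size / total) for pct, size in layers)
    # Clamp so accumulated float error can never push past 100
    return max(0.0, min(progress, 100.0))


# A 90 MB layer half done dominates a finished 10 MB layer
print(aggregate_progress([(50.0, 90_000_000), (100.0, 10_000_000)]))
```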
@@ -482,49 +336,47 @@ class DockerInterface(JobGroup, ABC):
return True
return False
async def _get_container(self) -> dict[str, Any] | None:
"""Get docker container, returns None if not found."""
try:
container = await self.sys_docker.containers.get(self.name)
return await container.show()
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return None
raise DockerAPIError(
f"Docker API error occurred while getting container information: {err!s}"
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Error communicating with Docker to get container information: {err!s}"
) from err
async def is_running(self) -> bool:
"""Return True if Docker is running."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return False
except docker.errors.DockerException as err:
raise DockerAPIError() from err
except requests.RequestException as err:
raise DockerRequestError() from err
return docker_container.status == "running"
return bool(
(container_metadata := await self._get_container())
and "State" in container_metadata
and container_metadata["State"]["Running"]
)
async def current_state(self) -> ContainerState:
"""Return current state of container."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return ContainerState.UNKNOWN
except docker.errors.DockerException as err:
raise DockerAPIError() from err
except requests.RequestException as err:
raise DockerRequestError() from err
return _container_state_from_model(docker_container)
if container_metadata := await self._get_container():
return _container_state_from_model(container_metadata)
return ContainerState.UNKNOWN
@Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
async def attach(
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None:
"""Attach to running Docker container."""
with suppress(docker.errors.DockerException, requests.RequestException):
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
self._meta = docker_container.attrs
self.sys_docker.monitor.watch_container(docker_container)
with suppress(aiodocker.DockerError, requests.RequestException):
docker_container = await self.sys_docker.containers.get(self.name)
self._meta = await docker_container.show()
self.sys_docker.monitor.watch_container(self._meta)
state = _container_state_from_model(docker_container)
state = _container_state_from_model(self._meta)
if not (
skip_state_event_if_down
and state in [ContainerState.STOPPED, ContainerState.FAILED]
@@ -533,7 +385,7 @@ class DockerInterface(JobGroup, ABC):
self.sys_bus.fire_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
DockerContainerStateEvent(
self.name, state, cast(str, docker_container.id), int(time())
self.name, state, docker_container.id, int(time())
),
)
@@ -545,7 +397,9 @@ class DockerInterface(JobGroup, ABC):
# Successful?
if not self._meta:
raise DockerError()
raise DockerError(
f"Could not get metadata on container or image for {self.name}"
)
_LOGGER.info("Attaching to %s with version %s", self.image, self.version)
@Job(
@@ -557,8 +411,11 @@ class DockerInterface(JobGroup, ABC):
"""Run Docker image."""
raise NotImplementedError()
async def _run(self, **kwargs) -> None:
"""Run Docker image with retry inf necessary."""
async def _run(self, *, name: str, **kwargs) -> None:
"""Run Docker image with retry if necessary."""
if not (image := self.image):
raise ValueError(f"Cannot determine image to use to run {self.name}!")
if await self.is_running():
return
@@ -567,16 +424,14 @@ class DockerInterface(JobGroup, ABC):
# Create & Run container
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.run, self.image, **kwargs
)
container_metadata = await self.sys_docker.run(image, name=name, **kwargs)
except DockerNotFound as err:
# If image is missing, capture the exception as this shouldn't happen
await async_capture_exception(err)
raise
# Store metadata
self._meta = docker_container.attrs
self._meta = container_metadata
@Job(
name="docker_interface_stop",
@@ -586,11 +441,8 @@ class DockerInterface(JobGroup, ABC):
async def stop(self, remove_container: bool = True) -> None:
"""Stop/remove Docker container."""
with suppress(DockerNotFound):
await self.sys_run_in_executor(
self.sys_docker.stop_container,
self.name,
self.timeout,
remove_container,
await self.sys_docker.stop_container(
self.name, self.timeout, remove_container
)
@Job(
@@ -600,7 +452,7 @@ class DockerInterface(JobGroup, ABC):
)
def start(self) -> Awaitable[None]:
"""Start Docker container."""
return self.sys_run_in_executor(self.sys_docker.start_container, self.name)
return self.sys_docker.start_container(self.name)
@Job(
name="docker_interface_remove",
@@ -635,9 +487,7 @@ class DockerInterface(JobGroup, ABC):
expected_cpu_arch: CpuArch | None = None,
) -> None:
"""Check we have expected image with correct arch."""
expected_image_cpu_arch = (
str(expected_cpu_arch) if expected_cpu_arch else self.sys_arch.supervisor
)
arch = expected_cpu_arch or self.sys_arch.supervisor
image_name = f"{expected_image}:{version!s}"
if self.image == expected_image:
try:
@@ -655,7 +505,7 @@ class DockerInterface(JobGroup, ABC):
# If we have an image and its the right arch, all set
# It seems that newer Docker version return a variant for arm64 images.
# Make sure we match linux/arm64 and linux/arm64/v8.
expected_image_arch = MAP_ARCH[expected_image_cpu_arch]
expected_image_arch = MAP_ARCH[arch]
if image_arch.startswith(expected_image_arch):
return
_LOGGER.info(
@@ -668,7 +518,7 @@ class DockerInterface(JobGroup, ABC):
# We're missing the image we need. Stop and clean up what we have then pull the right one
with suppress(DockerError):
await self.remove()
await self.install(version, expected_image, arch=expected_image_cpu_arch)
await self.install(version, expected_image, arch=arch)
@Job(
name="docker_interface_update",
@@ -695,14 +545,11 @@ class DockerInterface(JobGroup, ABC):
with suppress(DockerError):
await self.stop()
async def logs(self) -> bytes:
async def logs(self) -> list[str]:
"""Return Docker logs of container."""
with suppress(DockerError):
return await self.sys_run_in_executor(
self.sys_docker.container_logs, self.name
)
return b""
return await self.sys_docker.container_logs(self.name)
return []
@Job(name="docker_interface_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
async def cleanup(
@@ -728,9 +575,7 @@ class DockerInterface(JobGroup, ABC):
)
def restart(self) -> Awaitable[None]:
"""Restart docker container."""
return self.sys_run_in_executor(
self.sys_docker.restart_container, self.name, self.timeout
)
return self.sys_docker.restart_container(self.name, self.timeout)
@Job(
name="docker_interface_execute_command",
@@ -743,28 +588,12 @@ class DockerInterface(JobGroup, ABC):
async def stats(self) -> DockerStats:
"""Read and return stats from container."""
stats = await self.sys_run_in_executor(
self.sys_docker.container_stats, self.name
)
stats = await self.sys_docker.container_stats(self.name)
return DockerStats(stats)
async def is_failed(self) -> bool:
"""Return True if Docker is failing state."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return False
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# container is not running
if docker_container.status != "exited":
return False
# Check return value
return int(docker_container.attrs["State"]["ExitCode"]) != 0
return await self.current_state() == ContainerState.FAILED
async def get_latest_version(self) -> AwesomeVersion:
"""Return latest version of local image."""
@@ -802,8 +631,6 @@ class DockerInterface(JobGroup, ABC):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
def run_inside(self, command: str) -> Awaitable[CommandReturn]:
def run_inside(self, command: str) -> Awaitable[ExecReturn]:
"""Execute a command inside Docker container."""
return self.sys_run_in_executor(
self.sys_docker.container_run_inside, self.name, command
)
return self.sys_docker.container_run_inside(self.name, command)

File diff suppressed because it is too large


@@ -0,0 +1,352 @@
"""Docker registry manifest fetcher.
Fetches image manifests directly from container registries to get layer sizes
before pulling an image. This enables accurate size-based progress tracking.
"""
from __future__ import annotations
from dataclasses import dataclass
import logging
import re
from typing import TYPE_CHECKING
import aiohttp
from supervisor.docker.utils import get_registry_from_image
from .const import DOCKER_HUB, DOCKER_HUB_API, DOCKER_HUB_LEGACY
if TYPE_CHECKING:
from ..coresys import CoreSys
_LOGGER = logging.getLogger(__name__)
# Media types for manifest requests
MANIFEST_MEDIA_TYPES = (
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
"application/vnd.oci.image.index.v1+json",
)
@dataclass
class ImageManifest:
"""Container image manifest with layer information."""
digest: str
total_size: int
layers: dict[str, int] # digest -> size in bytes
@property
def layer_count(self) -> int:
"""Return number of layers."""
return len(self.layers)
def parse_image_reference(image: str, tag: str) -> tuple[str, str, str]:
"""Parse image reference into (registry, repository, tag).
Examples:
ghcr.io/home-assistant/home-assistant:2025.1.0
-> (ghcr.io, home-assistant/home-assistant, 2025.1.0)
homeassistant/home-assistant:latest
-> (registry-1.docker.io, homeassistant/home-assistant, latest)
alpine:3.18
-> (registry-1.docker.io, library/alpine, 3.18)
"""
# Check if image has explicit registry host
registry = get_registry_from_image(image)
if registry:
repository = image[len(registry) + 1 :] # Remove "registry/" prefix
else:
registry = DOCKER_HUB
repository = image
# Docker Hub requires "library/" prefix for official images
if "/" not in repository:
repository = f"library/{repository}"
return registry, repository, tag
class RegistryManifestFetcher:
"""Fetches manifests from container registries."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize the fetcher."""
self.coresys = coresys
@property
def _session(self) -> aiohttp.ClientSession:
"""Return the websession for HTTP requests."""
return self.coresys.websession
def _get_api_endpoint(self, registry: str) -> str:
"""Get the actual API endpoint for a registry.
Translates docker.io to registry-1.docker.io for Docker Hub.
This matches exactly what Docker itself does internally - see daemon/pkg/registry/config.go:49
where Docker hardcodes DefaultRegistryHost = "registry-1.docker.io" for registry operations.
Without this, requests to https://docker.io/v2/... redirect to https://www.docker.com/
"""
return DOCKER_HUB_API if registry == DOCKER_HUB else registry
def _get_credentials(self, registry: str) -> tuple[str, str] | None:
"""Get credentials for registry from Docker config.
Returns (username, password) tuple or None if no credentials.
"""
registries = self.coresys.docker.config.registries
# Map registry hostname to config key
# Docker Hub can be stored as "hub.docker.com" in config
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
if DOCKER_HUB in registries:
creds = registries[DOCKER_HUB]
return creds.get("username"), creds.get("password")
elif registry in registries:
creds = registries[registry]
return creds.get("username"), creds.get("password")
return None
async def _get_auth_token(
self,
registry: str,
repository: str,
) -> str | None:
"""Get authentication token for registry.
Uses the WWW-Authenticate header from a 401 response to discover
the token endpoint, then requests a token with appropriate scope.
"""
api_endpoint = self._get_api_endpoint(registry)
# First, make an unauthenticated request to get WWW-Authenticate header
manifest_url = f"https://{api_endpoint}/v2/{repository}/manifests/latest"
try:
async with self._session.get(manifest_url) as resp:
if resp.status == 200:
# No auth required
return None
if resp.status != 401:
_LOGGER.warning(
"Unexpected status %d from registry %s", resp.status, registry
)
return None
www_auth = resp.headers.get("WWW-Authenticate", "")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to connect to registry %s: %s", registry, err)
return None
# Parse WWW-Authenticate: Bearer realm="...",service="...",scope="..."
if not www_auth.startswith("Bearer "):
_LOGGER.warning("Unsupported auth type from %s: %s", registry, www_auth)
return None
params = {}
for match in re.finditer(r'(\w+)="([^"]*)"', www_auth):
params[match.group(1)] = match.group(2)
realm = params.get("realm")
service = params.get("service")
if not realm:
_LOGGER.warning("No realm in WWW-Authenticate from %s", registry)
return None
# Build token request URL
token_url = f"{realm}?scope=repository:{repository}:pull"
if service:
token_url += f"&service={service}"
# Check for credentials
auth = None
credentials = self._get_credentials(registry)
if credentials:
username, password = credentials
if username and password:
auth = aiohttp.BasicAuth(username, password)
_LOGGER.debug("Using credentials for %s", registry)
try:
async with self._session.get(token_url, auth=auth) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to get token from %s: %d", realm, resp.status
)
return None
data = await resp.json()
return data.get("token") or data.get("access_token")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to get auth token: %s", err)
return None
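The `WWW-Authenticate` handling above can be sketched in isolation. The header value below is a made-up example, not a real registry response:

```python
import re

# Parse Bearer challenge parameters and build the token request URL,
# mirroring the regex and URL construction used above.
www_auth = (
    'Bearer realm="https://auth.example.io/token",'
    'service="registry.example.io"'
)
repository = "library/alpine"

params = {m.group(1): m.group(2) for m in re.finditer(r'(\w+)="([^"]*)"', www_auth)}
token_url = f"{params['realm']}?scope=repository:{repository}:pull"
if service := params.get("service"):
    token_url += f"&service={service}"

print(token_url)
# → https://auth.example.io/token?scope=repository:library/alpine:pull&service=registry.example.io
```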
async def _fetch_manifest(
self,
registry: str,
repository: str,
reference: str,
token: str | None,
platform: str,
) -> dict | None:
"""Fetch manifest from registry.
If the manifest is a manifest list (multi-arch), fetches the
platform-specific manifest.
"""
api_endpoint = self._get_api_endpoint(registry)
manifest_url = f"https://{api_endpoint}/v2/{repository}/manifests/{reference}"
headers = {"Accept": ", ".join(MANIFEST_MEDIA_TYPES)}
if token:
headers["Authorization"] = f"Bearer {token}"
try:
async with self._session.get(manifest_url, headers=headers) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to fetch manifest for %s/%s:%s - %d",
registry,
repository,
reference,
resp.status,
)
return None
manifest = await resp.json()
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch manifest: %s", err)
return None
media_type = manifest.get("mediaType", "")
# Check if this is a manifest list (multi-arch image)
if "list" in media_type or "index" in media_type:
manifests = manifest.get("manifests", [])
if not manifests:
_LOGGER.warning("Empty manifest list for %s/%s", registry, repository)
return None
# Platform format is "linux/amd64", "linux/arm64", etc.
parts = platform.split("/")
if len(parts) < 2:
_LOGGER.warning("Invalid platform format: %s", platform)
return None
target_os, target_arch = parts[0], parts[1]
platform_manifest = None
for m in manifests:
plat = m.get("platform", {})
if (
plat.get("os") == target_os
and plat.get("architecture") == target_arch
):
platform_manifest = m
break
if not platform_manifest:
_LOGGER.warning(
"Platform %s/%s not found in manifest list for %s/%s, "
"cannot use manifest for progress tracking",
target_os,
target_arch,
registry,
repository,
)
return None
# Fetch the platform-specific manifest
return await self._fetch_manifest(
registry,
repository,
platform_manifest["digest"],
token,
platform,
)
return manifest
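The platform-selection step above, shown on a fabricated two-entry manifest list (digests are placeholders):

```python
# Pick the entry from a multi-arch manifest list whose platform matches
# the requested "os/architecture" string, as done above.
manifests = [
    {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:bbb", "platform": {"os": "linux", "architecture": "arm64"}},
]
target_os, target_arch = "linux/arm64".split("/")[:2]

platform_manifest = next(
    (
        m
        for m in manifests
        if m.get("platform", {}).get("os") == target_os
        and m.get("platform", {}).get("architecture") == target_arch
    ),
    None,
)
print(platform_manifest["digest"])
# → sha256:bbb
```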
async def get_manifest(
self,
image: str,
tag: str,
platform: str,
) -> ImageManifest | None:
"""Fetch manifest and extract layer sizes.
Args:
image: Image name (e.g., "ghcr.io/home-assistant/home-assistant")
tag: Image tag (e.g., "2025.1.0")
platform: Target platform (e.g., "linux/amd64")
Returns:
ImageManifest with layer sizes, or None if fetch failed.
"""
registry, repository, tag = parse_image_reference(image, tag)
_LOGGER.debug(
"Fetching manifest for %s/%s:%s (platform=%s)",
registry,
repository,
tag,
platform,
)
# Get auth token
token = await self._get_auth_token(registry, repository)
# Fetch manifest
manifest = await self._fetch_manifest(
registry, repository, tag, token, platform
)
if not manifest:
return None
# Extract layer information
layers = manifest.get("layers", [])
if not layers:
_LOGGER.warning(
"No layers in manifest for %s/%s:%s", registry, repository, tag
)
return None
layer_sizes: dict[str, int] = {}
total_size = 0
for layer in layers:
digest = layer.get("digest", "")
size = layer.get("size", 0)
if digest and size:
# Store by short digest (first 12 chars after sha256:)
short_digest = (
digest.split(":")[1][:12] if ":" in digest else digest[:12]
)
layer_sizes[short_digest] = size
total_size += size
digest = manifest.get("config", {}).get("digest", "")
_LOGGER.debug(
"Manifest for %s/%s:%s - %d layers, %d bytes total",
registry,
repository,
tag,
len(layer_sizes),
total_size,
)
return ImageManifest(
digest=digest,
total_size=total_size,
layers=layer_sizes,
)
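The layer map above is keyed by a short digest. A sketch of that truncation (the sample digest is illustrative):

```python
# Reduce "sha256:<hash>" to the first 12 characters of the hash, matching
# the short IDs Docker prints in pull progress output.
def short_digest(digest: str) -> str:
    return digest.split(":")[1][:12] if ":" in digest else digest[:12]


print(short_digest("sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1"))
# → 4f4fb700ef54
```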


@@ -1,21 +1,25 @@
"""Supervisor docker monitor based on events."""
from contextlib import suppress
import asyncio
from dataclasses import dataclass
import logging
from threading import Thread
from typing import Any
from docker.models.containers import Container
from docker.types.daemon import CancellableStream
import aiodocker
from aiodocker.channel import ChannelSubscriber
from ..const import BusEvent
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HassioError
from ..utils.sentry import async_capture_exception, capture_exception
from .const import LABEL_MANAGED, ContainerState
_LOGGER: logging.Logger = logging.getLogger(__name__)
STOP_MONITOR_TIMEOUT = 5.0
@dataclass
@dataclass(slots=True, frozen=True)
class DockerContainerStateEvent:
"""Event for docker container state change."""
@@ -25,71 +29,157 @@ class DockerContainerStateEvent:
time: int
class DockerMonitor(CoreSysAttributes, Thread):
@dataclass(slots=True, frozen=True)
class DockerEventCallbackTask:
"""Docker event and task spawned for it."""
data: DockerContainerStateEvent
task: asyncio.Task
class DockerMonitor(CoreSysAttributes):
"""Docker monitor for supervisor."""
def __init__(self, coresys: CoreSys):
def __init__(self, coresys: CoreSys, docker_client: aiodocker.Docker):
"""Initialize Docker monitor object."""
super().__init__()
self.coresys = coresys
self._events: CancellableStream | None = None
self.docker = docker_client
self._unlabeled_managed_containers: list[str] = []
self._monitor_task: asyncio.Task | None = None
self._await_task: asyncio.Task | None = None
self._event_tasks: asyncio.Queue[DockerEventCallbackTask | None]
def watch_container(self, container: Container):
def watch_container(self, container_metadata: dict[str, Any]):
"""If container is missing the managed label, add name to list."""
if LABEL_MANAGED not in container.labels and container.name:
self._unlabeled_managed_containers += [container.name]
labels: dict[str, str] = container_metadata.get("Config", {}).get("Labels", {})
name: str | None = container_metadata.get("Name")
if name:
name = name.lstrip("/")
if LABEL_MANAGED not in labels and name:
self._unlabeled_managed_containers += [name]
async def load(self):
"""Start docker events monitor."""
self._events = self.sys_docker.events
Thread.start(self)
events = self.docker.events.subscribe()
self._event_tasks = asyncio.Queue()
self._monitor_task = self.sys_create_task(self._run(events), eager_start=True)
self._await_task = self.sys_create_task(
self._await_event_tasks(), eager_start=True
)
_LOGGER.info("Started docker events monitor")
async def unload(self):
"""Stop docker events monitor."""
self._events.close()
with suppress(RuntimeError):
self.join(timeout=5)
await self.docker.events.stop()
tasks = [task for task in (self._monitor_task, self._await_task) if task]
if tasks:
_, pending = await asyncio.wait(tasks, timeout=STOP_MONITOR_TIMEOUT)
if pending:
_LOGGER.warning(
"Timeout stopping docker events monitor, cancelling %s pending task(s)",
len(pending),
)
for task in pending:
task.cancel()
await asyncio.gather(*pending, return_exceptions=True)
self._event_tasks.shutdown(immediate=True)
self._monitor_task = None
self._await_task = None
_LOGGER.info("Stopped docker events monitor")
def run(self) -> None:
async def _run(self, events: ChannelSubscriber) -> None:
"""Monitor and process docker events."""
if not self._events:
raise RuntimeError("Monitor has not been loaded!")
try:
while True:
event: dict[str, Any] | None = await events.get()
if event is None:
break
for event in self._events:
attributes: dict[str, str] = event.get("Actor", {}).get("Attributes", {})
if event["Type"] == "container" and (
LABEL_MANAGED in attributes
or attributes.get("name") in self._unlabeled_managed_containers
):
container_state: ContainerState | None = None
action: str = event["Action"]
if action == "start":
container_state = ContainerState.RUNNING
elif action == "die":
container_state = (
ContainerState.STOPPED
if int(event["Actor"]["Attributes"]["exitCode"]) == 0
else ContainerState.FAILED
try:
attributes: dict[str, str] = event.get("Actor", {}).get(
"Attributes", {}
)
elif action == "health_status: healthy":
container_state = ContainerState.HEALTHY
elif action == "health_status: unhealthy":
container_state = ContainerState.UNHEALTHY
if container_state:
self.sys_loop.call_soon_threadsafe(
self.sys_bus.fire_event,
BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
DockerContainerStateEvent(
name=attributes["name"],
state=container_state,
id=event["Actor"]["ID"],
time=event["time"],
),
if event["Type"] == "container" and (
LABEL_MANAGED in attributes
or attributes.get("name") in self._unlabeled_managed_containers
):
container_state: ContainerState | None = None
action: str = event["Action"]
if action == "start":
container_state = ContainerState.RUNNING
elif action == "die":
container_state = (
ContainerState.STOPPED
if int(event["Actor"]["Attributes"]["exitCode"]) == 0
else ContainerState.FAILED
)
elif action == "health_status: healthy":
container_state = ContainerState.HEALTHY
elif action == "health_status: unhealthy":
container_state = ContainerState.UNHEALTHY
if container_state:
state_event = DockerContainerStateEvent(
name=attributes["name"],
state=container_state,
id=event["Actor"]["ID"],
time=event["time"],
)
tasks = self.sys_bus.fire_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, state_event
)
await asyncio.gather(
*[
self._event_tasks.put(
DockerEventCallbackTask(state_event, task)
)
for task in tasks
]
)
# Broad exception here because one bad event cannot stop the monitor
# Log what went wrong and send it to sentry but continue monitoring
except Exception as err: # pylint: disable=broad-exception-caught
await async_capture_exception(err)
_LOGGER.error(
"Could not process docker event, container state may be inaccurate: %s %s",
event,
err,
)
# Can only get to this except block if an error is raised while getting events from the queue
# Shouldn't really happen but any errors raised there are catastrophic and end the monitor
# Log that the monitor broke and send the details to sentry to review
except Exception as err: # pylint: disable=broad-exception-caught
await async_capture_exception(err)
_LOGGER.error(
"Cannot get events from docker, monitor has crashed. Container "
"state information will be inaccurate: %s",
err,
)
finally:
await self._event_tasks.put(None)
async def _await_event_tasks(self):
"""Await event callback tasks to clean up and capture output."""
while (event := await self._event_tasks.get()) is not None:
try:
await event.task
# Exceptions which inherit from HassioError are already handled
# We can safely ignore these, we only track the unhandled ones here
except HassioError:
pass
except Exception as err: # pylint: disable=broad-exception-caught
capture_exception(err)
_LOGGER.error(
"Error encountered while processing docker container state event: %s %s %s",
event.task.get_name(),
event.data,
err,
)


@@ -1,19 +1,18 @@
"""Internal network manager for Supervisor."""
import asyncio
from contextlib import suppress
from http import HTTPStatus
from ipaddress import IPv4Address
import logging
from typing import Self, cast
from typing import Any, Self, cast
import docker
import requests
import aiodocker
from aiodocker.networks import DockerNetwork as AiodockerNetwork
from ..const import (
ATTR_AUDIO,
ATTR_CLI,
ATTR_DNS,
ATTR_ENABLE_IPV6,
ATTR_OBSERVER,
ATTR_SUPERVISOR,
DOCKER_IPV4_NETWORK_MASK,
@@ -30,44 +29,112 @@ from ..exceptions import DockerError
_LOGGER: logging.Logger = logging.getLogger(__name__)
DOCKER_ENABLEIPV6 = "EnableIPv6"
DOCKER_NETWORK_PARAMS = {
"name": DOCKER_NETWORK,
"driver": DOCKER_NETWORK_DRIVER,
"ipam": docker.types.IPAMConfig(
pool_configs=[
docker.types.IPAMPool(subnet=str(DOCKER_IPV6_NETWORK_MASK)),
docker.types.IPAMPool(
subnet=str(DOCKER_IPV4_NETWORK_MASK),
gateway=str(DOCKER_IPV4_NETWORK_MASK[1]),
iprange=str(DOCKER_IPV4_NETWORK_RANGE),
),
]
),
ATTR_ENABLE_IPV6: True,
"options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
DOCKER_OPTIONS = "Options"
DOCKER_ENABLE_IPV6_DEFAULT = True
DOCKER_NETWORK_PARAMS = {
"Name": DOCKER_NETWORK,
"Driver": DOCKER_NETWORK_DRIVER,
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": str(DOCKER_IPV6_NETWORK_MASK),
},
{
"Subnet": str(DOCKER_IPV4_NETWORK_MASK),
"Gateway": str(DOCKER_IPV4_NETWORK_MASK[1]),
"IPRange": str(DOCKER_IPV4_NETWORK_RANGE),
},
],
},
DOCKER_ENABLEIPV6: DOCKER_ENABLE_IPV6_DEFAULT,
DOCKER_OPTIONS: {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
class DockerNetwork:
"""Internal Supervisor Network.
"""Internal Supervisor Network."""
This class is not AsyncIO safe!
"""
def __init__(self, docker_client: docker.DockerClient):
def __init__(self, docker_client: aiodocker.Docker):
"""Initialize internal Supervisor network."""
self.docker: docker.DockerClient = docker_client
self._network: docker.models.networks.Network
self.docker: aiodocker.Docker = docker_client
self._network: AiodockerNetwork | None = None
self._network_meta: dict[str, Any] | None = None
async def post_init(
self, enable_ipv6: bool | None = None, mtu: int | None = None
) -> Self:
"""Post init actions that must be done in event loop."""
self._network = await asyncio.get_running_loop().run_in_executor(
None, self._get_network, enable_ipv6, mtu
try:
self._network = network = await self.docker.networks.get(DOCKER_NETWORK)
except aiodocker.DockerError as err:
# If network was not found, create it instead. Can skip further checks since it's new
if err.status == HTTPStatus.NOT_FOUND:
await self._create_supervisor_network(enable_ipv6, mtu)
return self
raise DockerError(
f"Could not get network from Docker: {err!s}", _LOGGER.error
) from err
# Cache metadata for network
await self.reload()
current_ipv6: bool = self.network_meta.get(DOCKER_ENABLEIPV6, False)
current_mtu_str: str | None = self.network_meta.get(DOCKER_OPTIONS, {}).get(
"com.docker.network.driver.mtu"
)
current_mtu = int(current_mtu_str) if current_mtu_str is not None else None
# Check if we have explicitly provided settings that differ from what is set
changes = []
if enable_ipv6 is not None and current_ipv6 != enable_ipv6:
changes.append("IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only")
if mtu is not None and current_mtu != mtu:
changes.append(f"MTU {mtu}")
if not changes:
return self
_LOGGER.info("Migrating Supervisor network to %s", ", ".join(changes))
# System is considered running if any containers besides Supervisor and Observer are found.
# A reboot is then required; we won't disconnect those containers to remake the network.
containers: dict[str, dict[str, Any]] = self.network_meta.get("Containers", {})
system_running = containers and any(
container.get("Name") not in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
for container in containers.values()
)
if system_running:
_LOGGER.warning(
"System appears to be running, not applying Supervisor network change. "
"Reboot your system to apply the change."
)
return self
# Disconnect all containers in the network
for c_id, meta in containers.items():
try:
await network.disconnect({"Container": c_id, "Force": True})
except aiodocker.DockerError:
_LOGGER.warning(
"Cannot apply Supervisor network changes because container %s "
"could not be disconnected. Reboot your system to apply change.",
meta.get("Name"),
)
return self
# Remove the network
try:
await network.delete()
except aiodocker.DockerError:
_LOGGER.warning(
"Cannot apply Supervisor network changes because Supervisor network "
"could not be removed and recreated. Reboot your system to apply change."
)
return self
# Recreate it with correct settings
await self._create_supervisor_network(enable_ipv6, mtu)
return self
@property
@@ -76,14 +143,23 @@ class DockerNetwork:
return DOCKER_NETWORK
@property
def network(self) -> docker.models.networks.Network:
def network(self) -> AiodockerNetwork:
"""Return docker network."""
if not self._network:
raise RuntimeError("Network not set!")
return self._network
@property
def containers(self) -> list[str]:
"""Return of connected containers from network."""
return list(self.network.attrs.get("Containers", {}).keys())
def network_meta(self) -> dict[str, Any]:
"""Return docker network metadata."""
if not self._network_meta:
raise RuntimeError("Network metadata not set!")
return self._network_meta
@property
def containers(self) -> dict[str, dict[str, Any]]:
"""Return metadata of containers connected to the network."""
return self.network_meta.get("Containers", {})
@property
def gateway(self) -> IPv4Address:
@@ -115,94 +191,37 @@ class DockerNetwork:
"""Return observer of the network."""
return DOCKER_IPV4_NETWORK_MASK[6]
def _get_network(
async def _create_supervisor_network(
self, enable_ipv6: bool | None = None, mtu: int | None = None
) -> docker.models.networks.Network:
"""Get supervisor network."""
try:
if network := self.docker.networks.get(DOCKER_NETWORK):
current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
current_mtu = network.attrs.get("Options", {}).get(
"com.docker.network.driver.mtu"
)
current_mtu = int(current_mtu) if current_mtu else None
# If the network exists and we don't have explicit settings,
# simply stick with what we have.
if (enable_ipv6 is None or current_ipv6 == enable_ipv6) and (
mtu is None or current_mtu == mtu
):
return network
# We have explicit settings which differ from the current state.
changes = []
if enable_ipv6 is not None and current_ipv6 != enable_ipv6:
changes.append(
"IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only"
)
if mtu is not None and current_mtu != mtu:
changes.append(f"MTU {mtu}")
if changes:
_LOGGER.info(
"Migrating Supervisor network to %s", ", ".join(changes)
)
if (containers := network.containers) and (
containers_all := all(
container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
for container in containers
)
):
for container in containers:
with suppress(
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
):
network.disconnect(container, force=True)
if not containers or containers_all:
try:
network.remove()
except docker.errors.APIError:
_LOGGER.warning("Failed to remove existing Supervisor network")
return network
else:
_LOGGER.warning(
"System appears to be running, "
"not applying Supervisor network change. "
"Reboot your system to apply the change."
)
return network
except docker.errors.NotFound:
_LOGGER.info("Can't find Supervisor network, creating a new network")
) -> None:
"""Create supervisor network."""
network_params = DOCKER_NETWORK_PARAMS.copy()
network_params[ATTR_ENABLE_IPV6] = (
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
)
if enable_ipv6 is not None:
network_params[DOCKER_ENABLEIPV6] = enable_ipv6
# Copy options and add MTU if specified
if mtu is not None:
options = cast(dict[str, str], network_params["options"]).copy()
options = cast(dict[str, str], network_params[DOCKER_OPTIONS]).copy()
options["com.docker.network.driver.mtu"] = str(mtu)
network_params["options"] = options
network_params[DOCKER_OPTIONS] = options
try:
self._network = self.docker.networks.create(**network_params) # type: ignore
except docker.errors.APIError as err:
self._network = await self.docker.networks.create(network_params)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't create Supervisor network: {err}", _LOGGER.error
) from err
await self.reload()
with suppress(DockerError):
self.attach_container_by_name(
await self.attach_container_by_name(
SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
)
with suppress(DockerError):
self.attach_container_by_name(
await self.attach_container_by_name(
OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
)
@@ -212,103 +231,90 @@ class DockerNetwork:
(ATTR_AUDIO, self.audio),
):
with suppress(DockerError):
self.attach_container_by_name(f"{DOCKER_PREFIX}_{name}", [name], ip)
await self.attach_container_by_name(
f"{DOCKER_PREFIX}_{name}", [name], ip
)
return self._network
async def reload(self) -> None:
"""Get and cache metadata for supervisor network."""
try:
self._network_meta = await self.network.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get network metadata from Docker: {err!s}", _LOGGER.error
) from err
def attach_container(
async def attach_container(
self,
container: docker.models.containers.Container,
container_id: str,
name: str | None,
alias: list[str] | None = None,
ipv4: IPv4Address | None = None,
) -> None:
"""Attach container to Supervisor network.
Need run inside executor.
"""
"""Attach container to Supervisor network."""
# Reload Network information
with suppress(docker.errors.DockerException, requests.RequestException):
self.network.reload()
with suppress(DockerError):
await self.reload()
# Check stale Network
if container.name and container.name in (
val.get("Name") for val in self.network.attrs.get("Containers", {}).values()
):
self.stale_cleanup(container.name)
if name and name in (val.get("Name") for val in self.containers.values()):
await self.stale_cleanup(name)
# Attach Network
endpoint_config: dict[str, Any] = {}
if alias:
endpoint_config["Aliases"] = alias
if ipv4:
endpoint_config["IPAMConfig"] = {"IPv4Address": str(ipv4)}
try:
self.network.connect(
container, aliases=alias, ipv4_address=str(ipv4) if ipv4 else None
await self.network.connect(
{
"Container": container_id,
"EndpointConfig": endpoint_config,
}
)
except (
docker.errors.NotFound,
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Can't connect {container.name} to Supervisor network: {err}",
f"Can't connect {name or container_id} to Supervisor network: {err}",
_LOGGER.error,
) from err
def attach_container_by_name(
self,
name: str,
alias: list[str] | None = None,
ipv4: IPv4Address | None = None,
async def attach_container_by_name(
self, name: str, alias: list[str] | None = None, ipv4: IPv4Address | None = None
) -> None:
"""Attach container to Supervisor network.
Need run inside executor.
"""
"""Attach container to Supervisor network."""
try:
container = self.docker.containers.get(name)
except (
docker.errors.NotFound,
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
container = await self.docker.containers.get(name)
except aiodocker.DockerError as err:
raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err
if container.id not in self.containers:
self.attach_container(container, alias, ipv4)
await self.attach_container(container.id, name, alias, ipv4)
def detach_default_bridge(
self, container: docker.models.containers.Container
async def detach_default_bridge(
self, container_id: str, name: str | None = None
) -> None:
"""Detach default Docker bridge.
Need run inside executor.
"""
"""Detach default Docker bridge."""
try:
default_network = self.docker.networks.get(DOCKER_NETWORK_DRIVER)
default_network.disconnect(container)
except docker.errors.NotFound:
pass
except (
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
default_network = await self.docker.networks.get(DOCKER_NETWORK_DRIVER)
await default_network.disconnect({"Container": container_id})
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return
raise DockerError(
f"Can't disconnect {container.name} from default network: {err}",
f"Can't disconnect {name or container_id} from default network: {err}",
_LOGGER.warning,
) from err
def stale_cleanup(self, name: str) -> None:
async def stale_cleanup(self, name: str) -> None:
"""Force remove a container from Network.
Fix: https://github.com/moby/moby/issues/23302
"""
try:
self.network.disconnect(name, force=True)
except (
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
await self.network.disconnect({"Container": name, "Force": True})
except aiodocker.DockerError as err:
raise DockerError(
f"Can't disconnect {name} from Supervisor network: {err}",
_LOGGER.warning,


@@ -2,7 +2,7 @@
import logging
from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME
from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME, OBSERVER_PORT
from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError
from ..jobs.const import JobConcurrency
@@ -48,10 +48,10 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
environment={
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.sys_plugins.observer.supervisor_token,
ENV_NETWORK_MASK: DOCKER_IPV4_NETWORK_MASK,
ENV_NETWORK_MASK: str(DOCKER_IPV4_NETWORK_MASK),
},
mounts=[MOUNT_DOCKER],
ports={"80/tcp": 4357},
ports={"80/tcp": OBSERVER_PORT},
oom_score_adj=-300,
)
_LOGGER.info(


@@ -0,0 +1,368 @@
"""Image pull progress tracking."""
from __future__ import annotations
from contextlib import suppress
from dataclasses import dataclass, field
from enum import Enum
import logging
from typing import TYPE_CHECKING, cast
if TYPE_CHECKING:
from .manager import PullLogEntry
from .manifest import ImageManifest
_LOGGER = logging.getLogger(__name__)
# Progress weight distribution: 70% downloading, 30% extraction
DOWNLOAD_WEIGHT = 70.0
EXTRACT_WEIGHT = 30.0
class LayerPullStatus(Enum):
"""Status values for pulling an image layer.
These are a subset of the statuses in a docker pull image log.
The order field allows comparing which stage is further along.
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
WAITING = 1, "Waiting"
RETRYING = 2, "Retrying" # Matches "Retrying in N seconds"
DOWNLOADING = 3, "Downloading"
VERIFYING_CHECKSUM = 4, "Verifying Checksum"
DOWNLOAD_COMPLETE = 5, "Download complete"
EXTRACTING = 6, "Extracting"
PULL_COMPLETE = 7, "Pull complete"
ALREADY_EXISTS = 7, "Already exists"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow string comparisons on status."""
with suppress(AttributeError):
return self.status == cast(LayerPullStatus, value).status
return self.status == value
def __hash__(self) -> int:
"""Return hash based on status string."""
return hash(self.status)
def __lt__(self, other: object) -> bool:
"""Order instances by stage progression."""
with suppress(AttributeError):
return self.order < cast(LayerPullStatus, other).order
return False
@classmethod
def from_status(cls, status: str) -> LayerPullStatus | None:
"""Get enum from status string, or None if not recognized."""
# Handle "Retrying in N seconds" pattern
if status.startswith("Retrying in "):
return cls.RETRYING
for member in cls:
if member.status == status:
return member
return None
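The (order, status) enum pattern above can be shown compactly. Only two members are included and the real class also defines `__eq__`/`__hash__`; this is a sketch, not the module's class:

```python
from enum import Enum


class Stage(Enum):
    """Each member's tuple value is unpacked into __init__ as (order, status)."""

    RETRYING = 2, "Retrying"
    DOWNLOADING = 3, "Downloading"

    def __init__(self, order: int, status: str) -> None:
        self.order = order
        self.status = status

    def __lt__(self, other: "Stage") -> bool:
        return self.order < other.order

    @classmethod
    def from_status(cls, status: str) -> "Stage | None":
        # "Retrying in N seconds" varies, so match it by prefix
        if status.startswith("Retrying in "):
            return cls.RETRYING
        return next((m for m in cls if m.status == status), None)


print(Stage.from_status("Retrying in 5 seconds"))
# → Stage.RETRYING
```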
@dataclass
class LayerProgress:
"""Track progress of a single layer."""
layer_id: str
total_size: int = 0 # Size in bytes (from downloading, reused for extraction)
download_current: int = 0
extract_current: int = 0 # Extraction progress in bytes (overlay2 only)
download_complete: bool = False
extract_complete: bool = False
already_exists: bool = False # Layer was already locally available
def calculate_progress(self) -> float:
"""Calculate layer progress 0-100.
Progress is weighted: 70% download, 30% extraction.
For overlay2, we have byte-based extraction progress.
For containerd, extraction jumps from 70% to 100% on completion.
"""
if self.already_exists or self.extract_complete:
return 100.0
if self.download_complete:
# Check if we have extraction progress (overlay2)
if self.extract_current > 0 and self.total_size > 0:
extract_pct = min(1.0, self.extract_current / self.total_size)
return DOWNLOAD_WEIGHT + (extract_pct * EXTRACT_WEIGHT)
# No extraction progress yet - return 70%
return DOWNLOAD_WEIGHT
if self.total_size > 0:
download_pct = min(1.0, self.download_current / self.total_size)
return download_pct * DOWNLOAD_WEIGHT
return 0.0
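For a single layer, the 70/30 split works out as follows; this is a condensed, standalone restatement of `calculate_progress` (the function name is illustrative, the arithmetic is the same):

```python
DOWNLOAD_WEIGHT = 70.0
EXTRACT_WEIGHT = 30.0

def layer_progress(total: int, downloaded: int, extracted: int,
                   download_done: bool, extract_done: bool) -> float:
    """Weighted 0-100 progress for one layer: 70% download, 30% extract."""
    if extract_done:
        return 100.0
    if download_done:
        if extracted > 0 and total > 0:
            # overlay2: byte-based extraction progress
            return DOWNLOAD_WEIGHT + min(1.0, extracted / total) * EXTRACT_WEIGHT
        return DOWNLOAD_WEIGHT  # containerd: no usable extraction bytes
    if total > 0:
        return min(1.0, downloaded / total) * DOWNLOAD_WEIGHT
    return 0.0

# Halfway through the download: 0.5 * 70 = 35%
assert layer_progress(100, 50, 0, False, False) == 35.0
# Download done, extraction halfway: 70 + 0.5 * 30 = 85%
assert layer_progress(100, 100, 50, True, False) == 85.0
```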
@dataclass
class ImagePullProgress:
"""Track overall progress of pulling an image.
When manifest layer sizes are provided, uses size-weighted progress where
each layer contributes proportionally to its size. This gives accurate
progress based on actual bytes to download.
When manifest is not available, falls back to count-based progress where
each layer contributes equally.
Layers that already exist locally are excluded from the progress calculation.
"""
layers: dict[str, LayerProgress] = field(default_factory=dict)
_last_reported_progress: float = field(default=0.0, repr=False)
_seen_downloading: bool = field(default=False, repr=False)
_manifest_layer_sizes: dict[str, int] = field(default_factory=dict, repr=False)
_total_manifest_size: int = field(default=0, repr=False)
def set_manifest(self, manifest: ImageManifest) -> None:
"""Set manifest layer sizes for accurate size-based progress.
Should be called before processing pull events.
"""
self._manifest_layer_sizes = dict(manifest.layers)
self._total_manifest_size = manifest.total_size
_LOGGER.debug(
"Manifest set: %d layers, %d bytes total",
len(self._manifest_layer_sizes),
self._total_manifest_size,
)
def get_or_create_layer(self, layer_id: str) -> LayerProgress:
"""Get existing layer or create new one."""
if layer_id not in self.layers:
# If we have manifest sizes, pre-populate the layer's total_size
manifest_size = self._manifest_layer_sizes.get(layer_id, 0)
self.layers[layer_id] = LayerProgress(
layer_id=layer_id, total_size=manifest_size
)
return self.layers[layer_id]
def process_event(self, entry: PullLogEntry) -> None:
"""Process a pull log event and update layer state."""
# Skip events without layer ID or status
if not entry.id or not entry.status:
return
# Skip metadata events that aren't layer-specific
# "Pulling from X" has id=tag but isn't a layer
if entry.status.startswith("Pulling from "):
return
# Parse status to enum (returns None for unrecognized statuses)
status = LayerPullStatus.from_status(entry.status)
if status is None:
return
layer = self.get_or_create_layer(entry.id)
# Handle "Already exists" - layer is locally available
if status is LayerPullStatus.ALREADY_EXISTS:
layer.already_exists = True
layer.download_complete = True
layer.extract_complete = True
return
# Handle "Pulling fs layer" / "Waiting" - layer is being tracked
if status in (LayerPullStatus.PULLING_FS_LAYER, LayerPullStatus.WAITING):
return
# Handle "Downloading" - update download progress
if status is LayerPullStatus.DOWNLOADING:
# Mark that we've seen downloading - now we know layer count is complete
self._seen_downloading = True
if entry.progress_detail and entry.progress_detail.current is not None:
layer.download_current = entry.progress_detail.current
if entry.progress_detail and entry.progress_detail.total is not None:
# Only set total_size if not already set or if this is larger
# (handles case where total changes during download)
layer.total_size = max(layer.total_size, entry.progress_detail.total)
return
# Handle "Verifying Checksum" - download is essentially complete
if status is LayerPullStatus.VERIFYING_CHECKSUM:
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Download complete" - download phase done
if status is LayerPullStatus.DOWNLOAD_COMPLETE:
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
elif layer.total_size == 0:
# Small layer that skipped downloading phase
# Set minimal size so it doesn't distort weighted average
layer.total_size = 1
layer.download_current = 1
return
# Handle "Extracting" - extraction in progress
if status is LayerPullStatus.EXTRACTING:
# For overlay2: progressDetail has {current, total} in bytes
# For containerd: progressDetail has {current, units: "s"} (time elapsed)
# We can only use byte-based progress (overlay2)
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
# Check if this is byte-based extraction progress (overlay2)
# Overlay2 has {current, total} in bytes, no units field
# Containerd has {current, units: "s"} which is useless for progress
if (
entry.progress_detail
and entry.progress_detail.current is not None
and entry.progress_detail.units is None
):
# Use layer's total_size from downloading phase (doesn't change)
layer.extract_current = entry.progress_detail.current
_LOGGER.debug(
"Layer %s extracting: %d/%d (%.1f%%)",
layer.layer_id,
layer.extract_current,
layer.total_size,
(layer.extract_current / layer.total_size * 100)
if layer.total_size > 0
else 0,
)
return
# Handle "Pull complete" - layer is fully done
if status is LayerPullStatus.PULL_COMPLETE:
layer.download_complete = True
layer.extract_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Retrying in N seconds" - reset download progress
if status is LayerPullStatus.RETRYING:
layer.download_current = 0
layer.download_complete = False
return
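The event handling above can be replayed against synthetic log entries. A minimal sketch using plain dicts in place of `LayerProgress` (the arguments stand in for `PullLogEntry` fields; this is not the real API):

```python
layers: dict[str, dict] = {}

def on_event(layer_id: str, status: str, current: int = 0, total: int = 0) -> None:
    """Apply one pull-log event to the per-layer state table."""
    layer = layers.setdefault(layer_id, {"current": 0, "total": 0, "done": False})
    if status == "Downloading":
        layer["current"] = current
        layer["total"] = max(layer["total"], total)  # total may grow mid-download
    elif status in ("Pull complete", "Already exists"):
        layer["current"] = layer["total"]
        layer["done"] = True
    elif status == "Download complete":
        layer["current"] = layer["total"]
    elif status.startswith("Retrying in "):
        layer["current"] = 0  # retry resets download progress

on_event("abc", "Downloading", current=10, total=100)
on_event("abc", "Retrying in 5 seconds")
assert layers["abc"] == {"current": 0, "total": 100, "done": False}
on_event("abc", "Downloading", current=100, total=100)
on_event("abc", "Pull complete")
assert layers["abc"]["done"] and layers["abc"]["current"] == 100
```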
def calculate_progress(self) -> float:
"""Calculate overall progress 0-100.
When manifest layer sizes are available, uses size-weighted progress
where each layer contributes proportionally to its size.
When manifest is not available, falls back to count-based progress
where each layer contributes equally.
Layers that already exist locally are excluded from the calculation.
Returns 0 until we've seen the first "Downloading" event, since Docker
reports "Already exists" and "Pulling fs layer" events before we know
the complete layer count.
"""
# Don't report progress until we've seen downloading start
# This ensures we know the full layer count before calculating progress
if not self._seen_downloading or not self.layers:
return 0.0
# Only count layers that need pulling (exclude already_exists)
layers_to_pull = [
layer for layer in self.layers.values() if not layer.already_exists
]
if not layers_to_pull:
# All layers already exist, nothing to download
return 100.0
# Use size-weighted progress if manifest sizes are available
if self._manifest_layer_sizes:
return min(100, self._calculate_size_weighted_progress(layers_to_pull))
# Fall back to count-based progress
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return min(100, total_progress / len(layers_to_pull))
def _calculate_size_weighted_progress(
self, layers_to_pull: list[LayerProgress]
) -> float:
"""Calculate size-weighted progress.
Each layer contributes to progress proportionally to its size.
Progress = sum(layer_progress * layer_size) / total_size
"""
# Calculate total size of layers that need pulling
total_size = sum(layer.total_size for layer in layers_to_pull)
if total_size == 0:
# No size info available, fall back to count-based
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return total_progress / len(layers_to_pull)
# Weight each layer's progress by its size
weighted_progress = 0.0
for layer in layers_to_pull:
if layer.total_size > 0:
layer_weight = layer.total_size / total_size
weighted_progress += layer.calculate_progress() * layer_weight
return weighted_progress
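Size weighting means one large layer dominates the average, matching the actual bytes still on the wire. A small illustration with hypothetical layer sizes:

```python
def size_weighted(progress_and_sizes: list[tuple[float, int]]) -> float:
    """Weight each layer's 0-100 progress by its share of total bytes."""
    total = sum(size for _, size in progress_and_sizes)
    if total == 0:
        # No size info: fall back to a plain count-based average
        return sum(p for p, _ in progress_and_sizes) / len(progress_and_sizes)
    return sum(p * size / total for p, size in progress_and_sizes)

# A 900 MB layer at 50% dominates a finished 100 MB layer:
# 50 * 0.9 + 100 * 0.1 = 55%
assert size_weighted([(50.0, 900), (100.0, 100)]) == 55.0
```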
def get_stage(self) -> str | None:
"""Get current stage based on layer states."""
if not self.layers:
return None
# Check if any layer is still downloading
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.download_complete:
return "Downloading"
# All downloads complete, check if extracting
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.extract_complete:
return "Extracting"
# All done
return "Pull complete"
def should_update_job(self, threshold: float = 1.0) -> tuple[bool, float]:
"""Check if job should be updated based on progress change.
Returns (should_update, current_progress).
Updates are triggered when progress changes by at least threshold%.
Progress is guaranteed to only increase (monotonic).
"""
current_progress = self.calculate_progress()
# Ensure monotonic progress - never report a decrease
# This can happen when new layers get size info and change the weighted average
if current_progress < self._last_reported_progress:
_LOGGER.debug(
"Progress decreased from %.1f%% to %.1f%%, keeping last reported",
self._last_reported_progress,
current_progress,
)
return False, self._last_reported_progress
if current_progress >= self._last_reported_progress + threshold:
_LOGGER.debug(
"Progress update: %.1f%% -> %.1f%% (delta: %.1f%%)",
self._last_reported_progress,
current_progress,
current_progress - self._last_reported_progress,
)
self._last_reported_progress = current_progress
return True, current_progress
return False, self._last_reported_progress
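The monotonic threshold logic in `should_update_job` isolates into a few lines. This sketch (illustrative class name) shows the two guards: clamping decreases and suppressing sub-threshold changes:

```python
class ThresholdReporter:
    """Report progress only when it rises by at least `threshold` percent."""

    def __init__(self, threshold: float = 1.0) -> None:
        self.threshold = threshold
        self.last = 0.0

    def should_update(self, current: float) -> tuple[bool, float]:
        if current < self.last:
            # Clamp decreases caused by re-weighted averages
            return False, self.last
        if current >= self.last + self.threshold:
            self.last = current
            return True, current
        return False, self.last

r = ThresholdReporter()
assert r.should_update(0.4) == (False, 0.0)  # below threshold
assert r.should_update(1.5) == (True, 1.5)   # crossed threshold
assert r.should_update(1.2) == (False, 1.5)  # decrease is clamped
```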

@@ -1,15 +1,12 @@
"""Init file for Supervisor Docker object."""
import asyncio
from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import os
import aiodocker
from awesomeversion.awesomeversion import AwesomeVersion
import docker
import requests
from ..exceptions import DockerError
from ..jobs.const import JobConcurrency
@@ -53,13 +50,13 @@ class DockerSupervisor(DockerInterface):
) -> None:
"""Attach to running docker container."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
docker_container = await self.sys_docker.containers.get(self.name)
self._meta = await docker_container.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get supervisor container metadata: {err!s}"
) from err
self._meta = docker_container.attrs
_LOGGER.info(
"Attaching to Supervisor %s with version %s",
self.image,
@@ -72,40 +69,40 @@ class DockerSupervisor(DockerInterface):
# Attach to network
_LOGGER.info("Connecting Supervisor to hassio-network")
await self.sys_run_in_executor(
self.sys_docker.network.attach_container,
docker_container,
await self.sys_docker.network.attach_container(
docker_container.id,
self.name,
alias=["supervisor"],
ipv4=self.sys_docker.network.supervisor,
)
@Job(name="docker_supervisor_retag", concurrency=JobConcurrency.GROUP_QUEUE)
def retag(self) -> Awaitable[None]:
async def retag(self) -> None:
"""Retag latest image to version."""
return self.sys_run_in_executor(self._retag)
def _retag(self) -> None:
"""Retag latest image to version.
Need run inside executor.
"""
try:
docker_container = self.sys_docker.containers.get(self.name)
except (docker.errors.DockerException, requests.RequestException) as err:
docker_container = await self.sys_docker.containers.get(self.name)
container_metadata = await docker_container.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get Supervisor container for retag: {err}", _LOGGER.error
) from err
if not self.image or not docker_container.image:
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/containers.py#L41
metadata_image = container_metadata.get("ImageID", container_metadata["Image"])
if not self.image or not metadata_image:
raise DockerError(
"Could not locate image from container metadata for retag",
_LOGGER.error,
)
try:
docker_container.image.tag(self.image, tag=str(self.version))
docker_container.image.tag(self.image, tag="latest")
except (docker.errors.DockerException, requests.RequestException) as err:
await asyncio.gather(
self.sys_docker.images.tag(
metadata_image, self.image, tag=str(self.version)
),
self.sys_docker.images.tag(metadata_image, self.image, tag="latest"),
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't retag Supervisor version: {err}", _LOGGER.error
) from err
@@ -117,28 +114,38 @@ class DockerSupervisor(DockerInterface):
async def update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
docker_image = await self.sys_docker.images.inspect(f"{image}:{version!s}")
except (
aiodocker.DockerError,
docker.errors.DockerException,
requests.RequestException,
) as err:
docker_container = await self.sys_docker.containers.get(self.name)
container_metadata = await docker_container.show()
except aiodocker.DockerError as err:
raise DockerError(
f"Can't get image or container to fix start tag: {err}", _LOGGER.error
f"Can't get container to fix start tag: {err}", _LOGGER.error
) from err
if not docker_container.image:
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/containers.py#L41
metadata_image = container_metadata.get("ImageID", container_metadata["Image"])
if not metadata_image:
raise DockerError(
"Cannot locate image from container metadata to fix start tag",
_LOGGER.error,
)
try:
container_image, new_image = await asyncio.gather(
self.sys_docker.images.inspect(metadata_image),
self.sys_docker.images.inspect(f"{image}:{version!s}"),
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't get image metadata to fix start tag: {err}", _LOGGER.error
) from err
try:
# Find start tag
for tag in docker_container.image.tags:
for tag in container_image["RepoTags"]:
# See https://github.com/docker/docker-py/blob/df3f8e2abc5a03de482e37214dddef9e0cee1bb1/docker/models/images.py#L47
if tag == "<none>:<none>":
continue
start_image = tag.partition(":")[0]
start_tag = tag.partition(":")[2] or "latest"
@@ -147,12 +154,12 @@ class DockerSupervisor(DockerInterface):
continue
await asyncio.gather(
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=start_tag
new_image["Id"], start_image, tag=start_tag
),
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=version.string
new_image["Id"], start_image, tag=version.string
),
)
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err

@@ -0,0 +1,57 @@
"""Docker utilities."""
from __future__ import annotations
import re
# Docker image reference domain regex
# Based on Docker's reference implementation:
# vendor/github.com/distribution/reference/normalize.go
#
# A domain is detected if the part before the first / contains:
# - "localhost" (with optional port)
# - Contains "." (like registry.example.com or 127.0.0.1)
# - Contains ":" (like myregistry:5000)
# - IPv6 addresses in brackets (like [::1]:5000)
#
# Note: Docker also treats uppercase letters as registry indicators since
# namespaces must be lowercase, but this regex handles lowercase matching
# and the get_registry_from_image() function validates the registry rules.
IMAGE_REGISTRY_REGEX = re.compile(
r"^(?P<registry>"
r"localhost(?::[0-9]+)?|" # localhost with optional port
r"(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])" # domain component
r"(?:\.(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))*" # more components
r"(?::[0-9]+)?|" # optional port
r"\[[a-fA-F0-9:]+\](?::[0-9]+)?" # IPv6 with optional port
r")/" # must be followed by /
)
def get_registry_from_image(image_ref: str) -> str | None:
"""Extract registry from Docker image reference.
Returns the registry if the image reference contains one,
or None if the image uses Docker Hub (docker.io).
Based on Docker's reference implementation:
vendor/github.com/distribution/reference/normalize.go
Examples:
get_registry_from_image("nginx") -> None (docker.io)
get_registry_from_image("library/nginx") -> None (docker.io)
get_registry_from_image("myregistry.com/nginx") -> "myregistry.com"
get_registry_from_image("localhost/myimage") -> "localhost"
get_registry_from_image("localhost:5000/myimage") -> "localhost:5000"
get_registry_from_image("registry.io:5000/org/app:v1") -> "registry.io:5000"
get_registry_from_image("[::1]:5000/myimage") -> "[::1]:5000"
"""
match = IMAGE_REGISTRY_REGEX.match(image_ref)
if match:
registry = match.group("registry")
# Must contain '.' or ':' or be 'localhost' to be a real registry
# This prevents treating "myuser/myimage" as having registry "myuser"
if "." in registry or ":" in registry or registry == "localhost":
return registry
return None # No registry = Docker Hub (docker.io)

@@ -1,25 +1,27 @@
"""Core Exceptions."""
from collections.abc import Callable
from collections.abc import Callable, Mapping
from typing import Any
from .const import OBSERVER_PORT
MESSAGE_CHECK_SUPERVISOR_LOGS = (
"Check supervisor logs for details (check with '{logs_command}')"
)
EXTRA_FIELDS_LOGS_COMMAND = {"logs_command": "ha supervisor logs"}
class HassioError(Exception):
"""Root exception."""
error_key: str | None = None
message_template: str | None = None
extra_fields: dict[str, Any] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
extra_fields: dict[str, Any] | None = None,
self, message: str | None = None, logger: Callable[..., None] | None = None
) -> None:
"""Raise & log."""
self.extra_fields = extra_fields or {}
if not message and self.message_template:
message = (
self.message_template.format(**self.extra_fields)
@@ -41,6 +43,94 @@ class HassioNotSupportedError(HassioError):
"""Function is not supported."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
headers: Mapping[str, str] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> None:
"""Raise & log, optionally with job."""
super().__init__(message, logger)
self.headers = headers
self.job_id = job_id
class APIUnauthorized(APIError):
"""API unauthorized error."""
status = 401
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APITooManyRequests(APIError):
"""API too many requests error."""
status = 429
class APIInternalServerError(APIError):
"""API internal server error."""
status = 500
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
class APIUnknownSupervisorError(APIError):
"""Unknown error occurred within supervisor. Adds supervisor check logs rider to message template."""
status = 500
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
) -> None:
"""Initialize exception."""
self.message_template = (
f"{self.message_template}. {MESSAGE_CHECK_SUPERVISOR_LOGS}"
)
self.extra_fields = (self.extra_fields or {}) | EXTRA_FIELDS_LOGS_COMMAND
super().__init__(None, logger, job_id=job_id)
# JobManager
@@ -122,6 +212,13 @@ class SupervisorAppArmorError(SupervisorError):
"""Supervisor AppArmor error."""
class SupervisorUnknownError(SupervisorError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs interacting with Supervisor or its container."""
error_key = "supervisor_unknown_error"
message_template = "An unknown error occurred with Supervisor"
class SupervisorJobError(SupervisorError, JobException):
"""Raise on job errors."""
@@ -194,6 +291,18 @@ class ObserverJobError(ObserverError, PluginJobError):
"""Raise on job error with observer plugin."""
class ObserverPortConflict(ObserverError, APIError):
"""Raise if observer cannot start due to a port conflict."""
error_key = "observer_port_conflict"
message_template = "Cannot start {observer} because port {port} is already in use"
extra_fields = {"observer": "observer", "port": OBSERVER_PORT}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(None, logger)
# Multicast
@@ -250,6 +359,68 @@ class AddonConfigurationError(AddonsError):
"""Error with add-on configuration."""
class AddonConfigurationInvalidError(AddonConfigurationError, APIError):
"""Raise if invalid configuration provided for addon."""
error_key = "addon_configuration_invalid_error"
message_template = "Add-on {addon} has invalid options: {validation_error}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonBootConfigCannotChangeError(AddonsError, APIError):
"""Raise if user attempts to change addon boot config when it can't be changed."""
error_key = "addon_boot_config_cannot_change_error"
message_template = (
"Addon {addon} boot option is set to {boot_config} so it cannot be changed"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, boot_config: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "boot_config": boot_config}
super().__init__(None, logger)
class AddonNotRunningError(AddonsError, APIError):
"""Raise when an addon is not running."""
error_key = "addon_not_running_error"
message_template = "Add-on {addon} is not running"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonPortConflict(AddonsError, APIError):
"""Raise if addon cannot start due to a port conflict."""
error_key = "addon_port_conflict"
message_template = "Cannot start addon {name} because port {port} is already in use"
def __init__(
self, logger: Callable[..., None] | None = None, *, name: str, port: int
) -> None:
"""Raise & log."""
self.extra_fields = {"name": name, "port": port}
super().__init__(None, logger)
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
@@ -268,11 +439,8 @@ class AddonNotSupportedArchitectureError(AddonNotSupportedError):
architectures: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "architectures": ", ".join(architectures)},
)
self.extra_fields = {"slug": slug, "architectures": ", ".join(architectures)}
super().__init__(None, logger)
class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
@@ -289,11 +457,8 @@ class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
machine_types: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "machine_types": ", ".join(machine_types)},
)
self.extra_fields = {"slug": slug, "machine_types": ", ".join(machine_types)}
super().__init__(None, logger)
class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
@@ -310,11 +475,96 @@ class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
version: str,
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "version": version},
)
self.extra_fields = {"slug": slug, "version": version}
super().__init__(None, logger)
class AddonNotSupportedWriteStdinError(AddonNotSupportedError, APIError):
"""Addon does not support writing to stdin."""
error_key = "addon_not_supported_write_stdin_error"
message_template = "Add-on {addon} does not support writing to stdin"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonBuildDockerfileMissingError(AddonNotSupportedError, APIError):
"""Raise when addon build invalid because dockerfile is missing."""
error_key = "addon_build_dockerfile_missing_error"
message_template = (
"Cannot build addon '{addon}' because dockerfile is missing. A repair "
"using '{repair_command}' will fix this if the cause is data "
"corruption. Otherwise please report this to the addon developer."
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "repair_command": "ha supervisor repair"}
super().__init__(None, logger)
class AddonBuildArchitectureNotSupportedError(AddonNotSupportedError, APIError):
"""Raise when addon cannot be built on system because it doesn't support its architecture."""
error_key = "addon_build_architecture_not_supported_error"
message_template = (
"Cannot build addon '{addon}' because its supported architectures "
"({addon_arches}) do not match the system supported architectures ({system_arches})"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
addon_arch_list: list[str],
system_arch_list: list[str],
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"addon_arches": ", ".join(addon_arch_list),
"system_arches": ", ".join(system_arch_list),
}
super().__init__(None, logger)
class AddonUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for an addon."""
error_key = "addon_unknown_error"
message_template = "An unknown error occurred with addon {addon}"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(logger)
class AddonBuildFailedUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when the build failed for an addon due to an unknown error."""
error_key = "addon_build_failed_unknown_error"
message_template = (
"An unknown error occurred while trying to build the image for addon {addon}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(logger)
class AddonsJobError(AddonsError, JobException):
@@ -346,13 +596,52 @@ class AuthError(HassioError):
"""Auth errors."""
class AuthPasswordResetError(HassioError):
class AuthPasswordResetError(AuthError, APIError):
"""Auth error if password reset failed."""
error_key = "auth_password_reset_error"
message_template = "Username '{user}' does not exist. Check list of users using '{auth_list_command}'."
class AuthListUsersError(HassioError):
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
user: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"user": user, "auth_list_command": "ha auth list"}
super().__init__(None, logger)
class AuthListUsersError(AuthError, APIUnknownSupervisorError):
"""Auth error if listing users failed."""
error_key = "auth_list_users_error"
message_template = "Can't request listing users on Home Assistant"
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
error_key = "auth_invalid_non_string_value_error"
message_template = "Username and password must be strings"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
) -> None:
"""Initialize exception."""
super().__init__(None, logger, headers=headers)
class AuthHomeAssistantAPIValidationError(AuthError, APIUnknownSupervisorError):
"""Error encountered trying to validate auth details via Home Assistant API."""
error_key = "auth_home_assistant_api_validation_error"
message_template = "Unable to validate authentication details with Home Assistant"
# Host
@@ -385,60 +674,6 @@ class HostLogError(HostError):
"""Internal error with host log."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
error: HassioError | None = None,
) -> None:
"""Raise & log, optionally with job."""
# Allow these to be set from another error here since APIErrors essentially wrap others to add a status
self.error_key = error.error_key if error else None
self.message_template = error.message_template if error else None
super().__init__(
message, logger, extra_fields=error.extra_fields if error else None
)
self.job_id = job_id
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
# Service / Discovery
@@ -616,6 +851,10 @@ class DockerError(HassioError):
"""Docker API/Transport errors."""
class DockerBuildError(DockerError):
"""Docker error during build."""
class DockerAPIError(DockerError):
"""Docker API error."""
@@ -632,10 +871,6 @@ class DockerNotFound(DockerError):
"""Docker object don't Exists."""
class DockerLogOutOfOrder(DockerError):
"""Raise when log from docker action was out of order."""
class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space."""
@@ -647,7 +882,23 @@ class DockerNoSpaceOnDevice(DockerError):
super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerError):
class DockerContainerPortConflict(DockerError, APIError):
"""Raise if docker cannot start a container due to a port conflict."""
error_key = "docker_container_port_conflict"
message_template = (
"Cannot start container {name} because port {port} is already in use"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, name: str, port: int
) -> None:
"""Raise & log."""
self.extra_fields = {"name": name, "port": port}
super().__init__(None, logger)
class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise for docker hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
@@ -655,16 +906,13 @@ class DockerHubRateLimitExceeded(DockerError):
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see {dockerhub_rate_limit_url}"
)
extra_fields = {
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(
None,
logger=logger,
extra_fields={
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
},
)
super().__init__(None, logger=logger)
class DockerJobError(DockerError, JobException):
@@ -735,6 +983,32 @@ class StoreNotFound(StoreError):
"""Raise if slug is not known."""
class StoreAddonNotFoundError(StoreError, APINotFound):
"""Raise if a requested addon is not in the store."""
error_key = "store_addon_not_found_error"
message_template = "Addon {addon} does not exist in the store"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class StoreRepositoryLocalCannotReset(StoreError, APIError):
"""Raise if user requests a reset on the local addon repository."""
error_key = "store_repository_local_cannot_reset"
message_template = "Can't reset repository {local_repo} as it is not git based!"
extra_fields = {"local_repo": "local"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class StoreJobError(StoreError, JobException):
"""Raise on job error with git."""
@@ -743,6 +1017,18 @@ class StoreInvalidAddonRepo(StoreError):
"""Raise on invalid addon repo."""
class StoreRepositoryUnknownError(StoreError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for a store repository."""
error_key = "store_repository_unknown_error"
message_template = "An unknown error occurred with addon repository {repo}"
def __init__(self, logger: Callable[..., None] | None = None, *, repo: str) -> None:
"""Initialize exception."""
self.extra_fields = {"repo": repo}
super().__init__(logger)
# Backup
@@ -770,7 +1056,7 @@ class BackupJobError(BackupError, JobException):
"""Raise on Backup job error."""
class BackupFileNotFoundError(BackupError):
class BackupFileNotFoundError(BackupError, APINotFound):
"""Raise if the backup file hasn't been found."""
@@ -782,6 +1068,55 @@ class BackupFileExistError(BackupError):
"""Raise if the backup file already exists."""
class AddonBackupMetadataInvalidError(BackupError, APIError):
"""Raise if invalid metadata file provided for addon in backup."""
error_key = "addon_backup_metadata_invalid_error"
message_template = (
"Metadata file for add-on {addon} in backup is invalid: {validation_error}"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonPrePostBackupCommandReturnedError(BackupError, APIError):
"""Raise when addon's pre/post backup command returns an error."""
error_key = "addon_pre_post_backup_command_returned_error"
message_template = (
"Pre-/Post backup command for add-on {addon} returned error code: "
"{exit_code}. Please report this to the addon developer. Enable debug "
"logging to capture complete command output using {debug_logging_command}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, exit_code: int
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"exit_code": exit_code,
"debug_logging_command": "ha supervisor options --logging debug",
}
super().__init__(None, logger)
class BackupRestoreUnknownError(BackupError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs during backup or restore."""
error_key = "backup_restore_unknown_error"
message_template = "An unknown error occurred during backup/restore"
# Security


@@ -6,8 +6,6 @@ from pathlib import Path
import shutil
from typing import Any
from supervisor.resolution.const import UnhealthyReason
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
DBusError,
@@ -15,6 +13,7 @@ from ..exceptions import (
DBusObjectError,
HardwareNotFound,
)
from ..resolution.const import UnhealthyReason
from .const import UdevSubsystem
from .data import Device


@@ -135,6 +135,7 @@ class HomeAssistantAPI(CoreSysAttributes):
"""
url = f"{self.sys_homeassistant.api_url}/{path}"
headers = headers or {}
client_timeout = aiohttp.ClientTimeout(total=timeout)
# Passthrough content type
if content_type is not None:
@@ -144,10 +145,11 @@ class HomeAssistantAPI(CoreSysAttributes):
try:
await self.ensure_access_token()
headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}"
async with getattr(self.sys_websession, method)(
async with self.sys_websession.request(
method,
url,
data=data,
timeout=timeout,
timeout=client_timeout,
json=json,
headers=headers,
params=params,
@@ -175,7 +177,10 @@ class HomeAssistantAPI(CoreSysAttributes):
async def get_config(self) -> dict[str, Any]:
"""Return Home Assistant config."""
return await self._get_json("api/config")
config = await self._get_json("api/config")
if config is None or not isinstance(config, dict):
raise HomeAssistantAPIError("No config received from Home Assistant API")
return config
async def get_core_state(self) -> dict[str, Any]:
"""Return Home Assistant core state."""
@@ -219,3 +224,32 @@ class HomeAssistantAPI(CoreSysAttributes):
if state := await self.get_api_state():
return state.core_state == "RUNNING" or state.offline_db_migration
return False
async def check_frontend_available(self) -> bool:
"""Check if the frontend is accessible by fetching the root path.
Caller should make sure that Home Assistant Core is running before
calling this method.
Returns:
True if the frontend responds successfully, False otherwise.
"""
try:
async with self.make_request("get", "", timeout=30) as resp:
# Frontend should return HTML content
if resp.status == 200:
content_type = resp.headers.get(hdrs.CONTENT_TYPE, "")
if "text/html" in content_type:
_LOGGER.debug("Frontend is accessible and serving HTML")
return True
_LOGGER.warning(
"Frontend responded but with unexpected content type: %s",
content_type,
)
return False
_LOGGER.warning("Frontend returned status %s", resp.status)
return False
except HomeAssistantAPIError as err:
_LOGGER.debug("Cannot reach frontend: %s", err)
return False
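The acceptance rule in `check_frontend_available` — a 200 status *and* an HTML content type — can be expressed as a pure helper; this is an illustrative sketch of the branching above, not code from the patch:

```python
def frontend_looks_healthy(status: int, content_type: str) -> bool:
    """Return True only for a 200 response that serves HTML.

    A 200 with a non-HTML content type (e.g. JSON from an error page)
    is treated as unhealthy, mirroring the warning branch above.
    """
    return status == 200 and "text/html" in content_type
```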


@@ -13,7 +13,10 @@ from typing import Final
from awesomeversion import AwesomeVersion
from ..const import ATTR_HOMEASSISTANT, BusEvent
from supervisor.utils import remove_colors
from ..bus import EventListener
from ..const import ATTR_HOMEASSISTANT, BusEvent, CoreState
from ..coresys import CoreSys
from ..docker.const import ContainerState
from ..docker.homeassistant import DockerHomeAssistant
@@ -33,7 +36,6 @@ from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrot
from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType
from ..utils import convert_to_ascii
from ..utils.sentry import async_capture_exception
from .const import (
LANDINGPAGE,
@@ -48,7 +50,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
SECONDS_BETWEEN_API_CHECKS: Final[int] = 5
# Core Stage 1 and some wiggle room
STARTUP_API_RESPONSE_TIMEOUT: Final[timedelta] = timedelta(minutes=3)
STARTUP_API_RESPONSE_TIMEOUT: Final[timedelta] = timedelta(minutes=10)
# All stages plus event start timeout and some wiggle room
STARTUP_API_CHECK_RUNNING_TIMEOUT: Final[timedelta] = timedelta(minutes=15)
# While database migration is running, the timeout will be extended
@@ -74,6 +76,7 @@ class HomeAssistantCore(JobGroup):
super().__init__(coresys, JOB_GROUP_HOME_ASSISTANT_CORE)
self.instance: DockerHomeAssistant = DockerHomeAssistant(coresys)
self._error_state: bool = False
self._watchdog_listener: EventListener | None = None
@property
def error_state(self) -> bool:
@@ -82,9 +85,12 @@ class HomeAssistantCore(JobGroup):
async def load(self) -> None:
"""Prepare Home Assistant object."""
self.sys_bus.register_event(
self._watchdog_listener = self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self.watchdog_container
)
self.sys_bus.register_event(
BusEvent.SUPERVISOR_STATE_CHANGE, self._supervisor_state_changed
)
try:
# Evaluate Version if we lost this information
@@ -176,28 +182,53 @@ class HomeAssistantCore(JobGroup):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def install(self) -> None:
"""Install a landing page."""
"""Install Home Assistant Core."""
_LOGGER.info("Home Assistant setup")
while True:
# read homeassistant tag and install it
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
stop_progress_log = asyncio.Event()
if to_version := self.sys_homeassistant.latest_version:
async def _periodic_progress_log() -> None:
"""Log installation progress periodically for user visibility."""
while not stop_progress_log.is_set():
try:
await self.instance.update(
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = self.instance.version or to_version
break
except (DockerError, JobException):
pass
except Exception as err: # pylint: disable=broad-except
await async_capture_exception(err)
await asyncio.wait_for(stop_progress_log.wait(), timeout=15)
except TimeoutError:
if (job := self.instance.active_job) and job.progress:
_LOGGER.info(
"Downloading Home Assistant Core image, %d%%",
int(job.progress),
)
else:
_LOGGER.info("Home Assistant Core installation in progress")
_LOGGER.warning("Error on Home Assistant installation. Retrying in 30sec")
await asyncio.sleep(30)
progress_task = self.sys_create_task(_periodic_progress_log())
try:
while True:
# read homeassistant tag and install it
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
if to_version := self.sys_homeassistant.latest_version:
try:
await self.instance.update(
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = (
self.instance.version or to_version
)
break
except (DockerError, JobException):
pass
except Exception as err: # pylint: disable=broad-except
await async_capture_exception(err)
_LOGGER.warning(
"Error on Home Assistant installation. Retrying in 30sec"
)
await asyncio.sleep(30)
finally:
stop_progress_log.set()
await progress_task
_LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
@@ -303,12 +334,18 @@ class HomeAssistantCore(JobGroup):
except HomeAssistantError:
# The API stopped responding between the up checks and now
self._error_state = True
data = None
return
# Verify that the frontend is loaded
if data and "frontend" not in data.get("components", []):
if "frontend" not in data.get("components", []):
_LOGGER.error("API responds but frontend is not loaded")
self._error_state = True
# Check that the frontend is actually accessible
elif not await self.sys_homeassistant.api.check_frontend_available():
_LOGGER.error(
"Frontend component loaded but frontend is not accessible"
)
self._error_state = True
else:
return
@@ -321,12 +358,12 @@ class HomeAssistantCore(JobGroup):
# Make a copy of the current log file if it exists
logfile = self.sys_config.path_homeassistant / "home-assistant.log"
if logfile.exists():
if await self.sys_run_in_executor(logfile.exists):
rollback_log = (
self.sys_config.path_homeassistant / "home-assistant-rollback.log"
)
shutil.copy(logfile, rollback_log)
await self.sys_run_in_executor(shutil.copy, logfile, rollback_log)
_LOGGER.info(
"A backup of the logfile is stored in /config/home-assistant-rollback.log"
)
@@ -421,13 +458,6 @@ class HomeAssistantCore(JobGroup):
await self.instance.stop()
await self.start()
def logs(self) -> Awaitable[bytes]:
"""Get HomeAssistant docker logs.
Return a coroutine.
"""
return self.instance.logs()
async def stats(self) -> DockerStats:
"""Return stats of Home Assistant."""
try:
@@ -458,7 +488,15 @@ class HomeAssistantCore(JobGroup):
"""Run Home Assistant config check."""
try:
result = await self.instance.execute_command(
"python3 -m homeassistant -c /config --script check_config"
[
"python3",
"-m",
"homeassistant",
"-c",
"/config",
"--script",
"check_config",
]
)
except DockerError as err:
raise HomeAssistantError() from err
@@ -468,7 +506,7 @@ class HomeAssistantCore(JobGroup):
raise HomeAssistantError("Fatal error on config check!", _LOGGER.error)
# Convert output
log = convert_to_ascii(result.output)
log = remove_colors("\n".join(result.log))
_LOGGER.debug("Result config check: %s", log.strip())
# Parse output
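`remove_colors` replaces the older `convert_to_ascii` for cleaning config-check output; a helper like that typically strips ANSI escape sequences from terminal output. A hedged sketch of what such a function might do (the real `supervisor.utils.remove_colors` may differ):

```python
import re

# Matches ANSI CSI sequences such as "\x1b[31m" (set color) and "\x1b[0m" (reset).
_ANSI_CSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def remove_colors_sketch(text: str) -> str:
    """Strip ANSI escape sequences so log text is plain."""
    return _ANSI_CSI_RE.sub("", text)
```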
@@ -550,6 +588,16 @@ class HomeAssistantCore(JobGroup):
if event.state in [ContainerState.FAILED, ContainerState.UNHEALTHY]:
await self._restart_after_problem(event.state)
async def _supervisor_state_changed(self, state: CoreState) -> None:
"""Handle supervisor state changes to disable watchdog during shutdown."""
if state in (CoreState.SHUTDOWN, CoreState.STOPPING, CoreState.CLOSE):
if self._watchdog_listener:
_LOGGER.debug(
"Unregistering Home Assistant watchdog due to system shutdown"
)
self.sys_bus.remove_listener(self._watchdog_listener)
self._watchdog_listener = None
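Keeping the handle returned by `register_event` is what makes the later `remove_listener` call possible. A toy bus showing that register/remove round trip (the real `EventListener` and bus API differ; this only illustrates the lifecycle):

```python
from collections.abc import Callable


class BusSketch:
    """Toy event bus mirroring register_event/remove_listener semantics."""

    def __init__(self) -> None:
        self._listeners: list[Callable[[str], None]] = []

    def register_event(self, callback: Callable[[str], None]) -> Callable[[str], None]:
        self._listeners.append(callback)
        return callback  # the handle later passed to remove_listener

    def remove_listener(self, handle: Callable[[str], None]) -> None:
        self._listeners.remove(handle)

    def fire(self, event: str) -> None:
        for cb in list(self._listeners):
            cb(event)
```

Once the handle is removed on shutdown, subsequent container-state events no longer reach the watchdog, which is exactly the behavior the diff adds.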
@Job(
name="home_assistant_core_restart_after_problem",
throttle_period=WATCHDOG_THROTTLE_PERIOD,


@@ -1,7 +1,6 @@
"""Home Assistant control object."""
import asyncio
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
@@ -13,7 +12,7 @@ from typing import Any
from uuid import UUID
from awesomeversion import AwesomeVersion, AwesomeVersionException
from securetar import AddFileError, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -23,6 +22,7 @@ from ..const import (
ATTR_AUDIO_OUTPUT,
ATTR_BACKUPS_EXCLUDE_DATABASE,
ATTR_BOOT,
ATTR_DUPLICATE_LOG_FILE,
ATTR_IMAGE,
ATTR_MESSAGE,
ATTR_PORT,
@@ -34,11 +34,11 @@ from ..const import (
ATTR_WATCHDOG,
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
HomeAssistantUser,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
BackupInvalidError,
ConfigurationFileError,
HomeAssistantBackupError,
HomeAssistantError,
@@ -46,7 +46,6 @@ from ..exceptions import (
)
from ..hardware.const import PolicyGroup
from ..hardware.data import Device
from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils import remove_folder, remove_folder_with_excludes
@@ -74,6 +73,7 @@ HOMEASSISTANT_BACKUP_EXCLUDE = [
"backups/*.tar",
"tmp_backups/*.tar",
"tts/*",
".cache/*",
]
HOMEASSISTANT_BACKUP_EXCLUDE_DATABASE = [
"home-assistant_v?.db",
@@ -299,6 +299,16 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
"""Set whether backups should exclude database by default."""
self._data[ATTR_BACKUPS_EXCLUDE_DATABASE] = value
@property
def duplicate_log_file(self) -> bool:
"""Return True if Home Assistant should duplicate logs to file."""
return self._data[ATTR_DUPLICATE_LOG_FILE]
@duplicate_log_file.setter
def duplicate_log_file(self, value: bool) -> None:
"""Set whether Home Assistant should duplicate logs to file."""
self._data[ATTR_DUPLICATE_LOG_FILE] = value
async def load(self) -> None:
"""Prepare Home Assistant object."""
await asyncio.wait(
@@ -348,15 +358,23 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
):
return
configuration: (
dict[str, Any] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "get_config"}
)
try:
configuration: (
dict[str, Any] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "get_config"}
)
except HomeAssistantWSError as err:
_LOGGER.warning(
"Can't get Home Assistant Core configuration: %s. Not sending hardware events to Home Assistant Core.",
err,
)
return
if not configuration or "usb" not in configuration.get("components", []):
return
self.sys_homeassistant.websocket.send_message({ATTR_TYPE: "usb/scan"})
self.sys_homeassistant.websocket.send_command({ATTR_TYPE: "usb/scan"})
@Job(name="home_assistant_module_begin_backup")
async def begin_backup(self) -> None:
@@ -398,7 +416,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
@Job(name="home_assistant_module_backup")
async def backup(
self, tar_file: tarfile.TarFile, exclude_database: bool = False
self, tar_file: SecureTarFile, exclude_database: bool = False
) -> None:
"""Backup Home Assistant Core config/directory."""
excludes = HOMEASSISTANT_BACKUP_EXCLUDE.copy()
@@ -458,7 +476,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
@Job(name="home_assistant_module_restore")
async def restore(
self, tar_file: tarfile.TarFile, exclude_database: bool = False
self, tar_file: SecureTarFile, exclude_database: bool | None = False
) -> None:
"""Restore Home Assistant Core config/ directory."""
@@ -475,11 +493,16 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
# extract backup
try:
with tar_file as backup:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
backup.extractall(
path=temp_path,
members=secure_path(backup),
filter="fully_trusted",
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Invalid tarfile {tar_file}: {err}", _LOGGER.error
) from err
except tarfile.TarError as err:
raise HomeAssistantError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
@@ -490,14 +513,14 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
temp_data = temp_path
_LOGGER.info("Restore Home Assistant Core config folder")
if exclude_database:
if exclude_database is True:
remove_folder_with_excludes(
self.sys_config.path_homeassistant,
excludes=HOMEASSISTANT_BACKUP_EXCLUDE_DATABASE,
tmp_dir=self.sys_config.path_tmp,
)
else:
remove_folder(self.sys_config.path_homeassistant)
remove_folder(self.sys_config.path_homeassistant, content_only=True)
try:
shutil.copytree(
@@ -550,21 +573,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
if attr in data:
self._data[attr] = data[attr]
@Job(
name="home_assistant_get_users",
throttle_period=timedelta(minutes=5),
internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
)
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[IngressSessionDataUserDict] | None
) = await self.sys_homeassistant.websocket.async_send_command(
async def list_users(self) -> list[HomeAssistantUser]:
"""Fetch list of all users from Home Assistant Core via WebSocket.
Raises HomeAssistantWSError on WebSocket connection/communication failure.
"""
raw: list[dict[str, Any]] = await self.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return []
return [HomeAssistantUser.from_dict(data) for data in raw]


@@ -10,6 +10,7 @@ from ..const import (
ATTR_AUDIO_OUTPUT,
ATTR_BACKUPS_EXCLUDE_DATABASE,
ATTR_BOOT,
ATTR_DUPLICATE_LOG_FILE,
ATTR_IMAGE,
ATTR_PORT,
ATTR_REFRESH_TOKEN,
@@ -36,6 +37,7 @@ SCHEMA_HASS_CONFIG = vol.Schema(
vol.Optional(ATTR_AUDIO_OUTPUT, default=None): vol.Maybe(str),
vol.Optional(ATTR_AUDIO_INPUT, default=None): vol.Maybe(str),
vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE, default=False): vol.Boolean(),
vol.Optional(ATTR_DUPLICATE_LOG_FILE, default=False): vol.Boolean(),
vol.Optional(ATTR_OVERRIDE_IMAGE, default=False): vol.Boolean(),
},
extra=vol.REMOVE_EXTRA,


@@ -30,12 +30,6 @@ from ..exceptions import (
from ..utils.json import json_dumps
from .const import CLOSING_STATES, WSEvent, WSType
MIN_VERSION = {
WSType.SUPERVISOR_EVENT: "2021.2.4",
WSType.BACKUP_START: "2022.1.0",
WSType.BACKUP_END: "2022.1.0",
}
_LOGGER: logging.Logger = logging.getLogger(__name__)
T = TypeVar("T")
@@ -46,7 +40,6 @@ class WSClient:
def __init__(
self,
loop: asyncio.BaseEventLoop,
ha_version: AwesomeVersion,
client: aiohttp.ClientWebSocketResponse,
):
@@ -54,7 +47,6 @@ class WSClient:
self.ha_version = ha_version
self._client = client
self._message_id: int = 0
self._loop = loop
self._futures: dict[int, asyncio.Future[T]] = {} # type: ignore
@property
@@ -73,20 +65,11 @@ class WSClient:
if not self._client.closed:
await self._client.close()
async def async_send_message(self, message: dict[str, Any]) -> None:
"""Send a websocket message, don't wait for response."""
self._message_id += 1
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message, dumps=json_dumps)
except ConnectionError as err:
raise HomeAssistantWSConnectionError(str(err)) from err
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a websocket message, and return the response."""
self._message_id += 1
message["id"] = self._message_id
self._futures[message["id"]] = self._loop.create_future()
self._futures[message["id"]] = asyncio.get_running_loop().create_future()
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message, dumps=json_dumps)
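The core of `async_send_command` is request/response correlation: every outgoing command gets a fresh integer id, a Future stored under that id is awaited, and the listener resolves it when a response with the matching id arrives. A minimal sketch of that mechanism (class and method names are illustrative):

```python
import asyncio
from typing import Any


class CorrelatorSketch:
    """Sketch of the WSClient message-id/Future correlation."""

    def __init__(self) -> None:
        self._message_id = 0
        self._futures: dict[int, asyncio.Future[Any]] = {}

    def send(self, message: dict[str, Any]) -> asyncio.Future[Any]:
        """Assign an id and register a Future to resolve on reply."""
        self._message_id += 1
        message["id"] = self._message_id
        fut = asyncio.get_running_loop().create_future()
        self._futures[message["id"]] = fut
        return fut

    def on_response(self, response: dict[str, Any]) -> None:
        """Listener side: resolve the Future matching the response id."""
        if (fut := self._futures.pop(response["id"], None)) is not None:
            fut.set_result(response.get("result"))


async def demo() -> Any:
    ws = CorrelatorSketch()
    fut = ws.send({"type": "config/auth/list"})
    ws.on_response({"id": 1, "result": ["owner"]})  # simulated listener
    return await fut
```

Using `asyncio.get_running_loop()` inside the coroutine, as the diff does, removes the need to thread a loop object through the constructor — which is exactly why the `loop` parameter disappears from `WSClient`.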
@@ -157,13 +140,13 @@ class WSClient:
@classmethod
async def connect_with_auth(
cls, session: aiohttp.ClientSession, loop, url: str, token: str
cls, session: aiohttp.ClientSession, url: str, token: str
) -> WSClient:
"""Create an authenticated websocket client."""
try:
client = await session.ws_connect(url, ssl=False)
except aiohttp.client_exceptions.ClientConnectorError:
raise HomeAssistantWSError("Can't connect") from None
raise HomeAssistantWSConnectionError("Can't connect") from None
hello_message = await client.receive_json()
@@ -176,7 +159,7 @@ class WSClient:
if auth_ok_message[ATTR_TYPE] != "auth_ok":
raise HomeAssistantAPIError("AUTH NOT OK")
return cls(loop, AwesomeVersion(hello_message["ha_version"]), client)
return cls(AwesomeVersion(hello_message["ha_version"]), client)
class HomeAssistantWebSocket(CoreSysAttributes):
@@ -193,7 +176,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
"""Process queue once supervisor is running."""
if reference == CoreState.RUNNING:
for msg in self._queue:
await self.async_send_message(msg)
await self._async_send_command(msg)
self._queue.clear()
@@ -207,7 +190,6 @@ class HomeAssistantWebSocket(CoreSysAttributes):
await self.sys_homeassistant.api.ensure_access_token()
client = await WSClient.connect_with_auth(
self.sys_websession,
self.sys_loop,
self.sys_homeassistant.ws_url,
cast(str, self.sys_homeassistant.api.access_token),
)
@@ -215,38 +197,27 @@ class HomeAssistantWebSocket(CoreSysAttributes):
self.sys_create_task(client.start_listener())
return client
async def _can_send(self, message: dict[str, Any]) -> bool:
"""Determine if we can use WebSocket messages."""
async def _ensure_connected(self) -> None:
"""Ensure WebSocket connection is ready.
Raises HomeAssistantWSConnectionError if unable to connect.
Raises HomeAssistantAuthError if authentication with Core fails.
"""
if self.sys_core.state in CLOSING_STATES:
return False
raise HomeAssistantWSConnectionError(
"WebSocket not available, system is shutting down"
)
connected = self._client and self._client.connected
# If we are already connected, we can avoid the check_api_state call
# since it makes a new socket connection and we already have one.
if not connected and not await self.sys_homeassistant.api.check_api_state():
# No core access, don't try.
return False
if not self._client:
self._client = await self._get_ws_client()
if not self._client.connected:
self._client = await self._get_ws_client()
message_type = message.get("type")
if (
message_type is not None
and message_type in MIN_VERSION
and self._client.ha_version < MIN_VERSION[message_type]
):
_LOGGER.info(
"WebSocket command %s is not supported until core-%s. Ignoring WebSocket message.",
message_type,
MIN_VERSION[message_type],
raise HomeAssistantWSConnectionError(
"Can't connect to Home Assistant Core WebSocket, the API is not reachable"
)
return False
return True
if not self._client or not self._client.connected:
self._client = await self._get_ws_client()
async def load(self) -> None:
"""Set up queue processor after startup completes."""
@@ -254,53 +225,61 @@ class HomeAssistantWebSocket(CoreSysAttributes):
BusEvent.SUPERVISOR_STATE_CHANGE, self._process_queue
)
async def async_send_message(self, message: dict[str, Any]) -> None:
"""Send a message with the WS client."""
# Only commands allowed during startup as those tell Home Assistant to do something.
# Messages may cause clients to make follow-up API calls so those wait.
async def _async_send_command(self, message: dict[str, Any]) -> None:
"""Send a fire-and-forget command via WebSocket.
Queues messages during startup. Silently handles connection errors.
"""
if self.sys_core.state in STARTING_STATES:
self._queue.append(message)
_LOGGER.debug("Queuing message until startup has completed: %s", message)
return
if not await self._can_send(message):
try:
await self._ensure_connected()
except HomeAssistantWSError as err:
_LOGGER.debug("Can't send WebSocket command: %s", err)
return
# _ensure_connected guarantees self._client is set
assert self._client
try:
if self._client:
await self._client.async_send_command(message)
except HomeAssistantWSConnectionError:
await self._client.async_send_command(message)
except HomeAssistantWSConnectionError as err:
_LOGGER.debug("Fire-and-forget WebSocket command failed: %s", err)
if self._client:
await self._client.close()
self._client = None
async def async_send_command(self, message: dict[str, Any]) -> T | None:
"""Send a command with the WS client and wait for the response."""
if not await self._can_send(message):
return None
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a command and return the response.
Raises HomeAssistantWSError on WebSocket connection or communication failure.
"""
await self._ensure_connected()
# _ensure_connected guarantees self._client is set
assert self._client
try:
if self._client:
return await self._client.async_send_command(message)
return await self._client.async_send_command(message)
except HomeAssistantWSConnectionError:
if self._client:
await self._client.close()
self._client = None
raise
return None
def send_message(self, message: dict[str, Any]) -> None:
"""Send a supervisor/event message."""
def send_command(self, message: dict[str, Any]) -> None:
"""Send a fire-and-forget command via WebSocket."""
if self.sys_core.state in CLOSING_STATES:
return
self.sys_create_task(self.async_send_message(message))
self.sys_create_task(self._async_send_command(message))
async def async_supervisor_event_custom(
self, event: WSEvent, extra_data: dict[str, Any] | None = None
) -> None:
"""Send a supervisor/event message to Home Assistant with custom data."""
try:
await self.async_send_message(
await self._async_send_command(
{
ATTR_TYPE: WSType.SUPERVISOR_EVENT,
ATTR_DATA: {


@@ -6,8 +6,8 @@ import logging
import socket
from ..dbus.const import (
ConnectionState,
ConnectionStateFlags,
ConnectionStateType,
DeviceType,
InterfaceAddrGenMode as NMInterfaceAddrGenMode,
InterfaceIp6Privacy as NMInterfaceIp6Privacy,
@@ -64,6 +64,7 @@ class IpSetting:
method: InterfaceMethod
address: list[IPv4Interface | IPv6Interface]
gateway: IPv4Address | IPv6Address | None
route_metric: int | None
nameservers: list[IPv4Address | IPv6Address]
@@ -166,6 +167,7 @@ class Interface:
gateway=IPv4Address(inet.settings.ipv4.gateway)
if inet.settings.ipv4.gateway
else None,
route_metric=inet.settings.ipv4.route_metric,
nameservers=[
IPv4Address(socket.ntohl(ip)) for ip in inet.settings.ipv4.dns
]
@@ -173,7 +175,7 @@ class Interface:
else [],
)
else:
ipv4_setting = IpSetting(InterfaceMethod.DISABLED, [], None, [])
ipv4_setting = IpSetting(InterfaceMethod.DISABLED, [], None, None, [])
if inet.settings and inet.settings.ipv6:
ipv6_setting = Ip6Setting(
@@ -193,12 +195,13 @@ class Interface:
gateway=IPv6Address(inet.settings.ipv6.gateway)
if inet.settings.ipv6.gateway
else None,
route_metric=inet.settings.ipv6.route_metric,
nameservers=[IPv6Address(bytes(ip)) for ip in inet.settings.ipv6.dns]
if inet.settings.ipv6.dns
else [],
)
else:
ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, [])
ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, None, [])
ipv4_ready = (
inet.connection is not None
@@ -267,25 +270,47 @@ class Interface:
return InterfaceMethod.DISABLED
@staticmethod
def _map_nm_addr_gen_mode(addr_gen_mode: int) -> InterfaceAddrGenMode:
"""Map IPv6 interface addr_gen_mode."""
def _map_nm_addr_gen_mode(addr_gen_mode: int | None) -> InterfaceAddrGenMode:
"""Map IPv6 interface addr_gen_mode.
NetworkManager omits the addr_gen_mode property when set to DEFAULT, so we
treat None as DEFAULT here.
"""
mapping = {
NMInterfaceAddrGenMode.EUI64.value: InterfaceAddrGenMode.EUI64,
NMInterfaceAddrGenMode.STABLE_PRIVACY.value: InterfaceAddrGenMode.STABLE_PRIVACY,
NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value: InterfaceAddrGenMode.DEFAULT_OR_EUI64,
NMInterfaceAddrGenMode.DEFAULT.value: InterfaceAddrGenMode.DEFAULT,
None: InterfaceAddrGenMode.DEFAULT,
}
if addr_gen_mode not in mapping:
_LOGGER.warning(
"Unknown addr_gen_mode value from NetworkManager: %s", addr_gen_mode
)
return mapping.get(addr_gen_mode, InterfaceAddrGenMode.DEFAULT)
@staticmethod
def _map_nm_ip6_privacy(ip6_privacy: int) -> InterfaceIp6Privacy:
"""Map IPv6 interface ip6_privacy."""
def _map_nm_ip6_privacy(ip6_privacy: int | None) -> InterfaceIp6Privacy:
"""Map IPv6 interface ip6_privacy.
NetworkManager omits the ip6_privacy property when set to DEFAULT, so we
treat None as DEFAULT here.
"""
mapping = {
NMInterfaceIp6Privacy.DISABLED.value: InterfaceIp6Privacy.DISABLED,
NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value: InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC,
NMInterfaceIp6Privacy.ENABLED.value: InterfaceIp6Privacy.ENABLED,
NMInterfaceIp6Privacy.DEFAULT.value: InterfaceIp6Privacy.DEFAULT,
None: InterfaceIp6Privacy.DEFAULT,
}
if ip6_privacy not in mapping:
_LOGGER.warning(
"Unknown ip6_privacy value from NetworkManager: %s", ip6_privacy
)
return mapping.get(ip6_privacy, InterfaceIp6Privacy.DEFAULT)
@staticmethod
@@ -295,8 +320,8 @@ class Interface:
return False
return connection.state in (
ConnectionStateType.ACTIVATED,
ConnectionStateType.ACTIVATING,
ConnectionState.ACTIVATED,
ConnectionState.ACTIVATING,
)
@staticmethod


@@ -5,8 +5,6 @@ from contextlib import suppress
import logging
from typing import Any
from supervisor.utils.sentry import async_capture_exception
from ..const import ATTR_HOST_INTERNET
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
@@ -16,7 +14,7 @@ from ..dbus.const import (
DBUS_IFACE_DNS,
DBUS_IFACE_NM,
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
ConnectionStateType,
ConnectionState,
ConnectivityState,
DeviceType,
WirelessMethodType,
@@ -34,6 +32,7 @@ from ..exceptions import (
from ..jobs.const import JobCondition
from ..jobs.decorator import Job
from ..resolution.checks.network_interface_ipv4 import CheckNetworkInterfaceIPV4
from ..utils.sentry import async_capture_exception
from .configuration import AccessPoint, Interface
from .const import InterfaceMethod, WifiMode
@@ -338,16 +337,16 @@ class NetworkManager(CoreSysAttributes):
# the state change before this point. Get the state currently to
# avoid any race condition.
await con.update()
state: ConnectionStateType = con.state
state: ConnectionState = con.state
while state != ConnectionStateType.ACTIVATED:
if state == ConnectionStateType.DEACTIVATED:
while state != ConnectionState.ACTIVATED:
if state == ConnectionState.DEACTIVATED:
raise HostNetworkError(
"Activating connection failed, check connection settings."
)
msg = await signal.wait_for_signal()
state = msg[0]
state = ConnectionState(msg[0])
_LOGGER.debug("Active connection state changed to %s", state)
# update_only means not done by user so don't force a check afterwards


@@ -9,7 +9,7 @@ from contextvars import Context, ContextVar, Token
from dataclasses import dataclass
from datetime import datetime
import logging
from typing import Any, Self
from typing import Any, Self, cast
from uuid import uuid4
from attr.validators import gt, lt
@@ -102,13 +102,17 @@ class SupervisorJobError:
"Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
)
stage: str | None = None
error_key: str | None = None
extra_fields: dict[str, Any] | None = None
def as_dict(self) -> dict[str, str | None]:
def as_dict(self) -> dict[str, Any]:
"""Return dictionary representation."""
return {
"type": self.type_.__name__,
"message": self.message,
"stage": self.stage,
"error_key": self.error_key,
"extra_fields": self.extra_fields,
}
@@ -158,7 +162,9 @@ class SupervisorJob:
def capture_error(self, err: HassioError | None = None) -> None:
"""Capture an error or record that an unknown error has occurred."""
if err:
new_error = SupervisorJobError(type(err), str(err), self.stage)
new_error = SupervisorJobError(
type(err), str(err), self.stage, err.error_key, err.extra_fields
)
else:
new_error = SupervisorJobError(stage=self.stage)
self.errors += [new_error]
@@ -196,7 +202,7 @@ class SupervisorJob:
         self,
         progress: float | None = None,
         stage: str | None = None,
-        extra: dict[str, Any] | None = DEFAULT,  # type: ignore
+        extra: dict[str, Any] | None | type[DEFAULT] = DEFAULT,
         done: bool | None = None,
     ) -> None:
         """Update multiple fields with one on change event."""
@@ -207,8 +213,8 @@ class SupervisorJob:
             self.progress = progress
         if stage is not None:
             self.stage = stage
-        if extra != DEFAULT:
-            self.extra = extra
+        if extra is not DEFAULT:
+            self.extra = cast(dict[str, Any] | None, extra)

         # Done has special event. use that to trigger on change if included
         # If not then just use any other field to trigger
@@ -306,19 +312,21 @@ class JobManager(FileConfiguration, CoreSysAttributes):
         reference: str | None = None,
         initial_stage: str | None = None,
         internal: bool = False,
-        parent_id: str | None = DEFAULT,  # type: ignore
+        parent_id: str | None | type[DEFAULT] = DEFAULT,
         child_job_syncs: list[ChildJobSyncFilter] | None = None,
     ) -> SupervisorJob:
         """Create a new job."""
-        job = SupervisorJob(
-            name,
-            reference=reference,
-            stage=initial_stage,
-            on_change=self._on_job_change,
-            internal=internal,
-            child_job_syncs=child_job_syncs,
-            **({} if parent_id == DEFAULT else {"parent_id": parent_id}),  # type: ignore
-        )
+        kwargs: dict[str, Any] = {
+            "reference": reference,
+            "stage": initial_stage,
+            "on_change": self._on_job_change,
+            "internal": internal,
+            "child_job_syncs": child_job_syncs,
+        }
+        if parent_id is not DEFAULT:
+            kwargs["parent_id"] = parent_id
+        job = SupervisorJob(name, **kwargs)

         # Shouldn't happen but inability to find a parent for progress reporting
         # shouldn't raise and break the active job


@@ -72,6 +72,9 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
             "docker": coresys.docker.info.version,
             "supervisor": coresys.supervisor.version,
         },
+        "docker": {
+            "storage_driver": coresys.docker.info.storage,
+        },
         "host": {
             "machine": coresys.machine,
         },
@@ -93,7 +96,7 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
             "installed_addons": installed_addons,
         },
         "host": {
-            "arch": coresys.arch.default,
+            "arch": str(coresys.arch.default),
             "board": coresys.os.board,
             "deployment": coresys.host.info.deployment,
             "disk_free_space": coresys.hardware.disk.get_disk_free_space(
@@ -111,6 +114,9 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
             "docker": coresys.docker.info.version,
             "supervisor": coresys.supervisor.version,
         },
+        "docker": {
+            "storage_driver": coresys.docker.info.storage,
+        },
         "resolution": {
             "issues": [attr.asdict(issue) for issue in coresys.resolution.issues],
             "suggestions": [
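Reviewer note: the `str(coresys.arch.default)` coercion matters because plain `Enum` members are not JSON-serializable, so stringifying at the event boundary keeps the Sentry payload plain data. A minimal sketch with a hypothetical `CpuArch` enum (not the supervisor's actual arch type):

```python
import json
from enum import Enum

class CpuArch(Enum):
    """Hypothetical stand-in for the supervisor's architecture type."""
    AMD64 = "amd64"
    AARCH64 = "aarch64"

    def __str__(self) -> str:
        return str(self.value)

arch = CpuArch.AARCH64
# json.dumps({"arch": arch}) would raise TypeError (not JSON serializable)
payload = json.dumps({"host": {"arch": str(arch)}})
print(payload)  # {"host": {"arch": "aarch64"}}
```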


@@ -1,5 +1,6 @@
 """A collection of tasks."""

+from contextlib import suppress
 from datetime import datetime, timedelta
 import logging
 from typing import cast
@@ -12,7 +13,9 @@ from ..exceptions import (
     AddonsError,
     BackupFileNotFoundError,
     HomeAssistantError,
+    HomeAssistantWSError,
     ObserverError,
+    SupervisorUpdateError,
 )
 from ..homeassistant.const import LANDINGPAGE, WSType
 from ..jobs.const import JobConcurrency
@@ -150,7 +153,13 @@ class Tasks(CoreSysAttributes):
             "Sending update add-on WebSocket command to Home Assistant Core: %s",
             message,
         )
-        await self.sys_homeassistant.websocket.async_send_command(message)
+        try:
+            await self.sys_homeassistant.websocket.async_send_command(message)
+        except HomeAssistantWSError as err:
+            _LOGGER.warning(
+                "Could not send add-on update command to Home Assistant Core: %s",
+                err,
+            )

     @Job(
         name="tasks_update_supervisor",
@@ -174,7 +183,11 @@ class Tasks(CoreSysAttributes):
             "Found new Supervisor version %s, updating",
             self.sys_supervisor.latest_version,
         )
-        await self.sys_supervisor.update()
+        # Errors are logged by the exceptions, we can't really do something
+        # if an update fails here.
+        with suppress(SupervisorUpdateError):
+            await self.sys_supervisor.update()

     async def _watchdog_homeassistant_api(self):
         """Create scheduler task for monitoring running state of API.
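Reviewer note: `with suppress(SupervisorUpdateError)` is the compact equivalent of `try/except SupervisorUpdateError: pass`; only that exception class is swallowed, anything else still propagates and the periodic task keeps running. A standalone sketch with a hypothetical error type:

```python
from contextlib import suppress

class SupervisorUpdateError(Exception):
    """Hypothetical stand-in for the supervisor's update error."""

def update() -> None:
    raise SupervisorUpdateError("update failed")

# Errors are assumed to be logged where they are raised; nothing more to do here.
with suppress(SupervisorUpdateError):
    update()
print("task loop continues")  # unrelated exception types would still propagate
```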


@@ -135,7 +135,7 @@ class Mount(CoreSysAttributes, ABC):
     @property
     def state(self) -> UnitActiveState | None:
         """Get state of mount."""
-        return self._state
+        return UnitActiveState(self._state) if self._state is not None else None

     @cached_property
     def local_where(self) -> Path:

@@ -76,13 +76,6 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         """Return True if a task is in progress."""
         return self.instance.in_progress

-    def logs(self) -> Awaitable[bytes]:
-        """Get docker plugin logs.
-
-        Return Coroutine.
-        """
-        return self.instance.logs()
-
     def is_running(self) -> Awaitable[bool]:
         """Return True if Docker container is running.

@@ -368,7 +368,7 @@ class PluginDns(PluginBase):
         log = await self.instance.logs()
         # Check the log for loop plugin output
-        if b"plugin/loop: Loop" in log:
+        if any("plugin/loop: Loop" in line for line in log):
             _LOGGER.error("Detected a DNS loop in local Network!")
             self._loop = True
             self.sys_resolution.create_issue(

@@ -15,9 +15,11 @@ from ..docker.const import ContainerState
 from ..docker.observer import DockerObserver
 from ..docker.stats import DockerStats
 from ..exceptions import (
+    DockerContainerPortConflict,
     DockerError,
     ObserverError,
     ObserverJobError,
+    ObserverPortConflict,
     ObserverUpdateError,
     PluginError,
 )
@@ -87,6 +89,8 @@ class PluginObserver(PluginBase):
         _LOGGER.info("Starting observer plugin")
         try:
             await self.instance.run()
+        except DockerContainerPortConflict as err:
+            raise ObserverPortConflict(_LOGGER.error) from err
         except DockerError as err:
             _LOGGER.error("Can't start observer plugin")
             raise ObserverError() from err


@@ -22,7 +22,8 @@ async def check_server(
     """Check a DNS server and report issues."""
     ip_addr = server[6:] if server.startswith("dns://") else server
     async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
-        await resolver.query(DNS_CHECK_HOST, qtype)
+        # following call should be changed to resolver.query() in aiodns 5.x
+        await resolver.query_dns(DNS_CHECK_HOST, qtype)

 def setup(coresys: CoreSys) -> CheckBase:


@@ -2,7 +2,7 @@
 from ...const import CoreState
 from ...coresys import CoreSys
-from ...dbus.const import ConnectionStateFlags, ConnectionStateType
+from ...dbus.const import ConnectionState, ConnectionStateFlags
 from ...dbus.network.interface import NetworkInterface
 from ...exceptions import NetworkInterfaceNotFound
 from ..const import ContextType, IssueType
@@ -47,7 +47,7 @@ class CheckNetworkInterfaceIPV4(CheckBase):
         return not (
             interface.connection.state
-            in [ConnectionStateType.ACTIVATED, ConnectionStateType.ACTIVATING]
+            in [ConnectionState.ACTIVATED, ConnectionState.ACTIVATING]
             and ConnectionStateFlags.IP4_READY in interface.connection.state_flags
         )


@@ -1,14 +1,13 @@
"""Evaluation class for container."""
import asyncio
import logging
from docker.errors import DockerException
from requests import RequestException
from supervisor.docker.const import ADDON_BUILDER_IMAGE
import aiodocker
from ...const import CoreState
from ...coresys import CoreSys
from ...docker.const import ADDON_BUILDER_IMAGE
from ..const import (
ContextType,
IssueType,
@@ -74,8 +73,9 @@ class EvaluateContainer(EvaluateBase):
self._images.clear()
try:
containers = await self.sys_run_in_executor(self.sys_docker.containers.list)
except (DockerException, RequestException) as err:
containers = await self.sys_docker.containers.list()
containers_metadata = await asyncio.gather(*[c.show() for c in containers])
except aiodocker.DockerError as err:
_LOGGER.error("Corrupt docker overlayfs detect: %s", err)
self.sys_resolution.create_issue(
IssueType.CORRUPT_DOCKER,
@@ -86,8 +86,8 @@ class EvaluateContainer(EvaluateBase):
images = {
image
for container in containers
if (config := container.attrs.get("Config")) is not None
for container in containers_metadata
if (config := container.get("Config")) is not None
and (image := config.get("Image")) is not None
}
for image in images:
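Reviewer note: the new code lists containers once, then fans out the per-container metadata fetches concurrently with `asyncio.gather`, keeping result order aligned with the input list. A self-contained sketch of the fan-out plus the walrus-filtered set comprehension, using mock coroutines rather than the real aiodocker API:

```python
import asyncio
from typing import Any

async def show(name: str) -> dict[str, Any]:
    """Stand-in for a per-container inspect call (not the real aiodocker API)."""
    await asyncio.sleep(0)
    return {"Config": {"Image": f"{name}:latest"}}

async def collect_images() -> set[str]:
    containers = ["dns", "audio", "observer"]
    # Fan out: all inspect calls run concurrently, results keep input order
    metadata = await asyncio.gather(*[show(name) for name in containers])
    return {
        image
        for entry in metadata
        if (config := entry.get("Config")) is not None
        and (image := config.get("Image")) is not None
    }

images = asyncio.run(collect_images())
print(sorted(images))  # ['audio:latest', 'dns:latest', 'observer:latest']
```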


@@ -1,10 +1,9 @@
@@ -1,10 +1,9 @@
 """Evaluation class for restart policy."""

-from supervisor.docker.const import RestartPolicy
-from supervisor.docker.interface import DockerInterface
-
 from ...const import CoreState
 from ...coresys import CoreSys
+from ...docker.const import RestartPolicy
+from ...docker.interface import DockerInterface
 from ..const import UnsupportedReason
 from .base import EvaluateBase


@@ -2,10 +2,9 @@
@@ -2,10 +2,9 @@
 import logging

-from supervisor.exceptions import BackupFileNotFoundError
-
 from ...backups.const import BackupType
 from ...coresys import CoreSys
+from ...exceptions import BackupFileNotFoundError
 from ..const import MINIMUM_FULL_BACKUPS, ContextType, IssueType, SuggestionType
 from .base import FixupBase


@@ -1,11 +1,11 @@
@@ -1,11 +1,11 @@
 {
     "local": {
-        "name": "Local add-ons",
+        "name": "Local apps",
         "url": "https://home-assistant.io/hassio",
         "maintainer": "you"
     },
     "core": {
-        "name": "Official add-ons",
+        "name": "Official apps",
         "url": "https://home-assistant.io/addons",
         "maintainer": "Home Assistant"
     }

Some files were not shown because too many files have changed in this diff.