Compare commits


140 Commits

Author SHA1 Message Date
Stefan Agner
1955d83325 Don't log missing plugin containers as error when creating network
When the Supervisor network doesn't exist yet and gets created on fresh
startup, the plugin containers (observer, cli, dns, audio) legitimately
don't exist either. The subsequent attach_container_by_name calls are
wrapped in suppress(DockerError), but the DockerError constructor was
passed _LOGGER.error, producing noisy ERROR lines before the exception
was suppressed.

Convert 404 responses from docker.containers.get() into DockerNotFound
(a DockerError subclass) raised without logging, while keeping ERROR-
level logging for any other Docker API failures.
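The described pattern can be sketched as follows. This is a minimal illustration, not Supervisor code: the `DockerError`/`DockerNotFound` names come from the commit message, while the logger-in-constructor convention and the `get_container` helper are assumptions for demonstration.

```python
import logging
from contextlib import suppress

_LOGGER = logging.getLogger(__name__)

class DockerError(Exception):
    """Base Docker error; optionally logs its message when constructed."""
    def __init__(self, message: str = "", logger=None):
        if logger is not None and message:
            logger(message)
        super().__init__(message)

class DockerNotFound(DockerError):
    """HTTP 404 from the Docker API - raised without logging."""

def get_container(containers: dict, name: str):
    """Return a container, mapping a missing one to a silent DockerNotFound."""
    if name not in containers:
        # No logger argument: no noisy ERROR line before suppression.
        raise DockerNotFound()
    return containers[name]

# Fresh startup: plugin containers don't exist yet, so suppress quietly.
with suppress(DockerError):
    get_container({}, "hassio_dns")
```

Because `DockerNotFound` subclasses `DockerError`, existing `suppress(DockerError)` call sites keep working unchanged.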

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 10:08:14 +02:00
Copilot
91625db2b1 Stop Supervisor from overriding NetworkManager Wi-Fi powersave policy (#6753)
* Initial plan

* Fix wireless profile generation to not force powersave ignore

Agent-Logs-Url: https://github.com/home-assistant/supervisor/sessions/6e2e9288-6d9b-403d-9d71-8d6ea44eb91b

Co-authored-by: agners <34061+agners@users.noreply.github.com>

* Set wireless powersave to default to reset existing profiles

Agent-Logs-Url: https://github.com/home-assistant/supervisor/sessions/4a4a2c09-0cdd-4417-9776-688837b51dcc

Co-authored-by: agners <34061+agners@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: agners <34061+agners@users.noreply.github.com>
2026-04-23 10:06:49 +02:00
Stefan Agner
814bcc447d Run Update version job only if version is published (#6758) 2026-04-22 10:38:34 +02:00
dependabot[bot]
9203c09f53 Bump mypy from 1.20.1 to 1.20.2 (#6756)
Bumps [mypy](https://github.com/python/mypy) from 1.20.1 to 1.20.2.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.20.1...v1.20.2)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.20.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 09:11:11 +02:00
dependabot[bot]
b791e97d0a Bump pre-commit from 4.5.1 to 4.6.0 (#6755)
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 4.5.1 to 4.6.0.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v4.5.1...v4.6.0)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-version: 4.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 08:56:39 +02:00
dependabot[bot]
a6792f78d4 Bump gitpython from 3.1.46 to 3.1.47 (#6754)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.46 to 3.1.47.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.46...3.1.47)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-version: 3.1.47
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-22 08:44:54 +02:00
Stefan Agner
97bc19d4b3 Detect container registry rate limits uniformly (#6732)
* Detect container registry rate limits uniformly

Container registry rate limits reach Supervisor in three distinct shapes:

  1. HTTP 429 from the daemon - recognised today, but the exception and
     resolution issue are hardcoded to Docker Hub. Since Core/Supervisor/
     plugin images all live on ghcr.io now, virtually every 429 we see in
     the field is actually a GHCR throttle that we mislabel. The biggest
     Sentry issue (SUPERVISOR-16BK) has >115k events / >93k users, all
     pulling a ghcr.io image, yet each user is told to "log into
     Docker Hub".
  2. HTTP 500 with 'toomanyrequests' in the body - not recognised. Docker
     daemons before 28.3.0 wrap upstream 429s as 500 (fixed upstream by
     moby/moby 23fa0ae74a, "Cleanup http status error checks"). The large
     fleet on older daemons still produces this shape.
  3. JSON error event during a streaming pull - not recognised. Once the
     daemon starts writing the 200 OK response body the status is locked
     in, so rate limits that land during layer download arrive as plain
     text in the pull stream. Happens on all recent daemon versions -
     SUPERVISOR-13FQ (>16k events) and SUPERVISOR-13E0 (>8k events) are
     two large examples.

Cases 2 and 3 propagate as plain DockerError, bypass the 429 detection in
install() entirely, never produce a DOCKER_RATELIMIT resolution issue, and
generate large amounts of Sentry noise. Case 1 is detected but routes
every GHCR 429 through Docker-Hub-specific messaging and suggestions.

Changes:

- Add DockerRegistryRateLimitExceeded as the common base class and
  GithubContainerRegistryRateLimitExceeded alongside the existing
  DockerHubRateLimitExceeded. All extend APITooManyRequests so callers
  and retry logic can key off a single type.
- Add GITHUB_RATELIMIT IssueType so GHCR failures don't show the
  "log in to Docker Hub" suggestion that DOCKER_RATELIMIT carries.
- PullLogEntry.exception now maps stream errors containing
  'toomanyrequests' to DockerRegistryRateLimitExceeded (case 3).
- docker/interface.py:install() routes all three cases through a single
  _registry_rate_limit_exception() helper that picks the right issue
  type, suggestion and exception subclass based on the image's registry.
- utils/sentry.py filters APITooManyRequests (and anything wrapping it
  via __cause__) in capture_exception / async_capture_exception. One
  point of policy, every caller benefits.

Callers (supervisor.update(), plugin manager, homeassistant core) are
unchanged - UPDATE_FAILED issues still get created alongside the
registry-specific rate limit issue, giving users the full picture.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Consolidate Sentry noise filtering in one before_send hook

Move the APITooManyRequests filter from capture_exception /
async_capture_exception wrappers into the existing filter_data
before_send hook in supervisor/misc/filter.py, alongside the
AddonConfigurationError filter.

One isinstance tuple check instead of multiple layers, and every path
that reaches Sentry (including logging-integration and excepthook
captures, not just our explicit wrappers) now gets the same treatment.
The filter walks the __cause__ chain so wrapped rate-limit errors
(e.g. DockerHubRateLimitExceeded inside SupervisorUpdateError) still
get filtered. A debug log is emitted on each dropped event for
observability.

Review feedback from mdegat01 on #6732.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Drop GITHUB_RATELIMIT resolution issue

There is no actionable remediation for a GHCR rate limit - logging in
doesn't lift the quota the way it does for Docker Hub, and the cap is
on the authenticated account anyway. A resolution issue that just tells
the user "you were rate limited" adds UI noise without helping them.

Keep the GithubContainerRegistryRateLimitExceeded exception - retry
logic and the Sentry filter still key off it - but don't create a
resolution issue. A log entry from the exception constructor is
sufficient. Docker Hub still gets DOCKER_RATELIMIT + registry-login
suggestion since that is actionable.

Review feedback on #6732.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-22 07:49:01 +02:00
Stefan Agner
53f84ec15b Bump devcontainer to 6 (#6747)
With devcontainer 6, dbus-daemon is installed in the container, which
is required for tests. The latest version also supports disabling
AppArmor via the `SUPERVISOR_UNCONFINED` environment variable.
2026-04-21 16:52:28 +02:00
Stefan Agner
d431526b14 Fix unhandled WebSocket handshake errors and unnecessary token refresh (#6725)
Raise HomeAssistantWSConnectionError instead of HomeAssistantAPIError
for WebSocket handshake failures. The broader HomeAssistantAPIError was
not caught by the fire-and-forget send path which only catches
HomeAssistantWSError, resulting in "Task exception was never retrieved"
errors when Core's WebSocket endpoint isn't ready.

Additionally, narrow the retry catch in connect_websocket from
HomeAssistantAPIError to HomeAssistantAuthError. The broad catch caused
connection errors (not auth failures) to trigger unnecessary token
refreshes and retries, spamming "Updated Home Assistant API token" logs.

Also raise the log level for failed fire-and-forget WebSocket commands
from debug to warning for better visibility.
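The hierarchy change behind the fix can be sketched as below. The class names follow the commit message; the fire-and-forget wrapper is a simplified stand-in for the real send path.

```python
class HomeAssistantAPIError(Exception):
    """Broad API failure - previously raised for handshake errors too."""

class HomeAssistantWSError(HomeAssistantAPIError):
    """What the fire-and-forget send path catches."""

class HomeAssistantWSConnectionError(HomeAssistantWSError):
    """Handshake failures now raise this, so they land in the WS catch."""

class HomeAssistantAuthError(HomeAssistantAPIError):
    """The only error that should trigger a token refresh and retry."""

def fire_and_forget_send(send) -> None:
    """Send without awaiting the result; swallow expected WS errors only."""
    try:
        send()
    except HomeAssistantWSError:
        # Before the fix, handshake failures raised the broader
        # HomeAssistantAPIError and escaped this catch, producing
        # "Task exception was never retrieved" noise.
        pass
```

Narrowing the retry catch in `connect_websocket` to `HomeAssistantAuthError` follows the same principle: catch exactly the error the recovery action (token refresh) can actually fix.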

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-21 16:33:36 +02:00
Stefan Agner
ff2cdbfc36 Log unexpected errors in api_process wrappers (#6739)
* Log unexpected errors in api_process wrappers

The `api_process` and `api_process_raw` decorators silently swallowed
any `HassioError` that bubbled up from endpoint handlers, returning
`"Unknown error, see Supervisor logs"` to the caller while logging
nothing. This made the response message actively misleading: e.g. when
an endpoint touching D-Bus hit `DBusNotConnectedError` (raised without
a message by `@dbus_connected`), Core would surface
`SupervisorBadRequestError: Unknown error, see Supervisor logs` and
the Supervisor logs would contain no trace of it.

Log the caught `HassioError` with traceback before delegating to
`api_return_error` so the "see Supervisor logs" hint is actually
actionable. The `APIError` branch is left alone — those carry explicit
status codes and messages set by Supervisor code and are already
visible in the response.
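A rough sketch of the fixed decorator, with simplified stand-ins for Supervisor's actual exception types and response helper (the names follow the commit message, the bodies are illustrative):

```python
import functools
import logging

_LOGGER = logging.getLogger(__name__)

class HassioError(Exception):
    """Base Supervisor error."""

class APIError(HassioError):
    """Carries an explicit status code and message for the response."""

def api_return_error(message: str) -> dict:
    return {"result": "error", "message": message}

def api_process(method):
    """Wrap an endpoint handler into a JSON API response."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        try:
            return {"result": "ok", "data": method(*args, **kwargs)}
        except APIError as err:
            # Explicit message set by Supervisor code: already visible
            # in the response, no extra logging needed.
            return api_return_error(str(err))
        except HassioError:
            # Previously swallowed silently; now logged with traceback so
            # the "see Supervisor logs" hint is actually actionable.
            _LOGGER.exception("Unhandled exception in API endpoint")
            return api_return_error("Unknown error, see Supervisor logs")
    return wrapper
```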

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Capture unexpected API errors to Sentry

Non-APIError HassioError exceptions reaching api_process indicate
missing error handling in the endpoint handler. In addition to the
logging added in the previous commit, also send these to Sentry so
they surface as actionable issues rather than silently returning
"Unknown error, see Supervisor logs" to the caller.

* Drop capture exception from set_boot_slot

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-21 16:04:39 +02:00
Stefan Agner
7fb621234e Add Unix socket support for Core communication with feature flag (#6742)
* Use Unix socket for Supervisor to Core communication

Reintroduce Unix socket support for Supervisor-to-Core communication
(reverted in #6735) with the addition of a feature flag gate. The
feature is now controlled by the `core_unix_socket` feature flag and
disabled by default.

When enabled and Core version supports it, Supervisor communicates with
Core via a Unix socket at /run/os/core.sock instead of TCP. This
eliminates the need for access token authentication on the socket path,
as Core authenticates the peer by the socket connection itself.

Key changes:
- Add FeatureFlag.CORE_UNIX_SOCKET to gate the feature
- HomeAssistantAPI: transport-aware session/url/websocket management
- WSClient: separate connect() (Unix, no auth) and connect_with_auth()
  (TCP) class methods with proper error handling
- APIProxy delegates websocket setup to api.connect_websocket()
- Container state tracking for Unix session lifecycle
- CI builder mounts /run/supervisor for integration tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Sort feature flags alphabetically

* Drop per-call max_msg_size from WSClient

Hardcode the WebSocket message size cap to 64 MB in WSClient and remove
the parameter from WSClient.connect, connect_with_auth, _ws_connect,
and HomeAssistantAPI.connect_websocket. This was only ever overridden
by APIProxy, so threading it through four layers was unnecessary.

max_msg_size is a cap, not a pre-allocation; aiohttp only grows buffers
to the size of actual incoming messages. Supervisor's own control
channel never approaches 64 MB, so unifying the limit has no runtime
cost.

Addresses review feedback on #6742.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-21 15:03:05 +02:00
Mike Degatano
56abe94d74 Add versioned v2 API with apps terminology (#6741)
* Add versioned v2 API with apps terminology

Introduce a v2 API sub-app mounted at /v2 that uses 'apps' terminology
throughout, while keeping v1 fully backward-compatible.

Key changes:
- Add ATTR_ADDONS = 'addons' constant alongside ATTR_APPS = 'apps' so
  backup file data (which must remain 'addons' for backward compat) and
  v2 API responses can use distinct constants
- Add FeatureFlag.SUPERVISOR_V2_API to gate v2 route registration
- Mount aiohttp sub-app at /v2 in RestAPI.load() when flag is enabled
- Add _AppSecurityPatterns frozen dataclass and _V1_PATTERNS/_V2_PATTERNS
  with strict per-version regex sets (no cross-version matching)
- Add _register_v2_apps, _register_v2_backups, _register_v2_store route
  registration methods
- Add v1 thin wrapper methods (*_v1) for all affected endpoints so
  business logic lives in the canonical v2 methods
- Extract _info_data() helper in APIApps so v1 closure can bypass
  @api_process and still catch APIAppNotInstalled for store routing
- Add _rename_apps_to_addons_in_backups(), _process_location_in_body(),
  _all_store_apps_info() shared helpers to eliminate duplication
- Add api_client_v2, api_client_with_prefix, app_api_client_with_root,
  store_app_api_client_with_root parameterized test fixtures
- Add test_v2_api_disabled_without_feature_flag
- Parameterize backup, addons, and store tests to cover both v1 and v2
  paths

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix pylint false positive for re.Pattern C extension methods

re.Pattern methods (match, search, etc.) are C extension methods.
Pylint cannot detect them via static analysis when re.Pattern is used
as a type annotation in a dataclass field, producing false E1101
no-member errors. Add generated-members to inform pylint these members
exist.
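The workaround can take roughly this shape in a pylint config (a hypothetical fragment; the exact regex list used in the actual change may differ):

```toml
[tool.pylint.typecheck]
# re.Pattern methods are C extension members pylint cannot infer when
# re.Pattern appears as a dataclass field annotation.
generated-members = ["re\\.Pattern\\..*"]
```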

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* pylint and feedback fixes

* Copilot suggested fixes

* Minor feedback fixes

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-20 21:19:27 +02:00
Stefan Agner
38ddb3df54 Fix Core update rollback: delay image cleanup and fix missing rollback path (#6726)
* Delay old image cleanup until after health checks on Core update

Move the old Docker image cleanup from inside _update() to after the
post-update health checks (frontend loaded and accessible). This keeps
the previous version's image available locally when a rollback is
needed, avoiding a potentially slow re-download.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add test assertions for old image cleanup timing on Core update

Verify that the old Docker image is cleaned up only after health checks
pass, and not when a rollback is triggered.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix missing rollback when get_config fails after Core update

The early return after setting error_state skipped the rollback block,
leaving the system on a broken new version when the API stopped
responding after update. The other health check failure paths correctly
fall through to the rollback logic; this was the only one that didn't.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 10:57:13 +02:00
dependabot[bot]
0db56b09ce Bump ruff from 0.15.10 to 0.15.11 (#6746)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.10 to 0.15.11.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.10...0.15.11)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 10:56:49 +02:00
Stefan Agner
a504d85745 Remove double newlines from build and check config output (#6743)
* Remove double newlines from build output

The log lines from run command already have newline characters, so
joining them with "\n" adds extra newlines. Joining them with an empty
string preserves the original formatting of the logs.

* Remove double newlines from check config output

The log lines from the run command already have newline characters, so
joining them with "\n" adds extra newlines. Joining them with an empty
string preserves the original formatting of the logs.
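A toy illustration of the double-newline issue:

```python
# Each log line from the run command already ends with a newline.
log_lines = ["step 1 ok\n", "step 2 ok\n"]

doubled = "\n".join(log_lines)    # inserts a second newline between entries
preserved = "".join(log_lines)    # keeps the original formatting

assert doubled == "step 1 ok\n\nstep 2 ok\n"
assert preserved == "step 1 ok\nstep 2 ok\n"
```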

* Fix pytest
2026-04-16 17:18:05 +02:00
Mike Degatano
1218326af3 Add development feature toggle system (#6719)
* Add experimental feature toggle system

Introduces an ExperimentalFeature enum and feature_flags config to allow
toggling experimental features via the supervisor options API. The first
feature flag is 'supervisor_v2_api' to gate the upcoming V2 API.

Absent keys in options request = no change (partial update, consistent
with existing options APIs). The info endpoint always returns all known
feature flags and their current state for discoverability.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* ExperimentalFeature -> FeatureFlag

* Use explicit value of StrEnum to be typesafe

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Minor comment improvement

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-15 13:13:45 +02:00
Stefan Agner
607ea88d74 Bump devcontainer to 5 (#6736)
* Bump devcontainer to 5

The latest Supervisor devcontainer correctly bind mounts /run/supervisor
into the Supervisor container (required for Unix socket communication
with Core) and comes with AppArmor support.

* Sync extensions with Core devcontainer
2026-04-14 17:54:59 +02:00
Mike Degatano
ba8c49935b Refactor internal addon references to app/apps (#6717)
* Rename addon→app in docstrings and comments

Updates all docstrings and inline comments across supervisor/ and
tests/ to use the new app/apps terminology. No runtime behaviour
is changed by this commit.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Rename addon→app in code (variables, args, class names, functions)

Renames all internal Python identifiers from addon/addons to app/apps:
- Variable and argument names
- Function and method names
- Class names (Addon→App, AddonManager→AppManager, DockerAddon→DockerApp,
  all exception, check, and fixup classes, etc.)
- String literals used as Python identifiers (pytest fixtures,
  parametrize param names, patch.object attribute strings,
  URL route match_info keys)

External API contracts are preserved: JSON keys, error codes,
discovery protocol fields, TypedDict/attr.s field names.
Import module paths (supervisor/addons/) are also unchanged.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix partial backup/restore API to remap addons key to apps

The external API accepts `addons` as the request body key (since
ATTR_APPS = "addons"), but do_backup_partial and do_restore_partial
now take an `apps` parameter after the rename. The **body expansion
in both endpoints would pass `addons=...` causing a TypeError.

Remap the key before expansion in both backup_partial and
restore_partial:

    if ATTR_APPS in body:
        body["apps"] = body.pop(ATTR_APPS)

Also adds test_restore_partial_with_addons_key to verify the restore
path correctly receives apps= when addons is passed in the request
body. This path had no existing test coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix merge error

* Adjust AppLoggerAdapter to use app_name

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-04-14 16:47:20 +02:00
Stefan Agner
5c5428fde3 Revert "Use Unix socket for Supervisor to Core communication (#6590)" (#6735)
This reverts commit 28fa0b35bd.
2026-04-14 12:28:02 +02:00
dependabot[bot]
77e87faa00 Bump sentry-sdk from 2.57.0 to 2.58.0 (#6734)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.57.0 to 2.58.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.57.0...2.58.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.58.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-14 09:10:13 +02:00
dependabot[bot]
2c4048fcc0 Bump actions/cache from 5.0.4 to 5.0.5 (#6733)
Bumps [actions/cache](https://github.com/actions/cache) from 5.0.4 to 5.0.5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](668228422a...27d5ce7f10)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-14 09:09:55 +02:00
Stefan Agner
df65ded508 Drop wheel from build-system.requires (#6730)
Since setuptools 70.1 the bdist_wheel command is implemented inside
setuptools itself; the standalone wheel package is no longer required
to build (editable) installs of Supervisor. With setuptools~=82.0.0
pinned, the wheel entry in build-system.requires is legacy.

Supervisor is not published as a wheel either — it's only installed
editably inside the container — so there is no reason to keep wheel as
a build dependency. Dropping it also avoids churn from dependabot bumps
against the home-assistant musllinux wheels index, which is only
updated on merges to main.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-13 14:01:17 +02:00
dependabot[bot]
e7531463e6 Bump mypy from 1.20.0 to 1.20.1 (#6728)
Bumps [mypy](https://github.com/python/mypy) from 1.20.0 to 1.20.1.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.20.0...v1.20.1)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.20.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 10:36:47 +02:00
dependabot[bot]
eb25fc4b40 Bump actions/upload-artifact from 7.0.0 to 7.0.1 (#6727)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 7.0.0 to 7.0.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](bbbca2ddaa...043fb46d1a)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 7.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 09:07:01 +02:00
Stefan Agner
28fa0b35bd Use Unix socket for Supervisor to Core communication (#6590)
* Use Unix socket for Supervisor to Core communication

Switch internal Supervisor-to-Core HTTP and WebSocket communication
from TCP (port 8123) to a Unix domain socket.

The existing /run/supervisor directory on the host (already mounted
at /run/os inside the Supervisor container) is bind-mounted into the
Core container at /run/supervisor. Core receives the socket path via
the SUPERVISOR_CORE_API_SOCKET environment variable, creates the
socket there, and Supervisor connects to it via aiohttp.UnixConnector
at /run/os/core.sock.

Since the Unix socket is only reachable by processes on the same host,
requests arriving over it are implicitly trusted and authenticated as
the existing Supervisor system user. This removes the token round-trip
where Supervisor had to obtain and send Bearer tokens on every Core
API call. WebSocket connections are likewise authenticated implicitly,
skipping the auth_required/auth handshake.

Key design decisions:
- Version-gated by CORE_UNIX_SOCKET_MIN_VERSION so older Core
  versions transparently continue using TCP with token auth
- LANDINGPAGE is explicitly excluded (not a CalVer version)
- Hard-fails with a clear error if the socket file is unexpectedly
  missing when Unix socket communication is expected
- WSClient.connect() for Unix socket (no auth) and
  WSClient.connect_with_auth() for TCP (token auth) separate the
  two connection modes cleanly
- Token refresh always uses the TCP websession since it is inherently
  a TCP/Bearer-auth operation
- Logs which transport (Unix socket vs TCP) is being used on first
  request

Closes #6626
Related Core PR: home-assistant/core#163907

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Close WebSocket on handshake failure and validate auth_required

Ensure the underlying WebSocket connection is closed before raising
when the handshake produces an unexpected message. Also validate that
the first TCP message is auth_required before sending credentials.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix pylint protected-access warnings in tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Check running container env before using Unix socket

Split use_unix_socket into two properties to handle the Supervisor
upgrade transition where Core is still running with a container
started by the old Supervisor (without SUPERVISOR_CORE_API_SOCKET):

- supports_unix_socket: version check only, used when creating the
  Core container to decide whether to set the env var
- use_unix_socket: version check + running container env check, used
  for communication decisions

This ensures TCP fallback during the upgrade transition while still
hard-failing if the socket is missing after Supervisor configured
Core to use it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Improve Core API communication logging and error handling

- Remove transport log from make_request that logged before Core
  container was attached, causing misleading connection logs
- Log "Connected to Core via ..." once on first successful API response
  in get_api_state, when the transport is actually known
- Remove explicit socket existence check from session property, let
  aiohttp UnixConnector produce natural connection errors during
  Core startup (same as TCP connection refused)
- Add validation in get_core_state matching get_config pattern
- Restore make_request docstring

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Guard Core API requests with container running check

Add is_running() check to make_request and connect_websocket so no
HTTP or WebSocket connection is attempted when the Core container is
not running. This avoids misleading connection attempts during
Supervisor startup before Core is ready.

Also make use_unix_socket raise if container metadata is not available
instead of silently falling back to TCP. This is a defensive check
since is_running() guards should prevent reaching this state.

Add attached property to DockerInterface to expose whether container
metadata has been loaded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Reset Core API connection state on container stop

Listen for Core container STOPPED/FAILED events to reset the
connection state: clear the _core_connected flag so the transport
is logged again on next successful connection, and close any stale
Unix socket session.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Only mount /run/supervisor if we use it

* Fix pytest errors

* Remove redundant is_running check from ingress panel update

The is_running() guard in update_hass_panel is now redundant since
make_request checks is_running() internally. Also mock is_running
in the websession test fixture since tests using it need make_request
to proceed past the container running check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Bind mount /run/supervisor to Supervisor /run/os

Home Assistant OS (as well as the Supervised run scripts) bind mount
/run/supervisor to /run/os in Supervisor. Since we reuse this location
for the communication socket between Supervisor and Core, we need to
also bind mount /run/supervisor to Supervisor /run/os in CI.

* Wrap WebSocket handshake errors in HomeAssistantAPIError

Unexpected exceptions during the WebSocket handshake (KeyError,
ValueError, TypeError from malformed messages) are now wrapped in
HomeAssistantAPIError inside WSClient.connect/connect_with_auth.
This means callers only need to catch HomeAssistantAPIError.

Remove the now-unnecessary except (RuntimeError, ValueError,
TypeError) from proxy _websocket_client and add a proper error
message to the APIError per review feedback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Narrow WebSocket handshake exception handling

Replace broad `except Exception` with specific exception types that
can actually occur during the WebSocket handshake: KeyError (missing
dict keys), ValueError (bad JSON), TypeError (non-text WS message),
aiohttp.ClientError (connection errors), and TimeoutError. This
avoids silently wrapping programming errors into HomeAssistantAPIError.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Remove unused create_mountpoint from MountBindOptions

The field was added but never used. The /run/supervisor host path
is guaranteed to exist since HAOS creates it for the Supervisor
container mount, so auto-creating the mountpoint is unnecessary.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Clear stale access token before raising on final retry

Move token clear before the attempt check in connect_websocket so
the stale token is always discarded, even when raising on the final
attempt. Without this, the next call would reuse the cached bad token
via _ensure_access_token's fast path, wasting a round-trip.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add tests for Unix socket communication and Core API

Add tests for the new Unix socket communication path and improve
existing test coverage:

- Version-based supports_unix_socket and env-based use_unix_socket
- api_url/ws_url transport selection
- Connection lifecycle: connected log after restart, ignoring
  unrelated container events
- get_api_state/check_api_state parameterized across versions,
  responses, and error cases
- make_request is_running guard and TCP flow with real token fetch
- connect_websocket for both Unix and TCP (with token verification)
- WSClient.connect/connect_with_auth handshake success, errors,
  cleanup on failure, and close with pending futures

Consolidate existing tests into parameterized form and drop synthetic
tests that covered very little.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 15:09:38 +02:00
Stefan Agner
5de5d594a5 Clean up old apps builder images after build (#6716)
* Clean up old add-on builder images after build

After a Docker engine update, the add-on builder image (docker:*-cli)
is pulled with a new version tag, leaving old versions behind. Clean
up old builder images after each successful add-on build using the
existing cleanup_old_images method.

The cleanup is done after build because cleanup_old_images needs the
current image to exist, and the builder image is only pulled on first
build (in run_command) after a Docker engine update.

Also change ADDON_BUILDER_IMAGE to use the short name "docker" instead
of the canonical "docker.io/library/docker", as Docker stores images
under the short name and the reference filter in cleanup_old_images
does not match the canonical form.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix tests for short builder image name

Update test assertions to use the short "docker" image name
instead of the canonical "docker.io/library/docker".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 15:00:03 +02:00
dependabot[bot]
8f3638ec0d Bump actions/github-script from 8.0.0 to 9.0.0 (#6722)
Bumps [actions/github-script](https://github.com/actions/github-script) from 8.0.0 to 9.0.0.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](ed597411d8...3a2844b7e9)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: 9.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 09:18:20 +02:00
dependabot[bot]
e71b8670b5 Bump release-drafter/release-drafter from 7.1.1 to 7.2.0 (#6723)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 7.1.1 to 7.2.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](139054aeaa...5de9358398)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 09:16:47 +02:00
dependabot[bot]
de8abe2815 Bump ruff from 0.15.9 to 0.15.10 (#6724)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.9 to 0.15.10.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.9...0.15.10)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 09:16:33 +02:00
Stefan Agner
a30f2509a3 Enable IPv6 on Supervisor network by default for all installations (#6720)
* Improve Docker network test coverage and infrastructure

Add test cases for enable_ipv6=None (no user setting) to
test_network_recreation, verifying existing behavior where None
leaves the network unchanged. Use pytest.param with descriptive IDs
for better test readability.

Add create_network_mock side_effect to the docker fixture so network
creation returns realistic metadata built from the provided params.
Remove redundant manual create mock setups from individual tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Enable IPv6 on Supervisor network by default for all installations

Previously, IPv6 was only enabled by default for new installations
(when enable_ipv6 config was None). Existing installations with
IPv4-only networks were left unchanged unless the user explicitly
set enable_ipv6 to true.

Now, when no explicit IPv6 setting exists, the network is migrated
to dual-stack on next boot. The same safety checks apply: the migration
is blocked if user containers are running, and a reboot is required.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 18:08:30 +02:00
Stefan Agner
321b692370 Consolidate Supervisor auto-update into updater reload task (#6705)
* Consolidate Supervisor auto-update into updater reload task

The separate _update_supervisor task (24h interval) was redundant since
_reload_updater (previously ~7.5h interval) already triggered the same
update when a new version was found. This meant Supervisor updates were
effectively checked every 7.5h, undermining the intent of #6633 and
#6638 to reduce update frequency and registry pressure.

Merge the two code paths into one: _reload_updater now directly calls
_auto_update_supervisor (with the same job conditions) when a new
version is detected. The reload interval is increased from ~7.5h to 24h
to match the originally intended update cadence.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add regression test for scheduled supervisor auto-update

Verify that the scheduled reload_updater task triggers exactly one
supervisor update when a new version is available. Uses event loop
time patching to simulate the 24h interval firing.

This guards against the previous bug where a separate _update_supervisor
task ran on its own schedule, causing duplicate update attempts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:45:58 +02:00
Stefan Agner
1fcfededac Let write-side OSError propagate during backup creation (#6704)
* Let OSError propagate during backup creation

Securetar's create_tar context manager uses a two-phase header write:
on enter it writes a placeholder tar header (size unknown), and on exit
_finalize_tar_entry seeks back to rewrite the header with the actual
size. If an OSError (e.g. ENOSPC) occurs mid-write, the inner tar entry
is left with truncated data and a placeholder header. Continuing to
write more entries on top of this produces a structurally invalid tar
file that cannot be restored.

Previously, _folder_save wrapped OSError as BackupError, which
store_folders then caught and swallowed — allowing the backup to
continue writing to an already corrupt outer tar. Similarly,
_create_finalize silently swallowed OSError when writing backup.json,
and the finally block in create() could raise a secondary OSError from
_close_outer_tarfile that replaced the original exception.

Securetar already distinguishes read vs write errors: read-side errors
(e.g. permission denied on a source file) are wrapped as AddFileError
(non-fatal, skip the file), while write-side OSError propagates as-is.

With this change, write-side OSError is wrapped as BackupFatalError
(a BackupError subclass) instead of plain BackupError. This ensures:
- store_folders/store_addons do not swallow it (they only catch
  BackupError, and re-raise BackupFatalError explicitly).
- The job decorator handles it as a HassioError (no extra Sentry
  event). Letting OSError bubble up raw would cause the job decorator
  to treat it as an unhandled exception, capturing it to Sentry and
  wrapping it as JobException — producing more Sentry noise, not less.
- _do_backup catches it via `except BackupError` and deletes the
  incomplete backup file. This is the correct behavior since the tar
  is structurally corrupt and not restorable.

Changes:
- Add BackupFatalError exception for write-side I/O errors.
- In create(), use except/else instead of finally so that finalization
  is skipped when an error already occurred during yield. This prevents
  a secondary exception from _close_outer_tarfile replacing the
  original error.
- In _create_finalize, raise BackupFatalError on OSError instead of
  swallowing it.
- In _folder_save, wrap OSError as BackupFatalError (not BackupError).
- In store_folders, re-raise BackupFatalError instead of swallowing.
- In store_supervisor_config, wrap OSError as BackupFatalError.
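The re-raise pattern above can be sketched as follows (class and function shapes follow the commit description, but the code is illustrative, not the Supervisor implementation):

```python
class BackupError(Exception):
    """Non-fatal backup problem: skip the item and continue."""


class BackupFatalIOError(BackupError):
    """Write-side I/O error: the outer tar is corrupt, abort the backup."""


def store_folders(folders):
    """Save each folder, skipping non-fatal errors but aborting on fatal ones."""
    stored = []
    for name, save in folders:
        try:
            save()
        except BackupFatalIOError:
            raise  # re-raise explicitly: never swallow write-side errors
        except BackupError:
            continue  # read-side problem: skip this folder only
        stored.append(name)
    return stored


def _fail_nonfatal():
    raise BackupError("permission denied on one source file")


# The non-fatal failure is skipped; a BackupFatalIOError would propagate.
saved = store_folders([("media", lambda: None), ("share", _fail_nonfatal)])
```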

Fixes SUPERVISOR-B53
Fixes SUPERVISOR-1FAJ
Fixes SUPERVISOR-BJ4
Fixes SUPERVISOR-18KS
Fixes SUPERVISOR-1HE6

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Rename BackupFatalError to BackupFatalIOError

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 16:45:43 +02:00
Stefan Agner
4768bfc50d Wait for systemd-timesyncd to fully stop before setting time (#6715)
* Wait for systemd-timesyncd to stop before setting time

The previous 1-second sleep was not always enough for
systemd-timesyncd to fully stop, causing timedated to still reject
the set_time call with "Automatic time synchronization is enabled".
Instead, listen for the unit's ActiveState D-Bus property to become
inactive before proceeding, with a 10-second timeout.
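The signal-driven wait can be sketched like this (a toy in-memory unit; the real implementation listens to D-Bus PropertiesChanged, and these names are illustrative):

```python
import asyncio


class SystemdUnit:
    """Toy stand-in for the D-Bus unit proxy (illustrative, not the real API)."""

    def __init__(self, state: str) -> None:
        self.active_state = state
        self._waiters: list[tuple[str, asyncio.Event]] = []

    def properties_changed(self, new_state: str) -> None:
        # In the real code this runs from the D-Bus PropertiesChanged handler.
        self.active_state = new_state
        for target, event in self._waiters:
            if new_state == target:
                event.set()

    async def wait_for_active_state(self, target: str, timeout: float = 10.0) -> None:
        if self.active_state == target:
            return  # already in the target state, nothing to wait for
        event = asyncio.Event()
        self._waiters.append((target, event))
        await asyncio.wait_for(event.wait(), timeout)


async def _demo() -> str:
    unit = SystemdUnit("active")
    waiter = asyncio.create_task(unit.wait_for_active_state("inactive", timeout=1.0))
    await asyncio.sleep(0)  # let the waiter register its event
    unit.properties_changed("inactive")  # simulate systemd-timesyncd stopping
    await waiter
    return unit.active_state


final_state = asyncio.run(_demo())
```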

Refs SUPERVISOR-92R

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add wait_for_active_state helper to SystemdUnit

Centralize the repeated pattern of listening for D-Bus
PropertiesChanged signals to wait for a systemd unit's ActiveState
to reach a target state. Refactor core.py, host/firewall.py, and
mounts/mount.py to use the new helper.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Test time sync with active-to-inactive state transition

Exercise the actual wait_for_active_state signal-driven transition
in the time sync test: start the mock unit as "active" and drive it
to "inactive" via a PropertiesChanged signal, rather than starting
it as "inactive" which would make the wait a no-op.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 14:44:41 +02:00
Stefan Agner
5cb8b303bc Revert "Disable hassio gateway protection rules in dev mode (#6658)" (#6714)
This reverts commit 0cb96c36b6.
2026-04-08 11:25:21 +02:00
dependabot[bot]
657f8027a5 Bump securetar from 2026.2.0 to 2026.4.1 (#6710)
* Bump securetar from 2026.2.0 to 2026.4.1

Bumps [securetar](https://github.com/pvizeli/securetar) from 2026.2.0 to 2026.4.1.
- [Release notes](https://github.com/pvizeli/securetar/releases)
- [Commits](https://github.com/pvizeli/securetar/compare/2026.2.0...2026.4.1)

---
updated-dependencies:
- dependency-name: securetar
  dependency-version: 2026.4.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Avoid using SecureTarFile path attribute

Avoid using SecureTarFile path attribute which can be None. Instead,
introduce tar_path which can't be None and move backup existence check
before creating the SecureTarFile instance.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-04-08 10:20:53 +02:00
dependabot[bot]
e295b8f1bc Bump pytest from 9.0.2 to 9.0.3 (#6713)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 9.0.2 to 9.0.3.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/9.0.2...9.0.3)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-08 09:48:56 +02:00
dependabot[bot]
4c071746b3 Bump types-pyyaml from 6.0.12.20250915 to 6.0.12.20260408 (#6712)
Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.12.20250915 to 6.0.12.20260408.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-version: 6.0.12.20260408
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-08 09:48:43 +02:00
dependabot[bot]
5db908a965 Bump cryptography from 46.0.6 to 46.0.7 (#6711)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.6 to 46.0.7.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.6...46.0.7)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-08 09:48:30 +02:00
Mike Degatano
941f7cd2be Change addons to apps in all user-facing strings (#6696)
* Change addons to apps in all user-facing strings

* Fix grammar in errors

* Apply suggestions from code review

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-04-07 18:54:40 +02:00
Stefan Agner
49960e0ddb Explicitly allow multiple exceptions listed without parentheses (#6706)
Explicitly allow the Python 3.14 syntax for except clauses with multiple
exceptions without parentheses. Even though Python 3.14 is explicitly
called out, AI tools commonly suggest changing this syntax.
2026-04-07 18:12:19 +02:00
Jan Čermák
1264dbedaa Make app builds work without build.yaml (#6694)
* Make app builds work without build.yaml

App builds are moving build configuration into the Dockerfile itself (base
image defaults via `ARG`, labels via `LABEL`). This change makes `build.yaml`
optional for local app builds while preserving backward compatibility for apps
that still define it.

Key changes:
* Track whether `build.yaml` was found and log a warning if it is.
* Skip `BUILD_FROM` build arg when no build file exists, letting the
  Dockerfile's own `ARG BUILD_FROM=...` default take effect.
* Always include all configured registry credentials in docker config instead
  of matching only the base image's registry.
* Only set `io.hass.name` and `io.hass.description` labels when non-empty, as
  they could be defined in the Dockerfile directly.
* Log a deprecation warning when `build.yaml` is present.

Refs home-assistant/epics#33

* Add-on -> app

* Remove FileConfiguration from AddonBuild base classes

Since we're deprecating usage of build.yaml, stop abusing the
FileConfiguration class for parsing the build file. The AddonBuild class
has been rewritten to be populated from the config file in an async
factory class method instead of in the overloaded load_file. While
cleaning up, the squash property has been removed, as it's no longer used
anywhere; only a warning is printed if it's present in the parsed config.
2026-04-07 17:32:25 +02:00
Stefan Agner
fec8e859fe Use sets for unhealthy and unsupported reasons (#6703)
* Use sets for unhealthy and unsupported reasons

The unhealthy and unsupported reason collections in ResolutionManager
are unique by nature but were deduplicated manually with list operations.
Convert them to sets for a more natural data structure fit.

At serialization boundaries (REST API and WebSocket events), use
sorted() to ensure deterministic output order.
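The set-plus-sorted pattern in miniature (the enum here is illustrative; the real reasons live in Supervisor's const module):

```python
from enum import Enum


class UnsupportedReason(Enum):
    """Illustrative reasons, not the Supervisor's actual enum."""

    DOCKER_VERSION = "docker_version"
    NETWORK_MANAGER = "network_manager"


unsupported: set[UnsupportedReason] = set()
unsupported.add(UnsupportedReason.NETWORK_MANAGER)
unsupported.add(UnsupportedReason.DOCKER_VERSION)
unsupported.add(UnsupportedReason.DOCKER_VERSION)  # deduplication is free

# Order is imposed only at the serialization boundary:
payload = sorted(reason.value for reason in unsupported)
```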

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix set usage in Sentry filter and test

The unhealthy set in the Sentry filter context needs sorted() for
serialization, and the test was using list subscript on the set.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 17:20:23 +02:00
Stefan Agner
07f02a23be Prevent Sentry event feedback loop from warning handler (#6700)
The custom warning_handler sends warnings to Sentry via
capture_exception. When the Sentry SDK's own HTTP transport triggers a
warning (e.g. urllib3 InsecureRequestWarning due to broken CA certs),
this creates a self-inducing feedback loop: the warning is captured,
sent to Sentry, which triggers another warning, and so on.

Skip capture_exception for warnings originating from Sentry SDK
background threads (identified by the "sentry-sdk." thread name
prefix). Warnings are still logged normally via _LOGGER.warning.

A dedicated test validates that the SDK's BackgroundWorker thread uses
the expected naming convention, so any future SDK change will be caught.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 17:20:06 +02:00
dependabot[bot]
2d279fcfd0 Bump ruff from 0.15.8 to 0.15.9 (#6701)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.8 to 0.15.9.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.8...0.15.9)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-03 10:17:15 +02:00
Stefan Agner
43ca846232 Adapt devcontainer.json for systemd-based devcontainer image (#6693)
Update devcontainer.json settings to work with the new systemd-based
devcontainer image (v4).

- Bump image tag to 4-supervisor to get the systemd-enabled image
- Set overrideCommand to false so the image's CMD (/sbin/init) runs
  as PID 1 instead of being replaced by VS Code's default sleep command
- Set remoteUser to vscode to preserve the non-root shell experience
  (required when overrideCommand is false, since VS Code no longer
  injects its own user-switching wrapper)
- Add /var/lib/containerd volume mount because modern Docker uses the
  containerd snapshotter, which stores layer data outside
  /var/lib/docker
- Add tmpfs on /tmp to match typical systemd expectations and avoid
  leftover state across container restarts
2026-04-02 14:40:54 +02:00
dependabot[bot]
ec1ad8e838 Bump dbus-fast from 4.0.0 to 4.0.4 (#6697)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 4.0.0 to 4.0.4.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v4.0.0...v4.0.4)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 4.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-02 10:27:47 +02:00
Jan Čermák
a78f02eed8 Fix version/requirements parsing in setup.py, set version in Dockerfile (#6692)
* Fix version/requirements parsing in setup.py, set version in Dockerfile

Remove the CI step that patched const.py with the detected version and
instead set the version during the Dockerfile build from the
BUILD_VERSION argument.

Also, fix parsing of the version in `setup.py` - it wasn't working
because it split the file on "/n" instead of newlines, resulting in the
installed Python package having its version string set to
`9999.9.9.dev9999` instead of the expected version.
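A reconstruction of that bug class (not the actual setup.py code): splitting on the two-character string "/n" leaves the file as one blob, so a line-anchored match never fires and the fallback version is kept.

```python
import re

RE_VERSION = re.compile(r'SUPERVISOR_VERSION\s*=\s*"([^"]+)"')
CONST_PY = '"""Constants."""\nSUPERVISOR_VERSION = "2026.04.0"\n'
FALLBACK = "9999.09.9.dev9999"


def parse_version(text: str, separator: str) -> str:
    for line in text.split(separator):
        if match := RE_VERSION.match(line):
            return match.group(1)
    return FALLBACK


buggy = parse_version(CONST_PY, "/n")  # one "line" starting with the docstring
fixed = parse_version(CONST_PY, "\n")  # real lines, the version line matches
```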

(Note: setuptools are doing a version string normalization, so the
installed package has stripped leading zeroes and the second component
is `9` instead of the literal `09` used in default string in all places.
Fixing this is out of scope of this change as the ideal solution would
be to change the versioning schema but it should be noted.)

Lastly, clean up builder.yaml environment variables (crane is not used
anymore after #6679).

* Generate setuptools-compatible version for PR builds

For PR builds, we're using the plain commit SHA as the version. This is
perfectly fine for Docker tags, but it's not an acceptable Python
package version; fixing the setup.py bug surfaced this error. Work
around it by setting the package version to the string we used
previously, suffixed with the commit SHA as build metadata (delimited
by `+`).
2026-04-01 16:01:46 +02:00
Jan Čermák
31636fe310 Add builder workflow to paths triggering rebuild on merge (#6691)
When only the workflow changes, CI doesn't trigger a rebuild when the PR
is merged to the `main` branch. Since the workflow itself is a legitimate
build trigger, add it to the paths list.
2026-04-01 10:26:51 +02:00
Jan Čermák
0b805a2c09 Fix builder multi-arch manifest step, use matrix prepare step (#6690)
The manifest step was failing because the image name wasn't set in the
env. Also we can standardize the workflow by using the shared matrix
prepare step.
2026-04-01 09:38:01 +02:00
dependabot[bot]
63a21de82d Bump mypy from 1.19.1 to 1.20.0 (#6689)
Bumps [mypy](https://github.com/python/mypy) from 1.19.1 to 1.20.0.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.19.1...v1.20.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.20.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 08:59:33 +02:00
dependabot[bot]
5308d98bad Bump sentry-sdk from 2.56.0 to 2.57.0 (#6688)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.56.0 to 2.57.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.56.0...2.57.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.57.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 08:58:53 +02:00
dependabot[bot]
1372fc2b53 Bump orjson from 3.11.7 to 3.11.8 (#6687)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.7 to 3.11.8.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.7...3.11.8)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 08:57:06 +02:00
dependabot[bot]
69eb1deb78 Bump aiohttp from 3.13.4 to 3.13.5 (#6686)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 08:56:37 +02:00
dependabot[bot]
1b585a556b Bump getsentry/action-release from 3.5.0 to 3.6.0 (#6685)
Bumps [getsentry/action-release](https://github.com/getsentry/action-release) from 3.5.0 to 3.6.0.
- [Release notes](https://github.com/getsentry/action-release/releases)
- [Changelog](https://github.com/getsentry/action-release/blob/master/CHANGELOG.md)
- [Commits](dab6548b3c...5657c9e888)

---
updated-dependencies:
- dependency-name: getsentry/action-release
  dependency-version: 3.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 08:49:35 +02:00
Stefan Agner
667bd62742 Remove CLI command hint from unknown error messages (#6684)
* Remove CLI command hint from unknown error messages

Since #6303 introduced specific error messages for many cases,
the generic "check with 'ha supervisor logs'" hint in unknown
error messages is no longer as useful. Remove the CLI command
part while keeping the "Check supervisor logs for details" rider.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Consistently use "Supervisor logs" with capitalization

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2026-03-31 18:09:14 +02:00
Jan Čermák
b1ea4ce52f Publish multi-arch manifest after finished build (#6683)
Publish multi-arch manifest (`hassio-supervisor` image) after the build
finishes. The step's `needs` intentionally does not include the `version`
step: since the build itself already publishes the arch-prefixed image,
only the build is needed for the manifest to be published as well.

Closes #6646
2026-03-31 13:52:23 +02:00
Jan Čermák
ba9080dfdd Remove re-tagging of old Supervisor for unsupported architectures (#6679)
In #6347 we dropped the build for deprecated architectures and started
re-tagging of Supervisor 2025.11.5 images to make them available through
the tag of the latest version. This was to provide some interim period
of graceful handling of updates for devices which were not online at the
time when Supervisor dropped support for these architectures. As the
support was dropped almost 4 months ago already, the majority of users
should hopefully have migrated by now. The rest will see Supervisor
failing to update, with no message about the architecture drop, if they
update from too old a version. Since the re-tagged version also reported
a failure to update, the impact isn't that bad.
2026-03-31 11:30:12 +02:00
Stefan Agner
7f8a6f7e09 Drop unused crypto attribute from backups (#6682)
The "crypto" field in backup.json was introduced alongside encryption
support in 2018 to indicate the algorithm (aes128). Over time,
encryption moved into securetar and the field became write-only
metadata — no code reads it to make decisions. With securetar v3, the
field is now actively misleading since the actual encryption algorithm
differs from what "crypto": "aes128" suggests. Remove ATTR_CRYPTO and
CRYPTO_AES128 constants, stop writing "crypto" to new backups, and use
vol.Remove to silently strip the key when loading old backups that
still contain it.
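A plain-dict sketch of the vol.Remove behavior described above (the real code does this inside the voluptuous schema; the function here is illustrative):

```python
def load_backup_metadata(data: dict) -> dict:
    """Strip the legacy write-only "crypto" key silently on load."""
    cleaned = dict(data)
    cleaned.pop("crypto", None)  # present only in old backups, never read
    return cleaned


old_backup = {"name": "core-2018", "protected": True, "crypto": "aes128"}
loaded = load_backup_metadata(old_backup)
```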

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 11:15:36 +02:00
Stefan Agner
a4a17a70a5 Add specific error message for registry authentication failures (#6678)
* Add specific error message for registry authentication failures

When a Docker image pull fails with 401 Unauthorized and registry
credentials are configured, raise DockerRegistryAuthError instead of
a generic DockerError. This surfaces a clear message to the user
("Docker registry authentication failed for <registry>. Check your
registry credentials") instead of "An unknown error occurred with
addon <name>".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add tests for registry authentication error handling

Test that a 401 during image pull raises DockerRegistryAuthError when
credentials are configured, and falls back to generic DockerError
when no credentials are present.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add tests for addon install/update/rebuild auth failure handling

Test that DockerRegistryAuthError propagates correctly through
addon install, update, and rebuild paths without being wrapped
in a generic AddonUnknownError.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-31 09:29:49 +02:00
Stefan Agner
e630ec1ac4 Include Docker registry configurations in backups (#6674)
* Include Docker registry configurations in backups

Docker registry credentials were removed from backup metadata in a prior
change to avoid exposing secrets in unencrypted data. Now that the encrypted
supervisor.tar inner archive exists, add docker.json alongside mounts.json
to securely backup and restore registry configurations.

On restore, registries from the backup are merged with any existing ones.
Old backups without docker.json are handled gracefully.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Increase test coverage by testing more error paths

* Address review feedback for Docker registry backup

Remove unnecessary dict() copy when serializing registries for backup
since the property already returns a dict.

Change DockerConfig.registries to use direct key access instead of
.get() with a default. The schema guarantees the key exists, and
.get() with a default would return a detached temporary dict that
silently discards updates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 18:34:27 +02:00
Stefan Agner
be95349185 Reuse IMAGE_REGISTRY_REGEX for docker_image validation (#6667)
* Reuse IMAGE_REGISTRY_REGEX for docker_image validation

Replace the monolithic regex in docker_image validator with a
function-based approach that reuses get_registry_from_image() from
docker.utils for robust registry detection. This properly handles
domains, IPv4/IPv6 addresses, ports, and localhost while still
rejecting tags (managed separately by the add-on system).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Address review feedback: reorder checks for efficiency

Check falsy value before isinstance, and empty path before tag check,
as suggested in PR review.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 18:33:11 +02:00
Stefan Agner
1fd78dfc4e Fix Docker Hub registry auth for containerd image store (#6677)
aiodocker derives ServerAddress for X-Registry-Auth by doing
image.partition("/"). For Docker Hub images like
"homeassistant/amd64-supervisor", this extracts "homeassistant"
(the namespace) instead of "docker.io" (the registry).

With the classic graphdriver image store, ServerAddress was never
checked and credentials were sent regardless. With the containerd
image store (default since Docker v29 / HAOS 15), the resolver
compares ServerAddress against the actual registry host and silently
drops credentials on mismatch, falling back to anonymous access.

Fix by prefixing Docker Hub images with "docker.io/" when registry
credentials are configured, so aiodocker sets ServerAddress correctly.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 14:43:18 +02:00
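The faulty derivation is easy to reproduce; `server_address` below is a stand-in for what aiodocker does internally, not its real API:

```python
def server_address(image: str) -> str:
    # Everything before the first "/" is taken as the registry host.
    return image.partition("/")[0]

# For Docker Hub images, the namespace is mistaken for the registry:
assert server_address("homeassistant/amd64-supervisor") == "homeassistant"

# Prefixing with "docker.io/" yields the correct registry host:
assert server_address("docker.io/homeassistant/amd64-supervisor") == "docker.io"
```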
Stefan Agner
6b41fd4112 Improve codecov settings for more sensible defaults (#6676)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 13:24:22 +02:00
sepahewe
98bbb8869e Add support for negative numbers in apps options (#6673)
* Added support for negative numbers in options

* Do not allow -. as float

* Added tests for integers and floats in options.

* Fixed ruff errors

* Added tests for outside of int/float limits
2026-03-30 12:08:57 +02:00
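The bullet points above suggest matchers along these lines; the regexes here are an illustrative sketch, not the patterns actually merged:

```python
import re

# Accept an optional leading minus; reject "-." since a float still
# needs digits before the decimal point.
RE_INT = re.compile(r"^-?\d+$")
RE_FLOAT = re.compile(r"^-?\d+(\.\d+)?$")

assert RE_INT.match("-42")
assert RE_FLOAT.match("-3.14")
assert not RE_FLOAT.match("-.")
assert not RE_INT.match("--1")
```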
dependabot[bot]
ef71ffb32b Bump aiohttp from 3.13.3 to 3.13.4 (#6675)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 10:56:15 +02:00
Stefan Agner
713354bf56 Handle certain OSError in get_latest_mtime during directory walk (#6632)
Besides file not found also catch "Too many levels of symbolic links"
which can happen when there are symbolic link loops in the add-on/apps
repository.

Also improve error handling in the repository update process to catch
OSError when checking for local modifications and raise a specific
error that can be handled appropriately.

Fixes SUPERVISOR-1FJ0

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 09:13:48 +01:00
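A minimal sketch of the error handling described above (the real implementation differs in details):

```python
import errno
import os

def get_latest_mtime(path: str) -> float:
    """Newest file mtime under path, tolerating vanished files and symlink loops."""
    latest = 0.0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                latest = max(latest, os.stat(os.path.join(root, name)).st_mtime)
            except OSError as err:
                # ENOENT: file removed mid-walk; ELOOP: "Too many levels
                # of symbolic links" caused by a symlink loop in the repository.
                if err.errno in (errno.ENOENT, errno.ELOOP):
                    continue
                raise
    return latest
```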
dependabot[bot]
e2db2315b3 Bump ruff from 0.15.7 to 0.15.8 (#6670)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.7 to 0.15.8.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.7...0.15.8)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-27 09:13:23 +01:00
dependabot[bot]
39afa70cf6 Bump codecov/codecov-action from 5.5.3 to 6.0.0 (#6669)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.3 to 6.0.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](1af58845a9...57e3a136b7)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-27 09:13:10 +01:00
Paul Tarjan
2b9c8282a4 Include network storage mount configurations in backups (#6411)
* Include network storage mount configurations in backups

When creating backups, now stores mount configurations (CIFS/NFS shares)
including server, share, credentials, and other settings. When restoring
from a backup, mount configurations are automatically restored.

This fixes the issue where network storage definitions for backups
were lost after restoring from a backup, requiring users to manually
reconfigure their network storage mounts.

Changes:
- Add ATTR_MOUNTS schema to backup validation
- Add store_mounts() method to save mount configs during backup
- Add restore_mounts() method to restore mount configs during restore
- Add MOUNTS stage to backup/restore job stages
- Update BackupManager to call mount backup/restore methods
- Add tests for mount backup/restore functionality

Fixes home-assistant/core#148663

* Address reviewer feedback for mount backup/restore

Changes based on PR review:
- Store mount configs in encrypted mounts.tar instead of unencrypted
  backup metadata (security fix for passwords)
- Separate mount restore into config save + async activation tasks
  (mounts activate in background, failures don't block restore)
- Add replace_default_backup_mount parameter to control whether to
  overwrite existing default mount setting
- Remove unnecessary broad exception handler for default mount setter
- Simplify schema: ATTR_MOUNTS is now just a boolean flag since
  actual data is in the encrypted tar file
- Update tests to reflect new async API and return types

* Fix code review issues in mount backup/restore

- Add bind mount handling for MEDIA and SHARE usage types in
  _activate_restored_mount() to mirror MountManager.create_mount()
- Fix double save_data() call by using needs_save flag
- Import MountUsage const for usage type checks

* Add pylint disable comments for protected member access

* Tighten broad exception handlers in mount backup restore

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address second round of reviewer feedback

- Catch OSError separately and check errno.EBADMSG for drive health
- Validate mounts JSON against SCHEMA_MOUNTS_CONFIG before importing
- Use mount_data[ATTR_NAME] instead of .get("name", "unknown")
- Overwrite existing mounts on restore instead of skipping
- Move restore_mount/activate logic to MountManager (no more
  protected-access in Backup)
- Drop unused replace_default_backup_mount parameter
- Fix test_backup_progress: add mounts stage to expected events
- Fix test_store_mounts: avoid create_mount which requires dbus

* Rename mounts.tar to supervisor.tar for generic supervisor config

Rename the inner tar from mounts.tar to supervisor.tar so it can hold
multiple config files (mounts.json now, docker credentials later).
Rename store_mounts/restore_mounts to store_supervisor_config/
restore_supervisor_config and update stage names accordingly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix pylint protected-access and test timeouts in backup tests

- Add pylint disable comment for _mounts protected access in test_backup.py
- Mock restore_supervisor_config in test_full_backup_to_mount and
  test_partial_backup_to_mount to avoid D-Bus mount activation during
  restore that causes timeouts in the test environment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address agners review feedback

- Change create_inner_tar() to create_tar() per #6575
- Remove "r" argument from SecureTarFile (now read-only by default)
- Change warning to info for missing mounts tar (old backups won't have it)
- Narrow exception handler to (MountError, vol.Invalid, KeyError, OSError)

* Update supervisor/backups/backup.py

* Address agners feedback: remove metadata flag, add mount feature check

- Remove ATTR_SUPERVISOR boolean flag from backup metadata; instead
  check for physical presence of supervisor.tar (like folder backups)
- Remove has_supervisor_config property
- Always attempt supervisor config restore (tar existence check handles it)
- Add HostFeature.MOUNT check in _activate_restored_mount before
  attempting to activate mounts on systems without mount support

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2026-03-26 17:08:16 -04:00
Stefan Agner
6525c8c231 Remove unused requests dependency (#6666)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:32:19 +01:00
Stefan Agner
612664e3d6 Fix build workflow by hardcoding architectures matrix (#6665)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 09:59:19 +01:00
dependabot[bot]
aa9a4c17f6 Bump cryptography from 46.0.5 to 46.0.6 (#6663)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.5 to 46.0.6.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.5...46.0.6)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-26 08:38:14 +01:00
dependabot[bot]
0f9cb9ee03 Bump sigstore/cosign-installer from 4.1.0 to 4.1.1 (#6662) 2026-03-26 07:53:12 +01:00
dependabot[bot]
03b1e95b94 Bump requests from 2.32.5 to 2.33.0 (#6664) 2026-03-26 07:47:29 +01:00
Stefan Agner
c0cca1ff8b Centralize OSError bad message handling in ResolutionManager (#6641)
Add check_oserror() method to ResolutionManager with an extensible
errno-to-unhealthy-reason mapping. Replace ~20 inline
`if err.errno == errno.EBADMSG` checks across 14 files with a single
call to self.sys_resolution.check_oserror(err). This makes it easy to
add handling for additional filesystem errors (e.g. EIO, ENOSPC) in
one place.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 16:47:36 +01:00
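The centralized check might look roughly like this; the reason strings and class shape are illustrative, not Supervisor's real types:

```python
import errno

# Extensible errno-to-unhealthy-reason mapping, as described above.
UNHEALTHY_BY_ERRNO = {
    errno.EBADMSG: "oserror_bad_message",
    # e.g. errno.EIO or errno.ENOSPC could be added here later
}

class ResolutionManager:
    def __init__(self) -> None:
        self.unhealthy: list[str] = []

    def check_oserror(self, err: OSError) -> bool:
        """Mark the system unhealthy for known errnos; return True if matched."""
        if (reason := UNHEALTHY_BY_ERRNO.get(err.errno)) is None:
            return False
        self.unhealthy.append(reason)
        return True
```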
dependabot[bot]
2b2aca873b Bump sentry-sdk from 2.55.0 to 2.56.0 (#6661)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-25 07:49:24 +01:00
Jan Čermák
27609ee992 Migrate builder workflow to new builder actions (#6653)
* Migrate builder workflow to new builder actions

Migrate Supervisor image build to new builder actions. The resulting images
should be identical to those built by the builder.

Refs #6646 - does not implement multi-arch manifest publishing (will be done in
a follow-up)

* Update devcontainer version to 3
2026-03-24 10:04:18 +01:00
dependabot[bot]
573e5ac767 Bump types-requests from 2.32.4.20260107 to 2.32.4.20260324 (#6660)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 07:33:58 +01:00
Stefan Agner
c9a2da34c2 Call Subscribe on systemd D-Bus connect to enable signals (#6659)
systemd only emits bus signals (including PropertiesChanged) when at
least one client has called Subscribe() on the Manager interface. On
regular HAOS systems, systemd-logind calls Subscribe which enables
signals for all bus clients. However, in environments without
systemd-logind (such as the Supervisor devcontainer with systemd), no
signals are emitted, causing the firewall unit wait to time out.

Explicitly calling Subscribe() has no downsides and makes it clear
that the Supervisor relies on these signals. There is no need to call
Unsubscribe() as systemd automatically tracks clients and stops
emitting signals when all subscribers have disconnected from the bus.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 14:21:09 +01:00
Jan Čermák
0cb96c36b6 Disable hassio gateway protection rules in dev mode (#6658)
Since systemd is not guaranteed to be available in the devcontainer to create
the transient unit, we need to skip creating the firewall rules when
Supervisor is started in dev mode.

See: https://github.com/home-assistant/supervisor/pull/6650#issuecomment-4097583552
2026-03-23 10:13:31 +01:00
dependabot[bot]
f719db30c4 Bump attrs from 25.4.0 to 26.1.0 (#6655)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.4.0 to 26.1.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 26.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 09:44:42 +01:00
dependabot[bot]
ae3634709b Bump pytest-cov from 7.0.0 to 7.1.0 (#6657)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 7.0.0 to 7.1.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v7.0.0...v7.1.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 7.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 09:11:21 +01:00
dependabot[bot]
64d9bbada5 Bump ruff from 0.15.6 to 0.15.7 (#6654)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 20:37:19 +01:00
Stefan Agner
36124eafae Add firewall rules to protect Docker gateway from external access (#6650)
Add iptables rules via a systemd transient unit to drop traffic
addressed to the bridge gateway IP from non-bridge interfaces.

The firewall manager waits for the transient unit to complete and
verifies success via D-Bus property change signals. On failure, the
system is marked unhealthy and host-network add-ons are prevented
from booting.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 16:46:17 +01:00
dependabot[bot]
c16b3ca516 Bump codecov/codecov-action from 5.5.2 to 5.5.3 (#6649)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.2 to 5.5.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](671740ac38...1af58845a9)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:34:03 +01:00
dependabot[bot]
02b201d0f7 Bump actions/cache from 5.0.3 to 5.0.4 (#6647)
Bumps [actions/cache](https://github.com/actions/cache) from 5.0.3 to 5.0.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](cdf6c1fa76...668228422a)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 5.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:33:50 +01:00
dependabot[bot]
4bde25794f Bump release-drafter/release-drafter from 7.1.0 to 7.1.1 (#6648)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 7.1.0 to 7.1.1.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](44a942e465...139054aeaa)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 13:33:36 +01:00
dependabot[bot]
82b893a5b1 Bump coverage from 7.13.4 to 7.13.5 (#6645)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.13.4 to 7.13.5.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.13.4...7.13.5)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.13.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:48:13 +01:00
dependabot[bot]
b24ada6a21 Bump sentry-sdk from 2.54.0 to 2.55.0 (#6644)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.54.0 to 2.55.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.54.0...2.55.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.55.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:47:48 +01:00
dependabot[bot]
6dff48dbb4 Bump release-drafter/release-drafter from 7.0.0 to 7.1.0 (#6643)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 7.0.0 to 7.1.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](3a7fb5c85b...44a942e465)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:46:47 +01:00
Stefan Agner
40f9504157 Increase plugin update check interval from ~8h to 12h, Supervisor to 24h (#6638)
Further slow down automatic update rollout to reduce pressure on container
registry infrastructure (GHCR rate limiting). Plugins are staggered 2 minutes
apart starting at 12h, Supervisor moves from 12h to 24h.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 19:28:36 +01:00
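The staggered schedule can be sketched as below; the plugin names and their order are illustrative, only the 12h base and 2-minute stagger come from the commit:

```python
from datetime import timedelta

SUPERVISOR_UPDATE_CHECK = timedelta(hours=24)
PLUGINS = ["dns", "audio", "cli", "observer", "multicast"]

# Plugins are staggered 2 minutes apart starting at 12h.
PLUGIN_UPDATE_CHECK = {
    name: timedelta(hours=12, minutes=2 * i) for i, name in enumerate(PLUGINS)
}

assert PLUGIN_UPDATE_CHECK["dns"] == timedelta(hours=12)
assert PLUGIN_UPDATE_CHECK["audio"] == timedelta(hours=12, minutes=2)
```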
Stefan Agner
687dccd1f5 Increase Supervisor update check interval from 8h to 12h (#6633)
Slow down the automatic Supervisor update rollout to reduce pressure
on the container registry infrastructure (GHCR rate limiting).

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 17:19:26 +01:00
Stefan Agner
f41a8e9d08 Wait for addon startup task before unload to prevent data access race (#6630)
* Wait for addon startup task before unload to prevent data access race

Replace the cancel-based approach in unload() with an await of the outer
_wait_for_startup_task. The container removal and state change resolve the
startup event naturally, so we just need to ensure the task completes
before addon data is removed. This prevents a KeyError on self.name access
when _wait_for_startup times out after data has been removed.

Also simplify _wait_for_startup by removing the unnecessary inner task
wrapper — asyncio.wait_for can await the event directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Drop asyncio.sleep() in test_manager.py

* Only clear startup task reference if still the current task

Prevent a race where an older _wait_for_startup task's finally block
could wipe the reference to a newer task, causing unload() to skip
the await.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Reuse existing pending startup wait task when addon is already running

If start() is called while the addon is already running and a startup
wait task is still pending, return the existing task instead of creating
a new one.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 12:46:29 +01:00
Thomas Kadauke
cbeb3520c3 fix: remove WebSocket message size limit in ingress proxy (#6604)
aiohttp's default max_msg_size of 4MB causes the ingress WebSocket proxy
to silently drop connections when an add-on sends messages larger than
that limit (e.g. Zigbee2MQTT's bridge/devices payload with many devices).

Setting max_msg_size=0 removes the limit on both the server-side
WebSocketResponse and the upstream ws_connect, fixing dropped connections
for add-ons that produce large WebSocket messages.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: J. Nick Koston <nick@koston.org>
2026-03-16 12:46:16 +01:00
dependabot[bot]
8b9928d313 Bump masesgroup/retrieve-changed-files from 3.0.0 to 4.0.0 (#6637)
Bumps [masesgroup/retrieve-changed-files](https://github.com/masesgroup/retrieve-changed-files) from 3.0.0 to 4.0.0.
- [Release notes](https://github.com/masesgroup/retrieve-changed-files/releases)
- [Commits](491e80760c...45a8b3b496)

---
updated-dependencies:
- dependency-name: masesgroup/retrieve-changed-files
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 09:47:28 +01:00
dependabot[bot]
f58d905082 Bump release-drafter/release-drafter from 6.4.0 to 7.0.0 (#6636)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.4.0 to 7.0.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](6a93d82988...3a7fb5c85b)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 09:46:30 +01:00
Jan Čermák
093e98b164 Fix fallback time sync, create repair issue if time is out of sync (#6625)
* Fix fallback time sync, create repair issue if time is out of sync

The "poor man's NTP" using the whois service didn't work because it attempted
to sync the time when the NTP service was enabled, which is rejected by the
timedated service. To fix this, Supervisor now first disables the
systemd-timesyncd service and creates a repair issue before adjusting the time.
The timesyncd service stays disabled until the fixup is submitted. Theoretically,
if the time moves backwards from an invalid time in the future,
systemd-timesyncd could otherwise restore the wrong time from its saved
timestamp if it were re-enabled after the time was set.

Also, the sync is now performed if the time is more than 1 hour off, in either
direction (previously it only intervened if the time was more than 3 days in
the past).

Fixes #6015, refs #6549

* Update test_adjust_system_datetime_if_time_behind
2026-03-13 16:01:38 +01:00
dependabot[bot]
eedc623ec5 Bump ruff from 0.15.5 to 0.15.6 (#6629)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.5 to 0.15.6.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.5...0.15.6)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-13 08:49:48 +01:00
dependabot[bot]
7ac900da83 Bump actions/download-artifact from 8.0.0 to 8.0.1 (#6624)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 8.0.0 to 8.0.1.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](70fc10c6e5...3e5f45b2cf)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 09:56:02 +01:00
Stefan Agner
f8d3443f30 Use securetar v3 for backups when Core is 2026.3.0 or newer (#6621)
Core supports SecureTar v3 since 2026.3.0, so use the new version only
then to ensure compatibility. Fall back to v2 for older Core versions.
2026-03-11 18:50:24 +01:00
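The gating can be sketched with plain version tuples rather than Supervisor's real version type:

```python
# Core supports SecureTar v3 from 2026.3.0 onward.
MIN_CORE_FOR_V3 = (2026, 3, 0)

def securetar_version(core_version: tuple[int, int, int]) -> int:
    """Use SecureTar v3 only when Core supports it; fall back to v2 otherwise."""
    return 3 if core_version >= MIN_CORE_FOR_V3 else 2

assert securetar_version((2026, 3, 0)) == 3
assert securetar_version((2026, 2, 5)) == 2
```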
Stefan Agner
83c8c0aab0 Remove obsolete persistent notification system (#6623)
The core_security check (HA < 2021.1.5 with custom components) and the
ResolutionNotify class that created persistent notifications for it are
no longer needed. The minimum supported HA version is well past 2021.1.5,
so this check can never trigger. The notify module was the only consumer
of persistent notifications and had no other users.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 18:50:07 +01:00
Jan Čermák
3c703667ce Bump uv to v0.10.9 (#6622)
* https://github.com/astral-sh/uv/blob/0.10.9/CHANGELOG.md
2026-03-11 16:17:02 +01:00
Stefan Agner
31c2fcf377 Treat empty string password as None in backup restore (#6618)
* Treat empty string password as None in backup restore

Work around a securetar 2026.2.0 bug where an empty string password
sets encrypted=True but fails to derive a key, leading to an
AttributeError on restore. This also restores consistency with backup
creation which uses a truthiness check to skip encryption for empty
passwords.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Explicitly mention that "" is treated as no password

* Add tests for empty string password handling in backups

Verify that empty string password is treated as no password on both
backup creation (not marked as protected) and restore (normalized to
None in set_password).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Improve comment

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 09:53:43 +01:00
dependabot[bot]
8749d11e13 Bump sigstore/cosign-installer from 4.0.0 to 4.1.0 (#6619)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 4.0.0 to 4.1.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](faadad0cce...ba7bc0a3fe)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 08:30:46 +01:00
dependabot[bot]
0732999ea9 Bump setuptools from 82.0.0 to 82.0.1 (#6620) 2026-03-10 07:28:04 +01:00
Mike Degatano
f6c8a68207 Deprecate advanced mode option in addon config (#6614)
* Deprecate advanced mode option in addon config

* Note deprecation of field in addon info and list APIs

* Update docstring per copilot
2026-03-09 10:26:28 +01:00
dependabot[bot]
5c35d86abe Bump release-drafter/release-drafter from 6.2.0 to 6.4.0 (#6617)
Bumps [release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) from 6.2.0 to 6.4.0.
- [Release notes](https://github.com/release-drafter/release-drafter/releases)
- [Commits](6db134d15f...6a93d82988)

---
updated-dependencies:
- dependency-name: release-drafter/release-drafter
  dependency-version: 6.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-09 09:48:30 +01:00
dependabot[bot]
38d6907377 Bump ruff from 0.15.4 to 0.15.5 (#6616)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-06 09:51:20 +01:00
Jan Čermák
b1be897439 Use Python 3.14(.3) in CI and base image (#6586)
* Use Python 3.14(.3) in CI and base image

Update base image to the latest tag using Python 3.14.3 and update Python
version in CI workflows to 3.14.

With Python 3.14, backports.zstd is no longer necessary as it's now available
in the standard library.

* Update wheels ABI in the wheels builder to cp314

* Use explicit Python fix version in GH actions

Specify Python 3.14.3 explicitly, as the setup-python action otherwise defaults
to 3.14.2 instead of 3.14.3, leading to different versions in CI and in
production.

* Update Python version references in pyproject.toml

* Fix all ruff quoted-annotation (UP037) errors

* Revert unquoting of DBus types in tests and ignore UP037 where needed
2026-03-05 21:11:25 +01:00
dependabot[bot]
80f790bf5d Bump docker/login-action from 3.7.0 to 4.0.0 (#6615)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](c94ce9fb46...b45d80f862)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-05 09:05:33 +01:00
Stefan Agner
5e1eaa9dfe Respect auto-update setting for plug-in auto-updates (#6606)
* Respect auto-update setting for plug-in auto-updates

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Also skip auto-updating plug-ins in decorator

* Raise if auto-update flag is not set and plug-in is not up to date

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 09:04:33 +01:00
Stefan Agner
9e0d3fe461 Return 401 for non-Basic Authorization headers on /auth endpoint (#6612)
aiohttp's BasicAuth.decode() raises ValueError for any non-Basic auth
method (e.g. Bearer tokens). This propagated as an unhandled exception,
causing a 500 response instead of the expected 401 Unauthorized.

Catch the ValueError in _process_basic() and raise HTTPUnauthorized with
the WWW-Authenticate realm header so clients get a proper 401 response.

Fixes SUPERVISOR-BFG

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 15:55:49 -05:00
Stefan Agner
659735d215 Guard _migrate validator against non-dict add-on configs (#6611)
The _migrate function in addons/validate.py is the first validator in the
SCHEMA_ADDON_CONFIG All() chain and was called directly with raw config data.
If a malformed add-on config file contained a non-dict value (e.g. a string),
config.get() would raise an AttributeError instead of a proper voluptuous
Invalid error, causing an unhandled exception.

Add an isinstance check at the top of _migrate to raise vol.Invalid for
non-dict inputs, letting validation fail gracefully.

Fixes SUPERVISOR-HMP

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 15:54:21 -05:00
Stefan Agner
0ef71d1dd1 Drop unsupported architectures and machines, create issue for affected apps (#6607)
* Drop unsupported architectures and machines from Supervisor

Since #5620 Supervisor no longer updates the version information on
unsupported architectures and machines. This means users can no longer
update to newer versions of Supervisor since that PR was released.
Furthermore since #6347 we also no longer build for these
architectures. With this, any code related to these architectures
becomes dead code and should be removed.

This commit removes all references to the deprecated architectures and
machines from Supervisor.

This affects the following architectures:
- armhf
- armv7
- i386

And the following machines:
- odroid-xu
- qemuarm
- qemux86
- raspberrypi
- raspberrypi2
- raspberrypi3
- raspberrypi4
- tinker

* Create issue if an app using a deprecated architecture is installed

This adds a check to the resolution system to detect if an app is
installed that uses a deprecated architecture. If so, it shows a
warning to the user and recommends uninstalling the app.

* Formally deprecate machine add-on configs as well

Not only deprecate add-on configs for unsupported architectures, but
also for unsupported machines.

* For installed add-ons architecture must always exist

Fail hard in case of missing architecture, as this is a required field
for installed add-ons. This will prevent the Supervisor from running
with an unsupported configuration and causing further issues down the
line.
2026-03-04 10:59:14 +01:00
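The resolution check from the second bullet can be sketched as below; the data shape and the "all declared architectures are deprecated" criterion are assumptions for illustration:

```python
DEPRECATED_ARCHS = {"armhf", "armv7", "i386"}

def deprecated_arch_addons(installed: dict[str, list[str]]):
    """Yield installed add-on slugs that only declare deprecated architectures."""
    for slug, archs in installed.items():
        if archs and set(archs) <= DEPRECATED_ARCHS:
            yield slug
```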
Stefan Agner
96fb26462b Fix apps build using wrong architecture for non-native arch apps (#6610)
* Fix add-on build using wrong architecture for non-native arch add-ons

When building a locally-built add-on (no image tag), the architecture
was always set to sys_arch.default (e.g. amd64 on x86_64) instead of
matching against the add-on's declared architectures. This caused an
i386-only add-on to incorrectly build as amd64.

Use sys_arch.match() against the add-on's declared arch list in all
code paths: the arch property, image name generation, BUILD_ARCH build
arg, and default base image selection.
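The idea behind matching can be sketched as follows. This is a simplified, hypothetical stand-in for Supervisor's `sys_arch.match()`, not its actual implementation:

```python
def match(platform_order: list[str], addon_archs: list[str]) -> str:
    """Return the first platform-supported arch the add-on declares.

    platform_order lists the host's supported arches by preference,
    e.g. ["amd64", "i386"] on an x86_64 machine.
    """
    for arch in platform_order:
        if arch in addon_archs:
            return arch
    raise ValueError(f"none of {addon_archs} is supported on this platform")
```

On x86_64, an i386-only add-on now resolves to `i386` instead of silently defaulting to `amd64`.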

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use CpuArch enums to fix tests

* Explicitly set _supported_arch as new list to fix tests

* Fix pytests

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 15:36:30 +01:00
Stefan Agner
2627d55873 Add default verbose timestamps for plugin logs (#6598)
* Use verbose log output for plug-ins

All three plug-ins which support logging (dns, multicast and audio)
should use the verbose log format by default to make sure the log lines
are annotated with a timestamp. Introduce a new default_verbose flag
for advanced logs.

* Use default_verbose for host logs as well

Use the new default_verbose flag for advanced logs, to make it more
explicit that we want timestamps for host logs as well.
2026-03-03 11:58:11 +01:00
dependabot[bot]
6668417e77 Bump sentry-sdk from 2.53.0 to 2.54.0 (#6609)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.53.0 to 2.54.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.53.0...2.54.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.54.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 07:58:26 +01:00
Jan Čermák
6a955527f3 Ensure dt_utc in /os/info always returns current time (#6602)
The /os/info API endpoint had been using the D-Bus property TimeUSec, which
got cached between requests, so the returned time was not always the current
time on the host system at the moment of the request. Since there is no
reason to use the D-Bus API for the time (Supervisor runs on the same machine
and time is global), simply format the current datetime object with Python
and return it in the response.

Fixes #6581
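The fix amounts to formatting the current time in Python instead of reading a cached D-Bus property; roughly (helper name is illustrative):

```python
from datetime import datetime, timezone


def current_dt_utc() -> str:
    """Return the current UTC time as an ISO 8601 string, computed per call."""
    return datetime.now(timezone.utc).isoformat()
```

Each call produces a fresh timestamp, so repeated requests to /os/info can no longer return a stale cached value.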
2026-02-27 17:59:11 +01:00
dependabot[bot]
8eb188f734 Bump actions/upload-artifact from 6.0.0 to 7.0.0 (#6599) 2026-02-27 08:22:28 +01:00
dependabot[bot]
e7e3882013 Bump ruff from 0.15.2 to 0.15.4 (#6601)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.2 to 0.15.4.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.2...0.15.4)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-27 07:53:59 +01:00
dependabot[bot]
caa2b8b486 Bump actions/download-artifact from 7.0.0 to 8.0.0 (#6600)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 7.0.0 to 8.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](37930b1c2a...70fc10c6e5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-27 07:53:31 +01:00
Stefan Agner
3bf5ea4a05 Add remaining NetworkManager device type enums (#6593) 2026-02-26 18:50:52 +01:00
Stefan Agner
7f6327e94e Handle missing Accept header in host logs (#6594)
* Handle missing Accept header in host logs

Avoid indexing request headers directly in the host advanced logs handler when Accept is absent, preventing KeyError crashes on valid requests without that header. Fixes SUPERVISOR-1939.

* Add pytest
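The pattern behind the fix is the usual one for optional headers: use `.get()` with a default instead of indexing. A hedged sketch, with a plain dict standing in for aiohttp's `request.headers` (which supports the same call), and an illustrative handler name:

```python
def pick_log_content_type(headers: dict) -> str:
    """Choose a response content type from the optional Accept header."""
    accept = headers.get("Accept", "")  # no KeyError when the header is absent
    if "application/json" in accept:
        return "application/json"
    return "text/plain"
```

A request without an Accept header now falls through to the plain-text default instead of crashing.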
2026-02-26 11:30:08 +01:00
Mike Degatano
9f00b6e34f Ensure uuid of dismissed suggestion/issue matches an existing one (#6582)
* Ensure uuid of dismissed suggestion/issue matches an existing one

* Fix lint, test and feedback issues

* Adjust existing tests and remove new ones for not found errors

* fix device access issue usage
2026-02-25 10:26:44 +01:00
Stefan Agner
7a0b2e474a Remove unused Docker config from backup metadata (#6591)
Remove the docker property and schema validation from backup metadata.
The Docker config (registry credentials, IPv6 setting) was already
dropped from backup/restore operations in #5605, but the property and
schema entry remained. Old backups with the docker key still load fine
since the schema uses extra=vol.ALLOW_EXTRA.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 09:12:05 +01:00
dependabot[bot]
b74277ced0 Bump home-assistant/builder from 2025.11.0 to 2026.02.1 (#6592)
Bumps [home-assistant/builder](https://github.com/home-assistant/builder) from 2025.11.0 to 2026.02.1.
- [Release notes](https://github.com/home-assistant/builder/releases)
- [Commits](https://github.com/home-assistant/builder/compare/2025.11.0...2026.02.1)

---
updated-dependencies:
- dependency-name: home-assistant/builder
  dependency-version: 2026.02.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-25 09:07:24 +01:00
Stefan Agner
c9a874b352 Remove RuntimeError from APIError inheritance (#6588)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 22:46:16 +01:00
Stefan Agner
3de2deaf02 Bump securetar to 2026.2.0 (#6575)
* Bump securetar from 2025.12.0 to 2026.2.0

Adapt to the new securetar API:
- Use SecureTarArchive for outer backup tar (replaces SecureTarFile
  with gzip=False for the outer container)
- create_inner_tar() renamed to create_tar(), password now inherited
  from the archive rather than passed per inner tar
- SecureTarFile no longer accepts a mode parameter (read-only by
  default, InnerSecureTarFile for writing)
- Pass create_version=2 to keep protected backups at version 2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Reformat imports

* Rename _create_cleanup to _create_finalize and update docstring

* Use constant for SecureTar create version

* Add test for SecureTarReadError in validate_backup

securetar >= 2026.2.0 raises SecureTarReadError instead of
tarfile.ReadError for invalid passwords. Catching this exception
and raising BackupInvalidError is required so Core shows the
encryption key dialog to the user.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Handle InvalidPasswordError for v3 backups

* Address typos

* Add securetar v3 encrypted password test fixture

Add a test fixture for a securetar v3 encrypted backup with password.
This will be used in the test suite to verify that the backup
extraction process correctly handles encrypted backups.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 13:08:14 +01:00
dependabot[bot]
c79e58d584 Bump pylint from 4.0.4 to 4.0.5 (#6584)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.4...v4.0.5)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-23 10:09:38 +01:00
Stefan Agner
6070d54860 Harden backup tar extraction with Python tar_filter (#6559)
* Harden backup tar extraction with Python data filter

Replace filter="fully_trusted" with a custom backup_data_filter that
wraps tarfile.data_filter. This adds protection against symlink attacks
(absolute targets, destination escapes), device node injection, and
path traversal, while resetting uid/gid and sanitizing permissions.

Unlike using data_filter directly, the custom filter skips problematic
entries with a warning instead of aborting the entire extraction. This
ensures existing backups containing absolute symlinks (e.g. in shared
folders) still restore successfully with the dangerous entries omitted.

Also removes the now-redundant secure_path member filtering, as
data_filter is a strict superset of its protections. Fixes a standalone
bug in _folder_restore which had no member filtering at all.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify security tests to test backup_data_filter directly

Test the public backup_data_filter function with plain tarfile
extraction instead of going through Backup internals. Removes
protected-access pylint warnings and unnecessary coresys setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Switch to tar filter instead of custom data filter wrapper

Replace backup_data_filter (which wrapped data_filter and skipped
problematic entries) with the built-in tar filter. The tar filter
rejects path traversal and absolute names while preserving uid/gid
and file permissions, which is important for add-ons running as
non-root users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Erik Montnemery <erik@montnemery.com>

* Use BackupInvalidError instead of BackupError for tarfile.TarError

Make sure FilterErrors lead to BackupInvalidError instead of BackupError,
as they are not related to the backup process itself but rather to the
integrity of the backup data.

* Improve test coverage and use pytest.raises

* Only make FilterError a BackupInvalidError

* Add test case for FilterError during Home Assistant Core restore

* Add test cases for Add-ons

* Fix pylint warnings

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Erik Montnemery <erik@montnemery.com>
2026-02-23 10:09:19 +01:00
dependabot[bot]
03e110cb86 Bump ruff from 0.15.1 to 0.15.2 (#6583)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.1 to 0.15.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.1...0.15.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-20 10:12:25 +01:00
Mike Degatano
4a1c816b92 Finish dockerpy to aiodocker migration (#6578) 2026-02-18 08:49:15 +01:00
Mike Degatano
b70f44bf1f Bump aiodocker from 0.25.0 to 0.26.0 (#6577) 2026-02-17 14:26:01 -05:00
Stefan Agner
c981b3b4c2 Extend and improve release drafter config (#6576)
* Extend and improve release drafter config

Extend the release drafter config with more types (labels) and order
them by priority. Inspired by conventional commits, in particular the
list (including its order) documented at:
https://github.com/pvdlg/conventional-changelog-metahub?tab=readme-ov-file#commit-types

Additionally, we kept the "breaking-change" and "dependencies" labels.

* Add revert to the list of labels
2026-02-17 19:32:25 +01:00
Stefan Agner
f2d0ceab33 Add missing WIFI_P2P device type to NetworkManager enum (#6574)
Add the missing WIFI_P2P (30) entry to the DeviceType NetworkManager
enum. Without it, systems with a Wi-Fi P2P interface log a warning:

  Unknown DeviceType value received from D-Bus: 30

Closes #6573

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 13:13:02 -05:00
Stefan Agner
3147d080a2 Unify Core user handling with HomeAssistantUser model (#6558)
* Unify Core user listing with HomeAssistantUser model

Replace the ingress-specific IngressSessionDataUser with a general
HomeAssistantUser dataclass that models the Core config/auth/list WS
response. This deduplicates the WS call (previously in both auth.py
and module.py) into a single HomeAssistant.list_users() method.

- Add HomeAssistantUser dataclass with fields matching Core's user API
- Remove get_users() and its unnecessary 5-minute Job throttle
- Auth and ingress consumers both use HomeAssistant.list_users()
- Auth API endpoint uses typed attribute access instead of dict keys
- Migrate session serialization from legacy "displayname" to "name"
- Accept both keys in schema/deserialization for backwards compat
- Add test for loading persisted sessions with legacy displayname key
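Accepting both keys during deserialization can be sketched as follows; the field names are illustrative, not Supervisor's exact session schema:

```python
def deserialize_session_user(raw: dict) -> dict:
    """Prefer the new "name" key, falling back to the legacy "displayname"."""
    return {
        "id": raw.get("id"),
        "name": raw.get("name") or raw.get("displayname"),
    }
```

Persisted sessions written before the migration still load, while new sessions only ever serialize "name".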

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Tighten list_users() to trust Core's auth/list contract

Core's config/auth/list WS command always returns a list, never None.
Replace the silent `if not raw: return []` (which also swallowed empty
lists) with an assert, remove the dead AuthListUsersNoneResponseError
exception class, and document the HomeAssistantWSError contract in the
docstring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove | None from async_send_command return type

The WebSocket result is always set from data["result"] in _receive_json,
never explicitly to None. Remove the misleading | None from the return
type of both WSClient and HomeAssistantWebSocket async_send_command, and
drop the now-unnecessary assert in list_users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use HomeAssistantWSConnectionError in _ensure_connected

_ensure_connected and connect_with_auth raise on connection-level
failures, so use the more specific HomeAssistantWSConnectionError
instead of the broad HomeAssistantWSError. This allows callers to
distinguish connection errors from Core API errors (e.g. unsuccessful
WebSocket command responses). Also document that _ensure_connected can
propagate HomeAssistantAuthError from ensure_access_token.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove user list cache from _find_user_by_id

Drop the _list_of_users cache to avoid stale auth data in ingress
session creation. The method now fetches users fresh each time and
returns None on any API error instead of serving potentially outdated
cached results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 18:31:08 +01:00
dependabot[bot]
09a4e9d5a2 Bump actions/stale from 10.1.1 to 10.2.0 (#6571)
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.1 to 10.2.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](997185467f...b5d41d4e1d)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:36:56 +01:00
dependabot[bot]
d93e728918 Bump sentry-sdk from 2.52.0 to 2.53.0 (#6572)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.52.0 to 2.53.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.52.0...2.53.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:34:12 +01:00
c0ffeeca7
27c6af4b4b App store strings: rename add-on to app (#6569) 2026-02-16 09:20:53 +01:00
Stefan Agner
00f2578d61 Add missing BRIDGE device type to NetworkManager enum (#6567)
NMDeviceType 13 (NM_DEVICE_TYPE_BRIDGE) was not listed in the
DeviceType enum, causing a warning when NetworkManager reported
a bridge interface.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 10:25:15 -05:00
229 changed files with 10242 additions and 5994 deletions

View File

@@ -1,6 +1,8 @@
{
"name": "Supervisor dev",
"image": "ghcr.io/home-assistant/devcontainer:2-supervisor",
"image": "ghcr.io/home-assistant/devcontainer:6-supervisor",
"overrideCommand": false,
"remoteUser": "vscode",
"containerEnv": {
"WORKSPACE_DIRECTORY": "${containerWorkspaceFolder}"
},
@@ -17,10 +19,10 @@
"charliermarsh.ruff",
"ms-python.pylint",
"ms-python.vscode-pylance",
"visualstudioexptteam.vscodeintellicode",
"redhat.vscode-yaml",
"esbenp.prettier-vscode",
"GitHub.vscode-pull-request-github"
"GitHub.vscode-pull-request-github",
"GitHub.copilot"
],
"settings": {
"python.defaultInterpreterPath": "/home/vscode/.local/ha-venv/bin/python",
@@ -46,6 +48,8 @@
},
"mounts": [
"type=volume,target=/var/lib/docker",
"type=volume,target=/mnt/supervisor"
"type=volume,target=/var/lib/containerd",
"type=volume,target=/mnt/supervisor",
"type=tmpfs,target=/tmp"
]
}

View File

@@ -91,13 +91,15 @@ availability.
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- **Compatibility**: Python 3.14+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
- Parenthesis-free `except` clauses with comma-separated exceptions
(e.g., `except KeyError, TypeError:`) — available since Python 3.14
### Code Quality Standards

View File

@@ -5,45 +5,53 @@ categories:
- title: ":boom: Breaking Changes"
label: "breaking-change"
- title: ":wrench: Build"
label: "build"
- title: ":boar: Chore"
label: "chore"
- title: ":sparkles: New Features"
label: "new-feature"
- title: ":zap: Performance"
label: "performance"
- title: ":recycle: Refactor"
label: "refactor"
- title: ":green_heart: CI"
label: "ci"
- title: ":bug: Bug Fixes"
label: "bugfix"
- title: ":white_check_mark: Test"
- title: ":gem: Style"
label: "style"
- title: ":package: Refactor"
label: "refactor"
- title: ":rocket: Performance"
label: "performance"
- title: ":rotating_light: Test"
label: "test"
- title: ":hammer_and_wrench: Build"
label: "build"
- title: ":gear: CI"
label: "ci"
- title: ":recycle: Chore"
label: "chore"
- title: ":wastebasket: Revert"
label: "revert"
- title: ":arrow_up: Dependency Updates"
label: "dependencies"
collapse-after: 1
include-labels:
- "breaking-change"
- "build"
- "chore"
- "performance"
- "refactor"
- "new-feature"
- "bugfix"
- "dependencies"
- "style"
- "refactor"
- "performance"
- "test"
- "build"
- "ci"
- "chore"
- "revert"
- "dependencies"
template: |

View File

@@ -25,20 +25,20 @@ on:
push:
branches: ["main"]
paths:
- ".github/workflows/builder.yml"
- "rootfs/**"
- "supervisor/**"
- build.yaml
- Dockerfile
- requirements.txt
- setup.py
env:
DEFAULT_PYTHON: "3.13"
DEFAULT_PYTHON: "3.14.3"
COSIGN_VERSION: "v2.5.3"
CRANE_VERSION: "v0.20.7"
CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
IMAGE_NAME: hassio-supervisor
ARCHITECTURES: '["amd64", "aarch64"]'
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
@@ -49,21 +49,17 @@ jobs:
name: Initialize build
runs-on: ubuntu-latest
outputs:
architectures: ${{ steps.info.outputs.architectures }}
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.version.outputs.channel }}
publish: ${{ steps.version.outputs.publish }}
build_wheels: ${{ steps.requirements.outputs.build_wheels }}
matrix: ${{ steps.matrix.outputs.matrix }}
steps:
- name: Checkout the repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Get information
id: info
uses: home-assistant/actions/helpers/info@master
- name: Get version
id: version
uses: home-assistant/actions/helpers/version@master
@@ -73,7 +69,7 @@ jobs:
- name: Get changed files
id: changed_files
if: github.event_name == 'pull_request' || github.event_name == 'push'
uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
uses: masesgroup/retrieve-changed-files@45a8b3b496d2d6037cbd553e8a3450989b9384a2 # v4.0.0
- name: Check if requirements files changed
id: requirements
@@ -84,29 +80,31 @@ jobs:
# Always build wheels for manual dispatches
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|build\.yaml|\.github/workflows/builder\.yml) ]]; then
elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|\.github/workflows/builder\.yml) ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
else
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
fi
- name: Get build matrix
id: matrix
uses: home-assistant/builder/actions/prepare-multi-arch-matrix@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
architectures: ${{ env.ARCHITECTURES }}
image-name: ${{ env.IMAGE_NAME }}
build:
name: Build ${{ matrix.arch }} supervisor
needs: init
runs-on: ${{ matrix.runs-on }}
runs-on: ${{ matrix.os }}
permissions:
contents: read
id-token: write
packages: write
strategy:
matrix:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
include:
- runs-on: ubuntu-24.04
- runs-on: ubuntu-24.04-arm
arch: aarch64
matrix: ${{ fromJSON(needs.init.outputs.matrix) }}
env:
WHEELS_ABI: cp313
WHEELS_ABI: cp314
WHEELS_TAG: musllinux_1_2
WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
WHEELS_SKIP_BINARY: aiohttp
@@ -155,18 +153,12 @@ jobs:
- name: Upload local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: wheels-${{ matrix.arch }}
path: wheels
retention-days: 1
- name: Set version
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version@master
with:
type: ${{ env.BUILD_TYPE }}
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
@@ -175,7 +167,7 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: ${{ env.COSIGN_VERSION }}
@@ -191,41 +183,49 @@ jobs:
run: |
cosign sign-blob --yes rootfs/supervisor.sha256 --bundle rootfs/supervisor.sha256.sig
- name: Login to GitHub Container Registry
if: needs.init.outputs.publish == 'true'
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set build arguments
if: needs.init.outputs.publish == 'false'
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.11.0
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
image: ${{ matrix.arch }}
args: |
$BUILD_ARGS \
--${{ matrix.arch }} \
--target /data \
--cosign \
--generic ${{ needs.init.outputs.version }}
arch: ${{ matrix.arch }}
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
cosign-base-identity: 'https://github.com/home-assistant/docker-base/.*'
cosign-base-verify: ghcr.io/home-assistant/base-python:3.14-alpine3.22
image: ${{ matrix.image }}
image-tags: |
${{ needs.init.outputs.version }}
latest
push: ${{ needs.init.outputs.publish == 'true' }}
version: ${{ needs.init.outputs.version }}
manifest:
name: Publish multi-arch manifest
needs: ["init", "build"]
if: needs.init.outputs.publish == 'true'
runs-on: ubuntu-latest
permissions:
id-token: write
packages: write
steps:
- name: Publish multi-arch manifest
uses: home-assistant/builder/actions/publish-multi-arch-manifest@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
architectures: ${{ env.ARCHITECTURES }}
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
image-name: ${{ env.IMAGE_NAME }}
image-tags: |
${{ needs.init.outputs.version }}
latest
version:
name: Update version
needs: ["init", "run_supervisor", "retag_deprecated"]
if: github.repository_owner == 'home-assistant' && needs.init.outputs.publish == 'true'
needs: ["init", "run_supervisor"]
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Initialize git
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/git-init@master
with:
name: ${{ secrets.GIT_NAME }}
@@ -233,7 +233,6 @@ jobs:
token: ${{ secrets.GIT_TOKEN }}
- name: Update version file
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version-push@master
with:
key: ${{ env.BUILD_NAME }}
@@ -251,22 +250,24 @@ jobs:
- name: Download local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: wheels-amd64
path: wheels
# home-assistant/builder doesn't support sha pinning
# Build the Supervisor for non-publish runs (e.g. PRs)
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.11.0
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
with:
args: |
--test \
--amd64 \
--target /data \
--generic runner
arch: amd64
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
image: ghcr.io/home-assistant/amd64-hassio-supervisor
image-tags: runner
load: true
version: ${{ needs.init.outputs.version }}
# Pull the Supervisor for publish runs to test the published image
- name: Pull Supervisor
if: needs.init.outputs.publish == 'true'
run: |
@@ -280,9 +281,10 @@ jobs:
--privileged \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
-v /run/docker.sock:/run/docker.sock \
-v /run/dbus:/run/dbus \
-v /tmp/supervisor/data:/data \
-v /run/docker.sock:/run/docker.sock:rw \
-v /run/dbus:/run/dbus:ro \
-v /run/supervisor:/run/os:rw \
-v /tmp/supervisor/data:/data:rw,slave \
-v /etc/machine-id:/etc/machine-id:ro \
-e SUPERVISOR_SHARE="/tmp/supervisor/data" \
-e SUPERVISOR_NAME=hassio_supervisor \
@@ -450,50 +452,3 @@ jobs:
- name: Get supervisor logs on failure
if: ${{ cancelled() || failure() }}
run: docker logs hassio_supervisor
retag_deprecated:
needs: ["build", "init"]
name: Re-tag deprecated ${{ matrix.arch }} images
if: needs.init.outputs.publish == 'true'
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
packages: write
strategy:
matrix:
arch: ["armhf", "armv7", "i386"]
env:
# Last available release for deprecated architectures
FROZEN_VERSION: "2025.11.5"
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install Cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: ${{ env.COSIGN_VERSION }}
- name: Install crane
run: |
curl -sLO https://github.com/google/go-containerregistry/releases/download/${{ env.CRANE_VERSION }}/go-containerregistry_Linux_x86_64.tar.gz
echo "${{ env.CRANE_SHA256 }} go-containerregistry_Linux_x86_64.tar.gz" | sha256sum -c -
tar xzf go-containerregistry_Linux_x86_64.tar.gz crane
sudo mv crane /usr/local/bin/
- name: Re-tag deprecated image with updated version label
run: |
crane auth login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.GITHUB_TOKEN }}
crane mutate \
--label io.hass.version=${{ needs.init.outputs.version }} \
--tag ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }} \
ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ env.FROZEN_VERSION }}
- name: Sign image with Cosign
run: |
cosign sign --yes ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }}

View File

@@ -8,7 +8,7 @@ on:
pull_request: ~
env:
DEFAULT_PYTHON: "3.13"
DEFAULT_PYTHON: "3.14.3"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
@@ -34,7 +34,7 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -48,7 +48,7 @@ jobs:
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -76,7 +76,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -88,7 +88,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -119,7 +119,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -131,7 +131,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -177,7 +177,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -189,7 +189,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -221,7 +221,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -233,7 +233,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -265,7 +265,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -307,7 +307,7 @@ jobs:
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: >-
@@ -318,7 +318,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: .mypy_cache
key: >-
@@ -346,12 +346,12 @@ jobs:
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
-uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
+uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
with:
cosign-release: "v2.5.3"
- name: Restore Python virtual environment
id: cache-venv
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: coverage
path: .coverage
@@ -406,7 +406,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
-uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: venv
key: |
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
-uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: coverage
path: coverage/
@@ -428,4 +428,4 @@ jobs:
coverage report
coverage xml
- name: Upload coverage to Codecov
-uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
+uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0


@@ -36,7 +36,7 @@ jobs:
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
-uses: release-drafter/release-drafter@6db134d15f3909ccc9eefd369f02bd1e9cffdf97 # v6.2.0
+uses: release-drafter/release-drafter@5de93583980a40bd78603b6dfdcda5b4df377b32 # v7.2.0
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}


@@ -12,7 +12,7 @@ jobs:
if: github.event.issue.type.name == 'Task'
steps:
- name: Check if user is authorized
-uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0
with:
script: |
const issueAuthor = context.payload.issue.user.login;


@@ -12,7 +12,7 @@ jobs:
- name: Check out code from GitHub
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Sentry Release
-uses: getsentry/action-release@dab6548b3c03c4717878099e43782cf5be654289 # v3.5.0
+uses: getsentry/action-release@5657c9e888b4e2cc85f4d29143ea4131fde4a73a # v3.6.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}


@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
-- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
+- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30


@@ -1,4 +1,4 @@
-ARG BUILD_FROM
+ARG BUILD_FROM=ghcr.io/home-assistant/base-python:3.14-alpine3.22-2026.03.1
FROM ${BUILD_FROM}
ENV \
@@ -22,7 +22,7 @@ RUN \
openssl \
yaml \
\
-&& pip3 install uv==0.9.18
+&& pip3 install uv==0.10.9
# Install requirements
RUN \
@@ -40,11 +40,22 @@ RUN \
${LOCAL_WHEELS:+--find-links $LOCAL_WHEELS}
# Install Home Assistant Supervisor
+ARG BUILD_VERSION="9999.09.9.dev9999"
COPY . supervisor
RUN \
-uv pip install --no-cache -e ./supervisor \
+sed -i "s/^SUPERVISOR_VERSION =.*/SUPERVISOR_VERSION = \"${BUILD_VERSION}\"/g" /usr/src/supervisor/supervisor/const.py \
+&& uv pip install --no-cache -e ./supervisor \
&& python3 -m compileall ./supervisor/supervisor
WORKDIR /
COPY rootfs /
+LABEL \
+io.hass.type="supervisor" \
+org.opencontainers.image.title="Home Assistant Supervisor" \
+org.opencontainers.image.description="Container-based system for managing Home Assistant Core installation" \
+org.opencontainers.image.authors="The Home Assistant Authors" \
+org.opencontainers.image.url="https://www.home-assistant.io/" \
+org.opencontainers.image.documentation="https://www.home-assistant.io/docs/" \
+org.opencontainers.image.licenses="Apache License 2.0"

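The Dockerfile change above stamps the build version into `const.py` with `sed` before installing the package. A minimal Python sketch of the same substitution (hypothetical `stamp_version` helper, not part of the repository), useful for reasoning about what the sed expression matches:

```python
import re

def stamp_version(const_text: str, build_version: str) -> str:
    # Replace the whole SUPERVISOR_VERSION assignment line, anchored at the
    # start of a line, mirroring the sed expression in the Dockerfile.
    return re.sub(
        r"^SUPERVISOR_VERSION =.*$",
        f'SUPERVISOR_VERSION = "{build_version}"',
        const_text,
        flags=re.MULTILINE,
    )
```

Because the pattern is anchored with `^` and `re.MULTILINE`, only the assignment line is rewritten; the rest of `const.py` is untouched.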

@@ -1,16 +0,0 @@
-image: ghcr.io/home-assistant/{arch}-hassio-supervisor
-build_from:
-aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.12.2
-amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.12.2
-cosign:
-base_identity: https://github.com/home-assistant/docker-base/.*
-identity: https://github.com/home-assistant/supervisor/.*
-labels:
-io.hass.type: supervisor
-org.opencontainers.image.title: Home Assistant Supervisor
-org.opencontainers.image.description: Container-based system for managing Home Assistant Core installation
-org.opencontainers.image.source: https://github.com/home-assistant/supervisor
-org.opencontainers.image.authors: The Home Assistant Authors
-org.opencontainers.image.url: https://www.home-assistant.io/
-org.opencontainers.image.documentation: https://www.home-assistant.io/docs/
-org.opencontainers.image.licenses: Apache License 2.0


@@ -4,8 +4,11 @@ coverage:
status:
project:
default:
-target: 40
-threshold: 0.09
+target: auto
+threshold: 1
+patch:
+default:
+target: 80
comment: false
github_checks:
annotations: false


@@ -1,5 +1,5 @@
[build-system]
-requires = ["setuptools~=82.0.0", "wheel~=0.46.1"]
+requires = ["setuptools~=82.0.0"]
build-backend = "setuptools.build_meta"
[project]
@@ -12,7 +12,7 @@ authors = [
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
-requires-python = ">=3.13.0"
+requires-python = ">=3.14.0"
[project.urls]
"Homepage" = "https://www.home-assistant.io/"
@@ -31,7 +31,7 @@ include-package-data = true
include = ["supervisor*"]
[tool.pylint.MAIN]
-py-version = "3.13"
+py-version = "3.14"
# Use a conservative default here; 2 should speed up most setups and not hurt
# any too bad. Override on command line as appropriate.
jobs = 2
@@ -208,6 +208,9 @@ score = false
[tool.pylint.TYPECHECK]
ignored-modules = ["distutils"]
+# re.Pattern methods are C extension methods; pylint cannot detect them when
+# re.Pattern is used as a dataclass field type annotation (false positive).
+generated-members = ["re.Pattern.*"]
[tool.pylint.FORMAT]
expected-line-ending-format = "LF"
@@ -368,7 +371,7 @@ split-on-trailing-comma = false
[tool.ruff.lint.per-file-ignores]
# DBus Service Mocks must use typing and names understood by dbus-fast
-"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815"]
+"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815", "UP037"]
[tool.ruff.lint.mccabe]
max-complexity = 25


@@ -1,32 +1,29 @@
aiodns==4.0.0
-aiodocker==0.25.0
-aiohttp==3.13.3
+aiodocker==0.26.0
+aiohttp==3.13.5
atomicwrites-homeassistant==1.4.1
-attrs==25.4.0
+attrs==26.1.0
awesomeversion==25.8.0
backports.zstd==1.3.0
blockbuster==1.5.26
brotli==1.2.0
ciso8601==2.3.3
colorlog==6.10.1
cpe==1.3.1
-cryptography==46.0.5
+cryptography==46.0.7
debugpy==1.8.20
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
-gitpython==3.1.46
+gitpython==3.1.47
jinja2==3.1.6
log-rate-limit==1.4.2
-orjson==3.11.7
+orjson==3.11.8
pulsectl==24.12.0
pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
-securetar==2025.12.0
-sentry-sdk==2.52.0
-setuptools==82.0.0
+securetar==2026.4.1
+sentry-sdk==2.58.0
+setuptools==82.0.1
voluptuous==0.16.0
-dbus-fast==4.0.0
+dbus-fast==4.0.4
zlib-fast==0.2.1


@@ -1,16 +1,14 @@
astroid==4.0.3
-coverage==7.13.4
-mypy==1.19.1
-pre-commit==4.5.1
-pylint==4.0.4
+coverage==7.13.5
+mypy==1.20.2
+pre-commit==4.6.0
+pylint==4.0.5
pytest-aiohttp==1.1.0
pytest-asyncio==1.3.0
-pytest-cov==7.0.0
+pytest-cov==7.1.0
pytest-timeout==2.4.0
-pytest==9.0.2
-ruff==0.15.1
+pytest==9.0.3
+ruff==0.15.11
time-machine==3.2.0
-types-docker==7.1.0.20260109
-types-pyyaml==6.0.12.20250915
-types-requests==2.32.4.20260107
+types-pyyaml==6.0.12.20260408
urllib3==2.6.3


@@ -5,7 +5,9 @@ import re
from setuptools import setup
-RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
+RE_SUPERVISOR_VERSION = re.compile(
+r'^SUPERVISOR_VERSION =\s*"?((?P<git_sha>[0-9a-f]{40})|[^"]+)"?$'
+)
SUPERVISOR_DIR = Path(__file__).parent
REQUIREMENTS_FILE = SUPERVISOR_DIR / "requirements.txt"
@@ -16,13 +18,15 @@ CONSTANTS = CONST_FILE.read_text(encoding="utf-8")
def _get_supervisor_version():
-for line in CONSTANTS.split("/n"):
+for line in CONSTANTS.split("\n"):
if match := RE_SUPERVISOR_VERSION.match(line):
+if git_sha := match.group("git_sha"):
+return f"9999.09.9.dev9999+{git_sha}"
return match.group(1)
return "9999.09.9.dev9999"
setup(
version=_get_supervisor_version(),
-dependencies=REQUIREMENTS.split("/n"),
+dependencies=REQUIREMENTS.split("\n"),
)

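The setup.py change above tightens the version regex so a 40-hex git SHA placeholder in `const.py` maps to a dev version. A self-contained sketch of the updated parsing logic (the regex is taken from the diff; `get_version` is an illustrative stand-in for `_get_supervisor_version`):

```python
import re

# Regex from the updated setup.py: matches either a 40-hex git SHA
# placeholder or an ordinary release version string.
RE_SUPERVISOR_VERSION = re.compile(
    r'^SUPERVISOR_VERSION =\s*"?((?P<git_sha>[0-9a-f]{40})|[^"]+)"?$'
)

def get_version(constants: str) -> str:
    for line in constants.split("\n"):
        if match := RE_SUPERVISOR_VERSION.match(line):
            # A bare git SHA means a development build
            if git_sha := match.group("git_sha"):
                return f"9999.09.9.dev9999+{git_sha}"
            return match.group(1)
    return "9999.09.9.dev9999"
```

This also shows why the `"/n"` → `"\n"` fix in the same diff matters: with the literal string `"/n"`, `split` never produced individual lines, so the regex could only ever match a single-line input.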

@@ -1 +1 @@
-"""Init file for Supervisor add-ons."""
+"""Init file for Supervisor apps."""

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
-"""Supervisor add-on build environment."""
+"""Supervisor app build environment."""
from __future__ import annotations
@@ -7,9 +7,10 @@ from functools import cached_property
import json
import logging
from pathlib import Path, PurePath
-from typing import TYPE_CHECKING, Any
+from typing import TYPE_CHECKING, Any, Self
from awesomeversion import AwesomeVersion
import voluptuous as vol
from ..const import (
ATTR_ARGS,
@@ -19,7 +20,13 @@ from ..const import (
ATTR_SQUASH,
ATTR_USERNAME,
FILE_SUFFIX_CONFIGURATION,
-META_ADDON,
+LABEL_ARCH,
+LABEL_DESCRIPTION,
+LABEL_NAME,
+LABEL_TYPE,
+LABEL_URL,
+LABEL_VERSION,
+META_APP,
SOCKET_DOCKER,
CpuArch,
)
@@ -27,98 +34,128 @@ from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY, DockerMount, MountType
from ..docker.interface import MAP_ARCH
from ..exceptions import (
-AddonBuildArchitectureNotSupportedError,
-AddonBuildDockerfileMissingError,
+AppBuildArchitectureNotSupportedError,
+AppBuildDockerfileMissingError,
ConfigurationFileError,
HassioArchNotFound,
)
-from ..utils.common import FileConfiguration, find_one_filetype
+from ..utils.common import find_one_filetype, read_json_or_yaml_file
from .validate import SCHEMA_BUILD_CONFIG
if TYPE_CHECKING:
-from .manager import AnyAddon
+from .manager import AnyApp
_LOGGER: logging.Logger = logging.getLogger(__name__)
-class AddonBuild(FileConfiguration, CoreSysAttributes):
-"""Handle build options for add-ons."""
+class AppBuild(CoreSysAttributes):
+"""Handle build options for apps."""
-def __init__(self, coresys: CoreSys, addon: AnyAddon) -> None:
-"""Initialize Supervisor add-on builder."""
+def __init__(self, coresys: CoreSys, app: AnyApp, data: dict[str, Any]) -> None:
+"""Initialize Supervisor app builder."""
self.coresys: CoreSys = coresys
-self.addon = addon
+self.app = app
+self._build_config: dict[str, Any] = data
-# Search for build file later in executor
-super().__init__(None, SCHEMA_BUILD_CONFIG)
+@classmethod
+async def create(cls, coresys: CoreSys, app: AnyApp) -> Self:
+"""Create an AppBuild by reading the build configuration from disk."""
+data = await coresys.run_in_executor(cls._read_build_config, app)
-def _get_build_file(self) -> Path:
-"""Get build file.
+if data:
+_LOGGER.warning(
+"App %s uses build.yaml which is deprecated. "
+"Move build parameters into the Dockerfile directly.",
+app.slug,
+)
+if data[ATTR_SQUASH]:
+_LOGGER.warning(
+"Ignoring squash build option for %s as Docker BuildKit"
+" does not support it.",
+app.slug,
+)
+return cls(coresys, app, data or {})
+@staticmethod
+def _read_build_config(app: AnyApp) -> dict[str, Any] | None:
+"""Find and read the build configuration file.
Must be run in executor.
"""
try:
-return find_one_filetype(
-self.addon.path_location, "build", FILE_SUFFIX_CONFIGURATION
+build_file = find_one_filetype(
+app.path_location, "build", FILE_SUFFIX_CONFIGURATION
)
except ConfigurationFileError:
-return self.addon.path_location / "build.json"
+# No build config file found, assuming modernized build
+return None
-async def read_data(self) -> None:
-"""Load data from file."""
-if not self._file:
-self._file = await self.sys_run_in_executor(self._get_build_file)
+try:
+raw = read_json_or_yaml_file(build_file)
+build_config = SCHEMA_BUILD_CONFIG(raw)
+except ConfigurationFileError as ex:
+_LOGGER.exception(
+"Error reading %s build config (%s), using defaults",
+app.slug,
+ex,
+)
+build_config = SCHEMA_BUILD_CONFIG({})
+except vol.Invalid as ex:
+_LOGGER.warning(
+"Error parsing %s build config (%s), using defaults", app.slug, ex
+)
+build_config = SCHEMA_BUILD_CONFIG({})
-await super().read_data()
+# Default base image is passed in BUILD_FROM only when build.yaml is used
+# (this is legacy behavior - without build config, Dockerfile should specify it)
+if not build_config[ATTR_BUILD_FROM]:
+build_config[ATTR_BUILD_FROM] = "ghcr.io/home-assistant/base:latest"
-async def save_data(self):
-"""Ignore save function."""
-raise RuntimeError()
+return build_config
@cached_property
def arch(self) -> CpuArch:
-"""Return arch of the add-on."""
-return self.sys_arch.match([self.addon.arch])
+"""Return arch of the app."""
+return self.sys_arch.match([self.app.arch])
@property
-def base_image(self) -> str:
-"""Return base image for this add-on."""
-if not self._data[ATTR_BUILD_FROM]:
-return f"ghcr.io/home-assistant/{self.sys_arch.default!s}-base:latest"
+def base_image(self) -> str | None:
+"""Return base image for this app, or None to use Dockerfile default."""
+# No build config (otherwise default is coerced when reading the config)
+if not self._build_config.get(ATTR_BUILD_FROM):
+return None
-if isinstance(self._data[ATTR_BUILD_FROM], str):
-return self._data[ATTR_BUILD_FROM]
+# Single base image in build config
+if isinstance(self._build_config[ATTR_BUILD_FROM], str):
+return self._build_config[ATTR_BUILD_FROM]
-# Evaluate correct base image
-if self.arch not in self._data[ATTR_BUILD_FROM]:
+# Dict - per-arch base images in build config
+if self.arch not in self._build_config[ATTR_BUILD_FROM]:
raise HassioArchNotFound(
-f"Add-on {self.addon.slug} is not supported on {self.arch}"
+f"App {self.app.slug} is not supported on {self.arch}"
)
-return self._data[ATTR_BUILD_FROM][self.arch]
-@property
-def squash(self) -> bool:
-"""Return True or False if squash is active."""
-return self._data[ATTR_SQUASH]
+return self._build_config[ATTR_BUILD_FROM][self.arch]
@property
def additional_args(self) -> dict[str, str]:
"""Return additional Docker build arguments."""
-return self._data[ATTR_ARGS]
+return self._build_config.get(ATTR_ARGS, {})
@property
def additional_labels(self) -> dict[str, str]:
"""Return additional Docker labels."""
-return self._data[ATTR_LABELS]
+return self._build_config.get(ATTR_LABELS, {})
def get_dockerfile(self) -> Path:
"""Return Dockerfile path.
Must be run in executor.
"""
-if self.addon.path_location.joinpath(f"Dockerfile.{self.arch}").exists():
-return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
-return self.addon.path_location.joinpath("Dockerfile")
+if self.app.path_location.joinpath(f"Dockerfile.{self.arch}").exists():
+return self.app.path_location.joinpath(f"Dockerfile.{self.arch}")
+return self.app.path_location.joinpath("Dockerfile")
async def is_valid(self) -> None:
"""Return true if the build env is valid."""
@@ -126,67 +163,55 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
def build_is_valid() -> bool:
return all(
[
-self.addon.path_location.is_dir(),
+self.app.path_location.is_dir(),
self.get_dockerfile().is_file(),
]
)
try:
if not await self.sys_run_in_executor(build_is_valid):
-raise AddonBuildDockerfileMissingError(
-_LOGGER.error, addon=self.addon.slug
-)
+raise AppBuildDockerfileMissingError(_LOGGER.error, app=self.app.slug)
except HassioArchNotFound:
-raise AddonBuildArchitectureNotSupportedError(
+raise AppBuildArchitectureNotSupportedError(
_LOGGER.error,
-addon=self.addon.slug,
-addon_arch_list=self.addon.supported_arch,
+app=self.app.slug,
+app_arch_list=self.app.supported_arch,
system_arch_list=[arch.value for arch in self.sys_arch.supported],
) from None
+def _registry_key(self, registry: str) -> str:
+"""Return the Docker config.json key for a registry."""
+if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
+return "https://index.docker.io/v1/"
+return registry
+def _registry_auth(self, registry: str) -> str:
+"""Return base64-encoded auth string for a registry."""
+stored = self.sys_docker.config.registries[registry]
+return base64.b64encode(
+f"{stored[ATTR_USERNAME]}:{stored[ATTR_PASSWORD]}".encode()
+).decode()
def get_docker_config_json(self) -> str | None:
-"""Generate Docker config.json content with registry credentials for base image.
-Returns a JSON string with registry credentials for the base image's registry,
-or None if no matching registry is configured.
-Raises:
-HassioArchNotFound: If the add-on is not supported on the current architecture.
+"""Generate Docker config.json content with all configured registry credentials.
+Returns a JSON string with registry credentials, or None if no registries
+are configured.
"""
-# Early return before accessing base_image to avoid unnecessary arch lookup
if not self.sys_docker.config.registries:
return None
-registry = self.sys_docker.config.get_registry_for_image(self.base_image)
-if not registry:
-return None
-stored = self.sys_docker.config.registries[registry]
-username = stored[ATTR_USERNAME]
-password = stored[ATTR_PASSWORD]
-# Docker config.json uses base64-encoded "username:password" for auth
-auth_string = base64.b64encode(f"{username}:{password}".encode()).decode()
-# Use the actual registry URL for the key
-# Docker Hub uses "https://index.docker.io/v1/" as the key
-# Support both docker.io (official) and hub.docker.com (legacy)
-registry_key = (
-"https://index.docker.io/v1/"
-if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY)
-else registry
-)
-config = {"auths": {registry_key: {"auth": auth_string}}}
-return json.dumps(config)
+auths = {
+self._registry_key(registry): {"auth": self._registry_auth(registry)}
+for registry in self.sys_docker.config.registries
+}
+return json.dumps({"auths": auths})
def get_docker_args(
self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
-dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
+dockerfile_path = self.get_dockerfile().relative_to(self.app.path_location)
build_cmd = [
"docker",
@@ -203,34 +228,40 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
]
labels = {
-"io.hass.version": version,
-"io.hass.arch": self.arch,
-"io.hass.type": META_ADDON,
-"io.hass.name": self._fix_label("name"),
-"io.hass.description": self._fix_label("description"),
+LABEL_VERSION: version,
+LABEL_ARCH: self.arch,
+LABEL_TYPE: META_APP,
**self.additional_labels,
}
-if self.addon.url:
-labels["io.hass.url"] = self.addon.url
+# Set name only if non-empty, could have been set in Dockerfile
+if name := self._fix_label("name"):
+labels[LABEL_NAME] = name
+# Set description only if non-empty, could have been set in Dockerfile
+if description := self._fix_label("description"):
+labels[LABEL_DESCRIPTION] = description
+if self.app.url:
+labels[LABEL_URL] = self.app.url
for key, value in labels.items():
build_cmd.extend(["--label", f"{key}={value}"])
build_args = {
-"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
-"BUILD_ARCH": self.sys_arch.default,
+"BUILD_ARCH": self.arch,
**self.additional_args,
}
+if self.base_image is not None:
+build_args["BUILD_FROM"] = self.base_image
for key, value in build_args.items():
build_cmd.extend(["--build-arg", f"{key}={value}"])
-# The addon path will be mounted from the host system
-addon_extern_path = self.sys_config.local_to_extern_path(
-self.addon.path_location
-)
+# The app path will be mounted from the host system
+app_extern_path = self.sys_config.local_to_extern_path(self.app.path_location)
mounts = [
DockerMount(
@@ -241,7 +272,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
),
DockerMount(
type=MountType.BIND,
-source=addon_extern_path.as_posix(),
+source=app_extern_path.as_posix(),
target="/addon",
read_only=True,
),
@@ -269,5 +300,5 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
def _fix_label(self, label_name: str) -> str:
"""Remove characters they are not supported."""
-label = getattr(self.addon, label_name, "")
+label = getattr(self.app, label_name, "")
return label.replace("'", "")


@@ -1,4 +1,4 @@
-"""Confgiuration Objects for Addon Config."""
+"""Confgiuration Objects for App Config."""
from dataclasses import dataclass


@@ -1,4 +1,4 @@
-"""Add-on static data."""
+"""App static data."""
from datetime import timedelta
from enum import StrEnum
@@ -6,15 +6,15 @@ from enum import StrEnum
from ..jobs.const import JobCondition
-class AddonBackupMode(StrEnum):
-"""Backup mode of an Add-on."""
+class AppBackupMode(StrEnum):
+"""Backup mode of an App."""
HOT = "hot"
COLD = "cold"
class MappingType(StrEnum):
-"""Mapping type of an Add-on Folder."""
+"""Mapping type of an App Folder."""
DATA = "data"
CONFIG = "config"
@@ -38,7 +38,7 @@ WATCHDOG_MAX_ATTEMPTS = 5
WATCHDOG_THROTTLE_PERIOD = timedelta(minutes=30)
WATCHDOG_THROTTLE_MAX_CALLS = 10
-ADDON_UPDATE_CONDITIONS = [
+APP_UPDATE_CONDITIONS = [
JobCondition.FREE_SPACE,
JobCondition.HEALTHY,
JobCondition.INTERNET_HOST,


@@ -1,4 +1,4 @@
-"""Init file for Supervisor add-on data."""
+"""Init file for Supervisor app data."""
from copy import deepcopy
from typing import Any
@@ -12,16 +12,16 @@ from ..const import (
FILE_HASSIO_ADDONS,
)
from ..coresys import CoreSys, CoreSysAttributes
-from ..store.addon import AddonStore
+from ..store.addon import AppStore
from ..utils.common import FileConfiguration
-from .addon import Addon
+from .addon import App
from .validate import SCHEMA_ADDONS_FILE
Config = dict[str, Any]
-class AddonsData(FileConfiguration, CoreSysAttributes):
-"""Hold data for installed Add-ons inside Supervisor."""
+class AppsData(FileConfiguration, CoreSysAttributes):
+"""Hold data for installed Apps inside Supervisor."""
def __init__(self, coresys: CoreSys):
"""Initialize data holder."""
@@ -30,42 +30,40 @@ class AddonsData(FileConfiguration, CoreSysAttributes):
@property
def user(self):
-"""Return local add-on user data."""
+"""Return local app user data."""
return self._data[ATTR_USER]
@property
def system(self):
-"""Return local add-on data."""
+"""Return local app data."""
return self._data[ATTR_SYSTEM]
-async def install(self, addon: AddonStore) -> None:
-"""Set addon as installed."""
-self.system[addon.slug] = deepcopy(addon.data)
-self.user[addon.slug] = {
+async def install(self, app: AppStore) -> None:
+"""Set app as installed."""
+self.system[app.slug] = deepcopy(app.data)
+self.user[app.slug] = {
ATTR_OPTIONS: {},
-ATTR_VERSION: addon.version,
-ATTR_IMAGE: addon.image,
+ATTR_VERSION: app.version,
+ATTR_IMAGE: app.image,
}
await self.save_data()
-async def uninstall(self, addon: Addon) -> None:
-"""Set add-on as uninstalled."""
-self.system.pop(addon.slug, None)
-self.user.pop(addon.slug, None)
+async def uninstall(self, app: App) -> None:
+"""Set app as uninstalled."""
+self.system.pop(app.slug, None)
+self.user.pop(app.slug, None)
await self.save_data()
-async def update(self, addon: AddonStore) -> None:
-"""Update version of add-on."""
-self.system[addon.slug] = deepcopy(addon.data)
-self.user[addon.slug].update(
-{ATTR_VERSION: addon.version, ATTR_IMAGE: addon.image}
-)
+async def update(self, app: AppStore) -> None:
+"""Update version of app."""
+self.system[app.slug] = deepcopy(app.data)
+self.user[app.slug].update({ATTR_VERSION: app.version, ATTR_IMAGE: app.image})
await self.save_data()
async def restore(
self, slug: str, user: Config, system: Config, image: str
) -> None:
-"""Restore data to add-on."""
+"""Restore data to app."""
self.user[slug] = deepcopy(user)
self.system[slug] = deepcopy(system)


@@ -1,4 +1,4 @@
-"""Supervisor add-on manager."""
+"""Supervisor app manager."""
import asyncio
from collections.abc import Awaitable
@@ -9,12 +9,12 @@ from typing import Self, Union
from attr import evolve
from securetar import SecureTarFile
-from ..const import AddonBoot, AddonStartup, AddonState
+from ..const import AppBoot, AppStartup, AppState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
-AddonNotSupportedError,
-AddonsError,
-AddonsJobError,
+AppNotSupportedError,
+AppsError,
+AppsJobError,
CoreDNSError,
DockerError,
HassioError,
@@ -22,61 +22,61 @@ from ..exceptions import (
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
-from ..resolution.const import ContextType, IssueType, SuggestionType
-from ..store.addon import AddonStore
+from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
+from ..store.addon import AppStore
from ..utils.sentry import async_capture_exception
-from .addon import Addon
-from .const import ADDON_UPDATE_CONDITIONS
-from .data import AddonsData
+from .addon import App
+from .const import APP_UPDATE_CONDITIONS
+from .data import AppsData
_LOGGER: logging.Logger = logging.getLogger(__name__)
-AnyAddon = Union[Addon, AddonStore]
+AnyApp = Union[App, AppStore]
-class AddonManager(CoreSysAttributes):
-"""Manage add-ons inside Supervisor."""
+class AppManager(CoreSysAttributes):
+"""Manage apps inside Supervisor."""
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys: CoreSys = coresys
-self.data: AddonsData = AddonsData(coresys)
-self.local: dict[str, Addon] = {}
-self.store: dict[str, AddonStore] = {}
+self.data: AppsData = AppsData(coresys)
+self.local: dict[str, App] = {}
+self.store: dict[str, AppStore] = {}
@property
-def all(self) -> list[AnyAddon]:
-"""Return a list of all add-ons."""
-addons: dict[str, AnyAddon] = {**self.store, **self.local}
-return list(addons.values())
+def all(self) -> list[AnyApp]:
+"""Return a list of all apps."""
+apps: dict[str, AnyApp] = {**self.store, **self.local}
+return list(apps.values())
@property
-def installed(self) -> list[Addon]:
-"""Return a list of all installed add-ons."""
+def installed(self) -> list[App]:
+"""Return a list of all installed apps."""
return list(self.local.values())
-def get(self, addon_slug: str, local_only: bool = False) -> AnyAddon | None:
-"""Return an add-on from slug.
+def get(self, app_slug: str, local_only: bool = False) -> AnyApp | None:
+"""Return an app from slug.
Prio:
1 - Local
2 - Store
"""
-if addon_slug in self.local:
-return self.local[addon_slug]
+if app_slug in self.local:
+return self.local[app_slug]
if not local_only:
-return self.store.get(addon_slug)
+return self.store.get(app_slug)
return None
-def get_local_only(self, addon_slug: str) -> Addon | None:
-"""Return an installed add-on from slug."""
-return self.local.get(addon_slug)
+def get_local_only(self, app_slug: str) -> App | None:
+"""Return an installed app from slug."""
+return self.local.get(app_slug)
-def from_token(self, token: str) -> Addon | None:
-"""Return an add-on from Supervisor token."""
-for addon in self.installed:
-if token == addon.supervisor_token:
-return addon
+def from_token(self, token: str) -> App | None:
+"""Return an app from Supervisor token."""
+for app in self.installed:
+if token == app.supervisor_token:
+return app
return None
async def load_config(self) -> Self:
@@ -85,50 +85,61 @@ class AddonManager(CoreSysAttributes):
return self
async def load(self) -> None:
-"""Start up add-on management."""
-# Refresh cache for all store addons
+"""Start up app management."""
+# Refresh cache for all store apps
tasks: list[Awaitable[None]] = [
store.refresh_path_cache() for store in self.store.values()
]
-# Load all installed addons
+# Load all installed apps
for slug in self.data.system:
-addon = self.local[slug] = Addon(self.coresys, slug)
-tasks.append(addon.load())
+app = self.local[slug] = App(self.coresys, slug)
+tasks.append(app.load())
# Run initial tasks
-_LOGGER.info("Found %d installed add-ons", len(self.data.system))
+_LOGGER.info("Found %d installed apps", len(self.data.system))
if tasks:
await asyncio.gather(*tasks)
# Sync DNS
await self.sync_dns()
-async def boot(self, stage: AddonStartup) -> None:
-"""Boot add-ons with mode auto."""
-tasks: list[Addon] = []
-for addon in self.installed:
-if addon.boot != AddonBoot.AUTO or addon.startup != stage:
+async def boot(self, stage: AppStartup) -> None:
+"""Boot apps with mode auto."""
+tasks: list[App] = []
+for app in self.installed:
+if app.boot != AppBoot.AUTO or app.startup != stage:
continue
-tasks.append(addon)
+if (
+app.host_network
+and UnhealthyReason.DOCKER_GATEWAY_UNPROTECTED
+in self.sys_resolution.unhealthy
+):
+_LOGGER.warning(
+"Skipping boot of app %s because gateway firewall"
+" rules are not active",
+app.slug,
+)
+continue
+tasks.append(app)
-# Evaluate add-ons which need to be started
-_LOGGER.info("Phase '%s' starting %d add-ons", stage, len(tasks))
+# Evaluate apps which need to be started
+_LOGGER.info("Phase '%s' starting %d apps", stage, len(tasks))
if not tasks:
return
-# Start Add-ons sequential
+# Start Apps sequential
# avoid issue on slow IO
-# Config.wait_boot is deprecated. Until addons update with healthchecks,
+# Config.wait_boot is deprecated. Until apps update with healthchecks,
# add a sleep task for it to keep the same minimum amount of wait time
wait_boot: list[Awaitable[None]] = [asyncio.sleep(self.sys_config.wait_boot)]
-for addon in tasks:
+for app in tasks:
try:
-if start_task := await addon.start():
+if start_task := await app.start():
wait_boot.append(start_task)
except HassioError:
self.sys_resolution.add_issue(
-evolve(addon.boot_failed_issue),
+evolve(app.boot_failed_issue),
suggestions=[
SuggestionType.EXECUTE_START,
SuggestionType.DISABLE_BOOT,
@@ -137,50 +148,50 @@ class AddonManager(CoreSysAttributes):
else:
continue
-_LOGGER.warning("Can't start Add-on %s", addon.slug)
+_LOGGER.warning("Can't start app %s", app.slug)
-# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
+# Ignore exceptions from waiting for app startup, app errors handled elsewhere
await asyncio.gather(*wait_boot, return_exceptions=True)
-# After waiting for startup, create an issue for boot addons that are error or unknown state
-# Ignore stopped as single shot addons can be run at boot and this is successful exit
-# Timeout waiting for startup is not a failure, addon is probably just slow
-for addon in tasks:
-if addon.state in {AddonState.ERROR, AddonState.UNKNOWN}:
+# After waiting for startup, create an issue for boot apps that are error or unknown state
+# Ignore stopped as single shot apps can be run at boot and this is successful exit
+# Timeout waiting for startup is not a failure, app is probably just slow
+for app in tasks:
+if app.state in {AppState.ERROR, AppState.UNKNOWN}:
self.sys_resolution.add_issue(
-evolve(addon.boot_failed_issue),
+evolve(app.boot_failed_issue),
suggestions=[
SuggestionType.EXECUTE_START,
SuggestionType.DISABLE_BOOT,
],
)
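As an aside, the boot pattern above (sequential starts plus a minimum overall wait) can be sketched in isolation. `start_phase` and `FakeApp` below are illustrative names, not the Supervisor API:

```python
import asyncio

async def start_phase(apps: list, wait_boot: float) -> list:
    """Start apps sequentially; wait at least `wait_boot` seconds overall."""
    waiters = [asyncio.sleep(wait_boot)]  # minimum wait for the whole phase
    failed = []
    for app in apps:
        try:
            if task := await app.start():  # may return an awaitable or None
                waiters.append(task)
        except Exception:
            failed.append(app)
    # Individual app errors are surfaced elsewhere; ignore them here.
    await asyncio.gather(*waiters, return_exceptions=True)
    return failed
```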
async def shutdown(self, stage: AddonStartup) -> None:
"""Shutdown addons."""
tasks: list[Addon] = []
for addon in self.installed:
if addon.state != AddonState.STARTED or addon.startup != stage:
async def shutdown(self, stage: AppStartup) -> None:
"""Shutdown apps."""
tasks: list[App] = []
for app in self.installed:
if app.state != AppState.STARTED or app.startup != stage:
continue
tasks.append(addon)
tasks.append(app)
# Evaluate add-ons which need to be stopped
_LOGGER.info("Phase '%s' stopping %d add-ons", stage, len(tasks))
# Evaluate apps which need to be stopped
_LOGGER.info("Phase '%s' stopping %d apps", stage, len(tasks))
if not tasks:
return
# Stop Add-ons sequential
# Stop apps sequentially
# avoid issue on slow IO
for addon in tasks:
for app in tasks:
try:
await addon.stop()
await app.stop()
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't stop Add-on %s: %s", addon.slug, err)
_LOGGER.warning("Can't stop app %s: %s", app.slug, err)
await async_capture_exception(err)
@Job(
name="addon_manager_install",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
conditions=APP_UPDATE_CONDITIONS,
on_condition=AppsJobError,
concurrency=JobConcurrency.QUEUE,
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
@@ -189,15 +200,15 @@ class AddonManager(CoreSysAttributes):
async def install(
self, slug: str, *, validation_complete: asyncio.Event | None = None
) -> None:
"""Install an add-on."""
"""Install an app."""
self.sys_jobs.current.reference = slug
if slug in self.local:
raise AddonsError(f"Add-on {slug} is already installed", _LOGGER.warning)
raise AppsError(f"App {slug} is already installed", _LOGGER.warning)
store = self.store.get(slug)
if not store:
raise AddonsError(f"Add-on {slug} does not exist", _LOGGER.error)
raise AppsError(f"App {slug} does not exist", _LOGGER.error)
store.validate_availability()
@@ -205,37 +216,37 @@ class AddonManager(CoreSysAttributes):
if validation_complete:
validation_complete.set()
await Addon(self.coresys, slug).install()
await App(self.coresys, slug).install()
_LOGGER.info("Add-on '%s' successfully installed", slug)
_LOGGER.info("App '%s' successfully installed", slug)
@Job(name="addon_manager_uninstall")
async def uninstall(self, slug: str, *, remove_config: bool = False) -> None:
"""Remove an add-on."""
"""Remove an app."""
if slug not in self.local:
_LOGGER.warning("Add-on %s is not installed", slug)
_LOGGER.warning("App %s is not installed", slug)
return
shared_image = any(
self.local[slug].image == addon.image
and self.local[slug].version == addon.version
for addon in self.installed
if addon.slug != slug
self.local[slug].image == app.image
and self.local[slug].version == app.version
for app in self.installed
if app.slug != slug
)
await self.local[slug].uninstall(
remove_config=remove_config, remove_image=not shared_image
)
_LOGGER.info("Add-on '%s' successfully removed", slug)
_LOGGER.info("App '%s' successfully removed", slug)
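The shared-image test above guards uninstall: the Docker image is removed only when no other installed app uses it. In isolation the decision looks like this (hypothetical `AppRecord`, not the Supervisor data model):

```python
from dataclasses import dataclass

@dataclass
class AppRecord:
    slug: str
    image: str
    version: str

def can_remove_image(slug: str, installed: list[AppRecord]) -> bool:
    """Image may be removed only when no other installed app
    shares the same image/version pair."""
    target = next(app for app in installed if app.slug == slug)
    return not any(
        app.image == target.image and app.version == target.version
        for app in installed
        if app.slug != slug
    )
```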
@Job(
name="addon_manager_update",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
conditions=APP_UPDATE_CONDITIONS,
on_condition=AppsJobError,
# We assume for now the docker image pull is 100% of this task for progress
# allocation. But from a user perspective that isn't true. Other steps
# that take time which is not accounted for in progress include:
# partial backup, image cleanup, apparmor update, and addon restart
# partial backup, image cleanup, apparmor update, and app restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
@@ -247,25 +258,23 @@ class AddonManager(CoreSysAttributes):
*,
validation_complete: asyncio.Event | None = None,
) -> asyncio.Task | None:
"""Update add-on.
"""Update app.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after update. Else nothing is returned.
Returns a Task that completes when app has state 'started' (see app.start)
if app is started after update. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
raise AppsError(f"App {slug} is not installed", _LOGGER.error)
app = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
if app.is_detached:
raise AppsError(f"App {slug} is not available in the store", _LOGGER.error)
store = self.store[slug]
if addon.version == store.version:
raise AddonsError(f"No update available for add-on {slug}", _LOGGER.warning)
if app.version == store.version:
raise AppsError(f"No update available for app {slug}", _LOGGER.warning)
# Check if still available; something may have changed
store.validate_availability()
@@ -276,14 +285,14 @@ class AddonManager(CoreSysAttributes):
if backup:
await self.sys_backups.do_backup_partial(
name=f"addon_{addon.slug}_{addon.version}",
name=f"addon_{app.slug}_{app.version}",
homeassistant=False,
addons=[addon.slug],
apps=[app.slug],
)
task = await addon.update()
task = await app.update()
_LOGGER.info("Add-on '%s' successfully updated", slug)
_LOGGER.info("App '%s' successfully updated", slug)
return task
@Job(
@@ -293,37 +302,35 @@ class AddonManager(CoreSysAttributes):
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
on_condition=AppsJobError,
)
async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
"""Perform a rebuild of local build add-on.
"""Perform a rebuild of a locally built app.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after rebuild. Else nothing is returned.
Returns a Task that completes when app has state 'started' (see app.start)
if app is started after rebuild. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
raise AppsError(f"App {slug} is not installed", _LOGGER.error)
app = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
if app.is_detached:
raise AppsError(f"App {slug} is not available in the store", _LOGGER.error)
store = self.store[slug]
# Check if a rebuild is possible now
if addon.version != store.version:
raise AddonsError(
if app.version != store.version:
raise AppsError(
"Version changed, use Update instead of Rebuild", _LOGGER.error
)
if not force and not addon.need_build:
raise AddonNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
if not force and not app.need_build:
raise AppNotSupportedError(
"Can't rebuild an image-based app", _LOGGER.error
)
return await addon.rebuild()
return await app.rebuild()
@Job(
name="addon_manager_restore",
@@ -332,36 +339,36 @@ class AddonManager(CoreSysAttributes):
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
on_condition=AppsJobError,
)
async def restore(self, slug: str, tar_file: SecureTarFile) -> asyncio.Task | None:
"""Restore state of an add-on.
"""Restore state of an app.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after restore. Else nothing is returned.
Returns a Task that completes when app has state 'started' (see app.start)
if app is started after restore. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
_LOGGER.debug("Add-on %s is not local available for restore", slug)
addon = Addon(self.coresys, slug)
_LOGGER.debug("App %s is not locally available for restore", slug)
app = App(self.coresys, slug)
had_ingress: bool | None = False
else:
_LOGGER.debug("Add-on %s is local available for restore", slug)
addon = self.local[slug]
had_ingress = addon.ingress_panel
_LOGGER.debug("App %s is locally available for restore", slug)
app = self.local[slug]
had_ingress = app.ingress_panel
wait_for_start = await addon.restore(tar_file)
wait_for_start = await app.restore(tar_file)
# Check if new
if slug not in self.local:
_LOGGER.info("Detect new Add-on after restore %s", slug)
self.local[slug] = addon
_LOGGER.info("Detected new app after restore: %s", slug)
self.local[slug] = app
# Update ingress
if had_ingress != addon.ingress_panel:
if had_ingress != app.ingress_panel:
await self.sys_ingress.reload()
await self.sys_ingress.update_hass_panel(addon)
await self.sys_ingress.update_hass_panel(app)
return wait_for_start
@@ -370,60 +377,60 @@ class AddonManager(CoreSysAttributes):
conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST],
)
async def repair(self) -> None:
"""Repair local add-ons."""
needs_repair: list[Addon] = []
"""Repair local apps."""
needs_repair: list[App] = []
# Evaluate Add-ons to repair
for addon in self.installed:
if await addon.instance.exists():
# Evaluate apps to repair
for app in self.installed:
if await app.instance.exists():
continue
needs_repair.append(addon)
needs_repair.append(app)
_LOGGER.info("Found %d add-ons to repair", len(needs_repair))
_LOGGER.info("Found %d apps to repair", len(needs_repair))
if not needs_repair:
return
for addon in needs_repair:
_LOGGER.info("Repairing for add-on: %s", addon.slug)
for app in needs_repair:
_LOGGER.info("Repairing app: %s", app.slug)
with suppress(DockerError, KeyError):
# Need to pull the image again
if not addon.need_build:
await addon.instance.install(addon.version, addon.image)
if not app.need_build:
await app.instance.install(app.version, app.image)
continue
# Need local lookup
if addon.need_build and not addon.is_detached:
store = self.store[addon.slug]
# If this add-on is available for rebuild
if addon.version == store.version:
await addon.instance.install(addon.version, addon.image)
if app.need_build and not app.is_detached:
store = self.store[app.slug]
# If this app is available for rebuild
if app.version == store.version:
await app.instance.install(app.version, app.image)
continue
_LOGGER.error("Can't repair %s", addon.slug)
with suppress(AddonsError):
await self.uninstall(addon.slug)
_LOGGER.error("Can't repair %s", app.slug)
with suppress(AppsError):
await self.uninstall(app.slug)
async def sync_dns(self) -> None:
"""Sync add-ons DNS names."""
"""Sync apps' DNS names."""
# Update hosts
add_host_coros: list[Awaitable[None]] = []
for addon in self.installed:
for app in self.installed:
try:
if not await addon.instance.is_running():
if not await app.instance.is_running():
continue
except DockerError as err:
_LOGGER.warning("Add-on %s is corrupt: %s", addon.slug, err)
_LOGGER.warning("App %s is corrupt: %s", app.slug, err)
self.sys_resolution.create_issue(
IssueType.CORRUPT_DOCKER,
ContextType.ADDON,
reference=addon.slug,
reference=app.slug,
suggestions=[SuggestionType.EXECUTE_REPAIR],
)
await async_capture_exception(err)
else:
add_host_coros.append(
self.sys_plugins.dns.add_host(
ipv4=addon.ip_address, names=[addon.hostname], write=False
ipv4=app.ip_address, names=[app.hostname], write=False
)
)


@@ -1,4 +1,4 @@
"""Init file for Supervisor add-ons."""
"""Init file for Supervisor apps."""
from abc import ABC, abstractmethod
from collections import defaultdict
@@ -12,7 +12,7 @@ from typing import Any
from awesomeversion import AwesomeVersion, AwesomeVersionException
from ..const import (
ATTR_ADVANCED,
ARCH_DEPRECATED,
ATTR_APPARMOR,
ATTR_ARCH,
ATTR_AUDIO,
@@ -78,22 +78,24 @@ from ..const import (
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
MACHINE_DEPRECATED,
SECURITY_DEFAULT,
SECURITY_DISABLE,
SECURITY_PROFILE,
AddonBoot,
AddonBootConfig,
AddonStage,
AddonStartup,
AppBoot,
AppBootConfig,
AppStage,
AppStartup,
CpuArch,
)
from ..coresys import CoreSys
from ..docker.const import Capabilities
from ..exceptions import (
AddonNotSupportedArchitectureError,
AddonNotSupportedError,
AddonNotSupportedHomeAssistantVersionError,
AddonNotSupportedMachineTypeError,
AppNotSupportedArchitectureError,
AppNotSupportedError,
AppNotSupportedHomeAssistantVersionError,
AppNotSupportedMachineTypeError,
HassioArchNotFound,
)
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
@@ -105,10 +107,10 @@ from .const import (
ATTR_BREAKING_VERSIONS,
ATTR_PATH,
ATTR_READ_ONLY,
AddonBackupMode,
AppBackupMode,
MappingType,
)
from .options import AddonOptions, UiOptions
from .options import AppOptions, UiOptions
from .validate import RE_SERVICE
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -116,8 +118,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
Data = dict[str, Any]
class AddonModel(JobGroup, ABC):
"""Add-on Data layout."""
class AppModel(JobGroup, ABC):
"""App data layout."""
def __init__(self, coresys: CoreSys, slug: str):
"""Initialize data holder."""
@@ -133,21 +135,21 @@ class AddonModel(JobGroup, ABC):
@property
@abstractmethod
def data(self) -> Data:
"""Return add-on config/data."""
"""Return app config/data."""
@property
@abstractmethod
def is_installed(self) -> bool:
"""Return True if an add-on is installed."""
"""Return True if an app is installed."""
@property
@abstractmethod
def is_detached(self) -> bool:
"""Return True if add-on is detached."""
"""Return True if app is detached."""
@property
def available(self) -> bool:
"""Return True if this add-on is available on this platform."""
"""Return True if this app is available on this platform."""
return self._available(self.data)
@property
@@ -156,14 +158,14 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_OPTIONS]
@property
def boot_config(self) -> AddonBootConfig:
def boot_config(self) -> AppBootConfig:
"""Return boot config."""
return self.data[ATTR_BOOT]
@property
def boot(self) -> AddonBoot:
def boot(self) -> AppBoot:
"""Return boot config, preferring local settings unless the config is forced."""
return AddonBoot(self.data[ATTR_BOOT])
return AppBoot(self.data[ATTR_BOOT])
@property
def auto_update(self) -> bool | None:
@@ -172,27 +174,27 @@ class AddonModel(JobGroup, ABC):
@property
def name(self) -> str:
"""Return name of add-on."""
"""Return name of app."""
return self.data[ATTR_NAME]
@property
def hostname(self) -> str:
"""Return slug/id of add-on."""
"""Return hostname derived from the app slug."""
return self.slug.replace("_", "-")
@property
def dns(self) -> list[str]:
"""Return list of DNS name for that add-on."""
"""Return list of DNS names for this app."""
return []
@property
def timeout(self) -> int:
"""Return timeout of addon for docker stop."""
"""Return the docker stop timeout for this app."""
return self.data[ATTR_TIMEOUT]
@property
def uuid(self) -> str | None:
"""Return an API token for this add-on."""
"""Return an API token for this app."""
return None
@property
@@ -212,22 +214,22 @@ class AddonModel(JobGroup, ABC):
@property
def description(self) -> str:
"""Return description of add-on."""
"""Return description of app."""
return self.data[ATTR_DESCRIPTON]
@property
def repository(self) -> str:
"""Return repository of add-on."""
"""Return repository of app."""
return self.data[ATTR_REPOSITORY]
@property
def translations(self) -> dict:
"""Return add-on translations."""
"""Return app translations."""
return self.data[ATTR_TRANSLATIONS]
@property
def latest_version(self) -> AwesomeVersion:
"""Return latest version of add-on."""
"""Return latest version of app."""
return self.data[ATTR_VERSION]
@property
@@ -237,27 +239,29 @@ class AddonModel(JobGroup, ABC):
@property
def version(self) -> AwesomeVersion:
"""Return version of add-on."""
"""Return version of app."""
return self.data[ATTR_VERSION]
@property
def protected(self) -> bool:
"""Return if add-on is in protected mode."""
"""Return if app is in protected mode."""
return True
@property
def startup(self) -> AddonStartup:
"""Return startup type of add-on."""
def startup(self) -> AppStartup:
"""Return startup type of app."""
return self.data[ATTR_STARTUP]
@property
def advanced(self) -> bool:
"""Return advanced mode of add-on."""
return self.data[ATTR_ADVANCED]
"""Return False; advanced mode is deprecated and no longer supported."""
# Deprecated since Supervisor 2026.03.0; always returns False and can be
# removed once that version is the minimum supported.
return False
@property
def stage(self) -> AddonStage:
"""Return stage mode of add-on."""
def stage(self) -> AppStage:
"""Return stage mode of app."""
return self.data[ATTR_STAGE]
@property
@@ -285,7 +289,7 @@ class AddonModel(JobGroup, ABC):
@property
def ports(self) -> dict[str, int | None] | None:
"""Return ports of add-on."""
"""Return ports of app."""
return self.data.get(ATTR_PORTS)
@property
@@ -325,37 +329,37 @@ class AddonModel(JobGroup, ABC):
@property
def host_network(self) -> bool:
"""Return True if add-on run on host network."""
"""Return True if the app runs on the host network."""
return self.data[ATTR_HOST_NETWORK]
@property
def host_pid(self) -> bool:
"""Return True if add-on run on host PID namespace."""
"""Return True if the app runs in the host PID namespace."""
return self.data[ATTR_HOST_PID]
@property
def host_ipc(self) -> bool:
"""Return True if add-on run on host IPC namespace."""
"""Return True if the app runs in the host IPC namespace."""
return self.data[ATTR_HOST_IPC]
@property
def host_uts(self) -> bool:
"""Return True if add-on run on host UTS namespace."""
"""Return True if the app runs in the host UTS namespace."""
return self.data[ATTR_HOST_UTS]
@property
def host_dbus(self) -> bool:
"""Return True if add-on run on host D-BUS."""
"""Return True if the app runs on the host D-Bus."""
return self.data[ATTR_HOST_DBUS]
@property
def static_devices(self) -> list[Path]:
"""Return static devices of add-on."""
"""Return static devices of app."""
return [Path(node) for node in self.data.get(ATTR_DEVICES, [])]
@property
def environment(self) -> dict[str, str] | None:
"""Return environment of add-on."""
"""Return environment of app."""
return self.data.get(ATTR_ENVIRONMENT)
@property
@@ -374,22 +378,22 @@ class AddonModel(JobGroup, ABC):
@property
def legacy(self) -> bool:
"""Return if the add-on don't support Home Assistant labels."""
"""Return True if the app doesn't support Home Assistant labels."""
return self.data[ATTR_LEGACY]
@property
def access_docker_api(self) -> bool:
"""Return if the add-on need read-only Docker API access."""
"""Return True if the app needs read-only Docker API access."""
return self.data[ATTR_DOCKER_API]
@property
def access_hassio_api(self) -> bool:
"""Return True if the add-on access to Supervisor REASTful API."""
"""Return True if the app has access to the Supervisor RESTful API."""
return self.data[ATTR_HASSIO_API]
@property
def access_homeassistant_api(self) -> bool:
"""Return True if the add-on access to Home Assistant API proxy."""
"""Return True if the app has access to the Home Assistant API proxy."""
return self.data[ATTR_HOMEASSISTANT_API]
@property
@@ -413,28 +417,28 @@ class AddonModel(JobGroup, ABC):
return self.data.get(ATTR_BACKUP_POST)
@property
def backup_mode(self) -> AddonBackupMode:
def backup_mode(self) -> AppBackupMode:
"""Return if backup is hot/cold."""
return self.data[ATTR_BACKUP]
@property
def default_init(self) -> bool:
"""Return True if the add-on have no own init."""
"""Return True if the app has no init of its own."""
return self.data[ATTR_INIT]
@property
def with_stdin(self) -> bool:
"""Return True if the add-on access use stdin input."""
"""Return True if the app uses stdin input."""
return self.data[ATTR_STDIN]
@property
def with_ingress(self) -> bool:
"""Return True if the add-on access support ingress."""
"""Return True if the app supports ingress."""
return self.data[ATTR_INGRESS]
@property
def ingress_panel(self) -> bool | None:
"""Return True if the add-on access support ingress."""
"""Return True if the app's ingress panel is enabled."""
return None
@property
@@ -444,12 +448,12 @@ class AddonModel(JobGroup, ABC):
@property
def with_gpio(self) -> bool:
"""Return True if the add-on access to GPIO interface."""
"""Return True if the app has access to the GPIO interface."""
return self.data[ATTR_GPIO]
@property
def with_usb(self) -> bool:
"""Return True if the add-on need USB access."""
"""Return True if the app needs USB access."""
return self.data[ATTR_USB]
@property
@@ -459,7 +463,7 @@ class AddonModel(JobGroup, ABC):
@property
def with_udev(self) -> bool:
"""Return True if the add-on have his own udev."""
"""Return True if the app has its own udev."""
return self.data[ATTR_UDEV]
@property
@@ -469,52 +473,52 @@ class AddonModel(JobGroup, ABC):
@property
def with_kernel_modules(self) -> bool:
"""Return True if the add-on access to kernel modules."""
"""Return True if the app has access to kernel modules."""
return self.data[ATTR_KERNEL_MODULES]
@property
def with_realtime(self) -> bool:
"""Return True if the add-on need realtime schedule functions."""
"""Return True if the app needs realtime scheduling functions."""
return self.data[ATTR_REALTIME]
@property
def with_full_access(self) -> bool:
"""Return True if the add-on want full access to hardware."""
"""Return True if the app wants full hardware access."""
return self.data[ATTR_FULL_ACCESS]
@property
def with_devicetree(self) -> bool:
"""Return True if the add-on read access to devicetree."""
"""Return True if the app has read access to the devicetree."""
return self.data[ATTR_DEVICETREE]
@property
def with_tmpfs(self) -> bool:
"""Return if tmp is in memory of add-on."""
"""Return True if the app's tmp is kept in memory (tmpfs)."""
return self.data[ATTR_TMPFS]
@property
def access_auth_api(self) -> bool:
"""Return True if the add-on access to login/auth backend."""
"""Return True if the app has access to the login/auth backend."""
return self.data[ATTR_AUTH_API]
@property
def with_audio(self) -> bool:
"""Return True if the add-on access to audio."""
"""Return True if the app has access to audio."""
return self.data[ATTR_AUDIO]
@property
def with_video(self) -> bool:
"""Return True if the add-on access to video."""
"""Return True if the app has access to video."""
return self.data[ATTR_VIDEO]
@property
def homeassistant_version(self) -> AwesomeVersion | None:
"""Return min Home Assistant version they needed by Add-on."""
"""Return the minimum Home Assistant version required by the app."""
return self.data.get(ATTR_HOMEASSISTANT)
@property
def url(self) -> str | None:
"""Return URL of add-on."""
"""Return URL of app."""
return self.data.get(ATTR_URL)
@property
@@ -542,6 +546,35 @@ class AddonModel(JobGroup, ABC):
"""Return list of supported arch."""
return self.data[ATTR_ARCH]
@property
def has_deprecated_arch(self) -> bool:
"""Return True if app includes deprecated architectures."""
return any(arch in ARCH_DEPRECATED for arch in self.supported_arch)
@property
def has_supported_arch(self) -> bool:
"""Return True if app supports any architecture on this system."""
return self.sys_arch.is_supported(self.supported_arch)
@property
def has_deprecated_machine(self) -> bool:
"""Return True if app includes deprecated machine entries."""
return any(
machine.lstrip("!") in MACHINE_DEPRECATED
for machine in self.supported_machine
)
@property
def has_supported_machine(self) -> bool:
"""Return True if app supports this machine."""
if not (machine_types := self.supported_machine):
return True
return (
f"!{self.sys_machine}" not in machine_types
and self.sys_machine in machine_types
)
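A standalone sketch of the machine matching used by `has_supported_machine` above (illustrative helper, not the Supervisor API); note that negation entries like `!odroid-c2` exclude a machine explicitly:

```python
def machine_supported(current: str, machine_types: list[str]) -> bool:
    """An empty list means every machine is supported; an entry
    prefixed with `!` explicitly excludes that machine."""
    if not machine_types:
        return True
    return f"!{current}" not in machine_types and current in machine_types
```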
@property
def supported_machine(self) -> list[str]:
"""Return list of supported machine."""
@@ -549,11 +582,8 @@ class AddonModel(JobGroup, ABC):
@property
def arch(self) -> CpuArch:
"""Return architecture to use for the addon's image."""
if ATTR_IMAGE in self.data:
return self.sys_arch.match(self.data[ATTR_ARCH])
return self.sys_arch.default
"""Return architecture to use for the app's image."""
return self.sys_arch.match(self.data[ATTR_ARCH])
@property
def image(self) -> str | None:
@@ -562,12 +592,12 @@ class AddonModel(JobGroup, ABC):
@property
def need_build(self) -> bool:
"""Return True if this add-on need a local build."""
"""Return True if this app needs a local build."""
return ATTR_IMAGE not in self.data
@property
def map_volumes(self) -> dict[MappingType, FolderMapping]:
"""Return a dict of {MappingType: FolderMapping} from add-on."""
"""Return a dict of {MappingType: FolderMapping} from app."""
volumes = {}
for volume in self.data[ATTR_MAP]:
volumes[MappingType(volume[ATTR_TYPE])] = FolderMapping(
@@ -578,27 +608,27 @@ class AddonModel(JobGroup, ABC):
@property
def path_location(self) -> Path:
"""Return path to this add-on."""
"""Return path to this app."""
return Path(self.data[ATTR_LOCATION])
@property
def path_icon(self) -> Path:
"""Return path to add-on icon."""
"""Return path to app icon."""
return Path(self.path_location, "icon.png")
@property
def path_logo(self) -> Path:
"""Return path to add-on logo."""
"""Return path to app logo."""
return Path(self.path_location, "logo.png")
@property
def path_changelog(self) -> Path:
"""Return path to add-on changelog."""
"""Return path to app changelog."""
return Path(self.path_location, "CHANGELOG.md")
@property
def path_documentation(self) -> Path:
"""Return path to add-on changelog."""
"""Return path to app documentation."""
return Path(self.path_location, "DOCS.md")
@property
@@ -607,17 +637,17 @@ class AddonModel(JobGroup, ABC):
return Path(self.path_location, "apparmor.txt")
@property
def schema(self) -> AddonOptions:
"""Return Addon options validation object."""
def schema(self) -> AppOptions:
"""Return the app options validation object."""
raw_schema = self.data[ATTR_SCHEMA]
if isinstance(raw_schema, bool):
raw_schema = {}
return AddonOptions(self.coresys, raw_schema, self.name, self.slug)
return AppOptions(self.coresys, raw_schema, self.name, self.slug)
@property
def schema_ui(self) -> list[dict[Any, Any]] | None:
"""Create a UI schema for add-on options."""
"""Create a UI schema for app options."""
raw_schema = self.data[ATTR_SCHEMA]
if isinstance(raw_schema, bool):
@@ -626,7 +656,7 @@ class AddonModel(JobGroup, ABC):
@property
def with_journald(self) -> bool:
"""Return True if the add-on accesses the system journal."""
"""Return True if the app accesses the system journal."""
return self.data[ATTR_JOURNALD]
@property
@@ -636,7 +666,7 @@ class AddonModel(JobGroup, ABC):
@property
def breaking_versions(self) -> list[AwesomeVersion]:
"""Return breaking versions of addon."""
"""Return breaking versions of app."""
return self.data[ATTR_BREAKING_VERSIONS]
async def long_description(self) -> str | None:
@@ -666,26 +696,26 @@ class AddonModel(JobGroup, ABC):
return self.sys_run_in_executor(check_paths)
def validate_availability(self) -> None:
"""Validate if addon is available for current system."""
"""Validate if app is available for current system."""
return self._validate_availability(self.data, logger=_LOGGER.error)
def __eq__(self, other: Any) -> bool:
"""Compare add-on objects."""
if not isinstance(other, AddonModel):
"""Compare app objects."""
if not isinstance(other, AppModel):
return False
return self.slug == other.slug
def __hash__(self) -> int:
"""Hash for add-on objects."""
"""Hash for app objects."""
return hash(self.slug)
def _validate_availability(
self, config, *, logger: Callable[..., None] | None = None
) -> None:
"""Validate if addon is available for current system."""
"""Validate if app is available for current system."""
# Architecture
if not self.sys_arch.is_supported(config[ATTR_ARCH]):
raise AddonNotSupportedArchitectureError(
raise AppNotSupportedArchitectureError(
logger, slug=self.slug, architectures=config[ATTR_ARCH]
)
@@ -694,7 +724,7 @@ class AddonModel(JobGroup, ABC):
if machine and (
f"!{self.sys_machine}" in machine or self.sys_machine not in machine
):
raise AddonNotSupportedMachineTypeError(
raise AppNotSupportedMachineTypeError(
logger, slug=self.slug, machine_types=machine
)
@@ -704,15 +734,15 @@ class AddonModel(JobGroup, ABC):
if version and not version_is_new_enough(
self.sys_homeassistant.version, version
):
raise AddonNotSupportedHomeAssistantVersionError(
raise AppNotSupportedHomeAssistantVersionError(
logger, slug=self.slug, version=str(version)
)
def _available(self, config) -> bool:
"""Return True if this add-on is available on this platform."""
"""Return True if this app is available on this platform."""
try:
self._validate_availability(config)
except AddonNotSupportedError:
except AppNotSupportedError:
return False
return True
@@ -721,8 +751,12 @@ class AddonModel(JobGroup, ABC):
"""Generate image name from data."""
# Repository with Dockerhub images
if ATTR_IMAGE in config:
arch = self.sys_arch.match(config[ATTR_ARCH])
try:
arch = self.sys_arch.match(config[ATTR_ARCH])
except HassioArchNotFound:
arch = self.sys_arch.default
return config[ATTR_IMAGE].format(arch=arch)
# local build
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default!s}-addon-{config[ATTR_SLUG]}"
arch = self.sys_arch.match(config[ATTR_ARCH])
return f"{config[ATTR_REPOSITORY]}/{arch!s}-addon-{config[ATTR_SLUG]}"
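The `_image` change above falls back to the default architecture only for prebuilt images. A sketch of that logic with a hypothetical `match_arch` callable standing in for `self.sys_arch.match`, and `LookupError` standing in for `HassioArchNotFound`:

```python
def image_name(config: dict, match_arch, default_arch: str) -> str:
    """Build the Docker image name for an app.

    `match_arch` picks the best supported arch from a list and raises
    LookupError (stand-in for HassioArchNotFound) when none matches.
    """
    if "image" in config:  # repository with prebuilt Docker Hub images
        try:
            arch = match_arch(config["arch"])
        except LookupError:
            arch = default_arch  # fall back instead of failing
        return config["image"].format(arch=arch)
    # Locally built image
    return f"{config['repository']}/{match_arch(config['arch'])}-addon-{config['slug']}"
```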


@@ -1,4 +1,4 @@
"""Add-on Options / UI rendering."""
"""App Options / UI rendering."""
import hashlib
import logging
@@ -37,8 +37,8 @@ RE_SCHEMA_ELEMENT = re.compile(
r"|device(?:\((?P<filter>subsystem=[a-z]+)\))?"
r"|str(?:\((?P<s_min>\d+)?,(?P<s_max>\d+)?\))?"
r"|password(?:\((?P<p_min>\d+)?,(?P<p_max>\d+)?\))?"
r"|int(?:\((?P<i_min>\d+)?,(?P<i_max>\d+)?\))?"
r"|float(?:\((?P<f_min>[\d\.]+)?,(?P<f_max>[\d\.]+)?\))?"
r"|int(?:\((?P<i_min>-?\d+)?,(?P<i_max>-?\d+)?\))?"
r"|float(?:\((?P<f_min>-?\d*\.?\d+)?,(?P<f_max>-?\d*\.?\d+)?\))?"
r"|match\((?P<match>.*)\)"
r"|list\((?P<list>.+)\)"
r")\??$"
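The bounds in the int/float branches now accept negative numbers. A trimmed standalone version of just those two branches shows the effect:

```python
import re

# Simplified extract of the int/float branches with negative bounds allowed.
RE_NUM_ELEMENT = re.compile(
    r"^(?:"
    r"int(?:\((?P<i_min>-?\d+)?,(?P<i_max>-?\d+)?\))?"
    r"|float(?:\((?P<f_min>-?\d*\.?\d+)?,(?P<f_max>-?\d*\.?\d+)?\))?"
    r")\??$"
)
```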
@@ -56,8 +56,8 @@ _SCHEMA_LENGTH_PARTS = (
)
class AddonOptions(CoreSysAttributes):
"""Validate Add-ons Options."""
class AppOptions(CoreSysAttributes):
"""Validate Apps Options."""
def __init__(
self, coresys: CoreSys, raw_schema: dict[str, Any], name: str, slug: str
@@ -72,11 +72,11 @@ class AddonOptions(CoreSysAttributes):
@property
def validate(self) -> vol.Schema:
"""Create a schema for add-on options."""
"""Create a schema for app options."""
return vol.Schema(vol.All(dict, self))
def __call__(self, struct: dict[str, Any]) -> dict[str, Any]:
"""Create schema validator for add-ons options."""
"""Create a schema validator for app options."""
options = {}
# read options
@@ -262,7 +262,7 @@ class AddonOptions(CoreSysAttributes):
class UiOptions(CoreSysAttributes):
"""Render UI Add-ons Options."""
"""Render app options for the UI."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize UI option render."""


@@ -1,4 +1,4 @@
"""Util add-ons functions."""
"""Util apps functions."""
from __future__ import annotations
@@ -11,12 +11,12 @@ from ..const import ROLE_ADMIN, ROLE_MANAGER, SECURITY_DISABLE, SECURITY_PROFILE
from ..docker.const import Capabilities
if TYPE_CHECKING:
from .model import AddonModel
from .model import AppModel
_LOGGER: logging.Logger = logging.getLogger(__name__)
def rating_security(addon: AddonModel) -> int:
def rating_security(app: AppModel) -> int:
"""Return 1-8 for security rating.
1 = not secure
@@ -25,25 +25,25 @@ def rating_security(addon: AddonModel) -> int:
rating = 5
# AppArmor
if addon.apparmor == SECURITY_DISABLE:
if app.apparmor == SECURITY_DISABLE:
rating += -1
elif addon.apparmor == SECURITY_PROFILE:
elif app.apparmor == SECURITY_PROFILE:
rating += 1
# Home Assistant Login & Ingress
if addon.with_ingress:
if app.with_ingress:
rating += 2
elif addon.access_auth_api:
elif app.access_auth_api:
rating += 1
# Signed
if addon.signed:
if app.signed:
rating += 1
# Privileged options
if (
any(
privilege in addon.privileged
privilege in app.privileged
for privilege in (
Capabilities.BPF,
Capabilities.CHECKPOINT_RESTORE,
@@ -57,30 +57,30 @@ def rating_security(addon: AddonModel) -> int:
Capabilities.SYS_RAWIO,
)
)
or addon.with_kernel_modules
or app.with_kernel_modules
):
rating += -1
# API Supervisor role
if addon.hassio_role == ROLE_MANAGER:
if app.hassio_role == ROLE_MANAGER:
rating += -1
elif addon.hassio_role == ROLE_ADMIN:
elif app.hassio_role == ROLE_ADMIN:
rating += -2
# Not secure Networking
if addon.host_network:
if app.host_network:
rating += -1
# Insecure PID namespace
if addon.host_pid:
if app.host_pid:
rating += -2
# UTS host namespace allows to set hostname only with SYS_ADMIN
if addon.host_uts and Capabilities.SYS_ADMIN in addon.privileged:
if app.host_uts and Capabilities.SYS_ADMIN in app.privileged:
rating += -1
# Docker Access & full Access
if addon.access_docker_api or addon.with_full_access:
if app.access_docker_api or app.with_full_access:
rating = 1
return max(min(8, rating), 1)
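The rating logic above accumulates adjustments from a baseline of 5 and clamps the result into 1..8. A standalone sketch of that shape (using a plain dict and a reduced subset of the checks, not the full Supervisor implementation):

```python
# Simplified sketch of the security-rating logic above: start at 5,
# adjust per property, clamp to the 1..8 range at the end. The dict keys
# here are illustrative stand-ins for the real model attributes.
def rating_security(app: dict) -> int:
    rating = 5
    if app.get("apparmor") == "disable":
        rating -= 1
    elif app.get("apparmor") == "profile":
        rating += 1
    if app.get("ingress"):
        rating += 2
    if app.get("host_network"):
        rating -= 1
    # Docker API / full access overrides everything else
    if app.get("docker_api") or app.get("full_access"):
        rating = 1
    return max(min(8, rating), 1)
```

The final `max(min(...))` clamp means no combination of bonuses or penalties can push the rating outside the documented 1..8 scale.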
@@ -102,4 +102,4 @@ def remove_data(folder: Path) -> None:
else:
return
_LOGGER.error("Can't remove Add-on Data: %s", error_msg)
_LOGGER.error("Can't remove app data: %s", error_msg)


@@ -1,4 +1,4 @@
"""Validate add-ons options schema."""
"""Validate apps options schema."""
import logging
import re
@@ -9,7 +9,8 @@ import uuid
import voluptuous as vol
from ..const import (
ARCH_ALL,
ARCH_ALL_COMPAT,
ARCH_DEPRECATED,
ATTR_ACCESS_TOKEN,
ATTR_ADVANCED,
ATTR_APPARMOR,
@@ -97,13 +98,14 @@ from ..const import (
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
MACHINE_DEPRECATED,
ROLE_ALL,
ROLE_DEFAULT,
AddonBoot,
AddonBootConfig,
AddonStage,
AddonStartup,
AddonState,
AppBoot,
AppBootConfig,
AppStage,
AppStartup,
AppState,
)
from ..docker.const import Capabilities
from ..validate import (
@@ -122,7 +124,7 @@ from .const import (
ATTR_PATH,
ATTR_READ_ONLY,
RE_SLUG,
AddonBackupMode,
AppBackupMode,
MappingType,
)
from .options import RE_SCHEMA_ELEMENT
@@ -156,6 +158,8 @@ SCHEMA_ELEMENT = vol.Schema(
RE_MACHINE = re.compile(
r"^!?(?:"
r"|intel-nuc"
r"|khadas-vim3"
r"|generic-aarch64"
r"|generic-x86-64"
r"|odroid-c2"
r"|odroid-c4"
@@ -182,11 +186,20 @@ RE_MACHINE = re.compile(
RE_SLUG_FIELD = re.compile(r"^" + RE_SLUG + r"$")
def _warn_addon_config(config: dict[str, Any]):
def _warn_app_config(config: dict[str, Any]):
"""Warn about miss configs."""
name = config.get(ATTR_NAME)
if not name:
raise vol.Invalid("Invalid Add-on config!")
raise vol.Invalid("Invalid app config!")
if ATTR_ADVANCED in config:
# Deprecated since Supervisor 2026.03.0; this field is ignored and the
# warning can be removed once that version is the minimum supported.
_LOGGER.warning(
"App '%s' uses deprecated 'advanced' field in config. "
"This field is ignored by the Supervisor. Please report this to the maintainer.",
name,
)
if config.get(ATTR_FULL_ACCESS, False) and (
config.get(ATTR_DEVICES)
@@ -195,54 +208,76 @@ def _warn_addon_config(config: dict[str, Any]):
or config.get(ATTR_GPIO)
):
_LOGGER.warning(
"Add-on have full device access, and selective device access in the configuration. Please report this to the maintainer of %s",
"App has full device access, and selective device access in the configuration. Please report this to the maintainer of %s",
name,
)
if config.get(ATTR_BACKUP, AddonBackupMode.HOT) == AddonBackupMode.COLD and (
if config.get(ATTR_BACKUP, AppBackupMode.HOT) == AppBackupMode.COLD and (
config.get(ATTR_BACKUP_POST) or config.get(ATTR_BACKUP_PRE)
):
_LOGGER.warning(
"Add-on which only support COLD backups trying to use post/pre commands. Please report this to the maintainer of %s",
"An app that only supports COLD backups is trying to use pre/post commands. Please report this to the maintainer of %s",
name,
)
if deprecated_arches := [
arch for arch in config.get(ATTR_ARCH, []) if arch in ARCH_DEPRECATED
]:
_LOGGER.warning(
"App config 'arch' uses deprecated values %s. Please report this to the maintainer of %s",
deprecated_arches,
name,
)
if deprecated_machines := [
machine
for machine in config.get(ATTR_MACHINE, [])
if machine.lstrip("!") in MACHINE_DEPRECATED
]:
_LOGGER.warning(
"App config 'machine' uses deprecated values %s. Please report this to the maintainer of %s",
deprecated_machines,
name,
)
if ATTR_CODENOTARY in config:
_LOGGER.warning(
"Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
"App '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
name,
)
return config
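Further down, `SCHEMA_ADDON_CONFIG` chains the migrate step, the warn step, and the schema with `vol.All`, which feeds each validator's return value into the next. A minimal stdlib-only stand-in for that composition, with hypothetical migrate/warn stages mirroring the ones above:

```python
# Minimal stand-in for voluptuous' vol.All: run validators in order,
# passing the (possibly rewritten) config from one stage to the next.
def chain(*validators):
    def _validate(config: dict) -> dict:
        for validator in validators:
            config = validator(config)
        return config
    return _validate

def migrate(cfg: dict) -> dict:
    # Deprecated key rewritten in place, like 'auto_uart' -> 'uart' above
    if "auto_uart" in cfg:
        cfg["uart"] = cfg.pop("auto_uart")
    return cfg

def warn(cfg: dict) -> dict:
    # Hard failure for a missing name, like _warn_app_config above
    if not cfg.get("name"):
        raise ValueError("Invalid app config!")
    return cfg

validate = chain(migrate, warn)
```

Because migration runs first, the warn stage and the real schema only ever see the modern key names.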
def _migrate_addon_config(protocol=False):
"""Migrate addon config."""
def _migrate_app_config(protocol=False):
"""Migrate app config."""
def _migrate(config: dict[str, Any]):
if not isinstance(config, dict):
raise vol.Invalid("App config must be a dictionary!")
name = config.get(ATTR_NAME)
if not name:
raise vol.Invalid("Invalid Add-on config!")
raise vol.Invalid("Invalid app config!")
# Startup 2018-03-30
if config.get(ATTR_STARTUP) in ("before", "after"):
value = config[ATTR_STARTUP]
if protocol:
_LOGGER.warning(
"Add-on config 'startup' with '%s' is deprecated. Please report this to the maintainer of %s",
"App config 'startup' with '%s' is deprecated. Please report this to the maintainer of %s",
value,
name,
)
if value == "before":
config[ATTR_STARTUP] = AddonStartup.SERVICES
config[ATTR_STARTUP] = AppStartup.SERVICES
elif value == "after":
config[ATTR_STARTUP] = AddonStartup.APPLICATION
config[ATTR_STARTUP] = AppStartup.APPLICATION
# UART 2021-01-20
if "auto_uart" in config:
if protocol:
_LOGGER.warning(
"Add-on config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of %s",
"App config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of %s",
name,
)
config[ATTR_UART] = config.pop("auto_uart")
@@ -251,7 +286,7 @@ def _migrate_addon_config(protocol=False):
if ATTR_DEVICES in config and any(":" in line for line in config[ATTR_DEVICES]):
if protocol:
_LOGGER.warning(
"Add-on config 'devices' use a deprecated format, the new format uses a list of paths only. Please report this to the maintainer of %s",
"App config 'devices' uses a deprecated format instead of a list of paths only. Please report this to the maintainer of %s",
name,
)
config[ATTR_DEVICES] = [line.split(":")[0] for line in config[ATTR_DEVICES]]
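The devices migration above reduces the deprecated `host:container:permissions` triple to just the host path. Isolated as a helper (illustrative name, same one-liner):

```python
# Sketch of the 'devices' migration above: keep only the host path from
# the deprecated "host:container:permissions" form; plain paths pass
# through unchanged since split(":") leaves them whole.
def migrate_devices(devices: list[str]) -> list[str]:
    return [line.split(":")[0] for line in devices]
```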
@@ -260,7 +295,7 @@ def _migrate_addon_config(protocol=False):
if ATTR_TMPFS in config and not isinstance(config[ATTR_TMPFS], bool):
if protocol:
_LOGGER.warning(
"Add-on config 'tmpfs' use a deprecated format, new it's only a boolean. Please report this to the maintainer of %s",
"App config 'tmpfs' uses a deprecated format instead of just a boolean. Please report this to the maintainer of %s",
name,
)
config[ATTR_TMPFS] = True
@@ -276,7 +311,7 @@ def _migrate_addon_config(protocol=False):
new_entry = entry.replace("snapshot", "backup")
config[new_entry] = config.pop(entry)
_LOGGER.warning(
"Add-on config '%s' is deprecated, '%s' should be used instead. Please report this to the maintainer of %s",
"App config '%s' is deprecated, '%s' should be used instead. Please report this to the maintainer of %s",
entry,
new_entry,
name,
@@ -289,7 +324,7 @@ def _migrate_addon_config(protocol=False):
# Validate that dict entries have required 'type' field
if ATTR_TYPE not in entry:
_LOGGER.warning(
"Add-on config has invalid map entry missing 'type' field: %s. Skipping invalid entry for %s",
"App config has invalid map entry missing 'type' field: %s. Skipping invalid entry for %s",
entry,
name,
)
@@ -299,7 +334,7 @@ def _migrate_addon_config(protocol=False):
result = RE_VOLUME.match(entry)
if not result:
_LOGGER.warning(
"Add-on config has invalid map entry: %s. Skipping invalid entry for %s",
"App config has invalid map entry: %s. Skipping invalid entry for %s",
entry,
name,
)
@@ -314,7 +349,7 @@ def _migrate_addon_config(protocol=False):
# Always update config to clear potentially malformed ones
config[ATTR_MAP] = volumes
# 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
# 2023-10 "config" became "homeassistant" so /config can be used for app's public config
if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
if any(
volume
@@ -323,7 +358,7 @@ def _migrate_addon_config(protocol=False):
for volume in volumes
):
_LOGGER.warning(
"Add-on config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
"App config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
MappingType.ADDON_CONFIG,
MappingType.HOMEASSISTANT_CONFIG,
MappingType.CONFIG,
@@ -331,7 +366,7 @@ def _migrate_addon_config(protocol=False):
)
else:
_LOGGER.debug(
"Add-on config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
"App config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
MappingType.CONFIG,
MappingType.HOMEASSISTANT_CONFIG,
name,
@@ -349,18 +384,16 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Required(ATTR_VERSION): version_tag,
vol.Required(ATTR_SLUG): vol.Match(RE_SLUG_FIELD),
vol.Required(ATTR_DESCRIPTON): str,
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL)],
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL_COMPAT)],
vol.Optional(ATTR_MACHINE): vol.All([vol.Match(RE_MACHINE)], vol.Unique()),
vol.Optional(ATTR_URL): vol.Url(),
vol.Optional(ATTR_STARTUP, default=AddonStartup.APPLICATION): vol.Coerce(
AddonStartup
),
vol.Optional(ATTR_BOOT, default=AddonBootConfig.AUTO): vol.Coerce(
AddonBootConfig
vol.Optional(ATTR_STARTUP, default=AppStartup.APPLICATION): vol.Coerce(
AppStartup
),
vol.Optional(ATTR_BOOT, default=AppBootConfig.AUTO): vol.Coerce(AppBootConfig),
vol.Optional(ATTR_INIT, default=True): vol.Boolean(),
vol.Optional(ATTR_ADVANCED, default=False): vol.Boolean(),
vol.Optional(ATTR_STAGE, default=AddonStage.STABLE): vol.Coerce(AddonStage),
vol.Optional(ATTR_STAGE, default=AppStage.STABLE): vol.Coerce(AppStage),
vol.Optional(ATTR_PORTS): docker_ports,
vol.Optional(ATTR_PORTS_DESCRIPTION): docker_ports_description,
vol.Optional(ATTR_WATCHDOG): vol.Match(
@@ -420,9 +453,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_BACKUP_EXCLUDE): [str],
vol.Optional(ATTR_BACKUP_PRE): str,
vol.Optional(ATTR_BACKUP_POST): str,
vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
AddonBackupMode
),
vol.Optional(ATTR_BACKUP, default=AppBackupMode.HOT): vol.Coerce(AppBackupMode),
vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
vol.Schema({str: SCHEMA_ELEMENT}),
@@ -453,7 +484,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
)
SCHEMA_ADDON_CONFIG = vol.All(
_migrate_addon_config(True), _warn_addon_config, _SCHEMA_ADDON_CONFIG
_migrate_app_config(True), _warn_app_config, _SCHEMA_ADDON_CONFIG
)
@@ -462,7 +493,7 @@ SCHEMA_BUILD_CONFIG = vol.Schema(
{
vol.Optional(ATTR_BUILD_FROM, default=dict): vol.Any(
vol.Match(RE_DOCKER_IMAGE_BUILD),
vol.Schema({vol.In(ARCH_ALL): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
vol.Schema({vol.In(ARCH_ALL_COMPAT): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
),
vol.Optional(ATTR_SQUASH, default=False): vol.Boolean(),
vol.Optional(ATTR_ARGS, default=dict): vol.Schema({str: str}),
@@ -500,7 +531,7 @@ SCHEMA_ADDON_USER = vol.Schema(
vol.Optional(ATTR_INGRESS_TOKEN, default=secrets.token_urlsafe): str,
vol.Optional(ATTR_OPTIONS, default=dict): dict,
vol.Optional(ATTR_AUTO_UPDATE, default=False): vol.Boolean(),
vol.Optional(ATTR_BOOT): vol.Coerce(AddonBoot),
vol.Optional(ATTR_BOOT): vol.Coerce(AppBoot),
vol.Optional(ATTR_NETWORK): docker_ports,
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
@@ -514,7 +545,7 @@ SCHEMA_ADDON_USER = vol.Schema(
)
SCHEMA_ADDON_SYSTEM = vol.All(
_migrate_addon_config(),
_migrate_app_config(),
_SCHEMA_ADDON_CONFIG.extend(
{
vol.Required(ATTR_LOCATION): str,
@@ -540,7 +571,7 @@ SCHEMA_ADDON_BACKUP = vol.Schema(
{
vol.Required(ATTR_USER): SCHEMA_ADDON_USER,
vol.Required(ATTR_SYSTEM): SCHEMA_ADDON_SYSTEM,
vol.Required(ATTR_STATE): vol.Coerce(AddonState),
vol.Required(ATTR_STATE): vol.Coerce(AppState),
vol.Required(ATTR_VERSION): version_tag,
},
extra=vol.REMOVE_EXTRA,


@@ -8,11 +8,12 @@ from typing import Any
from aiohttp import hdrs, web
from ..const import SUPERVISOR_DOCKER_NAME, AddonState
from ..addons.addon import App
from ..const import SUPERVISOR_DOCKER_NAME, AppState, FeatureFlag
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIAddonNotInstalled, HostNotSupportedError
from ..exceptions import APIAppNotInstalled, HostNotSupportedError
from ..utils.sentry import async_capture_exception
from .addons import APIAddons
from .addons import APIApps
from .audio import APIAudio
from .auth import APIAuth
from .backups import APIBackups
@@ -76,6 +77,9 @@ class RestAPI(CoreSysAttributes):
"max_field_size": MAX_LINE_SIZE,
},
)
# v2 sub-app: no middleware of its own — parent webapp's middleware
# stack runs first for all requests including sub-app routes.
self.v2_app: web.Application = web.Application()
# service stuff
self._runner: web.AppRunner = web.AppRunner(self.webapp, shutdown_timeout=5)
@@ -85,11 +89,16 @@ class RestAPI(CoreSysAttributes):
self._api_host: APIHost = APIHost()
self._api_host.coresys = coresys
# handler instances shared between v1 and v2 registrations
self._api_apps: APIApps | None = None
self._api_backups: APIBackups | None = None
self._api_store: APIStore | None = None
async def load(self) -> None:
"""Register REST API Calls."""
static_resource_configs: list[StaticResourceConfig] = []
self._register_addons()
self._register_apps()
self._register_audio()
self._register_auth()
self._register_backups()
@@ -116,6 +125,14 @@ class RestAPI(CoreSysAttributes):
self._register_store()
self._register_supervisor()
# Register v2 routes before mounting the sub-app
# (add_subapp freezes the sub-app's router)
if self.sys_config.feature_flags.get(FeatureFlag.SUPERVISOR_V2_API, False):
self._register_v2_apps()
self._register_v2_backups()
self._register_v2_store()
self.webapp.add_subapp("/v2", self.v2_app)
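The ordering comment above is load-bearing: aiohttp freezes a sub-app's router when it is mounted with `add_subapp`, so any route added afterwards raises. A toy model of that constraint (simplified classes, not aiohttp itself):

```python
# Toy model of the constraint noted above: once the router is frozen
# (which mounting via add_subapp does), registering routes must fail.
class Router:
    def __init__(self) -> None:
        self.routes: list[str] = []
        self.frozen = False

    def add(self, path: str) -> None:
        if self.frozen:
            raise RuntimeError("Cannot register a route on a frozen router")
        self.routes.append(path)

    def freeze(self) -> None:
        self.frozen = True

sub = Router()
sub.add("/apps")   # register v2 routes first ...
sub.freeze()       # ... then mount (add_subapp freezes the sub-app)
```

This is why `_register_v2_apps` and friends run before the `add_subapp("/v2", ...)` call rather than after it.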
if static_resource_configs:
def process_configs() -> list[web.StaticResource]:
@@ -129,14 +146,23 @@ class RestAPI(CoreSysAttributes):
await self.start()
def _register_advanced_logs(self, path: str, syslog_identifier: str):
def _register_advanced_logs(
self,
path: str,
syslog_identifier: str,
default_verbose: bool = False,
):
"""Register logs endpoint for a given path, returning logs for single syslog identifier."""
self.webapp.add_routes(
[
web.get(
f"{path}/logs",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/follow",
@@ -144,6 +170,7 @@ class RestAPI(CoreSysAttributes):
self._api_host.advanced_logs,
identifier=syslog_identifier,
follow=True,
default_verbose=default_verbose,
),
),
web.get(
@@ -153,11 +180,16 @@ class RestAPI(CoreSysAttributes):
identifier=syslog_identifier,
latest=True,
no_colors=True,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
default_verbose=default_verbose,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}/follow",
@@ -165,6 +197,7 @@ class RestAPI(CoreSysAttributes):
self._api_host.advanced_logs,
identifier=syslog_identifier,
follow=True,
default_verbose=default_verbose,
),
),
]
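The registrations above share one generic handler and specialize it per route with `functools.partial`: `identifier`, `follow`, and `default_verbose` are bound at registration time, while the request arrives at call time. A stdlib sketch of that pattern (the handler body here is a stand-in that just echoes its arguments):

```python
from functools import partial

# One generic log handler, specialized per route with functools.partial,
# mirroring how advanced_logs is wired above.
def advanced_logs(request, identifier=None, follow=False, default_verbose=False):
    return {
        "request": request,
        "identifier": identifier,
        "follow": follow,
        "verbose": default_verbose,
    }

# e.g. the /dns/logs/follow registration above
follow_route = partial(
    advanced_logs, identifier="hassio_dns", follow=True, default_verbose=True
)
```

Each route gets its own pre-configured callable without subclassing or per-route wrapper functions.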
@@ -177,10 +210,13 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/host/info", api_host.info),
web.get("/host/logs", api_host.advanced_logs),
web.get(
"/host/logs",
partial(api_host.advanced_logs, default_verbose=True),
),
web.get(
"/host/logs/follow",
partial(api_host.advanced_logs, follow=True),
partial(api_host.advanced_logs, follow=True, default_verbose=True),
),
web.get("/host/logs/identifiers", api_host.list_identifiers),
web.get("/host/logs/identifiers/{identifier}", api_host.advanced_logs),
@@ -189,10 +225,13 @@ class RestAPI(CoreSysAttributes):
partial(api_host.advanced_logs, follow=True),
),
web.get("/host/logs/boots", api_host.list_boots),
web.get("/host/logs/boots/{bootid}", api_host.advanced_logs),
web.get(
"/host/logs/boots/{bootid}",
partial(api_host.advanced_logs, default_verbose=True),
),
web.get(
"/host/logs/boots/{bootid}/follow",
partial(api_host.advanced_logs, follow=True),
partial(api_host.advanced_logs, follow=True, default_verbose=True),
),
web.get(
"/host/logs/boots/{bootid}/identifiers/{identifier}",
@@ -335,7 +374,9 @@ class RestAPI(CoreSysAttributes):
web.post("/multicast/restart", api_multicast.restart),
]
)
self._register_advanced_logs("/multicast", "hassio_multicast")
self._register_advanced_logs(
"/multicast", "hassio_multicast", default_verbose=True
)
def _register_hardware(self) -> None:
"""Register hardware functions."""
@@ -539,74 +580,118 @@ class RestAPI(CoreSysAttributes):
]
)
def _register_addons(self) -> None:
"""Register Add-on functions."""
api_addons = APIAddons()
api_addons.coresys = self.coresys
def _register_apps(self) -> None:
"""Register App functions."""
api_apps = APIApps()
api_apps.coresys = self.coresys
self._api_apps = api_apps
self.webapp.add_routes(
[
web.get("/addons", api_addons.list_addons),
web.post("/addons/{addon}/uninstall", api_addons.uninstall),
web.post("/addons/{addon}/start", api_addons.start),
web.post("/addons/{addon}/stop", api_addons.stop),
web.post("/addons/{addon}/restart", api_addons.restart),
web.post("/addons/{addon}/options", api_addons.options),
web.post("/addons/{addon}/sys_options", api_addons.sys_options),
web.post(
"/addons/{addon}/options/validate", api_addons.options_validate
),
web.get("/addons/{addon}/options/config", api_addons.options_config),
web.post("/addons/{addon}/rebuild", api_addons.rebuild),
web.post("/addons/{addon}/stdin", api_addons.stdin),
web.post("/addons/{addon}/security", api_addons.security),
web.get("/addons/{addon}/stats", api_addons.stats),
web.get("/addons", api_apps.list_apps_v1),
web.post("/addons/{app}/uninstall", api_apps.uninstall),
web.post("/addons/{app}/start", api_apps.start),
web.post("/addons/{app}/stop", api_apps.stop),
web.post("/addons/{app}/restart", api_apps.restart),
web.post("/addons/{app}/options", api_apps.options),
web.post("/addons/{app}/sys_options", api_apps.sys_options),
web.post("/addons/{app}/options/validate", api_apps.options_validate),
web.get("/addons/{app}/options/config", api_apps.options_config),
web.post("/addons/{app}/rebuild", api_apps.rebuild),
web.post("/addons/{app}/stdin", api_apps.stdin),
web.post("/addons/{app}/security", api_apps.security),
web.get("/addons/{app}/stats", api_apps.stats),
]
)
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
async def get_addon_logs(request, *args, **kwargs):
addon = api_addons.get_addon_for_request(request)
kwargs["identifier"] = f"addon_{addon.slug}"
async def get_app_logs(request, *args, **kwargs):
app = api_apps.get_app_for_request(request)
kwargs["identifier"] = f"addon_{app.slug}"
return await self._api_host.advanced_logs(request, *args, **kwargs)
self.webapp.add_routes(
[
web.get("/addons/{addon}/logs", get_addon_logs),
web.get("/addons/{app}/logs", get_app_logs),
web.get(
"/addons/{addon}/logs/follow",
partial(get_addon_logs, follow=True),
"/addons/{app}/logs/follow",
partial(get_app_logs, follow=True),
),
web.get(
"/addons/{addon}/logs/latest",
partial(get_addon_logs, latest=True, no_colors=True),
"/addons/{app}/logs/latest",
partial(get_app_logs, latest=True, no_colors=True),
),
web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
web.get("/addons/{app}/logs/boots/{bootid}", get_app_logs),
web.get(
"/addons/{addon}/logs/boots/{bootid}/follow",
partial(get_addon_logs, follow=True),
"/addons/{app}/logs/boots/{bootid}/follow",
partial(get_app_logs, follow=True),
),
]
)
# Legacy routing to support requests for not installed addons
# Legacy routing to support requests for not installed apps
api_store = APIStore()
api_store.coresys = self.coresys
@api_process
async def addons_addon_info(request: web.Request) -> dict[str, Any]:
"""Route to store if info requested for not installed addon."""
async def apps_app_info(request: web.Request) -> dict[str, Any]:
"""Route to store if info requested for not installed app."""
try:
return await api_addons.info(request)
except APIAddonNotInstalled:
# Route to store/{addon}/info but add missing fields
app: App = api_apps.get_app_for_request(request)
return await api_apps.info_data(app)
except APIAppNotInstalled:
# Route to store/{app}/info but add missing fields
return dict(
await api_store.addons_addon_info_wrapped(request),
state=AddonState.UNKNOWN,
options=self.sys_addons.store[request.match_info["addon"]].options,
await api_store.apps_app_info_wrapped(request),
state=AppState.UNKNOWN,
options=self.sys_apps.store[request.match_info["app"]].options,
)
self.webapp.add_routes([web.get("/addons/{addon}/info", addons_addon_info)])
self.webapp.add_routes([web.get("/addons/{app}/info", apps_app_info)])
def _register_v2_apps(self) -> None:
"""Register v2 app routes on the v2 sub-app (accessible as /v2/apps/...)."""
assert self._api_apps is not None
api_apps = self._api_apps
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
async def get_app_logs_v2(request, *args, **kwargs):
app = api_apps.get_app_for_request(request)
kwargs["identifier"] = f"addon_{app.slug}"
return await self._api_host.advanced_logs(request, *args, **kwargs)
self.v2_app.add_routes(
[
web.get("/apps", api_apps.list_apps),
web.post("/apps/{app}/uninstall", api_apps.uninstall),
web.post("/apps/{app}/start", api_apps.start),
web.post("/apps/{app}/stop", api_apps.stop),
web.post("/apps/{app}/restart", api_apps.restart),
web.post("/apps/{app}/options", api_apps.options),
web.post("/apps/{app}/sys_options", api_apps.sys_options),
web.post("/apps/{app}/options/validate", api_apps.options_validate),
web.get("/apps/{app}/options/config", api_apps.options_config),
web.post("/apps/{app}/rebuild", api_apps.rebuild),
web.post("/apps/{app}/stdin", api_apps.stdin),
web.post("/apps/{app}/security", api_apps.security),
web.get("/apps/{app}/stats", api_apps.stats),
web.get("/apps/{app}/info", api_apps.info),
web.get("/apps/{app}/logs", get_app_logs_v2),
web.get(
"/apps/{app}/logs/follow",
partial(get_app_logs_v2, follow=True),
),
web.get(
"/apps/{app}/logs/latest",
partial(get_app_logs_v2, latest=True, no_colors=True),
),
web.get("/apps/{app}/logs/boots/{bootid}", get_app_logs_v2),
web.get(
"/apps/{app}/logs/boots/{bootid}/follow",
partial(get_app_logs_v2, follow=True),
),
]
)
def _register_ingress(self) -> None:
"""Register Ingress functions."""
@@ -628,8 +713,36 @@ class RestAPI(CoreSysAttributes):
"""Register backups functions."""
api_backups = APIBackups()
api_backups.coresys = self.coresys
self._api_backups = api_backups
self.webapp.add_routes(
[
web.get("/backups", api_backups.list_backups_v1),
web.get("/backups/info", api_backups.info_v1),
web.post("/backups/options", api_backups.options),
web.post("/backups/reload", api_backups.reload),
web.post("/backups/freeze", api_backups.freeze),
web.post("/backups/thaw", api_backups.thaw),
web.post("/backups/new/full", api_backups.backup_full),
web.post("/backups/new/partial", api_backups.backup_partial_v1),
web.post("/backups/new/upload", api_backups.upload),
web.get("/backups/{slug}/info", api_backups.backup_info_v1),
web.delete("/backups/{slug}", api_backups.remove),
web.post("/backups/{slug}/restore/full", api_backups.restore_full),
web.post(
"/backups/{slug}/restore/partial",
api_backups.restore_partial_v1,
),
web.get("/backups/{slug}/download", api_backups.download),
]
)
def _register_v2_backups(self) -> None:
"""Register v2 backup routes on the v2 sub-app (accessible as /v2/backups/...)."""
assert self._api_backups is not None
api_backups = self._api_backups
self.v2_app.add_routes(
[
web.get("/backups", api_backups.list_backups),
web.get("/backups/info", api_backups.info),
@@ -695,7 +808,7 @@ class RestAPI(CoreSysAttributes):
]
)
self._register_advanced_logs("/dns", "hassio_dns")
self._register_advanced_logs("/dns", "hassio_dns", default_verbose=True)
def _register_audio(self) -> None:
"""Register Audio functions."""
@@ -718,7 +831,7 @@ class RestAPI(CoreSysAttributes):
]
)
self._register_advanced_logs("/audio", "hassio_audio")
self._register_advanced_logs("/audio", "hassio_audio", default_verbose=True)
def _register_mounts(self) -> None:
"""Register mounts endpoints."""
@@ -740,39 +853,36 @@ class RestAPI(CoreSysAttributes):
"""Register store endpoints."""
api_store = APIStore()
api_store.coresys = self.coresys
self._api_store = api_store
self.webapp.add_routes(
[
web.get("/store", api_store.store_info),
web.get("/store/addons", api_store.addons_list),
web.get("/store/addons/{addon}", api_store.addons_addon_info),
web.get("/store/addons/{addon}/icon", api_store.addons_addon_icon),
web.get("/store/addons/{addon}/logo", api_store.addons_addon_logo),
web.get("/store", api_store.store_info_v1),
web.get("/store/addons", api_store.apps_list_v1),
web.get("/store/addons/{app}", api_store.apps_app_info),
web.get("/store/addons/{app}/icon", api_store.apps_app_icon),
web.get("/store/addons/{app}/logo", api_store.apps_app_logo),
web.get("/store/addons/{app}/changelog", api_store.apps_app_changelog),
web.get(
"/store/addons/{addon}/changelog", api_store.addons_addon_changelog
"/store/addons/{app}/documentation",
api_store.apps_app_documentation,
),
web.get(
"/store/addons/{addon}/documentation",
api_store.addons_addon_documentation,
),
web.get(
"/store/addons/{addon}/availability",
api_store.addons_addon_availability,
"/store/addons/{app}/availability",
api_store.apps_app_availability,
),
web.post("/store/addons/{app}/install", api_store.apps_app_install),
web.post(
"/store/addons/{addon}/install", api_store.addons_addon_install
"/store/addons/{app}/install/{version}",
api_store.apps_app_install,
),
web.post("/store/addons/{app}/update", api_store.apps_app_update),
web.post(
"/store/addons/{addon}/install/{version}",
api_store.addons_addon_install,
),
web.post("/store/addons/{addon}/update", api_store.addons_addon_update),
web.post(
"/store/addons/{addon}/update/{version}",
api_store.addons_addon_update,
"/store/addons/{app}/update/{version}",
api_store.apps_app_update,
),
# Must be below others since it has a wildcard in resource path
web.get("/store/addons/{addon}/{version}", api_store.addons_addon_info),
web.get("/store/addons/{app}/{version}", api_store.apps_app_info),
web.post("/store/reload", api_store.reload),
web.get("/store/repositories", api_store.repositories_list),
web.get(
@@ -794,14 +904,64 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.post("/addons/reload", api_store.reload),
web.post("/addons/{addon}/install", api_store.addons_addon_install),
web.post("/addons/{addon}/update", api_store.addons_addon_update),
web.get("/addons/{addon}/icon", api_store.addons_addon_icon),
web.get("/addons/{addon}/logo", api_store.addons_addon_logo),
web.get("/addons/{addon}/changelog", api_store.addons_addon_changelog),
web.post("/addons/{app}/install", api_store.apps_app_install),
web.post("/addons/{app}/update", api_store.apps_app_update),
web.get("/addons/{app}/icon", api_store.apps_app_icon),
web.get("/addons/{app}/logo", api_store.apps_app_logo),
web.get("/addons/{app}/changelog", api_store.apps_app_changelog),
web.get(
"/addons/{addon}/documentation",
api_store.addons_addon_documentation,
"/addons/{app}/documentation",
api_store.apps_app_documentation,
),
]
)
def _register_v2_store(self) -> None:
"""Register v2 store routes on the v2 sub-app (accessible as /v2/store/...)."""
assert self._api_store is not None
api_store = self._api_store
self.v2_app.add_routes(
[
web.get("/store", api_store.store_info),
web.get("/store/apps", api_store.apps_list),
web.get("/store/apps/{app}", api_store.apps_app_info),
web.get("/store/apps/{app}/icon", api_store.apps_app_icon),
web.get("/store/apps/{app}/logo", api_store.apps_app_logo),
web.get("/store/apps/{app}/changelog", api_store.apps_app_changelog),
web.get(
"/store/apps/{app}/documentation",
api_store.apps_app_documentation,
),
web.get(
"/store/apps/{app}/availability",
api_store.apps_app_availability,
),
web.post("/store/apps/{app}/install", api_store.apps_app_install),
web.post(
"/store/apps/{app}/install/{version}",
api_store.apps_app_install,
),
web.post("/store/apps/{app}/update", api_store.apps_app_update),
web.post(
"/store/apps/{app}/update/{version}",
api_store.apps_app_update,
),
# Must be below others since it has a wildcard in resource path
web.get("/store/apps/{app}/{version}", api_store.apps_app_info),
web.post("/store/reload", api_store.reload),
web.get("/store/repositories", api_store.repositories_list),
web.get(
"/store/repositories/{repository}",
api_store.repositories_repository_info,
),
web.post("/store/repositories", api_store.add_repository),
web.delete(
"/store/repositories/{repository}", api_store.remove_repository
),
web.post(
"/store/repositories/{repository}/repair",
api_store.repositories_repository_repair,
),
]
)
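The "Must be below others since it has a wildcard" comment reflects first-match route resolution: `/store/apps/{app}/{version}` would swallow `/store/apps/{app}/icon` if it were registered earlier. A toy first-match matcher showing the effect (hypothetical helper, not aiohttp's actual dispatcher):

```python
import re

# Compile "{name}" placeholders into named regex groups and match routes
# in registration order, as a simplified model of first-match dispatch.
def compile_route(pattern: str):
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern)
    return re.compile(f"^{regex}$")

routes = [
    "/store/apps/{app}/icon",
    "/store/apps/{app}/{version}",  # wildcard: must stay last
]
compiled = [(p, compile_route(p)) for p in routes]

def match(path: str):
    for pattern, rx in compiled:
        if rx.match(path):
            return pattern
    return None
```

With the wildcard last, `/store/apps/core_ssh/icon` still reaches the icon handler; swap the list order and every such request would be captured as a `{version}` lookup instead.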


@@ -9,12 +9,13 @@ from aiohttp import web
import voluptuous as vol
from voluptuous.humanize import humanize_error
from ..addons.addon import Addon
from ..addons.addon import App
from ..addons.utils import rating_security
from ..const import (
ATTR_ADDONS,
ATTR_ADVANCED,
ATTR_APPARMOR,
ATTR_APPS,
ATTR_ARCH,
ATTR_AUDIO,
ATTR_AUDIO_INPUT,
@@ -94,19 +95,19 @@ from ..const import (
ATTR_WATCHDOG,
ATTR_WEBUI,
REQUEST_FROM,
AddonBoot,
AddonBootConfig,
AppBoot,
AppBootConfig,
)
from ..coresys import CoreSysAttributes
from ..docker.stats import DockerStats
from ..exceptions import (
AddonBootConfigCannotChangeError,
AddonConfigurationInvalidError,
AddonNotSupportedWriteStdinError,
APIAddonNotInstalled,
APIAppNotInstalled,
APIError,
APIForbidden,
APINotFound,
AppBootConfigCannotChangeError,
AppConfigurationInvalidError,
AppNotSupportedWriteStdinError,
PwnedError,
PwnedSecret,
)
@@ -121,7 +122,7 @@ SCHEMA_VERSION = vol.Schema({vol.Optional(ATTR_VERSION): str})
# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema(
{
vol.Optional(ATTR_BOOT): vol.Coerce(AddonBoot),
vol.Optional(ATTR_BOOT): vol.Coerce(AppBoot),
vol.Optional(ATTR_NETWORK): vol.Maybe(docker_ports),
vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
@@ -157,149 +158,157 @@ class OptionsValidateResponse(TypedDict):
pwned: bool | None
class APIAddons(CoreSysAttributes):
"""Handle RESTful API for add-on functions."""
class APIApps(CoreSysAttributes):
"""Handle RESTful API for app functions."""
def get_addon_for_request(self, request: web.Request) -> Addon:
"""Return addon, throw an exception if it doesn't exist."""
addon_slug: str = request.match_info["addon"]
def get_app_for_request(self, request: web.Request) -> App:
"""Return app, throw an exception if it doesn't exist."""
app_slug: str = request.match_info["app"]
# Lookup itself
if addon_slug == "self":
addon = request.get(REQUEST_FROM)
if not isinstance(addon, Addon):
raise APIError("Self is not an Addon")
return addon
if app_slug == "self":
app = request.get(REQUEST_FROM)
if not isinstance(app, App):
raise APIError("Self is not an App")
return app
addon = self.sys_addons.get(addon_slug)
if not addon:
raise APINotFound(f"Addon {addon_slug} does not exist")
if not isinstance(addon, Addon) or not addon.is_installed:
raise APIAddonNotInstalled("Addon is not installed")
app = self.sys_apps.get(app_slug)
if not app:
raise APINotFound(f"App {app_slug} does not exist")
if not isinstance(app, App) or not app.is_installed:
raise APIAppNotInstalled("App is not installed")
return addon
return app
@api_process
async def list_addons(self, request: web.Request) -> dict[str, Any]:
"""Return all add-ons or repositories."""
data_addons = [
def _list_apps_data(self) -> list[dict[str, Any]]:
"""Build the list of installed app data dicts."""
return [
{
ATTR_NAME: app.name,
ATTR_SLUG: app.slug,
ATTR_DESCRIPTON: app.description,
ATTR_ADVANCED: app.advanced, # Deprecated 2026.03
ATTR_STAGE: app.stage,
ATTR_VERSION: app.version,
ATTR_VERSION_LATEST: app.latest_version,
ATTR_UPDATE_AVAILABLE: app.need_update,
ATTR_AVAILABLE: app.available,
ATTR_DETACHED: app.is_detached,
ATTR_HOMEASSISTANT: app.homeassistant_version,
ATTR_STATE: app.state,
ATTR_REPOSITORY: app.repository,
ATTR_BUILD: app.need_build,
ATTR_URL: app.url,
ATTR_ICON: app.with_icon,
ATTR_LOGO: app.with_logo,
ATTR_SYSTEM_MANAGED: app.system_managed,
}
            for app in self.sys_apps.installed
]
@api_process
async def list_apps(self, request: web.Request) -> dict[str, Any]:
"""Return all installed apps (v2: uses "apps" key)."""
return {ATTR_APPS: self._list_apps_data()}
@api_process
async def list_apps_v1(self, request: web.Request) -> dict[str, Any]:
"""Return all installed apps (v1: uses "addons" key)."""
return {ATTR_ADDONS: self._list_apps_data()}
    @api_process
    async def reload(self, request: web.Request) -> None:
        """Reload all app data from store."""
        await asyncio.shield(self.sys_store.reload())
async def info_data(self, app: App) -> dict[str, Any]:
"""Build and return app information dict (raises on invalid state)."""
return {
ATTR_NAME: app.name,
ATTR_SLUG: app.slug,
ATTR_HOSTNAME: app.hostname,
ATTR_DNS: app.dns,
ATTR_DESCRIPTON: app.description,
ATTR_LONG_DESCRIPTION: await app.long_description(),
ATTR_ADVANCED: app.advanced, # Deprecated 2026.03
ATTR_STAGE: app.stage,
ATTR_REPOSITORY: app.repository,
ATTR_VERSION_LATEST: app.latest_version,
ATTR_PROTECTED: app.protected,
ATTR_RATING: rating_security(app),
ATTR_BOOT_CONFIG: app.boot_config,
ATTR_BOOT: app.boot,
ATTR_OPTIONS: app.options,
ATTR_SCHEMA: app.schema_ui,
ATTR_ARCH: app.supported_arch,
ATTR_MACHINE: app.supported_machine,
ATTR_HOMEASSISTANT: app.homeassistant_version,
ATTR_URL: app.url,
ATTR_DETACHED: app.is_detached,
ATTR_AVAILABLE: app.available,
ATTR_BUILD: app.need_build,
ATTR_NETWORK: app.ports,
ATTR_NETWORK_DESCRIPTION: app.ports_description,
ATTR_HOST_NETWORK: app.host_network,
ATTR_HOST_PID: app.host_pid,
ATTR_HOST_IPC: app.host_ipc,
ATTR_HOST_UTS: app.host_uts,
ATTR_HOST_DBUS: app.host_dbus,
ATTR_PRIVILEGED: app.privileged,
ATTR_FULL_ACCESS: app.with_full_access,
ATTR_APPARMOR: app.apparmor,
ATTR_ICON: app.with_icon,
ATTR_LOGO: app.with_logo,
ATTR_CHANGELOG: app.with_changelog,
ATTR_DOCUMENTATION: app.with_documentation,
ATTR_STDIN: app.with_stdin,
ATTR_HASSIO_API: app.access_hassio_api,
ATTR_HASSIO_ROLE: app.hassio_role,
ATTR_AUTH_API: app.access_auth_api,
ATTR_HOMEASSISTANT_API: app.access_homeassistant_api,
ATTR_GPIO: app.with_gpio,
ATTR_USB: app.with_usb,
ATTR_UART: app.with_uart,
ATTR_KERNEL_MODULES: app.with_kernel_modules,
ATTR_DEVICETREE: app.with_devicetree,
ATTR_UDEV: app.with_udev,
ATTR_DOCKER_API: app.access_docker_api,
ATTR_VIDEO: app.with_video,
ATTR_AUDIO: app.with_audio,
ATTR_STARTUP: app.startup,
ATTR_SERVICES: _pretty_services(app),
ATTR_DISCOVERY: app.discovery,
ATTR_TRANSLATIONS: app.translations,
ATTR_INGRESS: app.with_ingress,
ATTR_SIGNED: app.signed,
ATTR_STATE: app.state,
ATTR_WEBUI: app.webui,
ATTR_INGRESS_ENTRY: app.ingress_entry,
ATTR_INGRESS_URL: app.ingress_url,
ATTR_INGRESS_PORT: app.ingress_port,
ATTR_INGRESS_PANEL: app.ingress_panel,
ATTR_AUDIO_INPUT: app.audio_input,
ATTR_AUDIO_OUTPUT: app.audio_output,
ATTR_AUTO_UPDATE: app.auto_update,
ATTR_IP_ADDRESS: str(app.ip_address),
ATTR_VERSION: app.version,
ATTR_UPDATE_AVAILABLE: app.need_update,
ATTR_WATCHDOG: app.watchdog,
ATTR_DEVICES: app.static_devices + [device.path for device in app.devices],
ATTR_SYSTEM_MANAGED: app.system_managed,
ATTR_SYSTEM_MANAGED_CONFIG_ENTRY: app.system_managed_config_entry,
}
@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return app information."""
app: App = self.get_app_for_request(request)
return await self.info_data(app)
    @api_process
    async def options(self, request: web.Request) -> None:
        """Store user options for app."""
        app = self.get_app_for_request(request)
# Update secrets for validation
await self.sys_homeassistant.secrets.reload()
@@ -309,61 +318,61 @@ class APIAddons(CoreSysAttributes):
        if ATTR_OPTIONS in body:
            # None resets options to defaults, otherwise validate the options
            if body[ATTR_OPTIONS] is None:
                app.options = None
            else:
                try:
                    app.options = app.schema(body[ATTR_OPTIONS])
                except vol.Invalid as ex:
                    raise AppConfigurationInvalidError(
                        app=app.slug,
                        validation_error=humanize_error(body[ATTR_OPTIONS], ex),
                    ) from None

        if ATTR_BOOT in body:
            if app.boot_config == AppBootConfig.MANUAL_ONLY:
                raise AppBootConfigCannotChangeError(
                    app=app.slug, boot_config=app.boot_config.value
                )
            app.boot = body[ATTR_BOOT]
        if ATTR_AUTO_UPDATE in body:
            app.auto_update = body[ATTR_AUTO_UPDATE]

        if ATTR_NETWORK in body:
            app.ports = body[ATTR_NETWORK]

        if ATTR_AUDIO_INPUT in body:
            app.audio_input = body[ATTR_AUDIO_INPUT]

        if ATTR_AUDIO_OUTPUT in body:
            app.audio_output = body[ATTR_AUDIO_OUTPUT]

        if ATTR_INGRESS_PANEL in body:
            app.ingress_panel = body[ATTR_INGRESS_PANEL]
            await self.sys_ingress.update_hass_panel(app)

        if ATTR_WATCHDOG in body:
            app.watchdog = body[ATTR_WATCHDOG]

        await app.save_persist()
    @api_process
    async def sys_options(self, request: web.Request) -> None:
        """Store system options for an app."""
        app = self.get_app_for_request(request)

        # Validate/Process Body
        body = await api_validate(SCHEMA_SYS_OPTIONS, request)

        if ATTR_SYSTEM_MANAGED in body:
            app.system_managed = body[ATTR_SYSTEM_MANAGED]

        if ATTR_SYSTEM_MANAGED_CONFIG_ENTRY in body:
            app.system_managed_config_entry = body[ATTR_SYSTEM_MANAGED_CONFIG_ENTRY]

        await app.save_persist()
    @api_process
    async def options_validate(self, request: web.Request) -> OptionsValidateResponse:
        """Validate user options for app."""
        app = self.get_app_for_request(request)
        data = OptionsValidateResponse(message="", valid=True, pwned=False)

        options = await request.json(loads=json_loads) or app.options

        # Validate config
        options_schema = app.schema

        try:
            options_schema.validate(options)
        except vol.Invalid as ex:
@@ -389,43 +398,43 @@ class APIAddons(CoreSysAttributes):
        if data["pwned"] is None:
            data["message"] = "Error occurred during pwned secrets check!"
        else:
            data["message"] = "App uses pwned secrets!"

        return data
    @api_process
    async def options_config(self, request: web.Request) -> dict[str, Any]:
        """Validate user options for app."""
        slug: str = request.match_info["app"]
        if slug != "self":
            raise APIForbidden("This can only be read by the app itself!")
        app = self.get_app_for_request(request)

        # Lookup/reload secrets
        await self.sys_homeassistant.secrets.reload()
        try:
            return app.schema.validate(app.options)
        except vol.Invalid:
            raise APIError("Invalid configuration data for the app") from None
    @api_process
    async def security(self, request: web.Request) -> None:
        """Store security options for app."""
        app = self.get_app_for_request(request)
        body: dict[str, Any] = await api_validate(SCHEMA_SECURITY, request)

        if ATTR_PROTECTED in body:
            _LOGGER.warning("Changing protected flag for %s!", app.slug)
            app.protected = body[ATTR_PROTECTED]

        await app.save_persist()
    @api_process
    async def stats(self, request: web.Request) -> dict[str, Any]:
        """Return resource information."""
        app = self.get_app_for_request(request)

        stats: DockerStats = await app.stats()
return {
ATTR_CPU_PERCENT: stats.cpu_percent,
@@ -440,57 +449,55 @@ class APIAddons(CoreSysAttributes):
    @api_process
    async def uninstall(self, request: web.Request) -> None:
        """Uninstall app."""
        app = self.get_app_for_request(request)
        body: dict[str, Any] = await api_validate(SCHEMA_UNINSTALL, request)
        await asyncio.shield(
            self.sys_apps.uninstall(app.slug, remove_config=body[ATTR_REMOVE_CONFIG])
        )
    @api_process
    async def start(self, request: web.Request) -> None:
        """Start app."""
        app = self.get_app_for_request(request)
        if start_task := await asyncio.shield(app.start()):
            await start_task
    @api_process
    def stop(self, request: web.Request) -> Awaitable[None]:
        """Stop app."""
        app = self.get_app_for_request(request)
        return asyncio.shield(app.stop())
    @api_process
    async def restart(self, request: web.Request) -> None:
        """Restart app."""
        app: App = self.get_app_for_request(request)
        if start_task := await asyncio.shield(app.restart()):
            await start_task
    @api_process
    async def rebuild(self, request: web.Request) -> None:
        """Rebuild local build app."""
        app = self.get_app_for_request(request)
        body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
        if start_task := await asyncio.shield(
            self.sys_apps.rebuild(app.slug, force=body[ATTR_FORCE])
        ):
            await start_task
    @api_process
    async def stdin(self, request: web.Request) -> None:
        """Write to stdin of app."""
        app = self.get_app_for_request(request)
        if not app.with_stdin:
            raise AppNotSupportedWriteStdinError(_LOGGER.error, app=app.slug)

        data = await request.read()
        await asyncio.shield(app.write_stdin(data))
def _pretty_services(app: App) -> list[str]:
    """Return a simplified services role list."""
    return [f"{name}:{access}" for name, access in app.services_role.items()]


@@ -12,7 +12,7 @@ from aiohttp.web_exceptions import HTTPUnauthorized
from multidict import MultiDictProxy
import voluptuous as vol
from ..addons.addon import App
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
@@ -44,18 +44,21 @@ REALM_HEADER: dict[str, str] = {
class APIAuth(CoreSysAttributes):
"""Handle RESTful API for auth functions."""
    def _process_basic(self, request: web.Request, app: App) -> Awaitable[bool]:
        """Process login request with basic auth.

        Return a coroutine.
        """
        try:
            auth = BasicAuth.decode(request.headers[AUTHORIZATION])
        except ValueError as err:
            raise HTTPUnauthorized(headers=REALM_HEADER) from err
        return self.sys_auth.check_login(app, auth.login, auth.password)
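The guarded decode relies on aiohttp's `BasicAuth.decode` raising `ValueError` for a malformed `Authorization` header. A stdlib-only sketch of that behavior (a hypothetical helper, not the aiohttp implementation) shows why the `try`/`except` wrapper matters:

```python
import base64
import binascii


def decode_basic_auth(header: str) -> tuple[str, str]:
    """Decode an HTTP Basic Authorization header; raise ValueError if malformed."""
    scheme, _, encoded = header.partition(" ")
    if scheme.lower() != "basic":
        raise ValueError("Unsupported auth scheme")
    try:
        # validate=True rejects non-alphabet characters instead of ignoring them
        decoded = base64.b64decode(encoded.strip(), validate=True).decode()
    except (binascii.Error, UnicodeDecodeError) as err:
        raise ValueError("Invalid base64 credentials") from err
    login, _, password = decoded.partition(":")
    return login, password


# decode_basic_auth("Basic dXNlcjpwYXNz") returns ("user", "pass"),
# while a garbage header raises ValueError, which the endpoint maps to 401.
```

Without the guard, a client sending a malformed header would surface an unhandled `ValueError` as a 500 instead of the intended 401 with a `WWW-Authenticate` realm.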
def _process_dict(
self,
request: web.Request,
        app: App,
data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
) -> Awaitable[bool]:
"""Process login with dict data.
@@ -73,35 +76,33 @@ class APIAuth(CoreSysAttributes):
_LOGGER.error, headers=REALM_HEADER
) from None
        return self.sys_auth.check_login(app, cast(str, username), cast(str, password))
    @api_process
    async def auth(self, request: web.Request) -> bool:
        """Process login request."""
        app = request[REQUEST_FROM]
        if not isinstance(app, App) or not app.access_auth_api:
            raise APIForbidden("Can't use Home Assistant auth!")

        # BasicAuth
        if AUTHORIZATION in request.headers:
            if not await self._process_basic(request, app):
                raise HTTPUnauthorized(headers=REALM_HEADER)
            return True

        # Json
        if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
            data = await request.json(loads=json_loads)
            if not await self._process_dict(request, app, data):
                raise HTTPUnauthorized()
            return True

        # URL encoded
        if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
            data = await request.post()
            if not await self._process_dict(request, app, data):
                raise HTTPUnauthorized()
            return True
@@ -127,14 +128,14 @@ class APIAuth(CoreSysAttributes):
        return {
            ATTR_USERS: [
                {
                    ATTR_USERNAME: user.username,
                    ATTR_NAME: user.name,
                    ATTR_IS_OWNER: user.is_owner,
                    ATTR_IS_ACTIVE: user.is_active,
                    ATTR_LOCAL_ONLY: user.local_only,
                    ATTR_GROUP_IDS: user.group_ids,
                }
                for user in await self.sys_auth.list_users()
                if user.username
            ]
        }


@@ -3,7 +3,6 @@
from __future__ import annotations
import asyncio
from io import BufferedWriter
import logging
from pathlib import Path
@@ -21,6 +20,7 @@ from ..backups.const import LOCATION_CLOUD_BACKUP, LOCATION_TYPE
from ..backups.validate import ALL_FOLDERS, FOLDER_HOMEASSISTANT, days_until_stale
from ..const import (
ATTR_ADDONS,
ATTR_APPS,
ATTR_BACKUPS,
ATTR_COMPRESSED,
ATTR_CONTENT,
@@ -50,7 +50,6 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..mounts.const import MountUsage
from .const import (
ATTR_ADDITIONAL_LOCATIONS,
ATTR_BACKGROUND,
@@ -62,7 +61,7 @@ from .utils import api_process, api_validate, background_task
_LOGGER: logging.Logger = logging.getLogger(__name__)
ALL_APPS_FLAG = "ALL"
LOCATION_LOCAL = ".local"
@@ -101,7 +100,8 @@ SCHEMA_RESTORE_FULL = vol.Schema(
}
)
# V1 schemas use "addons" as the request body key (legacy API contract).
SCHEMA_RESTORE_PARTIAL_V1 = SCHEMA_RESTORE_FULL.extend(
{
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
vol.Optional(ATTR_ADDONS): vol.All([str], vol.Unique()),
@@ -109,6 +109,15 @@ SCHEMA_RESTORE_PARTIAL = SCHEMA_RESTORE_FULL.extend(
}
)
# V2 schemas use "apps" as the request body key.
SCHEMA_RESTORE_PARTIAL = SCHEMA_RESTORE_FULL.extend(
{
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
vol.Optional(ATTR_APPS): vol.All([str], vol.Unique()),
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
}
)
SCHEMA_BACKUP_FULL = vol.Schema(
{
vol.Optional(ATTR_NAME): str,
@@ -122,11 +131,19 @@ SCHEMA_BACKUP_FULL = vol.Schema(
}
)
# V1 schema uses "addons" as the request body key (legacy API contract).
SCHEMA_BACKUP_PARTIAL_V1 = SCHEMA_BACKUP_FULL.extend(
{
vol.Optional(ATTR_ADDONS): vol.Or(ALL_APPS_FLAG, vol.All([str], vol.Unique())),
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
}
)
# V2 schema uses "apps" as the request body key.
SCHEMA_BACKUP_PARTIAL = SCHEMA_BACKUP_FULL.extend(
{
        vol.Optional(ATTR_APPS): vol.Or(ALL_APPS_FLAG, vol.All([str], vol.Unique())),
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
}
@@ -157,8 +174,8 @@ class APIBackups(CoreSysAttributes):
for loc in backup.locations
}
    def _list_backups(self) -> list[dict[str, Any]]:
        """Return list of backups using v2 field names (content["apps"])."""
return [
{
ATTR_SLUG: backup.slug,
@@ -174,7 +191,7 @@ class APIBackups(CoreSysAttributes):
ATTR_COMPRESSED: backup.compressed,
ATTR_CONTENT: {
ATTR_HOMEASSISTANT: backup.homeassistant_version is not None,
                    ATTR_APPS: backup.app_list,
ATTR_FOLDERS: backup.folders,
},
}
@@ -182,25 +199,76 @@ class APIBackups(CoreSysAttributes):
if backup.location != LOCATION_CLOUD_BACKUP
]
    @staticmethod
    def _rename_apps_to_addons_in_backups(
        data_backups: list[dict[str, Any]],
    ) -> list[dict[str, Any]]:
        """Rename the content["apps"] key to content["addons"] for v1 responses."""
        for backup in data_backups:
            content = backup[ATTR_CONTENT]
            content[ATTR_ADDONS] = content.pop(ATTR_APPS)
        return data_backups
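Isolated from Supervisor internals, the v1 response shim amounts to one key rename per backup entry. A minimal sketch with hypothetical plain-string keys standing in for the `ATTR_*` constants:

```python
def rename_apps_to_addons(backups: list[dict]) -> list[dict]:
    """Move each backup's content["apps"] list under the legacy "addons" key."""
    for backup in backups:
        content = backup["content"]
        content["addons"] = content.pop("apps")
    return backups


# Hypothetical v2-shaped listing entry
data = [
    {
        "slug": "a1b2c3",
        "content": {"homeassistant": True, "apps": ["core_ssh"], "folders": []},
    }
]
result = rename_apps_to_addons(data)
# result[0]["content"] now carries "addons" and no longer has an "apps" key
```

Because the dicts are mutated in place, v1 endpoints must build a fresh listing rather than share one with a v2 response.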
def _backup_info_data(self, backup: Backup) -> dict[str, Any]:
"""Return backup info dict using v2 field names (top-level "apps")."""
data_apps = [
{
ATTR_SLUG: app_data[ATTR_SLUG],
ATTR_NAME: app_data[ATTR_NAME],
ATTR_VERSION: app_data[ATTR_VERSION],
ATTR_SIZE: app_data[ATTR_SIZE],
}
for app_data in backup.apps
]
return {
ATTR_SLUG: backup.slug,
ATTR_TYPE: backup.sys_type,
ATTR_NAME: backup.name,
ATTR_DATE: backup.date,
ATTR_SIZE: backup.size,
ATTR_SIZE_BYTES: backup.size_bytes,
ATTR_COMPRESSED: backup.compressed,
ATTR_PROTECTED: backup.protected,
ATTR_LOCATION_ATTRIBUTES: self._make_location_attributes(backup),
ATTR_SUPERVISOR_VERSION: backup.supervisor_version,
ATTR_HOMEASSISTANT: backup.homeassistant_version,
ATTR_LOCATION: backup.location,
ATTR_LOCATIONS: backup.locations,
ATTR_APPS: data_apps,
ATTR_REPOSITORIES: backup.repositories,
ATTR_FOLDERS: backup.folders,
ATTR_HOMEASSISTANT_EXCLUDE_DATABASE: backup.homeassistant_exclude_database,
ATTR_EXTRA: backup.extra,
}
    @api_process
    async def list_backups(self, request: web.Request) -> dict[str, Any]:
        """Return backup list (v2: content uses "apps" key)."""
        return {ATTR_BACKUPS: self._list_backups()}
@api_process
async def list_backups_v1(self, request: web.Request) -> dict[str, Any]:
"""Return backup list (v1: content uses "addons" key)."""
return {
ATTR_BACKUPS: self._rename_apps_to_addons_in_backups(self._list_backups())
}
@api_process
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return backup list and manager info (v2: content uses "apps" key)."""
return {
ATTR_BACKUPS: self._list_backups(),
ATTR_DAYS_UNTIL_STALE: self.sys_backups.days_until_stale,
}
@api_process
async def info_v1(self, request: web.Request) -> dict[str, Any]:
"""Return backup list and manager info (v1: content uses "addons" key)."""
return {
ATTR_BACKUPS: self._rename_apps_to_addons_in_backups(self._list_backups()),
ATTR_DAYS_UNTIL_STALE: self.sys_backups.days_until_stale,
}
@api_process
async def options(self, request):
"""Set backup manager options."""
@@ -218,41 +286,18 @@ class APIBackups(CoreSysAttributes):
return True
    @api_process
    async def backup_info(self, request: web.Request) -> dict[str, Any]:
        """Return backup info (v2: top-level "apps" key)."""
        backup = self._extract_slug(request)
        return self._backup_info_data(backup)
@api_process
async def backup_info_v1(self, request: web.Request) -> dict[str, Any]:
"""Return backup info (v1: top-level "addons" key)."""
backup = self._extract_slug(request)
data = self._backup_info_data(backup)
data[ATTR_ADDONS] = data.pop(ATTR_APPS)
return data
def _location_to_mount(self, location: str | None) -> LOCATION_TYPE:
"""Convert a single location to a mount if possible."""
@@ -286,6 +331,20 @@ class APIBackups(CoreSysAttributes):
f"Location {LOCATION_CLOUD_BACKUP} is only available for Home Assistant"
)
def _process_location_in_body(
self, request: web.Request, body: dict[str, Any]
) -> dict[str, Any]:
"""Validate and convert location field in partial backup/restore body."""
if ATTR_LOCATION not in body:
return body
location_names: list[str | None] = body.pop(ATTR_LOCATION)
self._validate_cloud_backup_location(request, location_names)
locations = [self._location_to_mount(loc) for loc in location_names]
body[ATTR_LOCATION] = locations.pop(0)
if locations:
body[ATTR_ADDITIONAL_LOCATIONS] = locations
return body
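The splitting rule in `_process_location_in_body` (the first requested location becomes the primary target, the rest become additional targets) can be sketched with plain strings; the keys here are hypothetical stand-ins for the `ATTR_*` constants:

```python
def split_locations(body: dict) -> dict:
    """Pop the requested location list; first entry is primary, rest are additional."""
    if "location" not in body:
        return body
    locations = body.pop("location")
    body["location"] = locations.pop(0)
    if locations:
        body["additional_locations"] = locations
    return body


req = {"name": "nightly", "location": ["/backup", "/mnt/nas"]}
out = split_locations(req)
# out["location"] == "/backup"; out["additional_locations"] == ["/mnt/nas"]
```

A single-location request gets no `additional_locations` key at all, so downstream code can treat its presence as "mirror this backup elsewhere".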
@api_process
async def backup_full(self, request: web.Request):
"""Create full backup."""
@@ -319,27 +378,10 @@ class APIBackups(CoreSysAttributes):
job_id=job_id,
)
    async def _do_backup_partial(
        self, body: dict[str, Any], background: bool
    ) -> dict[str, Any]:
        """Run backup_partial business logic. Expects body["apps"] (v2 key)."""
backup_task, job_id = await background_task(
self, self.sys_backups.do_backup_partial, **body
)
@@ -355,6 +397,34 @@ class APIBackups(CoreSysAttributes):
job_id=job_id,
)
@api_process
async def backup_partial(self, request: web.Request):
"""Create a partial backup (v2: accepts "apps" key in request body)."""
body = await api_validate(SCHEMA_BACKUP_PARTIAL, request)
self._process_location_in_body(request, body)
if body.get(ATTR_APPS) == ALL_APPS_FLAG:
body[ATTR_APPS] = list(self.sys_apps.local)
background = body.pop(ATTR_BACKGROUND)
return await self._do_backup_partial(body, background)
@api_process
async def backup_partial_v1(self, request: web.Request):
"""Create a partial backup (v1: accepts "addons" key in request body)."""
body = await api_validate(SCHEMA_BACKUP_PARTIAL_V1, request)
self._process_location_in_body(request, body)
if body.get(ATTR_ADDONS) == ALL_APPS_FLAG:
body[ATTR_ADDONS] = list(self.sys_apps.local)
# Rename "addons" → "apps" so _do_backup_partial receives the v2 key
if ATTR_ADDONS in body:
body[ATTR_APPS] = body.pop(ATTR_ADDONS)
background = body.pop(ATTR_BACKGROUND)
return await self._do_backup_partial(body, background)
@api_process
async def restore_full(self, request: web.Request):
"""Full restore of a backup."""
@@ -375,15 +445,10 @@ class APIBackups(CoreSysAttributes):
job_id=job_id,
)
    async def _do_restore_partial(
        self, backup: Backup, body: dict[str, Any], background: bool
    ) -> dict[str, Any]:
        """Run restore_partial business logic. Expects body["apps"] (v2 key)."""
restore_task, job_id = await background_task(
self, self.sys_backups.do_restore_partial, backup, **body
)
@@ -395,6 +460,33 @@ class APIBackups(CoreSysAttributes):
job_id=job_id,
)
@api_process
async def restore_partial(self, request: web.Request):
"""Partial restore a backup (v2: accepts "apps" key in request body)."""
backup = self._extract_slug(request)
body = await api_validate(SCHEMA_RESTORE_PARTIAL, request)
self._validate_cloud_backup_location(
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
return await self._do_restore_partial(backup, body, background)
@api_process
async def restore_partial_v1(self, request: web.Request):
"""Partial restore a backup (v1: accepts "addons" key in request body)."""
backup = self._extract_slug(request)
body = await api_validate(SCHEMA_RESTORE_PARTIAL_V1, request)
self._validate_cloud_backup_location(
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
# Rename "addons" → "apps" so _do_restore_partial receives the v2 key
if ATTR_ADDONS in body:
body[ATTR_APPS] = body.pop(ATTR_ADDONS)
return await self._do_restore_partial(backup, body, background)
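Both v1 wrappers normalize the legacy request body the same way before delegating to the shared business logic. As a standalone sketch (hypothetical plain-string keys in place of the `ATTR_*` constants):

```python
def normalize_v1_body(body: dict) -> dict:
    """Carry a v1 request's "addons" list forward under the v2 "apps" key."""
    if "addons" in body:
        body["apps"] = body.pop("addons")
    return body


# A v1 caller posts {"addons": [...]}; the shared helpers only ever see "apps".
normalized = normalize_v1_body({"addons": ["core_ssh"], "background": True})
```

Keeping the rename at the API edge means `_do_backup_partial` and `_do_restore_partial` have exactly one body shape to reason about.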
@api_process
async def freeze(self, request: web.Request):
"""Initiate manual freeze for external backup."""
@@ -518,13 +610,8 @@ class APIBackups(CoreSysAttributes):
)
)
except OSError as err:
            if location in {LOCATION_CLOUD_BACKUP, None}:
                self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't write new backup file: %s", err)
return False


@@ -6,16 +6,16 @@ from typing import Any
from aiohttp import web
import voluptuous as vol
from ..addons.addon import App
from ..const import (
    ATTR_APP,
ATTR_CONFIG,
ATTR_DISCOVERY,
ATTR_SERVICE,
ATTR_SERVICES,
ATTR_UUID,
REQUEST_FROM,
    AppState,
)
from ..coresys import CoreSysAttributes
from ..discovery import Message
@@ -49,25 +49,25 @@ class APIDiscovery(CoreSysAttributes):
# Get available discovery
discovery = [
{
                ATTR_APP: message.addon,
ATTR_SERVICE: message.service,
ATTR_UUID: message.uuid,
ATTR_CONFIG: message.config,
}
for message in self.sys_discovery.list_messages
if (
                discovered := self.sys_apps.get_local_only(
message.addon,
)
)
            and discovered.state == AppState.STARTED
]
        # Get available services/apps
        services: dict[str, list[str]] = {}
        for app in self.sys_apps.all:
            for name in app.discovery:
                services.setdefault(name, []).append(app.slug)
return {ATTR_DISCOVERY: discovery, ATTR_SERVICES: services}
@@ -75,22 +75,22 @@ class APIDiscovery(CoreSysAttributes):
async def set_discovery(self, request: web.Request) -> dict[str, str]:
"""Write data into a discovery pipeline."""
body = await api_validate(SCHEMA_DISCOVERY, request)
        app: App = request[REQUEST_FROM]
service = body[ATTR_SERVICE]
# Access?
        if body[ATTR_SERVICE] not in app.discovery:
            _LOGGER.error(
                "App %s attempted to send discovery for service %s which is not listed in its config. Please report this to the maintainer of the app",
                app.name,
                service,
            )
            raise APIForbidden(
                "Apps must list services they provide via discovery in their config!"
            )

        # Process discovery message
        message = await self.sys_discovery.send(app, **body)
return {ATTR_UUID: message.uuid}
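The access guard above only lets an app announce services it declares in its config's discovery list. A minimal sketch of that check, using a hypothetical `App` stub (not the Supervisor's real class):

```python
# Hypothetical stub illustrating the set_discovery access check above.
class App:
    def __init__(self, name: str, discovery: list[str]) -> None:
        self.name = name
        self.discovery = discovery


def can_send_discovery(app: App, service: str) -> bool:
    # Mirrors the `if body[ATTR_SERVICE] not in app.discovery` guard.
    return service in app.discovery


mqtt_app = App("Mosquitto broker", discovery=["mqtt"])
assert can_send_discovery(mqtt_app, "mqtt")
assert not can_send_discovery(mqtt_app, "adguard")  # would raise APIForbidden
```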
@@ -101,7 +101,7 @@ class APIDiscovery(CoreSysAttributes):
message = self._extract_message(request)
return {
ATTR_ADDON: message.addon,
ATTR_APP: message.addon,
ATTR_SERVICE: message.service,
ATTR_UUID: message.uuid,
ATTR_CONFIG: message.config,
@@ -111,10 +111,10 @@ class APIDiscovery(CoreSysAttributes):
async def del_discovery(self, request: web.Request) -> None:
"""Delete data into a discovery message."""
message = self._extract_message(request)
addon = request[REQUEST_FROM]
app = request[REQUEST_FROM]
# Permission
if message.addon != addon.slug:
if message.addon != app.slug:
raise APIForbidden("Can't remove discovery message")
await self.sys_discovery.remove(message)


@@ -208,9 +208,10 @@ class APIHost(CoreSysAttributes):
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
default_verbose: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs."""
log_formatter = LogFormatter.PLAIN
log_formatter = LogFormatter.VERBOSE if default_verbose else LogFormatter.PLAIN
params: dict[str, Any] = {}
if identifier:
params[PARAM_SYSLOG_IDENTIFIER] = identifier
@@ -218,8 +219,6 @@ class APIHost(CoreSysAttributes):
params[PARAM_SYSLOG_IDENTIFIER] = request.match_info[IDENTIFIER]
else:
params[PARAM_SYSLOG_IDENTIFIER] = self.sys_host.logs.default_identifiers
# host logs should be always verbose, no matter what Accept header is used
log_formatter = LogFormatter.VERBOSE
if BOOTID in request.match_info:
params[PARAM_BOOT_ID] = await self._get_boot_id(request.match_info[BOOTID])
@@ -240,7 +239,9 @@ class APIHost(CoreSysAttributes):
f"Cannot determine CONTAINER_LOG_EPOCH of {identifier}, latest logs not available."
) from err
if ACCEPT in request.headers and request.headers[ACCEPT] not in [
accept_header = request.headers.get(ACCEPT)
if accept_header and accept_header not in [
CONTENT_TYPE_TEXT,
CONTENT_TYPE_X_LOG,
"*/*",
@@ -250,7 +251,7 @@ class APIHost(CoreSysAttributes):
"supported for now."
)
if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
if "verbose" in request.query or accept_header == CONTENT_TYPE_X_LOG:
log_formatter = LogFormatter.VERBOSE
if "no_colors" in request.query:
@@ -326,10 +327,11 @@ class APIHost(CoreSysAttributes):
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
default_verbose: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs. Wrapped as standard API handler."""
return await self.advanced_logs_handler(
request, identifier, follow, latest, no_colors
request, identifier, follow, latest, no_colors, default_verbose
)
@api_process
@@ -355,8 +357,8 @@ class APIHost(CoreSysAttributes):
known_paths = await self.sys_run_in_executor(
disk.get_dir_sizes,
{
"addons_data": self.sys_config.path_addons_data,
"addons_config": self.sys_config.path_addon_configs,
"addons_data": self.sys_config.path_apps_data,
"addons_config": self.sys_config.path_app_configs,
"media": self.sys_config.path_media,
"share": self.sys_config.path_share,
"backup": self.sys_config.path_backup,


@@ -1,4 +1,4 @@
"""Supervisor Add-on ingress service."""
"""Supervisor App ingress service."""
import asyncio
from ipaddress import ip_address
@@ -15,7 +15,7 @@ from aiohttp.web_exceptions import (
from multidict import CIMultiDict, istr
import voluptuous as vol
from ..addons.addon import Addon
from ..addons.addon import App
from ..const import (
ATTR_ADMIN,
ATTR_ENABLE,
@@ -29,8 +29,8 @@ from ..const import (
HEADER_REMOTE_USER_NAME,
HEADER_TOKEN,
HEADER_TOKEN_OLD,
HomeAssistantUser,
IngressSessionData,
IngressSessionDataUser,
)
from ..coresys import CoreSysAttributes
from ..exceptions import HomeAssistantAPIError
@@ -39,6 +39,8 @@ from .utils import api_process, api_validate, require_home_assistant
_LOGGER: logging.Logger = logging.getLogger(__name__)
MAX_WEBSOCKET_MESSAGE_SIZE = 16 * 1024 * 1024 # 16 MiB
VALIDATE_SESSION_DATA = vol.Schema({ATTR_SESSION: str})
"""Expected optional payload of create session request"""
@@ -73,43 +75,37 @@ def status_code_must_be_empty_body(code: int) -> bool:
class APIIngress(CoreSysAttributes):
"""Ingress view to handle add-on webui routing."""
"""Ingress view to handle app webui routing."""
_list_of_users: list[IngressSessionDataUser]
def __init__(self) -> None:
"""Initialize APIIngress."""
self._list_of_users = []
def _extract_addon(self, request: web.Request) -> Addon:
"""Return addon, throw an exception it it doesn't exist."""
def _extract_app(self, request: web.Request) -> App:
"""Return app, throw an exception it it doesn't exist."""
token = request.match_info["token"]
# Find correct add-on
addon = self.sys_ingress.get(token)
if not addon:
# Find correct app
app = self.sys_ingress.get(token)
if not app:
_LOGGER.warning("Ingress for %s not available", token)
raise HTTPServiceUnavailable()
return addon
return app
def _create_url(self, addon: Addon, path: str) -> str:
def _create_url(self, app: App, path: str) -> str:
"""Create URL to container."""
return f"http://{addon.ip_address}:{addon.ingress_port}/{path}"
return f"http://{app.ip_address}:{app.ingress_port}/{path}"
@api_process
async def panels(self, request: web.Request) -> dict[str, Any]:
"""Create a list of panel data."""
addons = {}
for addon in self.sys_ingress.addons:
addons[addon.slug] = {
ATTR_TITLE: addon.panel_title,
ATTR_ICON: addon.panel_icon,
ATTR_ADMIN: addon.panel_admin,
ATTR_ENABLE: addon.ingress_panel,
apps = {}
for app in self.sys_ingress.apps:
apps[app.slug] = {
ATTR_TITLE: app.panel_title,
ATTR_ICON: app.panel_icon,
ATTR_ADMIN: app.panel_admin,
ATTR_ENABLE: app.ingress_panel,
}
return {ATTR_PANELS: addons}
return {ATTR_PANELS: apps}
@api_process
@require_home_assistant
@@ -153,16 +149,16 @@ class APIIngress(CoreSysAttributes):
raise HTTPUnauthorized()
# Process requests
addon = self._extract_addon(request)
app = self._extract_app(request)
path = request.match_info.get("path", "")
session_data = self.sys_ingress.get_session_data(session)
try:
# Websocket
if _is_websocket(request):
return await self._handle_websocket(request, addon, path, session_data)
return await self._handle_websocket(request, app, path, session_data)
# Request
return await self._handle_request(request, addon, path, session_data)
return await self._handle_request(request, app, path, session_data)
except aiohttp.ClientError as err:
_LOGGER.error("Ingress error: %s", err)
@@ -172,7 +168,7 @@ class APIIngress(CoreSysAttributes):
async def _handle_websocket(
self,
request: web.Request,
addon: Addon,
app: App,
path: str,
session_data: IngressSessionData | None,
) -> web.WebSocketResponse:
@@ -186,13 +182,16 @@ class APIIngress(CoreSysAttributes):
req_protocols = []
ws_server = web.WebSocketResponse(
protocols=req_protocols, autoclose=False, autoping=False
protocols=req_protocols,
autoclose=False,
autoping=False,
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
)
await ws_server.prepare(request)
# Preparing
url = self._create_url(addon, path)
source_header = _init_header(request, addon, session_data)
url = self._create_url(app, path)
source_header = _init_header(request, app, session_data)
# Support GET query
if request.query_string:
@@ -200,13 +199,14 @@ class APIIngress(CoreSysAttributes):
# Start proxy
try:
_LOGGER.debug("Proxing WebSocket to %s, upstream url: %s", addon.slug, url)
_LOGGER.debug("Proxing WebSocket to %s, upstream url: %s", app.slug, url)
async with self.sys_websession.ws_connect(
url,
headers=source_header,
protocols=req_protocols,
autoclose=False,
autoping=False,
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
) as ws_client:
# Proxy requests
await asyncio.wait(
@@ -217,28 +217,28 @@ class APIIngress(CoreSysAttributes):
return_when=asyncio.FIRST_COMPLETED,
)
except TimeoutError:
_LOGGER.warning("WebSocket proxy to %s timed out", addon.slug)
_LOGGER.warning("WebSocket proxy to %s timed out", app.slug)
return ws_server
async def _handle_request(
self,
request: web.Request,
addon: Addon,
app: App,
path: str,
session_data: IngressSessionData | None,
) -> web.Response | web.StreamResponse:
"""Ingress route for request."""
url = self._create_url(addon, path)
source_header = _init_header(request, addon, session_data)
url = self._create_url(app, path)
source_header = _init_header(request, app, session_data)
# Passing the raw stream breaks requests for some webservers
# since we just need it for POST requests really, for all other methods
# we read the bytes and pass that to the request to the add-on
# add-ons needs to add support with that in the configuration
# we read the bytes and pass those along with the request to the app
# apps need to add support for that in their configuration
data = (
request.content
if request.method == "POST" and addon.ingress_stream
if request.method == "POST" and app.ingress_stream
else await request.read()
)
@@ -306,24 +306,19 @@ class APIIngress(CoreSysAttributes):
return response
async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
async def _find_user_by_id(self, user_id: str) -> HomeAssistantUser | None:
"""Find user object by the user's ID."""
try:
list_of_users = await self.sys_homeassistant.get_users()
except (HomeAssistantAPIError, TypeError) as err:
_LOGGER.error(
"%s error occurred while requesting list of users: %s", type(err), err
)
users = await self.sys_homeassistant.list_users()
except HomeAssistantAPIError as err:
_LOGGER.warning("Could not fetch list of users: %s", err)
return None
if list_of_users is not None:
self._list_of_users = list_of_users
return next((user for user in self._list_of_users if user.id == user_id), None)
return next((user for user in users if user.id == user_id), None)
def _init_header(
request: web.Request, addon: Addon, session_data: IngressSessionData | None
request: web.Request, app: App, session_data: IngressSessionData | None
) -> CIMultiDict[str]:
"""Create initial header."""
headers = CIMultiDict[str]()
@@ -332,8 +327,8 @@ def _init_header(
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
if session_data.user.username is not None:
headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
if session_data.user.display_name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
if session_data.user.name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.name
# filter flags
for name, value in request.headers.items():


@@ -1,6 +1,7 @@
"""Handle security part of this API."""
from collections.abc import Awaitable, Callable
from dataclasses import dataclass
import logging
import re
from typing import Final
@@ -30,13 +31,13 @@ _CORE_VERSION: Final = AwesomeVersion("2023.3.4")
# fmt: off
_CORE_FRONTEND_PATHS: Final = (
_V1_FRONTEND_PATHS: Final = (
r"|/app/.*\.(?:js|gz|json|map|woff2)"
r"|/(store/)?addons/" + RE_SLUG + r"/(logo|icon)"
)
CORE_FRONTEND: Final = re.compile(
r"^(?:" + _CORE_FRONTEND_PATHS + r")$"
_V2_FRONTEND_PATHS: Final = (
r"|/store/apps/" + RE_SLUG + r"/(logo|icon)"
)
@@ -48,19 +49,6 @@ BLACKLIST: Final = re.compile(
r")$"
)
# Free to call or have own security concepts
NO_SECURITY_CHECK: Final = re.compile(
r"^(?:"
r"|/homeassistant/api/.*"
r"|/homeassistant/websocket"
r"|/core/api/.*"
r"|/core/websocket"
r"|/supervisor/ping"
r"|/ingress/[-_A-Za-z0-9]+/.*"
+ _CORE_FRONTEND_PATHS
+ r")$"
)
# Observer allow API calls
OBSERVER_CHECK: Final = re.compile(
r"^(?:"
@@ -68,80 +56,6 @@ OBSERVER_CHECK: Final = re.compile(
r")$"
)
# Can called by every add-on
ADDONS_API_BYPASS: Final = re.compile(
r"^(?:"
r"|/addons/self/(?!security|update)[^/]+"
r"|/addons/self/options/config"
r"|/info"
r"|/services.*"
r"|/discovery.*"
r"|/auth"
r")$"
)
# Home Assistant only
CORE_ONLY_PATHS: Final = re.compile(
r"^(?:"
r"/addons/" + RE_SLUG + "/sys_options"
r")$"
)
# Policy role add-on API access
ADDONS_ROLE_ACCESS: dict[str, re.Pattern[str]] = {
ROLE_DEFAULT: re.compile(
r"^(?:"
r"|/.+/info"
r")$"
),
ROLE_HOMEASSISTANT: re.compile(
r"^(?:"
r"|/.+/info"
r"|/core/.+"
r"|/homeassistant/.+"
r")$"
),
ROLE_BACKUP: re.compile(
r"^(?:"
r"|/.+/info"
r"|/backups.*"
r")$"
),
ROLE_MANAGER: re.compile(
r"^(?:"
r"|/.+/info"
r"|/addons(?:/" + RE_SLUG + r"/(?!security).+|/reload)?"
r"|/audio/.+"
r"|/auth/cache"
r"|/available_updates"
r"|/backups.*"
r"|/cli/.+"
r"|/core/.+"
r"|/dns/.+"
r"|/docker/.+"
r"|/jobs/.+"
r"|/hardware/.+"
r"|/hassos/.+"
r"|/homeassistant/.+"
r"|/host/.+"
r"|/mounts.*"
r"|/multicast/.+"
r"|/network/.+"
r"|/observer/.+"
r"|/os/(?!datadisk/wipe).+"
r"|/refresh_updates"
r"|/resolution/.+"
r"|/security/.+"
r"|/snapshots.*"
r"|/store.*"
r"|/supervisor/.+"
r")$"
),
ROLE_ADMIN: re.compile(
r".*"
),
}
FILTERS: Final = re.compile(
r"(?:"
@@ -162,9 +76,193 @@ FILTERS: Final = re.compile(
flags=re.IGNORECASE,
)
@dataclass(slots=True, frozen=True)
class _AppSecurityPatterns:
"""All compiled regex patterns for app API access control, per API version."""
# Paths where an installed app's token bypasses normal role checks
api_bypass: re.Pattern[str]
# Paths that only Home Assistant Core may call
core_only: re.Pattern[str]
# Per-role allowed path patterns for installed apps
role_access: dict[str, re.Pattern[str]]
# Paths serving frontend assets (checked in core_proxy middleware)
supervisor_frontend: re.Pattern[str]
# Paths that skip token validation entirely
no_security_check: re.Pattern[str]
# fmt: off
_V1_PATTERNS: Final = _AppSecurityPatterns(
api_bypass=re.compile(
r"^(?:"
r"|/addons/self/(?!security|update)[^/]+"
r"|/addons/self/options/config"
r"|/info"
r"|/services.*"
r"|/discovery.*"
r"|/auth"
r")$"
),
core_only=re.compile(
r"^(?:"
r"/addons/" + RE_SLUG + r"/sys_options"
r")$"
),
role_access={
ROLE_DEFAULT: re.compile(
r"^(?:"
r"|/.+/info"
r")$"
),
ROLE_HOMEASSISTANT: re.compile(
r"^(?:"
r"|/.+/info"
r"|/core/.+"
r"|/homeassistant/.+"
r")$"
),
ROLE_BACKUP: re.compile(
r"^(?:"
r"|/.+/info"
r"|/backups.*"
r")$"
),
ROLE_MANAGER: re.compile(
r"^(?:"
r"|/.+/info"
r"|/addons(?:/" + RE_SLUG + r"/(?!security).+|/reload)?"
r"|/audio/.+"
r"|/auth/cache"
r"|/available_updates"
r"|/backups.*"
r"|/cli/.+"
r"|/core/.+"
r"|/dns/.+"
r"|/docker/.+"
r"|/jobs/.+"
r"|/hardware/.+"
r"|/homeassistant/.+"
r"|/host/.+"
r"|/mounts.*"
r"|/multicast/.+"
r"|/network/.+"
r"|/observer/.+"
r"|/os/(?!datadisk/wipe).+"
r"|/refresh_updates"
r"|/resolution/.+"
r"|/security/.+"
r"|/snapshots.*"
r"|/store.*"
r"|/supervisor/.+"
r")$"
),
ROLE_ADMIN: re.compile(r".*"),
},
supervisor_frontend=re.compile(r"^(?:" + _V1_FRONTEND_PATHS + r")$"),
no_security_check=re.compile(
r"^(?:"
r"|/homeassistant/api/.*"
r"|/homeassistant/websocket"
r"|/core/api/.*"
r"|/core/websocket"
r"|/supervisor/ping"
r"|/ingress/[-_A-Za-z0-9]+/.*"
+ _V1_FRONTEND_PATHS
+ r")$"
),
)
_V2_PATTERNS: Final = _AppSecurityPatterns(
# /v2 is factored out as a literal prefix — alternatives only list the
# path suffix, making v1 ↔ v2 pattern diffs easy to read.
api_bypass=re.compile(
r"^/v2(?:"
r"|/apps/self/(?!security|update)[^/]+"
r"|/apps/self/options/config"
r"|/info"
r"|/services.*"
r"|/discovery.*"
r"|/auth"
r")$"
),
core_only=re.compile(
r"^/v2(?:"
r"/apps/" + RE_SLUG + r"/sys_options"
r")$"
),
role_access={
ROLE_DEFAULT: re.compile(
r"^/v2(?:"
r"|/.+/info"
r")$"
),
ROLE_HOMEASSISTANT: re.compile(
r"^/v2(?:"
r"|/.+/info"
r"|/core/.+"
r"|/homeassistant/.+"
r")$"
),
ROLE_BACKUP: re.compile(
r"^/v2(?:"
r"|/.+/info"
r"|/backups.*"
r")$"
),
ROLE_MANAGER: re.compile(
r"^/v2(?:"
r"|/.+/info"
r"|/apps(?:/" + RE_SLUG + r"/(?!security).+)?"
r"|/audio/.+"
r"|/auth/cache"
r"|/backups.*"
r"|/cli/.+"
r"|/core/.+"
r"|/dns/.+"
r"|/docker/.+"
r"|/jobs/.+"
r"|/hardware/.+"
r"|/homeassistant/.+"
r"|/host/.+"
r"|/mounts.*"
r"|/multicast/.+"
r"|/network/.+"
r"|/observer/.+"
r"|/os/(?!datadisk/wipe).+"
r"|/reload_updates"
r"|/resolution/.+"
r"|/security/.+"
r"|/store.*"
r"|/supervisor/.+"
r")$"
),
ROLE_ADMIN: re.compile(r".*"),
},
supervisor_frontend=re.compile(r"^/v2(?:" + _V2_FRONTEND_PATHS + r")$"),
no_security_check=re.compile(
r"^/v2(?:"
r"|/ingress/[-_A-Za-z0-9]+/.*"
+ _V2_FRONTEND_PATHS
+ r")$"
),
)
# fmt: on
def _get_app_security_patterns(request: Request) -> _AppSecurityPatterns:
"""Return the correct pattern set based on the request's API version."""
if request.path.startswith("/v2/"):
return _V2_PATTERNS
return _V1_PATTERNS
class SecurityMiddleware(CoreSysAttributes):
"""Security middleware functions."""
@@ -217,6 +315,7 @@ class SecurityMiddleware(CoreSysAttributes):
"""Check security access of this layer."""
request_from: CoreSysAttributes | None = None
supervisor_token = extract_supervisor_token(request)
patterns = _get_app_security_patterns(request)
# Blacklist
if BLACKLIST.match(request.path):
@@ -224,7 +323,7 @@ class SecurityMiddleware(CoreSysAttributes):
raise HTTPForbidden()
# Ignore security check
if NO_SECURITY_CHECK.match(request.path):
if patterns.no_security_check.match(request.path):
_LOGGER.debug("Passthrough %s", request.path)
request[REQUEST_FROM] = None
return await handler(request)
@@ -238,8 +337,11 @@ class SecurityMiddleware(CoreSysAttributes):
if supervisor_token == self.sys_homeassistant.supervisor_token:
_LOGGER.debug("%s access from Home Assistant", request.path)
request_from = self.sys_homeassistant
elif CORE_ONLY_PATHS.match(request.path):
_LOGGER.warning("Attempted access to %s from client besides Home Assistant")
elif patterns.core_only.match(request.path):
_LOGGER.warning(
"Attempted access to %s from client besides Home Assistant",
request.path,
)
raise HTTPForbidden()
# Host
@@ -255,26 +357,24 @@ class SecurityMiddleware(CoreSysAttributes):
_LOGGER.debug("%s access from Observer", request.path)
request_from = self.sys_plugins.observer
# Add-on
addon = None
# App
app = None
if supervisor_token and not request_from:
addon = self.sys_addons.from_token(supervisor_token)
app = self.sys_apps.from_token(supervisor_token)
# Check Add-on API access
if addon and ADDONS_API_BYPASS.match(request.path):
_LOGGER.debug("Passthrough %s from %s", request.path, addon.slug)
request_from = addon
elif addon and addon.access_hassio_api:
# Check App API access
if app and patterns.api_bypass.match(request.path):
_LOGGER.debug("Passthrough %s from %s", request.path, app.slug)
request_from = app
elif app and app.access_hassio_api:
# Check Role
if ADDONS_ROLE_ACCESS[addon.hassio_role].match(request.path):
_LOGGER.info("%s access from %s", request.path, addon.slug)
request_from = addon
if patterns.role_access[app.hassio_role].match(request.path):
_LOGGER.info("%s access from %s", request.path, app.slug)
request_from = app
else:
_LOGGER.warning("%s no role for %s", request.path, addon.slug)
elif addon:
_LOGGER.warning(
"%s missing API permission for %s", addon.slug, request.path
)
_LOGGER.warning("%s no role for %s", request.path, app.slug)
elif app:
_LOGGER.warning("%s missing API permission for %s", app.slug, request.path)
if request_from:
request[REQUEST_FROM] = request_from
@@ -322,8 +422,9 @@ class SecurityMiddleware(CoreSysAttributes):
and content_type_index - authorization_index == 1
)
patterns = _get_app_security_patterns(request)
if (
not CORE_FRONTEND.match(request.path) and is_proxy_request
not patterns.supervisor_frontend.match(request.path) and is_proxy_request
) or ingress_request:
raise HTTPBadRequest()
return await handler(request)


@@ -7,7 +7,6 @@ import logging
import aiohttp
from aiohttp import WSCloseCode, WSMessageTypeError, web
from aiohttp.client_exceptions import ClientConnectorError
from aiohttp.client_ws import ClientWebSocketResponse
from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE
from aiohttp.http_websocket import WSMsgType
@@ -16,7 +15,7 @@ from aiohttp.web_exceptions import HTTPBadGateway, HTTPUnauthorized
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, HomeAssistantAPIError, HomeAssistantAuthError
from ..utils.json import json_dumps
from ..utils.logging import AddonLoggerAdapter
from ..utils.logging import AppLoggerAdapter
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -30,13 +29,6 @@ FORWARD_HEADERS = (
)
HEADER_HA_ACCESS = "X-Ha-Access"
# Maximum message size for websocket messages from Home Assistant.
# Since these are coming from core we want the largest possible size
# that is not likely to cause a memory problem as most modern browsers
# support large messages.
# https://github.com/home-assistant/supervisor/issues/4392
MAX_MESSAGE_SIZE_FROM_CORE = 64 * 1024 * 1024
class APIProxy(CoreSysAttributes):
"""API Proxy for Home Assistant."""
@@ -81,13 +73,13 @@ class APIProxy(CoreSysAttributes):
else:
supervisor_token = request.headers.get(HEADER_HA_ACCESS, "")
addon = self.sys_addons.from_token(supervisor_token)
if not addon:
app = self.sys_apps.from_token(supervisor_token)
if not app:
_LOGGER.warning("Unknown Home Assistant API access!")
elif not addon.access_homeassistant_api:
_LOGGER.warning("Not permitted API access: %s", addon.slug)
elif not app.access_homeassistant_api:
_LOGGER.warning("Not permitted API access: %s", app.slug)
else:
_LOGGER.debug("%s access from %s", request.path, addon.slug)
_LOGGER.debug("%s access from %s", request.path, app.slug)
return
raise HTTPUnauthorized()
@@ -179,63 +171,20 @@ class APIProxy(CoreSysAttributes):
async def _websocket_client(self) -> ClientWebSocketResponse:
"""Initialize a WebSocket API connection."""
url = f"{self.sys_homeassistant.api_url}/api/websocket"
try:
client = await self.sys_websession.ws_connect(
url, heartbeat=30, ssl=False, max_msg_size=MAX_MESSAGE_SIZE_FROM_CORE
)
# Handle authentication
data = await client.receive_json()
if data.get("type") == "auth_ok":
return client
if data.get("type") != "auth_required":
# Invalid protocol
raise APIError(
f"Got unexpected response from Home Assistant WebSocket: {data}",
_LOGGER.error,
)
# Auth session
await self.sys_homeassistant.api.ensure_access_token()
await client.send_json(
{
"type": "auth",
"access_token": self.sys_homeassistant.api.access_token,
},
dumps=json_dumps,
)
data = await client.receive_json()
if data.get("type") == "auth_ok":
return client
# Renew the Token is invalid
if (
data.get("type") == "invalid_auth"
and self.sys_homeassistant.refresh_token
):
self.sys_homeassistant.api.access_token = None
return await self._websocket_client()
raise HomeAssistantAuthError()
except (RuntimeError, ValueError, TypeError, ClientConnectorError) as err:
_LOGGER.error("Client error on WebSocket API %s.", err)
except HomeAssistantAuthError:
_LOGGER.error("Failed authentication to Home Assistant WebSocket")
raise APIError()
ws_client = await self.sys_homeassistant.api.connect_websocket()
return ws_client.client
except HomeAssistantAPIError as err:
raise APIError(
f"Error connecting to Home Assistant WebSocket: {err}",
_LOGGER.error,
) from err
async def _proxy_message(
self,
source: web.WebSocketResponse | ClientWebSocketResponse,
target: web.WebSocketResponse | ClientWebSocketResponse,
logger: AddonLoggerAdapter,
logger: AppLoggerAdapter,
) -> None:
"""Proxy a message from client to server or vice versa."""
while not source.closed and not target.closed:
@@ -249,7 +198,7 @@ class APIProxy(CoreSysAttributes):
logger.debug(
"Received WebSocket message type %r from %s.",
msg.type,
"add-on" if type(source) is web.WebSocketResponse else "Core",
"app" if type(source) is web.WebSocketResponse else "Core",
)
await target.close()
case WSMsgType.CLOSING:
@@ -278,7 +227,7 @@ class APIProxy(CoreSysAttributes):
# init server
server = web.WebSocketResponse(heartbeat=30)
await server.prepare(request)
addon_name = None
app_name = None
# handle authentication
try:
@@ -292,9 +241,9 @@ class APIProxy(CoreSysAttributes):
supervisor_token = response.get("api_password") or response.get(
"access_token"
)
addon = self.sys_addons.from_token(supervisor_token)
app = self.sys_apps.from_token(supervisor_token)
if not addon or not addon.access_homeassistant_api:
if not app or not app.access_homeassistant_api:
_LOGGER.warning("Unauthorized WebSocket access!")
await server.send_json(
{"type": "auth_invalid", "message": "Invalid access"},
@@ -302,8 +251,8 @@ class APIProxy(CoreSysAttributes):
)
return server
addon_name = addon.slug
_LOGGER.info("WebSocket access from %s", addon_name)
app_name = app.slug
_LOGGER.info("WebSocket access from %s", app_name)
await server.send_json(
{"type": "auth_ok", "ha_version": self.sys_homeassistant.version},
@@ -327,7 +276,7 @@ class APIProxy(CoreSysAttributes):
except APIError:
return server
logger = AddonLoggerAdapter(_LOGGER, {"addon_name": addon_name})
logger = AppLoggerAdapter(_LOGGER, {"app_name": app_name})
logger.info("Home Assistant WebSocket API proxy running")
client_task = self.sys_create_task(self._proxy_message(client, server, logger))


@@ -19,7 +19,6 @@ from ..const import (
ATTR_UNSUPPORTED,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APINotFound, ResolutionNotFound
from ..resolution.checks.base import CheckBase
from ..resolution.data import Issue, Suggestion
from .utils import api_process, api_validate
@@ -32,24 +31,17 @@ class APIResoulution(CoreSysAttributes):
def _extract_issue(self, request: web.Request) -> Issue:
"""Extract issue from request or raise."""
try:
return self.sys_resolution.get_issue(request.match_info["issue"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid issue") from None
return self.sys_resolution.get_issue_by_id(request.match_info["issue"])
def _extract_suggestion(self, request: web.Request) -> Suggestion:
"""Extract suggestion from request or raise."""
try:
return self.sys_resolution.get_suggestion(request.match_info["suggestion"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid suggestion") from None
return self.sys_resolution.get_suggestion_by_id(
request.match_info["suggestion"]
)
def _extract_check(self, request: web.Request) -> CheckBase:
"""Extract check from request or raise."""
try:
return self.sys_resolution.check.get(request.match_info["check"])
except ResolutionNotFound:
raise APINotFound("The supplied check slug is not available") from None
return self.sys_resolution.check.get(request.match_info["check"])
def _generate_suggestion_information(self, suggestion: Suggestion):
"""Generate suggestion information for response."""
@@ -67,8 +59,8 @@ class APIResoulution(CoreSysAttributes):
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return resolution information."""
return {
ATTR_UNSUPPORTED: self.sys_resolution.unsupported,
ATTR_UNHEALTHY: self.sys_resolution.unhealthy,
ATTR_UNSUPPORTED: sorted(self.sys_resolution.unsupported),
ATTR_UNHEALTHY: sorted(self.sys_resolution.unhealthy),
ATTR_SUGGESTIONS: [
self._generate_suggestion_information(suggestion)
for suggestion in self.sys_resolution.suggestions


@@ -94,17 +94,17 @@ class APIRoot(CoreSysAttributes):
}
)
# Add-ons
# Apps
available_updates.extend(
{
ATTR_UPDATE_TYPE: "addon",
ATTR_NAME: addon.name,
ATTR_ICON: f"/addons/{addon.slug}/icon" if addon.with_icon else None,
ATTR_PANEL_PATH: f"/update-available/{addon.slug}",
ATTR_VERSION_LATEST: addon.latest_version,
ATTR_NAME: app.name,
ATTR_ICON: f"/addons/{app.slug}/icon" if app.with_icon else None,
ATTR_PANEL_PATH: f"/update-available/{app.slug}",
ATTR_VERSION_LATEST: app.latest_version,
}
for addon in self.sys_addons.installed
if addon.need_update
for app in self.sys_apps.installed
if app.need_update
)
return {ATTR_AVAILABLE_UPDATES: available_updates}


@@ -48,10 +48,10 @@ class APIServices(CoreSysAttributes):
"""Write data into a service."""
service = self._extract_service(request)
body = await api_validate(service.schema, request)
addon = request[REQUEST_FROM]
app = request[REQUEST_FROM]
_check_access(request, service.slug)
await service.set_service_data(addon, body)
await service.set_service_data(app, body)
@api_process
async def get_service(self, request: web.Request) -> dict[str, Any]:
@@ -69,18 +69,18 @@ class APIServices(CoreSysAttributes):
async def del_service(self, request: web.Request) -> None:
"""Delete data into a service."""
service = self._extract_service(request)
addon = request[REQUEST_FROM]
app = request[REQUEST_FROM]
# Access
_check_access(request, service.slug, True)
await service.del_service_data(addon)
await service.del_service_data(app)
def _check_access(request, service, provide=False):
"""Raise error if the rights are wrong."""
addon = request[REQUEST_FROM]
if not addon.services_role.get(service):
app = request[REQUEST_FROM]
if not app.services_role.get(service):
raise APIForbidden(f"No access to {service} service!")
if provide and addon.services_role.get(service) != PROVIDE_SERVICE:
if provide and app.services_role.get(service) != PROVIDE_SERVICE:
raise APIForbidden(f"No access to write {service} service!")


@@ -7,8 +7,8 @@ from typing import Any, cast
from aiohttp import web
import voluptuous as vol
from ..addons.addon import Addon
from ..addons.manager import AnyAddon
from ..addons.addon import App
from ..addons.manager import AnyApp
from ..addons.utils import rating_security
from ..api.const import ATTR_SIGNED
from ..api.utils import api_process, api_process_raw, api_validate
@@ -16,6 +16,7 @@ from ..const import (
ATTR_ADDONS,
ATTR_ADVANCED,
ATTR_APPARMOR,
ATTR_APPS,
ATTR_ARCH,
ATTR_AUTH_API,
ATTR_AVAILABLE,
@@ -53,9 +54,9 @@ from ..const import (
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound, StoreAddonNotFoundError
from ..exceptions import APIError, APIForbidden, APINotFound, StoreAppNotFoundError
from ..resolution.const import ContextType, SuggestionType
from ..store.addon import AddonStore
from ..store.addon import AppStore
from ..store.repository import Repository
from ..store.validate import validate_repository
from .const import ATTR_BACKGROUND, CONTENT_TYPE_PNG, CONTENT_TYPE_TEXT
@@ -100,23 +101,23 @@ def _read_static_binary_file(path: Path) -> Any:
class APIStore(CoreSysAttributes):
"""Handle RESTful API for store functions."""
def _extract_addon(self, request: web.Request, installed=False) -> AnyAddon:
"""Return add-on, throw an exception it it doesn't exist."""
addon_slug: str = request.match_info["addon"]
def _extract_app(self, request: web.Request, installed=False) -> AnyApp:
"""Return app, throw an exception it it doesn't exist."""
app_slug: str = request.match_info["app"]
if not (addon := self.sys_addons.get(addon_slug)):
raise StoreAddonNotFoundError(addon=addon_slug)
if not (app := self.sys_apps.get(app_slug)):
raise StoreAppNotFoundError(app=app_slug)
if installed and not addon.is_installed:
raise APIError(f"Addon {addon_slug} is not installed")
if installed and not app.is_installed:
raise APIError(f"App {app_slug} is not installed")
if not installed and addon.is_installed:
addon = cast(Addon, addon)
if not addon.addon_store:
raise StoreAddonNotFoundError(addon=addon_slug)
return addon.addon_store
if not installed and app.is_installed:
app = cast(App, app)
if not app.app_store:
raise StoreAppNotFoundError(app=app_slug)
return app.app_store
return addon
return app
def _extract_repository(self, request: web.Request) -> Repository:
"""Return repository, throw an exception it it doesn't exist."""
@@ -129,52 +130,50 @@ class APIStore(CoreSysAttributes):
return self.sys_store.get(repository_slug)
async def _generate_addon_information(
self, addon: AddonStore, extended: bool = False
async def _generate_app_information(
self, app: AppStore, extended: bool = False
) -> dict[str, Any]:
"""Generate addon information."""
"""Generate app information."""
installed = (
self.sys_addons.get_local_only(addon.slug) if addon.is_installed else None
)
installed = self.sys_apps.get_local_only(app.slug) if app.is_installed else None
data = {
ATTR_ADVANCED: addon.advanced,
ATTR_ARCH: addon.supported_arch,
ATTR_AVAILABLE: addon.available,
ATTR_BUILD: addon.need_build,
ATTR_DESCRIPTON: addon.description,
ATTR_DOCUMENTATION: addon.with_documentation,
ATTR_HOMEASSISTANT: addon.homeassistant_version,
ATTR_ICON: addon.with_icon,
ATTR_INSTALLED: addon.is_installed,
ATTR_LOGO: addon.with_logo,
ATTR_NAME: addon.name,
ATTR_REPOSITORY: addon.repository,
ATTR_SLUG: addon.slug,
ATTR_STAGE: addon.stage,
ATTR_ADVANCED: app.advanced,
ATTR_ARCH: app.supported_arch,
ATTR_AVAILABLE: app.available,
ATTR_BUILD: app.need_build,
ATTR_DESCRIPTON: app.description,
ATTR_DOCUMENTATION: app.with_documentation,
ATTR_HOMEASSISTANT: app.homeassistant_version,
ATTR_ICON: app.with_icon,
ATTR_INSTALLED: app.is_installed,
ATTR_LOGO: app.with_logo,
ATTR_NAME: app.name,
ATTR_REPOSITORY: app.repository,
ATTR_SLUG: app.slug,
ATTR_STAGE: app.stage,
ATTR_UPDATE_AVAILABLE: installed.need_update if installed else False,
ATTR_URL: addon.url,
ATTR_VERSION_LATEST: addon.latest_version,
ATTR_URL: app.url,
ATTR_VERSION_LATEST: app.latest_version,
ATTR_VERSION: installed.version if installed else None,
}
if extended:
data.update(
{
ATTR_APPARMOR: addon.apparmor,
ATTR_AUTH_API: addon.access_auth_api,
ATTR_DETACHED: addon.is_detached,
ATTR_DOCKER_API: addon.access_docker_api,
ATTR_FULL_ACCESS: addon.with_full_access,
ATTR_HASSIO_API: addon.access_hassio_api,
ATTR_HASSIO_ROLE: addon.hassio_role,
ATTR_HOMEASSISTANT_API: addon.access_homeassistant_api,
ATTR_HOST_NETWORK: addon.host_network,
ATTR_HOST_PID: addon.host_pid,
ATTR_INGRESS: addon.with_ingress,
ATTR_LONG_DESCRIPTION: await addon.long_description(),
ATTR_RATING: rating_security(addon),
ATTR_SIGNED: addon.signed,
ATTR_APPARMOR: app.apparmor,
ATTR_AUTH_API: app.access_auth_api,
ATTR_DETACHED: app.is_detached,
ATTR_DOCKER_API: app.access_docker_api,
ATTR_FULL_ACCESS: app.with_full_access,
ATTR_HASSIO_API: app.access_hassio_api,
ATTR_HASSIO_ROLE: app.hassio_role,
ATTR_HOMEASSISTANT_API: app.access_homeassistant_api,
ATTR_HOST_NETWORK: app.host_network,
ATTR_HOST_PID: app.host_pid,
ATTR_INGRESS: app.with_ingress,
ATTR_LONG_DESCRIPTION: await app.long_description(),
ATTR_RATING: rating_security(app),
ATTR_SIGNED: app.signed,
}
)
@@ -192,21 +191,27 @@ class APIStore(CoreSysAttributes):
ATTR_MAINTAINER: repository.maintainer,
}
async def _all_store_apps_info(self) -> list[dict[str, Any]]:
"""Return gathered info for all apps in the store."""
return list(
await asyncio.gather(
*[
self._generate_app_information(self.sys_apps.store[app])
for app in self.sys_apps.store
]
)
)
@api_process
async def reload(self, request: web.Request) -> None:
"""Reload all add-on data from store."""
"""Reload all app data from store."""
await asyncio.shield(self.sys_store.reload())
@api_process
async def store_info(self, request: web.Request) -> dict[str, Any]:
"""Return store information."""
"""Return store information (v2: uses "apps" key)."""
return {
ATTR_ADDONS: await asyncio.gather(
*[
self._generate_addon_information(self.sys_addons.store[addon])
for addon in self.sys_addons.store
]
),
ATTR_APPS: await self._all_store_apps_info(),
ATTR_REPOSITORIES: [
self._generate_repository_information(repository)
for repository in self.sys_store.all
@@ -214,27 +219,36 @@ class APIStore(CoreSysAttributes):
}
@api_process
async def addons_list(self, request: web.Request) -> dict[str, Any]:
"""Return all store add-ons."""
async def store_info_v1(self, request: web.Request) -> dict[str, Any]:
"""Return store information (v1: uses "addons" key)."""
return {
ATTR_ADDONS: await asyncio.gather(
*[
self._generate_addon_information(self.sys_addons.store[addon])
for addon in self.sys_addons.store
]
)
ATTR_ADDONS: await self._all_store_apps_info(),
ATTR_REPOSITORIES: [
self._generate_repository_information(repository)
for repository in self.sys_store.all
],
}
@api_process
async def addons_addon_install(self, request: web.Request) -> dict[str, str] | None:
"""Install add-on."""
addon = self._extract_addon(request)
async def apps_list(self, request: web.Request) -> dict[str, Any]:
"""Return all store apps (v2: uses "apps" key)."""
return {ATTR_APPS: await self._all_store_apps_info()}
@api_process
async def apps_list_v1(self, request: web.Request) -> dict[str, Any]:
"""Return all store apps (v1: uses "addons" key)."""
return {ATTR_ADDONS: await self._all_store_apps_info()}
@api_process
async def apps_app_install(self, request: web.Request) -> dict[str, str] | None:
"""Install app."""
app = self._extract_app(request)
body = await api_validate(SCHEMA_INSTALL, request)
background = body[ATTR_BACKGROUND]
install_task, job_id = await background_task(
self, self.sys_addons.install, addon.slug
self, self.sys_apps.install, app.slug
)
if background and not install_task.done():
@@ -243,19 +257,19 @@ class APIStore(CoreSysAttributes):
return await install_task
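The install and update handlers above share one pattern: start the work immediately, then either return a job id to the client or await the result inline. A minimal stdlib sketch of that pattern (the names `background_task`, `install`, and the payload shapes are illustrative, not Supervisor's exact API):

```python
import asyncio
import uuid

async def background_task(coro_fn, *args) -> tuple[asyncio.Task, str]:
    # Start the work immediately; hand back the task plus a job id the
    # caller can expose to clients for later polling.
    job_id = uuid.uuid4().hex
    task = asyncio.get_running_loop().create_task(coro_fn(*args))
    return task, job_id

async def install(slug: str) -> dict:
    await asyncio.sleep(0)  # stand-in for the real install work
    return {"result": "ok", "slug": slug}

async def handler(background: bool) -> dict:
    task, job_id = await background_task(install, "core_ssh")
    if background and not task.done():
        # Client asked for background mode: return early with the job id.
        return {"job_id": job_id}
    # Foreground mode: block until the work finishes.
    return await task
```

Either way the work starts right away; `background` only controls whether the HTTP response waits for it.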
@api_process
async def addons_addon_update(self, request: web.Request) -> dict[str, str] | None:
"""Update add-on."""
addon = self._extract_addon(request, installed=True)
if addon == request.get(REQUEST_FROM):
raise APIForbidden(f"Add-on {addon.slug} can't update itself!")
async def apps_app_update(self, request: web.Request) -> dict[str, str] | None:
"""Update app."""
app = self._extract_app(request, installed=True)
if app == request.get(REQUEST_FROM):
raise APIForbidden(f"App {app.slug} can't update itself!")
body = await api_validate(SCHEMA_UPDATE, request)
background = body[ATTR_BACKGROUND]
update_task, job_id = await background_task(
self,
self.sys_addons.update,
addon.slug,
self.sys_apps.update,
app.slug,
backup=body.get(ATTR_BACKUP),
)
@@ -267,71 +281,71 @@ class APIStore(CoreSysAttributes):
return None
@api_process
async def addons_addon_info(self, request: web.Request) -> dict[str, Any]:
"""Return add-on information."""
return await self.addons_addon_info_wrapped(request)
async def apps_app_info(self, request: web.Request) -> dict[str, Any]:
"""Return app information."""
return await self.apps_app_info_wrapped(request)
# Used by legacy routing for addons/{addon}/info, can be refactored out when that is removed (1/2023)
async def addons_addon_info_wrapped(self, request: web.Request) -> dict[str, Any]:
"""Return add-on information directly (not api)."""
addon = cast(AddonStore, self._extract_addon(request))
return await self._generate_addon_information(addon, True)
# Used by legacy routing for apps/{app}/info, can be refactored out when that is removed (1/2023)
async def apps_app_info_wrapped(self, request: web.Request) -> dict[str, Any]:
"""Return app information directly (not api)."""
app = cast(AppStore, self._extract_app(request))
return await self._generate_app_information(app, True)
@api_process_raw(CONTENT_TYPE_PNG)
async def addons_addon_icon(self, request: web.Request) -> bytes:
"""Return icon from add-on."""
addon = self._extract_addon(request)
if not addon.with_icon:
raise APIError(f"No icon found for add-on {addon.slug}!")
async def apps_app_icon(self, request: web.Request) -> bytes:
"""Return icon from app."""
app = self._extract_app(request)
if not app.with_icon:
raise APIError(f"No icon found for app {app.slug}!")
return await self.sys_run_in_executor(_read_static_binary_file, addon.path_icon)
return await self.sys_run_in_executor(_read_static_binary_file, app.path_icon)
@api_process_raw(CONTENT_TYPE_PNG)
async def addons_addon_logo(self, request: web.Request) -> bytes:
"""Return logo from add-on."""
addon = self._extract_addon(request)
if not addon.with_logo:
raise APIError(f"No logo found for add-on {addon.slug}!")
async def apps_app_logo(self, request: web.Request) -> bytes:
"""Return logo from app."""
app = self._extract_app(request)
if not app.with_logo:
raise APIError(f"No logo found for app {app.slug}!")
return await self.sys_run_in_executor(_read_static_binary_file, addon.path_logo)
return await self.sys_run_in_executor(_read_static_binary_file, app.path_logo)
@api_process_raw(CONTENT_TYPE_TEXT)
async def addons_addon_changelog(self, request: web.Request) -> str:
"""Return changelog from add-on."""
async def apps_app_changelog(self, request: web.Request) -> str:
"""Return changelog from app."""
# Frontend can't handle error response here, need to return 200 and error as text for now
try:
addon = self._extract_addon(request)
app = self._extract_app(request)
except APIError as err:
return str(err)
if not addon.with_changelog:
return f"No changelog found for add-on {addon.slug}!"
if not app.with_changelog:
return f"No changelog found for app {app.slug}!"
return await self.sys_run_in_executor(
_read_static_text_file, addon.path_changelog
_read_static_text_file, app.path_changelog
)
@api_process_raw(CONTENT_TYPE_TEXT)
async def addons_addon_documentation(self, request: web.Request) -> str:
"""Return documentation from add-on."""
async def apps_app_documentation(self, request: web.Request) -> str:
"""Return documentation from app."""
# Frontend can't handle error response here, need to return 200 and error as text for now
try:
addon = self._extract_addon(request)
app = self._extract_app(request)
except APIError as err:
return str(err)
if not addon.with_documentation:
return f"No documentation found for add-on {addon.slug}!"
if not app.with_documentation:
return f"No documentation found for app {app.slug}!"
return await self.sys_run_in_executor(
_read_static_text_file, addon.path_documentation
_read_static_text_file, app.path_documentation
)
@api_process
async def addons_addon_availability(self, request: web.Request) -> None:
"""Check add-on availability for current system."""
addon = cast(AddonStore, self._extract_addon(request))
addon.validate_availability()
async def apps_app_availability(self, request: web.Request) -> None:
"""Check app availability for current system."""
app = cast(AppStore, self._extract_app(request))
app.validate_availability()
@api_process
async def repositories_list(self, request: web.Request) -> list[dict[str, Any]]:


@@ -10,7 +10,7 @@ import voluptuous as vol
from ..const import (
ATTR_ADDONS,
ATTR_ADDONS_REPOSITORIES,
ATTR_APPS_REPOSITORIES,
ATTR_ARCH,
ATTR_AUTO_UPDATE,
ATTR_BLK_READ,
@@ -22,6 +22,7 @@ from ..const import (
ATTR_DEBUG_BLOCK,
ATTR_DETECT_BLOCKING_IO,
ATTR_DIAGNOSTICS,
ATTR_FEATURE_FLAGS,
ATTR_HEALTHY,
ATTR_ICON,
ATTR_IP_ADDRESS,
@@ -41,6 +42,7 @@ from ..const import (
ATTR_VERSION,
ATTR_VERSION_LATEST,
ATTR_WAIT_BOOT,
FeatureFlag,
LogLevel,
UpdateChannel,
)
@@ -60,7 +62,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
SCHEMA_OPTIONS = vol.Schema(
{
vol.Optional(ATTR_CHANNEL): vol.Coerce(UpdateChannel),
vol.Optional(ATTR_ADDONS_REPOSITORIES): repositories,
vol.Optional(ATTR_APPS_REPOSITORIES): repositories,
vol.Optional(ATTR_TIMEZONE): str,
vol.Optional(ATTR_WAIT_BOOT): wait_boot,
vol.Optional(ATTR_LOGGING): vol.Coerce(LogLevel),
@@ -70,6 +72,9 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
vol.Optional(ATTR_DETECT_BLOCKING_IO): vol.Coerce(DetectBlockingIO),
vol.Optional(ATTR_COUNTRY): str,
vol.Optional(ATTR_FEATURE_FLAGS): vol.Schema(
{vol.Coerce(FeatureFlag): vol.Boolean()}
),
}
)
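The new `ATTR_FEATURE_FLAGS` option validates a mapping of flag names to booleans via `vol.Schema({vol.Coerce(FeatureFlag): vol.Boolean()})`. A stdlib-only sketch of the same key coercion (the flag names here are hypothetical, and real `vol.Boolean()` also coerces strings like "true"; this version stays strict):

```python
from enum import Enum

class FeatureFlag(str, Enum):
    # Hypothetical flag names, for illustration only.
    MDNS_PUBLISH = "mdns_publish"
    NEW_SCHEDULER = "new_scheduler"

def validate_feature_flags(raw: dict) -> dict:
    """Coerce string keys to FeatureFlag members and require bool values,
    mirroring vol.Schema({vol.Coerce(FeatureFlag): vol.Boolean()})."""
    validated = {}
    for key, value in raw.items():
        flag = FeatureFlag(key)  # raises ValueError on an unknown flag name
        if not isinstance(value, bool):
            raise ValueError(f"{key}: expected a boolean, got {value!r}")
        validated[flag] = value
    return validated
```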
@@ -104,22 +109,26 @@ class APISupervisor(CoreSysAttributes):
ATTR_AUTO_UPDATE: self.sys_updater.auto_update,
ATTR_DETECT_BLOCKING_IO: BlockBusterManager.is_enabled(),
ATTR_COUNTRY: self.sys_config.country,
# Depricated
ATTR_FEATURE_FLAGS: {
feature.value: self.sys_config.feature_flags.get(feature, False)
for feature in FeatureFlag
},
# Deprecated
ATTR_WAIT_BOOT: self.sys_config.wait_boot,
ATTR_ADDONS: [
{
ATTR_NAME: addon.name,
ATTR_SLUG: addon.slug,
ATTR_VERSION: addon.version,
ATTR_VERSION_LATEST: addon.latest_version,
ATTR_UPDATE_AVAILABLE: addon.need_update,
ATTR_STATE: addon.state,
ATTR_REPOSITORY: addon.repository,
ATTR_ICON: addon.with_icon,
ATTR_NAME: app.name,
ATTR_SLUG: app.slug,
ATTR_VERSION: app.version,
ATTR_VERSION_LATEST: app.latest_version,
ATTR_UPDATE_AVAILABLE: app.need_update,
ATTR_STATE: app.state,
ATTR_REPOSITORY: app.repository,
ATTR_ICON: app.with_icon,
}
for addon in self.sys_addons.local.values()
for app in self.sys_apps.local.values()
],
ATTR_ADDONS_REPOSITORIES: [
ATTR_APPS_REPOSITORIES: [
{ATTR_NAME: store.name, ATTR_SLUG: store.slug}
for store in self.sys_store.all
],
@@ -182,14 +191,18 @@ class APISupervisor(CoreSysAttributes):
if ATTR_WAIT_BOOT in body:
self.sys_config.wait_boot = body[ATTR_WAIT_BOOT]
# Save changes before processing addons in case of errors
if ATTR_FEATURE_FLAGS in body:
for feature, enabled in body[ATTR_FEATURE_FLAGS].items():
self.sys_config.set_feature_flag(feature, enabled)
# Save changes before processing apps in case of errors
await self.sys_updater.save_data()
await self.sys_config.save_data()
# Remove: 2022.9
if ATTR_ADDONS_REPOSITORIES in body:
if ATTR_APPS_REPOSITORIES in body:
await asyncio.shield(
self.sys_store.update_repositories(set(body[ATTR_ADDONS_REPOSITORIES]))
self.sys_store.update_repositories(set(body[ATTR_APPS_REPOSITORIES]))
)
await self.sys_resolution.evaluate.evaluate_system()
@@ -230,7 +243,7 @@ class APISupervisor(CoreSysAttributes):
@api_process
async def reload(self, request: web.Request) -> None:
"""Reload add-ons, configuration, etc."""
"""Reload apps, configuration, etc."""
await asyncio.gather(
asyncio.shield(self.sys_updater.reload()),
asyncio.shield(self.sys_homeassistant.secrets.reload()),


@@ -3,6 +3,7 @@
import asyncio
from collections.abc import Callable, Mapping
import json
import logging
from typing import Any, cast
from aiohttp import web
@@ -31,8 +32,11 @@ from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
from ..utils.log_format import format_message
from ..utils.sentry import async_capture_exception
from . import const
_LOGGER: logging.Logger = logging.getLogger(__name__)
def extract_supervisor_token(request: web.Request) -> str | None:
"""Extract Supervisor token from request."""
@@ -72,6 +76,8 @@ def api_process(method):
err, status=err.status, job_id=err.job_id, headers=err.headers
)
except HassioError as err:
_LOGGER.exception("Unexpected error during API call: %s", err)
await async_capture_exception(err)
return api_return_error(err)
if isinstance(answer, (dict, list)):
@@ -119,6 +125,8 @@ def api_process_raw(content, *, error_type=None):
job_id=err.job_id,
)
except HassioError as err:
_LOGGER.exception("Unexpected error during API call: %s", err)
await async_capture_exception(err)
return api_return_error(
err, error_type=error_type or const.CONTENT_TYPE_BINARY
)
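The change above logs and reports otherwise-unhandled `HassioError`s instead of letting them escape the decorator. A simplified synchronous sketch of that pattern (the response payload shape is illustrative, not Supervisor's actual wire format):

```python
import functools
import logging

_LOGGER = logging.getLogger(__name__)

class HassioError(Exception):
    """Base error, standing in for Supervisor's exception hierarchy."""

def api_process(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return {"result": "ok", "data": func(*args, **kwargs)}
        except HassioError as err:
            # Log with traceback, then turn the error into a response
            # instead of propagating it to the web framework.
            _LOGGER.exception("Unexpected error during API call: %s", err)
            return {
                "result": "error",
                "message": str(err) or "Unknown error, see Supervisor logs",
            }
    return wrapper

@api_process
def broken_endpoint():
    raise HassioError("boom")
```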
@@ -148,7 +156,7 @@ def api_return_error(
if check_exception_chain(error, DockerAPIError):
message = format_message(message)
if not message:
message = "Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
message = "Unknown error, see Supervisor logs"
match error_type:
case const.CONTENT_TYPE_TEXT:


@@ -14,11 +14,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
ARCH_JSON: Path = Path(__file__).parent.joinpath("data/arch.json")
MAP_CPU: dict[str, CpuArch] = {
"armv7": CpuArch.ARMV7,
"armv6": CpuArch.ARMHF,
"armv8": CpuArch.AARCH64,
"aarch64": CpuArch.AARCH64,
"i686": CpuArch.I386,
"x86_64": CpuArch.AMD64,
}
@@ -64,11 +61,12 @@ class CpuArchManager(CoreSysAttributes):
if not self.sys_machine or self.sys_machine not in arch_data:
_LOGGER.warning("Can't detect the machine type!")
self._default_arch = native_support
self._supported_arch.append(self.default)
self._supported_arch = [self.default]
self._supported_set = {self.default}
return
# Use configs from arch.json
self._supported_arch.extend(CpuArch(a) for a in arch_data[self.sys_machine])
self._supported_arch = [CpuArch(a) for a in arch_data[self.sys_machine]]
self._default_arch = self.supported[0]
# Make sure native support is in supported list


@@ -1,18 +1,17 @@
"""Manage SSO for Add-ons with Home Assistant user."""
"""Manage SSO for Apps with Home Assistant user."""
import asyncio
import hashlib
import logging
from typing import Any, TypedDict, cast
from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .addons.addon import App
from .const import ATTR_PASSWORD, ATTR_USERNAME, FILE_HASSIO_AUTH, HomeAssistantUser
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthListUsersNoneResponseError,
AuthPasswordResetError,
HomeAssistantAPIError,
HomeAssistantWSError,
@@ -35,7 +34,7 @@ class BackendAuthRequest(TypedDict):
class Auth(FileConfiguration, CoreSysAttributes):
"""Manage SSO for Add-ons with Home Assistant user."""
"""Manage SSO for Apps with Home Assistant user."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize updater."""
@@ -82,13 +81,13 @@ class Auth(FileConfiguration, CoreSysAttributes):
await self.save_data()
async def check_login(
self, addon: Addon, username: str | None, password: str | None
self, app: App, username: str | None, password: str | None
) -> bool:
"""Check username login."""
if username is None or password is None:
raise AuthInvalidNonStringValueError(_LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
_LOGGER.info("Auth request from '%s' for '%s'", app.slug, username)
# Get from cache
cache_hit = self._check_cache(username, password)
@@ -100,18 +99,18 @@ class Auth(FileConfiguration, CoreSysAttributes):
# No cache hit
if cache_hit is None:
return await self._backend_login(addon, username, password)
return await self._backend_login(app, username, password)
# Home Assistant Core takes 1-2 sec to validate it
# Let's use the cache and update the cache in background
if username not in self._running:
self._running[username] = self.sys_create_task(
self._backend_login(addon, username, password)
self._backend_login(app, username, password)
)
return cache_hit
async def _backend_login(self, addon: Addon, username: str, password: str) -> bool:
async def _backend_login(self, app: App, username: str, password: str) -> bool:
"""Check username login on core."""
try:
async with self.sys_homeassistant.api.make_request(
@@ -120,7 +119,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
json=cast(
dict[str, Any],
BackendAuthRequest(
username=username, password=password, addon=addon.slug
username=username, password=password, addon=app.slug
),
),
) as req:
@@ -157,22 +156,14 @@ class Auth(FileConfiguration, CoreSysAttributes):
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
async def list_users(self) -> list[HomeAssistantUser]:
"""List users on the Home Assistant instance."""
try:
users: (
list[dict[str, Any]] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
return await self.sys_homeassistant.list_users()
except HomeAssistantWSError as err:
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
if users is not None:
return users
raise AuthListUsersNoneResponseError(_LOGGER.error)
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:
"""Rehash a value."""


@@ -3,7 +3,7 @@
import asyncio
from collections import defaultdict
from collections.abc import AsyncGenerator, Awaitable
from contextlib import asynccontextmanager
from contextlib import asynccontextmanager, suppress
from copy import deepcopy
from dataclasses import dataclass
from datetime import timedelta
@@ -12,21 +12,26 @@ import json
import logging
from pathlib import Path, PurePath
import tarfile
from tarfile import TarFile
from tempfile import TemporaryDirectory
import time
from typing import Any, Self, cast
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
from securetar import (
AddFileError,
InvalidPasswordError,
SecureTarArchive,
SecureTarFile,
SecureTarReadError,
atomic_contents_add,
)
import voluptuous as vol
from voluptuous.humanize import humanize_error
from ..addons.manager import Addon
from ..addons.manager import App
from ..const import (
ATTR_ADDONS,
ATTR_COMPRESSED,
ATTR_CRYPTO,
ATTR_DATE,
ATTR_DOCKER,
ATTR_EXCLUDE_DATABASE,
@@ -35,34 +40,48 @@ from ..const import (
ATTR_HOMEASSISTANT,
ATTR_NAME,
ATTR_PROTECTED,
ATTR_REGISTRIES,
ATTR_REPOSITORIES,
ATTR_SIZE,
ATTR_SLUG,
ATTR_SUPERVISOR_VERSION,
ATTR_TYPE,
ATTR_VERSION,
CRYPTO_AES128,
)
from ..coresys import CoreSys
from ..exceptions import (
AddonsError,
AppsError,
BackupError,
BackupFatalIOError,
BackupFileExistError,
BackupFileNotFoundError,
BackupInvalidError,
BackupPermissionError,
MountError,
)
from ..homeassistant.const import LANDINGPAGE
from ..jobs.const import JOB_GROUP_BACKUP
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..utils import remove_folder
from ..mounts.const import ATTR_DEFAULT_BACKUP_MOUNT, ATTR_MOUNTS
from ..mounts.mount import Mount
from ..mounts.validate import SCHEMA_MOUNTS_CONFIG
from ..utils import remove_folder, version_is_new_enough
from ..utils.dt import parse_datetime, utcnow
from ..utils.json import json_bytes
from ..utils.sentinel import DEFAULT
from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from ..validate import SCHEMA_DOCKER_CONFIG
from .const import (
BUF_SIZE,
CORE_SECURETAR_V3_MIN_VERSION,
LOCATION_CLOUD_BACKUP,
SECURETAR_CREATE_VERSION,
SECURETAR_V3_CREATE_VERSION,
BackupType,
)
from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_DOCKER}
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -99,7 +118,7 @@ class Backup(JobGroup):
)
self._data: dict[str, Any] = data or {ATTR_SLUG: slug}
self._tmp: TemporaryDirectory | None = None
self._outer_secure_tarfile: SecureTarFile | None = None
self._outer_secure_tarfile: SecureTarArchive | None = None
self._password: str | None = None
self._locations: dict[str | None, BackupLocation] = {
location: BackupLocation(
@@ -145,14 +164,14 @@ class Backup(JobGroup):
return self._data[ATTR_COMPRESSED]
@property
def addons(self) -> list[dict[str, Any]]:
"""Return backup date."""
def apps(self) -> list[dict[str, Any]]:
"""Return the apps included in the backup."""
return self._data[ATTR_ADDONS]
@property
def addon_list(self) -> list[str]:
"""Return a list of add-ons slugs."""
return [addon_data[ATTR_SLUG] for addon_data in self.addons]
def app_list(self) -> list[str]:
"""Return a list of apps slugs."""
return [app_data[ATTR_SLUG] for app_data in self.apps]
@property
def folders(self) -> list[str]:
@@ -161,12 +180,12 @@ class Backup(JobGroup):
@property
def repositories(self) -> list[str]:
"""Return add-on store repositories."""
"""Return app store repositories."""
return self._data[ATTR_REPOSITORIES]
@repositories.setter
def repositories(self, value: list[str]) -> None:
"""Set add-on store repositories."""
"""Set app store repositories."""
self._data[ATTR_REPOSITORIES] = value
@property
@@ -198,16 +217,6 @@ class Backup(JobGroup):
"""Get extra metadata added by client."""
return self._data[ATTR_EXTRA]
@property
def docker(self) -> dict[str, Any]:
"""Return backup Docker config data."""
return self._data.get(ATTR_DOCKER, {})
@docker.setter
def docker(self, value: dict[str, Any]) -> None:
"""Set the Docker config data."""
self._data[ATTR_DOCKER] = value
@property
def location(self) -> str | None:
"""Return the location of the backup."""
@@ -324,19 +333,24 @@ class Backup(JobGroup):
# Add defaults
self._data = SCHEMA_BACKUP(self._data)
# Set password
# Set password - intentionally using truthiness check so that empty
# string is treated as no password, consistent with set_password().
if password:
self._password = password
self._data[ATTR_PROTECTED] = True
self._data[ATTR_CRYPTO] = CRYPTO_AES128
self._locations[self.location].protected = True
if not compressed:
self._data[ATTR_COMPRESSED] = False
def set_password(self, password: str | None) -> None:
"""Set the password for an existing backup."""
self._password = password
"""Set the password for an existing backup.
Treat empty string as None to stay consistent with backup creation
and Supervisor behavior before #6402, independent of SecureTar
behavior in this regard.
"""
self._password = password or None
async def validate_backup(self, location: str | None) -> None:
"""Validate backup.
@@ -364,15 +378,17 @@ class Backup(JobGroup):
test_tar_file = backup.extractfile(test_tar_name)
try:
with SecureTarFile(
ending, # Not used
gzip=self.compressed,
mode="r",
fileobj=test_tar_file,
password=self._password,
):
# If we can read the tar file, the password is correct
return
except tarfile.ReadError as ex:
except (
tarfile.ReadError,
SecureTarReadError,
InvalidPasswordError,
) as ex:
raise BackupInvalidError(
f"Invalid password for backup {self.slug}", _LOGGER.error
) from ex
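The broadened `except` above maps low-level read and password errors from SecureTar to the domain-level "invalid password" error. A stdlib-only sketch of that mapping, using plain `tarfile` in place of SecureTar (`check_inner_tar` and `BackupInvalidError` here are illustrative stand-ins):

```python
import io
import tarfile

class BackupInvalidError(Exception):
    """Stand-in for Supervisor's domain-level backup error."""

def check_inner_tar(fileobj) -> None:
    """Map a low-level ReadError to the domain-level error, as the
    broadened except clause above does for SecureTar failures."""
    try:
        with tarfile.open(fileobj=fileobj, mode="r:gz"):
            return  # readable archive: the "password" would be correct
    except tarfile.ReadError as ex:
        raise BackupInvalidError("Invalid password for backup") from ex
```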
@@ -440,8 +456,17 @@ class Backup(JobGroup):
@asynccontextmanager
async def create(self) -> AsyncGenerator[None]:
"""Create new backup file."""
core_version = self.sys_homeassistant.version
if (
core_version is not None
and core_version != LANDINGPAGE
and version_is_new_enough(core_version, CORE_SECURETAR_V3_MIN_VERSION)
):
securetar_version = SECURETAR_V3_CREATE_VERSION
else:
securetar_version = SECURETAR_CREATE_VERSION
def _open_outer_tarfile() -> tuple[SecureTarFile, tarfile.TarFile]:
def _open_outer_tarfile() -> SecureTarArchive:
"""Create and open outer tarfile."""
if self.tarfile.is_file():
raise BackupFileExistError(
@@ -449,14 +474,15 @@ class Backup(JobGroup):
_LOGGER.error,
)
_outer_secure_tarfile = SecureTarFile(
_outer_secure_tarfile = SecureTarArchive(
self.tarfile,
"w",
gzip=False,
bufsize=BUF_SIZE,
create_version=securetar_version,
password=self._password,
)
try:
_outer_tarfile = _outer_secure_tarfile.open()
_outer_secure_tarfile.open()
except PermissionError as ex:
raise BackupPermissionError(
f"Cannot open backup file {self.tarfile.as_posix()}, permission error!",
@@ -468,11 +494,9 @@ class Backup(JobGroup):
_LOGGER.error,
) from ex
return _outer_secure_tarfile, _outer_tarfile
return _outer_secure_tarfile
outer_secure_tarfile, outer_tarfile = await self.sys_run_in_executor(
_open_outer_tarfile
)
outer_secure_tarfile = await self.sys_run_in_executor(_open_outer_tarfile)
self._outer_secure_tarfile = outer_secure_tarfile
def _close_outer_tarfile() -> int:
@@ -482,10 +506,20 @@ class Backup(JobGroup):
try:
yield
finally:
await self._create_cleanup(outer_tarfile)
except Exception:
self._outer_secure_tarfile = None
# Close may fail (e.g. ENOSPC writing end-of-archive
# markers), but tarfile's finally ensures the file handle
# is released regardless. The file is unlinked by the caller.
with suppress(Exception):
await self.sys_run_in_executor(outer_secure_tarfile.close)
raise
try:
await self._create_finalize(outer_secure_tarfile)
size_bytes = await self.sys_run_in_executor(_close_outer_tarfile)
self._locations[self.location].size_bytes = size_bytes
finally:
self._outer_secure_tarfile = None
@asynccontextmanager
@@ -512,12 +546,24 @@ class Backup(JobGroup):
)
tmp = TemporaryDirectory(dir=str(backup_tarfile.parent))
with tarfile.open(backup_tarfile, "r:") as tar:
tar.extractall(
path=tmp.name,
members=secure_path(tar),
filter="fully_trusted",
)
try:
with tarfile.open(backup_tarfile, "r:") as tar:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar.extractall(
path=tmp.name,
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
except tarfile.TarError as err:
raise BackupError(
f"Can't read backup tarfile {backup_tarfile.as_posix()}: {err}",
_LOGGER.error,
) from err
return tmp
@@ -531,11 +577,11 @@ class Backup(JobGroup):
if self._tmp:
await self.sys_run_in_executor(self._tmp.cleanup)
async def _create_cleanup(self, outer_tarfile: TarFile) -> None:
"""Cleanup after backup creation.
async def _create_finalize(self, outer_archive: SecureTarArchive) -> None:
"""Finalize backup creation.
Separate method to be called from create to ensure
that cleanup is always performed, even if an exception is raised.
Separate method to be called from create to ensure that the backup is
finalized.
"""
# validate data
try:
@@ -554,50 +600,53 @@ class Backup(JobGroup):
tar_info = tarfile.TarInfo(name="./backup.json")
tar_info.size = len(raw_bytes)
tar_info.mtime = int(time.time())
outer_tarfile.addfile(tar_info, fileobj=fileobj)
outer_archive.tar.addfile(tar_info, fileobj=fileobj)
try:
await self.sys_run_in_executor(_add_backup_json)
except (OSError, json.JSONDecodeError) as err:
except OSError as err:
raise BackupFatalIOError(
f"Can't write backup metadata: {err!s}", _LOGGER.error
) from err
except json.JSONDecodeError as err:
self.sys_jobs.current.capture_error(BackupError("Can't write backup"))
_LOGGER.error("Can't write backup: %s", err)
@Job(name="backup_addon_save", cleanup=False)
async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
"""Store an add-on into backup."""
self.sys_jobs.current.reference = slug = addon.slug
async def _app_save(self, app: App) -> asyncio.Task | None:
"""Store an app into backup."""
self.sys_jobs.current.reference = slug = app.slug
if not self._outer_secure_tarfile:
raise RuntimeError(
"Cannot backup components without initializing backup tar"
)
# Ensure it is still installed and get current data before proceeding
if not (curr_addon := self.sys_addons.get_local_only(slug)):
if not (curr_app := self.sys_apps.get_local_only(slug)):
_LOGGER.warning(
"Skipping backup of add-on %s because it has been uninstalled",
"Skipping backup of app %s because it has been uninstalled",
slug,
)
return None
tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar(
app_file = self._outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
)
# Take backup
try:
start_task = await curr_addon.backup(addon_file)
except AddonsError as err:
start_task = await curr_app.backup(app_file)
except AppsError as err:
raise BackupError(str(err)) from err
# Store to config
self._data[ATTR_ADDONS].append(
{
ATTR_SLUG: slug,
ATTR_NAME: curr_addon.name,
ATTR_VERSION: curr_addon.version,
ATTR_NAME: curr_app.name,
ATTR_VERSION: curr_app.version,
# Bug - addon_file.size used to give us this information
# It always returns 0 in current securetar. Skipping until fixed
ATTR_SIZE: 0,
@@ -607,64 +656,67 @@ class Backup(JobGroup):
return start_task
@Job(name="backup_store_addons", cleanup=False)
async def store_addons(self, addon_list: list[Addon]) -> list[asyncio.Task]:
"""Add a list of add-ons into backup.
async def store_apps(self, app_list: list[App]) -> list[asyncio.Task]:
"""Add a list of apps into backup.
For each addon that needs to be started after backup, returns a Task which
completes when that addon has state 'started' (see addon.start).
For each app that needs to be started after backup, returns a Task which
completes when that app has state 'started' (see app.start).
"""
# Save Add-ons sequential avoid issue on slow IO
# Save apps sequentially to avoid issues on slow IO
start_tasks: list[asyncio.Task] = []
for addon in addon_list:
for app in app_list:
try:
if start_task := await self._addon_save(addon):
if start_task := await self._app_save(app):
start_tasks.append(start_task)
except BackupFatalIOError:
raise
except BackupError as err:
self.sys_jobs.current.capture_error(err)
return start_tasks
@Job(name="backup_addon_restore", cleanup=False)
async def _addon_restore(self, addon_slug: str) -> asyncio.Task | None:
"""Restore an add-on from backup."""
self.sys_jobs.current.reference = addon_slug
async def _app_restore(self, app_slug: str) -> asyncio.Task | None:
"""Restore an app from backup."""
self.sys_jobs.current.reference = app_slug
if not self._tmp:
raise RuntimeError("Cannot restore components without opening backup tar")
tar_name = f"{addon_slug}.tar{'.gz' if self.compressed else ''}"
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"r",
tar_name = f"{app_slug}.tar{'.gz' if self.compressed else ''}"
tar_path = Path(self._tmp.name, tar_name)
# Verify the backup exists before trying to restore it
if not await self.sys_run_in_executor(tar_path.exists):
raise BackupError(f"Can't find backup {app_slug}", _LOGGER.error)
app_file = SecureTarFile(
tar_path,
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
)
# If exists inside backup
if not await self.sys_run_in_executor(addon_file.path.exists):
raise BackupError(f"Can't find backup {addon_slug}", _LOGGER.error)
# Perform a restore
try:
return await self.sys_addons.restore(addon_slug, addon_file)
except AddonsError as err:
return await self.sys_apps.restore(app_slug, app_file)
except AppsError as err:
raise BackupError(
f"Can't restore backup {addon_slug}", _LOGGER.error
f"Can't restore backup {app_slug}", _LOGGER.error
) from err
@Job(name="backup_restore_addons", cleanup=False)
async def restore_addons(
self, addon_list: list[str]
async def restore_apps(
self, app_list: list[str]
) -> tuple[bool, list[asyncio.Task]]:
"""Restore a list add-on from backup."""
# Save Add-ons sequential avoid issue on slow IO
"""Restore a list app from backup."""
# Save Apps sequential avoid issue on slow IO
start_tasks: list[asyncio.Task] = []
success = True
for slug in addon_list:
for slug in app_list:
try:
start_task = await self._addon_restore(slug)
start_task = await self._app_restore(slug)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't restore Add-on %s: %s", slug, err)
_LOGGER.warning("Can't restore app %s: %s", slug, err)
success = False
else:
if start_task:
@@ -673,20 +725,20 @@ class Backup(JobGroup):
return (success, start_tasks)
@Job(name="backup_remove_delta_addons", cleanup=False)
async def remove_delta_addons(self) -> bool:
"""Remove addons which are not in this backup."""
async def remove_delta_apps(self) -> bool:
"""Remove apps which are not in this backup."""
success = True
for addon in self.sys_addons.installed:
if addon.slug in self.addon_list:
for app in self.sys_apps.installed:
if app.slug in self.app_list:
continue
# Remove Add-on because it's not a part of the new env
# Remove App because it's not a part of the new env
# Do it sequentially to avoid issues on slow IO
try:
await self.sys_addons.uninstall(addon.slug)
except AddonsError as err:
await self.sys_apps.uninstall(app.slug)
except AppsError as err:
self.sys_jobs.current.capture_error(err)
_LOGGER.warning("Can't uninstall Add-on %s: %s", addon.slug, err)
_LOGGER.warning("Can't uninstall app %s: %s", app.slug, err)
success = False
return success
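The delta-removal loop above reduces to a set membership test: anything installed but absent from the backup's app list is uninstalled sequentially, and a single failure does not abort the loop. A minimal standalone sketch (slugs and the failing uninstall stub are illustrative, not from the Supervisor code):

```python
# Sketch of remove_delta_apps() semantics under the assumption that
# uninstall() can fail per-app without stopping the sweep.
installed = ["core_ssh", "core_mosquitto", "local_custom"]
app_list = {"core_ssh"}  # apps present in the backup

failed: list[str] = []

def uninstall(slug: str) -> None:
    if slug == "local_custom":  # simulate one failing uninstall
        raise RuntimeError("busy")

success = True
for slug in installed:
    if slug in app_list:
        continue  # part of the backup, keep it
    try:
        uninstall(slug)
    except RuntimeError:
        failed.append(slug)
        success = False  # recorded, but the loop continues

assert failed == ["local_custom"]
assert success is False
```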
@@ -730,10 +782,9 @@ class Backup(JobGroup):
return False
with outer_secure_tarfile.create_inner_tar(
with outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
) as tar_file:
atomic_contents_add(
tar_file,
@@ -748,8 +799,12 @@ class Backup(JobGroup):
try:
if await self.sys_run_in_executor(_save):
self._data[ATTR_FOLDERS].append(name)
except (tarfile.TarError, OSError, AddFileError) as err:
raise BackupError(f"Can't write tarfile: {str(err)}") from err
except OSError as err:
raise BackupFatalIOError(
f"Can't write tarfile: {err!s}", _LOGGER.error
) from err
except (tarfile.TarError, AddFileError) as err:
raise BackupError(f"Can't write tarfile: {err!s}") from err
@Job(name="backup_store_folders", cleanup=False)
async def store_folders(self, folder_list: list[str]):
@@ -758,6 +813,8 @@ class Backup(JobGroup):
for folder in folder_list:
try:
await self._folder_save(folder)
except BackupFatalIOError:
raise
except BackupError as err:
err = BackupError(
f"Can't backup folder {folder}: {str(err)}", _LOGGER.error
@@ -793,15 +850,21 @@ class Backup(JobGroup):
_LOGGER.info("Restore folder %s", name)
with SecureTarFile(
tar_name,
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
) as tar_file:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
path=origin_dir,
filter="tar",
)
_LOGGER.info("Restore folder %s done", name)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
) from err
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't restore folder {name}: {err}", _LOGGER.warning
@@ -854,10 +917,9 @@ class Backup(JobGroup):
tar_name = f"homeassistant.tar{'.gz' if self.compressed else ''}"
# Backup Home Assistant Core config directory
homeassistant_file = self._outer_secure_tarfile.create_inner_tar(
homeassistant_file = self._outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
password=self._password,
)
await self.sys_homeassistant.backup(homeassistant_file, exclude_database)
@@ -881,7 +943,6 @@ class Backup(JobGroup):
)
homeassistant_file = SecureTarFile(
tar_name,
"r",
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
@@ -920,3 +981,191 @@ class Backup(JobGroup):
return self.sys_store.update_repositories(
set(self.repositories), issue_on_error=True, replace=replace
)
@Job(name="backup_store_supervisor_config", cleanup=False)
async def store_supervisor_config(self) -> None:
"""Store supervisor configuration into backup as encrypted tar."""
if not self._outer_secure_tarfile:
raise RuntimeError(
"Cannot backup components without initializing backup tar"
)
registries = self.sys_docker.config.registries
if not self.sys_mounts.mounts and not registries:
return
mounts_data = {
ATTR_DEFAULT_BACKUP_MOUNT: (
self.sys_mounts.default_backup_mount.name
if self.sys_mounts.default_backup_mount
else None
),
ATTR_MOUNTS: [
mount.to_dict(skip_secrets=False) for mount in self.sys_mounts.mounts
],
}
docker_data = {ATTR_REGISTRIES: registries}
outer_secure_tarfile = self._outer_secure_tarfile
tar_name = f"supervisor.tar{'.gz' if self.compressed else ''}"
def _save() -> None:
"""Save supervisor config data to tar file."""
_LOGGER.info("Backing up supervisor configuration")
# Create JSON data
mounts_json = json.dumps(mounts_data).encode("utf-8")
docker_json = json.dumps(docker_data).encode("utf-8")
with outer_secure_tarfile.create_tar(
f"./{tar_name}",
gzip=self.compressed,
) as tar_file:
# Add mounts.json to tar
tarinfo = tarfile.TarInfo(name="mounts.json")
tarinfo.size = len(mounts_json)
tar_file.addfile(tarinfo, io.BytesIO(mounts_json))
# Add docker.json to tar
tarinfo = tarfile.TarInfo(name="docker.json")
tarinfo.size = len(docker_json)
tar_file.addfile(tarinfo, io.BytesIO(docker_json))
_LOGGER.info("Backup supervisor configuration done")
try:
await self.sys_run_in_executor(_save)
except OSError as err:
raise BackupFatalIOError(
f"Can't write supervisor config tarfile: {err!s}", _LOGGER.error
) from err
except tarfile.TarError as err:
raise BackupError(
f"Can't write supervisor config tarfile: {err!s}"
) from err
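The `_save()` helper above writes JSON members into the tar straight from memory via `TarInfo` + `BytesIO`, with no temp files. A minimal sketch of that pattern (payload contents are illustrative):

```python
import io
import json
import tarfile

# Serialize config dicts to JSON and add them as in-memory tar members.
mounts_data = {"default_backup_mount": None, "mounts": []}
docker_data = {"registries": {}}

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, payload in (("mounts.json", mounts_data), ("docker.json", docker_data)):
        raw = json.dumps(payload).encode("utf-8")
        info = tarfile.TarInfo(name=name)
        info.size = len(raw)  # size must match the buffer exactly
        tar.addfile(info, io.BytesIO(raw))

# Reopen and list members to confirm both files landed in the archive.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()

assert names == ["mounts.json", "docker.json"]
```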
@Job(name="backup_restore_supervisor_config", cleanup=False)
async def restore_supervisor_config(self) -> tuple[bool, list[asyncio.Task]]:
"""Restore supervisor configuration from backup.
Returns tuple of (success, list of mount activation tasks).
The tasks should be awaited after the restore is complete to activate mounts.
"""
if not self._tmp:
raise RuntimeError("Cannot restore components without opening backup tar")
tar_name = Path(
self._tmp.name, f"supervisor.tar{'.gz' if self.compressed else ''}"
)
# Extract and parse supervisor data
def _load_supervisor_data() -> tuple[
dict[str, Any] | None, dict[str, Any] | None
]:
"""Load mounts and docker data from tar file."""
if not tar_name.exists():
_LOGGER.info("Supervisor tar file not found in backup")
return (None, None)
mounts_data = None
docker_data = None
with SecureTarFile(
tar_name,
gzip=self.compressed,
bufsize=BUF_SIZE,
password=self._password,
) as tar_file:
try:
member = tar_file.getmember("mounts.json")
file_obj = tar_file.extractfile(member)
if file_obj:
mounts_data = json.loads(file_obj.read().decode("utf-8"))
except KeyError:
_LOGGER.debug("mounts.json not found in supervisor tar")
try:
member = tar_file.getmember("docker.json")
file_obj = tar_file.extractfile(member)
if file_obj:
docker_data = json.loads(file_obj.read().decode("utf-8"))
except KeyError:
_LOGGER.debug("docker.json not found in supervisor tar")
return (mounts_data, docker_data)
try:
mounts_data, docker_data = await self.sys_run_in_executor(
_load_supervisor_data
)
except OSError as err:
self.sys_resolution.check_oserror(err)
_LOGGER.warning("Failed to read supervisor tar from backup: %s", err)
return (False, [])
except (tarfile.TarError, json.JSONDecodeError) as err:
_LOGGER.warning("Failed to read supervisor config from backup: %s", err)
return (False, [])
if not mounts_data and not docker_data:
return (True, [])
success = True
mount_tasks: list[asyncio.Task] = []
# Restore mount configurations
if mounts_data:
try:
mounts_data = SCHEMA_MOUNTS_CONFIG(mounts_data)
except vol.Invalid as err:
_LOGGER.warning("Invalid mounts data in supervisor config: %s", err)
success = False
mounts_data = None
if mounts_data:
for mount_data in mounts_data.get(ATTR_MOUNTS, []):
mount_name = mount_data[ATTR_NAME]
try:
mount = Mount.from_dict(self.coresys, mount_data)
mount_tasks.append(await self.sys_mounts.restore_mount(mount))
_LOGGER.info("Restored mount configuration: %s", mount_name)
except (MountError, vol.Invalid, KeyError, OSError) as err:
_LOGGER.warning("Failed to restore mount %s: %s", mount_name, err)
success = False
# Restore default backup mount if not already set
default_mount_name = mounts_data.get(ATTR_DEFAULT_BACKUP_MOUNT)
if (
default_mount_name
and default_mount_name in self.sys_mounts
and self.sys_mounts.default_backup_mount is None
):
self.sys_mounts.default_backup_mount = self.sys_mounts.get(
default_mount_name
)
_LOGGER.info("Restored default backup mount: %s", default_mount_name)
# Save mount configuration to disk
await self.sys_mounts.save_data()
# Restore Docker registry configurations
if docker_data:
try:
docker_data = SCHEMA_DOCKER_CONFIG(docker_data)
except vol.Invalid as err:
_LOGGER.warning("Invalid docker data in supervisor config: %s", err)
success = False
docker_data = None
if docker_data:
registries = docker_data.get(ATTR_REGISTRIES, {})
if registries:
self.sys_docker.config.registries.update(registries)
await self.sys_docker.config.save_data()
_LOGGER.info(
"Restored %d docker registry configuration(s)", len(registries)
)
return (success, mount_tasks)
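The `_load_supervisor_data()` read path above treats a missing member as "no data for that section" rather than an error, relying on `getmember()` raising `KeyError`. A self-contained sketch of that tolerant read (member names mirror the ones used above; the helper is illustrative):

```python
import io
import json
import tarfile

# Archive containing only docker.json; mounts.json is deliberately absent.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    raw = json.dumps({"registries": {}}).encode("utf-8")
    info = tarfile.TarInfo(name="docker.json")
    info.size = len(raw)
    tar.addfile(info, io.BytesIO(raw))

def read_member(tar: tarfile.TarFile, name: str):
    """Return the parsed JSON member, or None when it is absent."""
    try:
        member = tar.getmember(name)
    except KeyError:
        return None  # member not in this backup
    file_obj = tar.extractfile(member)
    return json.loads(file_obj.read().decode("utf-8")) if file_obj else None

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    assert read_member(tar, "mounts.json") is None
    assert read_member(tar, "docker.json") == {"registries": {}}
```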


@@ -3,9 +3,14 @@
from enum import StrEnum
from typing import Literal
from awesomeversion import AwesomeVersion
from ..mounts.mount import Mount
BUF_SIZE = 2**20 * 4 # 4MB
SECURETAR_CREATE_VERSION = 2
SECURETAR_V3_CREATE_VERSION = 3
CORE_SECURETAR_V3_MIN_VERSION: AwesomeVersion = AwesomeVersion("2026.3.0")
DEFAULT_FREEZE_TIMEOUT = 600
LOCATION_CLOUD_BACKUP = ".cloud_backup"
@@ -27,6 +32,7 @@ class BackupJobStage(StrEnum):
FINISHING_FILE = "finishing_file"
FOLDERS = "folders"
HOME_ASSISTANT = "home_assistant"
SUPERVISOR_CONFIG = "supervisor_config"
COPY_ADDITONAL_LOCATIONS = "copy_additional_locations"
AWAIT_ADDON_RESTARTS = "await_addon_restarts"
@@ -40,4 +46,5 @@ class RestoreJobStage(StrEnum):
AWAIT_HOME_ASSISTANT_RESTART = "await_home_assistant_restart"
FOLDERS = "folders"
HOME_ASSISTANT = "home_assistant"
SUPERVISOR_CONFIG = "supervisor_config"
REMOVE_DELTA_ADDONS = "remove_delta_addons"


@@ -10,7 +10,7 @@ from pathlib import Path
from shutil import copy
from typing import cast
from ..addons.addon import Addon
from ..addons.addon import App
from ..const import (
ATTR_DAYS_UNTIL_STALE,
FILE_HASSIO_BACKUPS,
@@ -210,13 +210,11 @@ class BackupManager(FileConfiguration, JobGroup):
try:
return await self.sys_run_in_executor(find_backups)
except OSError as err:
if err.errno == errno.EBADMSG and path in {
if path in {
self.sys_config.path_backup,
self.sys_config.path_core_backup,
}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
return []
@@ -365,13 +363,8 @@ class BackupManager(FileConfiguration, JobGroup):
) from err
except OSError as err:
msg = f"Could delete backup at {backup_tarfile.as_posix()}: {err!s}"
if err.errno == errno.EBADMSG and location in {
None,
LOCATION_CLOUD_BACKUP,
}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
if location in {None, LOCATION_CLOUD_BACKUP}:
self.sys_resolution.check_oserror(err)
raise BackupError(msg, _LOGGER.error) from err
# If backup has been removed from all locations, remove it from cache
@@ -403,12 +396,10 @@ class BackupManager(FileConfiguration, JobGroup):
return (location_name, Path(path))
except OSError as err:
msg = f"Could not copy backup to {location_name} due to: {err!s}"
if err.errno == errno.EBADMSG and location in {
LOCATION_CLOUD_BACKUP,
None,
}:
raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
if location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.check_oserror(err)
if err.errno == errno.EBADMSG:
raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
raise BackupError(msg, _LOGGER.error) from err
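The hunks above replace the repeated inline `err.errno == errno.EBADMSG` checks with a single `check_oserror` call on the resolution module. The underlying test is just an errno comparison; a hypothetical standalone helper showing the condition being centralized:

```python
import errno

def is_bad_message(err: OSError) -> bool:
    """Return True when the OSError signals EBADMSG, the data-disk
    corruption condition the Supervisor flags as unhealthy.
    (Illustrative stand-in for sys_resolution.check_oserror.)"""
    return err.errno == errno.EBADMSG

assert is_bad_message(OSError(errno.EBADMSG, "Bad message"))
assert not is_bad_message(OSError(errno.ENOENT, "No such file"))
```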
@Job(name="backup_copy_to_additional_locations", cleanup=False)
@@ -468,10 +459,8 @@ class BackupManager(FileConfiguration, JobGroup):
try:
await self.sys_run_in_executor(backup.tarfile.rename, tar_file)
except OSError as err:
if err.errno == errno.EBADMSG and location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
if location in {LOCATION_CLOUD_BACKUP, None}:
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't move backup file to storage: %s", err)
return None
@@ -513,7 +502,7 @@ class BackupManager(FileConfiguration, JobGroup):
async def _do_backup(
self,
backup: Backup,
addon_list: list[Addon],
app_list: list[App],
folder_list: list[str],
homeassistant: bool,
homeassistant_exclude_database: bool | None,
@@ -524,11 +513,15 @@ class BackupManager(FileConfiguration, JobGroup):
Must be called from an existing backup job. If the backup failed, the
backup file is being deleted and None is returned.
"""
addon_start_tasks: list[Awaitable[None]] | None = None
app_start_tasks: list[Awaitable[None]] | None = None
try:
await self.sys_core.set_state(CoreState.FREEZE)
# Any exception leaving create() means the backup is incomplete
# and will be discarded (file unlinked below). Individual
# app/folder errors are captured inside store_addons/
# store_folders and do not propagate.
async with backup.create():
# HomeAssistant Folder is for v1
if homeassistant:
@@ -539,16 +532,20 @@ class BackupManager(FileConfiguration, JobGroup):
else homeassistant_exclude_database
)
# Backup add-ons
if addon_list:
# Backup apps
if app_list:
self._change_stage(BackupJobStage.ADDONS, backup)
addon_start_tasks = await backup.store_addons(addon_list)
app_start_tasks = await backup.store_apps(app_list)
# Backup folders
if folder_list:
self._change_stage(BackupJobStage.FOLDERS, backup)
await backup.store_folders(folder_list)
# Backup supervisor configuration (mounts, etc.)
self._change_stage(BackupJobStage.SUPERVISOR_CONFIG, backup)
await backup.store_supervisor_config()
self._change_stage(BackupJobStage.FINISHING_FILE, backup)
except BackupError as err:
@@ -571,10 +568,10 @@ class BackupManager(FileConfiguration, JobGroup):
self._change_stage(BackupJobStage.COPY_ADDITONAL_LOCATIONS, backup)
await self._copy_to_additional_locations(backup, additional_locations)
if addon_start_tasks:
if app_start_tasks:
self._change_stage(BackupJobStage.AWAIT_ADDON_RESTARTS, backup)
# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
await asyncio.gather(*addon_start_tasks, return_exceptions=True)
# Ignore exceptions from waiting for app startup, app errors handled elsewhere
await asyncio.gather(*app_start_tasks, return_exceptions=True)
return backup
finally:
@@ -622,7 +619,7 @@ class BackupManager(FileConfiguration, JobGroup):
_LOGGER.info("Creating new full backup with slug %s", new_backup.slug)
backup = await self._do_backup(
new_backup,
self.sys_addons.installed,
self.sys_apps.installed,
ALL_FOLDERS,
True,
homeassistant_exclude_database,
@@ -644,7 +641,7 @@ class BackupManager(FileConfiguration, JobGroup):
name: str = "",
filename: str | None = None,
*,
addons: list[str] | None = None,
apps: list[str] | None = None,
folders: list[str] | None = None,
password: str | None = None,
homeassistant: bool = False,
@@ -666,7 +663,7 @@ class BackupManager(FileConfiguration, JobGroup):
self, {JobCondition.FREE_SPACE}, "BackupManager.do_backup_partial"
)
addons = addons or []
apps = apps or []
folders = folders or []
# HomeAssistant Folder is for v1
@@ -674,7 +671,7 @@ class BackupManager(FileConfiguration, JobGroup):
folders.remove(FOLDER_HOMEASSISTANT)
homeassistant = True
if len(addons) == 0 and len(folders) == 0 and not homeassistant:
if len(apps) == 0 and len(folders) == 0 and not homeassistant:
_LOGGER.error("Nothing to create backup for")
new_backup = self._create_backup(
@@ -682,13 +679,13 @@ class BackupManager(FileConfiguration, JobGroup):
)
_LOGGER.info("Creating new partial backup with slug %s", new_backup.slug)
addon_list = []
for addon_slug in addons:
addon = self.sys_addons.get(addon_slug)
if addon and addon.is_installed:
addon_list.append(cast(Addon, addon))
app_list = []
for app_slug in apps:
app = self.sys_apps.get(app_slug)
if app and app.is_installed:
app_list.append(cast(App, app))
continue
_LOGGER.warning("Add-on %s not found/installed", addon_slug)
_LOGGER.warning("App %s not found/installed", app_slug)
# If being run in the background, notify caller that validation has completed
if validation_complete:
@@ -696,7 +693,7 @@ class BackupManager(FileConfiguration, JobGroup):
backup = await self._do_backup(
new_backup,
addon_list,
app_list,
folders,
homeassistant,
homeassistant_exclude_database,
@@ -709,7 +706,7 @@ class BackupManager(FileConfiguration, JobGroup):
async def _do_restore(
self,
backup: Backup,
addon_list: list[str],
app_list: list[str],
folder_list: list[str],
homeassistant: bool,
replace: bool,
@@ -719,7 +716,7 @@ class BackupManager(FileConfiguration, JobGroup):
Must be called from an existing restore job.
"""
addon_start_tasks: list[Awaitable[None]] | None = None
app_start_tasks: list[Awaitable[None]] | None = None
success = True
try:
@@ -735,21 +732,29 @@ class BackupManager(FileConfiguration, JobGroup):
self._change_stage(RestoreJobStage.HOME_ASSISTANT, backup)
task_hass = await backup.restore_homeassistant()
# Delete delta add-ons
# Delete delta apps
if replace:
self._change_stage(RestoreJobStage.REMOVE_DELTA_ADDONS, backup)
success = success and await backup.remove_delta_addons()
success = success and await backup.remove_delta_apps()
if addon_list:
if app_list:
self._change_stage(RestoreJobStage.ADDON_REPOSITORIES, backup)
await backup.restore_repositories(replace)
self._change_stage(RestoreJobStage.ADDONS, backup)
restore_success, addon_start_tasks = await backup.restore_addons(
addon_list
restore_success, app_start_tasks = await backup.restore_apps(
app_list
)
success = success and restore_success
# Restore supervisor configuration (mounts, etc.)
self._change_stage(RestoreJobStage.SUPERVISOR_CONFIG, backup)
(
mount_success,
mount_tasks,
) = await backup.restore_supervisor_config()
success = success and mount_success
# Wait for Home Assistant Core update/downgrade
if task_hass:
await task_hass
@@ -762,14 +767,20 @@ class BackupManager(FileConfiguration, JobGroup):
f"Restore {backup.slug} error, see supervisor logs"
) from err
else:
if addon_start_tasks:
if app_start_tasks:
self._change_stage(RestoreJobStage.AWAIT_ADDON_RESTARTS, backup)
# Failure to resume addons post restore is still a restore failure
if any(
await asyncio.gather(*addon_start_tasks, return_exceptions=True)
):
# Failure to resume apps post restore is still a restore failure
if any(await asyncio.gather(*app_start_tasks, return_exceptions=True)):
return False
# Wait for mount activations (failures don't affect restore success
# since config was already saved)
if mount_tasks:
results = await asyncio.gather(*mount_tasks, return_exceptions=True)
for result in results:
if isinstance(result, Exception):
_LOGGER.warning("Mount activation error: %s", result)
return success
finally:
# Leave Home Assistant alone if it wasn't part of the restore
@@ -858,12 +869,12 @@ class BackupManager(FileConfiguration, JobGroup):
await self.sys_core.set_state(CoreState.FREEZE)
try:
# Stop Home-Assistant / Add-ons
# Stop Home-Assistant / Apps
await self.sys_core.shutdown(remove_homeassistant_container=True)
success = await self._do_restore(
backup,
backup.addon_list,
backup.app_list,
backup.folders,
homeassistant=True,
replace=True,
@@ -894,7 +905,7 @@ class BackupManager(FileConfiguration, JobGroup):
backup: Backup,
*,
homeassistant: bool = False,
addons: list[str] | None = None,
apps: list[str] | None = None,
folders: list[str] | None = None,
password: str | None = None,
location: str | None | type[DEFAULT] = DEFAULT,
@@ -904,7 +915,7 @@ class BackupManager(FileConfiguration, JobGroup):
# Add backup ID to job
self.sys_jobs.current.reference = backup.slug
addon_list = addons or []
app_list = apps or []
folder_list = folders or []
# Version 1
@@ -936,7 +947,7 @@ class BackupManager(FileConfiguration, JobGroup):
try:
success = await self._do_restore(
backup,
addon_list,
app_list,
folder_list,
homeassistant=homeassistant,
replace=False,
@@ -959,27 +970,27 @@ class BackupManager(FileConfiguration, JobGroup):
"""Freeze system to prepare for an external backup such as an image snapshot."""
await self.sys_core.set_state(CoreState.FREEZE)
# Determine running addons
installed = self.sys_addons.installed.copy()
# Determine running apps
installed = self.sys_apps.installed.copy()
is_running: list[bool] = await asyncio.gather(
*[addon.is_running() for addon in installed]
*[app.is_running() for app in installed]
)
running_addons = [
running_apps = [
installed[ind] for ind in range(len(installed)) if is_running[ind]
]
# Create thaw task first to ensure we eventually undo freezes even if the below fails
self._thaw_task = asyncio.shield(
self.sys_create_task(self._thaw_all(running_addons, timeout))
self.sys_create_task(self._thaw_all(running_apps, timeout))
)
# Tell Home Assistant to freeze for a backup
self._change_stage(BackupJobStage.HOME_ASSISTANT)
await self.sys_homeassistant.begin_backup()
# Run all pre-backup tasks for addons
# Run all pre-backup tasks for apps
self._change_stage(BackupJobStage.ADDONS)
await asyncio.gather(*[addon.begin_backup() for addon in running_addons])
await asyncio.gather(*[app.begin_backup() for app in running_apps])
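The freeze path above fans out `is_running()` across all installed apps with `asyncio.gather` and then pairs the results back by index. A small runnable sketch of that gather-and-filter pattern (the coroutine and slugs are illustrative):

```python
import asyncio

async def is_running(slug: str) -> bool:
    # Stand-in for app.is_running(); pretend only "core_ssh" runs.
    return slug == "core_ssh"

async def running_apps() -> list[str]:
    installed = ["core_ssh", "core_mosquitto"]
    # gather preserves argument order, so results line up with installed.
    flags = await asyncio.gather(*(is_running(s) for s in installed))
    return [slug for slug, running in zip(installed, flags) if running]

assert asyncio.run(running_apps()) == ["core_ssh"]
```

The Supervisor code indexes `installed[ind]` explicitly; `zip` expresses the same pairing since `gather` returns results in submission order.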
@Job(
name="backup_manager_thaw_all",
@@ -987,7 +998,7 @@ class BackupManager(FileConfiguration, JobGroup):
on_condition=BackupJobError,
)
async def _thaw_all(
self, running_addons: list[Addon], timeout: float = DEFAULT_FREEZE_TIMEOUT
self, running_apps: list[App], timeout: float = DEFAULT_FREEZE_TIMEOUT
) -> None:
"""Thaw system after user signal or timeout."""
try:
@@ -1002,10 +1013,10 @@ class BackupManager(FileConfiguration, JobGroup):
await self.sys_homeassistant.end_backup()
self._change_stage(BackupJobStage.ADDONS)
addon_start_tasks: list[asyncio.Task] = [
app_start_tasks: list[asyncio.Task] = [
task
for task in await asyncio.gather(
*[addon.end_backup() for addon in running_addons]
*[app.end_backup() for app in running_apps]
)
if task
]
@@ -1014,9 +1025,9 @@ class BackupManager(FileConfiguration, JobGroup):
self._thaw_event.clear()
self._thaw_task = None
if addon_start_tasks:
if app_start_tasks:
self._change_stage(BackupJobStage.AWAIT_ADDON_RESTARTS)
await asyncio.gather(*addon_start_tasks, return_exceptions=True)
await asyncio.gather(*app_start_tasks, return_exceptions=True)
@Job(
name="backup_manager_signal_thaw",


@@ -1,4 +1,4 @@
"""Util add-on functions."""
"""Util app functions."""
import hashlib
import re


@@ -11,10 +11,8 @@ from ..backups.const import BackupType
from ..const import (
ATTR_ADDONS,
ATTR_COMPRESSED,
ATTR_CRYPTO,
ATTR_DATE,
ATTR_DAYS_UNTIL_STALE,
ATTR_DOCKER,
ATTR_EXCLUDE_DATABASE,
ATTR_EXTRA,
ATTR_FOLDERS,
@@ -27,7 +25,6 @@ from ..const import (
ATTR_SUPERVISOR_VERSION,
ATTR_TYPE,
ATTR_VERSION,
CRYPTO_AES128,
FOLDER_ADDONS,
FOLDER_HOMEASSISTANT,
FOLDER_MEDIA,
@@ -35,7 +32,7 @@ from ..const import (
FOLDER_SSL,
)
from ..store.validate import repositories
from ..validate import SCHEMA_DOCKER_CONFIG, version_tag
from ..validate import version_tag
ALL_FOLDERS = [
FOLDER_SHARE,
@@ -45,13 +42,13 @@ ALL_FOLDERS = [
]
def unique_addons(addons_list):
"""Validate that an add-on is unique."""
single = {addon[ATTR_SLUG] for addon in addons_list}
def unique_apps(apps_list):
"""Validate that an app is unique."""
single = {app[ATTR_SLUG] for app in apps_list}
if len(single) != len(addons_list):
raise vol.Invalid("Invalid addon list in backup!") from None
return addons_list
if len(single) != len(apps_list):
raise vol.Invalid("Invalid app list in backup!") from None
return apps_list
def v1_homeassistant(
@@ -98,7 +95,7 @@ SCHEMA_BACKUP = vol.Schema(
vol.Optional(ATTR_PROTECTED, default=False): vol.All(
v1_protected, vol.Boolean()
),
vol.Optional(ATTR_CRYPTO, default=None): vol.Maybe(CRYPTO_AES128),
vol.Remove("crypto"): vol.Maybe("aes128"),
vol.Optional(ATTR_HOMEASSISTANT, default=None): vol.All(
v1_homeassistant,
vol.Maybe(
@@ -114,7 +111,6 @@ SCHEMA_BACKUP = vol.Schema(
)
),
),
vol.Optional(ATTR_DOCKER, default=dict): SCHEMA_DOCKER_CONFIG,
vol.Optional(ATTR_FOLDERS, default=list): vol.All(
v1_folderlist, [vol.In(ALL_FOLDERS)], vol.Unique()
),
@@ -130,7 +126,7 @@ SCHEMA_BACKUP = vol.Schema(
extra=vol.REMOVE_EXTRA,
)
],
unique_addons,
unique_apps,
),
vol.Optional(ATTR_REPOSITORIES, default=list): repositories,
vol.Optional(ATTR_EXTRA, default=dict): dict,


@@ -7,11 +7,12 @@ from importlib import import_module
import logging
import os
import signal
import threading
import warnings
from colorlog import ColoredFormatter
from .addons.manager import AddonManager
from .addons.manager import AppManager
from .api import RestAPI
from .arch import CpuArchManager
from .auth import Auth
@@ -77,7 +78,7 @@ async def initialize_coresys() -> CoreSys:
coresys.api = RestAPI(coresys)
coresys.supervisor = Supervisor(coresys)
coresys.homeassistant = await HomeAssistant(coresys).load_config()
coresys.addons = await AddonManager(coresys).load_config()
coresys.apps = await AppManager(coresys).load_config()
coresys.backups = await BackupManager(coresys).load_config()
coresys.host = await HostManager(coresys).post_init()
coresys.hardware = await HardwareManager.create(coresys)
@@ -129,26 +130,26 @@ def initialize_system(coresys: CoreSys) -> None:
_LOGGER.debug("Creating Supervisor SSL/TLS folder at '%s'", config.path_ssl)
config.path_ssl.mkdir()
# Supervisor addon data folder
if not config.path_addons_data.is_dir():
# Supervisor app data folder
if not config.path_apps_data.is_dir():
_LOGGER.debug(
"Creating Supervisor Add-on data folder at '%s'", config.path_addons_data
"Creating Supervisor app data folder at '%s'", config.path_apps_data
)
config.path_addons_data.mkdir(parents=True)
config.path_apps_data.mkdir(parents=True)
if not config.path_addons_local.is_dir():
if not config.path_apps_local.is_dir():
_LOGGER.debug(
"Creating Supervisor Add-on local repository folder at '%s'",
config.path_addons_local,
"Creating Supervisor app local repository folder at '%s'",
config.path_apps_local,
)
config.path_addons_local.mkdir(parents=True)
config.path_apps_local.mkdir(parents=True)
if not config.path_addons_git.is_dir():
if not config.path_apps_git.is_dir():
_LOGGER.debug(
"Creating Supervisor Add-on git repositories folder at '%s'",
config.path_addons_git,
"Creating Supervisor app git repositories folder at '%s'",
config.path_apps_git,
)
config.path_addons_git.mkdir(parents=True)
config.path_apps_git.mkdir(parents=True)
# Supervisor tmp folder
if not config.path_tmp.is_dir():
@@ -218,13 +219,13 @@ def initialize_system(coresys: CoreSys) -> None:
)
config.path_emergency.mkdir()
# Addon Configs folder
if not config.path_addon_configs.is_dir():
# App Configs folder
if not config.path_app_configs.is_dir():
_LOGGER.debug(
"Creating Supervisor add-on configs folder at '%s'",
config.path_addon_configs,
"Creating Supervisor app configs folder at '%s'",
config.path_app_configs,
)
config.path_addon_configs.mkdir()
config.path_app_configs.mkdir()
if not config.path_cid_files.is_dir():
_LOGGER.debug("Creating Docker cidfiles folder at '%s'", config.path_cid_files)
@@ -235,6 +236,11 @@ def warning_handler(message, category, filename, lineno, file=None, line=None):
"""Warning handler which logs warnings using the logging module."""
_LOGGER.warning("%s:%s: %s: %s", filename, lineno, category.__name__, message)
if isinstance(message, Exception):
# Don't capture warnings originating from Sentry SDK threads to
# avoid a feedback loop: sending an event can trigger urllib3
# warnings which would be captured and sent as new events.
if threading.current_thread().name.startswith("sentry-sdk."):
return
capture_exception(message)
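The thread-name guard added above breaks a feedback loop: forwarding a warning to the error reporter could itself emit warnings from the reporter's own worker threads. A sketch of the handler shape, with a local list standing in for `capture_exception` (that substitution is the only assumption here):

```python
import logging
import threading
import warnings

_LOGGER = logging.getLogger(__name__)
captured: list[Warning] = []

def warning_handler(message, category, filename, lineno, file=None, line=None):
    """Log Python warnings; skip ones raised inside reporter threads."""
    _LOGGER.warning("%s:%s: %s: %s", filename, lineno, category.__name__, message)
    if isinstance(message, Exception):
        # Same guard as above: sentry-sdk names its worker threads
        # "sentry-sdk.<...>", so warnings from them are not re-captured.
        if threading.current_thread().name.startswith("sentry-sdk."):
            return
        captured.append(message)

warnings.simplefilter("always")
warnings.showwarning = warning_handler
warnings.warn(UserWarning("something deprecated"))

assert len(captured) == 1 and isinstance(captured[0], UserWarning)
```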


@@ -9,12 +9,13 @@ from pathlib import Path, PurePath
from awesomeversion import AwesomeVersion
from .const import (
ATTR_ADDONS_CUSTOM_LIST,
ATTR_APPS_CUSTOM_LIST,
ATTR_COUNTRY,
ATTR_DEBUG,
ATTR_DEBUG_BLOCK,
ATTR_DETECT_BLOCKING_IO,
ATTR_DIAGNOSTICS,
ATTR_FEATURE_FLAGS,
ATTR_IMAGE,
ATTR_LAST_BOOT,
ATTR_LOGGING,
@@ -24,6 +25,7 @@ from .const import (
ENV_SUPERVISOR_SHARE,
FILE_HASSIO_CONFIG,
SUPERVISOR_DATA,
FeatureFlag,
LogLevel,
)
from .utils.common import FileConfiguration
@@ -195,6 +197,17 @@ class CoreConfig(FileConfiguration):
lvl = getattr(logging, self.logging.value.upper())
logging.getLogger("supervisor").setLevel(lvl)
@property
def feature_flags(self) -> dict[FeatureFlag, bool]:
"""Return current state of explicitly configured experimental feature flags."""
return self._data.get(ATTR_FEATURE_FLAGS, {})
def set_feature_flag(self, feature: FeatureFlag, enabled: bool) -> None:
"""Enable or disable an experimental feature flag."""
if ATTR_FEATURE_FLAGS not in self._data:
self._data[ATTR_FEATURE_FLAGS] = {}
self._data[ATTR_FEATURE_FLAGS][feature] = enabled
@property
def last_boot(self) -> datetime:
"""Return last boot datetime."""
@@ -241,43 +254,43 @@ class CoreConfig(FileConfiguration):
return self.path_supervisor / HASSIO_SSL
@property
def path_addons_core(self) -> Path:
"""Return git path for core Add-ons."""
def path_apps_core(self) -> Path:
"""Return git path for core Apps."""
return self.path_supervisor / ADDONS_CORE
@property
def path_addons_git(self) -> Path:
"""Return path for Git Add-on."""
def path_apps_git(self) -> Path:
"""Return path for Git App."""
return self.path_supervisor / ADDONS_GIT
@property
def path_addons_local(self) -> Path:
"""Return path for custom Add-ons."""
def path_apps_local(self) -> Path:
"""Return path for custom Apps."""
return self.path_supervisor / ADDONS_LOCAL
@property
def path_extern_addons_local(self) -> PurePath:
"""Return path for custom Add-ons."""
def path_extern_apps_local(self) -> PurePath:
"""Return path for custom Apps."""
return PurePath(self.path_extern_supervisor, ADDONS_LOCAL)
@property
def path_addons_data(self) -> Path:
"""Return root Add-on data folder."""
def path_apps_data(self) -> Path:
"""Return root App data folder."""
return self.path_supervisor / ADDONS_DATA
@property
def path_extern_addons_data(self) -> PurePath:
"""Return root add-on data folder external for Docker."""
def path_extern_apps_data(self) -> PurePath:
"""Return root app data folder external for Docker."""
return PurePath(self.path_extern_supervisor, ADDONS_DATA)
@property
def path_addon_configs(self) -> Path:
"""Return root Add-on configs folder."""
def path_app_configs(self) -> Path:
"""Return root App configs folder."""
return self.path_supervisor / ADDON_CONFIGS
@property
def path_extern_addon_configs(self) -> PurePath:
"""Return root Add-on configs folder external for Docker."""
def path_extern_app_configs(self) -> PurePath:
"""Return root App configs folder external for Docker."""
return PurePath(self.path_extern_supervisor, ADDON_CONFIGS)
@property
@@ -411,23 +424,23 @@ class CoreConfig(FileConfiguration):
return PurePath(self.path_extern_supervisor, CID_FILES)
@property
def addons_repositories(self) -> list[str]:
"""Return list of custom Add-on repositories."""
return self._data[ATTR_ADDONS_CUSTOM_LIST]
def apps_repositories(self) -> list[str]:
"""Return list of custom App repositories."""
return self._data[ATTR_APPS_CUSTOM_LIST]
def add_addon_repository(self, repo: str) -> None:
def add_app_repository(self, repo: str) -> None:
"""Add a custom repository to list."""
if repo in self._data[ATTR_ADDONS_CUSTOM_LIST]:
if repo in self._data[ATTR_APPS_CUSTOM_LIST]:
return
self._data[ATTR_ADDONS_CUSTOM_LIST].append(repo)
self._data[ATTR_APPS_CUSTOM_LIST].append(repo)
def drop_addon_repository(self, repo: str) -> None:
def drop_app_repository(self, repo: str) -> None:
"""Remove a custom repository from list."""
if repo not in self._data[ATTR_ADDONS_CUSTOM_LIST]:
if repo not in self._data[ATTR_APPS_CUSTOM_LIST]:
return
self._data[ATTR_ADDONS_CUSTOM_LIST].remove(repo)
self._data[ATTR_APPS_CUSTOM_LIST].remove(repo)
def local_to_extern_path(self, path: PurePath) -> PurePath:
"""Translate a path relative to supervisor data in the container to its extern path."""

View File

@@ -1,11 +1,12 @@
"""Constants file for Supervisor."""
from collections.abc import Mapping
from dataclasses import dataclass
from enum import StrEnum
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import NotRequired, Self, TypedDict
from typing import Any, NotRequired, Self, TypedDict
from aiohttp import __version__ as aiohttpversion
@@ -38,9 +39,10 @@ FILE_HASSIO_SECURITY = Path(SUPERVISOR_DATA, "security.json")
FILE_SUFFIX_CONFIGURATION = [".yaml", ".yml", ".json"]
MACHINE_ID = Path("/etc/machine-id")
RUN_SUPERVISOR_STATE = Path("/run/supervisor")
SOCKET_CORE = Path("/run/os/core.sock")
SOCKET_DBUS = Path("/run/dbus/system_bus_socket")
SOCKET_DOCKER = Path("/run/docker.sock")
RUN_SUPERVISOR_STATE = Path("/run/supervisor")
SYSTEMD_JOURNAL_PERSISTENT = Path("/var/log/journal")
SYSTEMD_JOURNAL_VOLATILE = Path("/run/log/journal")
@@ -64,11 +66,15 @@ DOCKER_CPU_RUNTIME_ALLOCATION = int(DOCKER_CPU_RUNTIME_TOTAL / 5)
DNS_SUFFIX = "local.hass.io"
LABEL_ARCH = "io.hass.arch"
LABEL_DESCRIPTION = "io.hass.description"
LABEL_MACHINE = "io.hass.machine"
LABEL_NAME = "io.hass.name"
LABEL_TYPE = "io.hass.type"
LABEL_URL = "io.hass.url"
LABEL_VERSION = "io.hass.version"
META_ADDON = "addon"
META_ADDON = "addon" # legacy label for app
META_APP = "app"
META_HOMEASSISTANT = "homeassistant"
META_SUPERVISOR = "supervisor"
@@ -101,10 +107,11 @@ ATTR_ACCESS_TOKEN = "access_token"
ATTR_ACCESSPOINTS = "accesspoints"
ATTR_ACTIVE = "active"
ATTR_ACTIVITY_LED = "activity_led"
ATTR_ADDON = "addon"
ATTR_ADDONS = "addons"
ATTR_ADDONS_CUSTOM_LIST = "addons_custom_list"
ATTR_ADDONS_REPOSITORIES = "addons_repositories"
ATTR_APP = "addon"
ATTR_APPS = "apps"
ATTR_APPS_CUSTOM_LIST = "addons_custom_list"
ATTR_APPS_REPOSITORIES = "addons_repositories"
ATTR_ADDR_GEN_MODE = "addr_gen_mode"
ATTR_ADDRESS = "address"
ATTR_ADDRESS_DATA = "address-data"
@@ -152,7 +159,6 @@ ATTR_CONTENT_TRUST = "content_trust"
ATTR_COUNTRY = "country"
ATTR_CPE = "cpe"
ATTR_CPU_PERCENT = "cpu_percent"
ATTR_CRYPTO = "crypto"
ATTR_DATA = "data"
ATTR_DATE = "date"
ATTR_DAYS_UNTIL_STALE = "days_until_stale"
@@ -188,6 +194,7 @@ ATTR_ENVIRONMENT = "environment"
ATTR_EVENT = "event"
ATTR_EXCLUDE_DATABASE = "exclude_database"
ATTR_EXTRA = "extra"
ATTR_FEATURE_FLAGS = "feature_flags"
ATTR_FEATURES = "features"
ATTR_FIELDS = "fields"
ATTR_FILENAME = "filename"
@@ -387,7 +394,20 @@ ARCH_AARCH64 = "aarch64"
ARCH_AMD64 = "amd64"
ARCH_I386 = "i386"
ARCH_ALL = [ARCH_ARMHF, ARCH_ARMV7, ARCH_AARCH64, ARCH_AMD64, ARCH_I386]
ARCH_ALL = [ARCH_AARCH64, ARCH_AMD64]
ARCH_DEPRECATED = [ARCH_ARMHF, ARCH_ARMV7, ARCH_I386]
ARCH_ALL_COMPAT = ARCH_ALL + ARCH_DEPRECATED
MACHINE_DEPRECATED = [
"odroid-xu",
"qemuarm",
"qemux86",
"raspberrypi",
"raspberrypi2",
"raspberrypi3",
"raspberrypi4",
"tinker",
]
REPOSITORY_CORE = "core"
REPOSITORY_LOCAL = "local"
@@ -398,8 +418,6 @@ FOLDER_ADDONS = "addons/local"
FOLDER_SSL = "ssl"
FOLDER_MEDIA = "media"
CRYPTO_AES128 = "aes128"
SECURITY_PROFILE = "profile"
SECURITY_DEFAULT = "default"
SECURITY_DISABLE = "disable"
@@ -418,16 +436,16 @@ OBSERVER_PORT = 4357
DEFAULT_CHUNK_SIZE = 2**16 # 64KiB
class AddonBootConfig(StrEnum):
"""Boot mode config for the add-on."""
class AppBootConfig(StrEnum):
"""Boot mode config for the app."""
AUTO = "auto"
MANUAL = "manual"
MANUAL_ONLY = "manual_only"
class AddonBoot(StrEnum):
"""Boot mode for the add-on."""
class AppBoot(StrEnum):
"""Boot mode for the app."""
AUTO = "auto"
MANUAL = "manual"
@@ -435,15 +453,15 @@ class AddonBoot(StrEnum):
@classmethod
def _missing_(cls, value: object) -> Self | None:
"""Convert 'forced' config values to their counterpart."""
if value == AddonBootConfig.MANUAL_ONLY:
if value == AppBootConfig.MANUAL_ONLY:
for member in cls:
if member == AddonBoot.MANUAL:
if member == AppBoot.MANUAL:
return member
return None
class AddonStartup(StrEnum):
"""Startup types of Add-on."""
class AppStartup(StrEnum):
"""Startup types of App."""
INITIALIZE = "initialize"
SYSTEM = "system"
@@ -452,16 +470,16 @@ class AddonStartup(StrEnum):
ONCE = "once"
class AddonStage(StrEnum):
"""Stage types of add-on."""
class AppStage(StrEnum):
"""Stage types of app."""
STABLE = "stable"
EXPERIMENTAL = "experimental"
DEPRECATED = "deprecated"
class AddonState(StrEnum):
"""State of add-on."""
class AppState(StrEnum):
"""State of app."""
STARTUP = "startup"
STARTED = "started"
@@ -529,67 +547,88 @@ class BusEvent(StrEnum):
class CpuArch(StrEnum):
"""Supported CPU architectures."""
ARMV7 = "armv7"
ARMHF = "armhf"
AARCH64 = "aarch64"
I386 = "i386"
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
class FeatureFlag(StrEnum):
"""Development features that can be toggled."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
SUPERVISOR_V2_API = "supervisor_v2_api"
UNIX_SOCKET_CORE_API = "unix_socket_core_api"
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
class HomeAssistantUser:
"""A Home Assistant Core user.
Incomplete model — Core's User object has additional fields
(credentials, refresh_tokens, etc.) that are not represented here.
Only fields used by the Supervisor are included.
"""
id: str
display_name: str | None = None
username: str | None = None
def to_dict(self) -> IngressSessionDataUserDict:
"""Get dictionary representation."""
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
name: str | None = None
is_owner: bool = False
is_active: bool = False
local_only: bool = False
system_generated: bool = False
group_ids: list[str] | None = None
@classmethod
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
# "displayname" is a legacy key from old ingress session data
name=data.get("name") or data.get("displayname"),
is_owner=data.get("is_owner", False),
is_active=data.get("is_active", False),
local_only=data.get("local_only", False),
system_generated=data.get("system_generated", False),
group_ids=data.get("group_ids"),
)
class IngressSessionDataUserDict(TypedDict):
"""Serialization format for user data stored in ingress sessions.
Legacy data may contain "displayname" instead of "name".
"""
id: str
username: NotRequired[str | None]
name: NotRequired[str | None]
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
"""Serialization format for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
"""Ingress session data attached to a session token."""
user: IngressSessionDataUser
user: HomeAssistantUser
def to_dict(self) -> IngressSessionDataDict:
"""Get dictionary representation."""
return IngressSessionDataDict(user=self.user.to_dict())
return IngressSessionDataDict(
user=IngressSessionDataUserDict(
id=self.user.id,
name=self.user.name,
username=self.user.username,
)
)
@classmethod
def from_dict(cls, data: IngressSessionDataDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
return cls(user=HomeAssistantUser.from_dict(data["user"]))
STARTING_STATES = [

View File

@@ -11,11 +11,12 @@ from .const import (
ATTR_STARTUP,
RUN_SUPERVISOR_STATE,
STARTING_STATES,
AddonStartup,
AppStartup,
BusEvent,
CoreState,
)
from .coresys import CoreSys, CoreSysAttributes
from .dbus.const import StopUnitMode, UnitActiveState
from .exceptions import (
HassioError,
HomeAssistantCrashError,
@@ -168,8 +169,8 @@ class Core(CoreSysAttributes):
self.sys_arch.load(),
# Load Stores
self.sys_store.load(),
# Load Add-ons
self.sys_addons.load(),
# Load Apps
self.sys_apps.load(),
# load last available data
self.sys_backups.load(),
# load services
@@ -234,8 +235,8 @@ class Core(CoreSysAttributes):
return
try:
# Start addon mark as initialize
await self.sys_addons.boot(AddonStartup.INITIALIZE)
# Start apps marked as initialize
await self.sys_apps.boot(AppStartup.INITIALIZE)
# HomeAssistant is already running, only Supervisor restarted
if await self.sys_hardware.helper.last_boot() == self.sys_config.last_boot:
@@ -245,11 +246,11 @@ class Core(CoreSysAttributes):
# reset register services / discovery
await self.sys_services.reset()
# start addon mark as system
await self.sys_addons.boot(AddonStartup.SYSTEM)
# Start apps marked as system
await self.sys_apps.boot(AppStartup.SYSTEM)
# start addon mark as services
await self.sys_addons.boot(AddonStartup.SERVICES)
# Start apps marked as services
await self.sys_apps.boot(AppStartup.SERVICES)
# run HomeAssistant
if (
@@ -278,8 +279,8 @@ class Core(CoreSysAttributes):
suggestions=[SuggestionType.EXECUTE_REPAIR],
)
# start addon mark as application
await self.sys_addons.boot(AddonStartup.APPLICATION)
# Start apps marked as application
await self.sys_apps.boot(AppStartup.APPLICATION)
# store new last boot
await self._update_last_boot()
@@ -337,6 +338,7 @@ class Core(CoreSysAttributes):
self.sys_create_task(coro)
for coro in (
self.sys_websession.close(),
self.sys_homeassistant.api.close(),
self.sys_ingress.unload(),
self.sys_hardware.unload(),
self.sys_dbus.unload(),
@@ -356,8 +358,8 @@ class Core(CoreSysAttributes):
if self.state == CoreState.RUNNING:
await self.set_state(CoreState.SHUTDOWN)
# Shutdown Application Add-ons, using Home Assistant API
await self.sys_addons.shutdown(AddonStartup.APPLICATION)
# Shut down apps marked as application, using the Home Assistant API
await self.sys_apps.shutdown(AppStartup.APPLICATION)
# Close Home Assistant
with suppress(HassioError):
@@ -365,10 +367,10 @@ class Core(CoreSysAttributes):
remove_container=remove_homeassistant_container
)
# Shutdown System Add-ons
await self.sys_addons.shutdown(AddonStartup.SERVICES)
await self.sys_addons.shutdown(AddonStartup.SYSTEM)
await self.sys_addons.shutdown(AddonStartup.INITIALIZE)
# Shut down system apps
await self.sys_apps.shutdown(AppStartup.SERVICES)
await self.sys_apps.shutdown(AppStartup.SYSTEM)
await self.sys_apps.shutdown(AppStartup.INITIALIZE)
# Shutdown all Plugins
if self.state in (CoreState.STOPPING, CoreState.SHUTDOWN):
@@ -423,11 +425,40 @@ class Core(CoreSysAttributes):
await self.sys_host.control.set_timezone(timezone)
# Calculate if system time is out of sync
delta = data.dt_utc - utcnow()
if delta <= timedelta(days=3) or self.sys_host.info.dt_synchronized:
delta = abs(data.dt_utc - utcnow())
if delta <= timedelta(hours=1) or self.sys_host.info.dt_synchronized:
return
_LOGGER.warning("System time/date shift over more than 3 days found!")
_LOGGER.warning("System time/date shift over more than 1 hour detected!")
if self.sys_host.info.use_ntp:
# Stop timesyncd if NTP is enabled, as set_time is blocked while it runs.
# timedated rejects set_time while an NTP unit is active. We listen
# for the unit's ActiveState to become inactive before proceeding.
_LOGGER.info("Stopping systemd-timesyncd to allow manual time adjustment")
timesync_unit = await self.sys_dbus.systemd.get_unit(
"systemd-timesyncd.service"
)
try:
async with asyncio.timeout(10):
await self.sys_dbus.systemd.stop_unit(
"systemd-timesyncd.service", StopUnitMode.REPLACE
)
await timesync_unit.wait_for_active_state(
{UnitActiveState.INACTIVE}
)
except TimeoutError:
_LOGGER.warning(
"Timeout waiting for systemd-timesyncd to stop, "
"attempting time sync anyway"
)
# Create a repair issue so the user knows NTP was disabled
self.sys_resolution.create_issue(
IssueType.NTP_SYNC_FAILED,
ContextType.SYSTEM,
suggestions=[SuggestionType.ENABLE_NTP],
)
await self.sys_host.control.set_datetime(data.dt_utc)
await self.sys_supervisor.check_connectivity()
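The `abs()` added in this hunk is the substance of the drift check fix: without it, a system clock running *ahead* of the reference produces a negative delta that always satisfies the `<=` comparison, so the drift is never detected. A minimal sketch of the corrected comparison:

```python
from datetime import datetime, timedelta, timezone


def clock_out_of_sync(
    system_utc: datetime,
    reference_utc: datetime,
    tolerance: timedelta = timedelta(hours=1),
) -> bool:
    # abs() is the point of the fix: a clock running ahead of the reference
    # yields a negative raw delta, which always passed the old comparison
    return abs(system_utc - reference_utc) > tolerance


now = datetime(2026, 4, 23, 12, 0, tzinfo=timezone.utc)
```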
@@ -440,7 +471,7 @@ class Core(CoreSysAttributes):
await self.sys_plugins.repair()
# Restore core functionality
await self.sys_addons.repair()
await self.sys_apps.repair()
await self.sys_homeassistant.core.repair()
# Tag version for latest

View File

@@ -27,7 +27,7 @@ from .const import (
)
if TYPE_CHECKING:
from .addons.manager import AddonManager
from .addons.manager import AppManager
from .api import RestAPI
from .arch import CpuArchManager
from .auth import Auth
@@ -82,7 +82,7 @@ class CoreSys:
self._auth: Auth | None = None
self._homeassistant: HomeAssistant | None = None
self._supervisor: Supervisor | None = None
self._addons: AddonManager | None = None
self._apps: AppManager | None = None
self._api: RestAPI | None = None
self._updater: Updater | None = None
self._backups: BackupManager | None = None
@@ -350,18 +350,18 @@ class CoreSys:
self._updater = value
@property
def addons(self) -> AddonManager:
"""Return AddonManager object."""
if self._addons is None:
raise RuntimeError("AddonManager not set!")
return self._addons
def apps(self) -> AppManager:
"""Return AppManager object."""
if self._apps is None:
raise RuntimeError("AppManager not set!")
return self._apps
@addons.setter
def addons(self, value: AddonManager) -> None:
"""Set a AddonManager object."""
if self._addons:
raise RuntimeError("AddonManager already set!")
self._addons = value
@apps.setter
def apps(self, value: AppManager) -> None:
"""Set a AppManager object."""
if self._apps:
raise RuntimeError("AppManager already set!")
self._apps = value
@property
def store(self) -> StoreManager:
@@ -771,9 +771,9 @@ class CoreSysAttributes:
return self.coresys.updater
@property
def sys_addons(self) -> AddonManager:
"""Return AddonManager object."""
return self.coresys.addons
def sys_apps(self) -> AppManager:
"""Return AppManager object."""
return self.coresys.apps
@property
def sys_store(self) -> StoreManager:

View File

@@ -1,25 +1,17 @@
{
"raspberrypi": ["armhf"],
"raspberrypi2": ["armv7", "armhf"],
"raspberrypi3": ["armv7", "armhf"],
"raspberrypi3-64": ["aarch64", "armv7", "armhf"],
"raspberrypi4": ["armv7", "armhf"],
"raspberrypi4-64": ["aarch64", "armv7", "armhf"],
"raspberrypi5-64": ["aarch64", "armv7", "armhf"],
"yellow": ["aarch64", "armv7", "armhf"],
"green": ["aarch64", "armv7", "armhf"],
"tinker": ["armv7", "armhf"],
"odroid-c2": ["aarch64", "armv7", "armhf"],
"odroid-c4": ["aarch64", "armv7", "armhf"],
"odroid-m1": ["aarch64", "armv7", "armhf"],
"odroid-n2": ["aarch64", "armv7", "armhf"],
"odroid-xu": ["armv7", "armhf"],
"khadas-vim3": ["aarch64", "armv7", "armhf"],
"raspberrypi3-64": ["aarch64"],
"raspberrypi4-64": ["aarch64"],
"raspberrypi5-64": ["aarch64"],
"yellow": ["aarch64"],
"green": ["aarch64"],
"odroid-c2": ["aarch64"],
"odroid-c4": ["aarch64"],
"odroid-m1": ["aarch64"],
"odroid-n2": ["aarch64"],
"khadas-vim3": ["aarch64"],
"generic-aarch64": ["aarch64"],
"qemux86": ["i386"],
"qemux86-64": ["amd64", "i386"],
"qemuarm": ["armhf"],
"qemux86-64": ["amd64"],
"qemuarm-64": ["aarch64"],
"intel-nuc": ["amd64", "i386"],
"generic-x86-64": ["amd64", "i386"]
"intel-nuc": ["amd64"],
"generic-x86-64": ["amd64"]
}

View File

@@ -304,12 +304,38 @@ class DeviceType(DBusIntEnum):
UNKNOWN = 0
ETHERNET = 1
WIRELESS = 2
UNUSED1 = 3
UNUSED2 = 4
BLUETOOTH = 5
OLPC_MESH = 6
WIMAX = 7
MODEM = 8
INFINIBAND = 9
BOND = 10
VLAN = 11
ADSL = 12
BRIDGE = 13
GENERIC = 14
TEAM = 15
TUN = 16
IP_TUNNEL = 17
MAC_VLAN = 18
VXLAN = 19
VETH = 20
MACSEC = 21
DUMMY = 22
PPP = 23
OVS_INTERFACE = 24
OVS_PORT = 25
OVS_BRIDGE = 26
WPAN = 27
LOWPAN6 = 28
WIREGUARD = 29
WIFI_P2P = 30
VRF = 31
LOOPBACK = 32
HSR = 33
IPVLAN = 34
class WirelessMethodType(DBusIntEnum):

View File

@@ -32,7 +32,7 @@ class DBusStrEnum(StrEnum):
"""StrEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> "DBusStrEnum | None":
def _missing_(cls, value: object) -> DBusStrEnum | None:
if not isinstance(value, str):
return None
_report_unknown_value(cls, value)
@@ -46,7 +46,7 @@ class DBusIntEnum(IntEnum):
"""IntEnum that tolerates unknown values from D-Bus."""
@classmethod
def _missing_(cls, value: object) -> "DBusIntEnum | None":
def _missing_(cls, value: object) -> DBusIntEnum | None:
if not isinstance(value, int):
return None
_report_unknown_value(cls, value)
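The tolerant-enum pattern above routes unknown D-Bus values through `_missing_` instead of letting them raise. The hunk shows only the type guard and the report call; this sketch assumes unrecognized integers fall back to a sentinel member after being reported, which is not shown in the diff:

```python
from enum import IntEnum


class DeviceKind(IntEnum):
    """Sketch of the DBusIntEnum pattern (sentinel fallback is an assumption)."""

    UNKNOWN = 0
    ETHERNET = 1
    WIRELESS = 2

    @classmethod
    def _missing_(cls, value: object) -> "DeviceKind | None":
        if not isinstance(value, int):
            return None  # non-int values still raise ValueError
        # the real code calls _report_unknown_value(cls, value) here
        return cls.UNKNOWN
```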

View File

@@ -272,7 +272,7 @@ def get_connection_from_interface(
wireless = {
CONF_ATTR_802_WIRELESS_ASSIGNED_MAC: Variant("s", "preserve"),
CONF_ATTR_802_WIRELESS_MODE: Variant("s", "infrastructure"),
CONF_ATTR_802_WIRELESS_POWERSAVE: Variant("i", 1),
CONF_ATTR_802_WIRELESS_POWERSAVE: Variant("i", 0),
}
if interface.wifi and interface.wifi.ssid:
wireless[CONF_ATTR_802_WIRELESS_SSID] = Variant(

View File

@@ -2,6 +2,7 @@
from functools import wraps
import logging
from typing import NamedTuple
from dbus_fast import Variant
from dbus_fast.aio.message_bus import MessageBus
@@ -15,6 +16,7 @@ from ..exceptions import (
)
from ..utils.dbus import DBusSignalWrapper
from .const import (
DBUS_ATTR_ACTIVE_STATE,
DBUS_ATTR_FINISH_TIMESTAMP,
DBUS_ATTR_FIRMWARE_TIMESTAMP_MONOTONIC,
DBUS_ATTR_KERNEL_TIMESTAMP_MONOTONIC,
@@ -23,6 +25,7 @@ from .const import (
DBUS_ATTR_VIRTUALIZATION,
DBUS_ERR_SYSTEMD_NO_SUCH_UNIT,
DBUS_IFACE_SYSTEMD_MANAGER,
DBUS_IFACE_SYSTEMD_UNIT,
DBUS_NAME_SYSTEMD,
DBUS_OBJECT_SYSTEMD,
DBUS_SIGNAL_PROPERTIES_CHANGED,
@@ -36,6 +39,14 @@ from .utils import dbus_connected
_LOGGER: logging.Logger = logging.getLogger(__name__)
class ExecStartEntry(NamedTuple):
"""Systemd ExecStart entry for transient units (D-Bus type signature 'sasb')."""
binary: str
argv: list[str]
ignore_failure: bool
def systemd_errors(func):
"""Wrap systemd dbus methods to handle its specific error types."""
@@ -77,6 +88,25 @@ class SystemdUnit(DBusInterface):
"""Return signal wrapper for properties changed."""
return self.connected_dbus.signal(DBUS_SIGNAL_PROPERTIES_CHANGED)
@dbus_connected
async def wait_for_active_state(
self, target_states: set[UnitActiveState]
) -> UnitActiveState:
"""Wait for unit to reach one of the target active states.
The caller must apply a timeout (e.g. asyncio.timeout) if one is desired.
"""
async with self.properties_changed() as signal:
state = await self.get_active_state()
while state not in target_states:
interface, changed, _ = await signal.wait_for_signal()
if (
interface == DBUS_IFACE_SYSTEMD_UNIT
and DBUS_ATTR_ACTIVE_STATE in changed
):
state = UnitActiveState(changed[DBUS_ATTR_ACTIVE_STATE].value)
return state
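`wait_for_active_state` opens the signal subscription first, then reads the current state, then consumes change signals until a target state appears; subscribing before the initial read means a transition between read and subscribe cannot be missed. A generic asyncio analogue, with an `asyncio.Queue` standing in for the D-Bus signal wrapper:

```python
import asyncio
from collections.abc import Callable


async def wait_for_state(
    get_state: Callable[[], str],
    signal_queue: "asyncio.Queue[str]",
    targets: set[str],
) -> str:
    # Check the current state first, then consume change signals until
    # one of the target states is reached (mirrors the D-Bus loop above)
    state = get_state()
    while state not in targets:
        state = await signal_queue.get()
    return state


async def main() -> str:
    queue: "asyncio.Queue[str]" = asyncio.Queue()
    for s in ("deactivating", "inactive"):
        queue.put_nowait(s)
    return await wait_for_state(lambda: "active", queue, {"inactive", "failed"})


result = asyncio.run(main())
```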
class Systemd(DBusInterfaceProxy):
"""Systemd function handler.
@@ -103,6 +133,12 @@ class Systemd(DBusInterfaceProxy):
"No systemd support on the host. Host control has been disabled."
)
if self.is_connected:
try:
await self.connected_dbus.Manager.call("subscribe")
except DBusError:
_LOGGER.warning("Could not subscribe to systemd signals")
@property
@dbus_property
def startup_time(self) -> float:

View File

@@ -8,12 +8,11 @@ from typing import Any
from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..utils.dt import get_time_zone, utc_from_timestamp
from ..utils.dt import get_time_zone
from .const import (
DBUS_ATTR_LOCAL_RTC,
DBUS_ATTR_NTP,
DBUS_ATTR_NTPSYNCHRONIZED,
DBUS_ATTR_TIMEUSEC,
DBUS_ATTR_TIMEZONE,
DBUS_IFACE_TIMEDATE,
DBUS_NAME_TIMEDATE,
@@ -65,12 +64,6 @@ class TimeDate(DBusInterfaceProxy):
"""Return if NTP is synchronized."""
return self.properties[DBUS_ATTR_NTPSYNCHRONIZED]
@property
@dbus_property
def dt_utc(self) -> datetime:
"""Return the system UTC time."""
return utc_from_timestamp(self.properties[DBUS_ATTR_TIMEUSEC] / 1000000)
@property
def timezone_tzinfo(self) -> tzinfo | None:
"""Return timezone as tzinfo object."""

View File

@@ -72,7 +72,7 @@ class UDisks2Block(DBusInterfaceProxy):
@staticmethod
async def new(
object_path: str, bus: MessageBus, *, sync_properties: bool = True
) -> "UDisks2Block":
) -> UDisks2Block:
"""Create and connect object."""
obj = UDisks2Block(object_path, sync_properties=sync_properties)
await obj.connect(bus)

View File

@@ -46,7 +46,7 @@ class DeviceSpecification:
partlabel: str | None = None
@staticmethod
def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
def from_dict(data: DeviceSpecificationDataType) -> DeviceSpecification:
"""Create DeviceSpecification from dict."""
return DeviceSpecification(
path=Path(data["path"]) if "path" in data else None,
@@ -108,7 +108,7 @@ class FormatOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: FormatOptionsDataType) -> "FormatOptions":
def from_dict(data: FormatOptionsDataType) -> FormatOptions:
"""Create FormatOptions from dict."""
return FormatOptions(
label=data.get("label"),
@@ -182,7 +182,7 @@ class MountOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: MountOptionsDataType) -> "MountOptions":
def from_dict(data: MountOptionsDataType) -> MountOptions:
"""Create MountOptions from dict."""
return MountOptions(
fstype=data.get("fstype"),
@@ -226,7 +226,7 @@ class UnmountOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: UnmountOptionsDataType) -> "UnmountOptions":
def from_dict(data: UnmountOptionsDataType) -> UnmountOptions:
"""Create MountOptions from dict."""
return UnmountOptions(
force=data.get("force"),
@@ -268,7 +268,7 @@ class CreatePartitionOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: CreatePartitionOptionsDataType) -> "CreatePartitionOptions":
def from_dict(data: CreatePartitionOptionsDataType) -> CreatePartitionOptions:
"""Create CreatePartitionOptions from dict."""
return CreatePartitionOptions(
partition_type=data.get("partition-type"),
@@ -310,7 +310,7 @@ class DeletePartitionOptions:
auth_no_user_interaction: bool | None = None
@staticmethod
def from_dict(data: DeletePartitionOptionsDataType) -> "DeletePartitionOptions":
def from_dict(data: DeletePartitionOptionsDataType) -> DeletePartitionOptions:
"""Create DeletePartitionOptions from dict."""
return DeletePartitionOptions(
tear_down=data.get("tear-down"),

View File

@@ -51,7 +51,7 @@ class UDisks2Drive(DBusInterfaceProxy):
await self._reload_interfaces()
@staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2Drive":
async def new(object_path: str, bus: MessageBus) -> UDisks2Drive:
"""Create and connect object."""
obj = UDisks2Drive(object_path)
await obj.connect(bus)

View File

@@ -96,7 +96,7 @@ class UDisks2NVMeController(DBusInterfaceProxy):
super().__init__()
@staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2NVMeController":
async def new(object_path: str, bus: MessageBus) -> UDisks2NVMeController:
"""Create and connect object."""
obj = UDisks2NVMeController(object_path)
await obj.connect(bus)

View File

@@ -15,7 +15,7 @@ from ..utils.common import FileConfiguration
from .validate import SCHEMA_DISCOVERY_CONFIG
if TYPE_CHECKING:
from ..addons.addon import Addon
from ..addons.addon import App
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -71,10 +71,10 @@ class Discovery(CoreSysAttributes, FileConfiguration):
"""Return list of available discovery messages."""
return list(self.message_obj.values())
async def send(self, addon: Addon, service: str, config: dict[str, Any]) -> Message:
async def send(self, app: App, service: str, config: dict[str, Any]) -> Message:
"""Send a discovery message to Home Assistant."""
# Create message
message = Message(addon.slug, service, config)
message = Message(app.slug, service, config)
# Already exists?
for exists_msg in self.list_messages:
@@ -84,12 +84,12 @@ class Discovery(CoreSysAttributes, FileConfiguration):
message = exists_msg
message.config = config
else:
_LOGGER.debug("Duplicate discovery message from %s", addon.slug)
_LOGGER.debug("Duplicate discovery message from %s", app.slug)
return exists_msg
break
_LOGGER.info(
"Sending discovery to Home Assistant %s from %s", service, addon.slug
"Sending discovery to Home Assistant %s from %s", service, app.slug
)
self.message_obj[message.uuid] = message
await self.save()

View File

@@ -2,7 +2,7 @@
import voluptuous as vol
from ..const import ATTR_ADDON, ATTR_CONFIG, ATTR_DISCOVERY, ATTR_SERVICE, ATTR_UUID
from ..const import ATTR_APP, ATTR_CONFIG, ATTR_DISCOVERY, ATTR_SERVICE, ATTR_UUID
from ..utils.validate import schema_or
from ..validate import uuid_match
@@ -11,7 +11,7 @@ SCHEMA_DISCOVERY = vol.Schema(
vol.Schema(
{
vol.Required(ATTR_UUID): uuid_match,
vol.Required(ATTR_ADDON): str,
vol.Required(ATTR_APP): str,
vol.Required(ATTR_SERVICE): str,
vol.Required(ATTR_CONFIG): vol.Maybe(dict),
},

View File

@@ -1,4 +1,4 @@
"""Init file for Supervisor add-on Docker object."""
"""Init file for Supervisor app Docker object."""
from __future__ import annotations
@@ -14,7 +14,7 @@ import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
from ..addons.build import AddonBuild
from ..addons.build import AppBuild
from ..addons.const import MappingType
from ..bus import EventListener
from ..const import (
@@ -71,7 +71,7 @@ from .const import (
from .interface import DockerInterface
if TYPE_CHECKING:
from ..addons.addon import Addon
from ..addons.addon import App
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -79,12 +79,12 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
NO_ADDDRESS = IPv4Address("0.0.0.0")
class DockerAddon(DockerInterface):
class DockerApp(DockerInterface):
"""Docker Supervisor wrapper for Home Assistant."""
def __init__(self, coresys: CoreSys, addon: Addon):
def __init__(self, coresys: CoreSys, app: App):
"""Initialize Docker Home Assistant wrapper."""
self.addon: Addon = addon
self.app: App = app
super().__init__(coresys)
self._hw_listener: EventListener | None = None
@@ -97,12 +97,12 @@ class DockerAddon(DockerInterface):
@property
def image(self) -> str | None:
"""Return name of Docker image."""
return self.addon.image
return self.app.image
@property
def ip_address(self) -> IPv4Address:
"""Return IP address of this container."""
if self.addon.host_network:
if self.app.host_network:
return self.sys_docker.network.gateway
if not self._meta:
return NO_ADDDRESS
@@ -112,49 +112,49 @@ class DockerAddon(DockerInterface):
return IPv4Address(
self._meta["NetworkSettings"]["Networks"]["hassio"]["IPAddress"]
)
except (KeyError, TypeError, ValueError):
except (KeyError, TypeError, ValueError):
return NO_ADDDRESS
@property
def timeout(self) -> int:
"""Return timeout for Docker actions."""
return self.addon.timeout
return self.app.timeout
@property
def version(self) -> AwesomeVersion:
"""Return version of Docker image."""
return self.addon.version
return self.app.version
@property
def arch(self) -> str | None:
"""Return arch of Docker image."""
if self.addon.legacy:
if self.app.legacy:
return str(self.sys_arch.default)
return super().arch
@property
def name(self) -> str:
"""Return name of Docker container."""
return DockerAddon.slug_to_name(self.addon.slug)
return DockerApp.slug_to_name(self.app.slug)
@property
def environment(self) -> dict[str, str | int | None]:
"""Return environment for Docker add-on."""
addon_env = cast(dict[str, str | int | None], self.addon.environment or {})
"""Return environment for Docker app."""
app_env = cast(dict[str, str | int | None], self.app.environment or {})
# Provide options for legacy add-ons
if self.addon.legacy:
for key, value in self.addon.options.items():
# Provide options for legacy apps
if self.app.legacy:
for key, value in self.app.options.items():
if isinstance(value, (int, str)):
addon_env[key] = value
app_env[key] = value
else:
_LOGGER.warning("Can not set nested option %s as Docker env", key)
return {
**addon_env,
**app_env,
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.addon.supervisor_token,
ENV_TOKEN_OLD: self.addon.supervisor_token,
ENV_TOKEN: self.app.supervisor_token,
ENV_TOKEN_OLD: self.app.supervisor_token,
}
@property
@@ -163,7 +163,7 @@ class DockerAddon(DockerInterface):
rules = set()
# Attach correct cgroups for static devices
for device_path in self.addon.static_devices:
for device_path in self.app.static_devices:
try:
device = self.sys_hardware.get_by_path(device_path)
except HardwareNotFound:
@@ -173,42 +173,42 @@ class DockerAddon(DockerInterface):
# Check access
if not self.sys_hardware.policy.allowed_for_access(device):
_LOGGER.error(
"Add-on %s try to access to blocked device %s!",
self.addon.name,
"App %s tried to access blocked device %s!",
self.app.name,
device.name,
)
continue
rules.add(self.sys_hardware.policy.get_cgroups_rule(device))
# Attach correct cgroups for devices
for device in self.addon.devices:
for device in self.app.devices:
if not self.sys_hardware.policy.allowed_for_access(device):
_LOGGER.error(
"Add-on %s try to access to blocked device %s!",
self.addon.name,
"App %s tried to access blocked device %s!",
self.app.name,
device.name,
)
continue
rules.add(self.sys_hardware.policy.get_cgroups_rule(device))
# Video
if self.addon.with_video:
if self.app.with_video:
rules.update(self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.VIDEO))
# GPIO
if self.addon.with_gpio:
if self.app.with_gpio:
rules.update(self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.GPIO))
# UART
if self.addon.with_uart:
if self.app.with_uart:
rules.update(self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.UART))
# USB
if self.addon.with_usb:
if self.app.with_usb:
rules.update(self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.USB))
# Full Access
if not self.addon.protected and self.addon.with_full_access:
if not self.app.protected and self.app.with_full_access:
return [self.sys_hardware.policy.get_full_access()]
# Return None if no rules is present
@@ -218,13 +218,13 @@ class DockerAddon(DockerInterface):
@property
def ports(self) -> dict[str, str | int | None] | None:
"""Filter None from add-on ports."""
if self.addon.host_network or not self.addon.ports:
"""Filter None from app ports."""
if self.app.host_network or not self.app.ports:
return None
return {
container_port: host_port
for container_port, host_port in self.addon.ports.items()
for container_port, host_port in self.app.ports.items()
if host_port
}
@@ -236,23 +236,23 @@ class DockerAddon(DockerInterface):
# AppArmor
if (
not self.sys_host.apparmor.available
or self.addon.apparmor == SECURITY_DISABLE
or self.app.apparmor == SECURITY_DISABLE
):
security.append("apparmor=unconfined")
elif self.addon.apparmor == SECURITY_PROFILE:
security.append(f"apparmor={self.addon.slug}")
elif self.app.apparmor == SECURITY_PROFILE:
security.append(f"apparmor={self.app.slug}")
return security
@property
def tmpfs(self) -> dict[str, str] | None:
"""Return tmpfs for Docker add-on."""
"""Return tmpfs for Docker app."""
tmpfs = {}
if self.addon.with_tmpfs:
if self.app.with_tmpfs:
tmpfs["/tmp"] = "" # noqa: S108
if not self.addon.host_ipc:
if not self.app.host_ipc:
tmpfs["/dev/shm"] = "" # noqa: S108
# Return None if no tmpfs is present
@@ -270,36 +270,36 @@ class DockerAddon(DockerInterface):
@property
def network_mode(self) -> Literal["host"] | None:
"""Return network mode for add-on."""
if self.addon.host_network:
"""Return network mode for app."""
if self.app.host_network:
return "host"
return None
@property
def pid_mode(self) -> str | None:
"""Return PID mode for add-on."""
if not self.addon.protected and self.addon.host_pid:
"""Return PID mode for app."""
if not self.app.protected and self.app.host_pid:
return "host"
return None
@property
def uts_mode(self) -> str | None:
"""Return UTS mode for add-on."""
if self.addon.host_uts:
"""Return UTS mode for app."""
if self.app.host_uts:
return "host"
return None
@property
def capabilities(self) -> list[Capabilities] | None:
"""Generate needed capabilities."""
capabilities: set[Capabilities] = set(self.addon.privileged)
capabilities: set[Capabilities] = set(self.app.privileged)
# Need work with kernel modules
if self.addon.with_kernel_modules:
if self.app.with_kernel_modules:
capabilities.add(Capabilities.SYS_MODULE)
# Need schedule functions
if self.addon.with_realtime:
if self.app.with_realtime:
capabilities.add(Capabilities.SYS_NICE)
# Return None if no capabilities is present
@@ -309,19 +309,19 @@ class DockerAddon(DockerInterface):
@property
def ulimits(self) -> list[Ulimit] | None:
"""Generate ulimits for add-on."""
"""Generate ulimits for app."""
limits: list[Ulimit] = []
# Need schedule functions
if self.addon.with_realtime:
if self.app.with_realtime:
limits.append(Ulimit(name="rtprio", soft=90, hard=99))
# Set available memory for memlock to 128MB
mem = 128 * 1024 * 1024
limits.append(Ulimit(name="memlock", soft=mem, hard=mem))
# Add configurable ulimits from add-on config
for name, config in self.addon.ulimits.items():
# Add configurable ulimits from app config
for name, config in self.app.ulimits.items():
if isinstance(config, int):
# Simple format: both soft and hard limits are the same
limits.append(Ulimit(name=name, soft=config, hard=config))
@@ -343,131 +343,129 @@ class DockerAddon(DockerInterface):
return None
# If need CPU RT
if self.addon.with_realtime:
if self.app.with_realtime:
return DOCKER_CPU_RUNTIME_ALLOCATION
return None
@property
def mounts(self) -> list[DockerMount]:
"""Return mounts for container."""
addon_mapping = self.addon.map_volumes
app_mapping = self.app.map_volumes
target_data_path: str | None = None
if MappingType.DATA in addon_mapping:
target_data_path = addon_mapping[MappingType.DATA].path
if MappingType.DATA in app_mapping:
target_data_path = app_mapping[MappingType.DATA].path
mounts = [
MOUNT_DEV,
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_data.as_posix(),
source=self.app.path_extern_data.as_posix(),
target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
read_only=False,
),
]
# setup config mappings
if MappingType.CONFIG in addon_mapping:
if MappingType.CONFIG in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.CONFIG].path
target=app_mapping[MappingType.CONFIG].path
or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
read_only=addon_mapping[MappingType.CONFIG].read_only,
read_only=app_mapping[MappingType.CONFIG].read_only,
)
)
else:
# Map addon's public config folder if not using deprecated config option
if self.addon.addon_config_used:
# Map app's public config folder if not using deprecated config option
if self.app.app_config_used:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_config.as_posix(),
target=addon_mapping[MappingType.ADDON_CONFIG].path
source=self.app.path_extern_config.as_posix(),
target=app_mapping[MappingType.ADDON_CONFIG].path
or PATH_PUBLIC_CONFIG.as_posix(),
read_only=addon_mapping[MappingType.ADDON_CONFIG].read_only,
read_only=app_mapping[MappingType.ADDON_CONFIG].read_only,
)
)
# Map Home Assistant config in new way
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
if MappingType.HOMEASSISTANT_CONFIG in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
target=app_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or PATH_HOMEASSISTANT_CONFIG.as_posix(),
read_only=addon_mapping[
read_only=app_mapping[
MappingType.HOMEASSISTANT_CONFIG
].read_only,
)
)
if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
if MappingType.ALL_ADDON_CONFIGS in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_addon_configs.as_posix(),
target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
source=self.sys_config.path_extern_app_configs.as_posix(),
target=app_mapping[MappingType.ALL_ADDON_CONFIGS].path
or PATH_ALL_ADDON_CONFIGS.as_posix(),
read_only=addon_mapping[MappingType.ALL_ADDON_CONFIGS].read_only,
read_only=app_mapping[MappingType.ALL_ADDON_CONFIGS].read_only,
)
)
if MappingType.SSL in addon_mapping:
if MappingType.SSL in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
read_only=addon_mapping[MappingType.SSL].read_only,
target=app_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
read_only=app_mapping[MappingType.SSL].read_only,
)
)
if MappingType.ADDONS in addon_mapping:
if MappingType.ADDONS in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_addons_local.as_posix(),
target=addon_mapping[MappingType.ADDONS].path
source=self.sys_config.path_extern_apps_local.as_posix(),
target=app_mapping[MappingType.ADDONS].path
or PATH_LOCAL_ADDONS.as_posix(),
read_only=addon_mapping[MappingType.ADDONS].read_only,
read_only=app_mapping[MappingType.ADDONS].read_only,
)
)
if MappingType.BACKUP in addon_mapping:
if MappingType.BACKUP in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_backup.as_posix(),
target=addon_mapping[MappingType.BACKUP].path
target=app_mapping[MappingType.BACKUP].path
or PATH_BACKUP.as_posix(),
read_only=addon_mapping[MappingType.BACKUP].read_only,
read_only=app_mapping[MappingType.BACKUP].read_only,
)
)
if MappingType.SHARE in addon_mapping:
if MappingType.SHARE in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target=addon_mapping[MappingType.SHARE].path
or PATH_SHARE.as_posix(),
read_only=addon_mapping[MappingType.SHARE].read_only,
target=app_mapping[MappingType.SHARE].path or PATH_SHARE.as_posix(),
read_only=app_mapping[MappingType.SHARE].read_only,
bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
)
)
if MappingType.MEDIA in addon_mapping:
if MappingType.MEDIA in app_mapping:
mounts.append(
DockerMount(
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target=addon_mapping[MappingType.MEDIA].path
or PATH_MEDIA.as_posix(),
read_only=addon_mapping[MappingType.MEDIA].read_only,
target=app_mapping[MappingType.MEDIA].path or PATH_MEDIA.as_posix(),
read_only=app_mapping[MappingType.MEDIA].read_only,
bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
)
)
@@ -475,7 +473,7 @@ class DockerAddon(DockerInterface):
# Init other hardware mappings
# GPIO support
if self.addon.with_gpio and self.sys_hardware.helper.support_gpio:
if self.app.with_gpio and self.sys_hardware.helper.support_gpio:
for gpio_path in ("/sys/class/gpio", "/sys/devices/platform/soc"):
if not Path(gpio_path).exists():
continue
@@ -489,7 +487,7 @@ class DockerAddon(DockerInterface):
)
# DeviceTree support
if self.addon.with_devicetree:
if self.app.with_devicetree:
mounts.append(
DockerMount(
type=MountType.BIND,
@@ -500,11 +498,11 @@ class DockerAddon(DockerInterface):
)
# Host udev support
if self.addon.with_udev:
if self.app.with_udev:
mounts.append(MOUNT_UDEV)
# Kernel Modules support
if self.addon.with_kernel_modules:
if self.app.with_kernel_modules:
mounts.append(
DockerMount(
type=MountType.BIND,
@@ -515,19 +513,19 @@ class DockerAddon(DockerInterface):
)
# Docker API support
if not self.addon.protected and self.addon.access_docker_api:
if not self.app.protected and self.app.access_docker_api:
mounts.append(MOUNT_DOCKER)
# Host D-Bus system
if self.addon.host_dbus:
if self.app.host_dbus:
mounts.append(MOUNT_DBUS)
# Configuration Audio
if self.addon.with_audio:
if self.app.with_audio:
mounts += [
DockerMount(
type=MountType.BIND,
source=self.addon.path_extern_pulse.as_posix(),
source=self.app.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
@@ -546,7 +544,7 @@ class DockerAddon(DockerInterface):
]
# System Journal access
if self.addon.with_journald:
if self.app.with_journald:
mounts += [
DockerMount(
type=MountType.BIND,
@@ -572,21 +570,21 @@ class DockerAddon(DockerInterface):
async def run(self) -> None:
"""Run Docker image."""
# Security check
if not self.addon.protected:
_LOGGER.warning("%s running with disabled protected mode!", self.addon.name)
if not self.app.protected:
_LOGGER.warning("%s running with disabled protected mode!", self.app.name)
# Don't set a hostname if no separate UTS namespace is used
hostname = None if self.uts_mode else self.addon.hostname
hostname = None if self.uts_mode else self.app.hostname
# Create & Run container
try:
await self._run(
tag=str(self.addon.version),
tag=str(self.app.version),
name=self.name,
hostname=hostname,
detach=True,
init=self.addon.default_init,
stdin_open=self.addon.with_stdin,
init=self.app.default_init,
stdin_open=self.app.with_stdin,
network_mode=self.network_mode,
pid_mode=self.pid_mode,
uts_mode=self.uts_mode,
@@ -606,26 +604,24 @@ class DockerAddon(DockerInterface):
self.sys_resolution.create_issue(
IssueType.MISSING_IMAGE,
ContextType.ADDON,
reference=self.addon.slug,
reference=self.app.slug,
suggestions=[SuggestionType.EXECUTE_REPAIR],
)
raise
_LOGGER.info(
"Starting Docker add-on %s with version %s", self.image, self.version
)
_LOGGER.info("Starting Docker app %s with version %s", self.image, self.version)
# Write data to DNS server
try:
await self.sys_plugins.dns.add_host(
ipv4=self.ip_address, names=[self.addon.hostname]
ipv4=self.ip_address, names=[self.app.hostname]
)
except CoreDNSError as err:
_LOGGER.warning("Can't update DNS for %s", self.name)
await async_capture_exception(err)
# Hardware Access
if self.addon.static_devices:
if self.app.static_devices:
self._hw_listener = self.sys_bus.register_event(
BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events
)
@@ -655,7 +651,7 @@ class DockerAddon(DockerInterface):
image=image,
latest=latest,
arch=arch,
need_build=self.addon.latest_need_build,
need_build=self.app.latest_need_build,
)
@Job(
@@ -673,32 +669,27 @@ class DockerAddon(DockerInterface):
need_build: bool | None = None,
) -> None:
"""Pull Docker image or build it."""
if need_build is None and self.addon.need_build or need_build:
if need_build is None and self.app.need_build or need_build:
await self._build(version, image)
else:
await super().install(version, image, latest, arch)
async def _build(self, version: AwesomeVersion, image: str | None = None) -> None:
"""Build a Docker container."""
build_env = await AddonBuild(self.coresys, self.addon).load_config()
build_env = await AppBuild.create(self.coresys, self.app)
# Check if the build environment is valid, raises if not
await build_env.is_valid()
_LOGGER.info("Starting build for %s:%s", self.image, version)
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
addon_image_tag = f"{image or self.addon.image}:{version!s}"
app_image_tag = f"{image or self.app.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = (
f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
)
builder_name = f"addon_builder_{self.addon.slug}"
builder_name = f"addon_builder_{self.app.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
@@ -739,7 +730,7 @@ class DockerAddon(DockerInterface):
return (
temp_dir,
build_env.get_docker_args(
version, addon_image_tag, docker_config_path
version, app_image_tag, docker_config_path
),
)
@@ -760,10 +751,10 @@ class DockerAddon(DockerInterface):
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
logs = "\n".join(result.log)
logs = "".join(result.log)
if result.exit_code != 0:
raise DockerBuildError(
f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}",
f"Docker build failed for {app_image_tag} (exit code {result.exit_code}). Build output:\n{logs}",
_LOGGER.error,
)
@@ -771,14 +762,23 @@ class DockerAddon(DockerInterface):
try:
# Update meta data
self._meta = await self.sys_docker.images.inspect(addon_image_tag)
self._meta = await self.sys_docker.images.inspect(app_image_tag)
except aiodocker.DockerError as err:
raise DockerBuildError(
f"Can't get image metadata for {addon_image_tag} after build: {err!s}"
f"Can't get image metadata for {app_image_tag} after build: {err!s}"
) from err
_LOGGER.info("Build %s:%s done", self.image, version)
# Clean up old add-on builder images from previous Docker versions.
# Done here after build because cleanup_old_images needs the current
# image to exist, and the builder image is only pulled on first build
# (in run_command) after a Docker engine update.
with suppress(DockerError):
await self.sys_docker.cleanup_old_images(
ADDON_BUILDER_IMAGE, AwesomeVersion(builder_version_tag)
)
async def export_image(self, tar_file: Path) -> None:
"""Export current images into a tar file."""
if not self.image:
@@ -817,11 +817,11 @@ class DockerAddon(DockerInterface):
use_version,
{old_image} if old_image else None,
keep_images={
f"{addon.image}:{addon.version}"
for addon in self.sys_addons.installed
if addon.slug != self.addon.slug
and addon.image
and addon.image in {old_image, use_image}
f"{app.image}:{app.version}"
for app in self.sys_apps.installed
if app.slug != self.app.slug
and app.image
and app.image in {old_image, use_image}
},
)
@@ -831,7 +831,7 @@ class DockerAddon(DockerInterface):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin."""
"""Write to app stdin."""
try:
# Load needed docker objects
container = await self.sys_docker.containers.get(self.name)
@@ -861,7 +861,7 @@ class DockerAddon(DockerInterface):
# DNS
if self.ip_address != NO_ADDDRESS:
try:
await self.sys_plugins.dns.delete_host(self.addon.hostname)
await self.sys_plugins.dns.delete_host(self.app.hostname)
except CoreDNSError as err:
_LOGGER.warning("Can't update DNS for %s", self.name)
await async_capture_exception(err)
@@ -874,11 +874,12 @@ class DockerAddon(DockerInterface):
await super().stop(remove_container)
# If there is a device access issue and the container is removed, clear it
if (
remove_container
and self.addon.device_access_missing_issue in self.sys_resolution.issues
if remove_container and (
issue := self.sys_resolution.get_issue_if_present(
self.app.device_access_missing_issue
)
):
self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)
self.sys_resolution.dismiss_issue(issue)
@Job(
name="docker_addon_hardware_events",
@@ -890,7 +891,7 @@ class DockerAddon(DockerInterface):
"""Process Hardware events for adjust device access."""
if not any(
device_path in (device.path, device.sysfs)
for device_path in self.addon.static_devices
for device_path in self.app.static_devices
):
return
@@ -911,7 +912,7 @@ class DockerAddon(DockerInterface):
and not self.sys_os.available
):
self.sys_resolution.add_issue(
evolve(self.addon.device_access_missing_issue),
evolve(self.app.device_access_missing_issue),
suggestions=[SuggestionType.EXECUTE_RESTART],
)
return


@@ -23,6 +23,9 @@ DOCKER_HUB_API = "registry-1.docker.io"
# Legacy Docker Hub identifier for backward compatibility
DOCKER_HUB_LEGACY = "hub.docker.com"
# GitHub Container Registry identifier
GITHUB_CONTAINER_REGISTRY = "ghcr.io"
class Capabilities(StrEnum):
"""Linux Capabilities."""
@@ -140,6 +143,7 @@ class Ulimit:
}
ENV_CORE_API_SOCKET = "SUPERVISOR_CORE_API_SOCKET"
ENV_DUPLICATE_LOG_FILE = "HA_DUPLICATE_LOG_FILE"
ENV_TIME = "TZ"
ENV_TOKEN = "SUPERVISOR_TOKEN"
@@ -169,6 +173,12 @@ MOUNT_MACHINE_ID = DockerMount(
target=MACHINE_ID.as_posix(),
read_only=True,
)
MOUNT_CORE_RUN = DockerMount(
type=MountType.BIND,
source="/run/supervisor",
target="/run/supervisor",
read_only=False,
)
MOUNT_UDEV = DockerMount(
type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
)
@@ -185,4 +195,6 @@ PATH_SHARE = PurePath("/share")
PATH_MEDIA = PurePath("/media")
# https://hub.docker.com/_/docker
ADDON_BUILDER_IMAGE = "docker.io/library/docker"
# Use short name as Docker stores it this way; the canonical docker.io/library/docker
# does not match the reference filter used by cleanup_old_images.
ADDON_BUILDER_IMAGE = "docker"
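The comment above hinges on Docker's name normalization: the daemon stores the builder image under its familiar short name, while a reference filter matches the literal stored string, not the expanded canonical form. A minimal sketch of that normalization (simplified; ignores digests and registry host ports):

```python
def normalize_reference(ref: str) -> str:
    """Expand a familiar image name to its canonical form (sketch)."""
    name, _sep, tag = ref.partition(":")
    tag = tag or "latest"
    parts = name.split("/")
    if len(parts) == 1:
        # A single component is an official image on Docker Hub.
        name = f"docker.io/library/{name}"
    elif "." not in parts[0] and parts[0] != "localhost":
        # "user/repo" without a registry host gets the default registry.
        name = f"docker.io/{name}"
    return f"{name}:{tag}"

# The short and canonical spellings name the same image after expansion,
# but only the stored spelling matches a literal reference filter.
assert normalize_reference("docker") == "docker.io/library/docker:latest"
```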


@@ -13,10 +13,12 @@ from ..homeassistant.const import LANDINGPAGE
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job
from .const import (
ENV_CORE_API_SOCKET,
ENV_DUPLICATE_LOG_FILE,
ENV_TIME,
ENV_TOKEN,
ENV_TOKEN_OLD,
MOUNT_CORE_RUN,
MOUNT_DBUS,
MOUNT_DEV,
MOUNT_MACHINE_ID,
@@ -162,6 +164,9 @@ class DockerHomeAssistant(DockerInterface):
if self.sys_machine_id:
mounts.append(MOUNT_MACHINE_ID)
if self.sys_homeassistant.api.supports_unix_socket:
mounts.append(MOUNT_CORE_RUN)
return mounts
@Job(
@@ -180,6 +185,8 @@ class DockerHomeAssistant(DockerInterface):
}
if restore_job_id:
environment[ENV_RESTORE_JOB_ID] = restore_job_id
if self.sys_homeassistant.api.supports_unix_socket:
environment[ENV_CORE_API_SOCKET] = "/run/supervisor/core.sock"
if self.sys_homeassistant.duplicate_log_file:
environment[ENV_DUPLICATE_LOG_FILE] = "1"
await self._run(


@@ -16,7 +16,6 @@ import aiodocker
import aiohttp
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import requests
from ..const import (
ATTR_PASSWORD,
@@ -34,26 +33,32 @@ from ..exceptions import (
DockerHubRateLimitExceeded,
DockerJobError,
DockerNotFound,
DockerRequestError,
DockerRegistryAuthError,
DockerRegistryRateLimitExceeded,
GithubContainerRegistryRateLimitExceeded,
)
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, ContainerState, RestartPolicy
from .const import (
DOCKER_HUB,
DOCKER_HUB_LEGACY,
GITHUB_CONTAINER_REGISTRY,
ContainerState,
RestartPolicy,
)
from .manager import CommandReturn, ExecReturn, PullLogEntry
from .monitor import DockerContainerStateEvent
from .pull_progress import ImagePullProgress
from .stats import DockerStats
from .utils import get_registry_from_image
_LOGGER: logging.Logger = logging.getLogger(__name__)
MAP_ARCH: dict[CpuArch, str] = {
CpuArch.ARMV7: "linux/arm/v7",
CpuArch.ARMHF: "linux/arm/v6",
CpuArch.AARCH64: "linux/arm64",
CpuArch.I386: "linux/386",
CpuArch.AMD64: "linux/amd64",
}
@@ -119,6 +124,11 @@ class DockerInterface(JobGroup, ABC):
def name(self) -> str:
"""Return name of Docker container."""
@property
def attached(self) -> bool:
"""Return True if container/image metadata has been loaded."""
return self._meta is not None
@property
def meta_config(self) -> dict[str, Any]:
"""Return meta data of configuration for container/image."""
@@ -187,18 +197,31 @@ class DockerInterface(JobGroup, ABC):
"""Healthcheck of instance if it has one."""
return self.meta_config.get("Healthcheck")
def _get_credentials(self, image: str) -> dict:
"""Return a dictionary with credentials for docker login."""
def _get_credentials(self, image: str) -> tuple[dict, str]:
"""Return credentials for docker login and the qualified image name.
Returns a tuple of (credentials_dict, qualified_image) where the image
is prefixed with the registry when needed. This ensures aiodocker sets
the correct ServerAddress in the X-Registry-Auth header, which Docker's
containerd image store requires to match the actual registry host.
"""
credentials = {}
registry = self.sys_docker.config.get_registry_for_image(image)
qualified_image = image
if registry:
stored = self.sys_docker.config.registries[registry]
credentials[ATTR_USERNAME] = stored[ATTR_USERNAME]
credentials[ATTR_PASSWORD] = stored[ATTR_PASSWORD]
# Don't include registry for Docker Hub (both official and legacy)
if registry not in (DOCKER_HUB, DOCKER_HUB_LEGACY):
credentials[ATTR_REGISTRY] = registry
credentials[ATTR_REGISTRY] = registry
# For Docker Hub images, the image name typically lacks a registry
# prefix (e.g. "homeassistant/foo" instead of "docker.io/homeassistant/foo").
# aiodocker derives ServerAddress from image.partition("/"), so without
# the prefix it would use the namespace ("homeassistant") as ServerAddress,
# which Docker's containerd resolver rejects as a host mismatch.
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
qualified_image = f"{DOCKER_HUB}/{image}"
_LOGGER.debug(
"Logging in to %s as %s",
@@ -206,7 +229,30 @@ class DockerInterface(JobGroup, ABC):
stored[ATTR_USERNAME],
)
return credentials
return credentials, qualified_image
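The ServerAddress pitfall the docstring describes can be reproduced with a one-line sketch of the derivation it attributes to aiodocker (an illustration of the docstring's claim, not aiodocker's exact code):

```python
def server_address_for(image: str) -> str:
    """Derive the registry host the way the docstring says aiodocker does."""
    host, _sep, _rest = image.partition("/")
    return host

# Unqualified Docker Hub name: the namespace is mistaken for a registry host,
# which containerd's resolver rejects as a host mismatch.
assert server_address_for("homeassistant/amd64-supervisor") == "homeassistant"
# Prefixing the registry yields the correct host.
assert server_address_for("docker.io/homeassistant/amd64-supervisor") == "docker.io"
```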
def _registry_rate_limit_exception(
self, image: str
) -> DockerRegistryRateLimitExceeded:
"""Return typed rate-limit exception and maybe create a resolution issue.
The registry is derived from the image reference. Docker Hub gets a
DOCKER_RATELIMIT resolution issue with a registry-login suggestion
(actionable - logging in lifts the unauthenticated quota). GHCR and
unknown registries only produce a typed exception and a log entry;
no resolution issue since there's nothing actionable for the user.
"""
registry = get_registry_from_image(image)
if registry == GITHUB_CONTAINER_REGISTRY:
return GithubContainerRegistryRateLimitExceeded(_LOGGER.warning)
if registry is None or registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
return DockerHubRateLimitExceeded(_LOGGER.warning)
return DockerRegistryRateLimitExceeded(_LOGGER.warning)
@Job(
name="docker_interface_install",
@@ -293,15 +339,15 @@ class DockerInterface(JobGroup, ABC):
_LOGGER.info("Downloading docker image %s with tag %s.", image, version)
try:
# Get credentials for private registries to pass to aiodocker
credentials = self._get_credentials(image) or None
credentials, pull_image_name = self._get_credentials(image)
# Pull new image, passing credentials to aiodocker
docker_image = await self.sys_docker.pull_image(
current_job.uuid,
image,
pull_image_name,
str(version),
platform=platform,
auth=credentials,
auth=credentials or None,
)
# Tag latest
@@ -312,14 +358,29 @@ class DockerInterface(JobGroup, ABC):
await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except DockerRegistryRateLimitExceeded as err:
# Rate limit surfaced via the streaming pull protocol (no HTTP
# status to key off of). Refine into a registry-specific exception
# now that we know which image was being pulled.
raise self._registry_rate_limit_exception(image) from err
except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
# Pre-28.3.0 daemons wrap registry rate limits as HTTP 500
# instead of forwarding 429: api/server/httpstatus/status.go
# mapped cerrdefs.IsUnknown to 500. Fixed upstream by moby/moby
# commit 23fa0ae74a ("Cleanup http status error checks",
# first released in Docker 28.3.0). We still need to detect it
# for the large fleet on older daemons — match on the message
# body since the HTTP status is useless for that window.
message = str(err.message) if err.message else ""
if err.status == HTTPStatus.TOO_MANY_REQUESTS or (
err.status == HTTPStatus.INTERNAL_SERVER_ERROR
and "toomanyrequests" in message
):
raise self._registry_rate_limit_exception(image) from err
if err.status == HTTPStatus.UNAUTHORIZED and credentials:
raise DockerRegistryAuthError(
_LOGGER.error, registry=credentials[ATTR_REGISTRY]
) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
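The pre-28.3.0 detection in the hunk above reduces to a small predicate; a standalone sketch (illustrative only):

```python
from http import HTTPStatus

def is_registry_rate_limit(status: int, message: str) -> bool:
    """Detect a registry rate limit across daemon versions (sketch).

    Daemons from Docker 28.3.0 on forward the registry's HTTP 429; older
    daemons wrap it as a 500 whose body contains "toomanyrequests".
    """
    return status == HTTPStatus.TOO_MANY_REQUESTS or (
        status == HTTPStatus.INTERNAL_SERVER_ERROR
        and "toomanyrequests" in message
    )

assert is_registry_rate_limit(429, "")
assert is_registry_rate_limit(500, "toomanyrequests: You have reached your pull rate limit")
assert not is_registry_rate_limit(500, "unrelated daemon error")
```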
@@ -331,7 +392,7 @@ class DockerInterface(JobGroup, ABC):
async def exists(self) -> bool:
"""Return True if Docker image exists in local repository."""
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
await self.sys_docker.images.inspect(f"{self.image}:{self.version!s}")
return True
return False
@@ -347,10 +408,6 @@ class DockerInterface(JobGroup, ABC):
raise DockerAPIError(
f"Docker API error occurred while getting container information: {err!s}"
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Error communicating with Docker to get container information: {err!s}"
) from err
async def is_running(self) -> bool:
"""Return True if Docker is running."""
@@ -371,7 +428,7 @@ class DockerInterface(JobGroup, ABC):
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None:
"""Attach to running Docker container."""
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
docker_container = await self.sys_docker.containers.get(self.name)
self._meta = await docker_container.show()
self.sys_docker.monitor.watch_container(self._meta)
@@ -389,7 +446,7 @@ class DockerInterface(JobGroup, ABC):
),
)
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
if not self._meta and self.image:
self._meta = await self.sys_docker.images.inspect(
f"{self.image}:{version!s}"
@@ -492,7 +549,7 @@ class DockerInterface(JobGroup, ABC):
if self.image == expected_image:
try:
image = await self.sys_docker.images.inspect(image_name)
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Could not get {image_name} for check due to: {err!s}",
_LOGGER.error,
@@ -615,10 +672,6 @@ class DockerInterface(JobGroup, ABC):
raise DockerNotFound(
f"No version found for {self.image}", _LOGGER.info
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Communication issues with dockerd on Host: {err}", _LOGGER.warning
) from err
_LOGGER.info("Found %s versions: %s", self.image, available_version)


@@ -6,8 +6,6 @@ import asyncio
from collections.abc import Mapping
from contextlib import suppress
from dataclasses import dataclass
import errno
from functools import partial
from http import HTTPStatus
from io import BufferedReader, BufferedWriter
from ipaddress import IPv4Address
@@ -25,9 +23,6 @@ from aiodocker.stream import Stream
from aiodocker.types import JSONObject
from aiohttp import ClientTimeout, UnixConnector
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors
from docker.client import DockerClient
import requests
from ..const import (
ATTR_ENABLE_IPV6,
@@ -48,9 +43,8 @@ from ..exceptions import (
DockerError,
DockerNoSpaceOnDevice,
DockerNotFound,
DockerRequestError,
DockerRegistryRateLimitExceeded,
)
from ..resolution.const import UnhealthyReason
from ..utils.common import FileConfiguration
from ..validate import SCHEMA_DOCKER_CONFIG
from .const import (
@@ -198,6 +192,12 @@ class PullLogEntry:
raise RuntimeError("No error to convert to exception!")
if self.error.endswith("no space left on device"):
return DockerNoSpaceOnDevice(_LOGGER.error)
if "toomanyrequests" in self.error:
# Registry rate limit. The streaming pull protocol doesn't carry
# HTTP status codes, so the error only surfaces as a text message
# here. Install() refines this into a Docker Hub / GHCR specific
# exception based on the image being pulled.
return DockerRegistryRateLimitExceeded(_LOGGER.warning)
return DockerError(self.error, _LOGGER.error)
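Because the streaming pull protocol carries no HTTP status, the conversion above classifies errors purely by message text. A standalone sketch of that classification (the returned labels are illustrative, not Supervisor's exception types):

```python
def classify_pull_error(error: str) -> str:
    """Classify a streaming-pull error message by its text (sketch)."""
    if error.endswith("no space left on device"):
        return "no-space-on-device"
    if "toomanyrequests" in error:
        # Only this marker in the message identifies a registry rate limit;
        # the caller later refines it per registry from the pulled image.
        return "registry-rate-limit"
    return "generic"

assert classify_pull_error("write /var/lib/docker/tmp: no space left on device") == "no-space-on-device"
```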
@@ -231,7 +231,7 @@ class DockerConfig(FileConfiguration):
@property
def registries(self) -> dict[str, Any]:
"""Return credentials for docker registries."""
return self._data.get(ATTR_REGISTRIES, {})
return self._data[ATTR_REGISTRIES]
def get_registry_for_image(self, image: str) -> str | None:
"""Return the registry name if credentials are available for the image.
@@ -270,8 +270,6 @@ class DockerAPI(CoreSysAttributes):
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys = coresys
# We keep both until we can fully refactor to aiodocker
self._dockerpy: DockerClient | None = None
self.docker: aiodocker.Docker = aiodocker.Docker(
url="unix://localhost", # dummy hostname for URL composition
connector=UnixConnector(SOCKET_DOCKER.as_posix()),
@@ -289,15 +287,6 @@ class DockerAPI(CoreSysAttributes):
async def post_init(self) -> Self:
"""Post init actions that must be done in event loop."""
self._dockerpy = await asyncio.get_running_loop().run_in_executor(
None,
partial(
DockerClient,
base_url=f"unix:/{SOCKET_DOCKER.as_posix()}",
version="auto",
timeout=900,
),
)
self._info = await DockerInfo.new(await self.docker.system.info())
await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init(
@@ -305,13 +294,6 @@ class DockerAPI(CoreSysAttributes):
)
return self
@property
def dockerpy(self) -> DockerClient:
"""Get docker API client."""
if not self._dockerpy:
raise RuntimeError("Docker API Client not initialized!")
return self._dockerpy
@property
def network(self) -> DockerNetwork:
"""Get Docker network."""
@@ -612,12 +594,6 @@ class DockerAPI(CoreSysAttributes):
raise DockerAPIError(
f"Can't start {name or container.id}: {err}", _LOGGER.error
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Dockerd connection issue for {name or container.id}: {err}",
_LOGGER.error,
) from err
return container
async def run(
@@ -633,10 +609,6 @@ class DockerAPI(CoreSysAttributes):
raise DockerAPIError(
f"Can't inspect started container {name}: {err}", _LOGGER.error
) from err
except requests.RequestException as err:
raise DockerRequestError(
f"Dockerd connection issue for {name}: {err}", _LOGGER.error
) from err
return container_attrs
@@ -725,43 +697,40 @@ class DockerAPI(CoreSysAttributes):
async def repair(self) -> None:
"""Repair local docker overlayfs2 issues."""
def repair_docker_blocking():
_LOGGER.info("Prune stale containers")
try:
output = self.dockerpy.api.prune_containers()
_LOGGER.debug("Containers prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale containers")
try:
output = await self.docker.containers.prune()
_LOGGER.debug("Containers prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = self.dockerpy.api.prune_images(filters={"dangling": False})
_LOGGER.debug("Images prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = await self.images.prune(filters={"dangling": "false"})
_LOGGER.debug("Images prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = self.dockerpy.api.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = await self.images.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = self.dockerpy.api.prune_volumes()
_LOGGER.debug("Volumes prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = await self.docker.volumes.prune()
_LOGGER.debug("Volumes prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale networks")
try:
output = self.dockerpy.api.prune_networks()
_LOGGER.debug("Networks prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for networks prune: %s", err)
await self.sys_run_in_executor(repair_docker_blocking)
_LOGGER.info("Prune stale networks")
try:
output = await self.docker.networks.prune()
_LOGGER.debug("Networks prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for networks prune: %s", err)
_LOGGER.info("Fix stale container on hassio network")
try:
@@ -1018,7 +987,7 @@ class DockerAPI(CoreSysAttributes):
if err.status != HTTPStatus.NOT_FOUND:
raise
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Can't remove image {image}: {err}", _LOGGER.warning
) from err
@@ -1039,10 +1008,7 @@ class DockerAPI(CoreSysAttributes):
f"Can't import image from tar: {err}", _LOGGER.error
) from err
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise DockerError(
f"Can't read tar file {tar_file}: {err}", _LOGGER.error
) from err
@@ -1070,7 +1036,7 @@ class DockerAPI(CoreSysAttributes):
try:
return await self.images.inspect(docker_image_list[0])
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Could not inspect imported image due to: {err!s}", _LOGGER.error
) from err
@@ -1095,10 +1061,7 @@ class DockerAPI(CoreSysAttributes):
f"Can't fetch image {image}:{version}: {err}", _LOGGER.error
) from err
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise DockerError(
f"Can't write tar file {tar_file}: {err}", _LOGGER.error
) from err
@@ -1127,7 +1090,7 @@ class DockerAPI(CoreSysAttributes):
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
raise
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Can't get {current_image} for cleanup", _LOGGER.warning
) from err
@@ -1161,7 +1124,7 @@ class DockerAPI(CoreSysAttributes):
images_list = await self.images.list(
filters=json.dumps({"reference": image_names})
)
except (aiodocker.DockerError, requests.RequestException) as err:
except aiodocker.DockerError as err:
raise DockerError(
f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
) from err
@@ -1170,6 +1133,6 @@ class DockerAPI(CoreSysAttributes):
if docker_image["Id"] in keep:
continue
with suppress(aiodocker.DockerError, requests.RequestException):
with suppress(aiodocker.DockerError):
_LOGGER.info("Cleanup images: %s", docker_image["RepoTags"])
await self.images.delete(docker_image["Id"], force=True)
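The converted `repair()` above repeats the same log/try/warn shape for containers, images, builds, volumes, and networks. As a rough illustration (not Supervisor code), the pattern can be factored into a loop; a plain `Exception` stands in for `aiodocker.DockerError` here:

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)


async def prune_all(prune_ops):
    """Run each prune step, logging failures at WARNING and continuing."""
    for label, op in prune_ops:
        _LOGGER.info("Prune stale %s", label)
        try:
            output = await op()
            _LOGGER.debug("%s prune: %s", label, output)
        except Exception as err:  # stand-in for aiodocker.DockerError
            _LOGGER.warning("Error for %s prune: %s", label, err)
```

Each entry would pair a label with an awaitable such as `docker.containers.prune`; the point of the shape is that one failing step never aborts the remaining prune steps.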


@@ -24,7 +24,7 @@ from ..const import (
OBSERVER_DOCKER_NAME,
SUPERVISOR_DOCKER_NAME,
)
from ..exceptions import DockerError
from ..exceptions import DockerError, DockerNotFound
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -85,10 +85,13 @@ class DockerNetwork:
)
current_mtu = int(current_mtu_str) if current_mtu_str is not None else None
# Check if we have explicitly provided settings that differ from what is set
# Check if settings differ from what is set. Use default if not explicitly set.
changes = []
if enable_ipv6 is not None and current_ipv6 != enable_ipv6:
changes.append("IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only")
effective_ipv6 = (
enable_ipv6 if enable_ipv6 is not None else DOCKER_ENABLE_IPV6_DEFAULT
)
if current_ipv6 != effective_ipv6:
changes.append("IPv4/IPv6 Dual-Stack" if effective_ipv6 else "IPv4-Only")
if mtu is not None and current_mtu != mtu:
changes.append(f"MTU {mtu}")
@@ -287,6 +290,8 @@ class DockerNetwork:
try:
container = await self.docker.containers.get(name)
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
raise DockerNotFound(f"Can't find {name}") from err
raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err
if container.id not in self.containers:
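The hunk above narrows 404 responses into `DockerNotFound` so callers can `suppress()` the expected missing-container case without an ERROR log, while other API failures keep error-level logging. A standalone sketch with stand-in exception classes (the real Supervisor exceptions also take a logger):

```python
from contextlib import suppress
from http import HTTPStatus


class DockerError(Exception):
    """Stand-in for the Supervisor exception; the real one logs on raise."""


class DockerNotFound(DockerError):
    """Raised for 404 responses, without logging."""


class FakeAPIError(Exception):
    """Stand-in for aiodocker.DockerError, which carries a status."""

    def __init__(self, status: int) -> None:
        self.status = status


def get_container(name: str, fetch):
    try:
        return fetch(name)
    except FakeAPIError as err:
        if err.status == HTTPStatus.NOT_FOUND:
            raise DockerNotFound(f"Can't find {name}") from err
        raise DockerError(f"Can't find {name}: {err}") from err
```

A `with suppress(DockerError):` block still swallows both cases, but only the non-404 path would have produced a log line.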


@@ -5,11 +5,6 @@ from typing import Any
from .const import OBSERVER_PORT
MESSAGE_CHECK_SUPERVISOR_LOGS = (
"Check supervisor logs for details (check with '{logs_command}')"
)
EXTRA_FIELDS_LOGS_COMMAND = {"logs_command": "ha supervisor logs"}
class HassioError(Exception):
"""Root exception."""
@@ -46,7 +41,7 @@ class HassioNotSupportedError(HassioError):
# API
class APIError(HassioError, RuntimeError):
class APIError(HassioError):
"""API errors."""
status = 400
@@ -102,8 +97,8 @@ class APIInternalServerError(APIError):
status = 500
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIAppNotInstalled(APIError):
"""Not installed app requested at apps API."""
class APIDBMigrationInProgress(APIError):
@@ -125,9 +120,8 @@ class APIUnknownSupervisorError(APIError):
) -> None:
"""Initialize exception."""
self.message_template = (
f"{self.message_template}. {MESSAGE_CHECK_SUPERVISOR_LOGS}"
f"{self.message_template}. Check Supervisor logs for details"
)
self.extra_fields = (self.extra_fields or {}) | EXTRA_FIELDS_LOGS_COMMAND
super().__init__(None, logger, job_id=job_id)
@@ -348,70 +342,68 @@ class AudioJobError(AudioError, PluginJobError):
"""Raise on job error with audio plugin."""
# Addons
# Apps
class AddonsError(HassioError):
"""Addons exception."""
class AppsError(HassioError):
"""Apps exception."""
class AddonConfigurationError(AddonsError):
"""Error with add-on configuration."""
class AppConfigurationError(AppsError):
"""Error with app configuration."""
class AddonConfigurationInvalidError(AddonConfigurationError, APIError):
"""Raise if invalid configuration provided for addon."""
class AppConfigurationInvalidError(AppConfigurationError, APIError):
"""Raise if invalid configuration provided for app."""
error_key = "addon_configuration_invalid_error"
message_template = "Add-on {addon} has invalid options: {validation_error}"
message_template = "App {addon} has invalid options: {validation_error}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
app: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
self.extra_fields = {"addon": app, "validation_error": validation_error}
super().__init__(None, logger)
class AddonBootConfigCannotChangeError(AddonsError, APIError):
"""Raise if user attempts to change addon boot config when it can't be changed."""
class AppBootConfigCannotChangeError(AppsError, APIError):
"""Raise if user attempts to change app boot config when it can't be changed."""
error_key = "addon_boot_config_cannot_change_error"
message_template = (
"Addon {addon} boot option is set to {boot_config} so it cannot be changed"
"App {addon} boot option is set to {boot_config} so it cannot be changed"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, boot_config: str
self, logger: Callable[..., None] | None = None, *, app: str, boot_config: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "boot_config": boot_config}
self.extra_fields = {"addon": app, "boot_config": boot_config}
super().__init__(None, logger)
class AddonNotRunningError(AddonsError, APIError):
"""Raise when an addon is not running."""
class AppNotRunningError(AppsError, APIError):
"""Raise when an app is not running."""
error_key = "addon_not_running_error"
message_template = "Add-on {addon} is not running"
message_template = "App {addon} is not running"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
self.extra_fields = {"addon": app}
super().__init__(None, logger)
class AddonPortConflict(AddonsError, APIError):
"""Raise if addon cannot start due to a port conflict."""
class AppPortConflict(AppsError, APIError):
"""Raise if app cannot start due to a port conflict."""
error_key = "addon_port_conflict"
message_template = "Cannot start addon {name} because port {port} is already in use"
message_template = "Cannot start app {name} because port {port} is already in use"
def __init__(
self, logger: Callable[..., None] | None = None, *, name: str, port: int
@@ -421,15 +413,15 @@ class AddonPortConflict(AddonsError, APIError):
super().__init__(None, logger)
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
class AppNotSupportedError(HassioNotSupportedError):
"""App doesn't support a function."""
class AddonNotSupportedArchitectureError(AddonNotSupportedError):
"""Addon does not support system due to architecture."""
class AppNotSupportedArchitectureError(AppNotSupportedError):
"""App does not support system due to architecture."""
error_key = "addon_not_supported_architecture_error"
message_template = "Add-on {slug} not supported on this platform, supported architectures: {architectures}"
message_template = "App {slug} not supported on this platform, supported architectures: {architectures}"
def __init__(
self,
@@ -443,11 +435,11 @@ class AddonNotSupportedArchitectureError(AddonNotSupportedError):
super().__init__(None, logger)
class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
"""Addon does not support system due to machine type."""
class AppNotSupportedMachineTypeError(AppNotSupportedError):
"""App does not support system due to machine type."""
error_key = "addon_not_supported_machine_type_error"
message_template = "Add-on {slug} not supported on this machine, supported machine types: {machine_types}"
message_template = "App {slug} not supported on this machine, supported machine types: {machine_types}"
def __init__(
self,
@@ -461,11 +453,11 @@ class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
super().__init__(None, logger)
class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
"""Addon does not support system due to Home Assistant version."""
class AppNotSupportedHomeAssistantVersionError(AppNotSupportedError):
"""App does not support system due to Home Assistant version."""
error_key = "addon_not_supported_home_assistant_version_error"
message_template = "Add-on {slug} not supported on this system, requires Home Assistant version {version} or greater"
message_template = "App {slug} not supported on this system, requires Home Assistant version {version} or greater"
def __init__(
self,
@@ -479,44 +471,40 @@ class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
super().__init__(None, logger)
class AddonNotSupportedWriteStdinError(AddonNotSupportedError, APIError):
"""Addon does not support writing to stdin."""
class AppNotSupportedWriteStdinError(AppNotSupportedError, APIError):
"""App does not support writing to stdin."""
error_key = "addon_not_supported_write_stdin_error"
message_template = "Add-on {addon} does not support writing to stdin"
message_template = "App {addon} does not support writing to stdin"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
self.extra_fields = {"addon": app}
super().__init__(None, logger)
class AddonBuildDockerfileMissingError(AddonNotSupportedError, APIError):
"""Raise when addon build invalid because dockerfile is missing."""
class AppBuildDockerfileMissingError(AppNotSupportedError, APIError):
"""Raise when app build invalid because dockerfile is missing."""
error_key = "addon_build_dockerfile_missing_error"
message_template = (
"Cannot build addon '{addon}' because dockerfile is missing. A repair "
"Cannot build app '{addon}' because dockerfile is missing. A repair "
"using '{repair_command}' will fix this if the cause is data "
"corruption. Otherwise please report this to the addon developer."
"corruption. Otherwise please report this to the app developer."
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "repair_command": "ha supervisor repair"}
self.extra_fields = {"addon": app, "repair_command": "ha supervisor repair"}
super().__init__(None, logger)
class AddonBuildArchitectureNotSupportedError(AddonNotSupportedError, APIError):
"""Raise when addon cannot be built on system because it doesn't support its architecture."""
class AppBuildArchitectureNotSupportedError(AppNotSupportedError, APIError):
"""Raise when app cannot be built on system because it doesn't support its architecture."""
error_key = "addon_build_architecture_not_supported_error"
message_template = (
"Cannot build addon '{addon}' because its supported architectures "
"Cannot build app '{addon}' because its supported architectures "
"({addon_arches}) do not match the system supported architectures ({system_arches})"
)
@@ -524,50 +512,46 @@ class AddonBuildArchitectureNotSupportedError(AddonNotSupportedError, APIError):
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
addon_arch_list: list[str],
app: str,
app_arch_list: list[str],
system_arch_list: list[str],
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"addon_arches": ", ".join(addon_arch_list),
"addon": app,
"addon_arches": ", ".join(app_arch_list),
"system_arches": ", ".join(system_arch_list),
}
super().__init__(None, logger)
class AddonUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for an addon."""
class AppUnknownError(AppsError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for an app."""
error_key = "addon_unknown_error"
message_template = "An unknown error occurred with addon {addon}"
message_template = "An unknown error occurred with app {addon}"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
self.extra_fields = {"addon": app}
super().__init__(logger)
class AddonBuildFailedUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when the build failed for an addon due to an unknown error."""
class AppBuildFailedUnknownError(AppsError, APIUnknownSupervisorError):
"""Raise when the build failed for an app due to an unknown error."""
error_key = "addon_build_failed_unknown_error"
message_template = (
"An unknown error occurred while trying to build the image for addon {addon}"
"An unknown error occurred while trying to build the image for app {addon}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
self.extra_fields = {"addon": app}
super().__init__(logger)
class AddonsJobError(AddonsError, JobException):
class AppsJobError(AppsError, JobException):
"""Raise on job errors."""
@@ -620,18 +604,6 @@ class AuthListUsersError(AuthError, APIUnknownSupervisorError):
message_template = "Can't request listing users on Home Assistant"
class AuthListUsersNoneResponseError(AuthError, APIInternalServerError):
"""Auth error if listing users returned invalid None response."""
error_key = "auth_list_users_none_response_error"
message_template = "Home Assistant returned invalid response of `{none}` instead of a list of users. Check Home Assistant logs for details (check with `{logs_command}`)"
extra_fields = {"none": "None", "logs_command": "ha core logs"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
@@ -871,10 +843,6 @@ class DockerAPIError(DockerError):
"""Docker API error."""
class DockerRequestError(DockerError):
"""Dockerd OS issues."""
class DockerTrustError(DockerError):
"""Raise if images are not trusted."""
@@ -910,8 +878,36 @@ class DockerContainerPortConflict(DockerError, APIError):
super().__init__(None, logger)
class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise for docker hub rate limit exceeded error."""
class DockerRegistryAuthError(DockerError, APIError):
"""Raise when Docker registry authentication fails."""
error_key = "docker_registry_auth_error"
message_template = (
"Docker registry authentication failed for {registry}. "
"Check your registry credentials"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, registry: str
) -> None:
"""Raise & log."""
self.extra_fields = {"registry": registry}
super().__init__(None, logger=logger)
class DockerRegistryRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise when a container registry rate limits requests."""
error_key = "container_registry_rate_limit_exceeded"
message_template = "Container registry rate limit exceeded"
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerRegistryRateLimitExceeded):
"""Raise for Docker Hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
message_template = (
@@ -922,9 +918,15 @@ class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(None, logger=logger)
class GithubContainerRegistryRateLimitExceeded(DockerRegistryRateLimitExceeded):
"""Raise for GitHub Container Registry rate limit exceeded error."""
error_key = "ghcr_rate_limit_exceeded"
message_template = (
"GitHub Container Registry rate limited the request. "
"This is typically transient; the update will be retried."
)
class DockerJobError(DockerError, JobException):
@@ -976,6 +978,44 @@ class ResolutionFixupJobError(ResolutionFixupError, JobException):
"""Raise on job error."""
class ResolutionCheckNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if check does not exist."""
error_key = "resolution_check_not_found_error"
message_template = "Check '{check}' does not exist"
def __init__(
self, logger: Callable[..., None] | None = None, *, check: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"check": check}
super().__init__(None, logger)
class ResolutionIssueNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if issue does not exist."""
error_key = "resolution_issue_not_found_error"
message_template = "Issue {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
class ResolutionSuggestionNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if suggestion does not exist."""
error_key = "resolution_suggestion_not_found_error"
message_template = "Suggestion {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
# Store
@@ -995,22 +1035,20 @@ class StoreNotFound(StoreError):
"""Raise if slug is not known."""
class StoreAddonNotFoundError(StoreError, APINotFound):
"""Raise if a requested addon is not in the store."""
class StoreAppNotFoundError(StoreError, APINotFound):
"""Raise if a requested app is not in the store."""
error_key = "store_addon_not_found_error"
message_template = "Addon {addon} does not exist in the store"
message_template = "App {addon} does not exist in the store"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
def __init__(self, logger: Callable[..., None] | None = None, *, app: str) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
self.extra_fields = {"addon": app}
super().__init__(None, logger)
class StoreRepositoryLocalCannotReset(StoreError, APIError):
"""Raise if user requests a reset on the local addon repository."""
"""Raise if user requests a reset on the local app repository."""
error_key = "store_repository_local_cannot_reset"
message_template = "Can't reset repository {local_repo} as it is not git based!"
@@ -1025,15 +1063,15 @@ class StoreJobError(StoreError, JobException):
"""Raise on job error with git."""
class StoreInvalidAddonRepo(StoreError):
"""Raise on invalid addon repo."""
class StoreInvalidAppRepo(StoreError):
"""Raise on invalid app repo."""
class StoreRepositoryUnknownError(StoreError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for a store repository."""
error_key = "store_repository_unknown_error"
message_template = "An unknown error occurred with addon repository {repo}"
message_template = "An unknown error occurred with app repository {repo}"
def __init__(self, logger: Callable[..., None] | None = None, *, repo: str) -> None:
"""Initialize exception."""
@@ -1080,42 +1118,46 @@ class BackupFileExistError(BackupError):
"""Raise if the backup file already exists."""
class AddonBackupMetadataInvalidError(BackupError, APIError):
"""Raise if invalid metadata file provided for addon in backup."""
class BackupFatalIOError(BackupError):
"""Raise on write-side I/O errors that leave the backup tar corrupt."""
class AppBackupMetadataInvalidError(BackupError, APIError):
"""Raise if invalid metadata file provided for app in backup."""
error_key = "addon_backup_metadata_invalid_error"
message_template = (
"Metadata file for add-on {addon} in backup is invalid: {validation_error}"
"Metadata file for app {addon} in backup is invalid: {validation_error}"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
app: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
self.extra_fields = {"addon": app, "validation_error": validation_error}
super().__init__(None, logger)
class AddonPrePostBackupCommandReturnedError(BackupError, APIError):
"""Raise when addon's pre/post backup command returns an error."""
class AppPrePostBackupCommandReturnedError(BackupError, APIError):
"""Raise when app's pre/post backup command returns an error."""
error_key = "addon_pre_post_backup_command_returned_error"
message_template = (
"Pre-/Post backup command for add-on {addon} returned error code: "
"{exit_code}. Please report this to the addon developer. Enable debug "
"Pre-/Post backup command for app {addon} returned error code: "
"{exit_code}. Please report this to the app developer. Enable debug "
"logging to capture complete command output using {debug_logging_command}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, exit_code: int
self, logger: Callable[..., None] | None = None, *, app: str, exit_code: int
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"addon": app,
"exit_code": exit_code,
"debug_logging_command": "ha supervisor options --logging debug",
}


@@ -13,7 +13,6 @@ from ..exceptions import (
DBusObjectError,
HardwareNotFound,
)
from ..resolution.const import UnhealthyReason
from .const import UdevSubsystem
from .data import Device
@@ -114,10 +113,8 @@ class HwDisk(CoreSysAttributes):
_LOGGER.warning("File not found: %s", child.as_posix())
continue
except OSError as err:
self.sys_resolution.check_oserror(err)
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
break
continue


@@ -13,13 +13,20 @@ from aiohttp import hdrs
from awesomeversion import AwesomeVersion
from multidict import MultiMapping
from ..const import SOCKET_CORE, FeatureFlag
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import ENV_CORE_API_SOCKET, ContainerState
from ..docker.monitor import DockerContainerStateEvent
from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError
from ..utils import version_is_new_enough
from .const import LANDINGPAGE
from .websocket import WSClient
_LOGGER: logging.Logger = logging.getLogger(__name__)
CORE_UNIX_SOCKET_MIN_VERSION: AwesomeVersion = AwesomeVersion(
"2026.4.0.dev202603250907"
)
GET_CORE_STATE_MIN_VERSION: AwesomeVersion = AwesomeVersion("2023.8.0.dev20230720")
@@ -39,11 +46,102 @@ class HomeAssistantAPI(CoreSysAttributes):
self.coresys: CoreSys = coresys
# We don't persist access tokens. Instead we fetch new ones when needed
self.access_token: str | None = None
self._access_token: str | None = None
self._access_token_expires: datetime | None = None
self._token_lock: asyncio.Lock = asyncio.Lock()
self._unix_session: aiohttp.ClientSession | None = None
self._core_connected: bool = False
async def ensure_access_token(self) -> None:
@property
def supports_unix_socket(self) -> bool:
"""Return True if the installed Core version supports Unix socket communication.
Used to decide whether to configure the env var when starting Core.
"""
return (
self.sys_config.feature_flags.get(FeatureFlag.UNIX_SOCKET_CORE_API, False)
and self.sys_homeassistant.version is not None
and self.sys_homeassistant.version != LANDINGPAGE
and version_is_new_enough(
self.sys_homeassistant.version, CORE_UNIX_SOCKET_MIN_VERSION
)
)
@property
def use_unix_socket(self) -> bool:
"""Return True if the running Core container is configured for Unix socket.
Checks both version support and that the container was actually started
with the SUPERVISOR_CORE_API_SOCKET env var. This prevents failures
during Supervisor upgrades where Core is still running with a container
started by the old Supervisor.
Requires container metadata to be available (via attach() or run()).
Callers should ensure the container is running before using this.
"""
if not self.supports_unix_socket:
return False
instance = self.sys_homeassistant.core.instance
if not instance.attached:
raise HomeAssistantAPIError(
"Cannot determine Core connection mode: container metadata not available"
)
return any(
env.startswith(f"{ENV_CORE_API_SOCKET}=")
for env in instance.meta_config.get("Env", [])
)
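`use_unix_socket` above inspects the running container's `Env` list for the Core API socket variable (named `SUPERVISOR_CORE_API_SOCKET` per the docstring; the constant's exact value is assumed here). The check itself reduces to a prefix match:

```python
ENV_CORE_API_SOCKET = "SUPERVISOR_CORE_API_SOCKET"  # assumed value of the constant


def has_core_socket_env(env_list: list[str]) -> bool:
    """True if the container was started with the Core API socket env var."""
    prefix = f"{ENV_CORE_API_SOCKET}="
    return any(env.startswith(prefix) for env in env_list)
```

Docker reports `Env` as `NAME=value` strings, which is why the sketch matches on the `NAME=` prefix rather than exact equality.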
@property
def session(self) -> aiohttp.ClientSession:
"""Return session for Core communication.
Uses a Unix socket session when the installed Core version supports it,
otherwise falls back to the default TCP websession. If the socket does
not exist yet (e.g. during Core startup), requests will fail with a
connection error handled by the caller.
"""
if not self.use_unix_socket:
return self.sys_websession
if self._unix_session is None or self._unix_session.closed:
self._unix_session = aiohttp.ClientSession(
connector=aiohttp.UnixConnector(path=str(SOCKET_CORE))
)
return self._unix_session
@property
def api_url(self) -> str:
"""Return API base url for internal Supervisor to Core communication."""
if self.use_unix_socket:
return "http://localhost"
return self.sys_homeassistant.api_url
@property
def ws_url(self) -> str:
"""Return WebSocket url for internal Supervisor to Core communication."""
if self.use_unix_socket:
return "ws://localhost/api/websocket"
return self.sys_homeassistant.ws_url
async def container_state_changed(self, event: DockerContainerStateEvent) -> None:
"""Process Core container state changes."""
if event.name != self.sys_homeassistant.core.instance.name:
return
if event.state not in (ContainerState.STOPPED, ContainerState.FAILED):
return
self._core_connected = False
if self._unix_session and not self._unix_session.closed:
await self._unix_session.close()
self._unix_session = None
async def close(self) -> None:
"""Close the Unix socket session."""
if self._unix_session and not self._unix_session.closed:
await self._unix_session.close()
self._unix_session = None
async def _ensure_access_token(self) -> None:
"""Ensure there is a valid access token.
Raises:
@@ -55,7 +153,7 @@ class HomeAssistantAPI(CoreSysAttributes):
# Fast path check without lock (avoid unnecessary locking
# for the majority of calls).
if (
self.access_token
self._access_token
and self._access_token_expires
and self._access_token_expires > datetime.now(tz=UTC)
):
@@ -64,7 +162,7 @@ class HomeAssistantAPI(CoreSysAttributes):
async with self._token_lock:
# Double-check after acquiring lock (avoid race condition)
if (
self.access_token
self._access_token
and self._access_token_expires
and self._access_token_expires > datetime.now(tz=UTC)
):
@@ -86,11 +184,43 @@ class HomeAssistantAPI(CoreSysAttributes):
_LOGGER.info("Updated Home Assistant API token")
tokens = await resp.json()
self.access_token = tokens["access_token"]
self._access_token = tokens["access_token"]
self._access_token_expires = datetime.now(tz=UTC) + timedelta(
seconds=tokens["expires_in"]
)
async def connect_websocket(self) -> WSClient:
"""Connect a WebSocket to Core, handling auth as appropriate.
For Unix socket connections, no authentication is needed.
For TCP connections, handles token management with one retry
on auth failure.
Raises:
HomeAssistantAPIError: On connection or auth failure.
"""
if not await self.sys_homeassistant.core.instance.is_running():
raise HomeAssistantAPIError("Core container is not running", _LOGGER.debug)
if self.use_unix_socket:
return await WSClient.connect(self.session, self.ws_url)
for attempt in (1, 2):
try:
await self._ensure_access_token()
assert self._access_token
return await WSClient.connect_with_auth(
self.session, self.ws_url, self._access_token
)
except HomeAssistantAuthError:
self._access_token = None
if attempt == 2:
raise
# Unreachable, but satisfies type checker
raise RuntimeError("Unreachable")
@asynccontextmanager
async def make_request(
self,
@@ -103,15 +233,16 @@ class HomeAssistantAPI(CoreSysAttributes):
params: MultiMapping[str] | None = None,
headers: dict[str, str] | None = None,
) -> AsyncIterator[aiohttp.ClientResponse]:
"""Async context manager to make authenticated requests to Home Assistant API.
"""Async context manager to make requests to Home Assistant Core API.
This context manager handles authentication token management automatically,
including token refresh on 401 responses. It yields the HTTP response
for the caller to handle.
This context manager handles transport and authentication automatically.
For Unix socket connections, requests are made directly without auth.
For TCP connections, it manages access tokens and retries once on 401.
It yields the HTTP response for the caller to handle.
Error Handling:
- HTTP error status codes (4xx, 5xx) are preserved in the response
- Authentication is handled transparently with one retry on 401
- Authentication is handled transparently (TCP only)
- Network/connection failures raise HomeAssistantAPIError
- No logging is performed - callers should handle logging as needed
@@ -133,19 +264,22 @@ class HomeAssistantAPI(CoreSysAttributes):
network errors, timeouts, or connection failures
"""
url = f"{self.sys_homeassistant.api_url}/{path}"
if not await self.sys_homeassistant.core.instance.is_running():
raise HomeAssistantAPIError("Core container is not running", _LOGGER.debug)
url = f"{self.api_url}/{path}"
headers = headers or {}
client_timeout = aiohttp.ClientTimeout(total=timeout)
# Passthrough content type
if content_type is not None:
headers[hdrs.CONTENT_TYPE] = content_type
for _ in (1, 2):
try:
await self.ensure_access_token()
headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}"
async with self.sys_websession.request(
if not self.use_unix_socket:
await self._ensure_access_token()
headers[hdrs.AUTHORIZATION] = f"Bearer {self._access_token}"
async with self.session.request(
method,
url,
data=data,
@@ -155,9 +289,8 @@ class HomeAssistantAPI(CoreSysAttributes):
params=params,
ssl=False,
) as resp:
# Access token expired
if resp.status == 401:
self.access_token = None
if resp.status == 401 and not self.use_unix_socket:
self._access_token = None
continue
yield resp
return
@@ -184,7 +317,10 @@ class HomeAssistantAPI(CoreSysAttributes):
async def get_core_state(self) -> dict[str, Any]:
"""Return Home Assistant core state."""
return await self._get_json("api/core/state")
state = await self._get_json("api/core/state")
if state is None or not isinstance(state, dict):
raise HomeAssistantAPIError("No state received from Home Assistant API")
return state
async def get_api_state(self) -> APIState | None:
"""Return state of Home Assistant Core or None."""
@@ -206,14 +342,22 @@ class HomeAssistantAPI(CoreSysAttributes):
data = await self.get_core_state()
else:
data = await self.get_config()
# Older versions of home assistant does not expose the state
if data:
state = data.get("state", "RUNNING")
# Recorder state was added in HA Core 2024.8
recorder_state = data.get("recorder_state", {})
migrating = recorder_state.get("migration_in_progress", False)
live_migration = recorder_state.get("migration_is_live", False)
return APIState(state, migrating and not live_migration)
if not self._core_connected:
self._core_connected = True
transport = (
f"Unix socket {SOCKET_CORE}"
if self.use_unix_socket
else f"TCP {self.sys_homeassistant.api_url}"
)
_LOGGER.info("Connected to Core via %s", transport)
state = data.get("state", "RUNNING")
# Recorder state was added in HA Core 2024.8
recorder_state = data.get("recorder_state", {})
migrating = recorder_state.get("migration_in_progress", False)
live_migration = recorder_state.get("migration_is_live", False)
return APIState(state, migrating and not live_migration)
except HomeAssistantAPIError as err:
_LOGGER.debug("Can't connect to Home Assistant API: %s", err)
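The state/migration parsing above can be condensed into a standalone sketch (the function name is hypothetical; field names follow the diff): a recorder migration only counts as blocking when it is *not* a live migration, and `state` defaults to `"RUNNING"` for older Core versions that do not expose it.

```python
def parse_api_state(data: dict) -> tuple[str, bool]:
    """Derive (state, blocking_migration) from Core's config/state payload."""
    state = data.get("state", "RUNNING")
    # Recorder state was added in HA Core 2024.8
    recorder_state = data.get("recorder_state", {})
    migrating = recorder_state.get("migration_in_progress", False)
    live = recorder_state.get("migration_is_live", False)
    return state, migrating and not live
```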


@@ -321,8 +321,6 @@ class HomeAssistantCore(JobGroup):
# Successful - last step
await self.sys_homeassistant.save_data()
with suppress(DockerError):
await self.instance.cleanup(old_image=old_image)
# Update Home Assistant
with suppress(HomeAssistantError):
@@ -332,9 +330,8 @@ class HomeAssistantCore(JobGroup):
try:
data = await self.sys_homeassistant.api.get_config()
except HomeAssistantError:
# The API stoped responding between the up checks an now
# The API stopped responding between the update and now
self._error_state = True
return
# Verify that the frontend is loaded
if "frontend" not in data.get("components", []):
@@ -347,6 +344,9 @@ class HomeAssistantCore(JobGroup):
)
self._error_state = True
else:
# Health checks passed, clean up old image
with suppress(DockerError):
await self.instance.cleanup(old_image=old_image)
return
# Update going wrong, revert it
@@ -506,7 +506,7 @@ class HomeAssistantCore(JobGroup):
raise HomeAssistantError("Fatal error on config check!", _LOGGER.error)
# Convert output
log = remove_colors("\n".join(result.log))
log = remove_colors("".join(result.log))
_LOGGER.debug("Result config check: %s", log.strip())
# Parse output


@@ -1,8 +1,6 @@
"""Home Assistant control object."""
import asyncio
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path, PurePath
@@ -13,7 +11,7 @@ from typing import Any
from uuid import UUID
from awesomeversion import AwesomeVersion, AwesomeVersionException
from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
from securetar import AddFileError, SecureTarFile, atomic_contents_add
import voluptuous as vol
from voluptuous.humanize import humanize_error
@@ -35,11 +33,11 @@ from ..const import (
ATTR_WATCHDOG,
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
HomeAssistantUser,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
BackupInvalidError,
ConfigurationFileError,
HomeAssistantBackupError,
HomeAssistantError,
@@ -47,9 +45,7 @@ from ..exceptions import (
)
from ..hardware.const import PolicyGroup
from ..hardware.data import Device
from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils import remove_folder, remove_folder_with_excludes
from ..utils.common import FileConfiguration
from ..utils.json import read_json_file, write_json_file
@@ -322,6 +318,10 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
)
# Register for events
self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
self._api.container_state_changed,
)
self.sys_bus.register_event(BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events)
self.sys_bus.register_event(
BusEvent.HARDWARE_REMOVE_DEVICE, self._hardware_events
@@ -342,10 +342,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
try:
await self.sys_run_in_executor(write_pulse_config)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Home Assistant can't write pulse/client.config: %s", err)
else:
_LOGGER.info("Update pulse/client.config: %s", self.path_pulse)
@@ -495,11 +492,16 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
# extract backup
try:
with tar_file as backup:
# The tar filter rejects path traversal and absolute names,
# aborting restore of potentially crafted backups.
backup.extractall(
path=temp_path,
members=secure_path(backup),
filter="fully_trusted",
filter="tar",
)
except tarfile.FilterError as err:
raise BackupInvalidError(
f"Invalid tarfile {tar_file}: {err}", _LOGGER.error
) from err
except tarfile.TarError as err:
raise HomeAssistantError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
@@ -570,21 +572,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
if attr in data:
self._data[attr] = data[attr]
@Job(
name="home_assistant_get_users",
throttle_period=timedelta(minutes=5),
internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
)
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[IngressSessionDataUserDict] | None
) = await self.sys_homeassistant.websocket.async_send_command(
async def list_users(self) -> list[HomeAssistantUser]:
"""Fetch list of all users from Home Assistant Core via WebSocket.
Raises HomeAssistantWSError on WebSocket connection/communication failure.
"""
raw: list[dict[str, Any]] = await self.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return []
return [HomeAssistantUser.from_dict(data) for data in raw]


@@ -1,4 +1,4 @@
"""Handle Home Assistant secrets to add-ons."""
"""Handle Home Assistant secrets to apps."""
from datetime import timedelta
import logging


@@ -3,9 +3,8 @@
from __future__ import annotations
import asyncio
from contextlib import suppress
import logging
from typing import Any, TypeVar, cast
from typing import Any, TypeVar
import aiohttp
from aiohttp.http_websocket import WSMsgType
@@ -34,6 +33,11 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
T = TypeVar("T")
# Maximum message size for WebSocket messages from Core. Matches the cap used
# by the ingress proxy; Supervisor's own control channel never gets close to
# this but shares the setting for simplicity. See issue #4392.
MAX_MESSAGE_SIZE_FROM_CORE = 64 * 1024 * 1024
class WSClient:
"""Home Assistant Websocket client."""
@@ -45,14 +49,14 @@ class WSClient:
):
"""Initialise the WS client."""
self.ha_version = ha_version
self._client = client
self.client = client
self._message_id: int = 0
self._futures: dict[int, asyncio.Future[T]] = {} # type: ignore
@property
def connected(self) -> bool:
"""Return if we're currently connected."""
return self._client is not None and not self._client.closed
return self.client is not None and not self.client.closed
async def close(self) -> None:
"""Close down the client."""
@@ -62,17 +66,17 @@ class WSClient:
HomeAssistantWSConnectionError("Connection was closed")
)
if not self._client.closed:
await self._client.close()
if not self.client.closed:
await self.client.close()
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a websocket message, and return the response."""
self._message_id += 1
message["id"] = self._message_id
self._futures[message["id"]] = asyncio.get_running_loop().create_future()
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message, dumps=json_dumps)
await self.client.send_json(message, dumps=json_dumps)
except ConnectionError as err:
raise HomeAssistantWSConnectionError(str(err)) from err
@@ -97,7 +101,7 @@ class WSClient:
async def _receive_json(self) -> None:
"""Receive json."""
msg = await self._client.receive()
msg = await self.client.receive()
_LOGGER.debug("Received: %s", msg)
if msg.type == WSMsgType.CLOSE:
@@ -139,27 +143,101 @@ class WSClient:
)
@classmethod
async def connect_with_auth(
cls, session: aiohttp.ClientSession, url: str, token: str
) -> WSClient:
"""Create an authenticated websocket client."""
async def _ws_connect(
cls,
session: aiohttp.ClientSession,
url: str,
) -> aiohttp.ClientWebSocketResponse:
"""Open a raw WebSocket connection to Core."""
try:
client = await session.ws_connect(url, ssl=False)
return await session.ws_connect(
url, ssl=False, max_msg_size=MAX_MESSAGE_SIZE_FROM_CORE
)
except aiohttp.client_exceptions.ClientConnectorError:
raise HomeAssistantWSError("Can't connect") from None
raise HomeAssistantWSConnectionError("Can't connect") from None
hello_message = await client.receive_json()
@classmethod
async def connect(
cls,
session: aiohttp.ClientSession,
url: str,
) -> WSClient:
"""Connect via Unix socket (no auth exchange).
await client.send_json(
{ATTR_TYPE: WSType.AUTH, ATTR_ACCESS_TOKEN: token}, dumps=json_dumps
)
Core authenticates the peer by the socket connection itself
and sends auth_ok immediately.
"""
client = await cls._ws_connect(session, url)
try:
first_message = await client.receive_json()
auth_ok_message = await client.receive_json()
if first_message[ATTR_TYPE] != "auth_ok":
raise HomeAssistantAPIError(
f"Expected auth_ok on Unix socket, got {first_message[ATTR_TYPE]}"
)
if auth_ok_message[ATTR_TYPE] != "auth_ok":
raise HomeAssistantAPIError("AUTH NOT OK")
return cls(AwesomeVersion(first_message["ha_version"]), client)
except HomeAssistantAPIError:
await client.close()
raise
except (
KeyError,
ValueError,
TypeError,
aiohttp.ClientError,
TimeoutError,
) as err:
await client.close()
raise HomeAssistantWSConnectionError(
f"Unexpected error during WebSocket handshake: {err}"
) from err
return cls(AwesomeVersion(hello_message["ha_version"]), client)
@classmethod
async def connect_with_auth(
cls,
session: aiohttp.ClientSession,
url: str,
token: str,
) -> WSClient:
"""Connect via TCP with token authentication.
Expects auth_required from Core, sends the token, then expects auth_ok.
The auth_required message also carries ha_version.
"""
client = await cls._ws_connect(session, url)
try:
# auth_required message also carries ha_version
first_message = await client.receive_json()
if first_message[ATTR_TYPE] != "auth_required":
raise HomeAssistantAPIError(
f"Expected auth_required, got {first_message[ATTR_TYPE]}"
)
await client.send_json(
{ATTR_TYPE: WSType.AUTH, ATTR_ACCESS_TOKEN: token}, dumps=json_dumps
)
auth_ok_message = await client.receive_json()
if auth_ok_message[ATTR_TYPE] != "auth_ok":
raise HomeAssistantAPIError("AUTH NOT OK")
return cls(AwesomeVersion(first_message["ha_version"]), client)
except HomeAssistantAPIError:
await client.close()
raise
except (
KeyError,
ValueError,
TypeError,
aiohttp.ClientError,
TimeoutError,
) as err:
await client.close()
raise HomeAssistantWSConnectionError(
f"Unexpected error during WebSocket handshake: {err}"
) from err
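The TCP handshake that `connect_with_auth` validates can be sketched against a scripted message stream (the driver function and its shape are hypothetical; message types follow the diff): Core sends `auth_required` carrying `ha_version`, the client answers with the token, and Core must reply `auth_ok`.

```python
def validate_handshake(messages: list[dict], token: str) -> tuple[str, list[dict]]:
    """Drive the auth handshake against scripted inbound messages."""
    inbox = iter(messages)
    sent: list[dict] = []
    first = next(inbox)
    if first["type"] != "auth_required":
        raise ValueError(f"Expected auth_required, got {first['type']}")
    # Answer with the access token
    sent.append({"type": "auth", "access_token": token})
    ok = next(inbox)
    if ok["type"] != "auth_ok":
        raise ValueError("AUTH NOT OK")
    # auth_required also carries ha_version, as noted in the diff
    return first["ha_version"], sent
```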
class HomeAssistantWebSocket(CoreSysAttributes):
@@ -168,7 +246,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
def __init__(self, coresys: CoreSys):
"""Initialize Home Assistant object."""
self.coresys: CoreSys = coresys
self._client: WSClient | None = None
self.client: WSClient | None = None
self._lock: asyncio.Lock = asyncio.Lock()
self._queue: list[dict[str, Any]] = []
@@ -183,16 +261,10 @@ class HomeAssistantWebSocket(CoreSysAttributes):
async def _get_ws_client(self) -> WSClient:
"""Return a websocket client."""
async with self._lock:
if self._client is not None and self._client.connected:
return self._client
if self.client is not None and self.client.connected:
return self.client
with suppress(asyncio.TimeoutError, aiohttp.ClientError):
await self.sys_homeassistant.api.ensure_access_token()
client = await WSClient.connect_with_auth(
self.sys_websession,
self.sys_homeassistant.ws_url,
cast(str, self.sys_homeassistant.api.access_token),
)
client = await self.sys_homeassistant.api.connect_websocket()
self.sys_create_task(client.start_listener())
return client
@@ -200,23 +272,24 @@ class HomeAssistantWebSocket(CoreSysAttributes):
async def _ensure_connected(self) -> None:
"""Ensure WebSocket connection is ready.
Raises HomeAssistantWSError if unable to connect.
Raises HomeAssistantWSConnectionError if unable to connect.
Raises HomeAssistantAuthError if authentication with Core fails.
"""
if self.sys_core.state in CLOSING_STATES:
raise HomeAssistantWSError(
raise HomeAssistantWSConnectionError(
"WebSocket not available, system is shutting down"
)
connected = self._client and self._client.connected
connected = self.client and self.client.connected
# If we are already connected, we can avoid the check_api_state call
# since it makes a new socket connection and we already have one.
if not connected and not await self.sys_homeassistant.api.check_api_state():
raise HomeAssistantWSError(
raise HomeAssistantWSConnectionError(
"Can't connect to Home Assistant Core WebSocket, the API is not reachable"
)
if not self._client or not self._client.connected:
self._client = await self._get_ws_client()
if not self.client or not self.client.connected:
self.client = await self._get_ws_client()
async def load(self) -> None:
"""Set up queue processor after startup completes."""
@@ -237,34 +310,34 @@ class HomeAssistantWebSocket(CoreSysAttributes):
try:
await self._ensure_connected()
except HomeAssistantWSError as err:
_LOGGER.debug("Can't send WebSocket command: %s", err)
_LOGGER.warning("Can't send WebSocket command: %s", err)
return
# _ensure_connected guarantees self._client is set
assert self._client
# _ensure_connected guarantees self.client is set
assert self.client
try:
await self._client.async_send_command(message)
await self.client.async_send_command(message)
except HomeAssistantWSConnectionError as err:
_LOGGER.debug("Fire-and-forget WebSocket command failed: %s", err)
if self._client:
await self._client.close()
self._client = None
if self.client:
await self.client.close()
self.client = None
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a command and return the response.
Raises HomeAssistantWSError if unable to connect to Home Assistant Core.
Raises HomeAssistantWSError on WebSocket connection or communication failure.
"""
await self._ensure_connected()
# _ensure_connected guarantees self._client is set
assert self._client
# _ensure_connected guarantees self.client is set
assert self.client
try:
return await self._client.async_send_command(message)
return await self.client.async_send_command(message)
except HomeAssistantWSConnectionError:
if self._client:
await self._client.close()
self._client = None
if self.client:
await self.client.close()
self.client = None
raise
def send_command(self, message: dict[str, Any]) -> None:


@@ -3,7 +3,6 @@
from __future__ import annotations
from contextlib import suppress
import errno
import logging
from pathlib import Path
import shutil
@@ -12,7 +11,7 @@ from awesomeversion import AwesomeVersion
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, HostAppArmorError
from ..resolution.const import UnhealthyReason, UnsupportedReason
from ..resolution.const import UnsupportedReason
from ..utils.apparmor import validate_profile
from .const import HostFeature
@@ -89,10 +88,7 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(shutil.copyfile, profile_file, dest_profile)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't copy {profile_file}: {err}", _LOGGER.error
) from err
@@ -116,10 +112,7 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(profile_file.unlink)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't remove profile: {err}", _LOGGER.error
) from err
@@ -134,10 +127,7 @@ class AppArmorControl(CoreSysAttributes):
try:
shutil.copy(profile_file, backup_file)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HostAppArmorError(
f"Can't backup profile {profile_name}: {err}", _LOGGER.error
) from err


@@ -153,7 +153,7 @@ class Interface:
)
@staticmethod
def from_dbus_interface(inet: NetworkInterface) -> "Interface":
def from_dbus_interface(inet: NetworkInterface) -> Interface:
"""Coerce a dbus interface into normal Interface."""
if inet.settings and inet.settings.ipv4:
ipv4_setting = IpSetting(

supervisor/host/firewall.py (new file, 146 lines)

@@ -0,0 +1,146 @@
"""Firewall rules for the Supervisor network gateway."""
import asyncio
from contextlib import suppress
import logging
from dbus_fast import Variant
from ..const import DOCKER_IPV4_NETWORK_MASK, DOCKER_IPV6_NETWORK_MASK, DOCKER_NETWORK
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import StartUnitMode, UnitActiveState
from ..dbus.systemd import ExecStartEntry
from ..exceptions import DBusError
from ..resolution.const import UnhealthyReason
_LOGGER: logging.Logger = logging.getLogger(__name__)
FIREWALL_SERVICE = "supervisor-firewall-gateway.service"
FIREWALL_UNIT_TIMEOUT = 30
BIN_SH = "/bin/sh"
IPTABLES_CMD = "/usr/sbin/iptables"
IP6TABLES_CMD = "/usr/sbin/ip6tables"
TERMINAL_STATES = {UnitActiveState.INACTIVE, UnitActiveState.FAILED}
class FirewallManager(CoreSysAttributes):
"""Manage firewall rules to protect the Supervisor network gateway.
Adds iptables rules in the raw PREROUTING chain to drop traffic addressed
to the bridge gateway IP that does not originate from the bridge or
loopback interfaces.
"""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize firewall manager."""
self.coresys: CoreSys = coresys
@staticmethod
def _build_exec_start() -> list[ExecStartEntry]:
"""Build ExecStart entries for gateway firewall rules.
Each entry uses shell check-or-insert logic for idempotency.
We insert DROP first, then ACCEPT, using -I (insert at top).
The last inserted rule ends up first in the chain, so ACCEPT
for loopback ends up above the DROP for non-bridge interfaces.
"""
gateway_ipv4 = str(DOCKER_IPV4_NETWORK_MASK[1])
gateway_ipv6 = str(DOCKER_IPV6_NETWORK_MASK[1])
bridge = DOCKER_NETWORK
entries: list[ExecStartEntry] = []
for cmd, gateway in (
(IPTABLES_CMD, gateway_ipv4),
(IP6TABLES_CMD, gateway_ipv6),
):
# DROP packets to gateway from non-bridge, non-loopback interfaces
entries.append(
ExecStartEntry(
binary=BIN_SH,
argv=[
BIN_SH,
"-c",
f"{cmd} -t raw -C PREROUTING ! -i {bridge} -d {gateway}"
f" -j DROP 2>/dev/null"
f" || {cmd} -t raw -I PREROUTING ! -i {bridge} -d {gateway}"
f" -j DROP",
],
ignore_failure=False,
)
)
# ACCEPT loopback traffic to gateway (inserted last, ends up first)
entries.append(
ExecStartEntry(
binary=BIN_SH,
argv=[
BIN_SH,
"-c",
f"{cmd} -t raw -C PREROUTING -i lo -d {gateway}"
f" -j ACCEPT 2>/dev/null"
f" || {cmd} -t raw -I PREROUTING -i lo -d {gateway}"
f" -j ACCEPT",
],
ignore_failure=False,
)
)
return entries
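The check-or-insert shell idiom emitted above can be reconstructed as a small string builder (a hypothetical sketch, not the Supervisor's helper): `iptables -C` probes for the rule and only when that fails does `-I` insert it, so re-running the oneshot unit never duplicates rules.

```python
def drop_rule(cmd: str, bridge: str, gateway: str) -> str:
    """Build the idempotent check-or-insert DROP command for the gateway."""
    return (
        f"{cmd} -t raw -C PREROUTING ! -i {bridge} -d {gateway} -j DROP 2>/dev/null"
        f" || {cmd} -t raw -I PREROUTING ! -i {bridge} -d {gateway} -j DROP"
    )
```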
async def _apply_gateway_firewall_rules(self) -> bool:
"""Apply iptables rules to restrict access to the Docker gateway.
Returns True if the rules were successfully applied.
"""
if not self.sys_dbus.systemd.is_connected:
_LOGGER.error("Systemd not available, cannot apply gateway firewall rules")
return False
# Clean up any previous failed unit
with suppress(DBusError):
await self.sys_dbus.systemd.reset_failed_unit(FIREWALL_SERVICE)
properties: list[tuple[str, Variant]] = [
("Description", Variant("s", "Supervisor gateway firewall rules")),
("Type", Variant("s", "oneshot")),
("ExecStart", Variant("a(sasb)", self._build_exec_start())),
]
try:
await self.sys_dbus.systemd.start_transient_unit(
FIREWALL_SERVICE,
StartUnitMode.REPLACE,
properties,
)
except DBusError as err:
_LOGGER.error("Failed to apply gateway firewall rules: %s", err)
return False
# Wait for the oneshot unit to finish and verify it succeeded
try:
unit = await self.sys_dbus.systemd.get_unit(FIREWALL_SERVICE)
async with asyncio.timeout(FIREWALL_UNIT_TIMEOUT):
state = await unit.wait_for_active_state(TERMINAL_STATES)
except (DBusError, TimeoutError) as err:
_LOGGER.error(
"Failed waiting for gateway firewall unit to complete: %s", err
)
return False
if state == UnitActiveState.FAILED:
_LOGGER.error(
"Gateway firewall unit failed, iptables rules may not be applied"
)
return False
return True
async def apply_gateway_firewall_rules(self) -> None:
"""Apply gateway firewall rules, marking unsupported on failure."""
if await self._apply_gateway_firewall_rules():
_LOGGER.info("Gateway firewall rules applied")
else:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.DOCKER_GATEWAY_UNPROTECTED
)


@@ -1,7 +1,7 @@
"""Info control for host."""
import asyncio
from datetime import datetime, tzinfo
from datetime import UTC, datetime, tzinfo
import logging
from ..coresys import CoreSysAttributes
@@ -78,9 +78,9 @@ class InfoCenter(CoreSysAttributes):
return self.sys_dbus.timedate.timezone_tzinfo
@property
def dt_utc(self) -> datetime | None:
def dt_utc(self) -> datetime:
"""Return host UTC time."""
return self.sys_dbus.timedate.dt_utc
return datetime.now(UTC)
@property
def use_rtc(self) -> bool | None:


@@ -15,6 +15,7 @@ from ..hardware.data import Device
from .apparmor import AppArmorControl
from .const import HostFeature
from .control import SystemControl
from .firewall import FirewallManager
from .info import InfoCenter
from .logs import LogsControl
from .network import NetworkManager
@@ -33,6 +34,7 @@ class HostManager(CoreSysAttributes):
self._apparmor: AppArmorControl = AppArmorControl(coresys)
self._control: SystemControl = SystemControl(coresys)
self._firewall: FirewallManager = FirewallManager(coresys)
self._info: InfoCenter = InfoCenter(coresys)
self._services: ServiceManager = ServiceManager(coresys)
self._network: NetworkManager = NetworkManager(coresys)
@@ -54,6 +56,11 @@ class HostManager(CoreSysAttributes):
"""Return host control handler."""
return self._control
@property
def firewall(self) -> FirewallManager:
"""Return host firewall handler."""
return self._firewall
@property
def info(self) -> InfoCenter:
"""Return host info handler."""
@@ -168,6 +175,9 @@ class HostManager(CoreSysAttributes):
await self.network.load()
# Apply firewall rules to restrict access to the Docker gateway
await self.firewall.apply_gateway_firewall_rules()
# Register for events
self.sys_bus.register_event(BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events)
self.sys_bus.register_event(


@@ -37,7 +37,7 @@ class AudioApplication:
stream_type: StreamType
volume: float
mute: bool
addon: str
app: str
@dataclass(slots=True, frozen=True)


@@ -5,7 +5,7 @@ import logging
import random
import secrets
from .addons.addon import Addon
from .addons.addon import App
from .const import (
ATTR_PORTS,
ATTR_SESSION,
@@ -33,11 +33,11 @@ class Ingress(FileConfiguration, CoreSysAttributes):
self.coresys: CoreSys = coresys
self.tokens: dict[str, str] = {}
def get(self, token: str) -> Addon | None:
"""Return addon they have this ingress token."""
def get(self, token: str) -> App | None:
"""Return app they have this ingress token."""
if token not in self.tokens:
return None
return self.sys_addons.get_local_only(self.tokens[token])
return self.sys_apps.get_local_only(self.tokens[token])
def get_session_data(self, session_id: str) -> IngressSessionData | None:
"""Return complementary data of current session or None."""
@@ -61,14 +61,14 @@ class Ingress(FileConfiguration, CoreSysAttributes):
return self._data[ATTR_PORTS]
@property
def addons(self) -> list[Addon]:
"""Return list of ingress Add-ons."""
addons = []
for addon in self.sys_addons.installed:
if not addon.with_ingress:
def apps(self) -> list[App]:
"""Return list of ingress Apps."""
apps = []
for app in self.sys_apps.installed:
if not app.with_ingress:
continue
addons.append(addon)
return addons
apps.append(app)
return apps
async def load(self) -> None:
"""Update internal data."""
@@ -115,13 +115,13 @@ class Ingress(FileConfiguration, CoreSysAttributes):
self.sessions_data.update(sessions_data)
def _update_token_list(self) -> None:
"""Regenerate token <-> Add-on map."""
"""Regenerate token <-> App map."""
self.tokens.clear()
# Read all ingress tokens and build a map
for addon in self.addons:
if addon.ingress_token:
self.tokens[addon.ingress_token] = addon.slug
for app in self.apps:
if app.ingress_token:
self.tokens[app.ingress_token] = app.slug
def create_session(self, data: IngressSessionData | None = None) -> str:
"""Create new session."""
@@ -158,10 +158,10 @@ class Ingress(FileConfiguration, CoreSysAttributes):
return True
async def get_dynamic_port(self, addon_slug: str) -> int:
async def get_dynamic_port(self, app_slug: str) -> int:
"""Get/Create a dynamic port from range."""
if addon_slug in self.ports:
return self.ports[addon_slug]
if app_slug in self.ports:
return self.ports[app_slug]
port = None
while (
@@ -172,37 +172,32 @@ class Ingress(FileConfiguration, CoreSysAttributes):
port = random.randint(62000, 65500)
# Save port for next time
self.ports[addon_slug] = port
self.ports[app_slug] = port
await self.save_data()
return port
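The allocation strategy in `get_dynamic_port` can be sketched with an assumed "port already taken" predicate standing in for the availability check elided by the hunk boundary (the function signature here is hypothetical): draw from the 62000-65500 ingress range until a free port turns up, then cache it per app slug.

```python
import random


def allocate_port(ports: dict[str, int], slug: str, taken) -> int:
    """Get or create a dynamic port for an app; cache per slug."""
    if slug in ports:
        return ports[slug]
    port = None
    while port is None or taken(port):
        port = random.randint(62000, 65500)
    ports[slug] = port  # save for next time
    return port
```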
async def del_dynamic_port(self, addon_slug: str) -> None:
async def del_dynamic_port(self, app_slug: str) -> None:
"""Remove a previously assigned dynamic port."""
if addon_slug not in self.ports:
if app_slug not in self.ports:
return
del self.ports[addon_slug]
del self.ports[app_slug]
await self.save_data()
async def update_hass_panel(self, addon: Addon):
"""Return True if Home Assistant up and running."""
if not await self.sys_homeassistant.core.is_running():
_LOGGER.debug("Ignoring panel update on Core")
return
# Update UI
method = "post" if addon.ingress_panel else "delete"
async def update_hass_panel(self, app: App):
"""Update the ingress panel registration in Home Assistant."""
method = "post" if app.ingress_panel else "delete"
try:
async with self.sys_homeassistant.api.make_request(
method, f"api/hassio_push/panel/{addon.slug}"
method, f"api/hassio_push/panel/{app.slug}"
) as resp:
if resp.status in (200, 201):
_LOGGER.info("Update Ingress as panel for %s", addon.slug)
_LOGGER.info("Update Ingress as panel for %s", app.slug)
else:
_LOGGER.warning(
"Failed to update the Ingress panel for %s with %i",
addon.slug,
app.slug,
resp.status,
)
except HomeAssistantAPIError as err:
_LOGGER.error("Panel update request failed for %s: %s", addon.slug, err)
_LOGGER.error("Panel update request failed for %s: %s", app.slug, err)


@@ -98,9 +98,7 @@ class SupervisorJobError:
"""Representation of an error occurring during a supervisor job."""
type_: type[HassioError] = HassioError
message: str = (
"Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
)
message: str = "Unknown error, see Supervisor logs"
stage: str | None = None
error_key: str | None = None
extra_fields: dict[str, Any] | None = None


@@ -457,6 +457,11 @@ class Job(CoreSysAttributes):
if plugin.need_update
]
):
if not coresys.sys_updater.auto_update:
raise JobConditionException(
f"'{method_name}' blocked from execution, plugin(s) {', '.join(plugin.slug for plugin in out_of_date)} are not up to date and auto-update is disabled"
)
errors = await asyncio.gather(
*[plugin.update() for plugin in out_of_date], return_exceptions=True
)


@@ -1,6 +1,7 @@
"""Filter tools."""
import ipaddress
import logging
import os
import re
from typing import cast
@@ -11,7 +12,9 @@ from sentry_sdk.types import Event, Hint
from ..const import DOCKER_IPV4_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
from ..coresys import CoreSys
from ..exceptions import AddonConfigurationError
from ..exceptions import APITooManyRequests, AppConfigurationError
_LOGGER: logging.Logger = logging.getLogger(__name__)
RE_URL: re.Pattern = re.compile(r"(\w+:\/\/)(.*\.\w+)(.*)")
@@ -43,11 +46,21 @@ def sanitize_url(url: str) -> str:
def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
"""Filter event data before sending to sentry."""
# Ignore some exceptions
# Ignore some exceptions. Walk the __cause__ chain because rate-limit
# errors are often wrapped (e.g. DockerHubRateLimitExceeded wrapped in
# SupervisorUpdateError by supervisor.update()).
if "exc_info" in hint:
_, exc_value, _ = hint["exc_info"]
if isinstance(exc_value, (AddonConfigurationError)):
return None
err: BaseException | None = exc_value
while err is not None:
if isinstance(err, (AppConfigurationError, APITooManyRequests)):
_LOGGER.debug(
"Skipping Sentry event for %s: %s",
type(err).__name__,
exc_value,
)
return None
err = err.__cause__
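The `__cause__` walk above can be isolated into a small sketch (the names `RateLimited` and `should_skip` are hypothetical): an ignorable error is skipped even when it arrives wrapped in another exception via `raise ... from err`.

```python
class RateLimited(Exception):
    """Stand-in for a rate-limit error that Sentry should not see."""


def should_skip(exc: BaseException, ignorable: tuple[type, ...]) -> bool:
    """Walk the __cause__ chain looking for an ignorable exception type."""
    err: BaseException | None = exc
    while err is not None:
        if isinstance(err, ignorable):
            return True
        err = err.__cause__
    return False
```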
# Ignore issue if system is not supported or diagnostics is disabled
if not coresys.config.diagnostics or not coresys.core.supported or coresys.dev:
@@ -82,10 +95,10 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
)
return event
# List installed addons
installed_addons = [
{"slug": addon.slug, "repository": addon.repository, "name": addon.name}
for addon in coresys.addons.installed
# List installed apps
installed_apps = [
{"slug": app.slug, "repository": app.repository, "name": app.name}
for app in coresys.apps.installed
]
# Update information
@@ -93,7 +106,7 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
{
"supervisor": {
"channel": coresys.updater.channel,
"installed_addons": installed_addons,
"installed_addons": installed_apps,
},
"host": {
"arch": str(coresys.arch.default),
@@ -123,7 +136,7 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
attr.asdict(suggestion)
for suggestion in coresys.resolution.suggestions
],
"unhealthy": coresys.resolution.unhealthy,
"unhealthy": sorted(coresys.resolution.unhealthy),
},
"store": {
"repositories": coresys.store.repository_urls,

View File

@@ -5,12 +5,12 @@ from datetime import datetime, timedelta
import logging
from typing import cast
from ..addons.const import ADDON_UPDATE_CONDITIONS
from ..addons.const import APP_UPDATE_CONDITIONS
from ..backups.const import LOCATION_CLOUD_BACKUP, LOCATION_TYPE
from ..const import ATTR_TYPE, AddonState
from ..const import ATTR_TYPE, AppState
from ..coresys import CoreSysAttributes
from ..exceptions import (
AddonsError,
AppsError,
BackupFileNotFoundError,
HomeAssistantError,
HomeAssistantWSError,
@@ -31,18 +31,17 @@ HASS_WATCHDOG_REANIMATE_FAILURES = "HASS_WATCHDOG_REANIMATE_FAILURES"
HASS_WATCHDOG_MAX_API_ATTEMPTS = 2
HASS_WATCHDOG_MAX_REANIMATE_ATTEMPTS = 5
RUN_UPDATE_SUPERVISOR = 29100
RUN_UPDATE_ADDONS = 57600
RUN_UPDATE_CLI = 28100
RUN_UPDATE_DNS = 30100
RUN_UPDATE_AUDIO = 30200
RUN_UPDATE_MULTICAST = 30300
RUN_UPDATE_OBSERVER = 30400
RUN_UPDATE_CLI = 43200 # 12h, staggered +2min per plugin
RUN_UPDATE_DNS = 43320
RUN_UPDATE_AUDIO = 43440
RUN_UPDATE_MULTICAST = 43560
RUN_UPDATE_OBSERVER = 43680
RUN_RELOAD_ADDONS = 10800
RUN_RELOAD_BACKUPS = 72000
RUN_RELOAD_HOST = 7600
RUN_RELOAD_UPDATER = 27100
RUN_RELOAD_UPDATER = 86400 # 24h
RUN_RELOAD_INGRESS = 930
RUN_RELOAD_MOUNTS = 900
@@ -53,7 +52,10 @@ RUN_WATCHDOG_OBSERVER_APPLICATION = 180
RUN_CORE_BACKUP_CLEANUP = 86200
PLUGIN_AUTO_UPDATE_CONDITIONS = PLUGIN_UPDATE_CONDITIONS + [JobCondition.RUNNING]
PLUGIN_AUTO_UPDATE_CONDITIONS = PLUGIN_UPDATE_CONDITIONS + [
JobCondition.AUTO_UPDATE,
JobCondition.RUNNING,
]
OLD_BACKUP_THRESHOLD = timedelta(days=2)
@@ -69,8 +71,7 @@ class Tasks(CoreSysAttributes):
async def load(self):
"""Add Tasks to scheduler."""
# Update
self.sys_scheduler.register_task(self._update_addons, RUN_UPDATE_ADDONS)
self.sys_scheduler.register_task(self._update_supervisor, RUN_UPDATE_SUPERVISOR)
self.sys_scheduler.register_task(self._update_apps, RUN_UPDATE_ADDONS)
self.sys_scheduler.register_task(self._update_cli, RUN_UPDATE_CLI)
self.sys_scheduler.register_task(self._update_dns, RUN_UPDATE_DNS)
self.sys_scheduler.register_task(self._update_audio, RUN_UPDATE_AUDIO)
@@ -93,7 +94,7 @@ class Tasks(CoreSysAttributes):
self._watchdog_observer_application, RUN_WATCHDOG_OBSERVER_APPLICATION
)
self.sys_scheduler.register_task(
self._watchdog_addon_application, RUN_WATCHDOG_ADDON_APPLICATON
self._watchdog_app_application, RUN_WATCHDOG_ADDON_APPLICATON
)
# Cleanup
@@ -105,90 +106,60 @@ class Tasks(CoreSysAttributes):
@Job(
name="tasks_update_addons",
conditions=ADDON_UPDATE_CONDITIONS + [JobCondition.RUNNING],
conditions=APP_UPDATE_CONDITIONS + [JobCondition.RUNNING],
)
async def _update_addons(self):
"""Check if an update is available for an Add-on and update it."""
for addon in self.sys_addons.all:
if not addon.is_installed or not addon.auto_update:
async def _update_apps(self):
"""Check if an update is available for an App and update it."""
for app in self.sys_apps.all:
if not app.is_installed or not app.auto_update:
continue
# Evaluate available updates
if not addon.need_update:
if not app.need_update:
continue
if not addon.auto_update_available:
if not app.auto_update_available:
_LOGGER.debug(
"Not updating add-on %s from %s to %s as that would cross a known breaking version",
addon.slug,
addon.version,
addon.latest_version,
"Not updating app %s from %s to %s as that would cross a known breaking version",
app.slug,
app.version,
app.latest_version,
)
continue
# Delay auto-updates for a day in case of issues
if utcnow() < addon.latest_version_timestamp + timedelta(days=1):
if utcnow() < app.latest_version_timestamp + timedelta(days=1):
_LOGGER.debug(
"Not updating add-on %s from %s to %s as the latest version is less than a day old",
addon.slug,
addon.version,
addon.latest_version,
"Not updating app %s from %s to %s as the latest version is less than a day old",
app.slug,
app.version,
app.latest_version,
)
continue
if not addon.test_update_schema():
_LOGGER.warning(
"Add-on %s will be ignored, schema tests failed", addon.slug
)
if not app.test_update_schema():
_LOGGER.warning("App %s will be ignored, schema tests failed", app.slug)
continue
_LOGGER.info("Add-on auto update process %s", addon.slug)
# Call Home Assistant Core to update add-on to make sure that backups
_LOGGER.info("App auto update process %s", app.slug)
# Call Home Assistant Core to update app to make sure that backups
# get created through the Home Assistant Core API (categorized correctly).
# Ultimately auto updates should be handled by Home Assistant Core itself
# through an update entity feature.
message = {
ATTR_TYPE: WSType.HASSIO_UPDATE_ADDON,
"addon": addon.slug,
"addon": app.slug,
"backup": True,
}
_LOGGER.debug(
"Sending update add-on WebSocket command to Home Assistant Core: %s",
"Sending update app WebSocket command to Home Assistant Core: %s",
message,
)
try:
await self.sys_homeassistant.websocket.async_send_command(message)
except HomeAssistantWSError as err:
_LOGGER.warning(
"Could not send add-on update command to Home Assistant Core: %s",
"Could not send app update command to Home Assistant Core: %s",
err,
)
@Job(
name="tasks_update_supervisor",
conditions=[
JobCondition.AUTO_UPDATE,
JobCondition.FREE_SPACE,
JobCondition.HEALTHY,
JobCondition.INTERNET_HOST,
JobCondition.OS_SUPPORTED,
JobCondition.RUNNING,
JobCondition.ARCHITECTURE_SUPPORTED,
],
concurrency=JobConcurrency.REJECT,
)
async def _update_supervisor(self):
"""Check and run update of Supervisor Supervisor."""
if not self.sys_supervisor.need_update:
return
_LOGGER.info(
"Found new Supervisor version %s, updating",
self.sys_supervisor.latest_version,
)
# Errors are logged by the exceptions, we can't really do something
# if an update fails here.
with suppress(SupervisorUpdateError):
await self.sys_supervisor.update()
async def _watchdog_homeassistant_api(self):
"""Create scheduler task for monitoring running state of API.
@@ -338,37 +309,37 @@ class Tasks(CoreSysAttributes):
except ObserverError:
_LOGGER.error("Observer watchdog reanimation failed!")
async def _watchdog_addon_application(self):
async def _watchdog_app_application(self):
"""Check running state of the application and start if they is hangs."""
for addon in self.sys_addons.installed:
for app in self.sys_apps.installed:
# Skip apps the watchdog does not need to look at
if not addon.watchdog or addon.state != AddonState.STARTED:
if not app.watchdog or app.state != AppState.STARTED:
continue
# Init cache data
retry_scan = self._cache.get(addon.slug, 0)
retry_scan = self._cache.get(app.slug, 0)
# if Addon have running actions / Application work
if addon.in_progress or await addon.watchdog_application():
# if the app has running actions or the application works
if app.in_progress or await app.watchdog_application():
continue
# Looks like we ran into a problem
retry_scan += 1
if retry_scan == 1:
self._cache[addon.slug] = retry_scan
self._cache[app.slug] = retry_scan
_LOGGER.warning(
"Watchdog missing application response from %s", addon.slug
"Watchdog missing application response from %s", app.slug
)
return
_LOGGER.warning("Watchdog found a problem with %s application!", addon.slug)
_LOGGER.warning("Watchdog found a problem with %s application!", app.slug)
try:
await (await addon.restart())
except AddonsError as err:
_LOGGER.error("%s watchdog reanimation failed with %s", addon.slug, err)
await (await app.restart())
except AppsError as err:
_LOGGER.error("%s watchdog reanimation failed with %s", app.slug, err)
await async_capture_exception(err)
finally:
self._cache[addon.slug] = 0
self._cache[app.slug] = 0
@Job(
name="tasks_reload_store",
@@ -379,7 +350,7 @@ class Tasks(CoreSysAttributes):
],
)
async def _reload_store(self) -> None:
"""Reload store and check for addon updates."""
"""Reload store and check for app updates."""
await self.sys_store.reload()
@Job(name="tasks_reload_updater")
@@ -387,9 +358,34 @@ class Tasks(CoreSysAttributes):
"""Check for new versions of Home Assistant, Supervisor, OS, etc."""
await self.sys_updater.reload()
# If there's a new version of supervisor, start update immediately
# If there's a new version of supervisor, update immediately
if self.sys_supervisor.need_update:
await self._update_supervisor()
await self._auto_update_supervisor()
@Job(
name="tasks_update_supervisor",
conditions=[
JobCondition.AUTO_UPDATE,
JobCondition.FREE_SPACE,
JobCondition.HEALTHY,
JobCondition.INTERNET_HOST,
JobCondition.OS_SUPPORTED,
JobCondition.RUNNING,
JobCondition.ARCHITECTURE_SUPPORTED,
],
concurrency=JobConcurrency.REJECT,
)
async def _auto_update_supervisor(self):
"""Auto update Supervisor if enabled."""
if not self.sys_supervisor.need_update:
return
_LOGGER.info(
"Found new Supervisor version %s, updating",
self.sys_supervisor.latest_version,
)
with suppress(SupervisorUpdateError):
await self.sys_supervisor.update()
@Job(name="tasks_core_backup_cleanup", conditions=[JobCondition.HEALTHY])
async def _core_backup_cleanup(self) -> None:
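The staggered plugin intervals above follow a simple rule: a 12 h base period plus a 2 min offset per plugin, so the update tasks never fire at the same moment. A minimal sketch (the helper name is illustrative; the values mirror the `RUN_UPDATE_*` constants):

```python
BASE_PERIOD = 12 * 60 * 60   # 43200 s, the 12 h base
STAGGER_STEP = 2 * 60        # 120 s offset per plugin

def staggered_intervals(names: list[str]) -> dict[str, int]:
    """Map each task name to its staggered interval in seconds."""
    return {name: BASE_PERIOD + i * STAGGER_STEP for i, name in enumerate(names)}

intervals = staggered_intervals(["cli", "dns", "audio", "multicast", "observer"])
print(intervals["cli"])       # 43200
print(intervals["observer"])  # 43680
```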

View File

@@ -319,3 +319,52 @@ class MountManager(FileConfiguration, CoreSysAttributes):
mount.to_dict(skip_secrets=False) for mount in self.mounts
]
await super().save_data()
async def restore_mount(self, mount: Mount) -> asyncio.Task:
"""Restore a mount from backup.
Adds mount to internal state without activating it.
Returns an asyncio.Task for activating the mount in the background.
If a mount with the same name exists, it is replaced.
"""
if mount.name in self._mounts:
_LOGGER.info(
"Mount '%s' already exists, replacing with backup config", mount.name
)
# Unmount existing if it's bound
if mount.name in self._bound_mounts:
await self._bound_mounts[mount.name].bind_mount.unmount()
del self._bound_mounts[mount.name]
old_mount = self._mounts[mount.name]
await old_mount.unmount()
self._mounts[mount.name] = mount
return self.sys_create_task(self._activate_restored_mount(mount))
async def _activate_restored_mount(self, mount: Mount) -> None:
"""Activate a restored mount. Logs errors but doesn't raise."""
if HostFeature.MOUNT not in self.sys_host.features:
_LOGGER.warning(
"Cannot activate mount %s, mounting not supported on system",
mount.name,
)
return
try:
_LOGGER.info("Activating restored mount: %s", mount.name)
await mount.load()
if mount.usage == MountUsage.MEDIA:
await self._bind_media(mount)
elif mount.usage == MountUsage.SHARE:
await self._bind_share(mount)
_LOGGER.info("Mount %s activated successfully", mount.name)
except MountError as err:
_LOGGER.warning(
"Failed to activate mount %s (config was restored, "
"mount may come online later): %s",
mount.name,
err,
)
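The `restore_mount()` flow above records state synchronously and defers activation to a background task that logs failures rather than raising, so an unreachable server cannot abort the surrounding restore. A minimal sketch of that pattern, with all names as illustrative stand-ins for the real mount machinery:

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)

async def activate(name: str) -> None:
    """Placeholder for mount.load() plus media/share binding."""
    await asyncio.sleep(0)

async def _activate_logged(name: str) -> None:
    """Activate in the background; log errors but never raise."""
    try:
        await activate(name)
    except OSError as err:
        _LOGGER.warning("Mount %s failed (may come online later): %s", name, err)

async def restore_mount(name: str, mounts: dict) -> asyncio.Task:
    mounts[name] = {"state": "restored"}                # config restored immediately
    return asyncio.create_task(_activate_logged(name))  # activation deferred

async def main() -> bool:
    mounts: dict = {}
    task = await restore_mount("media_share", mounts)
    restored = mounts["media_share"]["state"] == "restored"
    await task  # activation completes in the background
    return restored

print(asyncio.run(main()))  # True
```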

View File

@@ -12,12 +12,10 @@ from voluptuous import Coerce
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_ACTIVE_STATE,
DBUS_ATTR_DESCRIPTION,
DBUS_ATTR_OPTIONS,
DBUS_ATTR_TYPE,
DBUS_ATTR_WHAT,
DBUS_IFACE_SYSTEMD_UNIT,
StartUnitMode,
StopUnitMode,
UnitActiveState,
@@ -59,7 +57,7 @@ class Mount(CoreSysAttributes, ABC):
)
@classmethod
def from_dict(cls, coresys: CoreSys, data: MountData) -> "Mount":
def from_dict(cls, coresys: CoreSys, data: MountData) -> Mount:
"""Make dictionary into mount object."""
if cls not in [Mount, NetworkMount]:
return cls(coresys, data)
@@ -144,7 +142,7 @@ class Mount(CoreSysAttributes, ABC):
@property
def container_where(self) -> PurePath | None:
"""Return where this is made available in managed containers (core, addons, etc.).
"""Return where this is made available in managed containers (core, apps, etc.).
This returns None if it is not made available in managed containers.
"""
@@ -179,7 +177,7 @@ class Mount(CoreSysAttributes, ABC):
await self.mount()
return
await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)
await self._update_state_await(unit)
# If mount is not available, try to reload it
if not await self.is_mounted():
@@ -215,39 +213,30 @@ class Mount(CoreSysAttributes, ABC):
await self._update_state(unit)
# If active, dismiss corresponding failed mount issue if found
if (
mounted := await self.is_mounted()
) and self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if (mounted := await self.is_mounted()) and (
issue := self.sys_resolution.get_issue_if_present(self.failed_issue)
):
self.sys_resolution.dismiss_issue(issue)
return mounted
async def _update_state_await(
self,
unit: SystemdUnit,
expected_states: list[UnitActiveState] | None = None,
not_state: UnitActiveState = UnitActiveState.ACTIVATING,
expected_states: set[UnitActiveState] | None = None,
) -> None:
"""Update state info about mount from dbus. Wait for one of expected_states to appear or state to change from not_state."""
"""Update state info about mount from dbus. Wait for one of expected_states to appear."""
if expected_states is None:
expected_states = {
UnitActiveState.ACTIVE,
UnitActiveState.FAILED,
UnitActiveState.INACTIVE,
}
try:
async with asyncio.timeout(30), unit.properties_changed() as signal:
await self._update_state(unit)
while (
expected_states
and self.state not in expected_states
or not expected_states
and self.state == not_state
):
prop_change_signal = await signal.wait_for_signal()
if (
prop_change_signal[0] == DBUS_IFACE_SYSTEMD_UNIT
and DBUS_ATTR_ACTIVE_STATE in prop_change_signal[1]
):
self._state = prop_change_signal[1][
DBUS_ATTR_ACTIVE_STATE
].value
async with asyncio.timeout(30):
self._state = await unit.wait_for_active_state(expected_states)
except TimeoutError:
await self._update_state(unit)
_LOGGER.warning(
"Mount %s still in state %s after waiting for 30 seconds to complete",
self.name,
@@ -300,7 +289,7 @@ class Mount(CoreSysAttributes, ABC):
) from err
if unit := await self._update_unit():
await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)
await self._update_state_await(unit)
if not await self.is_mounted():
raise MountActivationError(
@@ -320,7 +309,7 @@ class Mount(CoreSysAttributes, ABC):
await self.sys_dbus.systemd.stop_unit(self.unit_name, StopUnitMode.FAIL)
await self._update_state_await(
unit, [UnitActiveState.INACTIVE, UnitActiveState.FAILED]
unit, {UnitActiveState.INACTIVE, UnitActiveState.FAILED}
)
if self.state == UnitActiveState.FAILED:
@@ -349,9 +338,7 @@ class Mount(CoreSysAttributes, ABC):
await self._restart()
else:
if unit := await self._update_unit():
await self._update_state_await(
unit, not_state=UnitActiveState.ACTIVATING
)
await self._update_state_await(unit)
if not await self.is_mounted():
_LOGGER.info(
@@ -361,8 +348,8 @@ class Mount(CoreSysAttributes, ABC):
await self._restart()
# If it is mounted now, dismiss corresponding issue if present
if self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if issue := self.sys_resolution.get_issue_if_present(self.failed_issue):
self.sys_resolution.dismiss_issue(issue)
async def _restart(self) -> None:
"""Restart mount unit to re-mount."""
@@ -380,7 +367,7 @@ class Mount(CoreSysAttributes, ABC):
) from err
if unit := await self._update_unit():
await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)
await self._update_state_await(unit)
if not await self.is_mounted():
raise MountActivationError(
@@ -562,7 +549,7 @@ class BindMount(Mount):
usage: MountUsage | None = None,
where: PurePath | None = None,
read_only: bool = False,
) -> "BindMount":
) -> BindMount:
"""Create a new bind mount instance."""
return BindMount(
coresys,

View File

@@ -57,7 +57,7 @@ class Disk:
@staticmethod
def from_udisks2_drive(
drive: UDisks2Drive, drive_block_device: UDisks2Block
) -> "Disk":
) -> Disk:
"""Convert UDisks2Drive into a Disk object."""
return Disk(
vendor=drive.vendor,

View File

@@ -2,7 +2,6 @@
from dataclasses import dataclass
from datetime import datetime
import errno
import logging
from pathlib import Path, PurePath
from typing import cast
@@ -23,8 +22,6 @@ from ..exceptions import (
)
from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils.sentry import async_capture_exception
from .data_disk import DataDisk
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -52,7 +49,7 @@ class SlotStatus:
parent: str | None = None
@classmethod
def from_dict(cls, data: SlotStatusDataType) -> "SlotStatus":
def from_dict(cls, data: SlotStatusDataType) -> SlotStatus:
"""Create SlotStatus from dictionary."""
return cls(
class_=data["class"],
@@ -214,10 +211,7 @@ class OSManager(CoreSysAttributes):
) from err
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise HassOSUpdateError(
f"Can't write OTA file: {err!s}", _LOGGER.error
) from err
@@ -368,7 +362,6 @@ class OSManager(CoreSysAttributes):
RaucState.ACTIVE, self.get_slot_name(boot_name)
)
except DBusError as err:
await async_capture_exception(err)
raise HassOSSlotUpdateError(
f"Can't mark {boot_name} as active!", _LOGGER.error
) from err

View File

@@ -3,7 +3,6 @@
Code: https://github.com/home-assistant/plugin-audio
"""
import errno
import logging
from pathlib import Path, PurePath
import shutil
@@ -26,7 +25,6 @@ from ..exceptions import (
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils.json import write_json_file
from ..utils.sentry import async_capture_exception
from .base import PluginBase
@@ -94,11 +92,7 @@ class PluginAudio(PluginBase):
)
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't read pulse-client.tmpl: %s", err)
await super().load()
@@ -113,10 +107,7 @@ class PluginAudio(PluginBase):
try:
await self.sys_run_in_executor(setup_default_asound)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't create default asound: %s", err)
@Job(

View File

@@ -5,7 +5,6 @@ Code: https://github.com/home-assistant/plugin-dns
import asyncio
from contextlib import suppress
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path
@@ -33,7 +32,7 @@ from ..exceptions import (
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.json import write_json_file
from ..utils.sentry import async_capture_exception
from ..validate import dns_url
@@ -232,10 +231,7 @@ class PluginDns(PluginBase):
await self.sys_run_in_executor(RESOLV_TMPL.read_text, encoding="utf-8")
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't read resolve.tmpl: %s", err)
try:
@@ -243,10 +239,7 @@ class PluginDns(PluginBase):
await self.sys_run_in_executor(HOSTS_TMPL.read_text, encoding="utf-8")
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.error("Can't read hosts.tmpl: %s", err)
await self._init_hosts()
@@ -343,7 +336,7 @@ class PluginDns(PluginBase):
# Reset loop protection
self._loop = False
await self.sys_addons.sync_dns()
await self.sys_apps.sync_dns()
async def watchdog_container(self, event: DockerContainerStateEvent) -> None:
"""Check for loop on failure before processing state change event."""
@@ -448,10 +441,7 @@ class PluginDns(PluginBase):
self.hosts.write_text, data, encoding="utf-8"
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
raise CoreDNSError(f"Can't update hosts: {err}", _LOGGER.error) from err
async def add_host(
@@ -533,10 +523,7 @@ class PluginDns(PluginBase):
try:
await self.sys_run_in_executor(resolv_conf.write_text, data)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.check_oserror(err)
_LOGGER.warning("Can't write/update %s: %s", resolv_conf, err)
return
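The repeated `if err.errno == errno.EBADMSG: add_unhealthy_reason(...)` blocks across the plugins are consolidated above into a single `check_oserror()` helper on the resolution manager. A minimal sketch of that helper with a mock resolution object (illustrative, not Supervisor's real `ResolutionManager`):

```python
import errno

class Resolution:
    """Mock of the resolution manager's unhealthy-reason tracking."""

    def __init__(self) -> None:
        self.unhealthy: set[str] = set()

    def check_oserror(self, err: OSError) -> None:
        """Flag errors that indicate on-disk corruption as unhealthy."""
        if err.errno == errno.EBADMSG:
            self.unhealthy.add("oserror_bad_message")

resolution = Resolution()
try:
    raise OSError(errno.EBADMSG, "Bad message")
except OSError as err:
    resolution.check_oserror(err)

print(resolution.unhealthy)  # {'oserror_bad_message'}
```

Callers keep their own logging and re-raising; only the unhealthy-marking is shared.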

View File

@@ -86,6 +86,13 @@ class PluginManager(CoreSysAttributes):
if self.sys_supervisor.need_update:
return
# Skip plugin auto-updates if auto updates are disabled
if not self.sys_updater.auto_update:
_LOGGER.debug(
"Skipping plugin auto-updates because Supervisor auto-update is disabled"
)
return
# Check requirements
for plugin in self.all_plugins:
# Check if an update is needed

View File

@@ -6,7 +6,7 @@ from typing import Any
from ..const import ATTR_CHECKS
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ResolutionNotFound
from ..exceptions import ResolutionCheckNotFound
from ..utils.sentry import async_capture_exception
from .checks.base import CheckBase
from .validate import get_valid_modules
@@ -50,7 +50,7 @@ class ResolutionCheck(CoreSysAttributes):
if slug in self._checks:
return self._checks[slug]
raise ResolutionNotFound(f"Check with slug {slug} not found!")
raise ResolutionCheckNotFound(check=slug)
async def check_system(self) -> None:
"""Check the system."""

View File

@@ -3,7 +3,7 @@
from datetime import timedelta
import logging
from ...const import AddonState, CoreState
from ...const import AppState, CoreState
from ...coresys import CoreSys
from ...exceptions import PwnedConnectivityError, PwnedError, PwnedSecret
from ...jobs.const import JobCondition, JobThrottle
@@ -16,11 +16,11 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckAddonPwned(coresys)
return CheckAppPwned(coresys)
class CheckAddonPwned(CheckBase):
"""CheckAddonPwned class for check."""
class CheckAppPwned(CheckBase):
"""CheckAppPwned class for check."""
@Job(
name="check_addon_pwned_run",
@@ -35,8 +35,8 @@ class CheckAddonPwned(CheckBase):
return
await self.sys_homeassistant.secrets.reload()
for addon in self.sys_addons.installed:
secrets = addon.pwned
for app in self.sys_apps.installed:
secrets = app.pwned
if not secrets:
continue
@@ -49,7 +49,7 @@ class CheckAddonPwned(CheckBase):
return
except PwnedSecret:
# Check possible suggestion
if addon.state == AddonState.STARTED:
if app.state == AppState.STARTED:
suggestions = [SuggestionType.EXECUTE_STOP]
else:
suggestions = None
@@ -57,7 +57,7 @@ class CheckAddonPwned(CheckBase):
self.sys_resolution.create_issue(
IssueType.PWNED,
ContextType.ADDON,
reference=addon.slug,
reference=app.slug,
suggestions=suggestions,
)
break
@@ -71,11 +71,11 @@ class CheckAddonPwned(CheckBase):
return False
# Uninstalled
if not (addon := self.sys_addons.get_local_only(reference)):
if not (app := self.sys_apps.get_local_only(reference)):
return False
# Not in use anymore
secrets = addon.pwned
secrets = app.pwned
if not secrets:
return False
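The pwned-secret check above ultimately relies on a k-anonymity lookup: only a short hash prefix ever leaves the machine. A minimal sketch of that scheme, assuming the commonly documented Have I Been Pwned range endpoint (the secret value is illustrative; no network call is made here):

```python
import hashlib

def hash_prefix_suffix(secret: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(secret.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_prefix_suffix("hunter2")
# A real check would GET https://api.pwnedpasswords.com/range/<prefix>
# and search the returned lines for <suffix>; the full digest stays local.
print(len(prefix), len(suffix))  # 5 35
```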

View File

@@ -1,72 +0,0 @@
"""Helpers to check core security."""
from enum import StrEnum
from pathlib import Path
from awesomeversion import AwesomeVersion, AwesomeVersionException
from ...const import CoreState
from ...coresys import CoreSys
from ..const import ContextType, IssueType, SuggestionType
from .base import CheckBase
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckCoreSecurity(coresys)
class SecurityReference(StrEnum):
"""Version references."""
CUSTOM_COMPONENTS_BELOW_2021_1_5 = "custom_components_below_2021_1_5"
class CheckCoreSecurity(CheckBase):
"""CheckCoreSecurity class for check."""
async def run_check(self) -> None:
"""Run check if not affected by issue."""
# Security issue < 2021.1.5 & Custom components
try:
if self.sys_homeassistant.version < AwesomeVersion("2021.1.5"):
if await self.sys_run_in_executor(self._custom_components_exists):
self.sys_resolution.create_issue(
IssueType.SECURITY,
ContextType.CORE,
reference=SecurityReference.CUSTOM_COMPONENTS_BELOW_2021_1_5,
suggestions=[SuggestionType.EXECUTE_UPDATE],
)
except (AwesomeVersionException, OSError):
return
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
try:
if self.sys_homeassistant.version >= AwesomeVersion("2021.1.5"):
return False
except AwesomeVersionException:
return True
return await self.sys_run_in_executor(self._custom_components_exists)
def _custom_components_exists(self) -> bool:
"""Return true if custom components folder exists.
Must be run in executor.
"""
return Path(self.sys_config.path_homeassistant, "custom_components").exists()
@property
def issue(self) -> IssueType:
"""Return a IssueType enum."""
return IssueType.SECURITY
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.CORE
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this check can run."""
return [CoreState.RUNNING, CoreState.STARTUP]

Some files were not shown because too many files have changed in this diff.