Compare commits

...

55 Commits

Author SHA1 Message Date
Stefan Agner
620c90676c Clamp progress to 100 to prevent floating point precision issues
Floating point arithmetic in weighted progress calculations can produce
values slightly above 100 (e.g., 100.00000000000001). This causes
validation errors when the progress value is checked.

Add min(100, ...) clamping to both size-weighted and count-based
progress calculations to ensure the result never exceeds 100.
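
For illustration, a minimal sketch of the clamping described above (the function name and signature are illustrative, not the Supervisor's actual code):

```python
def weighted_progress(done_bytes: int, total_bytes: int) -> float:
    """Overall pull progress in percent, clamped against float overshoot."""
    if total_bytes <= 0:
        return 0.0
    # Sums of per-layer fractions can come out as e.g. 100.00000000000001,
    # which then fails the <= 100 validation, so clamp the result.
    return min(100.0, done_bytes / total_bytes * 100)
```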

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 18:48:58 +01:00
Stefan Agner
a1311ce288 Add registry manifest fetcher for size-based pull progress
Fetch image manifests directly from container registries before pulling
to get accurate layer sizes upfront. This enables size-weighted progress
tracking where each layer contributes proportionally to its byte size,
rather than equal weight per layer.

Key changes:
- Add RegistryManifestFetcher that handles auth discovery via
  WWW-Authenticate headers, token fetching with optional credentials,
  and multi-arch manifest list resolution
- Update ImagePullProgress to accept manifest layer sizes via
  set_manifest() and calculate size-weighted progress
- Fall back to count-based progress when manifest fetch fails
- Pre-populate layer sizes from manifest when creating layer trackers

The manifest fetcher supports ghcr.io, Docker Hub, and private
registries by using credentials from Docker config when available.
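
A rough sketch of the anonymous half of such a fetcher, assuming plain aiohttp; credentials and multi-arch manifest list resolution are omitted, and the helper name is illustrative rather than the commit's actual API:

```python
import re

import aiohttp

MANIFEST_TYPES = (
    "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.distribution.manifest.v2+json",
)


async def fetch_layer_sizes(registry: str, image: str, tag: str) -> dict[str, int]:
    """Fetch a manifest and map layer digests to their byte sizes."""
    manifest_url = f"https://{registry}/v2/{image}/manifests/{tag}"
    headers = {"Accept": ", ".join(MANIFEST_TYPES)}
    async with aiohttp.ClientSession() as session:
        async with session.get(manifest_url, headers=headers) as resp:
            if resp.status == 401:
                # The registry points us at its token endpoint via WWW-Authenticate
                challenge = resp.headers.get("WWW-Authenticate", "")
                fields = dict(re.findall(r'(\w+)="([^"]+)"', challenge))
                async with session.get(
                    fields["realm"],
                    params={
                        "service": fields.get("service", ""),
                        "scope": fields.get("scope", ""),
                    },
                ) as token_resp:
                    token = (await token_resp.json())["token"]
                headers["Authorization"] = f"Bearer {token}"
                async with session.get(manifest_url, headers=headers) as retry:
                    manifest = await retry.json(content_type=None)
            else:
                manifest = await resp.json(content_type=None)
    return {layer["digest"]: layer["size"] for layer in manifest.get("layers", [])}
```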

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 18:48:58 +01:00
Stefan Agner
051404e8db Exclude already-existing layers from pull progress calculation
Layers that already exist locally should not count towards download
progress since there's nothing to download for them. Only layers that
need pulling are included in the progress calculation.
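
Docker's pull status stream flags cached layers with an "Already exists" event; a minimal sketch of filtering them out (helper name and event shape are illustrative):

```python
def layers_to_track(pull_events: list[dict]) -> set[str]:
    """Keep only layer IDs that still need downloading."""
    cached = {e["id"] for e in pull_events if e.get("status") == "Already exists"}
    seen = {e["id"] for e in pull_events if "id" in e}
    return seen - cached
```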

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 18:42:48 +01:00
Stefan Agner
a105676722 Use count-based progress for Docker image pulls
Refactor Docker image pull progress to use a simpler count-based approach
where each layer contributes equally (100% / total_layers) regardless of
size. This replaces the previous size-weighted calculation that was
susceptible to progress regression.

The core issue was that Docker rate-limits concurrent downloads (~3 at a
time) and reports layer sizes only when downloading starts. With size-
weighted progress, large layers appearing late would cause progress to
drop dramatically (e.g., 59% -> 29%) as the total size increased.

The new approach:
- Each layer contributes equally to overall progress
- Per-layer progress: 70% download weight, 30% extraction weight
- Progress only starts after first "Downloading" event (when layer
  count is known)
- Always caps at 99% - job completion handles final 100%

This simplifies the code by moving progress tracking to a dedicated
module (pull_progress.py) and removing complex size-based scaling logic
that tried to account for unknown layer sizes.
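
A minimal sketch of the count-based calculation described above (names and data shape are illustrative):

```python
def overall_progress(layers: dict[str, tuple[float, float]]) -> float:
    """Count-based progress over {layer_id: (download_pct, extract_pct)}."""
    if not layers:
        return 0.0  # no "Downloading" event yet, so the layer count is unknown
    # Each layer weighs the same; within a layer, download counts for 70%
    # and extraction for 30%.
    per_layer = (0.7 * dl + 0.3 * ex for dl, ex in layers.values())
    # Cap at 99% and let job completion report the final 100%.
    return min(99.0, sum(per_layer) / len(layers))
```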

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 18:42:48 +01:00
Mike Degatano
81b7e54b18 Remove unknown errors from addons and auth (#6303)
* Remove unknown errors from addons

* Remove customized unknown error types

* Fix docker ratelimit exception and tests

* Fix stats test and add more for known errors

* Add defined error for when build fails

* Fixes from feedback

* Fix mypy issues

* Fix test failure due to rename

* Change auth reset error message
2025-12-03 18:11:51 +01:00
Stefan Agner
d203f20b7f Fix type annotations in AddonModel (#6387)
* Fix type annotations in AddonModel

Correct return type annotations for three properties in AddonModel
that were inconsistent with their actual return values:

- panel_admin: str -> bool
- with_tmpfs: str | None -> bool
- homeassistant_version: str | None -> AwesomeVersion | None

Based on the add-on schema _SCHEMA_ADDON_CONFIG in
supervisor/addons/validate.py.

Found while enabling typeguard for local testing.
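
The shape of the corrected signatures, with bodies elided (only the annotations listed above are taken from the commit):

```python
from awesomeversion import AwesomeVersion


class AddonModel:
    @property
    def panel_admin(self) -> bool:  # was annotated as str
        ...

    @property
    def with_tmpfs(self) -> bool:  # was annotated as str | None
        ...

    @property
    def homeassistant_version(self) -> AwesomeVersion | None:  # was str | None
        ...
```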

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix docstrings

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 17:45:44 +01:00
Stefan Agner
fea8159ccf Fix typing issues in NetworkManager D-Bus integration (#6385)
* Fix typing for IPv6 addr-gen-mode and ip6-privacy settings

* Fix ConnectionStateType typing

* Rename ConnectionStateType to ConnectionState

The extra type suffix is unnecessary.

* Apply suggestions from code review

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-12-03 16:28:43 +01:00
Jan Čermák
aeb8e59da4 Move wheels build to the build job, use ARM runner for aarch64 build (#6384)
* Move wheels build to the build job, use ARM runner for aarch64 build

There is a problem that when wheels are not built, the dependent jobs are
skipped. This would require explicitly using `!cancelled() && !failure()` for
all jobs that depend on the build job. To avoid that, move the wheels build to
the build job. This means that we need to run it on a native ARM runner for
aarch64, but this isn't an issue as we'd like to do that anyway. Also renamed
the rather cryptic "requirements" output to "build_wheels", as that's what it
signals.

* Remove explicit "shell: bash"

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-03 14:36:48 +01:00
dependabot[bot]
bee0a4482e Bump actions/checkout from 6.0.0 to 6.0.1 (#6382)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 10:40:03 +01:00
dependabot[bot]
37cc078144 Bump actions/stale from 10.1.0 to 10.1.1 (#6383)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 10:37:24 +01:00
Stefan Agner
20f993e891 Avoid getting changed files for releases (#6381)
The changed files GitHub Action is not available for release events, so
we skip that step and directly set the output to false for releases.
This restores how releases worked before #6374.
2025-12-02 20:23:37 +01:00
Stefan Agner
d220fa801f Await aiodocker import_image coroutine (#6378)
The aiodocker images.import_image() method returns a coroutine that
needs to be awaited, but the code was iterating over it directly,
causing "TypeError: 'coroutine' object is not iterable".

Fixes SUPERVISOR-13D9
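
A minimal sketch of the fix (the exact keyword arguments to import_image() are assumed for illustration):

```python
import aiodocker


async def import_tar(tar_data: bytes) -> None:
    docker = aiodocker.Docker()
    try:
        # import_image() returns a coroutine; it must be awaited, since
        # iterating it raises "TypeError: 'coroutine' object is not iterable".
        await docker.images.import_image(data=tar_data)
    finally:
        await docker.close()
```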

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:11:06 -05:00
Stefan Agner
abeee95eb1 Fix blocking I/O in git repository pull operation (#6380)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 19:03:28 +01:00
Stefan Agner
50d31202ae Use Docker's official registry domain detection logic (#6360)
* Use Docker's official registry domain detection logic

Replace the custom IMAGE_WITH_HOST regex with a proper implementation
based on Docker's reference parser (vendor/github.com/distribution/
reference/normalize.go).

Changes:
- Change DOCKER_HUB from "hub.docker.com" to "docker.io" (official default)
- Add DOCKER_HUB_LEGACY for backward compatibility with "hub.docker.com"
- Add IMAGE_DOMAIN_REGEX and get_domain() function that properly detects:
  - localhost (with optional port)
  - Domains with "." (e.g., ghcr.io, 127.0.0.1)
  - Domains with ":" port (e.g., myregistry:5000)
  - IPv6 addresses (e.g., [::1]:5000)
- Update credential handling to support both docker.io and hub.docker.com
- Add comprehensive tests for domain detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Refactor Docker domain detection to utils module

Move get_domain function to supervisor/docker/utils.py and rename it
to get_domain_from_image for consistency with get_registry_for_image.
Use named group in the regex for better readability.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Rename domain to registry for consistency

Use consistent "registry" terminology throughout the codebase:
- Rename get_domain_from_image to get_registry_from_image
- Rename IMAGE_DOMAIN_REGEX to IMAGE_REGISTRY_REGEX
- Update named group from "domain" to "registry"
- Update all related comments and variable names
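
A simplified sketch of the detection rule, assuming the final names from the renames above; the real implementation follows Docker's normalize.go and lives in supervisor/docker/utils.py per the messages above:

```python
import re

# The part before the first "/" is a registry only if it looks like a host:
# "localhost" (optionally with a port), a bracketed IPv6 address, or a name
# containing a "." or ":".
IMAGE_REGISTRY_REGEX = re.compile(
    r"^(?P<registry>localhost(?::\d+)?|\[[0-9a-fA-F:]+\](?::\d+)?|[^/]+[.:][^/]*)/"
)


def get_registry_from_image(image: str) -> str | None:
    """Return the registry host of an image reference, or None for Docker Hub."""
    match = IMAGE_REGISTRY_REGEX.match(image)
    return match.group("registry") if match else None
```

For example, "ghcr.io/home-assistant/amd64-hassio-supervisor" and "myregistry:5000/app" yield a registry, while "homeassistant/home-assistant" falls through to the Docker Hub default.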

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:30:03 +01:00
Jan Čermák
bac072a985 Use unpublished local wheels during PR builds (#6374)
* Use unpublished local wheels during PR builds

Refactor wheel building to use the new `local-wheels-repo-path` and move wheel
building into a separate CI job. Wheels are only published on publish events
(i.e. release or merged dev); for PR builds they are passed as artifacts to the
build job instead.

* Address review comments

* Add trailing slash for wheels folder
* Always run the changed_files check to ensure build_wheels runs on publish
* Use full path for workflow and escape dots in changed files regexp
2025-12-02 14:08:07 +01:00
dependabot[bot]
2fc6a7dcab Bump types-docker from 7.1.0.20251129 to 7.1.0.20251202 (#6376) 2025-12-02 07:36:51 +01:00
Stefan Agner
fa490210cd Improve CpuArch type safety across codebase (#6372)
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 19:56:05 +01:00
Jan Čermák
ba82eb0620 Clean up Dockerfile after dropping deprecated architectures (#6373)
Clean up unnecessary arguments that were only needed for deprecated
architectures, and bind-mount the requirements file to reduce image bloat.
2025-12-01 19:43:19 +01:00
dependabot[bot]
11e3fa0bb7 Bump mypy from 1.18.2 to 1.19.0 (#6366)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-12-01 16:38:13 +01:00
dependabot[bot]
9466111d56 Bump types-docker from 7.1.0.20251127 to 7.1.0.20251129 (#6369)
* Bump types-docker from 7.1.0.20251127 to 7.1.0.20251129

Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20251127 to 7.1.0.20251129.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251129
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fix type errors for types-docker 7.1.0.20251129

- Cast stats() return to dict[str, Any] when stream=False since the
  type stubs return Iterator | dict but we know it's dict when not
  streaming
- Cast attach_socket() return to SocketIO for local Docker connections
  via Unix socket, as the type stubs include types for SSH and other
  connection methods
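
A hedged sketch of the two casts (helper names are illustrative; the docker-py calls and the stub behaviour are as described above):

```python
from socket import SocketIO
from typing import Any, cast

from docker.models.containers import Container


def container_stats(container: Container) -> dict[str, Any]:
    # stats() is typed as returning Iterator | dict; with stream=False it is
    # always a single dict snapshot, so narrow the type explicitly.
    return cast(dict[str, Any], container.stats(stream=False))


def container_stdin_socket(container: Container) -> SocketIO:
    # Over a local Unix socket connection attach_socket() yields a SocketIO;
    # the stubs also cover SSH and other transports, hence the cast.
    return cast(SocketIO, container.attach_socket(params={"stdin": 1, "stream": 1}))
```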

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 15:08:39 +01:00
Stefan Agner
5ec3bea0dd Remove UP038 from ruff ignore list (#6370)
The UP038 rule was removed from ruff in version 0.13.0, causing a warning
when running ruff. Remove it from the ignore list to eliminate the warning.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 13:57:41 +01:00
dependabot[bot]
72159a0ae2 Bump pylint from 4.0.3 to 4.0.4 (#6368)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:31 +01:00
dependabot[bot]
0a7b26187d Bump ruff from 0.14.6 to 0.14.7 (#6367)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.6 to 0.14.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.6...0.14.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:14 +01:00
dependabot[bot]
2dc1f9224e Bump home-assistant/builder from 2025.09.0 to 2025.11.0 (#6363)
Bumps [home-assistant/builder](https://github.com/home-assistant/builder) from 2025.09.0 to 2025.11.0.
- [Release notes](https://github.com/home-assistant/builder/releases)
- [Commits](https://github.com/home-assistant/builder/compare/2025.09.0...2025.11.0)

---
updated-dependencies:
- dependency-name: home-assistant/builder
  dependency-version: 2025.11.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:40:38 +01:00
Mike Degatano
6302c7d394 Fix progress when using containerd snapshotter (#6357)
* Fix progress when using containerd snapshotter

* Add test for tiny image download under containerd-snapshotter

* Fix API tests after progress allocation change

* Fix test for auth changes

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 16:26:22 +01:00
Jan Čermák
f55fd891e9 Add API endpoint for migrating Docker storage driver (#6361)
Implement the Supervisor API for home-assistant/os-agent#238, adding the
possibility to schedule a migration either to the Containerd overlayfs driver
or to the graph overlay2 driver, taking effect the next time the device is
rebooted. While it's technically in the DBus OS interface, in Supervisor's
abstraction it makes more sense to put it under the `/docker` endpoints.
2025-11-27 16:02:39 +01:00
Stefan Agner
8a251e0324 Pass registry credentials to add-on build for private base images (#6356)
* Pass registry credentials to add-on build for private base images

When building add-ons that use a base image from a private registry,
the build would fail because credentials configured via the Supervisor
API were not passed to the Docker-in-Docker build container.

This fix:
- Adds get_docker_config_json() to generate a Docker config.json with
  registry credentials for the base image
- Creates a temporary config file and mounts it into the build container
  at /root/.docker/config.json so BuildKit can authenticate when pulling
  the base image
- Cleans up the temporary file after build completes

Fixes #6354
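
A minimal sketch of the generated config, matching the base64 "username:password" encoding described above (function name is illustrative):

```python
import base64
import json


def docker_config_json(registry: str, username: str, password: str) -> str:
    """Minimal Docker config.json granting BuildKit access to one registry."""
    # Docker's config format stores base64("username:password") under "auth".
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {registry: {"auth": token}}})
```

Per the message above, the resulting file is mounted into the build container at /root/.docker/config.json and removed again after the build.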

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix pylint errors

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Refactor registry credential extraction into shared helper

Extract duplicate logic for determining which registry matches an image
into a shared `get_registry_for_image()` method in `DockerConfig`. This
method is now used by both `DockerInterface._get_credentials()` and
`AddonBuild.get_docker_config_json()`.

Move `DOCKER_HUB` and `IMAGE_WITH_HOST` constants to `docker/const.py`
to avoid circular imports between manager.py and interface.py.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* ruff format

* Document raises

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 11:10:17 +01:00
dependabot[bot]
62b7b8c399 Bump types-docker from 7.1.0.20251125 to 7.1.0.20251127 (#6358) 2025-11-27 07:22:43 +01:00
Stefan Agner
3c87704802 Handle update errors in automatic Supervisor update task (#6328)
Wrap the Supervisor auto-update call with suppress(SupervisorUpdateError)
to prevent unhandled exceptions from propagating. When an automatic update
fails, errors are already logged by the exception handlers, and there's no
meaningful recovery action the scheduler task can take.
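
A minimal sketch of the wrapper, assuming the usual coresys accessors (the surrounding scheduler task is elided):

```python
from contextlib import suppress

from supervisor.exceptions import SupervisorUpdateError


async def _update_supervisor(coresys) -> None:
    # Failures are already logged inside the update job; the scheduler task
    # has nothing useful to do with the exception, so swallow it here.
    with suppress(SupervisorUpdateError):
        await coresys.supervisor.update()
```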

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 14:11:51 -05:00
Stefan Agner
ae7700f52c Fix private registry authentication for aiodocker image pulls (#6355)
* Fix private registry authentication for aiodocker image pulls

After PR #6252 migrated image pulling from dockerpy to aiodocker,
private registry authentication stopped working. The old _docker_login()
method stored credentials in ~/.docker/config.json via dockerpy, but
aiodocker doesn't read that file - it requires credentials passed
explicitly via the auth parameter.

Changes:
- Remove unused _docker_login() method (dockerpy login was ineffective)
- Pass credentials directly to pull_image() via new auth parameter
- Add auth parameter to DockerAPI.pull_image() method
- Add unit tests for Docker Hub and custom registry authentication

Fixes #6345
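
A minimal sketch of passing credentials with the pull call; this shows the underlying aiodocker call, while the Supervisor wraps it in its own pull_image() per the changes above:

```python
import aiodocker


async def pull_with_credentials(image: str, username: str, password: str) -> None:
    docker = aiodocker.Docker()
    try:
        # aiodocker does not read ~/.docker/config.json; credentials must be
        # passed explicitly with the pull call.
        await docker.images.pull(
            image, auth={"username": username, "password": password}
        )
    finally:
        await docker.close()
```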

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Ignore protected access in test

* Fix plug-in pull test

* Fix HA core tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 17:37:24 +01:00
Stefan Agner
e06e792e74 Fix type annotations for sentinel values in job manager (#6349)
Add `type[DEFAULT]` to type annotations for parameters that use the
DEFAULT sentinel value. This fixes runtime type checking failures with
typeguard when sentinel values are passed as arguments.

Use explicit type casts and restructured parameter passing to satisfy
mypy's type narrowing requirements. The sentinel pattern allows
distinguishing between "parameter not provided" and "parameter
explicitly set to None", which is critical for job management logic.
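
A minimal illustration of the sentinel pattern with the `type[DEFAULT]` annotation (not the job manager's actual signature):

```python
class DEFAULT:
    """Sentinel class: distinguishes 'not provided' from an explicit None."""


def update(progress: float | None | type[DEFAULT] = DEFAULT) -> None:
    if progress is DEFAULT:
        return  # caller did not pass the argument at all
    # Here progress is float | None, i.e. it was explicitly provided.
    ...
```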

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 09:17:17 +01:00
dependabot[bot]
5f55ab8de4 Bump home-assistant/wheels from 2025.10.0 to 2025.11.0 (#6352) 2025-11-26 07:56:32 +01:00
Stefan Agner
ca521c24cb Fix typeguard error in API decorator wrapper functions (#6350)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 19:04:31 +01:00
dependabot[bot]
6042694d84 Bump dbus-fast from 2.45.1 to 3.1.2 (#6317)
* Bump dbus-fast from 2.45.1 to 3.1.2

Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.45.1 to 3.1.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.45.1...v3.1.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 3.1.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update unit tests for dbus-fast 3.1.2 changes

* Fix type annotations

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-11-25 16:25:06 +01:00
Stefan Agner
2b2aedae60 Fix D-Bus type annotation issues (#6348)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 14:47:48 +01:00
Jan Čermák
4b4afd081b Drop build for deprecated architectures and re-tag legacy build instead (#6347)
To ensure that e.g. airgapped devices running on deprecated archs can still
update the Supervisor when they come online, the version of Supervisor in the
version file must stay available for all architectures. Since the base images
will no longer exist for those archs, and to avoid the need to build it from
the current source, add a job that pulls the last available image, changes the
label in the metadata and publishes it under the new tag. That way we'll get a
new image with a different SHA (compared to a plain re-tag), so the GHCR
metrics should reflect how many devices still pull these old images.
2025-11-25 12:42:01 +01:00
Stefan Agner
a3dca10fd8 Fix blocking I/O call in DBusManager.load() (#6346)
Wrap the SOCKET_DBUS.exists() call in sys_run_in_executor to avoid a blocking
os.stat() call in an async context. This follows the same pattern already
used in supervisor/resolution/evaluations/dbus.py.

Fixes SUPERVISOR-11HC
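
A minimal sketch of the pattern; the socket path is assumed for illustration, and sys_run_in_executor comes from CoreSysAttributes in the real code:

```python
from pathlib import Path

SOCKET_DBUS = Path("/run/dbus/system_bus_socket")  # path assumed for illustration


class DBusManager:
    async def load(self) -> None:
        # Path.exists() triggers a blocking os.stat(), so defer it to an
        # executor thread instead of calling it directly in the event loop.
        if not await self.sys_run_in_executor(SOCKET_DBUS.exists):
            return
        ...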

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 12:07:35 +01:00
Stefan Agner
d73682ee8a Fix blocking I/O in DockerInfo cpu realtime check (#6344)
The support_cpu_realtime property was performing blocking filesystem I/O
(Path.exists()) in an async context, causing a BlockingError e.g. when the
audio plugin started.

Changes:
- Convert support_cpu_realtime from property to dataclass field
- Make DockerInfo.new() async to properly handle I/O operations
- Run Path.exists() check in executor thread during initialization
- Store result as immutable field to avoid repeated filesystem access

Fixes SUPERVISOR-15WC
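
A hedged sketch of the executor pattern; the checked path is assumed for illustration, and the real DockerInfo.new() also ingests the Docker daemon info:

```python
import asyncio
from dataclasses import dataclass
from pathlib import Path

_CPU_RT_PATH = Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us")  # assumed path


@dataclass(frozen=True)
class DockerInfo:
    support_cpu_realtime: bool  # resolved once, no repeated filesystem access

    @classmethod
    async def new(cls) -> "DockerInfo":
        # Path.exists() performs blocking I/O, so run it in an executor thread.
        loop = asyncio.get_running_loop()
        cpu_rt = await loop.run_in_executor(None, _CPU_RT_PATH.exists)
        return cls(support_cpu_realtime=cpu_rt)
```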

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 11:34:01 +01:00
Stefan Agner
032fa4cdc4 Add comment to explicit "used" calculation for disk usage API (#6340)
* Add explicit used calculation for disk usage API

Added explicit calculation for used disk space along with a comment
to clarify the reasoning behind the calculation method.

* Address review feedback
2025-11-25 11:00:46 +02:00
dependabot[bot]
7244e447ab Bump actions/setup-python from 6.0.0 to 6.1.0 (#6341)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:29 +01:00
dependabot[bot]
603ba57846 Bump types-docker from 7.1.0.20251009 to 7.1.0.20251125 (#6342)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:06 +01:00
dependabot[bot]
0ff12abdf4 Bump sentry-sdk from 2.45.0 to 2.46.0 (#6343)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:19:32 +01:00
Petar Petrov
906838e325 Fix disk usage calculation (#6339) 2025-11-24 16:35:13 +02:00
Stefan Agner
3be0c13fc5 Drop Debian 12 from supported OS list (#6337)
* Drop Debian 12 from supported OS list

With the deprecation of the Home Assistant Supervised installation method,
Debian 12 is no longer supported. This change removes Debian 12
from the list of supported operating systems in the evaluation logic.

* Improve tests
2025-11-24 11:46:23 +01:00
dependabot[bot]
bb450cad4f Bump peter-evans/create-pull-request from 7.0.8 to 7.0.9 (#6332)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.8 to 7.0.9.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](271a8d0340...84ae59a2cd)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:52:46 +01:00
dependabot[bot]
10af48a65b Bump backports-zstd from 1.0.0 to 1.1.0 (#6336)
Bumps [backports-zstd](https://github.com/rogdham/backports.zstd) from 1.0.0 to 1.1.0.
- [Changelog](https://github.com/Rogdham/backports.zstd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rogdham/backports.zstd/compare/v1.0.0...v1.1.0)

---
updated-dependencies:
- dependency-name: backports-zstd
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:50:56 +01:00
dependabot[bot]
2f334c48c3 Bump ruff from 0.14.5 to 0.14.6 (#6335)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.5 to 0.14.6.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.5...0.14.6)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:49:55 +01:00
dependabot[bot]
6d87e8f591 Bump pre-commit from 4.4.0 to 4.5.0 (#6334) 2025-11-24 07:54:42 +01:00
dependabot[bot]
4d1dd63248 Bump time-machine from 3.0.0 to 3.1.0 (#6333) 2025-11-24 07:46:27 +01:00
Stefan Agner
0c2d0cf5c1 Fix D-Bus enum type conversions for NetworkManager (#6325)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-22 21:52:14 +01:00
Jan Čermák
ca7a3af676 Drop codenotary options from the build config (#6330)
These options are obsolete, as all the support has been dropped from the
builder and Supervisor as well. Remove them from our build config too.
2025-11-21 16:36:48 +01:00
Stefan Agner
93272fe4c0 Deprecate i386, armhf and armv7 Supervisor architectures (#5620)
* Deprecate i386, armhf and armv7 Supervisor architectures

* Exclude Core from architecture deprecation checks

This still allows downloading the latest available Core version, even
on deprecated systems.

* Fix pytest
2025-11-21 16:35:26 +01:00
Jan Čermák
79a99cc66d Use release-suffixed base images (pin to 2025.11.1) (#6329)
Currently we're lacking control over which version of the base images is
used; it only depends on when the build is launched. This doesn't
allow any (easy) rollback mechanism and is also not very transparent.

Use the newly introduced base image tags which include the release
version suffix so we have more control over this aspect.
2025-11-21 16:22:22 +01:00
dependabot[bot]
6af6c3157f Bump actions/checkout from 5.0.1 to 6.0.0 (#6327)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](93cb6efe18...1af3b93b68)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-21 09:29:32 +01:00
Jan Čermák
5ed0c85168 Add optional no_colors query parameter to advanced logs endpoints (#6326)
Add support for `no_colors` query parameter on all advanced logs API endpoints,
allowing users to optionally strip ANSI color sequences from log output. This
complements the existing color stripping on /latest endpoints added in #6319.
2025-11-21 09:29:15 +01:00
100 changed files with 4599 additions and 974 deletions

View File

@@ -1,6 +1,7 @@
# General files
.git
.github
.gitkeep
.devcontainer
.vscode

View File

@@ -34,6 +34,9 @@ on:
env:
DEFAULT_PYTHON: "3.13"
COSIGN_VERSION: "v2.5.3"
CRANE_VERSION: "v0.20.7"
CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
@@ -50,10 +53,10 @@ jobs:
version: ${{ steps.version.outputs.version }}
channel: ${{ steps.version.outputs.channel }}
publish: ${{ steps.version.outputs.publish }}
requirements: ${{ steps.requirements.outputs.changed }}
build_wheels: ${{ steps.requirements.outputs.build_wheels }}
steps:
- name: Checkout the repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
@@ -69,20 +72,25 @@ jobs:
- name: Get changed files
id: changed_files
if: steps.version.outputs.publish == 'false'
if: github.event_name != 'release'
uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
- name: Check if requirements files changed
id: requirements
run: |
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
# No wheels build necessary for releases
if [[ "${{ github.event_name }}" == "release" ]]; then
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|build\.yaml|\.github/workflows/builder\.yml) ]]; then
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
else
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
fi
build:
name: Build ${{ matrix.arch }} supervisor
needs: init
runs-on: ubuntu-latest
runs-on: ${{ matrix.runs-on }}
permissions:
contents: read
id-token: write
@@ -90,34 +98,66 @@ jobs:
strategy:
matrix:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
include:
- runs-on: ubuntu-24.04
- runs-on: ubuntu-24.04-arm
arch: aarch64
env:
WHEELS_ABI: cp313
WHEELS_TAG: musllinux_1_2
WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
WHEELS_SKIP_BINARY: aiohttp
steps:
- name: Checkout the repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
- name: Write env-file
if: needs.init.outputs.requirements == 'true'
- name: Write env-file for wheels build
if: needs.init.outputs.build_wheels == 'true'
run: |
(
# Fix out of memory issues with rust
echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
) > .env_file
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2025.10.0
- name: Build and publish wheels
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'true'
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
with:
abi: cp313
tag: musllinux_1_2
arch: ${{ matrix.arch }}
wheels-key: ${{ secrets.WHEELS_KEY }}
apk: "libffi-dev;openssl-dev;yaml-dev"
skip-binary: aiohttp
abi: ${{ env.WHEELS_ABI }}
tag: ${{ env.WHEELS_TAG }}
arch: ${{ matrix.arch }}
apk: ${{ env.WHEELS_APK_DEPS }}
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
env-file: true
requirements: "requirements.txt"
- name: Build local wheels
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
with:
wheels-host: ""
wheels-user: ""
wheels-key: ""
local-wheels-repo-path: "wheels/"
abi: ${{ env.WHEELS_ABI }}
tag: ${{ env.WHEELS_TAG }}
arch: ${{ matrix.arch }}
apk: ${{ env.WHEELS_APK_DEPS }}
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
env-file: true
requirements: "requirements.txt"
- name: Upload local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: wheels-${{ matrix.arch }}
path: wheels
retention-days: 1
- name: Set version
if: needs.init.outputs.publish == 'true'
uses: home-assistant/actions/helpers/version@master
@@ -126,7 +166,7 @@ jobs:
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
@@ -134,7 +174,7 @@ jobs:
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"
cosign-release: ${{ env.COSIGN_VERSION }}
- name: Install dirhash and calc hash
if: needs.init.outputs.publish == 'true'
@@ -162,8 +202,9 @@ jobs:
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
image: ${{ matrix.arch }}
args: |
$BUILD_ARGS \
--${{ matrix.arch }} \
@@ -173,12 +214,12 @@ jobs:
version:
name: Update version
needs: ["init", "run_supervisor"]
needs: ["init", "run_supervisor", "retag_deprecated"]
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Initialize git
if: needs.init.outputs.publish == 'true'
@@ -203,12 +244,19 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Download local wheels artifact
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: wheels-amd64
path: wheels
# home-assistant/builder doesn't support sha pinning
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
args: |
--test \
@@ -352,3 +400,50 @@ jobs:
- name: Get supervisor logs on failiure
if: ${{ cancelled() || failure() }}
run: docker logs hassio_supervisor
retag_deprecated:
needs: ["build", "init"]
name: Re-tag deprecated ${{ matrix.arch }} images
if: needs.init.outputs.publish == 'true'
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
packages: write
strategy:
matrix:
arch: ["armhf", "armv7", "i386"]
env:
# Last available release for deprecated architectures
FROZEN_VERSION: "2025.11.5"
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Install Cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: ${{ env.COSIGN_VERSION }}
- name: Install crane
run: |
curl -sLO https://github.com/google/go-containerregistry/releases/download/${{ env.CRANE_VERSION }}/go-containerregistry_Linux_x86_64.tar.gz
echo "${{ env.CRANE_SHA256 }} go-containerregistry_Linux_x86_64.tar.gz" | sha256sum -c -
tar xzf go-containerregistry_Linux_x86_64.tar.gz crane
sudo mv crane /usr/local/bin/
- name: Re-tag deprecated image with updated version label
run: |
crane auth login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.GITHUB_TOKEN }}
crane mutate \
--label io.hass.version=${{ needs.init.outputs.version }} \
--tag ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }} \
ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ env.FROZEN_VERSION }}
- name: Sign image with Cosign
run: |
cosign sign --yes ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }}

View File

@@ -26,10 +26,10 @@ jobs:
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
@@ -68,9 +68,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -111,9 +111,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -154,7 +154,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,9 +169,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -213,9 +213,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -257,9 +257,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -293,9 +293,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -339,9 +339,9 @@ jobs:
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -398,9 +398,9 @@ jobs:
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}

View File

@@ -11,7 +11,7 @@ jobs:
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Sentry Release
uses: getsentry/action-release@128c5058bbbe93c8e02147fe0a9c713f166259a6 # v3.4.0
env:

View File

@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30

View File

@@ -14,7 +14,7 @@ jobs:
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
@@ -49,7 +49,7 @@ jobs:
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*
@@ -68,7 +68,7 @@ jobs:
run: |
rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
- name: Create PR
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
with:
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
branch: autoupdate-frontend

.gitignore (vendored): 5 changes
View File

@@ -24,6 +24,9 @@ var/
.installed.cfg
*.egg
# Local wheels
wheels/**/*.whl
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -102,4 +105,4 @@ ENV/
/.dmypy.json
# Mac
.DS_Store
.DS_Store

View File

@@ -8,9 +8,7 @@ ENV \
UV_SYSTEM_PYTHON=true
ARG \
COSIGN_VERSION \
BUILD_ARCH \
QEMU_CPU
COSIGN_VERSION
# Install base
WORKDIR /usr/src
@@ -32,15 +30,19 @@ RUN \
&& pip3 install uv==0.8.9
# Install requirements
COPY requirements.txt .
RUN \
if [ "${BUILD_ARCH}" = "i386" ]; then \
setarch="linux32"; \
--mount=type=bind,source=./requirements.txt,target=/usr/src/requirements.txt \
--mount=type=bind,source=./wheels,target=/usr/src/wheels \
if ls /usr/src/wheels/musllinux/* >/dev/null 2>&1; then \
LOCAL_WHEELS=/usr/src/wheels/musllinux; \
echo "Using local wheels from: $LOCAL_WHEELS"; \
else \
setarch=""; \
fi \
&& ${setarch} uv pip install --compile-bytecode --no-cache --no-build -r requirements.txt \
&& rm -f requirements.txt
LOCAL_WHEELS=; \
echo "No local wheels found"; \
fi && \
uv pip install --compile-bytecode --no-cache --no-build \
-r requirements.txt \
${LOCAL_WHEELS:+--find-links $LOCAL_WHEELS}
# Install Home Assistant Supervisor
COPY . supervisor

View File

@@ -1,13 +1,7 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22
armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22
armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22
i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22
codenotary:
signer: notary@home-assistant.io
base_image: notary@home-assistant.io
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.11.1
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.11.1
cosign:
base_identity: https://github.com/home-assistant/docker-base/.*
identity: https://github.com/home-assistant/supervisor/.*

View File

@@ -321,8 +321,6 @@ lint.ignore = [
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
"UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",

View File

@@ -4,7 +4,7 @@ aiohttp==3.13.2
atomicwrites-homeassistant==1.4.1
attrs==25.4.0
awesomeversion==25.8.0
backports.zstd==1.0.0
backports.zstd==1.1.0
blockbuster==1.5.25
brotli==1.2.0
ciso8601==2.3.3
@@ -25,8 +25,8 @@ pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
securetar==2025.2.1
sentry-sdk==2.45.0
sentry-sdk==2.46.0
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.45.1
dbus-fast==3.1.2
zlib-fast==0.2.1

View File

@@ -1,16 +1,16 @@
astroid==4.0.2
coverage==7.12.0
mypy==1.18.2
pre-commit==4.4.0
pylint==4.0.3
mypy==1.19.0
pre-commit==4.5.0
pylint==4.0.4
pytest-aiohttp==1.1.0
pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==9.0.1
ruff==0.14.5
time-machine==3.0.0
types-docker==7.1.0.20251009
ruff==0.14.7
time-machine==3.1.0
types-docker==7.1.0.20251202
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.5.0

View File

@@ -66,13 +66,22 @@ from ..docker.const import ContainerState
from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
AddonConfigurationError,
AddonBackupMetadataInvalidError,
AddonBuildFailedUnknownError,
AddonConfigurationInvalidError,
AddonNotRunningError,
AddonNotSupportedError,
AddonNotSupportedWriteStdinError,
AddonPrePostBackupCommandReturnedError,
AddonsError,
AddonsJobError,
AddonUnknownError,
BackupRestoreUnknownError,
ConfigurationFileError,
DockerBuildError,
DockerError,
HostAppArmorError,
StoreAddonNotFoundError,
)
from ..hardware.data import Device
from ..homeassistant.const import WSEvent
@@ -235,7 +244,7 @@ class Addon(AddonModel):
await self.instance.check_image(self.version, default_image, self.arch)
except DockerError:
_LOGGER.info("No %s addon Docker image %s found", self.slug, self.image)
with suppress(DockerError):
with suppress(DockerError, AddonNotSupportedError):
await self.instance.install(self.version, default_image, arch=self.arch)
self.persist[ATTR_IMAGE] = default_image
@@ -718,18 +727,16 @@ class Addon(AddonModel):
options = self.schema.validate(self.options)
await self.sys_run_in_executor(write_json_file, self.path_options, options)
except vol.Invalid as ex:
_LOGGER.error(
"Add-on %s has invalid options: %s",
self.slug,
humanize_error(self.options, ex),
)
except ConfigurationFileError:
raise AddonConfigurationInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(self.options, ex),
) from None
except ConfigurationFileError as err:
_LOGGER.error("Add-on %s can't write options", self.slug)
else:
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
return
raise AddonUnknownError(addon=self.slug) from err
raise AddonConfigurationError()
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
@Job(
name="addon_unload",
@@ -772,7 +779,7 @@ class Addon(AddonModel):
async def install(self) -> None:
"""Install and setup this addon."""
if not self.addon_store:
raise AddonsError("Missing from store, cannot install!")
raise StoreAddonNotFoundError(addon=self.slug)
await self.sys_addons.data.install(self.addon_store)
@@ -793,9 +800,17 @@ class Addon(AddonModel):
await self.instance.install(
self.latest_version, self.addon_store.image, arch=self.arch
)
except DockerError as err:
except AddonsError:
await self.sys_addons.data.uninstall(self)
raise AddonsError() from err
raise
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonUnknownError(addon=self.slug) from err
# Finish initialization and set up listeners
await self.load()
@@ -819,7 +834,8 @@ class Addon(AddonModel):
try:
await self.instance.remove(remove_image=remove_image)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
self.state = AddonState.UNKNOWN
@@ -884,7 +900,7 @@ class Addon(AddonModel):
if it was running. Else nothing is returned.
"""
if not self.addon_store:
raise AddonsError("Missing from store, cannot update!")
raise StoreAddonNotFoundError(addon=self.slug)
old_image = self.image
# Cache data to prevent races with other updates to global
@@ -892,8 +908,12 @@ class Addon(AddonModel):
try:
await self.instance.update(store.version, store.image, arch=self.arch)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
# Stop the addon if running
if (last_state := self.state) in {AddonState.STARTED, AddonState.STARTUP}:
@@ -935,12 +955,23 @@ class Addon(AddonModel):
"""
last_state: AddonState = self.state
try:
# remove docker container but not addon config
# remove docker container and image but not addon config
try:
await self.instance.remove()
await self.instance.install(self.version)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
try:
await self.instance.install(self.version)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonBuildFailedUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error(
"Could not pull image to update addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
if self.addon_store:
await self.sys_addons.data.update(self.addon_store)
@@ -1111,8 +1142,9 @@ class Addon(AddonModel):
try:
await self.instance.run()
except DockerError as err:
_LOGGER.error("Could not start container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
return self.sys_create_task(self._wait_for_startup())
@@ -1127,8 +1159,9 @@ class Addon(AddonModel):
try:
await self.instance.stop()
except DockerError as err:
_LOGGER.error("Could not stop container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_restart",
@@ -1161,9 +1194,15 @@ class Addon(AddonModel):
async def stats(self) -> DockerStats:
"""Return stats of container."""
try:
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
return await self.instance.stats()
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not get stats of container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_write_stdin",
@@ -1173,14 +1212,18 @@ class Addon(AddonModel):
async def write_stdin(self, data) -> None:
"""Write data to add-on stdin."""
if not self.with_stdin:
raise AddonNotSupportedError(
f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
)
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=self.slug)
try:
return await self.instance.write_stdin(data)
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
await self.instance.write_stdin(data)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not write stdin to container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
async def _backup_command(self, command: str) -> None:
try:
@@ -1189,15 +1232,14 @@ class Addon(AddonModel):
_LOGGER.debug(
"Pre-/Post backup command failed with: %s", command_return.output
)
raise AddonsError(
f"Pre-/Post backup command returned error code: {command_return.exit_code}",
_LOGGER.error,
raise AddonPrePostBackupCommandReturnedError(
_LOGGER.error, addon=self.slug, exit_code=command_return.exit_code
)
except DockerError as err:
raise AddonsError(
f"Failed running pre-/post backup command {command}: {str(err)}",
_LOGGER.error,
) from err
_LOGGER.error(
"Failed running pre-/post backup command %s: %s", command, err
)
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_begin_backup",
@@ -1286,15 +1328,14 @@ class Addon(AddonModel):
try:
self.instance.export_image(temp_path.joinpath("image.tar"))
except DockerError as err:
raise AddonsError() from err
raise BackupRestoreUnknownError() from err
# Store local configs/state
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
raise AddonsError(
f"Can't save meta for {self.slug}", _LOGGER.error
) from err
_LOGGER.error("Can't save meta for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
# Store AppArmor Profile
if apparmor_profile:
@@ -1304,9 +1345,7 @@ class Addon(AddonModel):
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
raise AddonsError(
"Can't backup AppArmor profile", _LOGGER.error
) from err
raise BackupRestoreUnknownError() from err
# Write tarfile
with tar_file as backup:
@@ -1360,7 +1399,8 @@ class Addon(AddonModel):
)
_LOGGER.info("Finish backup for addon %s", self.slug)
except (tarfile.TarError, OSError, AddFileError) as err:
raise AddonsError(f"Can't write tarfile: {err}", _LOGGER.error) from err
_LOGGER.error("Can't write backup tarfile for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
finally:
if was_running:
wait_for_start = await self.end_backup()
@@ -1402,28 +1442,24 @@ class Addon(AddonModel):
try:
tmp, data = await self.sys_run_in_executor(_extract_tarfile)
except tarfile.TarError as err:
raise AddonsError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
) from err
_LOGGER.error("Can't extract backup tarfile for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
except ConfigurationFileError as err:
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
try:
# Validate
try:
data = SCHEMA_ADDON_BACKUP(data)
except vol.Invalid as err:
raise AddonsError(
f"Can't validate {self.slug}, backup data: {humanize_error(data, err)}",
raise AddonBackupMetadataInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(data, err),
) from err
# If available
if not self._available(data[ATTR_SYSTEM]):
raise AddonNotSupportedError(
f"Add-on {self.slug} is not available for this platform",
_LOGGER.error,
)
# Validate availability. Raises if not
self._validate_availability(data[ATTR_SYSTEM], logger=_LOGGER.error)
# Restore local add-on information
_LOGGER.info("Restore config for addon %s", self.slug)
@@ -1482,9 +1518,10 @@ class Addon(AddonModel):
try:
await self.sys_run_in_executor(_restore_data)
except shutil.Error as err:
raise AddonsError(
f"Can't restore origin data: {err}", _LOGGER.error
) from err
_LOGGER.error(
"Can't restore origin data for %s: %s", self.slug, err
)
raise BackupRestoreUnknownError() from err
# Restore AppArmor
profile_file = Path(tmp.name, "apparmor.txt")
@@ -1495,10 +1532,11 @@ class Addon(AddonModel):
)
except HostAppArmorError as err:
_LOGGER.error(
"Can't restore AppArmor profile for add-on %s",
"Can't restore AppArmor profile for add-on %s: %s",
self.slug,
err,
)
raise AddonsError() from err
raise BackupRestoreUnknownError() from err
finally:
# Is add-on loaded
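
Across this file the pattern is the same: log the specifics at the call site, then raise a typed exception that carries only structured fields. A standalone sketch of that pattern with stand-in exception classes (the real ones live in supervisor.exceptions and carry more fields):

import logging
from collections.abc import Callable

_LOGGER = logging.getLogger(__name__)

class DockerError(Exception):
    """Stand-in for the Docker-layer error."""

class AddonUnknownError(Exception):
    """Stand-in typed error carrying structured fields instead of a free-form message."""

    def __init__(self, *, addon: str) -> None:
        super().__init__(f"Unknown error with add-on {addon}")
        self.addon = addon

def write_stdin(slug: str, send: Callable[[], None]) -> None:
    """Log failure details locally and raise a typed error for the API layer."""
    try:
        send()
    except DockerError as err:
        # Detailed message stays in the log; the API layer only sees the typed error
        _LOGGER.error("Could not write stdin to container for addon %s: %s", slug, err)
        raise AddonUnknownError(addon=slug) from err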

View File

@@ -2,7 +2,10 @@
from __future__ import annotations
import base64
from functools import cached_property
import json
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any
@@ -12,20 +15,31 @@ from ..const import (
ATTR_ARGS,
ATTR_BUILD_FROM,
ATTR_LABELS,
ATTR_PASSWORD,
ATTR_SQUASH,
ATTR_USERNAME,
FILE_SUFFIX_CONFIGURATION,
META_ADDON,
SOCKET_DOCKER,
CpuArch,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY
from ..docker.interface import MAP_ARCH
from ..exceptions import ConfigurationFileError, HassioArchNotFound
from ..exceptions import (
AddonBuildArchitectureNotSupportedError,
AddonBuildDockerfileMissingError,
ConfigurationFileError,
HassioArchNotFound,
)
from ..utils.common import FileConfiguration, find_one_filetype
from .validate import SCHEMA_BUILD_CONFIG
if TYPE_CHECKING:
from .manager import AnyAddon
_LOGGER: logging.Logger = logging.getLogger(__name__)
class AddonBuild(FileConfiguration, CoreSysAttributes):
"""Handle build options for add-ons."""
@@ -62,7 +76,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
raise RuntimeError()
@cached_property
def arch(self) -> str:
def arch(self) -> CpuArch:
"""Return arch of the add-on."""
return self.sys_arch.match([self.addon.arch])
@@ -106,7 +120,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
return self.addon.path_location.joinpath("Dockerfile")
async def is_valid(self) -> bool:
async def is_valid(self) -> None:
"""Return true if the build env is valid."""
def build_is_valid() -> bool:
@@ -118,12 +132,58 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
)
try:
return await self.sys_run_in_executor(build_is_valid)
if not await self.sys_run_in_executor(build_is_valid):
raise AddonBuildDockerfileMissingError(
_LOGGER.error, addon=self.addon.slug
)
except HassioArchNotFound:
return False
raise AddonBuildArchitectureNotSupportedError(
_LOGGER.error,
addon=self.addon.slug,
addon_arch_list=self.addon.supported_arch,
system_arch_list=[arch.value for arch in self.sys_arch.supported],
) from None
def get_docker_config_json(self) -> str | None:
"""Generate Docker config.json content with registry credentials for base image.
Returns a JSON string with registry credentials for the base image's registry,
or None if no matching registry is configured.
Raises:
HassioArchNotFound: If the add-on is not supported on the current architecture.
"""
# Early return before accessing base_image to avoid unnecessary arch lookup
if not self.sys_docker.config.registries:
return None
registry = self.sys_docker.config.get_registry_for_image(self.base_image)
if not registry:
return None
stored = self.sys_docker.config.registries[registry]
username = stored[ATTR_USERNAME]
password = stored[ATTR_PASSWORD]
# Docker config.json uses base64-encoded "username:password" for auth
auth_string = base64.b64encode(f"{username}:{password}".encode()).decode()
# Use the actual registry URL for the key
# Docker Hub uses "https://index.docker.io/v1/" as the key
# Support both docker.io (official) and hub.docker.com (legacy)
registry_key = (
"https://index.docker.io/v1/"
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY)
else registry
)
config = {"auths": {registry_key: {"auth": auth_string}}}
return json.dumps(config)
def get_docker_args(
self, version: AwesomeVersion, image_tag: str
self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
@@ -172,12 +232,24 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
self.addon.path_location
)
volumes = {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
}
# Mount Docker config with registry credentials if available
if docker_config_path:
docker_config_extern_path = self.sys_config.local_to_extern_path(
docker_config_path
)
volumes[docker_config_extern_path] = {
"bind": "/root/.docker/config.json",
"mode": "ro",
}
return {
"command": build_cmd,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"volumes": volumes,
"working_dir": "/addon",
}
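
get_docker_config_json above emits a standard Docker config.json: an auths map keyed by registry (with Docker Hub normalized to https://index.docker.io/v1/) whose auth field is the base64 encoding of "username:password". A standalone sketch of that encoding, using made-up example credentials:

import base64
import json

def docker_config_json(registry: str, username: str, password: str) -> str:
    """Build a Docker config.json string with credentials for one registry."""
    # Docker config.json stores auth as base64("username:password")
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    # Docker Hub is keyed as "https://index.docker.io/v1/"; other registries use their hostname
    key = (
        "https://index.docker.io/v1/"
        if registry in ("docker.io", "hub.docker.com")
        else registry
    )
    return json.dumps({"auths": {key: {"auth": auth}}})

# Prints a document of the form {"auths": {"ghcr.io": {"auth": "<base64>"}}}
print(docker_config_json("ghcr.io", "example-user", "example-token"))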

View File

@@ -87,6 +87,7 @@ from ..const import (
AddonBootConfig,
AddonStage,
AddonStartup,
CpuArch,
)
from ..coresys import CoreSys
from ..docker.const import Capabilities
@@ -315,12 +316,12 @@ class AddonModel(JobGroup, ABC):
@property
def panel_title(self) -> str:
"""Return panel icon for Ingress frame."""
"""Return panel title for Ingress frame."""
return self.data.get(ATTR_PANEL_TITLE, self.name)
@property
def panel_admin(self) -> str:
"""Return panel icon for Ingress frame."""
def panel_admin(self) -> bool:
"""Return if panel is only available for admin users."""
return self.data[ATTR_PANEL_ADMIN]
@property
@@ -488,7 +489,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_DEVICETREE]
@property
def with_tmpfs(self) -> str | None:
def with_tmpfs(self) -> bool:
"""Return if tmp is in memory of add-on."""
return self.data[ATTR_TMPFS]
@@ -508,7 +509,7 @@ class AddonModel(JobGroup, ABC):
return self.data[ATTR_VIDEO]
@property
def homeassistant_version(self) -> str | None:
def homeassistant_version(self) -> AwesomeVersion | None:
"""Return min Home Assistant version they needed by Add-on."""
return self.data.get(ATTR_HOMEASSISTANT)
@@ -548,7 +549,7 @@ class AddonModel(JobGroup, ABC):
return self.data.get(ATTR_MACHINE, [])
@property
def arch(self) -> str:
def arch(self) -> CpuArch:
"""Return architecture to use for the addon's image."""
if ATTR_IMAGE in self.data:
return self.sys_arch.match(self.data[ATTR_ARCH])

View File

@@ -813,6 +813,10 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/docker/info", api_docker.info),
web.post(
"/docker/migrate-storage-driver",
api_docker.migrate_docker_storage_driver,
),
web.post("/docker/options", api_docker.options),
web.get("/docker/registries", api_docker.registries),
web.post("/docker/registries", api_docker.create_registry),

View File

@@ -100,6 +100,9 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..docker.stats import DockerStats
from ..exceptions import (
AddonBootConfigCannotChangeError,
AddonConfigurationInvalidError,
AddonNotSupportedWriteStdinError,
APIAddonNotInstalled,
APIError,
APIForbidden,
@@ -125,6 +128,7 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
vol.Optional(ATTR_INGRESS_PANEL): vol.Boolean(),
vol.Optional(ATTR_WATCHDOG): vol.Boolean(),
vol.Optional(ATTR_OPTIONS): vol.Maybe(dict),
}
)
@@ -300,19 +304,20 @@ class APIAddons(CoreSysAttributes):
# Update secrets for validation
await self.sys_homeassistant.secrets.reload()
# Extend schema with add-on specific validation
addon_schema = SCHEMA_OPTIONS.extend(
{vol.Optional(ATTR_OPTIONS): vol.Maybe(addon.schema)}
)
# Validate/Process Body
body = await api_validate(addon_schema, request)
body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_OPTIONS in body:
addon.options = body[ATTR_OPTIONS]
try:
addon.options = addon.schema(body[ATTR_OPTIONS])
except vol.Invalid as ex:
raise AddonConfigurationInvalidError(
addon=addon.slug,
validation_error=humanize_error(body[ATTR_OPTIONS], ex),
) from None
if ATTR_BOOT in body:
if addon.boot_config == AddonBootConfig.MANUAL_ONLY:
raise APIError(
f"Addon {addon.slug} boot option is set to {addon.boot_config} so it cannot be changed"
raise AddonBootConfigCannotChangeError(
addon=addon.slug, boot_config=addon.boot_config.value
)
addon.boot = body[ATTR_BOOT]
if ATTR_AUTO_UPDATE in body:
@@ -476,7 +481,7 @@ class APIAddons(CoreSysAttributes):
"""Write to stdin of add-on."""
addon = self.get_addon_for_request(request)
if not addon.with_stdin:
raise APIError(f"STDIN not supported the {addon.slug} add-on")
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=addon.slug)
data = await request.read()
await asyncio.shield(addon.write_stdin(data))

View File

@@ -15,7 +15,7 @@ import voluptuous as vol
from ..addons.addon import Addon
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
from .const import (
ATTR_GROUP_IDS,
ATTR_IS_ACTIVE,
@@ -69,7 +69,9 @@ class APIAuth(CoreSysAttributes):
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
raise AuthInvalidNonStringValueError(
_LOGGER.error, headers=REALM_HEADER
) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)

View File

@@ -4,6 +4,7 @@ import logging
from typing import Any
from aiohttp import web
from awesomeversion import AwesomeVersion
import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
@@ -16,6 +17,7 @@ from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRIES,
ATTR_STORAGE,
ATTR_STORAGE_DRIVER,
ATTR_USERNAME,
ATTR_VERSION,
)
@@ -42,6 +44,12 @@ SCHEMA_OPTIONS = vol.Schema(
}
)
SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
{
vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs", "overlay2"]),
}
)
class APIDocker(CoreSysAttributes):
"""Handle RESTful API for Docker configuration."""
@@ -123,3 +131,27 @@ class APIDocker(CoreSysAttributes):
del self.sys_docker.config.registries[hostname]
await self.sys_docker.config.save_data()
@api_process
async def migrate_docker_storage_driver(self, request: web.Request) -> None:
"""Migrate Docker storage driver."""
if (
not self.coresys.os.available
or not self.coresys.os.version
or self.coresys.os.version < AwesomeVersion("17.0.dev0")
):
raise APINotFound(
"Home Assistant OS 17.0 or newer required for Docker storage driver migration"
)
body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
await self.sys_dbus.agent.system.migrate_docker_storage_driver(
body[ATTR_STORAGE_DRIVER]
)
_LOGGER.info("Host system reboot required to apply Docker storage migration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)
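
The new route and schema above define the full request contract: a POST to /docker/migrate-storage-driver with a body whose storage_driver is either "overlayfs" or "overlay2", answered with 404 unless Home Assistant OS 17.0 or newer is running. A minimal client sketch, assuming the usual Supervisor base URL and bearer-token auth (both assumptions, not shown in this compare):

import asyncio
import aiohttp

async def migrate_storage_driver(base_url: str, token: str) -> None:
    """POST the storage driver migration request to the Supervisor API."""
    async with aiohttp.ClientSession(
        headers={"Authorization": f"Bearer {token}"}
    ) as session:
        # The schema accepts "overlayfs" or "overlay2"; anything else is rejected by vol.In
        async with session.post(
            f"{base_url}/docker/migrate-storage-driver",
            json={"storage_driver": "overlayfs"},
        ) as resp:
            # The handler raises APINotFound (HTTP 404) unless Home Assistant OS >= 17.0
            resp.raise_for_status()

# asyncio.run(migrate_storage_driver("http://supervisor", "SUPERVISOR_TOKEN"))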

View File

@@ -252,6 +252,9 @@ class APIHost(CoreSysAttributes):
if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
log_formatter = LogFormatter.VERBOSE
if "no_colors" in request.query:
no_colors = True
if "lines" in request.query:
lines = request.query.get("lines", DEFAULT_LINES)
try:
@@ -340,10 +343,14 @@ class APIHost(CoreSysAttributes):
disk = self.sys_hardware.disk
total, used, _ = await self.sys_run_in_executor(
total, _, free = await self.sys_run_in_executor(
disk.disk_usage, self.sys_config.path_supervisor
)
# Calculating used space as total minus free ensures reserved space is
# included in the used-space reporting.
used = total - free
known_paths = await self.sys_run_in_executor(
disk.get_dir_sizes,
{

View File

@@ -53,7 +53,7 @@ from ..const import (
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..exceptions import APIError, APIForbidden, APINotFound, StoreAddonNotFoundError
from ..store.addon import AddonStore
from ..store.repository import Repository
from ..store.validate import validate_repository
@@ -104,7 +104,7 @@ class APIStore(CoreSysAttributes):
addon_slug: str = request.match_info["addon"]
if not (addon := self.sys_addons.get(addon_slug)):
raise APINotFound(f"Addon {addon_slug} does not exist")
raise StoreAddonNotFoundError(addon=addon_slug)
if installed and not addon.is_installed:
raise APIError(f"Addon {addon_slug} is not installed")
@@ -112,7 +112,7 @@ class APIStore(CoreSysAttributes):
if not installed and addon.is_installed:
addon = cast(Addon, addon)
if not addon.addon_store:
raise APINotFound(f"Addon {addon_slug} does not exist in the store")
raise StoreAddonNotFoundError(addon=addon_slug)
return addon.addon_store
return addon

View File

@@ -1,7 +1,7 @@
"""Init file for Supervisor util for RESTful API."""
import asyncio
from collections.abc import Callable
from collections.abc import Callable, Mapping
import json
from typing import Any, cast
@@ -26,7 +26,7 @@ from ..const import (
RESULT_OK,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIError, BackupFileNotFoundError, DockerAPIError, HassioError
from ..exceptions import APIError, DockerAPIError, HassioError
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
@@ -63,16 +63,14 @@ def json_loads(data: Any) -> dict[str, Any]:
def api_process(method):
"""Wrap function with true/false calls to rest api."""
async def wrap_api(
api: CoreSysAttributes, *args, **kwargs
) -> web.Response | web.StreamResponse:
async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
"""Return API information."""
try:
answer = await method(api, *args, **kwargs)
except BackupFileNotFoundError as err:
return api_return_error(err, status=404)
answer = await method(*args, **kwargs)
except APIError as err:
return api_return_error(err, status=err.status, job_id=err.job_id)
return api_return_error(
err, status=err.status, job_id=err.job_id, headers=err.headers
)
except HassioError as err:
return api_return_error(err)
@@ -109,12 +107,10 @@ def api_process_raw(content, *, error_type=None):
def wrap_method(method):
"""Wrap function with raw output to rest api."""
async def wrap_api(
api: CoreSysAttributes, *args, **kwargs
) -> web.Response | web.StreamResponse:
async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
"""Return api information."""
try:
msg_data = await method(api, *args, **kwargs)
msg_data = await method(*args, **kwargs)
except APIError as err:
return api_return_error(
err,
@@ -143,6 +139,7 @@ def api_return_error(
error_type: str | None = None,
status: int = 400,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> web.Response:
"""Return an API error message."""
@@ -155,10 +152,15 @@ def api_return_error(
match error_type:
case const.CONTENT_TYPE_TEXT:
return web.Response(body=message, content_type=error_type, status=status)
return web.Response(
body=message, content_type=error_type, status=status, headers=headers
)
case const.CONTENT_TYPE_BINARY:
return web.Response(
body=message.encode(), content_type=error_type, status=status
body=message.encode(),
content_type=error_type,
status=status,
headers=headers,
)
case _:
result: dict[str, Any] = {
@@ -176,6 +178,7 @@ def api_return_error(
result,
status=status,
dumps=json_dumps,
headers=headers,
)
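
api_process now forwards err.headers into api_return_error, so an APIError subclass can attach response headers such as the WWW-Authenticate realm used by the auth endpoint. A self-contained sketch of that flow, using stand-in classes rather than the real APIError:

from collections.abc import Mapping

class SketchAPIError(Exception):
    """Stand-in error carrying an HTTP status and optional extra response headers."""

    def __init__(
        self,
        message: str,
        status: int = 400,
        *,
        headers: Mapping[str, str] | None = None,
    ) -> None:
        super().__init__(message)
        self.status = status
        self.headers = headers

def error_response(err: SketchAPIError) -> tuple[int, dict[str, str], str]:
    """Build (status, headers, body) for an error, mirroring api_return_error above."""
    return err.status, dict(err.headers or {}), str(err)

status, headers, body = error_response(
    SketchAPIError(
        "Invalid credentials",
        status=401,
        headers={"WWW-Authenticate": 'Basic realm="Home Assistant"'},
    )
)
print(status, headers, body)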

View File

@@ -4,6 +4,7 @@ import logging
from pathlib import Path
import platform
from .const import CpuArch
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import ConfigurationFileError, HassioArchNotFound
from .utils.json import read_json_file
@@ -12,38 +13,40 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
ARCH_JSON: Path = Path(__file__).parent.joinpath("data/arch.json")
MAP_CPU = {
"armv7": "armv7",
"armv6": "armhf",
"armv8": "aarch64",
"aarch64": "aarch64",
"i686": "i386",
"x86_64": "amd64",
MAP_CPU: dict[str, CpuArch] = {
"armv7": CpuArch.ARMV7,
"armv6": CpuArch.ARMHF,
"armv8": CpuArch.AARCH64,
"aarch64": CpuArch.AARCH64,
"i686": CpuArch.I386,
"x86_64": CpuArch.AMD64,
}
class CpuArch(CoreSysAttributes):
class CpuArchManager(CoreSysAttributes):
"""Manage available architectures."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize CPU Architecture handler."""
self.coresys = coresys
self._supported_arch: list[str] = []
self._supported_set: set[str] = set()
self._default_arch: str
self._supported_arch: list[CpuArch] = []
self._supported_set: set[CpuArch] = set()
self._default_arch: CpuArch
@property
def default(self) -> str:
def default(self) -> CpuArch:
"""Return system default arch."""
return self._default_arch
@property
def supervisor(self) -> str:
def supervisor(self) -> CpuArch:
"""Return supervisor arch."""
return self.sys_supervisor.arch or self._default_arch
if self.sys_supervisor.arch:
return CpuArch(self.sys_supervisor.arch)
return self._default_arch
@property
def supported(self) -> list[str]:
def supported(self) -> list[CpuArch]:
"""Return support arch by CPU/Machine."""
return self._supported_arch
@@ -65,7 +68,7 @@ class CpuArch(CoreSysAttributes):
return
# Use configs from arch.json
self._supported_arch.extend(arch_data[self.sys_machine])
self._supported_arch.extend(CpuArch(a) for a in arch_data[self.sys_machine])
self._default_arch = self.supported[0]
# Make sure native support is in supported list
@@ -78,14 +81,14 @@ class CpuArch(CoreSysAttributes):
"""Return True if there is a supported arch by this platform."""
return not self._supported_set.isdisjoint(arch_list)
def match(self, arch_list: list[str]) -> str:
def match(self, arch_list: list[str]) -> CpuArch:
"""Return best match for this CPU/Platform."""
for self_arch in self.supported:
if self_arch in arch_list:
return self_arch
raise HassioArchNotFound()
def detect_cpu(self) -> str:
def detect_cpu(self) -> CpuArch:
"""Return the arch type of local CPU."""
cpu = platform.machine()
for check, value in MAP_CPU.items():
@@ -96,9 +99,10 @@ class CpuArch(CoreSysAttributes):
"Unknown CPU architecture %s, falling back to Supervisor architecture.",
cpu,
)
return self.sys_supervisor.arch
return CpuArch(self.sys_supervisor.arch)
_LOGGER.warning(
"Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
cpu,
)
return cpu
# Return the cpu string as-is, wrapped in CpuArch (may fail if invalid)
return CpuArch(cpu)
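
detect_cpu now returns a CpuArch member by looking the platform.machine() string up in MAP_CPU and falling back when the CPU is unknown. A standalone sketch with a stand-in enum; the prefix match on the CPU string is an assumption, since the loop body sits outside this hunk:

from enum import StrEnum
import platform

class CpuArch(StrEnum):
    """Stand-in for the CpuArch enum in supervisor.const."""
    ARMHF = "armhf"
    ARMV7 = "armv7"
    AARCH64 = "aarch64"
    I386 = "i386"
    AMD64 = "amd64"

MAP_CPU: dict[str, CpuArch] = {
    "armv7": CpuArch.ARMV7,
    "armv6": CpuArch.ARMHF,
    "armv8": CpuArch.AARCH64,
    "aarch64": CpuArch.AARCH64,
    "i686": CpuArch.I386,
    "x86_64": CpuArch.AMD64,
}

def detect_cpu(fallback: CpuArch) -> CpuArch:
    """Map the local CPU string to a CpuArch, falling back if unknown."""
    cpu = platform.machine()
    for check, value in MAP_CPU.items():
        # Prefix match assumed; the real loop body is not shown in this hunk
        if cpu.startswith(check):
            return value
    # Unknown CPU string: fall back (the Supervisor arch in the real code)
    return fallback

print(detect_cpu(CpuArch.AMD64))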

View File

@@ -9,8 +9,10 @@ from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthError,
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthListUsersNoneResponseError,
AuthPasswordResetError,
HomeAssistantAPIError,
HomeAssistantWSError,
@@ -83,10 +85,8 @@ class Auth(FileConfiguration, CoreSysAttributes):
self, addon: Addon, username: str | None, password: str | None
) -> bool:
"""Check username login."""
if password is None:
raise AuthError("None as password is not supported!", _LOGGER.error)
if username is None:
raise AuthError("None as username is not supported!", _LOGGER.error)
if username is None or password is None:
raise AuthInvalidNonStringValueError(_LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
@@ -137,7 +137,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
finally:
self._running.pop(username, None)
raise AuthError()
raise AuthHomeAssistantAPIValidationError()
async def change_password(self, username: str, password: str) -> None:
"""Change user password login."""
@@ -155,7 +155,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
except HomeAssistantAPIError as err:
_LOGGER.error("Can't request password reset on Home Assistant: %s", err)
raise AuthPasswordResetError()
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
"""List users on the Home Assistant instance."""
@@ -166,15 +166,12 @@ class Auth(FileConfiguration, CoreSysAttributes):
{ATTR_TYPE: "config/auth/list"}
)
except HomeAssistantWSError as err:
raise AuthListUsersError(
f"Can't request listing users on Home Assistant: {err}", _LOGGER.error
) from err
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
if users is not None:
return users
raise AuthListUsersError(
"Can't request listing users on Home Assistant!", _LOGGER.error
)
raise AuthListUsersNoneResponseError(_LOGGER.error)
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:

View File

@@ -628,9 +628,6 @@ class Backup(JobGroup):
if start_task := await self._addon_save(addon):
start_tasks.append(start_task)
except BackupError as err:
err = BackupError(
f"Can't backup add-on {addon.slug}: {str(err)}", _LOGGER.error
)
self.sys_jobs.current.capture_error(err)
return start_tasks

View File

@@ -13,7 +13,7 @@ from colorlog import ColoredFormatter
from .addons.manager import AddonManager
from .api import RestAPI
from .arch import CpuArch
from .arch import CpuArchManager
from .auth import Auth
from .backups.manager import BackupManager
from .bus import Bus
@@ -71,7 +71,7 @@ async def initialize_coresys() -> CoreSys:
coresys.jobs = await JobManager(coresys).load_config()
coresys.core = await Core(coresys).post_init()
coresys.plugins = await PluginManager(coresys).load_config()
coresys.arch = CpuArch(coresys)
coresys.arch = CpuArchManager(coresys)
coresys.auth = await Auth(coresys).load_config()
coresys.updater = await Updater(coresys).load_config()
coresys.api = RestAPI(coresys)

View File

@@ -328,6 +328,7 @@ ATTR_STATE = "state"
ATTR_STATIC = "static"
ATTR_STDIN = "stdin"
ATTR_STORAGE = "storage"
ATTR_STORAGE_DRIVER = "storage_driver"
ATTR_SUGGESTIONS = "suggestions"
ATTR_SUPERVISOR = "supervisor"
ATTR_SUPERVISOR_INTERNET = "supervisor_internet"

View File

@@ -29,7 +29,7 @@ from .const import (
if TYPE_CHECKING:
from .addons.manager import AddonManager
from .api import RestAPI
from .arch import CpuArch
from .arch import CpuArchManager
from .auth import Auth
from .backups.manager import BackupManager
from .bus import Bus
@@ -78,7 +78,7 @@ class CoreSys:
# Internal objects pointers
self._docker: DockerAPI | None = None
self._core: Core | None = None
self._arch: CpuArch | None = None
self._arch: CpuArchManager | None = None
self._auth: Auth | None = None
self._homeassistant: HomeAssistant | None = None
self._supervisor: Supervisor | None = None
@@ -266,17 +266,17 @@ class CoreSys:
self._plugins = value
@property
def arch(self) -> CpuArch:
"""Return CpuArch object."""
def arch(self) -> CpuArchManager:
"""Return CpuArchManager object."""
if self._arch is None:
raise RuntimeError("CpuArch not set!")
raise RuntimeError("CpuArchManager not set!")
return self._arch
@arch.setter
def arch(self, value: CpuArch) -> None:
"""Set a CpuArch object."""
def arch(self, value: CpuArchManager) -> None:
"""Set a CpuArchManager object."""
if self._arch:
raise RuntimeError("CpuArch already set!")
raise RuntimeError("CpuArchManager already set!")
self._arch = value
@property
@@ -733,8 +733,8 @@ class CoreSysAttributes:
return self.coresys.plugins
@property
def sys_arch(self) -> CpuArch:
"""Return CpuArch object."""
def sys_arch(self) -> CpuArchManager:
"""Return CpuArchManager object."""
return self.coresys.arch
@property

View File

@@ -15,3 +15,8 @@ class System(DBusInterface):
async def schedule_wipe_device(self) -> bool:
"""Schedule a factory reset on next system boot."""
return await self.connected_dbus.System.call("schedule_wipe_device")
@dbus_connected
async def migrate_docker_storage_driver(self, backend: str) -> None:
"""Migrate Docker storage driver."""
await self.connected_dbus.System.call("migrate_docker_storage_driver", backend)

View File

@@ -250,7 +250,7 @@ class ConnectionType(StrEnum):
WIRELESS = "802-11-wireless"
class ConnectionStateType(IntEnum):
class ConnectionState(IntEnum):
"""Connection states.
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMActiveConnectionState
@@ -306,6 +306,8 @@ class DeviceType(IntEnum):
VLAN = 11
TUN = 16
VETH = 20
WIREGUARD = 29
LOOPBACK = 32
class WirelessMethodType(IntEnum):

View File

@@ -115,7 +115,7 @@ class DBusManager(CoreSysAttributes):
async def load(self) -> None:
"""Connect interfaces to D-Bus."""
if not SOCKET_DBUS.exists():
if not await self.sys_run_in_executor(SOCKET_DBUS.exists):
_LOGGER.error(
"No D-Bus support on Host. Disabled any kind of host control!"
)

View File

@@ -134,9 +134,10 @@ class NetworkManager(DBusInterfaceProxy):
async def check_connectivity(self, *, force: bool = False) -> ConnectivityState:
"""Check the connectivity of the host."""
if force:
return await self.connected_dbus.call("check_connectivity")
else:
return await self.connected_dbus.get("connectivity")
return ConnectivityState(
await self.connected_dbus.call("check_connectivity")
)
return ConnectivityState(await self.connected_dbus.get("connectivity"))
async def connect(self, bus: MessageBus) -> None:
"""Connect to system's D-Bus."""

View File

@@ -90,8 +90,8 @@ class Ip4Properties(IpProperties):
class Ip6Properties(IpProperties):
"""IPv6 properties object for Network Manager."""
addr_gen_mode: int
ip6_privacy: int
addr_gen_mode: int | None
ip6_privacy: int | None
dns: list[bytes] | None

View File

@@ -16,8 +16,8 @@ from ..const import (
DBUS_IFACE_CONNECTION_ACTIVE,
DBUS_NAME_NM,
DBUS_OBJECT_BASE,
ConnectionState,
ConnectionStateFlags,
ConnectionStateType,
)
from ..interface import DBusInterfaceProxy, dbus_property
from ..utils import dbus_connected
@@ -67,9 +67,9 @@ class NetworkConnection(DBusInterfaceProxy):
@property
@dbus_property
def state(self) -> ConnectionStateType:
def state(self) -> ConnectionState:
"""Return the state of the connection."""
return self.properties[DBUS_ATTR_STATE]
return ConnectionState(self.properties[DBUS_ATTR_STATE])
@property
def state_flags(self) -> set[ConnectionStateFlags]:

View File

@@ -1,5 +1,6 @@
"""NetworkInterface object for Network Manager."""
import logging
from typing import Any
from dbus_fast.aio.message_bus import MessageBus
@@ -23,6 +24,8 @@ from .connection import NetworkConnection
from .setting import NetworkSetting
from .wireless import NetworkWireless
_LOGGER: logging.Logger = logging.getLogger(__name__)
class NetworkInterface(DBusInterfaceProxy):
"""NetworkInterface object represents Network Manager Device objects.
@@ -57,7 +60,15 @@ class NetworkInterface(DBusInterfaceProxy):
@dbus_property
def type(self) -> DeviceType:
"""Return interface type."""
return self.properties[DBUS_ATTR_DEVICE_TYPE]
try:
return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
except ValueError:
_LOGGER.debug(
"Unknown device type %s for %s, treating as UNKNOWN",
self.properties[DBUS_ATTR_DEVICE_TYPE],
self.object_path,
)
return DeviceType.UNKNOWN
@property
@dbus_property

View File

@@ -16,7 +16,11 @@ from ....host.const import (
InterfaceType,
MulticastDnsMode,
)
from ...const import MulticastDnsValue
from ...const import (
InterfaceAddrGenMode as NMInterfaceAddrGenMode,
InterfaceIp6Privacy as NMInterfaceIp6Privacy,
MulticastDnsValue,
)
from .. import NetworkManager
from . import (
CONF_ATTR_802_ETHERNET,
@@ -118,24 +122,41 @@ def _get_ipv6_connection_settings(
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "auto")
if ipv6setting:
if ipv6setting.addr_gen_mode == InterfaceAddrGenMode.EUI64:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 0)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.EUI64.value
)
elif (
not support_addr_gen_mode_defaults
or ipv6setting.addr_gen_mode == InterfaceAddrGenMode.STABLE_PRIVACY
):
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 1)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.STABLE_PRIVACY.value
)
elif ipv6setting.addr_gen_mode == InterfaceAddrGenMode.DEFAULT_OR_EUI64:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 2)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value
)
else:
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 3)
ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant(
"i", NMInterfaceAddrGenMode.DEFAULT.value
)
if ipv6setting.ip6_privacy == InterfaceIp6Privacy.DISABLED:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 0)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.DISABLED.value
)
elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 1)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value
)
elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 2)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.ENABLED.value
)
else:
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", -1)
ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant(
"i", NMInterfaceIp6Privacy.DEFAULT.value
)
elif ipv6setting.method == InterfaceMethod.DISABLED:
ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "link-local")
elif ipv6setting.method == InterfaceMethod.STATIC:
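
The numeric Variant values replaced above pin down what the new NMInterfaceAddrGenMode and NMInterfaceIp6Privacy constants must evaluate to. A sketch of those enums, with values read off the replaced literals (their real definitions are not part of this compare):

from enum import IntEnum

class NMInterfaceAddrGenMode(IntEnum):
    """ipv6.addr-gen-mode values written into the connection settings."""
    EUI64 = 0
    STABLE_PRIVACY = 1
    DEFAULT_OR_EUI64 = 2
    DEFAULT = 3

class NMInterfaceIp6Privacy(IntEnum):
    """ipv6.ip6-privacy values written into the connection settings."""
    DEFAULT = -1
    DISABLED = 0
    ENABLED_PREFER_PUBLIC = 1
    ENABLED = 2

# e.g. Variant("i", NMInterfaceAddrGenMode.STABLE_PRIVACY.value) == Variant("i", 1)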

View File

@@ -75,7 +75,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def current_dns_server(
self,
) -> list[tuple[int, DNSAddressFamily, bytes]] | None:
) -> tuple[int, DNSAddressFamily, bytes] | None:
"""Return current DNS server."""
return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER]
@@ -83,7 +83,7 @@ class Resolved(DBusInterfaceProxy):
@dbus_property
def current_dns_server_ex(
self,
) -> list[tuple[int, DNSAddressFamily, bytes, int, str]] | None:
) -> tuple[int, DNSAddressFamily, bytes, int, str] | None:
"""Return current DNS server including port and server name."""
return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER_EX]

View File

@@ -70,7 +70,7 @@ class SystemdUnit(DBusInterface):
@dbus_connected
async def get_active_state(self) -> UnitActiveState:
"""Get active state of the unit."""
return await self.connected_dbus.Unit.get("active_state")
return UnitActiveState(await self.connected_dbus.Unit.get("active_state"))
@dbus_connected
def properties_changed(self) -> DBusSignalWrapper:

View File

@@ -9,7 +9,7 @@ from dbus_fast import Variant
from .const import EncryptType, EraseMode
def udisks2_bytes_to_path(path_bytes: bytearray) -> Path:
def udisks2_bytes_to_path(path_bytes: bytes) -> Path:
"""Convert bytes to path object without null character on end."""
if path_bytes and path_bytes[-1] == 0:
return Path(path_bytes[:-1].decode())
@@ -73,7 +73,7 @@ FormatOptionsDataType = TypedDict(
{
"label": NotRequired[str],
"take-ownership": NotRequired[bool],
"encrypt.passphrase": NotRequired[bytearray],
"encrypt.passphrase": NotRequired[bytes],
"encrypt.type": NotRequired[str],
"erase": NotRequired[str],
"update-partition-type": NotRequired[bool],

View File

@@ -2,11 +2,14 @@
from __future__ import annotations
from collections.abc import Awaitable
from contextlib import suppress
from ipaddress import IPv4Address
import logging
import os
from pathlib import Path
from socket import SocketIO
import tempfile
from typing import TYPE_CHECKING, cast
import aiodocker
@@ -33,6 +36,7 @@ from ..coresys import CoreSys
from ..exceptions import (
CoreDNSError,
DBusError,
DockerBuildError,
DockerError,
DockerJobError,
DockerNotFound,
@@ -680,13 +684,12 @@ class DockerAddon(DockerInterface):
async def _build(self, version: AwesomeVersion, image: str | None = None) -> None:
"""Build a Docker container."""
build_env = await AddonBuild(self.coresys, self.addon).load_config()
if not await build_env.is_valid():
_LOGGER.error("Invalid build environment, can't build this add-on!")
raise DockerError()
# Check if the build environment is valid, raises if not
await build_env.is_valid()
_LOGGER.info("Starting build for %s:%s", self.image, version)
def build_image():
def build_image() -> tuple[str, str]:
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
@@ -705,12 +708,38 @@ class DockerAddon(DockerInterface):
with suppress(docker.errors.NotFound):
self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(version, addon_image_tag),
)
# Generate Docker config with registry credentials for base image if needed
docker_config_path: Path | None = None
docker_config_content = build_env.get_docker_config_json()
temp_dir: tempfile.TemporaryDirectory | None = None
try:
if docker_config_content:
# Create temporary directory for docker config
temp_dir = tempfile.TemporaryDirectory(
prefix="hassio_build_", dir=self.sys_config.path_tmp
)
docker_config_path = Path(temp_dir.name) / "config.json"
docker_config_path.write_text(
docker_config_content, encoding="utf-8"
)
_LOGGER.debug(
"Created temporary Docker config for build at %s",
docker_config_path,
)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(
version, addon_image_tag, docker_config_path
),
)
finally:
# Clean up temporary directory
if temp_dir:
temp_dir.cleanup()
logs = result.output.decode("utf-8")
@@ -733,8 +762,9 @@ class DockerAddon(DockerInterface):
requests.RequestException,
aiodocker.DockerError,
) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
raise DockerError() from err
raise DockerBuildError(
f"Can't build {self.image}:{version}: {err!s}", _LOGGER.error
) from err
_LOGGER.info("Build %s:%s done", self.image, version)
@@ -792,12 +822,9 @@ class DockerAddon(DockerInterface):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def write_stdin(self, data: bytes) -> None:
def write_stdin(self, data: bytes) -> Awaitable[None]:
"""Write to add-on stdin."""
if not await self.is_running():
raise DockerError()
await self.sys_run_in_executor(self._write_stdin, data)
return self.sys_run_in_executor(self._write_stdin, data)
def _write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.
@@ -807,7 +834,10 @@ class DockerAddon(DockerInterface):
try:
# Load needed docker objects
container = self.sys_docker.containers.get(self.name)
socket = container.attach_socket(params={"stdin": 1, "stream": 1})
# attach_socket returns SocketIO for local Docker connections (Unix socket)
socket = cast(
SocketIO, container.attach_socket(params={"stdin": 1, "stream": 1})
)
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't attach to %s stdin: %s", self.name, err)
raise DockerError() from err

View File

@@ -2,12 +2,9 @@
from __future__ import annotations
from contextlib import suppress
from enum import Enum, StrEnum
from functools import total_ordering
from enum import StrEnum
from pathlib import PurePath
import re
from typing import cast
from docker.types import Mount
@@ -15,6 +12,13 @@ from ..const import MACHINE_ID
RE_RETRYING_DOWNLOAD_STATUS = re.compile(r"Retrying in \d+ seconds?")
# Docker Hub registry identifier (official default)
# Docker's default registry is docker.io
DOCKER_HUB = "docker.io"
# Legacy Docker Hub identifier for backward compatibility
DOCKER_HUB_LEGACY = "hub.docker.com"
class Capabilities(StrEnum):
"""Linux Capabilities."""
@@ -75,57 +79,6 @@ class PropagationMode(StrEnum):
RSLAVE = "rslave"
@total_ordering
class PullImageLayerStage(Enum):
"""Job stages for pulling an image layer.
These are a subset of the statuses in a docker pull image log. They
are the standardized ones that are the most useful to us.
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
RETRYING_DOWNLOAD = 2, "Retrying download"
DOWNLOADING = 2, "Downloading"
VERIFYING_CHECKSUM = 3, "Verifying Checksum"
DOWNLOAD_COMPLETE = 4, "Download complete"
EXTRACTING = 5, "Extracting"
PULL_COMPLETE = 6, "Pull complete"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow StrEnum style comparisons on status."""
with suppress(AttributeError):
return self.status == cast(PullImageLayerStage, value).status
return self.status == value
def __lt__(self, other: object) -> bool:
"""Order instances."""
with suppress(AttributeError):
return self.order < cast(PullImageLayerStage, other).order
return False
def __hash__(self) -> int:
"""Hash instance."""
return hash(self.status)
@classmethod
def from_status(cls, status: str) -> PullImageLayerStage | None:
"""Return stage instance from pull log status."""
for i in cls:
if i.status == status:
return i
# This status includes the number of seconds until the retry, so it's not constant
if RE_RETRYING_DOWNLOAD_STATUS.match(status):
return cls.RETRYING_DOWNLOAD
return None
ENV_TIME = "TZ"
ENV_TOKEN = "SUPERVISOR_TOKEN"
ENV_TOKEN_OLD = "HASSIO_TOKEN"
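
The DOCKER_HUB and DOCKER_HUB_LEGACY constants added here feed the registry lookup introduced later in this compare (DockerConfig.get_registry_for_image). A standalone sketch of that lookup; the get_registry_from_image helper is not shown, so its body below follows Docker's reference normalization rule (the first path component is a registry only if it contains a dot or colon, or is "localhost") as an assumption:

DOCKER_HUB = "docker.io"
DOCKER_HUB_LEGACY = "hub.docker.com"

def get_registry_from_image(image: str) -> str | None:
    """Return the registry host from an image reference, or None for Docker Hub images."""
    first, _, rest = image.partition("/")
    if not rest:
        # Single-component names like "ubuntu" are Docker Hub official images
        return None
    if "." in first or ":" in first or first == "localhost":
        return first
    return None

def get_registry_for_image(image: str, registries: dict[str, dict]) -> str | None:
    """Return the configured registry holding credentials for this image, if any."""
    registry = get_registry_from_image(image)
    if registry:
        return registry if registry in registries else None
    # No registry prefix means Docker Hub; accept both configured spellings
    if DOCKER_HUB in registries:
        return DOCKER_HUB
    if DOCKER_HUB_LEGACY in registries:
        return DOCKER_HUB_LEGACY
    return None

print(get_registry_from_image("ghcr.io/home-assistant/amd64-base"))  # ghcr.io
print(get_registry_from_image("library/ubuntu"))                     # None -> Docker Hub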

View File

@@ -8,19 +8,18 @@ from collections.abc import Awaitable
from contextlib import suppress
from http import HTTPStatus
import logging
import re
from time import time
from typing import Any, cast
from uuid import uuid4
import aiodocker
import aiohttp
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
import requests
from ..bus import EventListener
from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRY,
@@ -36,27 +35,23 @@ from ..exceptions import (
DockerError,
DockerHubRateLimitExceeded,
DockerJobError,
DockerLogOutOfOrder,
DockerNotFound,
DockerRequestError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import ContainerState, PullImageLayerStage, RestartPolicy
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, ContainerState, RestartPolicy
from .manager import CommandReturn, PullLogEntry
from .monitor import DockerContainerStateEvent
from .pull_progress import ImagePullProgress
from .stats import DockerStats
_LOGGER: logging.Logger = logging.getLogger(__name__)
IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")
DOCKER_HUB = "hub.docker.com"
MAP_ARCH: dict[CpuArch | str, str] = {
MAP_ARCH: dict[CpuArch, str] = {
CpuArch.ARMV7: "linux/arm/v7",
CpuArch.ARMHF: "linux/arm/v6",
CpuArch.AARCH64: "linux/arm64",
@@ -180,25 +175,17 @@ class DockerInterface(JobGroup, ABC):
return self.meta_config.get("Healthcheck")
def _get_credentials(self, image: str) -> dict:
"""Return a dictionay with credentials for docker login."""
registry = None
"""Return a dictionary with credentials for docker login."""
credentials = {}
matcher = IMAGE_WITH_HOST.match(image)
# Custom registry
if matcher:
if matcher.group(1) in self.sys_docker.config.registries:
registry = matcher.group(1)
credentials[ATTR_REGISTRY] = registry
# If no match assume "dockerhub" as registry
elif DOCKER_HUB in self.sys_docker.config.registries:
registry = DOCKER_HUB
registry = self.sys_docker.config.get_registry_for_image(image)
if registry:
stored = self.sys_docker.config.registries[registry]
credentials[ATTR_USERNAME] = stored[ATTR_USERNAME]
credentials[ATTR_PASSWORD] = stored[ATTR_PASSWORD]
# Don't include registry for Docker Hub (both official and legacy)
if registry not in (DOCKER_HUB, DOCKER_HUB_LEGACY):
credentials[ATTR_REGISTRY] = registry
_LOGGER.debug(
"Logging in to %s as %s",
@@ -208,178 +195,6 @@ class DockerInterface(JobGroup, ABC):
return credentials
async def _docker_login(self, image: str) -> None:
"""Try to log in to the registry if there are credentials available."""
if not self.sys_docker.config.registries:
return
credentials = self._get_credentials(image)
if not credentials:
return
await self.sys_run_in_executor(self.sys_docker.dockerpy.login, **credentials)
def _process_pull_image_log( # noqa: C901
self, install_job_id: str, reference: PullLogEntry
) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
reference.job_id != install_job_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
):
return
# Pulling FS Layer is our marker for a layer that needs to be downloaded and extracted. Otherwise it already exists and we can ignore
job: SupervisorJob | None = None
if stage == PullImageLayerStage.PULLING_FS_LAYER:
job = self.sys_jobs.new_job(
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
parent_id=install_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
if j.parent_id == install_job_id and j.reference == reference.id:
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
# Hopefully these come in order but if they sometimes get out of sync, avoid accidentally going backwards
# If it happens a lot though we may need to reconsider the value of this feature
if job.done:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was done, skipping",
_LOGGER.debug,
)
if job.stage and stage < PullImageLayerStage.from_status(job.stage):
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was already on stage {job.stage}, skipping",
_LOGGER.debug,
)
# For progress calculation we assume downloading and extracting each take 50% of the time and other stages are negligible
progress = job.progress
match stage:
case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING:
if (
reference.progress_detail
and reference.progress_detail.current
and reference.progress_detail.total
):
progress = 50 * (
reference.progress_detail.current
/ reference.progress_detail.total
)
if stage == PullImageLayerStage.EXTRACTING:
progress += 50
case (
PullImageLayerStage.VERIFYING_CHECKSUM
| PullImageLayerStage.DOWNLOAD_COMPLETE
):
progress = 50
case PullImageLayerStage.PULL_COMPLETE:
progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
_LOGGER.debug,
)
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on later stages as the total bytes may be useful after that stage
# Enforce range to prevent float drift error
progress = max(0, min(progress, 100))
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
and reference.progress_detail.current is not None
and reference.progress_detail.total is not None
):
job.update(
progress=progress,
stage=stage.status,
extra={
"current": reference.progress_detail.current,
"total": reference.progress_detail.total,
},
)
else:
# If we reach DOWNLOAD_COMPLETE without ever having set extra (small layers that skip
# the downloading phase), set a minimal extra so aggregate progress calculation can proceed
extra = job.extra
if stage == PullImageLayerStage.DOWNLOAD_COMPLETE and not job.extra:
extra = {"current": 1, "total": 1}
job.update(
progress=progress,
stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None if stage == PullImageLayerStage.RETRYING_DOWNLOAD else extra,
)
# Once we have received a progress update for every child job, start to set status of the main one
install_job = self.sys_jobs.get_job(install_job_id)
layer_jobs = [
job
for job in self.sys_jobs.jobs
if job.parent_id == install_job.uuid
and job.name == "Pulling container image layer"
]
# First set the total bytes to be downloaded/extracted on the main job
if not install_job.extra:
total = 0
for job in layer_jobs:
if not job.extra:
return
total += job.extra["total"]
install_job.extra = {"total": total}
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra:
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING:
stage = PullImageLayerStage.DOWNLOADING
elif (
stage == PullImageLayerStage.PULL_COMPLETE
and job_stage < PullImageLayerStage.PULL_COMPLETE
):
stage = PullImageLayerStage.EXTRACTING
# Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100
# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
if stage != install_job.stage or progress >= install_job.progress + 1:
install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
@Job(
name="docker_interface_install",
on_condition=DockerJobError,
@@ -398,35 +213,57 @@ class DockerInterface(JobGroup, ABC):
if not image:
raise ValueError("Cannot pull without an image!")
image_arch = str(arch) if arch else self.sys_arch.supervisor
listener: EventListener | None = None
image_arch = arch or self.sys_arch.supervisor
platform = MAP_ARCH[image_arch]
pull_progress = ImagePullProgress()
current_job = self.sys_jobs.current
# Try to fetch manifest for accurate size-based progress
# This is optional - if it fails, we fall back to count-based progress
try:
manifest = await self.sys_docker.manifest_fetcher.get_manifest(
image, str(version), platform=platform
)
if manifest:
pull_progress.set_manifest(manifest)
_LOGGER.debug(
"Using manifest for progress: %d layers, %d bytes",
manifest.layer_count,
manifest.total_size,
)
except (aiohttp.ClientError, TimeoutError) as err:
_LOGGER.debug("Could not fetch manifest for progress: %s", err)
async def process_pull_event(event: PullLogEntry) -> None:
"""Process pull event and update job progress."""
if event.job_id != current_job.uuid:
return
# Process event through progress tracker
pull_progress.process_event(event)
# Update job if progress changed significantly (>= 1%)
should_update, progress = pull_progress.should_update_job()
if should_update:
stage = pull_progress.get_stage()
current_job.update(progress=progress, stage=stage)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_event
)
_LOGGER.info("Downloading docker image %s with tag %s.", image, version)
try:
if self.sys_docker.config.registries:
# Try login if we have defined credentials
await self._docker_login(image)
# Get credentials for private registries to pass to aiodocker
credentials = self._get_credentials(image) or None
curr_job_id = self.sys_jobs.current.uuid
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(curr_job_id, reference)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
await async_capture_exception(err)
listener = self.sys_bus.register_event(
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_image_log
)
# Pull new image
# Pull new image, passing credentials to aiodocker
docker_image = await self.sys_docker.pull_image(
self.sys_jobs.current.uuid,
current_job.uuid,
image,
str(version),
platform=MAP_ARCH[image_arch],
platform=platform,
auth=credentials,
)
# Tag latest
@@ -470,8 +307,7 @@ class DockerInterface(JobGroup, ABC):
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
) from err
finally:
if listener:
self.sys_bus.remove_listener(listener)
self.sys_bus.remove_listener(listener)
self._meta = docker_image
@@ -482,35 +318,34 @@ class DockerInterface(JobGroup, ABC):
return True
return False
async def is_running(self) -> bool:
"""Return True if Docker is running."""
async def _get_container(self) -> Container | None:
"""Get docker container, returns None if not found."""
try:
docker_container = await self.sys_run_in_executor(
return await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return False
return None
except docker.errors.DockerException as err:
raise DockerAPIError() from err
raise DockerAPIError(
f"Docker API error occurred while getting container information: {err!s}"
) from err
except requests.RequestException as err:
raise DockerRequestError() from err
raise DockerRequestError(
f"Error communicating with Docker to get container information: {err!s}"
) from err
return docker_container.status == "running"
async def is_running(self) -> bool:
"""Return True if Docker is running."""
if docker_container := await self._get_container():
return docker_container.status == "running"
return False
async def current_state(self) -> ContainerState:
"""Return current state of container."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return ContainerState.UNKNOWN
except docker.errors.DockerException as err:
raise DockerAPIError() from err
except requests.RequestException as err:
raise DockerRequestError() from err
return _container_state_from_model(docker_container)
if docker_container := await self._get_container():
return _container_state_from_model(docker_container)
return ContainerState.UNKNOWN
@Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
async def attach(
@@ -545,7 +380,9 @@ class DockerInterface(JobGroup, ABC):
# Successful?
if not self._meta:
raise DockerError()
raise DockerError(
f"Could not get metadata on container or image for {self.name}"
)
_LOGGER.info("Attaching to %s with version %s", self.image, self.version)
@Job(
@@ -635,9 +472,7 @@ class DockerInterface(JobGroup, ABC):
expected_cpu_arch: CpuArch | None = None,
) -> None:
"""Check we have expected image with correct arch."""
expected_image_cpu_arch = (
str(expected_cpu_arch) if expected_cpu_arch else self.sys_arch.supervisor
)
arch = expected_cpu_arch or self.sys_arch.supervisor
image_name = f"{expected_image}:{version!s}"
if self.image == expected_image:
try:
@@ -655,7 +490,7 @@ class DockerInterface(JobGroup, ABC):
# If we have an image and its the right arch, all set
# It seems that newer Docker version return a variant for arm64 images.
# Make sure we match linux/arm64 and linux/arm64/v8.
expected_image_arch = MAP_ARCH[expected_image_cpu_arch]
expected_image_arch = MAP_ARCH[arch]
if image_arch.startswith(expected_image_arch):
return
_LOGGER.info(
@@ -668,7 +503,7 @@ class DockerInterface(JobGroup, ABC):
# We're missing the image we need. Stop and clean up what we have then pull the right one
with suppress(DockerError):
await self.remove()
await self.install(version, expected_image, arch=expected_image_cpu_arch)
await self.install(version, expected_image, arch=arch)
@Job(
name="docker_interface_update",
@@ -750,14 +585,8 @@ class DockerInterface(JobGroup, ABC):
async def is_failed(self) -> bool:
"""Return True if Docker is failing state."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
if not (docker_container := await self._get_container()):
return False
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# container is not running
if docker_container.status != "exited":
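
The tracker removed above weighted each layer's progress by its byte total and clamped the aggregate to avoid floating-point drift past 100. A compact standalone sketch of that aggregation; the ImagePullProgress module that replaces it is not part of this compare, so this is not its API:

from dataclasses import dataclass

@dataclass
class LayerProgress:
    """Per-layer progress as tracked from docker pull log entries."""
    progress: float  # 0-100 for this layer
    total: int       # layer size in bytes ("total" from progressDetail)

def overall_progress(layers: list[LayerProgress]) -> float:
    """Weight each layer by its byte size and clamp the result to [0, 100]."""
    total = sum(layer.total for layer in layers)
    if not total:
        return 0.0
    progress = sum(layer.progress * (layer.total / total) for layer in layers)
    # Floating-point weighting can land slightly above 100, so clamp the result
    return max(0.0, min(progress, 100.0))

print(overall_progress([LayerProgress(100.0, 50_000_000), LayerProgress(40.0, 150_000_000)]))
# 55.0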

View File

@@ -49,9 +49,11 @@ from ..exceptions import (
)
from ..utils.common import FileConfiguration
from ..validate import SCHEMA_DOCKER_CONFIG
from .const import LABEL_MANAGED
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, LABEL_MANAGED
from .manifest import RegistryManifestFetcher
from .monitor import DockerMonitor
from .network import DockerNetwork
from .utils import get_registry_from_image
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -76,15 +78,25 @@ class DockerInfo:
storage: str = attr.ib()
logging: str = attr.ib()
cgroup: str = attr.ib()
support_cpu_realtime: bool = attr.ib()
@staticmethod
def new(data: dict[str, Any]):
async def new(data: dict[str, Any]) -> DockerInfo:
"""Create a object from docker info."""
# Check if CONFIG_RT_GROUP_SCHED is loaded (blocking I/O in executor)
cpu_rt_file_exists = await asyncio.get_running_loop().run_in_executor(
None, Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists
)
cpu_rt_supported = (
cpu_rt_file_exists and os.environ.get(ENV_SUPERVISOR_CPU_RT) == "1"
)
return DockerInfo(
AwesomeVersion(data.get("ServerVersion", "0.0.0")),
data.get("Driver", "unknown"),
data.get("LoggingDriver", "unknown"),
data.get("CgroupVersion", "1"),
cpu_rt_supported,
)
@property
@@ -95,23 +107,21 @@ class DockerInfo:
except AwesomeVersionCompareException:
return False
@property
def support_cpu_realtime(self) -> bool:
"""Return true, if CONFIG_RT_GROUP_SCHED is loaded."""
if not Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists():
return False
return bool(os.environ.get(ENV_SUPERVISOR_CPU_RT) == "1")
@dataclass(frozen=True, slots=True)
class PullProgressDetail:
"""Progress detail information for pull.
Docker's documentation is lacking, but both of these appear to be byte counts
when populated. With the containerd snapshotter, this information is no longer
useful to us while extracting: it only reports elapsed time via current and
units.
"""
current: int | None = None
total: int | None = None
units: str | None = None
@classmethod
def from_pull_log_dict(cls, value: dict[str, int]) -> PullProgressDetail:
@@ -199,6 +209,33 @@ class DockerConfig(FileConfiguration):
"""Return credentials for docker registries."""
return self._data.get(ATTR_REGISTRIES, {})
def get_registry_for_image(self, image: str) -> str | None:
"""Return the registry name if credentials are available for the image.
Matches the image against configured registries and returns the registry
name if found, or None if no matching credentials are configured.
Uses Docker's domain detection logic from:
vendor/github.com/distribution/reference/normalize.go
"""
if not self.registries:
return None
# Check if image uses a custom registry (e.g., ghcr.io/org/image)
registry = get_registry_from_image(image)
if registry:
if registry in self.registries:
return registry
else:
# No registry prefix means Docker Hub
# Support both docker.io (official) and hub.docker.com (legacy)
if DOCKER_HUB in self.registries:
return DOCKER_HUB
if DOCKER_HUB_LEGACY in self.registries:
return DOCKER_HUB_LEGACY
return None
class DockerAPI(CoreSysAttributes):
"""Docker Supervisor wrapper.
@@ -222,6 +259,9 @@ class DockerAPI(CoreSysAttributes):
self._info: DockerInfo | None = None
self.config: DockerConfig = DockerConfig()
self._monitor: DockerMonitor = DockerMonitor(coresys)
self._manifest_fetcher: RegistryManifestFetcher = RegistryManifestFetcher(
coresys
)
async def post_init(self) -> Self:
"""Post init actions that must be done in event loop."""
@@ -234,7 +274,7 @@ class DockerAPI(CoreSysAttributes):
timeout=900,
),
)
self._info = DockerInfo.new(self.dockerpy.info())
self._info = await DockerInfo.new(self.dockerpy.info())
await self.config.read_data()
self._network = await DockerNetwork(self.dockerpy).post_init(
self.config.enable_ipv6, self.config.mtu
@@ -287,6 +327,11 @@ class DockerAPI(CoreSysAttributes):
"""Return docker events monitor."""
return self._monitor
@property
def manifest_fetcher(self) -> RegistryManifestFetcher:
"""Return manifest fetcher for registry access."""
return self._manifest_fetcher
async def load(self) -> None:
"""Start docker events monitor."""
await self.monitor.load()
@@ -429,6 +474,7 @@ class DockerAPI(CoreSysAttributes):
repository: str,
tag: str = "latest",
platform: str | None = None,
auth: dict[str, str] | None = None,
) -> dict[str, Any]:
"""Pull the specified image and return it.
@@ -438,7 +484,7 @@ class DockerAPI(CoreSysAttributes):
on the bus so listeners can use that to update status for users.
"""
async for e in self.images.pull(
repository, tag=tag, platform=platform, stream=True
repository, tag=tag, platform=platform, auth=auth, stream=True
):
entry = PullLogEntry.from_pull_log_dict(job_id, e)
if entry.error:
@@ -578,9 +624,15 @@ class DockerAPI(CoreSysAttributes):
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return False
raise DockerError() from err
raise DockerError(
f"Could not get container {name} or image {image}:{version} to check state: {err!s}",
_LOGGER.error,
) from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Could not get container {name} or image {image}:{version} to check state: {err!s}",
_LOGGER.error,
) from err
# Check the image is correct and state is good
return (
@@ -596,9 +648,13 @@ class DockerAPI(CoreSysAttributes):
try:
docker_container: Container = self.containers.get(name)
except docker_errors.NotFound:
# Generally suppressed so we don't log this
raise DockerNotFound() from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Could not get container {name} for stopping: {err!s}",
_LOGGER.error,
) from err
if docker_container.status == "running":
_LOGGER.info("Stopping %s application", name)
@@ -638,9 +694,13 @@ class DockerAPI(CoreSysAttributes):
try:
container: Container = self.containers.get(name)
except docker_errors.NotFound:
raise DockerNotFound() from None
raise DockerNotFound(
f"Container {name} not found for restarting", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Could not get container {name} for restarting: {err!s}", _LOGGER.error
) from err
_LOGGER.info("Restarting %s", name)
try:
@@ -653,9 +713,13 @@ class DockerAPI(CoreSysAttributes):
try:
docker_container: Container = self.containers.get(name)
except docker_errors.NotFound:
raise DockerNotFound() from None
raise DockerNotFound(
f"Container {name} not found for logs", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Could not get container {name} for logs: {err!s}", _LOGGER.error
) from err
try:
return docker_container.logs(tail=tail, stdout=True, stderr=True)
@@ -669,16 +733,21 @@ class DockerAPI(CoreSysAttributes):
try:
docker_container: Container = self.containers.get(name)
except docker_errors.NotFound:
raise DockerNotFound() from None
raise DockerNotFound(
f"Container {name} not found for stats", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Could not inspect container '{name}': {err!s}", _LOGGER.error
) from err
# container is not running
if docker_container.status != "running":
raise DockerError(f"Container {name} is not running", _LOGGER.error)
try:
return docker_container.stats(stream=False)
# When stream=False, stats() returns dict, not Iterator
return cast(dict[str, Any], docker_container.stats(stream=False))
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't read stats from {name}: {err}", _LOGGER.error
@@ -689,15 +758,21 @@ class DockerAPI(CoreSysAttributes):
try:
docker_container: Container = self.containers.get(name)
except docker_errors.NotFound:
raise DockerNotFound() from None
raise DockerNotFound(
f"Container {name} not found for running command", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Can't get container {name} to run command: {err!s}"
) from err
# Execute
try:
code, output = docker_container.exec_run(command)
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
raise DockerError(
f"Can't run command in container {name}: {err!s}"
) from err
return CommandReturn(code, output)
@@ -730,7 +805,7 @@ class DockerAPI(CoreSysAttributes):
"""Import a tar file as image."""
try:
with tar_file.open("rb") as read_tar:
resp: list[dict[str, Any]] = self.images.import_image(read_tar)
resp: list[dict[str, Any]] = await self.images.import_image(read_tar)
except (aiodocker.DockerError, OSError) as err:
raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error

View File

@@ -0,0 +1,352 @@
"""Docker registry manifest fetcher.
Fetches image manifests directly from container registries to get layer sizes
before pulling an image. This enables accurate size-based progress tracking.
"""
from __future__ import annotations
from dataclasses import dataclass
import logging
import re
from typing import TYPE_CHECKING
import aiohttp
from supervisor.docker.utils import get_registry_from_image
from .const import DOCKER_HUB, DOCKER_HUB_LEGACY
if TYPE_CHECKING:
from ..coresys import CoreSys
_LOGGER = logging.getLogger(__name__)
# Media types for manifest requests
MANIFEST_MEDIA_TYPES = (
"application/vnd.docker.distribution.manifest.v2+json",
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
"application/vnd.oci.image.index.v1+json",
)
@dataclass
class ImageManifest:
"""Container image manifest with layer information."""
digest: str
total_size: int
layers: dict[str, int] # digest -> size in bytes
@property
def layer_count(self) -> int:
"""Return number of layers."""
return len(self.layers)
def parse_image_reference(image: str, tag: str) -> tuple[str, str, str]:
"""Parse image reference into (registry, repository, tag).
Examples:
ghcr.io/home-assistant/home-assistant:2025.1.0
-> (ghcr.io, home-assistant/home-assistant, 2025.1.0)
homeassistant/home-assistant:latest
-> (registry-1.docker.io, homeassistant/home-assistant, latest)
alpine:3.18
-> (registry-1.docker.io, library/alpine, 3.18)
"""
# Check if image has explicit registry host
registry = get_registry_from_image(image)
if registry:
repository = image[len(registry) + 1 :] # Remove "registry/" prefix
else:
registry = DOCKER_HUB
repository = image
# Docker Hub requires "library/" prefix for official images
if "/" not in repository:
repository = f"library/{repository}"
return registry, repository, tag
class RegistryManifestFetcher:
"""Fetches manifests from container registries."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize the fetcher."""
self.coresys = coresys
self._session: aiohttp.ClientSession | None = None
async def _get_session(self) -> aiohttp.ClientSession:
"""Get or create aiohttp session."""
if self._session is None or self._session.closed:
self._session = aiohttp.ClientSession()
return self._session
async def close(self) -> None:
"""Close the session."""
if self._session and not self._session.closed:
await self._session.close()
self._session = None
def _get_credentials(self, registry: str) -> tuple[str, str] | None:
"""Get credentials for registry from Docker config.
Returns (username, password) tuple or None if no credentials.
"""
registries = self.coresys.docker.config.registries
# Map registry hostname to config key
# Docker Hub can be stored as "hub.docker.com" in config
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
if DOCKER_HUB in registries:
creds = registries[DOCKER_HUB]
return creds.get("username"), creds.get("password")
elif registry in registries:
creds = registries[registry]
return creds.get("username"), creds.get("password")
return None
async def _get_auth_token(
self,
session: aiohttp.ClientSession,
registry: str,
repository: str,
) -> str | None:
"""Get authentication token for registry.
Uses the WWW-Authenticate header from a 401 response to discover
the token endpoint, then requests a token with appropriate scope.
"""
# First, make an unauthenticated request to get WWW-Authenticate header
manifest_url = f"https://{registry}/v2/{repository}/manifests/latest"
try:
async with session.get(manifest_url) as resp:
if resp.status == 200:
# No auth required
return None
if resp.status != 401:
_LOGGER.warning(
"Unexpected status %d from registry %s", resp.status, registry
)
return None
www_auth = resp.headers.get("WWW-Authenticate", "")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to connect to registry %s: %s", registry, err)
return None
# Parse WWW-Authenticate: Bearer realm="...",service="...",scope="..."
if not www_auth.startswith("Bearer "):
_LOGGER.warning("Unsupported auth type from %s: %s", registry, www_auth)
return None
params = {}
for match in re.finditer(r'(\w+)="([^"]*)"', www_auth):
params[match.group(1)] = match.group(2)
realm = params.get("realm")
service = params.get("service")
if not realm:
_LOGGER.warning("No realm in WWW-Authenticate from %s", registry)
return None
# Build token request URL
token_url = f"{realm}?scope=repository:{repository}:pull"
if service:
token_url += f"&service={service}"
# Check for credentials
auth = None
credentials = self._get_credentials(registry)
if credentials:
username, password = credentials
if username and password:
auth = aiohttp.BasicAuth(username, password)
_LOGGER.debug("Using credentials for %s", registry)
try:
async with session.get(token_url, auth=auth) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to get token from %s: %d", realm, resp.status
)
return None
data = await resp.json()
return data.get("token") or data.get("access_token")
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to get auth token: %s", err)
return None
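# Sketch of the discovery flow above with an assumed example header:
#   WWW-Authenticate: Bearer realm="https://ghcr.io/token",service="ghcr.io",scope="repository:org/app:pull"
# The finditer() loop extracts {"realm": "https://ghcr.io/token", "service": "ghcr.io", ...},
# which for repository "org/app" yields the token request URL:
#   https://ghcr.io/token?scope=repository:org/app:pull&service=ghcr.io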
async def _fetch_manifest(
self,
session: aiohttp.ClientSession,
registry: str,
repository: str,
reference: str,
token: str | None,
platform: str | None = None,
) -> dict | None:
"""Fetch manifest from registry.
If the manifest is a manifest list (multi-arch), fetches the
platform-specific manifest.
"""
manifest_url = f"https://{registry}/v2/{repository}/manifests/{reference}"
headers = {"Accept": ", ".join(MANIFEST_MEDIA_TYPES)}
if token:
headers["Authorization"] = f"Bearer {token}"
try:
async with session.get(manifest_url, headers=headers) as resp:
if resp.status != 200:
_LOGGER.warning(
"Failed to fetch manifest for %s/%s:%s - %d",
registry,
repository,
reference,
resp.status,
)
return None
manifest = await resp.json()
except aiohttp.ClientError as err:
_LOGGER.warning("Failed to fetch manifest: %s", err)
return None
media_type = manifest.get("mediaType", "")
# Check if this is a manifest list (multi-arch image)
if "list" in media_type or "index" in media_type:
manifests = manifest.get("manifests", [])
if not manifests:
_LOGGER.warning("Empty manifest list for %s/%s", registry, repository)
return None
# Find matching platform
target_os = "linux"
target_arch = "amd64" # Default
if platform:
# Platform format is "linux/amd64", "linux/arm64", etc.
parts = platform.split("/")
if len(parts) >= 2:
target_os, target_arch = parts[0], parts[1]
platform_manifest = None
for m in manifests:
plat = m.get("platform", {})
if (
plat.get("os") == target_os
and plat.get("architecture") == target_arch
):
platform_manifest = m
break
if not platform_manifest:
# Fall back to first manifest
_LOGGER.debug(
"Platform %s/%s not found, using first manifest",
target_os,
target_arch,
)
platform_manifest = manifests[0]
# Fetch the platform-specific manifest
return await self._fetch_manifest(
session,
registry,
repository,
platform_manifest["digest"],
token,
platform,
)
return manifest
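# Assumed shape of one entry in manifest["manifests"] that the platform matching
# above iterates over (per the OCI image index / Docker manifest list format):
#   {"mediaType": "application/vnd.oci.image.manifest.v1+json",
#    "digest": "sha256:...", "size": 1234,
#    "platform": {"os": "linux", "architecture": "arm64"}}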
async def get_manifest(
self,
image: str,
tag: str,
platform: str | None = None,
) -> ImageManifest | None:
"""Fetch manifest and extract layer sizes.
Args:
image: Image name (e.g., "ghcr.io/home-assistant/home-assistant")
tag: Image tag (e.g., "2025.1.0")
platform: Target platform (e.g., "linux/amd64")
Returns:
ImageManifest with layer sizes, or None if fetch failed.
"""
registry, repository, tag = parse_image_reference(image, tag)
_LOGGER.debug(
"Fetching manifest for %s/%s:%s (platform=%s)",
registry,
repository,
tag,
platform,
)
session = await self._get_session()
# Get auth token
token = await self._get_auth_token(session, registry, repository)
# Fetch manifest
manifest = await self._fetch_manifest(
session, registry, repository, tag, token, platform
)
if not manifest:
return None
# Extract layer information
layers = manifest.get("layers", [])
if not layers:
_LOGGER.warning(
"No layers in manifest for %s/%s:%s", registry, repository, tag
)
return None
layer_sizes: dict[str, int] = {}
total_size = 0
for layer in layers:
digest = layer.get("digest", "")
size = layer.get("size", 0)
if digest and size:
# Store by short digest (first 12 chars after sha256:)
short_digest = (
digest.split(":")[1][:12] if ":" in digest else digest[:12]
)
layer_sizes[short_digest] = size
total_size += size
digest = manifest.get("config", {}).get("digest", "")
_LOGGER.debug(
"Manifest for %s/%s:%s - %d layers, %d bytes total",
registry,
repository,
tag,
len(layer_sizes),
total_size,
)
return ImageManifest(
digest=digest,
total_size=total_size,
layers=layer_sizes,
)
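# Example of the short-digest keying used above (digest value is illustrative):
#   "sha256:9f2c4b7e1a3dabcdef..." -> layer_sizes["9f2c4b7e1a3d"] = <layer size>
# Docker pull log events report layer ids in the same 12-character short form,
# so these sizes can later be matched up in ImagePullProgress.get_or_create_layer().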

View File

@@ -7,6 +7,8 @@ import logging
from typing import Self, cast
import docker
from docker.models.containers import Container
from docker.models.networks import Network
import requests
from ..const import (
@@ -59,7 +61,7 @@ class DockerNetwork:
def __init__(self, docker_client: docker.DockerClient):
"""Initialize internal Supervisor network."""
self.docker: docker.DockerClient = docker_client
self._network: docker.models.networks.Network
self._network: Network
async def post_init(
self, enable_ipv6: bool | None = None, mtu: int | None = None
@@ -76,7 +78,7 @@ class DockerNetwork:
return DOCKER_NETWORK
@property
def network(self) -> docker.models.networks.Network:
def network(self) -> Network:
"""Return docker network."""
return self._network
@@ -117,7 +119,7 @@ class DockerNetwork:
def _get_network(
self, enable_ipv6: bool | None = None, mtu: int | None = None
) -> docker.models.networks.Network:
) -> Network:
"""Get supervisor network."""
try:
if network := self.docker.networks.get(DOCKER_NETWORK):
@@ -218,7 +220,7 @@ class DockerNetwork:
def attach_container(
self,
container: docker.models.containers.Container,
container: Container,
alias: list[str] | None = None,
ipv4: IPv4Address | None = None,
) -> None:
@@ -275,9 +277,7 @@ class DockerNetwork:
if container.id not in self.containers:
self.attach_container(container, alias, ipv4)
def detach_default_bridge(
self, container: docker.models.containers.Container
) -> None:
def detach_default_bridge(self, container: Container) -> None:
"""Detach default Docker bridge.
Needs to run inside an executor.

View File

@@ -0,0 +1,371 @@
"""Image pull progress tracking."""
from __future__ import annotations
from contextlib import suppress
from dataclasses import dataclass, field
from enum import Enum
import logging
from typing import TYPE_CHECKING, cast
if TYPE_CHECKING:
from .manager import PullLogEntry
from .manifest import ImageManifest
_LOGGER = logging.getLogger(__name__)
# Progress weight distribution: 70% downloading, 30% extraction
DOWNLOAD_WEIGHT = 70.0
EXTRACT_WEIGHT = 30.0
class LayerPullStatus(Enum):
"""Status values for pulling an image layer.
These are a subset of the statuses in a docker pull image log.
The order field allows comparing which stage is further along.
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
WAITING = 1, "Waiting"
RETRYING = 2, "Retrying" # Matches "Retrying in N seconds"
DOWNLOADING = 3, "Downloading"
VERIFYING_CHECKSUM = 4, "Verifying Checksum"
DOWNLOAD_COMPLETE = 5, "Download complete"
EXTRACTING = 6, "Extracting"
PULL_COMPLETE = 7, "Pull complete"
ALREADY_EXISTS = 7, "Already exists"
def __init__(self, order: int, status: str) -> None:
"""Set fields from values."""
self.order = order
self.status = status
def __eq__(self, value: object, /) -> bool:
"""Check equality, allow string comparisons on status."""
with suppress(AttributeError):
return self.status == cast(LayerPullStatus, value).status
return self.status == value
def __hash__(self) -> int:
"""Return hash based on status string."""
return hash(self.status)
def __lt__(self, other: object) -> bool:
"""Order instances by stage progression."""
with suppress(AttributeError):
return self.order < cast(LayerPullStatus, other).order
return False
@classmethod
def from_status(cls, status: str) -> LayerPullStatus | None:
"""Get enum from status string, or None if not recognized."""
# Handle "Retrying in N seconds" pattern
if status.startswith("Retrying in "):
return cls.RETRYING
for member in cls:
if member.status == status:
return member
return None
@dataclass
class LayerProgress:
"""Track progress of a single layer."""
layer_id: str
total_size: int = 0 # Size in bytes (from downloading, reused for extraction)
download_current: int = 0
extract_current: int = 0 # Extraction progress in bytes (overlay2 only)
download_complete: bool = False
extract_complete: bool = False
already_exists: bool = False # Layer was already locally available
def calculate_progress(self) -> float:
"""Calculate layer progress 0-100.
Progress is weighted: 70% download, 30% extraction.
For overlay2, we have byte-based extraction progress.
For containerd, extraction jumps from 70% to 100% on completion.
"""
if self.already_exists or self.extract_complete:
return 100.0
if self.download_complete:
# Check if we have extraction progress (overlay2)
if self.extract_current > 0 and self.total_size > 0:
extract_pct = min(1.0, self.extract_current / self.total_size)
return DOWNLOAD_WEIGHT + (extract_pct * EXTRACT_WEIGHT)
# No extraction progress yet - return 70%
return DOWNLOAD_WEIGHT
if self.total_size > 0:
download_pct = min(1.0, self.download_current / self.total_size)
return download_pct * DOWNLOAD_WEIGHT
return 0.0
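# Worked example (illustrative sizes): a 100 MiB layer that is 50% downloaded
# reports 0.5 * 70 = 35.0; once download_complete with 50 MiB extracted it
# reports 70 + 0.5 * 30 = 85.0; extract_complete or already_exists -> 100.0.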
@dataclass
class ImagePullProgress:
"""Track overall progress of pulling an image.
When manifest layer sizes are provided, uses size-weighted progress where
each layer contributes proportionally to its size. This gives accurate
progress based on actual bytes to download.
When manifest is not available, falls back to count-based progress where
each layer contributes equally.
Layers that already exist locally are excluded from the progress calculation.
"""
layers: dict[str, LayerProgress] = field(default_factory=dict)
_last_reported_progress: float = field(default=0.0, repr=False)
_seen_downloading: bool = field(default=False, repr=False)
_manifest_layer_sizes: dict[str, int] = field(default_factory=dict, repr=False)
_total_manifest_size: int = field(default=0, repr=False)
def set_manifest(self, manifest: ImageManifest) -> None:
"""Set manifest layer sizes for accurate size-based progress.
Should be called before processing pull events.
"""
self._manifest_layer_sizes = dict(manifest.layers)
self._total_manifest_size = manifest.total_size
_LOGGER.debug(
"Manifest set: %d layers, %d bytes total",
len(self._manifest_layer_sizes),
self._total_manifest_size,
)
def get_or_create_layer(self, layer_id: str) -> LayerProgress:
"""Get existing layer or create new one."""
if layer_id not in self.layers:
# If we have manifest sizes, pre-populate the layer's total_size
manifest_size = self._manifest_layer_sizes.get(layer_id, 0)
self.layers[layer_id] = LayerProgress(
layer_id=layer_id, total_size=manifest_size
)
return self.layers[layer_id]
def process_event(self, entry: PullLogEntry) -> None:
"""Process a pull log event and update layer state."""
# Skip events without layer ID or status
if not entry.id or not entry.status:
return
# Skip metadata events that aren't layer-specific
# "Pulling from X" has id=tag but isn't a layer
if entry.status.startswith("Pulling from "):
return
# Parse status to enum (returns None for unrecognized statuses)
status = LayerPullStatus.from_status(entry.status)
if status is None:
return
layer = self.get_or_create_layer(entry.id)
# Handle "Already exists" - layer is locally available
if status is LayerPullStatus.ALREADY_EXISTS:
layer.already_exists = True
layer.download_complete = True
layer.extract_complete = True
return
# Handle "Pulling fs layer" / "Waiting" - layer is being tracked
if status in (LayerPullStatus.PULLING_FS_LAYER, LayerPullStatus.WAITING):
return
# Handle "Downloading" - update download progress
if status is LayerPullStatus.DOWNLOADING:
# Mark that we've seen downloading - now we know layer count is complete
self._seen_downloading = True
if (
entry.progress_detail
and entry.progress_detail.current is not None
and entry.progress_detail.total is not None
):
layer.download_current = entry.progress_detail.current
# Only set total_size if not already set or if this is larger
# (handles case where total changes during download)
layer.total_size = max(layer.total_size, entry.progress_detail.total)
return
# Handle "Verifying Checksum" - download is essentially complete
if status is LayerPullStatus.VERIFYING_CHECKSUM:
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Download complete" - download phase done
if status is LayerPullStatus.DOWNLOAD_COMPLETE:
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
elif layer.total_size == 0:
# Small layer that skipped downloading phase
# Set minimal size so it doesn't distort weighted average
layer.total_size = 1
layer.download_current = 1
return
# Handle "Extracting" - extraction in progress
if status is LayerPullStatus.EXTRACTING:
# For overlay2: progressDetail has {current, total} in bytes
# For containerd: progressDetail has {current, units: "s"} (time elapsed)
# We can only use byte-based progress (overlay2)
layer.download_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
# Check if this is byte-based extraction progress (overlay2)
# Overlay2 has {current, total} in bytes, no units field
# Containerd has {current, units: "s"} which is useless for progress
if (
entry.progress_detail
and entry.progress_detail.current is not None
and entry.progress_detail.units is None
):
# Use layer's total_size from downloading phase (doesn't change)
layer.extract_current = entry.progress_detail.current
_LOGGER.debug(
"Layer %s extracting: %d/%d (%.1f%%)",
layer.layer_id,
layer.extract_current,
layer.total_size,
(layer.extract_current / layer.total_size * 100)
if layer.total_size > 0
else 0,
)
return
# Handle "Pull complete" - layer is fully done
if status is LayerPullStatus.PULL_COMPLETE:
layer.download_complete = True
layer.extract_complete = True
if layer.total_size > 0:
layer.download_current = layer.total_size
return
# Handle "Retrying in N seconds" - reset download progress
if status is LayerPullStatus.RETRYING:
layer.download_current = 0
layer.download_complete = False
return
def calculate_progress(self) -> float:
"""Calculate overall progress 0-100.
When manifest layer sizes are available, uses size-weighted progress
where each layer contributes proportionally to its size.
When manifest is not available, falls back to count-based progress
where each layer contributes equally.
Layers that already exist locally are excluded from the calculation.
Returns 0 until we've seen the first "Downloading" event, since Docker
reports "Already exists" and "Pulling fs layer" events before we know
the complete layer count.
"""
# Don't report progress until we've seen downloading start
# This ensures we know the full layer count before calculating progress
if not self._seen_downloading or not self.layers:
return 0.0
# Only count layers that need pulling (exclude already_exists)
layers_to_pull = [
layer for layer in self.layers.values() if not layer.already_exists
]
if not layers_to_pull:
# All layers already exist, nothing to download
return 100.0
# Use size-weighted progress if manifest sizes are available
if self._manifest_layer_sizes:
return min(100, self._calculate_size_weighted_progress(layers_to_pull))
# Fall back to count-based progress
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return min(100, total_progress / len(layers_to_pull))
def _calculate_size_weighted_progress(
self, layers_to_pull: list[LayerProgress]
) -> float:
"""Calculate size-weighted progress.
Each layer contributes to progress proportionally to its size.
Progress = sum(layer_progress * layer_size) / total_size
"""
# Calculate total size of layers that need pulling
total_size = sum(layer.total_size for layer in layers_to_pull)
if total_size == 0:
# No size info available, fall back to count-based
total_progress = sum(layer.calculate_progress() for layer in layers_to_pull)
return total_progress / len(layers_to_pull)
# Weight each layer's progress by its size
weighted_progress = 0.0
for layer in layers_to_pull:
if layer.total_size > 0:
layer_weight = layer.total_size / total_size
weighted_progress += layer.calculate_progress() * layer_weight
return weighted_progress
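# Worked example (illustrative sizes): a finished 90 MiB layer (100.0) plus a
# 10 MiB layer that is half downloaded (35.0) give
#   0.9 * 100.0 + 0.1 * 35.0 = 93.5
# versus (100.0 + 35.0) / 2 = 67.5 with the count-based fallback.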
def get_stage(self) -> str | None:
"""Get current stage based on layer states."""
if not self.layers:
return None
# Check if any layer is still downloading
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.download_complete:
return "Downloading"
# All downloads complete, check if extracting
for layer in self.layers.values():
if layer.already_exists:
continue
if not layer.extract_complete:
return "Extracting"
# All done
return "Pull complete"
def should_update_job(self, threshold: float = 1.0) -> tuple[bool, float]:
"""Check if job should be updated based on progress change.
Returns (should_update, current_progress).
Updates are triggered when progress changes by at least threshold%.
Progress is guaranteed to only increase (monotonic).
"""
current_progress = self.calculate_progress()
# Ensure monotonic progress - never report a decrease
# This can happen when new layers get size info and change the weighted average
if current_progress < self._last_reported_progress:
_LOGGER.debug(
"Progress decreased from %.1f%% to %.1f%%, keeping last reported",
self._last_reported_progress,
current_progress,
)
return False, self._last_reported_progress
if current_progress >= self._last_reported_progress + threshold:
_LOGGER.debug(
"Progress update: %.1f%% -> %.1f%% (delta: %.1f%%)",
self._last_reported_progress,
current_progress,
current_progress - self._last_reported_progress,
)
self._last_reported_progress = current_progress
return True, current_progress
return False, self._last_reported_progress
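# Sketch of the reporting behavior above with the default 1.0% threshold:
#   calculate_progress() -> 10.0  => (True, 10.0)   # first report
#   calculate_progress() -> 10.4  => (False, 10.0)  # below threshold
#   calculate_progress() ->  9.8  => (False, 10.0)  # decrease suppressed, stays monotonic
#   calculate_progress() -> 11.2  => (True, 11.2)   # >= last + threshold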

View File

@@ -0,0 +1,57 @@
"""Docker utilities."""
from __future__ import annotations
import re
# Docker image reference domain regex
# Based on Docker's reference implementation:
# vendor/github.com/distribution/reference/normalize.go
#
# A domain is detected if the part before the first / contains:
# - "localhost" (with optional port)
# - Contains "." (like registry.example.com or 127.0.0.1)
# - Contains ":" (like myregistry:5000)
# - IPv6 addresses in brackets (like [::1]:5000)
#
# Note: Docker also treats uppercase letters as registry indicators since
# namespaces must be lowercase, but this regex handles lowercase matching
# and the get_registry_from_image() function validates the registry rules.
IMAGE_REGISTRY_REGEX = re.compile(
r"^(?P<registry>"
r"localhost(?::[0-9]+)?|" # localhost with optional port
r"(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])" # domain component
r"(?:\.(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))*" # more components
r"(?::[0-9]+)?|" # optional port
r"\[[a-fA-F0-9:]+\](?::[0-9]+)?" # IPv6 with optional port
r")/" # must be followed by /
)
def get_registry_from_image(image_ref: str) -> str | None:
"""Extract registry from Docker image reference.
Returns the registry if the image reference contains one,
or None if the image uses Docker Hub (docker.io).
Based on Docker's reference implementation:
vendor/github.com/distribution/reference/normalize.go
Examples:
get_registry_from_image("nginx") -> None (docker.io)
get_registry_from_image("library/nginx") -> None (docker.io)
get_registry_from_image("myregistry.com/nginx") -> "myregistry.com"
get_registry_from_image("localhost/myimage") -> "localhost"
get_registry_from_image("localhost:5000/myimage") -> "localhost:5000"
get_registry_from_image("registry.io:5000/org/app:v1") -> "registry.io:5000"
get_registry_from_image("[::1]:5000/myimage") -> "[::1]:5000"
"""
match = IMAGE_REGISTRY_REGEX.match(image_ref)
if match:
registry = match.group("registry")
# Must contain '.' or ':' or be 'localhost' to be a real registry
# This prevents treating "myuser/myimage" as having registry "myuser"
if "." in registry or ":" in registry or registry == "localhost":
return registry
return None # No registry = Docker Hub (docker.io)

View File

@@ -1,25 +1,25 @@
"""Core Exceptions."""
from collections.abc import Callable
from collections.abc import Callable, Mapping
from typing import Any
MESSAGE_CHECK_SUPERVISOR_LOGS = (
"Check supervisor logs for details (check with '{logs_command}')"
)
EXTRA_FIELDS_LOGS_COMMAND = {"logs_command": "ha supervisor logs"}
class HassioError(Exception):
"""Root exception."""
error_key: str | None = None
message_template: str | None = None
extra_fields: dict[str, Any] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
extra_fields: dict[str, Any] | None = None,
self, message: str | None = None, logger: Callable[..., None] | None = None
) -> None:
"""Raise & log."""
self.extra_fields = extra_fields or {}
if not message and self.message_template:
message = (
self.message_template.format(**self.extra_fields)
@@ -41,6 +41,94 @@ class HassioNotSupportedError(HassioError):
"""Function is not supported."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
headers: Mapping[str, str] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> None:
"""Raise & log, optionally with job."""
super().__init__(message, logger)
self.headers = headers
self.job_id = job_id
class APIUnauthorized(APIError):
"""API unauthorized error."""
status = 401
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APITooManyRequests(APIError):
"""API too many requests error."""
status = 429
class APIInternalServerError(APIError):
"""API internal server error."""
status = 500
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
class APIUnknownSupervisorError(APIError):
"""Unknown error occurred within supervisor. Adds supervisor check logs rider to message template."""
status = 500
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
) -> None:
"""Initialize exception."""
self.message_template = (
f"{self.message_template}. {MESSAGE_CHECK_SUPERVISOR_LOGS}"
)
self.extra_fields = (self.extra_fields or {}) | EXTRA_FIELDS_LOGS_COMMAND
super().__init__(None, logger, job_id=job_id)
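# Illustrative composed message for a subclass such as
# AddonUnknownError(addon="core_ssh") (hypothetical slug):
#   "An unknown error occurred with addon core_ssh. Check supervisor logs for
#    details (check with 'ha supervisor logs')"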
# JobManager
@@ -122,6 +210,13 @@ class SupervisorAppArmorError(SupervisorError):
"""Supervisor AppArmor error."""
class SupervisorUnknownError(SupervisorError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs interacting with Supervisor or its container."""
error_key = "supervisor_unknown_error"
message_template = "An unknown error occurred with Supervisor"
class SupervisorJobError(SupervisorError, JobException):
"""Raise on job errors."""
@@ -250,6 +345,54 @@ class AddonConfigurationError(AddonsError):
"""Error with add-on configuration."""
class AddonConfigurationInvalidError(AddonConfigurationError, APIError):
"""Raise if invalid configuration provided for addon."""
error_key = "addon_configuration_invalid_error"
message_template = "Add-on {addon} has invalid options: {validation_error}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonBootConfigCannotChangeError(AddonsError, APIError):
"""Raise if user attempts to change addon boot config when it can't be changed."""
error_key = "addon_boot_config_cannot_change_error"
message_template = (
"Addon {addon} boot option is set to {boot_config} so it cannot be changed"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, boot_config: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "boot_config": boot_config}
super().__init__(None, logger)
class AddonNotRunningError(AddonsError, APIError):
"""Raise when an addon is not running."""
error_key = "addon_not_running_error"
message_template = "Add-on {addon} is not running"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
@@ -268,11 +411,8 @@ class AddonNotSupportedArchitectureError(AddonNotSupportedError):
architectures: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "architectures": ", ".join(architectures)},
)
self.extra_fields = {"slug": slug, "architectures": ", ".join(architectures)}
super().__init__(None, logger)
class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
@@ -289,11 +429,8 @@ class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
machine_types: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "machine_types": ", ".join(machine_types)},
)
self.extra_fields = {"slug": slug, "machine_types": ", ".join(machine_types)}
super().__init__(None, logger)
class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
@@ -310,11 +447,96 @@ class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
version: str,
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "version": version},
)
self.extra_fields = {"slug": slug, "version": version}
super().__init__(None, logger)
class AddonNotSupportedWriteStdinError(AddonNotSupportedError, APIError):
"""Addon does not support writing to stdin."""
error_key = "addon_not_supported_write_stdin_error"
message_template = "Add-on {addon} does not support writing to stdin"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonBuildDockerfileMissingError(AddonNotSupportedError, APIError):
"""Raise when addon build invalid because dockerfile is missing."""
error_key = "addon_build_dockerfile_missing_error"
message_template = (
"Cannot build addon '{addon}' because dockerfile is missing. A repair "
"using '{repair_command}' will fix this if the cause is data "
"corruption. Otherwise please report this to the addon developer."
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "repair_command": "ha supervisor repair"}
super().__init__(None, logger)
class AddonBuildArchitectureNotSupportedError(AddonNotSupportedError, APIError):
"""Raise when addon cannot be built on system because it doesn't support its architecture."""
error_key = "addon_build_architecture_not_supported_error"
message_template = (
"Cannot build addon '{addon}' because its supported architectures "
"({addon_arches}) do not match the system supported architectures ({system_arches})"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
addon_arch_list: list[str],
system_arch_list: list[str],
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"addon_arches": ", ".join(addon_arch_list),
"system_arches": ", ".join(system_arch_list),
}
super().__init__(None, logger)
class AddonUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for an addon."""
error_key = "addon_unknown_error"
message_template = "An unknown error occurred with addon {addon}"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(logger)
class AddonBuildFailedUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when the build failed for an addon due to an unknown error."""
error_key = "addon_build_failed_unknown_error"
message_template = (
"An unknown error occurred while trying to build the image for addon {addon}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(logger)
class AddonsJobError(AddonsError, JobException):
@@ -346,13 +568,64 @@ class AuthError(HassioError):
"""Auth errors."""
class AuthPasswordResetError(HassioError):
class AuthPasswordResetError(AuthError, APIError):
"""Auth error if password reset failed."""
error_key = "auth_password_reset_error"
message_template = "Username '{user}' does not exist. Check list of users using '{auth_list_command}'."
class AuthListUsersError(HassioError):
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
user: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"user": user, "auth_list_command": "ha auth list"}
super().__init__(None, logger)
class AuthListUsersError(AuthError, APIUnknownSupervisorError):
"""Auth error if listing users failed."""
error_key = "auth_list_users_error"
message_template = "Can't request listing users on Home Assistant"
class AuthListUsersNoneResponseError(AuthError, APIInternalServerError):
"""Auth error if listing users returned invalid None response."""
error_key = "auth_list_users_none_response_error"
message_template = "Home Assistant returned invalid response of `{none}` instead of a list of users. Check Home Assistant logs for details (check with `{logs_command}`)"
extra_fields = {"none": "None", "logs_command": "ha core logs"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
error_key = "auth_invalid_non_string_value_error"
message_template = "Username and password must be strings"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
) -> None:
"""Initialize exception."""
super().__init__(None, logger, headers=headers)
class AuthHomeAssistantAPIValidationError(AuthError, APIUnknownSupervisorError):
"""Error encountered trying to validate auth details via Home Assistant API."""
error_key = "auth_home_assistant_api_validation_error"
message_template = "Unable to validate authentication details with Home Assistant"
# Host
@@ -385,60 +658,6 @@ class HostLogError(HostError):
"""Internal error with host log."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
error: HassioError | None = None,
) -> None:
"""Raise & log, optionally with job."""
# Allow these to be set from another error here since APIErrors essentially wrap others to add a status
self.error_key = error.error_key if error else None
self.message_template = error.message_template if error else None
super().__init__(
message, logger, extra_fields=error.extra_fields if error else None
)
self.job_id = job_id
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
# Service / Discovery
@@ -616,6 +835,10 @@ class DockerError(HassioError):
"""Docker API/Transport errors."""
class DockerBuildError(DockerError):
"""Docker error during build."""
class DockerAPIError(DockerError):
"""Docker API error."""
@@ -632,10 +855,6 @@ class DockerNotFound(DockerError):
"""Docker object don't Exists."""
class DockerLogOutOfOrder(DockerError):
"""Raise when log from docker action was out of order."""
class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space."""
@@ -647,7 +866,7 @@ class DockerNoSpaceOnDevice(DockerError):
super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerError):
class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise for docker hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
@@ -655,16 +874,13 @@ class DockerHubRateLimitExceeded(DockerError):
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see {dockerhub_rate_limit_url}"
)
extra_fields = {
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(
None,
logger=logger,
extra_fields={
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
},
)
super().__init__(None, logger=logger)
class DockerJobError(DockerError, JobException):
@@ -735,6 +951,20 @@ class StoreNotFound(StoreError):
"""Raise if slug is not known."""
class StoreAddonNotFoundError(StoreError, APINotFound):
"""Raise if a requested addon is not in the store."""
error_key = "store_addon_not_found_error"
message_template = "Addon {addon} does not exist in the store"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class StoreJobError(StoreError, JobException):
"""Raise on job error with git."""
@@ -770,7 +1000,7 @@ class BackupJobError(BackupError, JobException):
"""Raise on Backup job error."""
class BackupFileNotFoundError(BackupError):
class BackupFileNotFoundError(BackupError, APINotFound):
"""Raise if the backup file hasn't been found."""
@@ -782,6 +1012,55 @@ class BackupFileExistError(BackupError):
"""Raise if the backup file already exists."""
class AddonBackupMetadataInvalidError(BackupError, APIError):
"""Raise if invalid metadata file provided for addon in backup."""
error_key = "addon_backup_metadata_invalid_error"
message_template = (
"Metadata file for add-on {addon} in backup is invalid: {validation_error}"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonPrePostBackupCommandReturnedError(BackupError, APIError):
"""Raise when addon's pre/post backup command returns an error."""
error_key = "addon_pre_post_backup_command_returned_error"
message_template = (
"Pre-/Post backup command for add-on {addon} returned error code: "
"{exit_code}. Please report this to the addon developer. Enable debug "
"logging to capture complete command output using {debug_logging_command}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, exit_code: int
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"exit_code": exit_code,
"debug_logging_command": "ha supervisor options --logging debug",
}
super().__init__(None, logger)
class BackupRestoreUnknownError(BackupError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs during backup or restore."""
error_key = "backup_restore_unknown_error"
message_template = "An unknown error occurred during backup/restore"
# Security

View File

@@ -6,8 +6,8 @@ import logging
import socket
from ..dbus.const import (
ConnectionState,
ConnectionStateFlags,
ConnectionStateType,
DeviceType,
InterfaceAddrGenMode as NMInterfaceAddrGenMode,
InterfaceIp6Privacy as NMInterfaceIp6Privacy,
@@ -267,25 +267,47 @@ class Interface:
return InterfaceMethod.DISABLED
@staticmethod
def _map_nm_addr_gen_mode(addr_gen_mode: int) -> InterfaceAddrGenMode:
"""Map IPv6 interface addr_gen_mode."""
def _map_nm_addr_gen_mode(addr_gen_mode: int | None) -> InterfaceAddrGenMode:
"""Map IPv6 interface addr_gen_mode.
NetworkManager omits the addr_gen_mode property when set to DEFAULT, so we
treat None as DEFAULT here.
"""
mapping = {
NMInterfaceAddrGenMode.EUI64.value: InterfaceAddrGenMode.EUI64,
NMInterfaceAddrGenMode.STABLE_PRIVACY.value: InterfaceAddrGenMode.STABLE_PRIVACY,
NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value: InterfaceAddrGenMode.DEFAULT_OR_EUI64,
NMInterfaceAddrGenMode.DEFAULT.value: InterfaceAddrGenMode.DEFAULT,
None: InterfaceAddrGenMode.DEFAULT,
}
if addr_gen_mode not in mapping:
_LOGGER.warning(
"Unknown addr_gen_mode value from NetworkManager: %s", addr_gen_mode
)
return mapping.get(addr_gen_mode, InterfaceAddrGenMode.DEFAULT)
@staticmethod
def _map_nm_ip6_privacy(ip6_privacy: int) -> InterfaceIp6Privacy:
"""Map IPv6 interface ip6_privacy."""
def _map_nm_ip6_privacy(ip6_privacy: int | None) -> InterfaceIp6Privacy:
"""Map IPv6 interface ip6_privacy.
NetworkManager omits the ip6_privacy property when set to DEFAULT, so we
treat None as DEFAULT here.
"""
mapping = {
NMInterfaceIp6Privacy.DISABLED.value: InterfaceIp6Privacy.DISABLED,
NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value: InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC,
NMInterfaceIp6Privacy.ENABLED.value: InterfaceIp6Privacy.ENABLED,
NMInterfaceIp6Privacy.DEFAULT.value: InterfaceIp6Privacy.DEFAULT,
None: InterfaceIp6Privacy.DEFAULT,
}
if ip6_privacy not in mapping:
_LOGGER.warning(
"Unknown ip6_privacy value from NetworkManager: %s", ip6_privacy
)
return mapping.get(ip6_privacy, InterfaceIp6Privacy.DEFAULT)
@staticmethod
@@ -295,8 +317,8 @@ class Interface:
return False
return connection.state in (
ConnectionStateType.ACTIVATED,
ConnectionStateType.ACTIVATING,
ConnectionState.ACTIVATED,
ConnectionState.ACTIVATING,
)
@staticmethod

View File

@@ -16,7 +16,7 @@ from ..dbus.const import (
DBUS_IFACE_DNS,
DBUS_IFACE_NM,
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
ConnectionStateType,
ConnectionState,
ConnectivityState,
DeviceType,
WirelessMethodType,
@@ -338,16 +338,16 @@ class NetworkManager(CoreSysAttributes):
# the state change before this point. Get the state currently to
# avoid any race condition.
await con.update()
state: ConnectionStateType = con.state
state: ConnectionState = con.state
while state != ConnectionStateType.ACTIVATED:
if state == ConnectionStateType.DEACTIVATED:
while state != ConnectionState.ACTIVATED:
if state == ConnectionState.DEACTIVATED:
raise HostNetworkError(
"Activating connection failed, check connection settings."
)
msg = await signal.wait_for_signal()
state = msg[0]
state = ConnectionState(msg[0])
_LOGGER.debug("Active connection state changed to %s", state)
# update_only means not done by user so don't force a check afterwards

View File

@@ -9,7 +9,7 @@ from contextvars import Context, ContextVar, Token
from dataclasses import dataclass
from datetime import datetime
import logging
from typing import Any, Self
from typing import Any, Self, cast
from uuid import uuid4
from attr.validators import gt, lt
@@ -102,13 +102,17 @@ class SupervisorJobError:
"Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
)
stage: str | None = None
error_key: str | None = None
extra_fields: dict[str, Any] | None = None
def as_dict(self) -> dict[str, str | None]:
def as_dict(self) -> dict[str, Any]:
"""Return dictionary representation."""
return {
"type": self.type_.__name__,
"message": self.message,
"stage": self.stage,
"error_key": self.error_key,
"extra_fields": self.extra_fields,
}
@@ -158,7 +162,9 @@ class SupervisorJob:
def capture_error(self, err: HassioError | None = None) -> None:
"""Capture an error or record that an unknown error has occurred."""
if err:
new_error = SupervisorJobError(type(err), str(err), self.stage)
new_error = SupervisorJobError(
type(err), str(err), self.stage, err.error_key, err.extra_fields
)
else:
new_error = SupervisorJobError(stage=self.stage)
self.errors += [new_error]
@@ -196,7 +202,7 @@ class SupervisorJob:
self,
progress: float | None = None,
stage: str | None = None,
extra: dict[str, Any] | None = DEFAULT, # type: ignore
extra: dict[str, Any] | None | type[DEFAULT] = DEFAULT,
done: bool | None = None,
) -> None:
"""Update multiple fields with one on change event."""
@@ -207,8 +213,8 @@ class SupervisorJob:
self.progress = progress
if stage is not None:
self.stage = stage
if extra != DEFAULT:
self.extra = extra
if extra is not DEFAULT:
self.extra = cast(dict[str, Any] | None, extra)
# Done has special event. use that to trigger on change if included
# If not then just use any other field to trigger
@@ -306,19 +312,21 @@ class JobManager(FileConfiguration, CoreSysAttributes):
reference: str | None = None,
initial_stage: str | None = None,
internal: bool = False,
parent_id: str | None = DEFAULT, # type: ignore
parent_id: str | None | type[DEFAULT] = DEFAULT,
child_job_syncs: list[ChildJobSyncFilter] | None = None,
) -> SupervisorJob:
"""Create a new job."""
job = SupervisorJob(
name,
reference=reference,
stage=initial_stage,
on_change=self._on_job_change,
internal=internal,
child_job_syncs=child_job_syncs,
**({} if parent_id == DEFAULT else {"parent_id": parent_id}), # type: ignore
)
kwargs: dict[str, Any] = {
"reference": reference,
"stage": initial_stage,
"on_change": self._on_job_change,
"internal": internal,
"child_job_syncs": child_job_syncs,
}
if parent_id is not DEFAULT:
kwargs["parent_id"] = parent_id
job = SupervisorJob(name, **kwargs)
# Shouldn't happen but inability to find a parent for progress reporting
# shouldn't raise and break the active job

View File

@@ -34,6 +34,7 @@ class JobCondition(StrEnum):
PLUGINS_UPDATED = "plugins_updated"
RUNNING = "running"
SUPERVISOR_UPDATED = "supervisor_updated"
ARCHITECTURE_SUPPORTED = "architecture_supported"
class JobConcurrency(StrEnum):

View File

@@ -441,6 +441,14 @@ class Job(CoreSysAttributes):
raise JobConditionException(
f"'{method_name}' blocked from execution, supervisor needs to be updated first"
)
if (
JobCondition.ARCHITECTURE_SUPPORTED in used_conditions
and UnsupportedReason.SYSTEM_ARCHITECTURE
in coresys.sys_resolution.unsupported
):
raise JobConditionException(
f"'{method_name}' blocked from execution, unsupported system architecture"
)
if JobCondition.PLUGINS_UPDATED in used_conditions and (
out_of_date := [

View File

@@ -1,5 +1,6 @@
"""A collection of tasks."""
from contextlib import suppress
from datetime import datetime, timedelta
import logging
from typing import cast
@@ -13,6 +14,7 @@ from ..exceptions import (
BackupFileNotFoundError,
HomeAssistantError,
ObserverError,
SupervisorUpdateError,
)
from ..homeassistant.const import LANDINGPAGE, WSType
from ..jobs.const import JobConcurrency
@@ -161,6 +163,7 @@ class Tasks(CoreSysAttributes):
JobCondition.INTERNET_HOST,
JobCondition.OS_SUPPORTED,
JobCondition.RUNNING,
JobCondition.ARCHITECTURE_SUPPORTED,
],
concurrency=JobConcurrency.REJECT,
)
@@ -173,7 +176,11 @@ class Tasks(CoreSysAttributes):
"Found new Supervisor version %s, updating",
self.sys_supervisor.latest_version,
)
await self.sys_supervisor.update()
# Errors are logged by the exceptions; there is not much we can do
# if an update fails here.
with suppress(SupervisorUpdateError):
await self.sys_supervisor.update()
async def _watchdog_homeassistant_api(self):
"""Create scheduler task for monitoring running state of API.

View File

@@ -135,7 +135,7 @@ class Mount(CoreSysAttributes, ABC):
@property
def state(self) -> UnitActiveState | None:
"""Get state of mount."""
return self._state
return UnitActiveState(self._state) if self._state is not None else None
@cached_property
def local_where(self) -> Path:

View File

@@ -23,4 +23,5 @@ PLUGIN_UPDATE_CONDITIONS = [
JobCondition.HEALTHY,
JobCondition.INTERNET_HOST,
JobCondition.SUPERVISOR_UPDATED,
JobCondition.ARCHITECTURE_SUPPORTED,
]

View File

@@ -2,7 +2,7 @@
from ...const import CoreState
from ...coresys import CoreSys
from ...dbus.const import ConnectionStateFlags, ConnectionStateType
from ...dbus.const import ConnectionState, ConnectionStateFlags
from ...dbus.network.interface import NetworkInterface
from ...exceptions import NetworkInterfaceNotFound
from ..const import ContextType, IssueType
@@ -47,7 +47,7 @@ class CheckNetworkInterfaceIPV4(CheckBase):
return not (
interface.connection.state
in [ConnectionStateType.ACTIVATED, ConnectionStateType.ACTIVATING]
in [ConnectionState.ACTIVATED, ConnectionState.ACTIVATING]
and ConnectionStateFlags.IP4_READY in interface.connection.state_flags
)

View File

@@ -58,6 +58,7 @@ class UnsupportedReason(StrEnum):
SYSTEMD_JOURNAL = "systemd_journal"
SYSTEMD_RESOLVED = "systemd_resolved"
VIRTUALIZATION_IMAGE = "virtualization_image"
SYSTEM_ARCHITECTURE = "system_architecture"
class UnhealthyReason(StrEnum):

View File

@@ -5,8 +5,6 @@ from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase
SUPPORTED_OS = ["Debian GNU/Linux 12 (bookworm)"]
def setup(coresys: CoreSys) -> EvaluateBase:
"""Initialize evaluation-setup function."""
@@ -33,6 +31,4 @@ class EvaluateOperatingSystem(EvaluateBase):
async def evaluate(self) -> bool:
"""Run evaluation."""
if self.sys_os.available:
return False
return self.sys_host.info.operating_system not in SUPPORTED_OS
return not self.sys_os.available

View File

@@ -0,0 +1,38 @@
"""Evaluation class for system architecture support."""
from ...const import CoreState
from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase
def setup(coresys: CoreSys) -> EvaluateBase:
"""Initialize evaluation-setup function."""
return EvaluateSystemArchitecture(coresys)
class EvaluateSystemArchitecture(EvaluateBase):
"""Evaluate if the current Supervisor architecture is supported."""
@property
def reason(self) -> UnsupportedReason:
"""Return a UnsupportedReason enum."""
return UnsupportedReason.SYSTEM_ARCHITECTURE
@property
def on_failure(self) -> str:
"""Return a string that is printed when self.evaluate is True."""
return "System architecture is no longer supported. Move to a supported system architecture."
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self):
"""Run evaluation."""
return self.sys_host.info.sys_arch.supervisor in {
"i386",
"armhf",
"armv7",
}

View File

@@ -183,19 +183,22 @@ class GitRepo(CoreSysAttributes):
raise StoreGitError() from err
try:
branch = self.repo.active_branch.name
repo = self.repo
# Download data
await self.sys_run_in_executor(
ft.partial(
self.repo.remotes.origin.fetch,
**{"update-shallow": True, "depth": 1}, # type: ignore
def _fetch_and_check() -> tuple[str, bool]:
"""Fetch from origin and check if changed."""
# This property access is I/O bound
branch = repo.active_branch.name
repo.remotes.origin.fetch(
**{"update-shallow": True, "depth": 1} # type: ignore[arg-type]
)
)
changed = repo.commit(branch) != repo.commit(f"origin/{branch}")
return branch, changed
if changed := self.repo.commit(branch) != self.repo.commit(
f"origin/{branch}"
):
# Download data and check for changes
branch, changed = await self.sys_run_in_executor(_fetch_and_check)
if changed:
# Jump on top of that
await self.sys_run_in_executor(
ft.partial(self.repo.git.reset, f"origin/{branch}", hard=True)

View File

@@ -28,8 +28,8 @@ from .exceptions import (
DockerError,
HostAppArmorError,
SupervisorAppArmorError,
SupervisorError,
SupervisorJobError,
SupervisorUnknownError,
SupervisorUpdateError,
)
from .jobs.const import JobCondition, JobThrottle
@@ -261,7 +261,7 @@ class Supervisor(CoreSysAttributes):
try:
return await self.instance.stats()
except DockerError as err:
raise SupervisorError() from err
raise SupervisorUnknownError() from err
async def repair(self):
"""Repair local Supervisor data."""

View File

@@ -242,9 +242,10 @@ class Updater(FileConfiguration, CoreSysAttributes):
@Job(
name="updater_fetch_data",
conditions=[
JobCondition.ARCHITECTURE_SUPPORTED,
JobCondition.INTERNET_SYSTEM,
JobCondition.OS_SUPPORTED,
JobCondition.HOME_ASSISTANT_CORE_SUPPORTED,
JobCondition.OS_SUPPORTED,
],
on_condition=UpdaterJobError,
throttle_period=timedelta(seconds=30),

View File

@@ -7,13 +7,7 @@ from collections.abc import Awaitable, Callable
import logging
from typing import Any, Protocol, cast
from dbus_fast import (
ErrorType,
InvalidIntrospectionError,
Message,
MessageType,
Variant,
)
from dbus_fast import ErrorType, InvalidIntrospectionError, Message, MessageType
from dbus_fast.aio.message_bus import MessageBus
from dbus_fast.aio.proxy_object import ProxyInterface, ProxyObject
from dbus_fast.errors import DBusError as DBusFastDBusError
@@ -265,7 +259,7 @@ class DBus:
"""
async def sync_property_change(
prop_interface: str, changed: dict[str, Variant], invalidated: list[str]
prop_interface: str, changed: dict[str, Any], invalidated: list[str]
) -> None:
"""Sync property changes to cache."""
if interface != prop_interface:

View File

@@ -5,6 +5,7 @@ from datetime import timedelta
import errno
from http import HTTPStatus
from pathlib import Path
from typing import Any
from unittest.mock import MagicMock, PropertyMock, call, patch
import aiodocker
@@ -23,7 +24,13 @@ from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
from supervisor.exceptions import (
AddonPrePostBackupCommandReturnedError,
AddonsJobError,
AddonUnknownError,
AudioUpdateError,
HassioError,
)
from supervisor.hardware.helper import HwHelper
from supervisor.ingress import Ingress
from supervisor.store.repository import Repository
@@ -502,31 +509,26 @@ async def test_backup_with_pre_post_command(
@pytest.mark.parametrize(
"get_error,exception_on_exec",
("container_get_side_effect", "exec_run_side_effect", "exc_type_raised"),
[
(NotFound("missing"), False),
(DockerException(), False),
(None, True),
(None, False),
(NotFound("missing"), [(1, None)], AddonUnknownError),
(DockerException(), [(1, None)], AddonUnknownError),
(None, DockerException(), AddonUnknownError),
(None, [(1, None)], AddonPrePostBackupCommandReturnedError),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
async def test_backup_with_pre_command_error(
coresys: CoreSys,
install_addon_ssh: Addon,
container: MagicMock,
get_error: DockerException | None,
exception_on_exec: bool,
tmp_supervisor_data,
path_extern,
container_get_side_effect: DockerException | None,
exec_run_side_effect: DockerException | list[tuple[int, Any]],
exc_type_raised: type[HassioError],
) -> None:
"""Test backing up an addon with error running pre command."""
if get_error:
coresys.docker.containers.get.side_effect = get_error
if exception_on_exec:
container.exec_run.side_effect = DockerException()
else:
container.exec_run.return_value = (1, None)
coresys.docker.containers.get.side_effect = container_get_side_effect
container.exec_run.side_effect = exec_run_side_effect
install_addon_ssh.path_data.mkdir()
await install_addon_ssh.load()
@@ -535,7 +537,7 @@ async def test_backup_with_pre_command_error(
with (
patch.object(DockerAddon, "is_running", return_value=True),
patch.object(Addon, "backup_pre", new=PropertyMock(return_value="backup_pre")),
pytest.raises(AddonsError),
pytest.raises(exc_type_raised),
):
assert await install_addon_ssh.backup(tarfile) is None
@@ -947,7 +949,7 @@ async def test_addon_load_succeeds_with_docker_errors(
)
caplog.clear()
await install_addon_ssh.load()
assert "Invalid build environment" in caplog.text
assert "Cannot build addon 'local_ssh' because dockerfile is missing" in caplog.text
# Image build failure
caplog.clear()

View File

@@ -1,12 +1,18 @@
"""Test addon build."""
import base64
import json
from pathlib import Path
from unittest.mock import PropertyMock, patch
from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.addons.build import AddonBuild
from supervisor.coresys import CoreSys
from supervisor.docker.const import DOCKER_HUB
from supervisor.exceptions import AddonBuildDockerfileMissingError
from tests.common import is_in_list
@@ -29,7 +35,7 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
)
assert is_in_list(["--platform", "linux/amd64"], args["command"])
@@ -53,7 +59,7 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
)
assert is_in_list(["--file", "Dockerfile"], args["command"])
@@ -81,7 +87,7 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
)
assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
@@ -102,11 +108,11 @@ async def test_build_valid(coresys: CoreSys, install_addon_ssh: Addon):
type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
),
):
assert await build.is_valid()
assert (await build.is_valid()) is None
async def test_build_invalid(coresys: CoreSys, install_addon_ssh: Addon):
"""Test platform set in docker args."""
"""Test build not supported because Dockerfile missing for specified architecture."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
@@ -115,5 +121,161 @@ async def test_build_invalid(coresys: CoreSys, install_addon_ssh: Addon):
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
pytest.raises(AddonBuildDockerfileMissingError),
):
assert not await build.is_valid()
await build.is_valid()
async def test_docker_config_no_registries(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker config generation when no registries configured."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# No registries configured by default
assert build.get_docker_config_json() is None
async def test_docker_config_no_matching_registry(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker config generation when registry doesn't match base image."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure a registry that doesn't match the base image
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
"some.other.registry": {"username": "user", "password": "pass"}
}
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
):
# Base image is ghcr.io/home-assistant/... which doesn't match
assert build.get_docker_config_json() is None
async def test_docker_config_matching_registry(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker config generation when registry matches base image."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure ghcr.io registry which matches the default base image
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
"ghcr.io": {"username": "testuser", "password": "testpass"}
}
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
):
config_json = build.get_docker_config_json()
assert config_json is not None
config = json.loads(config_json)
assert "auths" in config
assert "ghcr.io" in config["auths"]
# Verify base64-encoded credentials
expected_auth = base64.b64encode(b"testuser:testpass").decode()
assert config["auths"]["ghcr.io"]["auth"] == expected_auth
async def test_docker_config_docker_hub(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker config generation for Docker Hub registry."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure Docker Hub registry
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
DOCKER_HUB: {"username": "hubuser", "password": "hubpass"}
}
# Mock base_image to return a Docker Hub image (no registry prefix)
with patch.object(
type(build),
"base_image",
new=PropertyMock(return_value="library/alpine:latest"),
):
config_json = build.get_docker_config_json()
assert config_json is not None
config = json.loads(config_json)
# Docker Hub uses special URL as key
assert "https://index.docker.io/v1/" in config["auths"]
expected_auth = base64.b64encode(b"hubuser:hubpass").decode()
assert config["auths"]["https://index.docker.io/v1/"]["auth"] == expected_auth
async def test_docker_args_with_config_path(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker args include config volume when path provided."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
side_effect=lambda p: f"/extern{p}",
),
):
config_path = Path("/data/supervisor/tmp/config.json")
args = await coresys.run_in_executor(
build.get_docker_args,
AwesomeVersion("latest"),
"test-image:latest",
config_path,
)
# Check that config is mounted
assert "/extern/data/supervisor/tmp/config.json" in args["volumes"]
assert (
args["volumes"]["/extern/data/supervisor/tmp/config.json"]["bind"]
== "/root/.docker/config.json"
)
assert args["volumes"]["/extern/data/supervisor/tmp/config.json"]["mode"] == "ro"
async def test_docker_args_without_config_path(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker args don't include config volume when no path provided."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
)
# Only docker socket and addon path should be mounted
assert len(args["volumes"]) == 2
# Verify no docker config mount
for bind in args["volumes"].values():
assert bind["bind"] != "/root/.docker/config.json"

View File
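The docker-config tests above all assert on the same config.json shape. A minimal sketch of that format with hypothetical names; only the documented Docker client layout is assumed, not the Supervisor helper itself:

import base64
import json

DOCKER_HUB_AUTH_KEY = "https://index.docker.io/v1/"  # key Docker clients use for Docker Hub

def docker_config_json(registry: str | None, username: str, password: str) -> str:
    """Build a single-entry Docker config.json: {"auths": {<registry>: {"auth": base64(user:pass)}}}."""
    key = registry or DOCKER_HUB_AUTH_KEY
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {key: {"auth": auth}}})

For example, docker_config_json("ghcr.io", "testuser", "testpass") yields the ghcr.io entry checked in test_docker_config_matching_registry.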

@@ -10,7 +10,7 @@ from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.arch import CpuArchManager
from supervisor.config import CoreConfig
from supervisor.const import AddonBoot, AddonStartup, AddonState, BusEvent
from supervisor.coresys import CoreSys
@@ -54,7 +54,9 @@ async def fixture_mock_arch_disk() -> AsyncGenerator[None]:
"""Mock supported arch and disk space."""
with (
patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
):
yield

View File

@@ -32,8 +32,24 @@ async def _common_test_api_advanced_logs(
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, False)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs?no_colors")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, True)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/follow")
assert resp.status == 200

View File

@@ -5,11 +5,12 @@ from unittest.mock import MagicMock, PropertyMock, patch
from aiohttp import ClientResponse
from aiohttp.test_utils import TestClient
from docker.errors import DockerException
import pytest
from supervisor.addons.addon import Addon
from supervisor.addons.build import AddonBuild
from supervisor.arch import CpuArch
from supervisor.arch import CpuArchManager
from supervisor.const import AddonState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
@@ -236,7 +237,9 @@ async def test_api_addon_rebuild_healthcheck(
patch.object(AddonBuild, "is_valid", return_value=True),
patch.object(DockerAddon, "is_running", return_value=False),
patch.object(Addon, "need_build", new=PropertyMock(return_value=True)),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(DockerAddon, "run", new=container_events_task),
patch.object(
coresys.docker,
@@ -308,7 +311,9 @@ async def test_api_addon_rebuild_force(
patch.object(
Addon, "need_build", new=PropertyMock(return_value=False)
), # Image-based
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
):
resp = await api_client.post("/addons/local_ssh/rebuild")
@@ -326,7 +331,9 @@ async def test_api_addon_rebuild_force(
patch.object(
Addon, "need_build", new=PropertyMock(return_value=False)
), # Image-based
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(DockerAddon, "run", new=container_events_task),
patch.object(
coresys.docker,
@@ -471,6 +478,11 @@ async def test_addon_options_boot_mode_manual_only_invalid(
body["message"]
== "Addon local_example boot option is set to manual_only so it cannot be changed"
)
assert body["error_key"] == "addon_boot_config_cannot_change_error"
assert body["extra_fields"] == {
"addon": "local_example",
"boot_config": "manual_only",
}
async def get_message(resp: ClientResponse, json_expected: bool) -> str:
@@ -539,3 +551,133 @@ async def test_addon_not_installed(
resp = await api_client.request(method, url)
assert resp.status == 400
assert await get_message(resp, json_expected) == "Addon is not installed"
async def test_addon_set_options(api_client: TestClient, install_addon_example: Addon):
"""Test setting options for an addon."""
resp = await api_client.post(
"/addons/local_example/options", json={"options": {"message": "test"}}
)
assert resp.status == 200
assert install_addon_example.options == {"message": "test"}
async def test_addon_set_options_error(
api_client: TestClient, install_addon_example: Addon
):
"""Test setting options for an addon."""
resp = await api_client.post(
"/addons/local_example/options", json={"options": {"message": True}}
)
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Add-on local_example has invalid options: not a valid value. Got {'message': True}"
)
assert body["error_key"] == "addon_configuration_invalid_error"
assert body["extra_fields"] == {
"addon": "local_example",
"validation_error": "not a valid value. Got {'message': True}",
}
async def test_addon_start_options_error(
api_client: TestClient,
install_addon_example: Addon,
caplog: pytest.LogCaptureFixture,
):
"""Test error writing options when trying to start addon."""
install_addon_example.options = {"message": "hello"}
# Simulate OS error trying to write the file
with patch("supervisor.utils.json.atomic_write", side_effect=OSError("fail")):
resp = await api_client.post("/addons/local_example/start")
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "An unknown error occurred with addon local_example. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "addon_unknown_error"
assert body["extra_fields"] == {
"addon": "local_example",
"logs_command": "ha supervisor logs",
}
assert "Add-on local_example can't write options" in caplog.text
# Simulate an update with a breaking change for options schema creating failure on start
caplog.clear()
install_addon_example.data["schema"] = {"message": "bool"}
resp = await api_client.post("/addons/local_example/start")
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Add-on local_example has invalid options: expected boolean. Got {'message': 'hello'}"
)
assert body["error_key"] == "addon_configuration_invalid_error"
assert body["extra_fields"] == {
"addon": "local_example",
"validation_error": "expected boolean. Got {'message': 'hello'}",
}
assert (
"Add-on local_example has invalid options: expected boolean. Got {'message': 'hello'}"
in caplog.text
)
@pytest.mark.parametrize(("method", "action"), [("get", "stats"), ("post", "stdin")])
@pytest.mark.usefixtures("install_addon_example")
async def test_addon_not_running_error(
api_client: TestClient, method: str, action: str
):
"""Test addon not running error for endpoints that require that."""
with patch.object(Addon, "with_stdin", new=PropertyMock(return_value=True)):
resp = await api_client.request(method, f"/addons/local_example/{action}")
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Add-on local_example is not running"
assert body["error_key"] == "addon_not_running_error"
assert body["extra_fields"] == {"addon": "local_example"}
@pytest.mark.usefixtures("install_addon_example")
async def test_addon_write_stdin_not_supported_error(api_client: TestClient):
"""Test error when trying to write stdin to addon that does not support it."""
resp = await api_client.post("/addons/local_example/stdin")
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Add-on local_example does not support writing to stdin"
assert body["error_key"] == "addon_not_supported_write_stdin_error"
assert body["extra_fields"] == {"addon": "local_example"}
@pytest.mark.usefixtures("install_addon_ssh")
async def test_addon_rebuild_fails_error(api_client: TestClient, coresys: CoreSys):
"""Test error when build fails during rebuild for addon."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.docker.containers.run.side_effect = DockerException("fail")
with (
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["aarch64"])
),
patch.object(
CpuArchManager, "default", new=PropertyMock(return_value="aarch64")
),
patch.object(AddonBuild, "get_docker_args", return_value={}),
):
resp = await api_client.post("/addons/local_ssh/rebuild")
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "An unknown error occurred while trying to build the image for addon local_ssh. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "addon_build_failed_unknown_error"
assert body["extra_fields"] == {
"addon": "local_ssh",
"logs_command": "ha supervisor logs",
}

View File
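The new API tests above all check the same structured error payload. A summary sketch of that shape, taken from the assertions rather than the API implementation: "message" is the human-readable text, "error_key" a stable identifier clients can map to translations, and "extra_fields" the structured values interpolated into the message.

example_error_body = {
    "result": "error",
    "message": "Add-on local_example is not running",
    "error_key": "addon_not_running_error",
    "extra_fields": {"addon": "local_example"},
}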

@@ -1,6 +1,7 @@
"""Test auth API."""
from datetime import UTC, datetime, timedelta
from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch
from aiohttp.hdrs import WWW_AUTHENTICATE
@@ -9,6 +10,8 @@ import pytest
from supervisor.addons.addon import Addon
from supervisor.coresys import CoreSys
from supervisor.exceptions import HomeAssistantAPIError, HomeAssistantWSError
from supervisor.homeassistant.api import HomeAssistantAPI
from tests.common import MockResponse
from tests.const import TEST_ADDON_SLUG
@@ -100,6 +103,52 @@ async def test_password_reset(
assert "Successful password reset for 'john'" in caplog.text
@pytest.mark.parametrize(
("post_mock", "expected_log"),
[
(
MagicMock(return_value=MockResponse(status=400)),
"The user 'john' is not registered",
),
(
MagicMock(side_effect=HomeAssistantAPIError("fail")),
"Can't request password reset on Home Assistant: fail",
),
],
)
async def test_failed_password_reset(
api_client: TestClient,
coresys: CoreSys,
caplog: pytest.LogCaptureFixture,
websession: MagicMock,
post_mock: MagicMock,
expected_log: str,
):
"""Test failed password reset."""
coresys.homeassistant.api.access_token = "abc123"
# pylint: disable-next=protected-access
coresys.homeassistant.api._access_token_expires = datetime.now(tz=UTC) + timedelta(
days=1
)
websession.post = post_mock
resp = await api_client.post(
"/auth/reset", json={"username": "john", "password": "doe"}
)
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Username 'john' does not exist. Check list of users using 'ha auth list'."
)
assert body["error_key"] == "auth_password_reset_error"
assert body["extra_fields"] == {
"user": "john",
"auth_list_command": "ha auth list",
}
assert expected_log in caplog.text
async def test_list_users(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
@@ -120,6 +169,48 @@ async def test_list_users(
]
@pytest.mark.parametrize(
("send_command_mock", "error_response", "expected_log"),
[
(
AsyncMock(return_value=None),
{
"result": "error",
"message": "Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
"error_key": "auth_list_users_none_response_error",
"extra_fields": {"none": "None", "logs_command": "ha core logs"},
},
"Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
),
(
AsyncMock(side_effect=HomeAssistantWSError("fail")),
{
"result": "error",
"message": "Can't request listing users on Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')",
"error_key": "auth_list_users_error",
"extra_fields": {"logs_command": "ha supervisor logs"},
},
"Can't request listing users on Home Assistant: fail",
),
],
)
async def test_list_users_failure(
api_client: TestClient,
ha_ws_client: AsyncMock,
caplog: pytest.LogCaptureFixture,
send_command_mock: AsyncMock,
error_response: dict[str, Any],
expected_log: str,
):
"""Test failure listing users via API."""
ha_ws_client.async_send_command = send_command_mock
resp = await api_client.get("/auth/list")
assert resp.status == 500
result = await resp.json()
assert result == error_response
assert expected_log in caplog.text
@pytest.mark.parametrize(
("field", "api_client"),
[("username", TEST_ADDON_SLUG), ("user", TEST_ADDON_SLUG)],
@@ -156,6 +247,13 @@ async def test_auth_json_failure_none(
mock_check_login.return_value = True
resp = await api_client.post("/auth", json={"username": user, "password": password})
assert resp.status == 401
assert (
resp.headers["WWW-Authenticate"]
== 'Basic realm="Home Assistant Authentication"'
)
body = await resp.json()
assert body["message"] == "Username and password must be strings"
assert body["error_key"] == "auth_invalid_non_string_value_error"
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -267,3 +365,26 @@ async def test_non_addon_token_no_auth_access(api_client: TestClient):
"""Test auth where add-on is not allowed to access auth API."""
resp = await api_client.post("/auth", json={"username": "test", "password": "pass"})
assert resp.status == 403
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@pytest.mark.usefixtures("install_addon_ssh")
async def test_auth_backend_login_failure(api_client: TestClient):
"""Test backend login failure on auth."""
with (
patch.object(HomeAssistantAPI, "check_api_state", return_value=True),
patch.object(
HomeAssistantAPI, "make_request", side_effect=HomeAssistantAPIError("fail")
),
):
resp = await api_client.post(
"/auth", json={"username": "test", "password": "pass"}
)
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "Unable to validate authentication details with Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "auth_home_assistant_api_validation_error"
assert body["extra_fields"] == {"logs_command": "ha supervisor logs"}

View File

@@ -17,6 +17,7 @@ from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.manager import DockerAPI
from supervisor.exceptions import (
AddonPrePostBackupCommandReturnedError,
AddonsError,
BackupInvalidError,
HomeAssistantBackupError,
@@ -24,6 +25,7 @@ from supervisor.exceptions import (
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from supervisor.homeassistant.websocket import HomeAssistantWebSocket
from supervisor.jobs import SupervisorJob
from supervisor.mounts.mount import Mount
from supervisor.supervisor import Supervisor
@@ -401,6 +403,8 @@ async def test_api_backup_errors(
"type": "BackupError",
"message": str(err),
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
assert job["child_jobs"][2]["name"] == "backup_store_folders"
@@ -437,6 +441,8 @@ async def test_api_backup_errors(
"type": "HomeAssistantBackupError",
"message": "Backup error",
"stage": "home_assistant",
"error_key": None,
"extra_fields": None,
}
]
assert job["child_jobs"][0]["name"] == "backup_store_homeassistant"
@@ -445,6 +451,8 @@ async def test_api_backup_errors(
"type": "HomeAssistantBackupError",
"message": "Backup error",
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
assert len(job["child_jobs"]) == 1
@@ -749,6 +757,8 @@ async def test_backup_to_multiple_locations_error_on_copy(
"type": "BackupError",
"message": "Could not copy backup to .cloud_backup due to: ",
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
@@ -1483,3 +1493,44 @@ async def test_immediate_list_after_missing_file_restore(
result = await resp.json()
assert len(result["data"]["backups"]) == 1
assert result["data"]["backups"][0]["slug"] == "93b462f8"
@pytest.mark.parametrize("command", ["backup_pre", "backup_post"])
@pytest.mark.usefixtures("install_addon_example", "tmp_supervisor_data")
async def test_pre_post_backup_command_error(
api_client: TestClient, coresys: CoreSys, container: MagicMock, command: str
):
"""Test pre/post backup command error."""
await coresys.core.set_state(CoreState.RUNNING)
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
container.status = "running"
container.exec_run.return_value = (1, b"")
with patch.object(Addon, command, new=PropertyMock(return_value="test")):
resp = await api_client.post(
"/backups/new/partial", json={"addons": ["local_example"]}
)
assert resp.status == 200
body = await resp.json()
job_id = body["data"]["job_id"]
job: SupervisorJob | None = None
for j in coresys.jobs.jobs:
if j.name == "backup_store_addons" and j.parent_id == job_id:
job = j
break
assert job
assert job.done is True
assert job.errors[0].type_ == AddonPrePostBackupCommandReturnedError
assert job.errors[0].message == (
"Pre-/Post backup command for add-on local_example returned error code: "
"1. Please report this to the addon developer. Enable debug "
"logging to capture complete command output using ha supervisor options --logging debug"
)
assert job.errors[0].error_key == "addon_pre_post_backup_command_returned_error"
assert job.errors[0].extra_fields == {
"addon": "local_example",
"exit_code": 1,
"debug_logging_command": "ha supervisor options --logging debug",
}

View File

@@ -4,6 +4,11 @@ from aiohttp.test_utils import TestClient
import pytest
from supervisor.coresys import CoreSys
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from supervisor.resolution.data import Issue, Suggestion
from tests.dbus_service_mocks.agent_system import System as SystemService
from tests.dbus_service_mocks.base import DBusServiceMock
@pytest.mark.asyncio
@@ -84,3 +89,79 @@ async def test_registry_not_found(api_client: TestClient):
assert resp.status == 404
body = await resp.json()
assert body["message"] == "Hostname bad does not exist in registries"
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver(
api_client: TestClient,
coresys: CoreSys,
os_agent_services: dict[str, DBusServiceMock],
os_available,
):
"""Test Docker storage driver migration."""
system_service: SystemService = os_agent_services["agent_system"]
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlayfs",)]
assert (
Issue(IssueType.REBOOT_REQUIRED, ContextType.SYSTEM)
in coresys.resolution.issues
)
assert (
Suggestion(SuggestionType.EXECUTE_REBOOT, ContextType.SYSTEM)
in coresys.resolution.suggestions
)
# Test migration back to overlay2 (graph driver)
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlay2"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlay2",)]
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver_invalid_backend(
api_client: TestClient,
os_available,
):
"""Test 400 is returned for invalid storage driver."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "invalid"},
)
assert resp.status == 400
async def test_api_migrate_docker_storage_driver_not_os(
api_client: TestClient,
coresys: CoreSys,
):
"""Test 404 is returned if not running on HAOS."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404
@pytest.mark.parametrize("os_available", ["16.2"], indirect=True)
async def test_api_migrate_docker_storage_driver_old_os(
api_client: TestClient,
coresys: CoreSys,
os_available,
):
"""Test 404 is returned if OS is older than 17.0."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404

View File

@@ -305,6 +305,8 @@ async def test_api_progress_updates_home_assistant_update(
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
]
# Count-based progress: 2 layers need pulling (each worth 50%)
# Layers that already exist are excluded from progress calculation
assert events[:5] == [
{
"stage": None,
@@ -318,34 +320,34 @@ async def test_api_progress_updates_home_assistant_update(
},
{
"stage": None,
"progress": 0.1,
"progress": 9.2,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"progress": 25.6,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 35.4,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 95.5,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 96.9,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 98.2,
"done": False,
},
{

View File

@@ -292,6 +292,18 @@ async def test_advaced_logs_query_parameters(
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, False)
journal_logs_reader.reset_mock()
journald_logs.reset_mock()
# Check no_colors query parameter
await api_client.get("/host/logs?no_colors")
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": coresys.host.logs.default_identifiers},
range_header=DEFAULT_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, True)
async def test_advanced_logs_boot_id_offset(
api_client: TestClient, coresys: CoreSys, journald_logs: MagicMock

View File

@@ -374,6 +374,8 @@ async def test_job_with_error(
"type": "SupervisorError",
"message": "bad",
"stage": "test",
"error_key": None,
"extra_fields": None,
}
],
"child_jobs": [
@@ -391,6 +393,8 @@ async def test_job_with_error(
"type": "SupervisorError",
"message": "bad",
"stage": None,
"error_key": None,
"extra_fields": None,
}
],
"child_jobs": [],

View File

@@ -4,13 +4,12 @@ import asyncio
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp import ClientResponse
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.arch import CpuArchManager
from supervisor.backups.manager import BackupManager
from supervisor.config import CoreConfig
from supervisor.const import AddonState, CoreState
@@ -191,7 +190,9 @@ async def test_api_store_update_healthcheck(
patch.object(DockerAddon, "run", new=container_events_task),
patch.object(DockerInterface, "install"),
patch.object(DockerAddon, "is_running", return_value=False),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
):
resp = await api_client.post(f"/store/addons/{TEST_ADDON_SLUG}/update")
@@ -290,14 +291,6 @@ async def test_api_detached_addon_documentation(
assert result == "Addon local_ssh does not exist in the store"
async def get_message(resp: ClientResponse, json_expected: bool) -> str:
"""Get message from response based on response type."""
if json_expected:
body = await resp.json()
return body["message"]
return await resp.text()
@pytest.mark.parametrize(
("method", "url", "json_expected"),
[
@@ -323,7 +316,13 @@ async def test_store_addon_not_found(
"""Test store addon not found error."""
resp = await api_client.request(method, url)
assert resp.status == 404
assert await get_message(resp, json_expected) == "Addon bad does not exist"
if json_expected:
body = await resp.json()
assert body["message"] == "Addon bad does not exist in the store"
assert body["error_key"] == "store_addon_not_found_error"
assert body["extra_fields"] == {"addon": "bad"}
else:
assert await resp.text() == "Addon bad does not exist in the store"
@pytest.mark.parametrize(
@@ -548,7 +547,9 @@ async def test_api_store_addons_addon_availability_arch_not_supported(
coresys.addons.data.user[addon_obj.slug] = {"version": AwesomeVersion("0.0.1")}
# Mock the system architecture to be different
with patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])):
with patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
):
resp = await api_client.request(
api_method, f"/store/addons/{addon_obj.slug}/{api_action}"
)
@@ -760,6 +761,8 @@ async def test_api_progress_updates_addon_install_update(
and evt.args[0]["data"]["data"]["name"] == job_name
and evt.args[0]["data"]["data"]["reference"] == addon_slug
]
# Count-based progress: 2 layers need pulling (each worth 50%)
# Layers that already exist are excluded from progress calculation
assert events[:4] == [
{
"stage": None,
@@ -768,34 +771,34 @@ async def test_api_progress_updates_addon_install_update(
},
{
"stage": None,
"progress": 0.1,
"progress": 9.2,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"progress": 25.6,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 35.4,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 95.5,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 96.9,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 98.2,
"done": False,
},
{

View File

@@ -7,6 +7,7 @@ from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockingError
from docker.errors import DockerException
import pytest
from supervisor.const import CoreState
@@ -358,6 +359,8 @@ async def test_api_progress_updates_supervisor_update(
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
]
# Count-based progress: 2 layers need pulling (each worth 50%)
# Layers that already exist are excluded from progress calculation
assert events[:4] == [
{
"stage": None,
@@ -366,34 +369,34 @@ async def test_api_progress_updates_supervisor_update(
},
{
"stage": None,
"progress": 0.1,
"progress": 9.2,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"progress": 25.6,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 35.4,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 95.5,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 96.9,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 98.2,
"done": False,
},
{
@@ -407,3 +410,37 @@ async def test_api_progress_updates_supervisor_update(
"done": True,
},
]
async def test_api_supervisor_stats(api_client: TestClient, coresys: CoreSys):
"""Test supervisor stats."""
coresys.docker.containers.get.return_value.status = "running"
coresys.docker.containers.get.return_value.stats.return_value = load_json_fixture(
"container_stats.json"
)
resp = await api_client.get("/supervisor/stats")
assert resp.status == 200
result = await resp.json()
assert result["data"]["cpu_percent"] == 90.0
assert result["data"]["memory_usage"] == 59700000
assert result["data"]["memory_limit"] == 4000000000
assert result["data"]["memory_percent"] == 1.49
async def test_supervisor_api_stats_failure(
api_client: TestClient, coresys: CoreSys, caplog: pytest.LogCaptureFixture
):
"""Test supervisor stats failure."""
coresys.docker.containers.get.side_effect = DockerException("fail")
resp = await api_client.get("/supervisor/stats")
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "An unknown error occurred with Supervisor. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "supervisor_unknown_error"
assert body["extra_fields"] == {"logs_command": "ha supervisor logs"}
assert "Could not inspect container 'hassio_supervisor': fail" in caplog.text

View File
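The stats assertions above follow directly from the fixture values; a quick check of the memory math (the Supervisor implementation may round differently, so this is only a sketch):

memory_usage = 59_700_000
memory_limit = 4_000_000_000
assert round(memory_usage / memory_limit * 100, 2) == 1.49  # memory_percent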

@@ -144,9 +144,9 @@ async def docker() -> DockerAPI:
docker_images.inspect.return_value = image_inspect
docker_images.list.return_value = [image_inspect]
docker_images.import_image.return_value = [
{"stream": "Loaded image: test:latest\n"}
]
docker_images.import_image = AsyncMock(
return_value=[{"stream": "Loaded image: test:latest\n"}]
)
docker_images.pull.return_value = AsyncIterator([{}])
@@ -154,6 +154,9 @@ async def docker() -> DockerAPI:
docker_obj.info.storage = "overlay2"
docker_obj.info.version = AwesomeVersion("1.0.0")
# Mock manifest fetcher to return None (falls back to count-based progress)
docker_obj._manifest_fetcher.get_manifest = AsyncMock(return_value=None)
yield docker_obj

View File
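The fixture above forces the manifest fetcher to return None so tests exercise the count-based fallback. A minimal sketch of the two aggregation modes, with hypothetical names rather than the actual pull-progress module:

def overall_progress(
    layer_progress: dict[str, float],  # per-layer progress, 0-100
    layer_sizes: dict[str, int] | None,  # layer sizes from the registry manifest, if any
) -> float:
    """Aggregate per-layer progress, size-weighted when layer sizes are known."""
    if not layer_progress:
        return 0.0
    if layer_sizes:
        # Size-weighted: each layer counts proportionally to its byte size;
        # layers without a known size (e.g. ones that skipped downloading)
        # get a minimal weight of 1 byte.
        total = sum(layer_sizes.get(layer, 1) for layer in layer_progress)
        done = sum(p * layer_sizes.get(layer, 1) for layer, p in layer_progress.items())
        return min(100.0, done / total)
    # Count-based fallback: every layer contributes equally.
    return min(100.0, sum(layer_progress.values()) / len(layer_progress))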

@@ -184,3 +184,20 @@ async def test_interface_becomes_unmanaged(
assert wireless.is_connected is False
assert eth0.connection is None
assert connection.is_connected is False
async def test_unknown_device_type(
device_eth0_service: DeviceService, dbus_session_bus: MessageBus
):
"""Test unknown device types are handled gracefully."""
interface = NetworkInterface("/org/freedesktop/NetworkManager/Devices/1")
await interface.connect(dbus_session_bus)
# Emit an unknown device type (e.g., 1000 which doesn't exist in the enum)
device_eth0_service.emit_properties_changed({"DeviceType": 1000})
await device_eth0_service.ping()
# Should return UNKNOWN instead of crashing
assert interface.type == DeviceType.UNKNOWN
# Wireless should be None since it's not a wireless device
assert interface.wireless is None

View File
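The test above relies on unknown NetworkManager device types mapping to DeviceType.UNKNOWN instead of raising. A minimal sketch of that defensive conversion; the enum values here are illustrative, the real DeviceType enum has many more members:

from enum import IntEnum

class DeviceType(IntEnum):
    UNKNOWN = 0
    ETHERNET = 1
    WIRELESS = 2

def parse_device_type(raw: int) -> DeviceType:
    try:
        return DeviceType(raw)
    except ValueError:
        # NetworkManager can report device types newer than this enum.
        return DeviceType.UNKNOWN

assert parse_device_type(1000) is DeviceType.UNKNOWN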

@@ -6,7 +6,7 @@ from unittest.mock import Mock, PropertyMock, patch
from dbus_fast.aio.message_bus import MessageBus
import pytest
from supervisor.dbus.const import ConnectionStateType
from supervisor.dbus.const import ConnectionState
from supervisor.dbus.network import NetworkManager
from supervisor.dbus.network.interface import NetworkInterface
from supervisor.exceptions import (
@@ -93,7 +93,7 @@ async def test_activate_connection(
"/org/freedesktop/NetworkManager/Settings/1",
"/org/freedesktop/NetworkManager/Devices/1",
)
assert connection.state == ConnectionStateType.ACTIVATED
assert connection.state == ConnectionState.ACTIVATED
assert (
connection.settings.object_path == "/org/freedesktop/NetworkManager/Settings/1"
)
@@ -117,7 +117,7 @@ async def test_add_and_activate_connection(
)
assert settings.connection.uuid == "0c23631e-2118-355c-bbb0-8943229cb0d6"
assert settings.ipv4.method == "auto"
assert connection.state == ConnectionStateType.ACTIVATED
assert connection.state == ConnectionState.ACTIVATED
assert (
connection.settings.object_path == "/org/freedesktop/NetworkManager/Settings/1"
)

View File

@@ -41,51 +41,51 @@ async def test_dbus_resolved_info(
assert resolved.dns_over_tls == DNSOverTLSEnabled.NO
assert len(resolved.dns) == 2
assert resolved.dns[0] == [0, 2, inet_aton("127.0.0.1")]
assert resolved.dns[1] == [0, 10, inet_pton(AF_INET6, "::1")]
assert resolved.dns[0] == (0, 2, inet_aton("127.0.0.1"))
assert resolved.dns[1] == (0, 10, inet_pton(AF_INET6, "::1"))
assert len(resolved.dns_ex) == 2
assert resolved.dns_ex[0] == [0, 2, inet_aton("127.0.0.1"), 0, ""]
assert resolved.dns_ex[1] == [0, 10, inet_pton(AF_INET6, "::1"), 0, ""]
assert resolved.dns_ex[0] == (0, 2, inet_aton("127.0.0.1"), 0, "")
assert resolved.dns_ex[1] == (0, 10, inet_pton(AF_INET6, "::1"), 0, "")
assert len(resolved.fallback_dns) == 2
assert resolved.fallback_dns[0] == [0, 2, inet_aton("1.1.1.1")]
assert resolved.fallback_dns[1] == [
assert resolved.fallback_dns[0] == (0, 2, inet_aton("1.1.1.1"))
assert resolved.fallback_dns[1] == (
0,
10,
inet_pton(AF_INET6, "2606:4700:4700::1111"),
]
)
assert len(resolved.fallback_dns_ex) == 2
assert resolved.fallback_dns_ex[0] == [
assert resolved.fallback_dns_ex[0] == (
0,
2,
inet_aton("1.1.1.1"),
0,
"cloudflare-dns.com",
]
assert resolved.fallback_dns_ex[1] == [
)
assert resolved.fallback_dns_ex[1] == (
0,
10,
inet_pton(AF_INET6, "2606:4700:4700::1111"),
0,
"cloudflare-dns.com",
]
)
assert resolved.current_dns_server == [0, 2, inet_aton("127.0.0.1")]
assert resolved.current_dns_server_ex == [
assert resolved.current_dns_server == (0, 2, inet_aton("127.0.0.1"))
assert resolved.current_dns_server_ex == (
0,
2,
inet_aton("127.0.0.1"),
0,
"",
]
)
assert len(resolved.domains) == 1
assert resolved.domains[0] == [0, "local.hass.io", False]
assert resolved.domains[0] == (0, "local.hass.io", False)
assert resolved.transaction_statistics == [0, 100000]
assert resolved.cache_statistics == [10, 50000, 10000]
assert resolved.transaction_statistics == (0, 100000)
assert resolved.cache_statistics == (10, 50000, 10000)
assert resolved.dnssec == DNSSECValidation.NO
assert resolved.dnssec_statistics == [0, 0, 0, 0]
assert resolved.dnssec_statistics == (0, 0, 0, 0)
assert resolved.dnssec_supported is False
assert resolved.dnssec_negative_trust_anchors == [
"168.192.in-addr.arpa",

View File

@@ -185,10 +185,10 @@ async def test_start_transient_unit(
"tmp-test.mount",
"fail",
[
["Description", Variant("s", "Test")],
["What", Variant("s", "//homeassistant/config")],
["Type", Variant("s", "cifs")],
["Options", Variant("s", "username=homeassistant,password=password")],
("Description", Variant("s", "Test")),
("What", Variant("s", "//homeassistant/config")),
("Type", Variant("s", "cifs")),
("Options", Variant("s", "username=homeassistant,password=password")),
],
[],
)

View File

@@ -1,6 +1,6 @@
"""Mock of OS Agent System dbus service."""
from dbus_fast import DBusError
from dbus_fast import DBusError, ErrorType
from .base import DBusServiceMock, dbus_method
@@ -21,6 +21,7 @@ class System(DBusServiceMock):
object_path = "/io/hass/os/System"
interface = "io.hass.os.System"
response_schedule_wipe_device: bool | DBusError = True
response_migrate_docker_storage_driver: None | DBusError = None
@dbus_method()
def ScheduleWipeDevice(self) -> "b":
@@ -28,3 +29,14 @@ class System(DBusServiceMock):
if isinstance(self.response_schedule_wipe_device, DBusError):
raise self.response_schedule_wipe_device # pylint: disable=raising-bad-type
return self.response_schedule_wipe_device
@dbus_method()
def MigrateDockerStorageDriver(self, backend: "s") -> None:
"""Migrate Docker storage driver."""
if isinstance(self.response_migrate_docker_storage_driver, DBusError):
raise self.response_migrate_docker_storage_driver # pylint: disable=raising-bad-type
if backend not in ("overlayfs", "overlay2"):
raise DBusError(
ErrorType.FAILED,
f"unsupported driver: {backend} (only 'overlayfs' and 'overlay2' are supported)",
)

View File
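The mock above mirrors the OS Agent's io.hass.os.System interface. For orientation, a sketch of how a client could call the new method directly with dbus_fast; Supervisor itself goes through its own D-Bus wrappers, so treat the bus name and call style here as assumptions:

from dbus_fast import BusType
from dbus_fast.aio.message_bus import MessageBus

async def migrate_storage_driver(driver: str) -> None:
    bus = await MessageBus(bus_type=BusType.SYSTEM).connect()
    introspection = await bus.introspect("io.hass.os", "/io/hass/os/System")
    proxy = bus.get_proxy_object("io.hass.os", "/io/hass/os/System", introspection)
    system = proxy.get_interface("io.hass.os.System")
    # dbus_fast exposes MigrateDockerStorageDriver(s) as a snake_case call_* coroutine
    await system.call_migrate_docker_storage_driver(driver)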

@@ -45,8 +45,8 @@ class Resolved(DBusServiceMock):
def DNS(self) -> "a(iiay)":
"""Get DNS."""
return [
[0, 2, bytes([127, 0, 0, 1])],
[
(0, 2, bytes([127, 0, 0, 1])),
(
0,
10,
bytes(
@@ -69,15 +69,15 @@ class Resolved(DBusServiceMock):
0x1,
]
),
],
),
]
@dbus_property(access=PropertyAccess.READ)
def DNSEx(self) -> "a(iiayqs)":
"""Get DNSEx."""
return [
[0, 2, bytes([127, 0, 0, 1]), 0, ""],
[
(0, 2, bytes([127, 0, 0, 1]), 0, ""),
(
0,
10,
bytes(
@@ -102,15 +102,15 @@ class Resolved(DBusServiceMock):
),
0,
"",
],
),
]
@dbus_property(access=PropertyAccess.READ)
def FallbackDNS(self) -> "a(iiay)":
"""Get FallbackDNS."""
return [
[0, 2, bytes([1, 1, 1, 1])],
[
(0, 2, bytes([1, 1, 1, 1])),
(
0,
10,
bytes(
@@ -133,15 +133,15 @@ class Resolved(DBusServiceMock):
0x11,
]
),
],
),
]
@dbus_property(access=PropertyAccess.READ)
def FallbackDNSEx(self) -> "a(iiayqs)":
"""Get FallbackDNSEx."""
return [
[0, 2, bytes([1, 1, 1, 1]), 0, "cloudflare-dns.com"],
[
(0, 2, bytes([1, 1, 1, 1]), 0, "cloudflare-dns.com"),
(
0,
10,
bytes(
@@ -166,33 +166,33 @@ class Resolved(DBusServiceMock):
),
0,
"cloudflare-dns.com",
],
),
]
@dbus_property(access=PropertyAccess.READ)
def CurrentDNSServer(self) -> "(iiay)":
"""Get CurrentDNSServer."""
return [0, 2, bytes([127, 0, 0, 1])]
return (0, 2, bytes([127, 0, 0, 1]))
@dbus_property(access=PropertyAccess.READ)
def CurrentDNSServerEx(self) -> "(iiayqs)":
"""Get CurrentDNSServerEx."""
return [0, 2, bytes([127, 0, 0, 1]), 0, ""]
return (0, 2, bytes([127, 0, 0, 1]), 0, "")
@dbus_property(access=PropertyAccess.READ)
def Domains(self) -> "a(isb)":
"""Get Domains."""
return [[0, "local.hass.io", False]]
return [(0, "local.hass.io", False)]
@dbus_property(access=PropertyAccess.READ)
def TransactionStatistics(self) -> "(tt)":
"""Get TransactionStatistics."""
return [0, 100000]
return (0, 100000)
@dbus_property(access=PropertyAccess.READ)
def CacheStatistics(self) -> "(ttt)":
"""Get CacheStatistics."""
return [10, 50000, 10000]
return (10, 50000, 10000)
@dbus_property(access=PropertyAccess.READ)
def DNSSEC(self) -> "s":
@@ -202,7 +202,7 @@ class Resolved(DBusServiceMock):
@dbus_property(access=PropertyAccess.READ)
def DNSSECStatistics(self) -> "(tttt)":
"""Get DNSSECStatistics."""
return [0, 0, 0, 0]
return (0, 0, 0, 0)
@dbus_property(access=PropertyAccess.READ)
def DNSSECSupported(self) -> "b":

View File

@@ -1,8 +1,49 @@
"""Test docker login."""
import pytest
# pylint: disable=protected-access
from supervisor.coresys import CoreSys
from supervisor.docker.interface import DOCKER_HUB, DockerInterface
from supervisor.docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY
from supervisor.docker.interface import DockerInterface
from supervisor.docker.utils import get_registry_from_image
@pytest.mark.parametrize(
("image_ref", "expected_registry"),
[
# No registry - Docker Hub images
("nginx", None),
("nginx:latest", None),
("library/nginx", None),
("library/nginx:latest", None),
("homeassistant/amd64-supervisor", None),
("homeassistant/amd64-supervisor:1.2.3", None),
# Registry with dot
("ghcr.io/homeassistant/amd64-supervisor", "ghcr.io"),
("ghcr.io/homeassistant/amd64-supervisor:latest", "ghcr.io"),
("myregistry.com/nginx", "myregistry.com"),
("registry.example.com/org/image:v1", "registry.example.com"),
("127.0.0.1/myimage", "127.0.0.1"),
# Registry with port
("myregistry:5000/myimage", "myregistry:5000"),
("localhost:5000/myimage", "localhost:5000"),
("registry.io:5000/org/app:v1", "registry.io:5000"),
# localhost special case
("localhost/myimage", "localhost"),
("localhost/myimage:tag", "localhost"),
# IPv6
("[::1]:5000/myimage", "[::1]:5000"),
("[2001:db8::1]:5000/myimage:tag", "[2001:db8::1]:5000"),
],
)
def test_get_registry_from_image(image_ref: str, expected_registry: str | None):
"""Test get_registry_from_image extracts registry from image reference.
Based on Docker's reference implementation:
vendor/github.com/distribution/reference/normalize.go
"""
assert get_registry_from_image(image_ref) == expected_registry
def test_no_credentials(coresys: CoreSys, test_docker_interface: DockerInterface):
@@ -46,3 +87,36 @@ def test_matching_credentials(coresys: CoreSys, test_docker_interface: DockerInt
)
assert credentials["username"] == "Spongebob Squarepants"
assert "registry" not in credentials
def test_legacy_docker_hub_credentials(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test legacy hub.docker.com credentials are used for Docker Hub images."""
coresys.docker.config._data["registries"] = {
DOCKER_HUB_LEGACY: {"username": "LegacyUser", "password": "Password1!"},
}
credentials = test_docker_interface._get_credentials(
"homeassistant/amd64-supervisor"
)
assert credentials["username"] == "LegacyUser"
# No registry should be included for Docker Hub
assert "registry" not in credentials
def test_docker_hub_preferred_over_legacy(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test docker.io is preferred over legacy hub.docker.com when both exist."""
coresys.docker.config._data["registries"] = {
DOCKER_HUB: {"username": "NewUser", "password": "Password1!"},
DOCKER_HUB_LEGACY: {"username": "LegacyUser", "password": "Password2!"},
}
credentials = test_docker_interface._get_credentials(
"homeassistant/amd64-supervisor"
)
# docker.io should be preferred
assert credentials["username"] == "NewUser"
assert "registry" not in credentials

View File
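The parametrized cases above describe the registry-detection heuristic. A sketch of that rule, modeled on Docker's normalize.go but not the Supervisor implementation: the first path component is a registry only if it contains a dot or a colon (host:port) or is exactly "localhost"; everything else is a Docker Hub image.

def registry_from_image(image_ref: str) -> str | None:
    first, sep, _rest = image_ref.partition("/")
    if not sep:
        return None  # single-component name such as "nginx:latest"
    if first == "localhost" or "." in first or ":" in first:
        return first  # covers "ghcr.io", "myregistry:5000", "[::1]:5000", "127.0.0.1"
    return None  # e.g. "homeassistant/amd64-supervisor" stays on Docker Hub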

@@ -16,7 +16,7 @@ from supervisor.addons.manager import Addon
from supervisor.const import BusEvent, CoreState, CpuArch
from supervisor.coresys import CoreSys
from supervisor.docker.const import ContainerState
from supervisor.docker.interface import DockerInterface
from supervisor.docker.interface import DOCKER_HUB, DockerInterface
from supervisor.docker.manager import PullLogEntry, PullProgressDetail
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import (
@@ -26,7 +26,10 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from supervisor.homeassistant.const import WSEvent, WSType
from supervisor.jobs import ChildJobSyncFilter, JobSchedulerOptions, SupervisorJob
from supervisor.jobs.decorator import Job
from supervisor.supervisor import Supervisor
from tests.common import AsyncIterator, load_json_fixture
@@ -51,7 +54,7 @@ async def test_docker_image_platform(
coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test", arch=cpu_arch)
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform=platform, stream=True
"test", tag="1.2.3", platform=platform, auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
@@ -68,12 +71,50 @@ async def test_docker_image_default_platform(
):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True
"test", tag="1.2.3", platform="linux/386", auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
@pytest.mark.parametrize(
"image,registry_key",
[
("homeassistant/amd64-supervisor", DOCKER_HUB),
("ghcr.io/home-assistant/amd64-supervisor", "ghcr.io"),
],
)
async def test_private_registry_credentials_passed_to_pull(
coresys: CoreSys,
test_docker_interface: DockerInterface,
image: str,
registry_key: str,
):
"""Test credentials for private registries are passed to aiodocker pull."""
coresys.docker.images.inspect.return_value = {"Id": f"{image}:1.2.3"}
# Configure registry credentials
coresys.docker.config._data["registries"] = { # pylint: disable=protected-access
registry_key: {"username": "testuser", "password": "testpass"}
}
with patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="amd64")
):
await test_docker_interface.install(
AwesomeVersion("1.2.3"), image, arch=CpuArch.AMD64
)
# Verify credentials were passed to aiodocker
expected_auth = {"username": "testuser", "password": "testpass"}
if registry_key != DOCKER_HUB:
expected_auth["registry"] = registry_key
coresys.docker.images.pull.assert_called_once_with(
image, tag="1.2.3", platform="linux/amd64", auth=expected_auth, stream=True
)
@pytest.mark.parametrize(
"attrs,expected",
[
@@ -276,7 +317,7 @@ async def test_install_fires_progress_events(
},
{"status": "Already exists", "progressDetail": {}, "id": "6e771e15690e"},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
{"status": "Waiting", "progressDetail": {}, "id": "2488d0e401e1"},
{"status": "Waiting", "progressDetail": {}, "id": "1578b14a573c"},
{
"status": "Downloading",
"progressDetail": {"current": 1378, "total": 1486},
@@ -319,7 +360,7 @@ async def test_install_fires_progress_events(
):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True
"test", tag="1.2.3", platform="linux/386", auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
@@ -346,7 +387,7 @@ async def test_install_fires_progress_events(
job_id=ANY,
status="Waiting",
progress_detail=PullProgressDetail(),
id="2488d0e401e1",
id="1578b14a573c",
),
PullLogEntry(
job_id=ANY,
@@ -500,6 +541,7 @@ async def test_install_raises_on_pull_error(
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
{
"status": "Downloading",
"progressDetail": {"current": 1378, "total": 1486},
@@ -554,16 +596,39 @@ async def test_install_progress_handles_download_restart(
capture_exception.assert_not_called()
@pytest.mark.parametrize(
"extract_log",
[
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
{
"status": "Extracting",
"progressDetail": {"current": 1, "units": "s"},
"progress": "1 s",
"id": "02a6e69d8d00",
},
],
ids=["normal_extract_log", "containerd_snapshot_extract_log"],
)
async def test_install_progress_handles_layers_skipping_download(
coresys: CoreSys,
test_docker_interface: DockerInterface,
capture_exception: Mock,
extract_log: dict[str, Any],
):
"""Test install handles small layers that skip downloading phase and go directly to download complete.
Reproduces the real-world scenario from Supervisor issue #6286:
- Small layer (02a6e69d8d00) completes Download complete at 10:14:08 without ever Downloading
- Normal layer (3f4a84073184) starts Downloading at 10:14:09 with progress updates
Under containerd snapshotter this presumably can still occur and Supervisor will have even less info
since extract logs don't have a total. Supervisor should generally just ignore these and set progress
from the larger images that take all the time.
"""
coresys.core.set_state(CoreState.RUNNING)
@@ -607,12 +672,7 @@ async def test_install_progress_handles_layers_skipping_download(
},
{"status": "Pull complete", "progressDetail": {}, "id": "3f4a84073184"},
# Small layer finally extracts (10:14:58 in logs)
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
extract_log,
{"status": "Pull complete", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Digest: sha256:test"},
{"status": "Status: Downloaded newer image for test/image:latest"},
@@ -649,11 +709,18 @@ async def test_install_progress_handles_layers_skipping_download(
await install_task
await event.wait()
# First update from layer download should have rather low progress ((260937/25445459) / 2 ~ 0.5%)
assert install_job_snapshots[0]["progress"] < 1
# With the new progress calculation approach:
# - Progress is weighted by layer size
# - Small layers that skip downloading get minimal size (1 byte)
# - Progress should increase monotonically
assert len(install_job_snapshots) > 0
# Total 8 events should lead to a progress update on the install job
assert len(install_job_snapshots) == 8
# Verify progress is monotonically increasing (or stable)
for i in range(1, len(install_job_snapshots)):
assert (
install_job_snapshots[i]["progress"]
>= install_job_snapshots[i - 1]["progress"]
)
# Job should complete successfully
assert job.done is True
@@ -720,3 +787,88 @@ async def test_missing_total_handled_gracefully(
await event.wait()
capture_exception.assert_not_called()
async def test_install_progress_containerd_snapshot(
coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test install handles docker progress events using containerd snapshotter."""
coresys.core.set_state(CoreState.RUNNING)
class TestDockerInterface(DockerInterface):
"""Test interface for events."""
@property
def name(self) -> str:
"""Name of test interface."""
return "test_interface"
@Job(
name="mock_docker_interface_install",
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def mock_install(self) -> None:
"""Mock install."""
await super().install(
AwesomeVersion("1.2.3"), image="test", arch=CpuArch.I386
)
# Fixture emulates the log as received when using the containerd snapshotter
# Should not error, but progress gets choppier once extraction starts
logs = load_json_fixture("docker_pull_image_log_containerd_snapshot.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
test_docker_interface = TestDockerInterface(coresys)
with patch.object(Supervisor, "arch", PropertyMock(return_value="i386")):
await test_docker_interface.mock_install()
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
await asyncio.sleep(1)
def job_event(progress: float, done: bool = False):
return {
"type": WSType.SUPERVISOR_EVENT,
"data": {
"event": WSEvent.JOB,
"data": {
"name": "mock_docker_interface_install",
"reference": "test_interface",
"uuid": ANY,
"progress": progress,
"stage": None,
"done": done,
"parent_id": None,
"errors": [],
"created": ANY,
"extra": None,
},
},
}
assert [c.args[0] for c in ha_ws_client.async_send_command.call_args_list] == [
# Count-based progress: 2 layers, each = 50%. Download = 0-35%, Extract = 35-50%
job_event(0),
job_event(1.7),
job_event(3.4),
job_event(8.4),
job_event(10.2),
job_event(15.2),
job_event(18.7),
job_event(28.8),
job_event(35.7),
job_event(42.4),
job_event(49.3),
job_event(55.8),
job_event(62.7),
# Downloading phase is considered 70% of layer's progress.
# After download complete, extraction takes remaining 30% per layer.
job_event(70.0),
job_event(85.0),
job_event(100),
job_event(100, True),
]
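The expected values above follow the count-based scheme spelled out in the comments: with two layers, each layer owns 50% of the job, and within a layer the download phase is worth 70% and extraction the remaining 30%. A small hypothetical helper reproducing that arithmetic:

def layer_contribution(total_layers: int, download_fraction: float, extract_fraction: float) -> float:
    """Contribution of a single layer to overall progress, in percent (illustrative only)."""
    per_layer = 100.0 / total_layers  # e.g. 2 layers -> 50% each
    return per_layer * (0.7 * download_fraction + 0.3 * extract_fraction)

# With 2 layers: a fully downloaded but not yet extracted layer contributes
# layer_contribution(2, 1.0, 0.0) == 35.0, and a fully pulled layer contributes 50.0,
# matching the 0-35% download / 35-50% extract split noted above.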


@@ -2,7 +2,7 @@
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, patch
from unittest.mock import AsyncMock, MagicMock, patch
from docker.errors import APIError, DockerException, NotFound
import pytest
@@ -412,9 +412,9 @@ async def test_repair_failures(coresys: CoreSys, caplog: pytest.LogCaptureFixtur
async def test_import_image(coresys: CoreSys, tmp_path: Path, log_starter: str):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": f"{log_starter}: imported"}
]
coresys.docker.images.import_image = AsyncMock(
return_value=[{"stream": f"{log_starter}: imported"}]
)
coresys.docker.images.inspect.return_value = {"Id": "imported"}
image = await coresys.docker.import_image(test_tar)
@@ -426,9 +426,9 @@ async def test_import_image(coresys: CoreSys, tmp_path: Path, log_starter: str):
async def test_import_image_error(coresys: CoreSys, tmp_path: Path):
"""Test failure importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"errorDetail": {"message": "fail"}}
]
coresys.docker.images.import_image = AsyncMock(
return_value=[{"errorDetail": {"message": "fail"}}]
)
with pytest.raises(DockerError, match="Can't import image from tar: fail"):
await coresys.docker.import_image(test_tar)
@@ -441,10 +441,12 @@ async def test_import_multiple_images_in_tar(
):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": "Loaded image: imported-1"},
{"stream": "Loaded image: imported-2"},
]
coresys.docker.images.import_image = AsyncMock(
return_value=[
{"stream": "Loaded image: imported-1"},
{"stream": "Loaded image: imported-2"},
]
)
assert await coresys.docker.import_image(test_tar) is None


@@ -0,0 +1,166 @@
"""Tests for registry manifest fetcher."""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from supervisor.docker.manifest import (
DOCKER_HUB,
ImageManifest,
RegistryManifestFetcher,
parse_image_reference,
)
class TestParseImageReference:
"""Tests for parse_image_reference function."""
def test_ghcr_io_image(self):
"""Test parsing ghcr.io image."""
registry, repo, tag = parse_image_reference(
"ghcr.io/home-assistant/home-assistant", "2025.1.0"
)
assert registry == "ghcr.io"
assert repo == "home-assistant/home-assistant"
assert tag == "2025.1.0"
def test_docker_hub_with_org(self):
"""Test parsing Docker Hub image with organization."""
registry, repo, tag = parse_image_reference(
"homeassistant/home-assistant", "latest"
)
assert registry == DOCKER_HUB
assert repo == "homeassistant/home-assistant"
assert tag == "latest"
def test_docker_hub_official_image(self):
"""Test parsing Docker Hub official image (no org)."""
registry, repo, tag = parse_image_reference("alpine", "3.18")
assert registry == DOCKER_HUB
assert repo == "library/alpine"
assert tag == "3.18"
def test_gcr_io_image(self):
"""Test parsing gcr.io image."""
registry, repo, tag = parse_image_reference("gcr.io/project/image", "v1")
assert registry == "gcr.io"
assert repo == "project/image"
assert tag == "v1"
class TestImageManifest:
"""Tests for ImageManifest dataclass."""
def test_layer_count(self):
"""Test layer_count property."""
manifest = ImageManifest(
digest="sha256:abc",
total_size=1000,
layers={"layer1": 500, "layer2": 500},
)
assert manifest.layer_count == 2
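The layer_count test implies a small dataclass shape for ImageManifest; a hedged sketch consistent with it:

from dataclasses import dataclass, field

@dataclass
class ImageManifest:
    """Resolved manifest data for size-weighted pull progress -- shape inferred from the test above."""
    digest: str
    total_size: int
    layers: dict[str, int] = field(default_factory=dict)  # short layer digest -> size in bytes

    @property
    def layer_count(self) -> int:
        """Number of layers in the manifest."""
        return len(self.layers)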
class TestRegistryManifestFetcher:
"""Tests for RegistryManifestFetcher class."""
# pylint: disable=protected-access
@pytest.fixture
def mock_coresys(self):
"""Create mock coresys."""
coresys = MagicMock()
coresys.docker.config.registries = {}
return coresys
@pytest.fixture
def fetcher(self, mock_coresys):
"""Create fetcher instance."""
return RegistryManifestFetcher(mock_coresys)
async def test_get_manifest_success(self, fetcher):
"""Test successful manifest fetch by mocking internal methods."""
manifest_data = {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {"digest": "sha256:abc123"},
"layers": [
{"digest": "sha256:layer1abc123def456789012", "size": 1000},
{"digest": "sha256:layer2def456abc789012345", "size": 2000},
],
}
# Mock the internal methods instead of the session
with (
patch.object(
fetcher, "_get_auth_token", new=AsyncMock(return_value="test-token")
),
patch.object(
fetcher, "_fetch_manifest", new=AsyncMock(return_value=manifest_data)
),
patch.object(fetcher, "_get_session", new=AsyncMock()),
):
result = await fetcher.get_manifest(
"test.io/org/image", "v1.0", platform="linux/amd64"
)
assert result is not None
assert result.total_size == 3000
assert result.layer_count == 2
# First 12 chars after sha256:
assert "layer1abc123" in result.layers
assert result.layers["layer1abc123"] == 1000
async def test_get_manifest_returns_none_on_failure(self, fetcher):
"""Test that get_manifest returns None on failure."""
with (
patch.object(
fetcher, "_get_auth_token", new=AsyncMock(return_value="test-token")
),
patch.object(fetcher, "_fetch_manifest", new=AsyncMock(return_value=None)),
patch.object(fetcher, "_get_session", new=AsyncMock()),
):
result = await fetcher.get_manifest(
"test.io/org/image", "v1.0", platform="linux/amd64"
)
assert result is None
async def test_close_session(self, fetcher):
"""Test session cleanup."""
mock_session = AsyncMock()
mock_session.closed = False
mock_session.close = AsyncMock()
fetcher._session = mock_session
await fetcher.close()
mock_session.close.assert_called_once()
assert fetcher._session is None
def test_get_credentials_docker_hub(self, mock_coresys, fetcher):
"""Test getting Docker Hub credentials."""
mock_coresys.docker.config.registries = {
"docker.io": {"username": "user", "password": "pass"}
}
creds = fetcher._get_credentials(DOCKER_HUB)
assert creds == ("user", "pass")
def test_get_credentials_custom_registry(self, mock_coresys, fetcher):
"""Test getting credentials for custom registry."""
mock_coresys.docker.config.registries = {
"ghcr.io": {"username": "user", "password": "token"}
}
creds = fetcher._get_credentials("ghcr.io")
assert creds == ("user", "token")
def test_get_credentials_not_found(self, mock_coresys, fetcher):
"""Test no credentials found."""
mock_coresys.docker.config.registries = {}
creds = fetcher._get_credentials("unknown.io")
assert creds is None
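The mocks above keep the network side (_get_session, _get_auth_token, _fetch_manifest, _get_credentials) opaque, but the assertions fix how a fetched v2 manifest payload becomes an ImageManifest: layer sizes keyed by the first 12 digest characters and summed into total_size, with None returned on failure so callers can fall back to count-based progress. A hedged sketch of just that assembly step (assuming ImageManifest from supervisor.docker.manifest; the helper name is hypothetical):

from supervisor.docker.manifest import ImageManifest

def manifest_from_payload(manifest: dict) -> ImageManifest:
    """Build an ImageManifest from a registry v2 manifest payload -- illustrative only."""
    layers = {
        # Keyed by the first 12 characters of the layer digest, matching Docker's short IDs.
        layer["digest"].removeprefix("sha256:")[:12]: layer["size"]
        for layer in manifest["layers"]
    }
    return ImageManifest(
        digest=manifest["config"]["digest"],  # assumption: the test does not fix which digest is stored
        total_size=sum(layers.values()),
        layers=layers,
    )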

File diff suppressed because it is too large


@@ -0,0 +1,196 @@
[
{
"status": "Pulling from home-assistant/home-assistant",
"id": "2025.12.0.dev202511080235"
},
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "eafecc6b43cc" },
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "333270549f95" },
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21863319 },
"progress": "[==\u003e ] 1.049MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21179924 },
"progress": "[==\u003e ] 1.049MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21863319 },
"progress": "[=========\u003e ] 4.194MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 2097152, "total": 21179924 },
"progress": "[====\u003e ] 2.097MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 7340032, "total": 21863319 },
"progress": "[================\u003e ] 7.34MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21179924 },
"progress": "[=========\u003e ] 4.194MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 13631488, "total": 21863319 },
"progress": "[===============================\u003e ] 13.63MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 8388608, "total": 21179924 },
"progress": "[===================\u003e ] 8.389MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 17825792, "total": 21863319 },
"progress": "[========================================\u003e ] 17.83MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 12582912, "total": 21179924 },
"progress": "[=============================\u003e ] 12.58MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21863319, "total": 21863319 },
"progress": "[==================================================\u003e] 21.86MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 16777216, "total": 21179924 },
"progress": "[=======================================\u003e ] 16.78MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21179924, "total": 21179924 },
"progress": "[==================================================\u003e] 21.18MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Digest: sha256:bfc9efc13552c0c228f3d9d35987331cce68b43c9bc79c80a57eeadadd44cccf"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/home-assistant:2025.12.0.dev202511080235"
}
]
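With the containerd snapshotter, the extraction events in this fixture only report elapsed seconds ({"current": N, "units": "s"}) and never a byte total, so per-layer extraction progress can only be treated as indeterminate. A tiny hypothetical helper distinguishing the two shapes:

def extract_fraction(progress_detail: dict) -> float | None:
    """Extraction progress as a 0..1 fraction, or None when only a time counter is reported."""
    total = progress_detail.get("total")
    if not total or progress_detail.get("units") == "s":
        # containerd snapshotter style: {"current": 3, "units": "s"} has nothing to compare against
        return None
    return min(progress_detail.get("current", 0) / total, 1.0)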


@@ -477,6 +477,7 @@ async def test_core_loads_wrong_image_for_machine(
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
auth=None,
)
container.remove.assert_called_once_with(force=True, v=True)
@@ -535,6 +536,7 @@ async def test_core_loads_wrong_image_for_architecture(
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
auth=None,
)
container.remove.assert_called_once_with(force=True, v=True)


@@ -200,6 +200,8 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
"type": "HassioError",
"message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
"stage": "test",
"error_key": None,
"extra_fields": None,
}
],
"created": ANY,
@@ -228,6 +230,8 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
"type": "HassioError",
"message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
"stage": "test",
"error_key": None,
"extra_fields": None,
}
],
"created": ANY,


@@ -119,10 +119,10 @@ async def test_load(
"mnt-data-supervisor-mounts-backup_test.mount",
"fail",
[
["Options", Variant("s", "noserverino,guest")],
["Type", Variant("s", "cifs")],
["Description", Variant("s", "Supervisor cifs mount: backup_test")],
["What", Variant("s", "//backup.local/backups")],
("Options", Variant("s", "noserverino,guest")),
("Type", Variant("s", "cifs")),
("Description", Variant("s", "Supervisor cifs mount: backup_test")),
("What", Variant("s", "//backup.local/backups")),
],
[],
),
@@ -130,10 +130,10 @@ async def test_load(
"mnt-data-supervisor-mounts-media_test.mount",
"fail",
[
["Options", Variant("s", "soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: media_test")],
["What", Variant("s", "media.local:/media")],
("Options", Variant("s", "soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: media_test")),
("What", Variant("s", "media.local:/media")),
],
[],
),
@@ -141,12 +141,12 @@ async def test_load(
"mnt-data-supervisor-media-media_test.mount",
"fail",
[
["Options", Variant("s", "bind")],
[
("Options", Variant("s", "bind")),
(
"Description",
Variant("s", "Supervisor bind mount: bind_media_test"),
],
["What", Variant("s", "/mnt/data/supervisor/mounts/media_test")],
),
("What", Variant("s", "/mnt/data/supervisor/mounts/media_test")),
],
[],
),
@@ -198,10 +198,10 @@ async def test_load_share_mount(
"mnt-data-supervisor-mounts-share_test.mount",
"fail",
[
["Options", Variant("s", "soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: share_test")],
["What", Variant("s", "share.local:/share")],
("Options", Variant("s", "soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: share_test")),
("What", Variant("s", "share.local:/share")),
],
[],
),
@@ -209,9 +209,9 @@ async def test_load_share_mount(
"mnt-data-supervisor-share-share_test.mount",
"fail",
[
["Options", Variant("s", "bind")],
["Description", Variant("s", "Supervisor bind mount: bind_share_test")],
["What", Variant("s", "/mnt/data/supervisor/mounts/share_test")],
("Options", Variant("s", "bind")),
("Description", Variant("s", "Supervisor bind mount: bind_share_test")),
("What", Variant("s", "/mnt/data/supervisor/mounts/share_test")),
],
[],
),
@@ -318,12 +318,12 @@ async def test_mount_failed_during_load(
"mnt-data-supervisor-media-media_test.mount",
"fail",
[
["Options", Variant("s", "ro,bind")],
[
("Options", Variant("s", "ro,bind")),
(
"Description",
Variant("s", "Supervisor bind mount: emergency_media_test"),
],
["What", Variant("s", "/mnt/data/supervisor/emergency/media_test")],
),
("What", Variant("s", "/mnt/data/supervisor/emergency/media_test")),
],
[],
)
@@ -634,10 +634,10 @@ async def test_reload_mounts_attempts_initial_mount(
"mnt-data-supervisor-mounts-media_test.mount",
"fail",
[
["Options", Variant("s", "soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: media_test")],
["What", Variant("s", "media.local:/media")],
("Options", Variant("s", "soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: media_test")),
("What", Variant("s", "media.local:/media")),
],
[],
),
@@ -645,9 +645,9 @@ async def test_reload_mounts_attempts_initial_mount(
"mnt-data-supervisor-media-media_test.mount",
"fail",
[
["Options", Variant("s", "bind")],
["Description", Variant("s", "Supervisor bind mount: bind_media_test")],
["What", Variant("s", "/mnt/data/supervisor/mounts/media_test")],
("Options", Variant("s", "bind")),
("Description", Variant("s", "Supervisor bind mount: bind_media_test")),
("What", Variant("s", "/mnt/data/supervisor/mounts/media_test")),
],
[],
),


@@ -105,7 +105,7 @@ async def test_cifs_mount(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
[
(
"Options",
Variant(
"s",
@@ -117,10 +117,10 @@ async def test_cifs_mount(
]
),
),
],
["Type", Variant("s", "cifs")],
["Description", Variant("s", "Supervisor cifs mount: test")],
["What", Variant("s", "//test.local/camera")],
),
("Type", Variant("s", "cifs")),
("Description", Variant("s", "Supervisor cifs mount: test")),
("What", Variant("s", "//test.local/camera")),
],
[],
)
@@ -177,10 +177,10 @@ async def test_cifs_mount_read_only(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
["Options", Variant("s", "ro,noserverino,guest")],
["Type", Variant("s", "cifs")],
["Description", Variant("s", "Supervisor cifs mount: test")],
["What", Variant("s", "//test.local/camera")],
("Options", Variant("s", "ro,noserverino,guest")),
("Type", Variant("s", "cifs")),
("Description", Variant("s", "Supervisor cifs mount: test")),
("What", Variant("s", "//test.local/camera")),
],
[],
)
@@ -237,10 +237,10 @@ async def test_nfs_mount(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
["Options", Variant("s", "port=1234,soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: test")],
["What", Variant("s", "test.local:/media/camera")],
("Options", Variant("s", "port=1234,soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: test")),
("What", Variant("s", "test.local:/media/camera")),
],
[],
)
@@ -283,10 +283,10 @@ async def test_nfs_mount_read_only(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
["Options", Variant("s", "ro,port=1234,soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: test")],
["What", Variant("s", "test.local:/media/camera")],
("Options", Variant("s", "ro,port=1234,soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: test")),
("What", Variant("s", "test.local:/media/camera")),
],
[],
)
@@ -331,10 +331,10 @@ async def test_load(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
["Options", Variant("s", "noserverino,guest")],
["Type", Variant("s", "cifs")],
["Description", Variant("s", "Supervisor cifs mount: test")],
["What", Variant("s", "//test.local/share")],
("Options", Variant("s", "noserverino,guest")),
("Type", Variant("s", "cifs")),
("Description", Variant("s", "Supervisor cifs mount: test")),
("What", Variant("s", "//test.local/share")),
],
[],
)
@@ -736,10 +736,10 @@ async def test_mount_fails_if_down(
"mnt-data-supervisor-mounts-test.mount",
"fail",
[
["Options", Variant("s", "port=1234,soft,timeo=200")],
["Type", Variant("s", "nfs")],
["Description", Variant("s", "Supervisor nfs mount: test")],
["What", Variant("s", "test.local:/media/camera")],
("Options", Variant("s", "port=1234,soft,timeo=200")),
("Type", Variant("s", "nfs")),
("Description", Variant("s", "Supervisor nfs mount: test")),
("What", Variant("s", "test.local:/media/camera")),
],
[],
)


@@ -369,7 +369,7 @@ async def test_load_with_incorrect_image(
with patch.object(DockerAPI, "pull_image", return_value=img_data) as pull_image:
await plugin.load()
pull_image.assert_called_once_with(
ANY, correct_image, "2024.4.0", platform="linux/amd64"
ANY, correct_image, "2024.4.0", platform="linux/amd64", auth=None
)
container.remove.assert_called_once_with(force=True, v=True)


@@ -5,10 +5,7 @@ from unittest.mock import MagicMock, patch
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.resolution.evaluations.operating_system import (
SUPPORTED_OS,
EvaluateOperatingSystem,
)
from supervisor.resolution.evaluations.operating_system import EvaluateOperatingSystem
async def test_evaluation(coresys: CoreSys):
@@ -25,13 +22,7 @@ async def test_evaluation(coresys: CoreSys):
assert operating_system.reason in coresys.resolution.unsupported
coresys.os._available = True
await operating_system()
assert operating_system.reason not in coresys.resolution.unsupported
coresys.os._available = False
coresys.host._info = MagicMock(
operating_system=SUPPORTED_OS[0], timezone=None, timezone_tzinfo=None
)
assert coresys.os.available
await operating_system()
assert operating_system.reason not in coresys.resolution.unsupported


@@ -0,0 +1,43 @@
"""Test evaluation supported system architectures."""
from unittest.mock import PropertyMock, patch
import pytest
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.resolution.evaluations.system_architecture import (
EvaluateSystemArchitecture,
)
@pytest.mark.parametrize("arch", ["i386", "armhf", "armv7"])
async def test_evaluation_unsupported_architectures(
coresys: CoreSys,
arch: str,
):
"""Test evaluation of unsupported system architectures."""
system_architecture = EvaluateSystemArchitecture(coresys)
await coresys.core.set_state(CoreState.INITIALIZE)
with patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value=arch)
):
await system_architecture()
assert system_architecture.reason in coresys.resolution.unsupported
@pytest.mark.parametrize("arch", ["amd64", "aarch64"])
async def test_evaluation_supported_architectures(
coresys: CoreSys,
arch: str,
):
"""Test evaluation of supported system architectures."""
system_architecture = EvaluateSystemArchitecture(coresys)
await coresys.core.set_state(CoreState.INITIALIZE)
with patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value=arch)
):
await system_architecture()
assert system_architecture.reason not in coresys.resolution.unsupported
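The parametrization above implies an evaluation that flags legacy 32-bit architectures as unsupported while leaving amd64 and aarch64 alone, checked while Supervisor is initializing. A minimal sketch of such a check (the real EvaluateSystemArchitecture is likely structured differently):

from supervisor.const import CoreState

SUPPORTED_ARCHITECTURES = {"amd64", "aarch64"}  # assumption derived from the parametrized tests

class EvaluateSystemArchitectureSketch:
    """Illustrative evaluation: mark the system unsupported on legacy 32-bit architectures."""

    def __init__(self, coresys):
        self.coresys = coresys

    @property
    def states(self) -> list[CoreState]:
        # The tests run the evaluation while Supervisor is in the INITIALIZE state.
        return [CoreState.INITIALIZE]

    async def evaluate(self) -> bool:
        """Return True when the running architecture should be flagged as unsupported."""
        return self.coresys.supervisor.arch not in SUPPORTED_ARCHITECTURES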


@@ -9,7 +9,7 @@ from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.arch import CpuArchManager
from supervisor.backups.manager import BackupManager
from supervisor.coresys import CoreSys
from supervisor.exceptions import AddonNotSupportedError, StoreJobError
@@ -163,7 +163,9 @@ async def test_update_unavailable_addon(
with (
patch.object(BackupManager, "do_backup_partial") as backup,
patch.object(AddonStore, "data", new=PropertyMock(return_value=addon_config)),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(CoreSys, "machine", new=PropertyMock(return_value="qemux86-64")),
patch.object(
HomeAssistant,
@@ -219,7 +221,9 @@ async def test_install_unavailable_addon(
with (
patch.object(AddonStore, "data", new=PropertyMock(return_value=addon_config)),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(
CpuArchManager, "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(CoreSys, "machine", new=PropertyMock(return_value="qemux86-64")),
patch.object(
HomeAssistant,

wheels/.gitkeep Normal file