Compare commits


68 Commits

Author SHA1 Message Date
Stefan Agner
9342456b34 Scale progress by fraction of layers that have reported
When pulling Docker images with many layers, tiny layers that complete
immediately would show inflated progress (e.g., 70%) even when most
layers haven't started reporting yet. This made the UI jump to 70%
quickly, then appear stuck during actual download.

The fix scales progress by the fraction of layers that have reported:
- If 2/25 layers report at 70%, progress shows ~5.6% instead of 70%
- As more layers report, progress increases proportionally
- When all layers have reported, no scaling is applied

This ensures progress accurately reflects overall download status rather
than being dominated by a few tiny layers that complete first.
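The scaling described above can be sketched as follows. This is a minimal, hypothetical illustration of the idea (function name and inputs are assumptions, not the Supervisor's actual code):

```python
def scaled_progress(layer_progress: list[float], total_layers: int) -> float:
    """Scale aggregate progress by the fraction of layers that have reported.

    layer_progress holds the percentage of each layer that has reported so far.
    """
    if total_layers == 0 or not layer_progress:
        return 0.0
    reported = len(layer_progress)
    raw = sum(layer_progress) / reported  # average over reporting layers only
    if reported >= total_layers:
        return raw  # all layers reported: no scaling needed
    return raw * reported / total_layers  # scale down by fraction reported

# The example from the message: 2 of 25 layers at 70% yields 70 * 2/25 = 5.6
```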

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 19:54:39 +01:00
Stefan Agner
2080a2719e Add missing test fixture json 2025-11-27 17:19:58 +01:00
Stefan Agner
6820dbb4d2 Improve progress reporting when layers are locally present
With containerd snapshotter, some layers skip the "Downloading" phase
entirely and go directly from "Pulling fs layer" to "Download complete".
These layers only receive a placeholder extra={current: 1, total: 1}
instead of actual size information.

The previous fix in #6357 still required ALL layers to have extra before
calculating progress, which caused the UI to jump from 0% to 70%+ when
the first real layer reported after cached layers completed.

This change improves progress calculation by:
- Separating "real" layers (with actual size > 1 byte) from "placeholder"
  layers (cached/small layers with size = 1)
- Calculating weighted progress only from real layers
- Not reporting progress until at least one real layer has size info
- Correctly tracking stage when placeholder layers are still extracting

Tested with real-world pull events from a Home Assistant Core update
that had 12 layers where 5 skipped downloading entirely.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-27 16:28:14 +01:00
Mike Degatano
6302c7d394 Fix progress when using containerd snapshotter (#6357)
* Fix progress when using containerd snapshotter

* Add test for tiny image download under containerd-snapshotter

* Fix API tests after progress allocation change

* Fix test for auth changes

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 16:26:22 +01:00
Jan Čermák
f55fd891e9 Add API endpoint for migrating Docker storage driver (#6361)
Implement Supervisor API for home-assistant/os-agent#238, adding the possibility
to schedule a migration to either the containerd overlayfs driver or the graph
overlay2 driver once the device is rebooted the next time. While it's technically
in the D-Bus OS interface, in Supervisor's abstraction it makes more sense to put
it under `/docker` endpoints.
2025-11-27 16:02:39 +01:00
Stefan Agner
8a251e0324 Pass registry credentials to add-on build for private base images (#6356)
* Pass registry credentials to add-on build for private base images

When building add-ons that use a base image from a private registry,
the build would fail because credentials configured via the Supervisor
API were not passed to the Docker-in-Docker build container.

This fix:
- Adds get_docker_config_json() to generate a Docker config.json with
  registry credentials for the base image
- Creates a temporary config file and mounts it into the build container
  at /root/.docker/config.json so BuildKit can authenticate when pulling
  the base image
- Cleans up the temporary file after build completes

Fixes #6354
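Generating such a config.json could look roughly like this (a sketch, assuming a simple registry→credentials mapping; the actual get_docker_config_json() signature is not shown here). The "auth" field carries base64("user:password"), which is the format Docker clients expect:

```python
import base64
import json

def get_docker_config_json(registries: dict[str, dict[str, str]]) -> str:
    """Build a Docker config.json string with registry credentials.

    registries maps a registry host to {"username": ..., "password": ...}.
    """
    auths = {}
    for host, creds in registries.items():
        token = base64.b64encode(
            f"{creds['username']}:{creds['password']}".encode()
        ).decode()
        auths[host] = {"auth": token}
    return json.dumps({"auths": auths})
```

The resulting string can then be written to a temporary file and mounted at /root/.docker/config.json inside the build container, as the commit describes.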

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix pylint errors

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Refactor registry credential extraction into shared helper

Extract duplicate logic for determining which registry matches an image
into a shared `get_registry_for_image()` method in `DockerConfig`. This
method is now used by both `DockerInterface._get_credentials()` and
`AddonBuild.get_docker_config_json()`.

Move `DOCKER_HUB` and `IMAGE_WITH_HOST` constants to `docker/const.py`
to avoid circular imports between manager.py and interface.py.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* ruff format

* Document raises

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 11:10:17 +01:00
dependabot[bot]
62b7b8c399 Bump types-docker from 7.1.0.20251125 to 7.1.0.20251127 (#6358) 2025-11-27 07:22:43 +01:00
Stefan Agner
3c87704802 Handle update errors in automatic Supervisor update task (#6328)
Wrap the Supervisor auto-update call with suppress(SupervisorUpdateError)
to prevent unhandled exceptions from propagating. When an automatic update
fails, errors are already logged by the exception handlers, and there's no
meaningful recovery action the scheduler task can take.
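The wrapping pattern is the standard contextlib.suppress idiom; a minimal sketch (the exception class and coroutine here are stand-ins, not the Supervisor's actual symbols):

```python
import asyncio
from contextlib import suppress

class SupervisorUpdateError(Exception):
    """Stand-in for the Supervisor's update error."""

async def scheduled_auto_update(update) -> None:
    """Run an automatic update; swallow update errors already logged upstream."""
    with suppress(SupervisorUpdateError):
        await update()
```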

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 14:11:51 -05:00
Stefan Agner
ae7700f52c Fix private registry authentication for aiodocker image pulls (#6355)
* Fix private registry authentication for aiodocker image pulls

After PR #6252 migrated image pulling from dockerpy to aiodocker,
private registry authentication stopped working. The old _docker_login()
method stored credentials in ~/.docker/config.json via dockerpy, but
aiodocker doesn't read that file - it requires credentials passed
explicitly via the auth parameter.

Changes:
- Remove unused _docker_login() method (dockerpy login was ineffective)
- Pass credentials directly to pull_image() via new auth parameter
- Add auth parameter to DockerAPI.pull_image() method
- Add unit tests for Docker Hub and custom registry authentication

Fixes #6345

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Ignore protected access in test

* Fix plug-in pull test

* Fix HA core tests

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 17:37:24 +01:00
Stefan Agner
e06e792e74 Fix type annotations for sentinel values in job manager (#6349)
Add `type[DEFAULT]` to type annotations for parameters that use the
DEFAULT sentinel value. This fixes runtime type checking failures with
typeguard when sentinel values are passed as arguments.

Use explicit type casts and restructured parameter passing to satisfy
mypy's type narrowing requirements. The sentinel pattern allows
distinguishing between "parameter not provided" and "parameter
explicitly set to None", which is critical for job management logic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-26 09:17:17 +01:00
dependabot[bot]
5f55ab8de4 Bump home-assistant/wheels from 2025.10.0 to 2025.11.0 (#6352) 2025-11-26 07:56:32 +01:00
Stefan Agner
ca521c24cb Fix typeguard error in API decorator wrapper functions (#6350)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 19:04:31 +01:00
dependabot[bot]
6042694d84 Bump dbus-fast from 2.45.1 to 3.1.2 (#6317)
* Bump dbus-fast from 2.45.1 to 3.1.2

Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.45.1 to 3.1.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.45.1...v3.1.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 3.1.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update unit tests for dbus-fast 3.1.2 changes

* Fix type annotations

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-11-25 16:25:06 +01:00
Stefan Agner
2b2aedae60 Fix D-Bus type annotation issues (#6348)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 14:47:48 +01:00
Jan Čermák
4b4afd081b Drop build for deprecated architectures and re-tag legacy build instead (#6347)
To ensure that e.g. airgapped devices running on deprecated archs can still
update the Supervisor when they come online, the version of Supervisor in the
version file must stay available for all architectures. Since the base images
will no longer exist for those archs and to avoid the need for building it from
current source, add job that pulls the last available image, changes the label
in the metadata and publishes it under the new tag. That way we'll get a new
image with a different SHA (compared to a plain re-tag), so the GHCR metrics
should reflect how many devices still pull these old images.
2025-11-25 12:42:01 +01:00
Stefan Agner
a3dca10fd8 Fix blocking I/O call in DBusManager.load() (#6346)
Wrap SOCKET_DBUS.exists() call in sys_run_in_executor to avoid blocking
os.stat() call in async context. This follows the same pattern already
used in supervisor/resolution/evaluations/dbus.py.

Fixes SUPERVISOR-11HC
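The executor-offload pattern is standard asyncio; a self-contained sketch (the socket path constant mirrors the commit, sys_run_in_executor is replaced here by the plain loop API):

```python
import asyncio
from pathlib import Path

SOCKET_DBUS = Path("/run/dbus/system_bus_socket")

async def dbus_socket_exists() -> bool:
    """Check the D-Bus socket path without blocking the event loop.

    Path.exists() performs a blocking os.stat(), so it runs in the
    default thread-pool executor instead of directly in async context.
    """
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, SOCKET_DBUS.exists)
```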

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 12:07:35 +01:00
Stefan Agner
d73682ee8a Fix blocking I/O in DockerInfo cpu realtime check (#6344)
The support_cpu_realtime property was performing blocking filesystem I/O
(Path.exists()) in async context, causing BlockingError e.g. when the
audio plugin started.

Changes:
- Convert support_cpu_realtime from property to dataclass field
- Make DockerInfo.new() async to properly handle I/O operations
- Run Path.exists() check in executor thread during initialization
- Store result as immutable field to avoid repeated filesystem access

Fixes SUPERVISOR-15WC

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 11:34:01 +01:00
Stefan Agner
032fa4cdc4 Add comment to explicit "used" calculation for disk usage API (#6340)
* Add explicit used calculation for disk usage API

Added explicit calculation for used disk space along with a comment
to clarify the reasoning behind the calculation method.

* Address review feedback
2025-11-25 11:00:46 +02:00
dependabot[bot]
7244e447ab Bump actions/setup-python from 6.0.0 to 6.1.0 (#6341)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:29 +01:00
dependabot[bot]
603ba57846 Bump types-docker from 7.1.0.20251009 to 7.1.0.20251125 (#6342)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:20:06 +01:00
dependabot[bot]
0ff12abdf4 Bump sentry-sdk from 2.45.0 to 2.46.0 (#6343)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 07:19:32 +01:00
Petar Petrov
906838e325 Fix disk usage calculation (#6339) 2025-11-24 16:35:13 +02:00
Stefan Agner
3be0c13fc5 Drop Debian 12 from supported OS list (#6337)
* Drop Debian 12 from supported OS list

With the deprecation of Home Assistant Supervised installation method
Debian 12 is no longer supported. This change removes Debian 12
from the list of supported operating systems in the evaluation logic.

* Improve tests
2025-11-24 11:46:23 +01:00
dependabot[bot]
bb450cad4f Bump peter-evans/create-pull-request from 7.0.8 to 7.0.9 (#6332)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.8 to 7.0.9.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](271a8d0340...84ae59a2cd)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:52:46 +01:00
dependabot[bot]
10af48a65b Bump backports-zstd from 1.0.0 to 1.1.0 (#6336)
Bumps [backports-zstd](https://github.com/rogdham/backports.zstd) from 1.0.0 to 1.1.0.
- [Changelog](https://github.com/Rogdham/backports.zstd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rogdham/backports.zstd/compare/v1.0.0...v1.1.0)

---
updated-dependencies:
- dependency-name: backports-zstd
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:50:56 +01:00
dependabot[bot]
2f334c48c3 Bump ruff from 0.14.5 to 0.14.6 (#6335)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.5 to 0.14.6.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.5...0.14.6)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 08:49:55 +01:00
dependabot[bot]
6d87e8f591 Bump pre-commit from 4.4.0 to 4.5.0 (#6334) 2025-11-24 07:54:42 +01:00
dependabot[bot]
4d1dd63248 Bump time-machine from 3.0.0 to 3.1.0 (#6333) 2025-11-24 07:46:27 +01:00
Stefan Agner
0c2d0cf5c1 Fix D-Bus enum type conversions for NetworkManager (#6325)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-22 21:52:14 +01:00
Jan Čermák
ca7a3af676 Drop codenotary options from the build config (#6330)
These options are obsolete, as all the support has been dropped from the
builder and Supervisor as well. Remove them from our build config too.
2025-11-21 16:36:48 +01:00
Stefan Agner
93272fe4c0 Deprecate i386, armhf and armv7 Supervisor architectures (#5620)
* Deprecate i386, armhf and armv7 Supervisor architectures

* Exclude Core from architecture deprecation checks

This still allows downloading the latest available Core version, even
on deprecated systems.

* Fix pytest
2025-11-21 16:35:26 +01:00
Jan Čermák
79a99cc66d Use release-suffixed base images (pin to 2025.11.1) (#6329)
Currently we're lacking control over what version of the base images is
used, and it only depends on when the build is launched. This doesn't
allow any (easy) rollback mechanisms and it's also not very transparent.

Use the newly introduced base image tags which include the release
version suffix so we have more control over this aspect.
2025-11-21 16:22:22 +01:00
dependabot[bot]
6af6c3157f Bump actions/checkout from 5.0.1 to 6.0.0 (#6327)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](93cb6efe18...1af3b93b68)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-21 09:29:32 +01:00
Jan Čermák
5ed0c85168 Add optional no_colors query parameter to advanced logs endpoints (#6326)
Add support for `no_colors` query parameter on all advanced logs API endpoints,
allowing users to optionally strip ANSI color sequences from log output. This
complements the existing color stripping on /latest endpoints added in #6319.
2025-11-21 09:29:15 +01:00
Stefan Agner
63a3dff118 Handle pull events with complete progress details only (#6320)
* Handle pull events with complete progress details only

Under certain circumstances, Docker seems to send pull events with
incomplete progress details (i.e., missing 'current' or 'total' fields).
In practise, we've observed an empty dictionary for progress details
as well as missing 'total' field (while 'current' was present).
All events were using Docker 28.3.3 using the old, default Docker graph
backend.

* Fix docstring/comment
2025-11-19 12:21:27 +01:00
dependabot[bot]
fc8fc171c1 Bump time-machine from 2.19.0 to 3.0.0 (#6321)
Bumps [time-machine](https://github.com/adamchainz/time-machine) from 2.19.0 to 3.0.0.
- [Changelog](https://github.com/adamchainz/time-machine/blob/main/docs/changelog.rst)
- [Commits](https://github.com/adamchainz/time-machine/compare/2.19.0...3.0.0)

---
updated-dependencies:
- dependency-name: time-machine
  dependency-version: 3.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 12:21:17 +01:00
Stefan Agner
72bbc50c83 Fix call_at to use event loop time base instead of Unix timestamp (#6324)
* Fix call_at to use event loop time base instead of Unix timestamp

The CoreSys.call_at method was incorrectly passing Unix timestamps
directly to asyncio.loop.call_at(), which expects times in the event
loop's monotonic time base. This caused scheduled jobs to be scheduled
approximately 55 years in the future (the difference between Unix epoch
time and monotonic time since boot).

The bug was masked by time-machine 2.19.0, which patched time.monotonic()
and caused loop.time() to return Unix timestamps. Time-machine 3.0.0
removed this patching (as it caused event loop freezes), exposing the bug.

Fix by converting the datetime to event loop time base:
- Calculate delay from current Unix time to scheduled Unix time
- Add delay to current event loop time to get scheduled loop time

Also simplify test_job_scheduled_at to avoid time-machine's async
context managers, following the pattern of test_job_scheduled_delay.
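The two-step conversion described in the fix can be sketched directly (a minimal illustration of the idea; the Supervisor's call_at wrapper itself is not reproduced here):

```python
import asyncio
import time
from datetime import datetime

def to_loop_time(loop: asyncio.AbstractEventLoop, when: datetime) -> float:
    """Convert a wall-clock datetime into the loop's monotonic time base.

    loop.call_at() expects values on the loop.time() scale, not Unix
    timestamps, so compute the delay in wall-clock seconds and add it
    to the loop's current time.
    """
    delay = when.timestamp() - time.time()  # seconds until the target
    return loop.time() + delay
```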

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add comment about dateime in the past

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-19 11:49:05 +01:00
Jan Čermák
0837e05cb2 Strip ANSI escape color sequences from /latest log responses (#6319)
* Strip ANSI escape color sequences from /latest log responses

Strip ANSI sequences of CSI commands [1] used for log coloring from
/latest log endpoints. These endpoints were primarily designed for log
downloads and colors are mostly not wanted in those. Add optional
argument for stripping the colors from the logs and enable it for the
/latest endpoints.

[1] https://en.wikipedia.org/wiki/ANSI_escape_code#CSIsection
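Stripping CSI sequences comes down to one regular expression; a sketch of the idea (the pattern below covers CSI commands including SGR color codes, per the linked reference; the function name is illustrative):

```python
import re

# CSI sequence: ESC '[' + parameter bytes + intermediate bytes + final byte @-~
CSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")

def strip_ansi_colors(line: str) -> str:
    """Remove ANSI CSI escape sequences used for log coloring."""
    return CSI_RE.sub("", line)
```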

* Refactor advanced logs' tests to use fixture factory

Introduce `advanced_logs_tester` fixture to simplify testing of advanced logs
in the API tests, declaring all the needed fixtures in a single place.
2025-11-19 09:39:24 +01:00
dependabot[bot]
d3d652eba5 Bump sentry-sdk from 2.44.0 to 2.45.0 (#6322)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.44.0 to 2.45.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.44.0...2.45.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 09:27:59 +01:00
dependabot[bot]
2eea3c70eb Bump coverage from 7.11.3 to 7.12.0 (#6323)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.11.3 to 7.12.0.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.11.3...7.12.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 09:27:45 +01:00
dependabot[bot]
95c106d502 Bump actions/checkout from 5.0.0 to 5.0.1 (#6318)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0 to 5.0.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](08c6903cd8...93cb6efe18)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-18 08:45:19 +01:00
dependabot[bot]
74f9431519 Bump ruff from 0.14.4 to 0.14.5 (#6314)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.4 to 0.14.5.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.4...0.14.5)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-14 09:06:58 +01:00
dependabot[bot]
0eef2169f7 Bump pylint from 4.0.2 to 4.0.3 (#6315) 2025-11-13 23:02:33 -08:00
dependabot[bot]
2656b451cd Bump pytest from 8.4.2 to 9.0.1 (#6309)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.4.2 to 9.0.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.4.2...9.0.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-13 09:45:51 +01:00
dependabot[bot]
af7a629dd4 Bump pytest-asyncio from 1.2.0 to 1.3.0 (#6310)
Bumps [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/pytest-dev/pytest-asyncio/releases)
- [Commits](https://github.com/pytest-dev/pytest-asyncio/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: pytest-asyncio
  dependency-version: 1.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-13 09:07:57 +01:00
Mike Degatano
30cc172199 Migrate images from dockerpy to aiodocker (#6252)
* Migrate images from dockerpy to aiodocker

* Add missing coverage and fix bug in repair

* Bind libraries to different files and refactor images.pull

* Use the same socket again

Try using the same socket again.

* Fix pytest

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-11-12 20:54:06 +01:00
Stefan Agner
69ae8db13c Add context to Sentry events during setup phase (#6308)
* Add context to Sentry events during setup phase

Since not all properties are safe to access, the current code avoids
adding any context during the initialization and setup phase. However,
quite a few reports occur during the setup phase. This change adds some
context to events during the setup phase as well, to make debugging easier.

* Drop default arch (not available during setup)
2025-11-12 14:49:04 -05:00
Stefan Agner
d85aedc42b Avoid using deprecated 'id' field in Docker events (#6307) 2025-11-12 20:44:01 +01:00
dependabot[bot]
d541fe5c3a Bump sentry-sdk from 2.43.0 to 2.44.0 (#6306) 2025-11-11 22:28:34 -08:00
Stefan Agner
91a9cb98c3 Avoid adding Content-Type to non-body responses (#6266)
* Avoid adding Content-Type to non-body responses

The current code sets the content-type header for all responses
to the result's content_type property if upstream does not set a
content_type. The default value for content_type is
"application/octet-stream".

For responses that do not have a body (like 204 No Content or
304 Not Modified), setting a content-type header is unnecessary and
potentially misleading. Follow HTTP standards by only adding the
content-type header to responses that actually contain a body.

* Add pytest for ingress proxy

* Preserve Content-Type header for HEAD requests in ingress API
2025-11-10 17:39:10 +01:00
Stefan Agner
8f2b0763b7 Add zstd compression support (#6302)
Add zstd compression support to allow zstd-compressed proxying for
ingress. Zstd is automatically supported by aiohttp if the package
is present.
2025-11-10 17:04:06 +01:00
Stefan Agner
5018d5d04e Bump pytest-asyncio to 1.2.0 (#6301) 2025-11-10 12:00:25 +01:00
Stefan Agner
1ba1ad9fc7 Remove Docker version from unhealthy reasons (#6292)
Any unhealthy reason blocks Home Assistant OS updates. If the Docker
version on a system running Home Assistant OS is outdated, the user
needs to be able to update Home Assistant OS to get a supported Docker
version. Therefore, we should not mark the system as unhealthy due to
an outdated Docker version.
2025-11-10 10:23:12 +01:00
dependabot[bot]
f0ef40eb3e Bump astroid from 4.0.1 to 4.0.2 (#6297)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 09:55:16 +01:00
dependabot[bot]
6eed5b02b4 Bump coverage from 7.11.0 to 7.11.3 (#6298) 2025-11-09 23:24:55 -08:00
dependabot[bot]
e59dcf7089 Bump dbus-fast from 2.44.5 to 2.45.1 (#6299) 2025-11-09 23:15:39 -08:00
dependabot[bot]
48da3d8a8d Bump pre-commit from 4.3.0 to 4.4.0 (#6300) 2025-11-09 23:07:49 -08:00
dependabot[bot]
7b82ebe3aa Bump ruff from 0.14.3 to 0.14.4 (#6291)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-07 09:06:06 +01:00
Stefan Agner
d96ea9aef9 Fix docker image pull progress blocked by small layers (#6287)
* Fix docker image pull progress blocked by small layers

Small Docker layers (typically <100 bytes) can skip the downloading phase
entirely, going directly from "Pulling fs layer" to "Download complete"
without emitting any progress events with byte counts. This caused the
aggregate progress calculation to block indefinitely, as it required all
layer jobs to have their `extra` field populated with byte counts before
proceeding.

The issue manifested as parent job progress jumping from 0% to 97.9% after
long delays, as seen when a 96-byte layer held up progress reporting for
~50 seconds until it finally reached the "Extracting" phase.

Set a minimal `extra` field (current=1, total=1) when layers reach
"Download complete" without having gone through the downloading phase.
This allows the aggregate progress calculation to proceed immediately
while still correctly representing the layer as 100% downloaded.
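The placeholder assignment is a one-line guard in the pull-event handler; a hypothetical sketch using a plain dict as the layer job state:

```python
def on_pull_event(layer_job: dict, status: str) -> None:
    """Set a minimal extra for layers that skip the downloading phase.

    A layer jumping straight to "Download complete" never reported byte
    counts; a 1/1 placeholder marks it 100% downloaded so the aggregate
    progress calculation can proceed instead of blocking on it.
    """
    if status == "Download complete" and layer_job.get("extra") is None:
        layer_job["extra"] = {"current": 1, "total": 1}
```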

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update test to capture issue correctly

* Improve pytest

* Fix pytest comment

* Fix pylint warning

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-06 09:04:55 +01:00
dependabot[bot]
4e5ec2d6be Bump brotli from 1.1.0 to 1.2.0 (#6288)
Bumps [brotli](https://github.com/google/brotli) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/google/brotli/releases)
- [Changelog](https://github.com/google/brotli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/brotli/compare/go/cbrotli/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: brotli
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 09:04:00 +01:00
dependabot[bot]
c9ceb4a4e3 Bump getsentry/action-release from 3.3.0 to 3.4.0 (#6284)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 12:17:51 +01:00
Ashton
d33305379f Improve error message clarity by specifying to check supervisor logs (#6250)
* Improve error message clarity by specifying to check supervisor logs with 'ha supervisor logs'

* Fix ruff, supervisor -> Supervisor

---------

Co-authored-by: Jan Čermák <sairon@sairon.cz>
2025-11-04 17:12:15 -05:00
Stefan Agner
1448a33dbf Remove Codenotary integrity check (#6236)
* Formally deprecate CodeNotary build config

* Remove CodeNotary specific integrity checking

The current code is specific to how CodeNotary was doing integrity
checking. A future integrity checking mechanism likely will work
differently (e.g. through EROFS based containers). Remove the current
code to make way for a future implementation.

* Drop CodeNotary integrity fixups

* Drop unused tests

* Fix pytest

* Fix pytest

* Remove CodeNotary related exceptions and handling

Remove CodeNotary related exceptions and handling from the Docker
interface.

* Drop unnecessary comment

* Remove Codenotary specific IssueType/SuggestionType

* Drop Codenotary specific environment and secret reference

* Remove unused constants

* Introduce APIGone exception for removed APIs

Introduce a new exception class APIGone to indicate that certain API
features have been removed and are no longer available. Update the
security integrity check endpoint to raise this new exception instead
of a generic APIError, providing clearer communication to clients that
the feature has been intentionally removed.

* Drop content trust

A cosign based signature verification will likely be named differently
to avoid confusion with existing implementations. For now, remove the
content trust option entirely.

* Drop code sign test

* Remove source_mods/content_trust evaluations

* Remove content_trust reference in bootstrap.py

* Fix security tests

* Drop unused tests

* Drop codenotary from schema

Since we have "remove extra" in voluptuous, we can remove the
codenotary field from the addon schema.

* Remove content_trust from tests

* Remove content_trust unsupported reason

* Remove unnecessary comment

* Remove unrelated pytest

* Remove unrelated fixtures
2025-11-03 20:13:15 +01:00
Stefan Agner
1657769044 Fix parent job sync when parent reaches 100% progress (#6282)
* Fix parent job sync when parent reaches 100% progress

When a child job completes and syncs its progress to a parent job,
it can set the parent to 100% progress. However, there are situations
where a second child job for the same parent job is created after the
parent has already reached 100%. This caused a ValueError when creating
a ParentJobSync for subsequent child jobs, as the validator required
starting_progress < 100.0.

This issue was introduced in #6207 (shipped in 2025.10.0) but only
became visible in 2025.10.1 with #6195, which started using progress
syncs. The bug manifests during Core update rollbacks when a second
docker_interface_install job is created after the parent already
reached 100% from the first install.

Fix by skipping parent job sync setup when the parent is already done
or at 100% progress, as there's no value in syncing progress to a
completed parent.

* Update comment and add debug message
2025-11-03 17:58:40 +01:00
Copilot
a8b7923a42 Add test coverage for _map_nm_wifi method (#6275)
* Initial plan

* Add comprehensive test coverage for _map_nm_wifi method

Co-authored-by: agners <34061+agners@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: agners <34061+agners@users.noreply.github.com>
2025-11-03 10:04:01 +01:00
dependabot[bot]
b3b7bc29fa Bump ruff from 0.14.2 to 0.14.3 (#6280)
* Bump ruff from 0.14.2 to 0.14.3

Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.2 to 0.14.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.2...0.14.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update precommit ruff too

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-10-31 11:42:39 -04:00
dependabot[bot]
2098168d04 Bump sentry-sdk from 2.42.1 to 2.43.0 (#6279)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.42.1 to 2.43.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.42.1...2.43.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-30 14:24:50 -04:00
dependabot[bot]
02c4fd4a8c Bump aiohttp from 3.13.1 to 3.13.2 (#6273)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-29 10:36:28 +01:00
117 changed files with 8707 additions and 2263 deletions


@@ -34,6 +34,9 @@ on:
 env:
   DEFAULT_PYTHON: "3.13"
+  COSIGN_VERSION: "v2.5.3"
+  CRANE_VERSION: "v0.20.7"
+  CRANE_SHA256: "8ef3564d264e6b5ca93f7b7f5652704c4dd29d33935aff6947dd5adefd05953e"
   BUILD_NAME: supervisor
   BUILD_TYPE: supervisor
@@ -53,7 +56,7 @@ jobs:
       requirements: ${{ steps.requirements.outputs.changed }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           fetch-depth: 0
@@ -92,7 +95,7 @@ jobs:
         arch: ${{ fromJson(needs.init.outputs.architectures) }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           fetch-depth: 0
@@ -107,7 +110,7 @@ jobs:
       # home-assistant/wheels doesn't support sha pinning
       - name: Build wheels
         if: needs.init.outputs.requirements == 'true'
-        uses: home-assistant/wheels@2025.10.0
+        uses: home-assistant/wheels@2025.11.0
         with:
           abi: cp313
           tag: musllinux_1_2
@@ -126,7 +129,7 @@ jobs:
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
         if: needs.init.outputs.publish == 'true'
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
@@ -134,7 +137,7 @@ jobs:
         if: needs.init.outputs.publish == 'true'
         uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
         with:
-          cosign-release: "v2.5.3"
+          cosign-release: ${{ env.COSIGN_VERSION }}
       - name: Install dirhash and calc hash
         if: needs.init.outputs.publish == 'true'
@@ -170,17 +173,15 @@ jobs:
           --target /data \
           --cosign \
           --generic ${{ needs.init.outputs.version }}
-        env:
-          CAS_API_KEY: ${{ secrets.CAS_TOKEN }}
   version:
     name: Update version
-    needs: ["init", "run_supervisor"]
+    needs: ["init", "run_supervisor", "retag_deprecated"]
     runs-on: ubuntu-latest
     steps:
       - name: Checkout the repository
         if: needs.init.outputs.publish == 'true'
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Initialize git
         if: needs.init.outputs.publish == 'true'
@@ -205,7 +206,7 @@ jobs:
     timeout-minutes: 60
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       # home-assistant/builder doesn't support sha pinning
       - name: Build the Supervisor
@@ -293,33 +294,6 @@ jobs:
             exit 1
           fi
-      - name: Check the Supervisor code sign
-        if: needs.init.outputs.publish == 'true'
-        run: |
-          echo "Enable Content-Trust"
-          test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
-          if [ "$test" != "ok" ]; then
-            exit 1
-          fi
-          echo "Run supervisor health check"
-          test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
-          if [ "$test" != "ok" ]; then
-            exit 1
-          fi
-          echo "Check supervisor unhealthy"
-          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
-          if [ "$test" != "" ]; then
-            exit 1
-          fi
-          echo "Check supervisor supported"
-          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
-          if [[ "$test" =~ source_mods ]]; then
-            exit 1
-          fi
       - name: Create full backup
         id: backup
         run: |
@@ -381,3 +355,50 @@ jobs:
       - name: Get supervisor logs on failiure
         if: ${{ cancelled() || failure() }}
         run: docker logs hassio_supervisor
+
+  retag_deprecated:
+    needs: ["build", "init"]
+    name: Re-tag deprecated ${{ matrix.arch }} images
+    if: needs.init.outputs.publish == 'true'
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      id-token: write
+      packages: write
+    strategy:
+      matrix:
+        arch: ["armhf", "armv7", "i386"]
+    env:
+      # Last available release for deprecated architectures
+      FROZEN_VERSION: "2025.11.5"
+    steps:
+      - name: Login to GitHub Container Registry
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Cosign
+        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
+        with:
+          cosign-release: ${{ env.COSIGN_VERSION }}
+      - name: Install crane
+        run: |
+          curl -sLO https://github.com/google/go-containerregistry/releases/download/${{ env.CRANE_VERSION }}/go-containerregistry_Linux_x86_64.tar.gz
+          echo "${{ env.CRANE_SHA256 }} go-containerregistry_Linux_x86_64.tar.gz" | sha256sum -c -
+          tar xzf go-containerregistry_Linux_x86_64.tar.gz crane
+          sudo mv crane /usr/local/bin/
+      - name: Re-tag deprecated image with updated version label
+        run: |
+          crane auth login ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.GITHUB_TOKEN }}
+          crane mutate \
+            --label io.hass.version=${{ needs.init.outputs.version }} \
+            --tag ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }} \
+            ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ env.FROZEN_VERSION }}
+      - name: Sign image with Cosign
+        run: |
+          cosign sign --yes ghcr.io/home-assistant/${{ matrix.arch }}-hassio-supervisor:${{ needs.init.outputs.version }}


@@ -26,10 +26,10 @@ jobs:
     name: Prepare Python dependencies
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python
         id: python
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Restore Python virtual environment
@@ -68,9 +68,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -111,9 +111,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -154,7 +154,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Register hadolint problem matcher
         run: |
           echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,9 +169,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -213,9 +213,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -257,9 +257,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -293,9 +293,9 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -339,9 +339,9 @@ jobs:
     name: Run tests Python ${{ needs.prepare.outputs.python-version }}
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -398,9 +398,9 @@ jobs:
     needs: ["pytest", "prepare"]
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}


@@ -11,7 +11,7 @@ jobs:
     name: Release Drafter
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
         with:
           fetch-depth: 0


@@ -10,9 +10,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
      - name: Check out code from GitHub
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - name: Sentry Release
-        uses: getsentry/action-release@4f502acc1df792390abe36f2dcb03612ef144818 # v3.3.0
+        uses: getsentry/action-release@128c5058bbbe93c8e02147fe0a9c713f166259a6 # v3.4.0
        env:
          SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
          SENTRY_ORG: ${{ secrets.SENTRY_ORG }}


@@ -14,7 +14,7 @@ jobs:
     latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Get latest frontend release
         id: latest_frontend_version
         uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
@@ -49,7 +49,7 @@ jobs:
     if: needs.check-version.outputs.skip != 'true'
     steps:
       - name: Checkout code
-        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
       - name: Clear www folder
         run: |
           rm -rf supervisor/api/panel/*
@@ -68,7 +68,7 @@ jobs:
         run: |
           rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
       - name: Create PR
-        uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
+        uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
         with:
           commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
           branch: autoupdate-frontend


@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.11.10
+    rev: v0.14.3
     hooks:
       - id: ruff
         args:


@@ -1,13 +1,7 @@
 image: ghcr.io/home-assistant/{arch}-hassio-supervisor
 build_from:
-  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22
-  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22
-  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22
-  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22
-  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22
-codenotary:
-  signer: notary@home-assistant.io
-  base_image: notary@home-assistant.io
+  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22-2025.11.1
+  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22-2025.11.1
 cosign:
   base_identity: https://github.com/home-assistant/docker-base/.*
   identity: https://github.com/home-assistant/supervisor/.*


@@ -1,10 +1,12 @@
 aiodns==3.5.0
-aiohttp==3.13.1
+aiodocker==0.24.0
+aiohttp==3.13.2
 atomicwrites-homeassistant==1.4.1
 attrs==25.4.0
 awesomeversion==25.8.0
+backports.zstd==1.1.0
 blockbuster==1.5.25
-brotli==1.1.0
+brotli==1.2.0
 ciso8601==2.3.3
 colorlog==6.10.1
 cpe==1.3.1
@@ -23,8 +25,8 @@ pyudev==0.24.4
 PyYAML==6.0.3
 requests==2.32.5
 securetar==2025.2.1
-sentry-sdk==2.42.1
+sentry-sdk==2.46.0
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.5
+dbus-fast==3.1.2
 zlib-fast==0.2.1


@@ -1,16 +1,16 @@
-astroid==4.0.1
-coverage==7.11.0
+astroid==4.0.2
+coverage==7.12.0
 mypy==1.18.2
-pre-commit==4.3.0
-pylint==4.0.2
+pre-commit==4.5.0
+pylint==4.0.3
 pytest-aiohttp==1.1.0
-pytest-asyncio==0.25.2
+pytest-asyncio==1.3.0
 pytest-cov==7.0.0
 pytest-timeout==2.4.0
-pytest==8.4.2
-ruff==0.14.2
-time-machine==2.19.0
-types-docker==7.1.0.20251009
+pytest==9.0.1
+ruff==0.14.6
+time-machine==3.1.0
+types-docker==7.1.0.20251127
 types-pyyaml==6.0.12.20250915
 types-requests==2.32.4.20250913
 urllib3==2.5.0


@@ -1513,13 +1513,6 @@ class Addon(AddonModel):
         _LOGGER.info("Finished restore for add-on %s", self.slug)
         return wait_for_start

-    def check_trust(self) -> Awaitable[None]:
-        """Calculate Addon docker content trust.
-
-        Return Coroutine.
-        """
-        return self.instance.check_trust()
-
     @Job(
         name="addon_restart_after_problem",
         throttle_period=WATCHDOG_THROTTLE_PERIOD,


@@ -2,7 +2,9 @@
 from __future__ import annotations

+import base64
 from functools import cached_property
+import json
 from pathlib import Path
 from typing import TYPE_CHECKING, Any
@@ -12,12 +14,15 @@ from ..const import (
     ATTR_ARGS,
     ATTR_BUILD_FROM,
     ATTR_LABELS,
+    ATTR_PASSWORD,
     ATTR_SQUASH,
+    ATTR_USERNAME,
     FILE_SUFFIX_CONFIGURATION,
     META_ADDON,
     SOCKET_DOCKER,
 )
 from ..coresys import CoreSys, CoreSysAttributes
+from ..docker.const import DOCKER_HUB
 from ..docker.interface import MAP_ARCH
 from ..exceptions import ConfigurationFileError, HassioArchNotFound
 from ..utils.common import FileConfiguration, find_one_filetype
@@ -122,8 +127,43 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
         except HassioArchNotFound:
             return False

+    def get_docker_config_json(self) -> str | None:
+        """Generate Docker config.json content with registry credentials for base image.
+
+        Returns a JSON string with registry credentials for the base image's registry,
+        or None if no matching registry is configured.
+
+        Raises:
+            HassioArchNotFound: If the add-on is not supported on the current architecture.
+        """
+        # Early return before accessing base_image to avoid unnecessary arch lookup
+        if not self.sys_docker.config.registries:
+            return None
+
+        registry = self.sys_docker.config.get_registry_for_image(self.base_image)
+        if not registry:
+            return None
+
+        stored = self.sys_docker.config.registries[registry]
+        username = stored[ATTR_USERNAME]
+        password = stored[ATTR_PASSWORD]
+
+        # Docker config.json uses base64-encoded "username:password" for auth
+        auth_string = base64.b64encode(f"{username}:{password}".encode()).decode()
+
+        # Use the actual registry URL for the key
+        # Docker Hub uses "https://index.docker.io/v1/" as the key
+        registry_key = (
+            "https://index.docker.io/v1/" if registry == DOCKER_HUB else registry
+        )
+
+        config = {"auths": {registry_key: {"auth": auth_string}}}
+        return json.dumps(config)
+
     def get_docker_args(
-        self, version: AwesomeVersion, image_tag: str
+        self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
     ) -> dict[str, Any]:
         """Create a dict with Docker run args."""
         dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
@@ -172,12 +212,24 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
             self.addon.path_location
         )

+        volumes = {
+            SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
+            addon_extern_path: {"bind": "/addon", "mode": "ro"},
+        }
+
+        # Mount Docker config with registry credentials if available
+        if docker_config_path:
+            docker_config_extern_path = self.sys_config.local_to_extern_path(
+                docker_config_path
+            )
+            volumes[docker_config_extern_path] = {
+                "bind": "/root/.docker/config.json",
+                "mode": "ro",
+            }
+
         return {
             "command": build_cmd,
-            "volumes": {
-                SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
-                addon_extern_path: {"bind": "/addon", "mode": "ro"},
-            },
+            "volumes": volumes,
             "working_dir": "/addon",
         }
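
For reference, the credential encoding used in `get_docker_config_json` above can be reproduced standalone. This is an illustrative sketch: the real code resolves the registry and the `DOCKER_HUB` constant from Supervisor's configuration, while here the registry names are plain string assumptions:

```python
import base64
import json


def docker_config_json(registry: str, username: str, password: str) -> str:
    """Build a Docker config.json payload with credentials for one registry."""
    # Docker stores credentials as base64("username:password") under "auths"
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    # Docker Hub historically uses the legacy index URL as its auths key
    key = "https://index.docker.io/v1/" if registry == "docker.io" else registry
    return json.dumps({"auths": {key: {"auth": auth}}})
```

Mounting the resulting file at `/root/.docker/config.json` (as the diff does) lets the build container pull the base image from an authenticated registry.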


@@ -103,7 +103,6 @@ from .configuration import FolderMapping
 from .const import (
     ATTR_BACKUP,
     ATTR_BREAKING_VERSIONS,
-    ATTR_CODENOTARY,
     ATTR_PATH,
     ATTR_READ_ONLY,
     AddonBackupMode,
@@ -632,13 +631,8 @@ class AddonModel(JobGroup, ABC):
     @property
     def signed(self) -> bool:
-        """Return True if the image is signed."""
-        return ATTR_CODENOTARY in self.data
-
-    @property
-    def codenotary(self) -> str | None:
-        """Return Signer email address for CAS."""
-        return self.data.get(ATTR_CODENOTARY)
+        """Currently no signing support."""
+        return False

     @property
     def breaking_versions(self) -> list[AwesomeVersion]:


@@ -207,6 +207,12 @@ def _warn_addon_config(config: dict[str, Any]):
             name,
         )

+    if ATTR_CODENOTARY in config:
+        _LOGGER.warning(
+            "Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
+            name,
+        )
+
     return config
@@ -417,7 +423,6 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
         vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
             AddonBackupMode
         ),
-        vol.Optional(ATTR_CODENOTARY): vol.Email(),
         vol.Optional(ATTR_OPTIONS, default={}): dict,
         vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
             vol.Schema({str: SCHEMA_ELEMENT}),


@@ -152,6 +152,7 @@ class RestAPI(CoreSysAttributes):
                     self._api_host.advanced_logs,
                     identifier=syslog_identifier,
                     latest=True,
+                    no_colors=True,
                 ),
             ),
             web.get(
@@ -449,6 +450,7 @@ class RestAPI(CoreSysAttributes):
                 await async_capture_exception(err)
             kwargs.pop("follow", None)  # Follow is not supported for Docker logs
             kwargs.pop("latest", None)  # Latest is not supported for Docker logs
+            kwargs.pop("no_colors", None)  # no_colors not supported for Docker logs
             return await api_supervisor.logs(*args, **kwargs)

         self.webapp.add_routes(
@@ -460,7 +462,7 @@ class RestAPI(CoreSysAttributes):
                 ),
                 web.get(
                     "/supervisor/logs/latest",
-                    partial(get_supervisor_logs, latest=True),
+                    partial(get_supervisor_logs, latest=True, no_colors=True),
                 ),
                 web.get("/supervisor/logs/boots/{bootid}", get_supervisor_logs),
                 web.get(
@@ -576,7 +578,7 @@ class RestAPI(CoreSysAttributes):
                 ),
                 web.get(
                     "/addons/{addon}/logs/latest",
-                    partial(get_addon_logs, latest=True),
+                    partial(get_addon_logs, latest=True, no_colors=True),
                 ),
                 web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
                 web.get(
@@ -811,6 +813,10 @@ class RestAPI(CoreSysAttributes):
         self.webapp.add_routes(
             [
                 web.get("/docker/info", api_docker.info),
+                web.post(
+                    "/docker/migrate-storage-driver",
+                    api_docker.migrate_docker_storage_driver,
+                ),
                 web.post("/docker/options", api_docker.options),
                 web.get("/docker/registries", api_docker.registries),
                 web.post("/docker/registries", api_docker.create_registry),


@@ -4,6 +4,7 @@ import logging
 from typing import Any

 from aiohttp import web
+from awesomeversion import AwesomeVersion
 import voluptuous as vol

 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
@@ -16,6 +17,7 @@ from ..const import (
     ATTR_PASSWORD,
     ATTR_REGISTRIES,
     ATTR_STORAGE,
+    ATTR_STORAGE_DRIVER,
     ATTR_USERNAME,
     ATTR_VERSION,
 )
@@ -42,6 +44,12 @@
     }
 )

+SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
+    {
+        vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs", "overlay2"]),
+    }
+)
+

 class APIDocker(CoreSysAttributes):
     """Handle RESTful API for Docker configuration."""
@@ -123,3 +131,27 @@ class APIDocker(CoreSysAttributes):
         del self.sys_docker.config.registries[hostname]
         await self.sys_docker.config.save_data()
+
+    @api_process
+    async def migrate_docker_storage_driver(self, request: web.Request) -> None:
+        """Migrate Docker storage driver."""
+        if (
+            not self.coresys.os.available
+            or not self.coresys.os.version
+            or self.coresys.os.version < AwesomeVersion("17.0.dev0")
+        ):
+            raise APINotFound(
+                "Home Assistant OS 17.0 or newer required for Docker storage driver migration"
+            )
+
+        body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
+
+        await self.sys_dbus.agent.system.migrate_docker_storage_driver(
+            body[ATTR_STORAGE_DRIVER]
+        )
+
+        _LOGGER.info("Host system reboot required to apply Docker storage migration")
+        self.sys_resolution.create_issue(
+            IssueType.REBOOT_REQUIRED,
+            ContextType.SYSTEM,
+            suggestions=[SuggestionType.EXECUTE_REBOOT],
+        )

View File

@@ -206,6 +206,7 @@ class APIHost(CoreSysAttributes):
        identifier: str | None = None,
        follow: bool = False,
        latest: bool = False,
+        no_colors: bool = False,
    ) -> web.StreamResponse:
        """Return systemd-journald logs."""
        log_formatter = LogFormatter.PLAIN
@@ -251,6 +252,9 @@ class APIHost(CoreSysAttributes):
        if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
            log_formatter = LogFormatter.VERBOSE

+        if "no_colors" in request.query:
+            no_colors = True
+
        if "lines" in request.query:
            lines = request.query.get("lines", DEFAULT_LINES)
            try:
@@ -280,7 +284,9 @@ class APIHost(CoreSysAttributes):
            response = web.StreamResponse()
            response.content_type = CONTENT_TYPE_TEXT
            headers_returned = False
-            async for cursor, line in journal_logs_reader(resp, log_formatter):
+            async for cursor, line in journal_logs_reader(
+                resp, log_formatter, no_colors
+            ):
                try:
                    if not headers_returned:
                        if cursor:
@@ -318,9 +324,12 @@ class APIHost(CoreSysAttributes):
        identifier: str | None = None,
        follow: bool = False,
        latest: bool = False,
+        no_colors: bool = False,
    ) -> web.StreamResponse:
        """Return systemd-journald logs. Wrapped as standard API handler."""
-        return await self.advanced_logs_handler(request, identifier, follow, latest)
+        return await self.advanced_logs_handler(
+            request, identifier, follow, latest, no_colors
+        )

    @api_process
    async def disk_usage(self, request: web.Request) -> dict:
@@ -334,10 +343,14 @@ class APIHost(CoreSysAttributes):
        disk = self.sys_hardware.disk
-        total, used, _ = await self.sys_run_in_executor(
+        total, _, free = await self.sys_run_in_executor(
            disk.disk_usage, self.sys_config.path_supervisor
        )
+        # Calculate used by subtracting free makes sure we include reserved space
+        # in used space reporting.
+        used = total - free

        known_paths = await self.sys_run_in_executor(
            disk.get_dir_sizes,
            {
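The `used = total - free` derivation above can be checked directly with the standard library's `shutil.disk_usage`, which returns the same `(total, used, free)` triple the Supervisor helper wraps. This is an illustrative sketch, not Supervisor code; the size of the reserved-space gap depends on the filesystem.

```python
import shutil

# disk_usage returns (total, used, free). On filesystems with reserved
# blocks (e.g. ext4's ~5% root reservation), used + free < total.
total, used, free = shutil.disk_usage("/")

# Deriving used as total - free folds the reserved space into "used",
# which better reflects how full the disk is for capacity reporting.
used_including_reserved = total - free
print(used_including_reserved >= used)  # True
```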

View File

@@ -253,18 +253,28 @@ class APIIngress(CoreSysAttributes):
            skip_auto_headers={hdrs.CONTENT_TYPE},
        ) as result:
            headers = _response_header(result)
            # Avoid parsing content_type in simple cases for better performance
            if maybe_content_type := result.headers.get(hdrs.CONTENT_TYPE):
                content_type = (maybe_content_type.partition(";"))[0].strip()
            else:
                content_type = result.content_type

+            # Empty body responses (304, 204, HEAD, etc.) should not be streamed,
+            # otherwise aiohttp < 3.9.0 may generate an invalid "0\r\n\r\n" chunk
+            # This also avoids setting content_type for empty responses.
+            if must_be_empty_body(request.method, result.status):
+                # If upstream contains content-type, preserve it (e.g. for HEAD requests)
+                if maybe_content_type:
+                    headers[hdrs.CONTENT_TYPE] = content_type
+                return web.Response(
+                    headers=headers,
+                    status=result.status,
+                )

            # Simple request
            if (
-                # empty body responses should not be streamed,
-                # otherwise aiohttp < 3.9.0 may generate
-                # an invalid "0\r\n\r\n" chunk instead of an empty response.
-                must_be_empty_body(request.method, result.status)
-                or hdrs.CONTENT_LENGTH in result.headers
+                hdrs.CONTENT_LENGTH in result.headers
                and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
            ):
                # Return Response
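The `must_be_empty_body` helper used above ships with aiohttp. A simplified stdlib-only sketch of the underlying rule (the real helper also covers CONNECT responses) shows why 304/204/HEAD responses must not be streamed:

```python
def must_be_empty_body(method: str, status: int) -> bool:
    # Simplified sketch of aiohttp's helper: per RFC 9110, HEAD responses
    # and 1xx/204/304 statuses must not carry a message body, so emitting
    # chunked framing for them would produce an invalid response.
    return method.upper() == "HEAD" or status in (204, 304) or 100 <= status < 200

print(must_be_empty_body("HEAD", 200))  # True
print(must_be_empty_body("GET", 304))   # True
print(must_be_empty_body("GET", 200))   # False
```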
# Return Response # Return Response

View File

@@ -1,24 +1,20 @@
"""Init file for Supervisor Security RESTful API."""

-import asyncio
-import logging
from typing import Any

from aiohttp import web
-import attr
import voluptuous as vol

-from ..const import ATTR_CONTENT_TRUST, ATTR_FORCE_SECURITY, ATTR_PWNED
+from supervisor.exceptions import APIGone
+
+from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED
from ..coresys import CoreSysAttributes
from .utils import api_process, api_validate

-_LOGGER: logging.Logger = logging.getLogger(__name__)

# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema(
    {
        vol.Optional(ATTR_PWNED): vol.Boolean(),
-        vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
        vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
    }
)
@@ -31,7 +27,6 @@ class APISecurity(CoreSysAttributes):
    async def info(self, request: web.Request) -> dict[str, Any]:
        """Return Security information."""
        return {
-            ATTR_CONTENT_TRUST: self.sys_security.content_trust,
            ATTR_PWNED: self.sys_security.pwned,
            ATTR_FORCE_SECURITY: self.sys_security.force,
        }
@@ -43,8 +38,6 @@ class APISecurity(CoreSysAttributes):
        if ATTR_PWNED in body:
            self.sys_security.pwned = body[ATTR_PWNED]
-        if ATTR_CONTENT_TRUST in body:
-            self.sys_security.content_trust = body[ATTR_CONTENT_TRUST]
        if ATTR_FORCE_SECURITY in body:
            self.sys_security.force = body[ATTR_FORCE_SECURITY]
@@ -54,6 +47,9 @@ class APISecurity(CoreSysAttributes):
    @api_process
    async def integrity_check(self, request: web.Request) -> dict[str, Any]:
-        """Run backend integrity check."""
-        result = await asyncio.shield(self.sys_security.integrity_check())
-        return attr.asdict(result)
+        """Run backend integrity check.
+
+        CodeNotary integrity checking has been removed. This endpoint now returns
+        an error indicating the feature is gone.
+        """
+        raise APIGone("Integrity check feature has been removed.")

View File

@@ -16,14 +16,12 @@ from ..const import (
    ATTR_BLK_READ,
    ATTR_BLK_WRITE,
    ATTR_CHANNEL,
-    ATTR_CONTENT_TRUST,
    ATTR_COUNTRY,
    ATTR_CPU_PERCENT,
    ATTR_DEBUG,
    ATTR_DEBUG_BLOCK,
    ATTR_DETECT_BLOCKING_IO,
    ATTR_DIAGNOSTICS,
-    ATTR_FORCE_SECURITY,
    ATTR_HEALTHY,
    ATTR_ICON,
    ATTR_IP_ADDRESS,
@@ -69,8 +67,6 @@ SCHEMA_OPTIONS = vol.Schema(
        vol.Optional(ATTR_DEBUG): vol.Boolean(),
        vol.Optional(ATTR_DEBUG_BLOCK): vol.Boolean(),
        vol.Optional(ATTR_DIAGNOSTICS): vol.Boolean(),
-        vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
-        vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
        vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
        vol.Optional(ATTR_DETECT_BLOCKING_IO): vol.Coerce(DetectBlockingIO),
        vol.Optional(ATTR_COUNTRY): str,

View File

@@ -63,12 +63,10 @@ def json_loads(data: Any) -> dict[str, Any]:
def api_process(method):
    """Wrap function with true/false calls to rest api."""

-    async def wrap_api(
-        api: CoreSysAttributes, *args, **kwargs
-    ) -> web.Response | web.StreamResponse:
+    async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
        """Return API information."""
        try:
-            answer = await method(api, *args, **kwargs)
+            answer = await method(*args, **kwargs)
        except BackupFileNotFoundError as err:
            return api_return_error(err, status=404)
        except APIError as err:
@@ -109,12 +107,10 @@ def api_process_raw(content, *, error_type=None):
    def wrap_method(method):
        """Wrap function with raw output to rest api."""

-        async def wrap_api(
-            api: CoreSysAttributes, *args, **kwargs
-        ) -> web.Response | web.StreamResponse:
+        async def wrap_api(*args, **kwargs) -> web.Response | web.StreamResponse:
            """Return api information."""
            try:
-                msg_data = await method(api, *args, **kwargs)
+                msg_data = await method(*args, **kwargs)
            except APIError as err:
                return api_return_error(
                    err,
@@ -151,7 +147,7 @@ def api_return_error(
    if check_exception_chain(error, DockerAPIError):
        message = format_message(message)
    if not message:
-        message = "Unknown error, see supervisor"
+        message = "Unknown error, see Supervisor logs (check with 'ha supervisor logs')"

    match error_type:
        case const.CONTENT_TYPE_TEXT:
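The decorator change above drops the explicit `api` first parameter in favor of plain `*args`, so the wrapper no longer cares whether it decorates a bound method or a free function. A minimal stand-alone sketch of the wrapping pattern (the `APIError` class and envelope shape here are simplified stand-ins, not Supervisor's real types):

```python
import asyncio
import functools


class APIError(Exception):
    """Simplified stand-in for Supervisor's APIError."""


def api_process(method):
    # Wrap an async handler: a normal return becomes an "ok" envelope,
    # a raised APIError becomes an error envelope instead of propagating.
    @functools.wraps(method)
    async def wrap_api(*args, **kwargs):
        try:
            answer = await method(*args, **kwargs)
        except APIError as err:
            return {"result": "error", "message": str(err)}
        return {"result": "ok", "data": answer}

    return wrap_api


@api_process
async def info():
    return {"version": "stub"}


print(asyncio.run(info()))  # {'result': 'ok', 'data': {'version': 'stub'}}
```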

View File

@@ -105,7 +105,6 @@ async def initialize_coresys() -> CoreSys:
    if coresys.dev:
        coresys.updater.channel = UpdateChannel.DEV
-        coresys.security.content_trust = False

    # Convert datetime
    logging.Formatter.converter = lambda *args: coresys.now().timetuple()

View File

@@ -2,6 +2,7 @@
from __future__ import annotations

+from asyncio import Task
from collections.abc import Callable, Coroutine
import logging
from typing import Any
@@ -38,11 +39,13 @@ class Bus(CoreSysAttributes):
        self._listeners.setdefault(event, []).append(listener)
        return listener

-    def fire_event(self, event: BusEvent, reference: Any) -> None:
+    def fire_event(self, event: BusEvent, reference: Any) -> list[Task]:
        """Fire an event to the bus."""
        _LOGGER.debug("Fire event '%s' with '%s'", event, reference)
+        tasks: list[Task] = []
        for listener in self._listeners.get(event, []):
-            self.sys_create_task(listener.callback(reference))
+            tasks.append(self.sys_create_task(listener.callback(reference)))
+        return tasks

    def remove_listener(self, listener: EventListener) -> None:
        """Unregister an listener."""
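Returning the created tasks from `fire_event` lets callers await listener completion instead of fire-and-forget scheduling. A self-contained sketch of this pattern with plain `asyncio` (string events stand in for `BusEvent`):

```python
import asyncio
from collections.abc import Callable, Coroutine
from typing import Any


class Bus:
    def __init__(self) -> None:
        self._listeners: dict[str, list[Callable[[Any], Coroutine]]] = {}

    def register(self, event: str, callback: Callable[[Any], Coroutine]) -> None:
        self._listeners.setdefault(event, []).append(callback)

    def fire_event(self, event: str, reference: Any) -> list[asyncio.Task]:
        # Returning the tasks allows callers to gather() them and know
        # when every listener has actually finished handling the event.
        return [
            asyncio.create_task(cb(reference))
            for cb in self._listeners.get(event, [])
        ]


async def main() -> list[str]:
    bus = Bus()
    results: list[str] = []

    async def listener(ref: str) -> None:
        results.append(ref)

    bus.register("docker", listener)
    await asyncio.gather(*bus.fire_event("docker", "image-pull"))
    return results


print(asyncio.run(main()))  # ['image-pull']
```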

View File

@@ -328,6 +328,7 @@ ATTR_STATE = "state"
ATTR_STATIC = "static"
ATTR_STDIN = "stdin"
ATTR_STORAGE = "storage"
+ATTR_STORAGE_DRIVER = "storage_driver"
ATTR_SUGGESTIONS = "suggestions"
ATTR_SUPERVISOR = "supervisor"
ATTR_SUPERVISOR_INTERNET = "supervisor_internet"

View File

@@ -9,6 +9,7 @@ from datetime import UTC, datetime, tzinfo
from functools import partial
import logging
import os
+import time
from types import MappingProxyType
from typing import TYPE_CHECKING, Any, Self, TypeVar
@@ -655,8 +656,14 @@ class CoreSys:
        if kwargs:
            funct = partial(funct, **kwargs)

+        # Convert datetime to event loop time base
+        # If datetime is in the past, delay will be negative and call_at will
+        # schedule the call as soon as possible.
+        delay = when.timestamp() - time.time()
+        loop_time = self.loop.time() + delay
+
        return self.loop.call_at(
-            when.timestamp(), funct, *args, context=self._create_context()
+            loop_time, funct, *args, context=self._create_context()
        )

View File

@@ -15,3 +15,8 @@ class System(DBusInterface):
    async def schedule_wipe_device(self) -> bool:
        """Schedule a factory reset on next system boot."""
        return await self.connected_dbus.System.call("schedule_wipe_device")
+
+    @dbus_connected
+    async def migrate_docker_storage_driver(self, backend: str) -> None:
+        """Migrate Docker storage driver."""
+        await self.connected_dbus.System.call("migrate_docker_storage_driver", backend)

View File

@@ -306,6 +306,8 @@ class DeviceType(IntEnum):
    VLAN = 11
    TUN = 16
    VETH = 20
+    WIREGUARD = 29
+    LOOPBACK = 32


class WirelessMethodType(IntEnum):

View File

@@ -115,7 +115,7 @@ class DBusManager(CoreSysAttributes):
    async def load(self) -> None:
        """Connect interfaces to D-Bus."""
-        if not SOCKET_DBUS.exists():
+        if not await self.sys_run_in_executor(SOCKET_DBUS.exists):
            _LOGGER.error(
                "No D-Bus support on Host. Disabled any kind of host control!"
            )

View File

@@ -134,9 +134,10 @@ class NetworkManager(DBusInterfaceProxy):
    async def check_connectivity(self, *, force: bool = False) -> ConnectivityState:
        """Check the connectivity of the host."""
        if force:
-            return await self.connected_dbus.call("check_connectivity")
-        else:
-            return await self.connected_dbus.get("connectivity")
+            return ConnectivityState(
+                await self.connected_dbus.call("check_connectivity")
+            )
+        return ConnectivityState(await self.connected_dbus.get("connectivity"))

    async def connect(self, bus: MessageBus) -> None:
        """Connect to system's D-Bus."""

View File

@@ -69,7 +69,7 @@ class NetworkConnection(DBusInterfaceProxy):
    @dbus_property
    def state(self) -> ConnectionStateType:
        """Return the state of the connection."""
-        return self.properties[DBUS_ATTR_STATE]
+        return ConnectionStateType(self.properties[DBUS_ATTR_STATE])

    @property
    def state_flags(self) -> set[ConnectionStateFlags]:

View File

@@ -1,5 +1,6 @@
"""NetworkInterface object for Network Manager."""

+import logging
from typing import Any

from dbus_fast.aio.message_bus import MessageBus
@@ -23,6 +24,8 @@ from .connection import NetworkConnection
from .setting import NetworkSetting
from .wireless import NetworkWireless

+_LOGGER: logging.Logger = logging.getLogger(__name__)


class NetworkInterface(DBusInterfaceProxy):
    """NetworkInterface object represents Network Manager Device objects.
@@ -57,7 +60,15 @@ class NetworkInterface(DBusInterfaceProxy):
    @dbus_property
    def type(self) -> DeviceType:
        """Return interface type."""
-        return self.properties[DBUS_ATTR_DEVICE_TYPE]
+        try:
+            return DeviceType(self.properties[DBUS_ATTR_DEVICE_TYPE])
+        except ValueError:
+            _LOGGER.debug(
+                "Unknown device type %s for %s, treating as UNKNOWN",
+                self.properties[DBUS_ATTR_DEVICE_TYPE],
+                self.object_path,
+            )
+            return DeviceType.UNKNOWN

    @property
    @dbus_property

View File

@@ -75,7 +75,7 @@ class Resolved(DBusInterfaceProxy):
    @dbus_property
    def current_dns_server(
        self,
-    ) -> list[tuple[int, DNSAddressFamily, bytes]] | None:
+    ) -> tuple[int, DNSAddressFamily, bytes] | None:
        """Return current DNS server."""
        return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER]
@@ -83,7 +83,7 @@ class Resolved(DBusInterfaceProxy):
    @dbus_property
    def current_dns_server_ex(
        self,
-    ) -> list[tuple[int, DNSAddressFamily, bytes, int, str]] | None:
+    ) -> tuple[int, DNSAddressFamily, bytes, int, str] | None:
        """Return current DNS server including port and server name."""
        return self.properties[DBUS_ATTR_CURRENT_DNS_SERVER_EX]

View File

@@ -70,7 +70,7 @@ class SystemdUnit(DBusInterface):
    @dbus_connected
    async def get_active_state(self) -> UnitActiveState:
        """Get active state of the unit."""
-        return await self.connected_dbus.Unit.get("active_state")
+        return UnitActiveState(await self.connected_dbus.Unit.get("active_state"))

    @dbus_connected
    def properties_changed(self) -> DBusSignalWrapper:

View File

@@ -9,7 +9,7 @@ from dbus_fast import Variant
from .const import EncryptType, EraseMode


-def udisks2_bytes_to_path(path_bytes: bytearray) -> Path:
+def udisks2_bytes_to_path(path_bytes: bytes) -> Path:
    """Convert bytes to path object without null character on end."""
    if path_bytes and path_bytes[-1] == 0:
        return Path(path_bytes[:-1].decode())
@@ -73,7 +73,7 @@ FormatOptionsDataType = TypedDict(
    {
        "label": NotRequired[str],
        "take-ownership": NotRequired[bool],
-        "encrypt.passphrase": NotRequired[bytearray],
+        "encrypt.passphrase": NotRequired[bytes],
        "encrypt.type": NotRequired[str],
        "erase": NotRequired[str],
        "update-partition-type": NotRequired[bool],

View File

@@ -7,8 +7,10 @@ from ipaddress import IPv4Address
import logging
import os
from pathlib import Path
+import tempfile
from typing import TYPE_CHECKING, cast

+import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
@@ -704,12 +706,38 @@ class DockerAddon(DockerInterface):
            with suppress(docker.errors.NotFound):
                self.sys_docker.containers.get(builder_name).remove(force=True, v=True)

-            result = self.sys_docker.run_command(
-                ADDON_BUILDER_IMAGE,
-                version=builder_version_tag,
-                name=builder_name,
-                **build_env.get_docker_args(version, addon_image_tag),
-            )
+            # Generate Docker config with registry credentials for base image if needed
+            docker_config_path: Path | None = None
+            docker_config_content = build_env.get_docker_config_json()
+            temp_dir: tempfile.TemporaryDirectory | None = None
+
+            try:
+                if docker_config_content:
+                    # Create temporary directory for docker config
+                    temp_dir = tempfile.TemporaryDirectory(
+                        prefix="hassio_build_", dir=self.sys_config.path_tmp
+                    )
+                    docker_config_path = Path(temp_dir.name) / "config.json"
+                    docker_config_path.write_text(
+                        docker_config_content, encoding="utf-8"
+                    )
+                    _LOGGER.debug(
+                        "Created temporary Docker config for build at %s",
+                        docker_config_path,
+                    )
+
+                result = self.sys_docker.run_command(
+                    ADDON_BUILDER_IMAGE,
+                    version=builder_version_tag,
+                    name=builder_name,
+                    **build_env.get_docker_args(
+                        version, addon_image_tag, docker_config_path
+                    ),
+                )
+            finally:
+                # Clean up temporary directory
+                if temp_dir:
+                    temp_dir.cleanup()

            logs = result.output.decode("utf-8")
@@ -717,19 +745,21 @@ class DockerAddon(DockerInterface):
                error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
                raise docker.errors.DockerException(error_message)

-            addon_image = self.sys_docker.images.get(addon_image_tag)
-            return addon_image, logs
+            return addon_image_tag, logs

        try:
-            docker_image, log = await self.sys_run_in_executor(build_image)
+            addon_image_tag, log = await self.sys_run_in_executor(build_image)

            _LOGGER.debug("Build %s:%s done: %s", self.image, version, log)

            # Update meta data
-            self._meta = docker_image.attrs
+            self._meta = await self.sys_docker.images.inspect(addon_image_tag)

-        except (docker.errors.DockerException, requests.RequestException) as err:
+        except (
+            docker.errors.DockerException,
+            requests.RequestException,
+            aiodocker.DockerError,
+        ) as err:
            _LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
            raise DockerError() from err
@@ -751,11 +781,8 @@ class DockerAddon(DockerInterface):
        )

    async def import_image(self, tar_file: Path) -> None:
        """Import a tar file as image."""
-        docker_image = await self.sys_run_in_executor(
-            self.sys_docker.import_image, tar_file
-        )
-        if docker_image:
-            self._meta = docker_image.attrs
+        if docker_image := await self.sys_docker.import_image(tar_file):
+            self._meta = docker_image

            _LOGGER.info("Importing image %s and version %s", tar_file, self.version)

            with suppress(DockerError):
@@ -769,17 +796,21 @@ class DockerAddon(DockerInterface):
        version: AwesomeVersion | None = None,
    ) -> None:
        """Check if old version exists and cleanup other versions of image not in use."""
-        await self.sys_run_in_executor(
-            self.sys_docker.cleanup_old_images,
-            (image := image or self.image),
-            version or self.version,
+        if not (use_image := image or self.image):
+            raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
+        if not (use_version := version or self.version):
+            raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
+
+        await self.sys_docker.cleanup_old_images(
+            use_image,
+            use_version,
            {old_image} if old_image else None,
            keep_images={
                f"{addon.image}:{addon.version}"
                for addon in self.sys_addons.installed
                if addon.slug != self.addon.slug
                and addon.image
-                and addon.image in {old_image, image}
+                and addon.image in {old_image, use_image}
            },
        )
@@ -846,16 +877,6 @@ class DockerAddon(DockerInterface):
        ):
            self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)

-    async def _validate_trust(self, image_id: str) -> None:
-        """Validate trust of content."""
-        if not self.addon.signed:
-            return
-
-        checksum = image_id.partition(":")[2]
-        return await self.sys_security.verify_content(
-            cast(str, self.addon.codenotary), checksum
-        )
-
    @Job(
        name="docker_addon_hardware_events",
        conditions=[JobCondition.OS_AGENT],

View File

@@ -15,6 +15,12 @@ from ..const import MACHINE_ID
RE_RETRYING_DOWNLOAD_STATUS = re.compile(r"Retrying in \d+ seconds?")

+# Docker Hub registry identifier
+DOCKER_HUB = "hub.docker.com"
+
+# Regex to match images with a registry host (e.g., ghcr.io/org/image)
+IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")


class Capabilities(StrEnum):
    """Linux Capabilities."""
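The `IMAGE_WITH_HOST` pattern moved here distinguishes image references that name an explicit registry host from bare Docker Hub references. A stand-alone sketch of how such a lookup can work (`registry_for_image` is an illustrative helper, not the Supervisor's `get_registry_for_image` implementation):

```python
import re

DOCKER_HUB = "hub.docker.com"
# A registry host is one or more dot-separated labels ending in a TLD,
# followed by a slash and the repository path.
IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")


def registry_for_image(image: str) -> str:
    # Images without an explicit registry host default to Docker Hub.
    match = IMAGE_WITH_HOST.match(image)
    return match.group(1) if match else DOCKER_HUB


print(registry_for_image("ghcr.io/home-assistant/amd64-supervisor"))  # ghcr.io
print(registry_for_image("homeassistant/home-assistant"))  # hub.docker.com
```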

View File

@@ -1,11 +1,10 @@
"""Init file for Supervisor Docker object."""

-from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import re

-from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
+from awesomeversion import AwesomeVersion
from docker.types import Mount

from ..const import LABEL_MACHINE
@@ -236,21 +235,10 @@ class DockerHomeAssistant(DockerInterface):
            environment={ENV_TIME: self.sys_timezone},
        )

-    def is_initialize(self) -> Awaitable[bool]:
+    async def is_initialize(self) -> bool:
        """Return True if Docker container exists."""
-        return self.sys_run_in_executor(
-            self.sys_docker.container_is_initialized,
-            self.name,
-            self.image,
-            self.sys_homeassistant.version,
+        if not self.sys_homeassistant.version:
+            return False
+        return await self.sys_docker.container_is_initialized(
+            self.name, self.image, self.sys_homeassistant.version
        )
-
-    async def _validate_trust(self, image_id: str) -> None:
-        """Validate trust of content."""
-        try:
-            if self.version in {None, LANDINGPAGE} or self.version < _VERIFY_TRUST:
-                return
-        except AwesomeVersionCompareException:
-            return
-
-        await super()._validate_trust(image_id)

View File

@@ -6,17 +6,17 @@ from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Awaitable
from contextlib import suppress
+from http import HTTPStatus
import logging
-import re
from time import time
from typing import Any, cast
from uuid import uuid4

+import aiodocker
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
-from docker.models.images import Image
import requests

from ..bus import EventListener
@@ -31,15 +31,13 @@ from ..const import (
)
from ..coresys import CoreSys
from ..exceptions import (
-    CodeNotaryError,
-    CodeNotaryUntrusted,
    DockerAPIError,
    DockerError,
+    DockerHubRateLimitExceeded,
    DockerJobError,
    DockerLogOutOfOrder,
    DockerNotFound,
    DockerRequestError,
-    DockerTrustError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
@@ -47,16 +45,13 @@ from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
-from .const import ContainerState, PullImageLayerStage, RestartPolicy
+from .const import DOCKER_HUB, ContainerState, PullImageLayerStage, RestartPolicy
from .manager import CommandReturn, PullLogEntry
from .monitor import DockerContainerStateEvent
from .stats import DockerStats

_LOGGER: logging.Logger = logging.getLogger(__name__)

-IMAGE_WITH_HOST = re.compile(r"^((?:[a-z0-9]+(?:-[a-z0-9]+)*\.)+[a-z]{2,})\/.+")
-DOCKER_HUB = "hub.docker.com"

MAP_ARCH: dict[CpuArch | str, str] = {
    CpuArch.ARMV7: "linux/arm/v7",
    CpuArch.ARMHF: "linux/arm/v6",
@@ -181,25 +176,16 @@ class DockerInterface(JobGroup, ABC):
        return self.meta_config.get("Healthcheck")

    def _get_credentials(self, image: str) -> dict:
-        """Return a dictionay with credentials for docker login."""
-        registry = None
+        """Return a dictionary with credentials for docker login."""
        credentials = {}
-        matcher = IMAGE_WITH_HOST.match(image)
+        registry = self.sys_docker.config.get_registry_for_image(image)

-        # Custom registry
-        if matcher:
-            if matcher.group(1) in self.sys_docker.config.registries:
-                registry = matcher.group(1)
-                credentials[ATTR_REGISTRY] = registry
-
-        # If no match assume "dockerhub" as registry
-        elif DOCKER_HUB in self.sys_docker.config.registries:
-            registry = DOCKER_HUB

        if registry:
            stored = self.sys_docker.config.registries[registry]
            credentials[ATTR_USERNAME] = stored[ATTR_USERNAME]
            credentials[ATTR_PASSWORD] = stored[ATTR_PASSWORD]
+            if registry != DOCKER_HUB:
+                credentials[ATTR_REGISTRY] = registry

            _LOGGER.debug(
                "Logging in to %s as %s",
@@ -209,18 +195,7 @@ class DockerInterface(JobGroup, ABC):
        return credentials

-    async def _docker_login(self, image: str) -> None:
-        """Try to log in to the registry if there are credentials available."""
-        if not self.sys_docker.config.registries:
-            return
-
-        credentials = self._get_credentials(image)
-        if not credentials:
-            return
-
-        await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)
-
-    def _process_pull_image_log(
+    def _process_pull_image_log(  # noqa: C901
        self, install_job_id: str, reference: PullLogEntry
    ) -> None:
        """Process events fired from a docker while pulling an image, filtered to a given job id."""
@@ -251,28 +226,16 @@ class DockerInterface(JobGroup, ABC):
job = j job = j
break break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one # There should no longer be any real risk of logs arriving out of order.
# However, tests with very small images have shown that Docker sometimes
# skips stages in the log, so keep this as a safety check for a null job
if not job: if not job:
raise DockerLogOutOfOrder( raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping", f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug, _LOGGER.debug,
) )
# Hopefully these come in order but if they sometimes get out of sync, avoid accidentally going backwards # For progress calculation we assume downloading is 70% of the time, extracting is 30%, and other stages negligible
# If it happens a lot though we may need to reconsider the value of this feature
if job.done:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was done, skipping",
_LOGGER.debug,
)
if job.stage and stage < PullImageLayerStage.from_status(job.stage):
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was already on stage {job.stage}, skipping",
_LOGGER.debug,
)
# For progress calcuation we assume downloading and extracting are each 50% of the time and others stages negligible
progress = job.progress progress = job.progress
match stage: match stage:
case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING: case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING:
@@ -281,22 +244,26 @@ class DockerInterface(JobGroup, ABC):
and reference.progress_detail.current and reference.progress_detail.current
and reference.progress_detail.total and reference.progress_detail.total
): ):
progress = 50 * ( progress = (
reference.progress_detail.current reference.progress_detail.current
/ reference.progress_detail.total / reference.progress_detail.total
) )
if stage == PullImageLayerStage.EXTRACTING: if stage == PullImageLayerStage.DOWNLOADING:
progress += 50 progress = 70 * progress
else:
progress = 70 + 30 * progress
case ( case (
PullImageLayerStage.VERIFYING_CHECKSUM PullImageLayerStage.VERIFYING_CHECKSUM
| PullImageLayerStage.DOWNLOAD_COMPLETE | PullImageLayerStage.DOWNLOAD_COMPLETE
): ):
progress = 50 progress = 70
case PullImageLayerStage.PULL_COMPLETE: case PullImageLayerStage.PULL_COMPLETE:
progress = 100 progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD: case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0 progress = 0
# No real risk of getting things out of order in the current implementation,
# but keep this check in case a future change trips us up.
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress: if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder( raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping", f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
@@ -311,6 +278,8 @@ class DockerInterface(JobGroup, ABC):
if ( if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING} stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail and reference.progress_detail
and reference.progress_detail.current is not None
and reference.progress_detail.total is not None
): ):
job.update( job.update(
progress=progress, progress=progress,
@@ -321,13 +290,17 @@ class DockerInterface(JobGroup, ABC):
}, },
) )
else: else:
# If we reach DOWNLOAD_COMPLETE without ever having set extra (small layers that skip
# the downloading phase), set a minimal extra so aggregate progress calculation can proceed
extra = job.extra
if stage == PullImageLayerStage.DOWNLOAD_COMPLETE and not job.extra:
extra = {"current": 1, "total": 1}
job.update( job.update(
progress=progress, progress=progress,
stage=stage.status, stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE, done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None extra=None if stage == PullImageLayerStage.RETRYING_DOWNLOAD else extra,
if stage == PullImageLayerStage.RETRYING_DOWNLOAD
else job.extra,
) )
# Once we have received a progress update for every child job, start to set status of the main one # Once we have received a progress update for every child job, start to set status of the main one
@@ -339,24 +312,44 @@ class DockerInterface(JobGroup, ABC):
and job.name == "Pulling container image layer" and job.name == "Pulling container image layer"
] ]
# First set the total bytes to be downloaded/extracted on the main job # Calculate total from layers that have reported size info
if not install_job.extra: # With containerd snapshotter, some layers skip "Downloading" and go directly to
total = 0 # "Download complete", so we can't wait for all layers to have extra before reporting progress
for job in layer_jobs: layers_with_extra = [
if not job.extra: job for job in layer_jobs if job.extra and job.extra.get("total")
return ]
total += job.extra["total"] if not layers_with_extra:
install_job.extra = {"total": total} return
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total # Sum up total bytes. Layers that skip downloading get placeholder extra={"current": 1, "total": 1},
# which doesn't represent actual size. Separate "real" layers from placeholders.
# Filter guarantees job.extra is not None and has "total" key
real_layers = [
job for job in layers_with_extra if cast(dict, job.extra)["total"] > 1
]
placeholder_layers = [
job for job in layers_with_extra if cast(dict, job.extra)["total"] == 1
]
# If we only have placeholder layers (no real size info yet), don't report progress
# This prevents tiny cached layers from showing inflated progress before
# the actual download sizes are known
if not real_layers:
return
total = sum(cast(dict, job.extra)["total"] for job in real_layers)
if total == 0:
return
# Update install_job.extra with current total (may increase as more layers report)
install_job.extra = {"total": total}
# Calculate progress based on layers that have real size info
# Placeholder layers (skipped downloads) count as complete but don't affect weighted progress
progress = 0.0 progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs: for job in real_layers:
if not job.extra: progress += job.progress * (cast(dict, job.extra)["total"] / total)
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage)) job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING: if job_stage < PullImageLayerStage.EXTRACTING:
@@ -367,6 +360,28 @@ class DockerInterface(JobGroup, ABC):
): ):
stage = PullImageLayerStage.EXTRACTING stage = PullImageLayerStage.EXTRACTING
# Check if any layers are still pending (no extra yet)
# If so, we're still in downloading phase even if all layers_with_extra are done
layers_pending = len(layer_jobs) - len(layers_with_extra)
if layers_pending > 0:
# Scale progress to account for unreported layers
# This prevents tiny layers that complete first from showing inflated progress
# e.g., if 2/25 layers reported at 70%, actual progress is ~70 * 2/25 = 5.6%
layers_fraction = len(layers_with_extra) / len(layer_jobs)
progress = progress * layers_fraction
if stage == PullImageLayerStage.PULL_COMPLETE:
stage = PullImageLayerStage.DOWNLOADING
# Also check if all placeholders are done but we're waiting for real layers
if placeholder_layers and stage == PullImageLayerStage.PULL_COMPLETE:
# All real layers are done, but check if placeholders are still extracting
for job in placeholder_layers:
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.PULL_COMPLETE:
stage = PullImageLayerStage.EXTRACTING
break
# Ensure progress is 100 at this point to prevent float drift # Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE: if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100 progress = 100
@@ -398,9 +413,8 @@ class DockerInterface(JobGroup, ABC):
_LOGGER.info("Downloading docker image %s with tag %s.", image, version) _LOGGER.info("Downloading docker image %s with tag %s.", image, version)
try: try:
if self.sys_docker.config.registries: # Get credentials for private registries to pass to aiodocker
# Try login if we have defined credentials credentials = self._get_credentials(image) or None
await self._docker_login(image)
curr_job_id = self.sys_jobs.current.uuid curr_job_id = self.sys_jobs.current.uuid
@@ -416,74 +430,65 @@ class DockerInterface(JobGroup, ABC):
BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_image_log BusEvent.DOCKER_IMAGE_PULL_UPDATE, process_pull_image_log
) )
# Pull new image # Pull new image, passing credentials to aiodocker
docker_image = await self.sys_run_in_executor( docker_image = await self.sys_docker.pull_image(
self.sys_docker.pull_image,
self.sys_jobs.current.uuid, self.sys_jobs.current.uuid,
image, image,
str(version), str(version),
platform=MAP_ARCH[image_arch], platform=MAP_ARCH[image_arch],
auth=credentials,
) )
# Validate content
try:
await self._validate_trust(cast(str, docker_image.id))
except CodeNotaryError:
with suppress(docker.errors.DockerException):
await self.sys_run_in_executor(
self.sys_docker.images.remove,
image=f"{image}:{version!s}",
force=True,
)
raise
# Tag latest # Tag latest
if latest: if latest:
_LOGGER.info( _LOGGER.info(
"Tagging image %s with version %s as latest", image, version "Tagging image %s with version %s as latest", image, version
) )
await self.sys_run_in_executor(docker_image.tag, image, tag="latest") await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except docker.errors.APIError as err: except docker.errors.APIError as err:
if err.status_code == 429: if err.status_code == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT, IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM, ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN], suggestions=[SuggestionType.REGISTRY_LOGIN],
) )
_LOGGER.info( raise DockerHubRateLimitExceeded(_LOGGER.error) from err
"Your IP address has made too many requests to Docker Hub which activated a rate limit. " await async_capture_exception(err)
"For more details see https://www.home-assistant.io/more-info/dockerhub-rate-limit"
)
raise DockerError( raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err ) from err
except (docker.errors.DockerException, requests.RequestException) as err: except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (
docker.errors.DockerException,
requests.RequestException,
) as err:
await async_capture_exception(err) await async_capture_exception(err)
raise DockerError( raise DockerError(
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
) from err ) from err
except CodeNotaryUntrusted as err:
raise DockerTrustError(
f"Pulled image {image}:{version!s} failed on content-trust verification!",
_LOGGER.critical,
) from err
except CodeNotaryError as err:
raise DockerTrustError(
f"Error happened on Content-Trust check for {image}:{version!s}: {err!s}",
_LOGGER.error,
) from err
finally: finally:
if listener: if listener:
self.sys_bus.remove_listener(listener) self.sys_bus.remove_listener(listener)
self._meta = docker_image.attrs self._meta = docker_image
async def exists(self) -> bool: async def exists(self) -> bool:
"""Return True if Docker image exists in local repository.""" """Return True if Docker image exists in local repository."""
with suppress(docker.errors.DockerException, requests.RequestException): with suppress(aiodocker.DockerError, requests.RequestException):
await self.sys_run_in_executor( await self.sys_docker.images.inspect(f"{self.image}:{self.version!s}")
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
return True return True
return False return False
@@ -542,11 +547,11 @@ class DockerInterface(JobGroup, ABC):
), ),
) )
with suppress(docker.errors.DockerException, requests.RequestException): with suppress(aiodocker.DockerError, requests.RequestException):
if not self._meta and self.image: if not self._meta and self.image:
self._meta = self.sys_docker.images.get( self._meta = await self.sys_docker.images.inspect(
f"{self.image}:{version!s}" f"{self.image}:{version!s}"
).attrs )
# Successful? # Successful?
if not self._meta: if not self._meta:
@@ -614,14 +619,17 @@ class DockerInterface(JobGroup, ABC):
) )
async def remove(self, *, remove_image: bool = True) -> None: async def remove(self, *, remove_image: bool = True) -> None:
"""Remove Docker images.""" """Remove Docker images."""
if not self.image or not self.version:
raise DockerError(
"Cannot determine image and/or version from metadata!", _LOGGER.error
)
# Cleanup container # Cleanup container
with suppress(DockerError): with suppress(DockerError):
await self.stop() await self.stop()
if remove_image: if remove_image:
await self.sys_run_in_executor( await self.sys_docker.remove_image(self.image, self.version)
self.sys_docker.remove_image, self.image, self.version
)
self._meta = None self._meta = None
@@ -643,18 +651,16 @@ class DockerInterface(JobGroup, ABC):
image_name = f"{expected_image}:{version!s}" image_name = f"{expected_image}:{version!s}"
if self.image == expected_image: if self.image == expected_image:
try: try:
image: Image = await self.sys_run_in_executor( image = await self.sys_docker.images.inspect(image_name)
self.sys_docker.images.get, image_name except (aiodocker.DockerError, requests.RequestException) as err:
)
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Could not get {image_name} for check due to: {err!s}", f"Could not get {image_name} for check due to: {err!s}",
_LOGGER.error, _LOGGER.error,
) from err ) from err
image_arch = f"{image.attrs['Os']}/{image.attrs['Architecture']}" image_arch = f"{image['Os']}/{image['Architecture']}"
if "Variant" in image.attrs: if "Variant" in image:
image_arch = f"{image_arch}/{image.attrs['Variant']}" image_arch = f"{image_arch}/{image['Variant']}"
# If we have an image and it's the right arch, all set # If we have an image and it's the right arch, all set
# It seems that newer Docker versions return a variant for arm64 images. # It seems that newer Docker versions return a variant for arm64 images.
@@ -716,11 +722,13 @@ class DockerInterface(JobGroup, ABC):
version: AwesomeVersion | None = None, version: AwesomeVersion | None = None,
) -> None: ) -> None:
"""Check if old version exists and cleanup.""" """Check if old version exists and cleanup."""
await self.sys_run_in_executor( if not (use_image := image or self.image):
self.sys_docker.cleanup_old_images, raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
image or self.image, if not (use_version := version or self.version):
version or self.version, raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
{old_image} if old_image else None,
await self.sys_docker.cleanup_old_images(
use_image, use_version, {old_image} if old_image else None
) )
@Job( @Job(
@@ -772,10 +780,10 @@ class DockerInterface(JobGroup, ABC):
"""Return latest version of local image.""" """Return latest version of local image."""
available_version: list[AwesomeVersion] = [] available_version: list[AwesomeVersion] = []
try: try:
for image in await self.sys_run_in_executor( for image in await self.sys_docker.images.list(
self.sys_docker.images.list, self.image filters=f'{{"reference": ["{self.image}"]}}'
): ):
for tag in image.tags: for tag in image["RepoTags"]:
version = AwesomeVersion(tag.partition(":")[2]) version = AwesomeVersion(tag.partition(":")[2])
if version.strategy == AwesomeVersionStrategy.UNKNOWN: if version.strategy == AwesomeVersionStrategy.UNKNOWN:
continue continue
@@ -784,7 +792,7 @@ class DockerInterface(JobGroup, ABC):
if not available_version: if not available_version:
raise ValueError() raise ValueError()
except (docker.errors.DockerException, ValueError) as err: except (aiodocker.DockerError, ValueError) as err:
raise DockerNotFound( raise DockerNotFound(
f"No version found for {self.image}", _LOGGER.info f"No version found for {self.image}", _LOGGER.info
) from err ) from err
@@ -809,24 +817,3 @@ class DockerInterface(JobGroup, ABC):
return self.sys_run_in_executor( return self.sys_run_in_executor(
self.sys_docker.container_run_inside, self.name, command self.sys_docker.container_run_inside, self.name, command
) )
async def _validate_trust(self, image_id: str) -> None:
"""Validate trust of content."""
checksum = image_id.partition(":")[2]
return await self.sys_security.verify_own_content(checksum)
@Job(
name="docker_interface_check_trust",
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def check_trust(self) -> None:
"""Check trust of exists Docker image."""
try:
image = await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
except (docker.errors.DockerException, requests.RequestException):
return
await self._validate_trust(cast(str, image.id))


@@ -6,20 +6,24 @@ import asyncio
from contextlib import suppress from contextlib import suppress
from dataclasses import dataclass from dataclasses import dataclass
from functools import partial from functools import partial
from http import HTTPStatus
from ipaddress import IPv4Address from ipaddress import IPv4Address
import json
import logging import logging
import os import os
from pathlib import Path from pathlib import Path
import re
from typing import Any, Final, Self, cast from typing import Any, Final, Self, cast
import aiodocker
from aiodocker.images import DockerImages
from aiohttp import ClientSession, ClientTimeout, UnixConnector
import attr import attr
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors from docker import errors as docker_errors
from docker.api.client import APIClient from docker.api.client import APIClient
from docker.client import DockerClient from docker.client import DockerClient
from docker.errors import DockerException, ImageNotFound, NotFound
from docker.models.containers import Container, ContainerCollection from docker.models.containers import Container, ContainerCollection
from docker.models.images import Image, ImageCollection
from docker.models.networks import Network from docker.models.networks import Network
from docker.types.daemon import CancellableStream from docker.types.daemon import CancellableStream
import requests import requests
@@ -45,7 +49,7 @@ from ..exceptions import (
) )
from ..utils.common import FileConfiguration from ..utils.common import FileConfiguration
from ..validate import SCHEMA_DOCKER_CONFIG from ..validate import SCHEMA_DOCKER_CONFIG
from .const import LABEL_MANAGED from .const import DOCKER_HUB, IMAGE_WITH_HOST, LABEL_MANAGED
from .monitor import DockerMonitor from .monitor import DockerMonitor
from .network import DockerNetwork from .network import DockerNetwork
@@ -53,6 +57,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0") MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0")
DOCKER_NETWORK_HOST: Final = "host" DOCKER_NETWORK_HOST: Final = "host"
RE_IMPORT_IMAGE_STREAM = re.compile(r"(^Loaded image ID: |^Loaded image: )(.+)$")
@attr.s(frozen=True) @attr.s(frozen=True)
@@ -71,15 +76,25 @@ class DockerInfo:
storage: str = attr.ib() storage: str = attr.ib()
logging: str = attr.ib() logging: str = attr.ib()
cgroup: str = attr.ib() cgroup: str = attr.ib()
support_cpu_realtime: bool = attr.ib()
@staticmethod @staticmethod
def new(data: dict[str, Any]): async def new(data: dict[str, Any]) -> DockerInfo:
"""Create a object from docker info.""" """Create a object from docker info."""
# Check if CONFIG_RT_GROUP_SCHED is loaded (blocking I/O in executor)
cpu_rt_file_exists = await asyncio.get_running_loop().run_in_executor(
None, Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists
)
cpu_rt_supported = (
cpu_rt_file_exists and os.environ.get(ENV_SUPERVISOR_CPU_RT) == "1"
)
return DockerInfo( return DockerInfo(
AwesomeVersion(data.get("ServerVersion", "0.0.0")), AwesomeVersion(data.get("ServerVersion", "0.0.0")),
data.get("Driver", "unknown"), data.get("Driver", "unknown"),
data.get("LoggingDriver", "unknown"), data.get("LoggingDriver", "unknown"),
data.get("CgroupVersion", "1"), data.get("CgroupVersion", "1"),
cpu_rt_supported,
) )
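The refactor above moves the blocking `Path.exists` check off the event loop and into the default executor. The pattern, as a minimal sketch (the env-var name `SUPERVISOR_CPU_RT` is an assumption for the `ENV_SUPERVISOR_CPU_RT` constant):

```python
import asyncio
import os
from pathlib import Path

async def cpu_rt_supported(env_flag: str = "SUPERVISOR_CPU_RT") -> bool:
    """Check for CONFIG_RT_GROUP_SCHED support without blocking the event loop."""
    # Path.exists touches the filesystem, so run it in the default executor
    exists = await asyncio.get_running_loop().run_in_executor(
        None, Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists
    )
    return exists and os.environ.get(env_flag) == "1"
```

Doing the check once at construction time (and storing the result as a field) also avoids repeating the filesystem hit on every property access.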
@property @property
@@ -90,23 +105,21 @@ class DockerInfo:
except AwesomeVersionCompareException: except AwesomeVersionCompareException:
return False return False
@property
def support_cpu_realtime(self) -> bool:
"""Return true, if CONFIG_RT_GROUP_SCHED is loaded."""
if not Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists():
return False
return bool(os.environ.get(ENV_SUPERVISOR_CPU_RT) == "1")
@dataclass(frozen=True, slots=True) @dataclass(frozen=True, slots=True)
class PullProgressDetail: class PullProgressDetail:
"""Progress detail information for pull. """Progress detail information for pull.
Documentation is lacking, but both of these seem to be in bytes when populated. Documentation is lacking, but both of these seem to be in bytes when populated.
Containerd snapshotter update: when that feature is enabled, this information
becomes useless to us while extracting; it simply reports elapsed time via
current and units.
""" """
current: int | None = None current: int | None = None
total: int | None = None total: int | None = None
units: str | None = None
@classmethod @classmethod
def from_pull_log_dict(cls, value: dict[str, int]) -> PullProgressDetail: def from_pull_log_dict(cls, value: dict[str, int]) -> PullProgressDetail:
@@ -194,6 +207,27 @@ class DockerConfig(FileConfiguration):
"""Return credentials for docker registries.""" """Return credentials for docker registries."""
return self._data.get(ATTR_REGISTRIES, {}) return self._data.get(ATTR_REGISTRIES, {})
def get_registry_for_image(self, image: str) -> str | None:
"""Return the registry name if credentials are available for the image.
Matches the image against configured registries and returns the registry
name if found, or None if no matching credentials are configured.
"""
if not self.registries:
return None
# Check if image uses a custom registry (e.g., ghcr.io/org/image)
matcher = IMAGE_WITH_HOST.match(image)
if matcher:
registry = matcher.group(1)
if registry in self.registries:
return registry
# If no registry prefix, check for Docker Hub credentials
elif DOCKER_HUB in self.registries:
return DOCKER_HUB
return None
class DockerAPI(CoreSysAttributes): class DockerAPI(CoreSysAttributes):
"""Docker Supervisor wrapper. """Docker Supervisor wrapper.
@@ -204,7 +238,15 @@ class DockerAPI(CoreSysAttributes):
def __init__(self, coresys: CoreSys): def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper.""" """Initialize Docker base wrapper."""
self.coresys = coresys self.coresys = coresys
self._docker: DockerClient | None = None # We keep both until we can fully refactor to aiodocker
self._dockerpy: DockerClient | None = None
self.docker: aiodocker.Docker = aiodocker.Docker(
url="unix://localhost", # dummy hostname for URL composition
connector=(connector := UnixConnector(SOCKET_DOCKER.as_posix())),
session=ClientSession(connector=connector, timeout=ClientTimeout(900)),
api_version="auto",
)
self._network: DockerNetwork | None = None self._network: DockerNetwork | None = None
self._info: DockerInfo | None = None self._info: DockerInfo | None = None
self.config: DockerConfig = DockerConfig() self.config: DockerConfig = DockerConfig()
@@ -212,28 +254,28 @@ class DockerAPI(CoreSysAttributes):
async def post_init(self) -> Self: async def post_init(self) -> Self:
"""Post init actions that must be done in event loop.""" """Post init actions that must be done in event loop."""
self._docker = await asyncio.get_running_loop().run_in_executor( self._dockerpy = await asyncio.get_running_loop().run_in_executor(
None, None,
partial( partial(
DockerClient, DockerClient,
base_url=f"unix:/{str(SOCKET_DOCKER)}", base_url=f"unix:/{SOCKET_DOCKER.as_posix()}",
version="auto", version="auto",
timeout=900, timeout=900,
), ),
) )
self._info = DockerInfo.new(self.docker.info()) self._info = await DockerInfo.new(self.dockerpy.info())
await self.config.read_data() await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init( self._network = await DockerNetwork(self.dockerpy).post_init(
self.config.enable_ipv6, self.config.mtu self.config.enable_ipv6, self.config.mtu
) )
return self return self
@property @property
def docker(self) -> DockerClient: def dockerpy(self) -> DockerClient:
"""Get docker API client.""" """Get docker API client."""
if not self._docker: if not self._dockerpy:
raise RuntimeError("Docker API Client not initialized!") raise RuntimeError("Docker API Client not initialized!")
return self._docker return self._dockerpy
@property @property
def network(self) -> DockerNetwork: def network(self) -> DockerNetwork:
@@ -243,19 +285,19 @@ class DockerAPI(CoreSysAttributes):
return self._network return self._network
@property @property
def images(self) -> ImageCollection: def images(self) -> DockerImages:
"""Return API images.""" """Return API images."""
return self.docker.images return self.docker.images
@property @property
def containers(self) -> ContainerCollection: def containers(self) -> ContainerCollection:
"""Return API containers.""" """Return API containers."""
return self.docker.containers return self.dockerpy.containers
@property @property
def api(self) -> APIClient: def api(self) -> APIClient:
"""Return API containers.""" """Return API containers."""
return self.docker.api return self.dockerpy.api
@property @property
def info(self) -> DockerInfo: def info(self) -> DockerInfo:
@@ -267,7 +309,7 @@ class DockerAPI(CoreSysAttributes):
@property @property
def events(self) -> CancellableStream: def events(self) -> CancellableStream:
"""Return docker event stream.""" """Return docker event stream."""
return self.docker.events(decode=True) return self.dockerpy.events(decode=True)
@property @property
def monitor(self) -> DockerMonitor: def monitor(self) -> DockerMonitor:
@@ -383,7 +425,7 @@ class DockerAPI(CoreSysAttributes):
with suppress(DockerError): with suppress(DockerError):
self.network.detach_default_bridge(container) self.network.detach_default_bridge(container)
else: else:
host_network: Network = self.docker.networks.get(DOCKER_NETWORK_HOST) host_network: Network = self.dockerpy.networks.get(DOCKER_NETWORK_HOST)
# Check if container is registered on the host # Check if container is registered on the host
# https://github.com/moby/moby/issues/23302 # https://github.com/moby/moby/issues/23302
@@ -410,35 +452,33 @@ class DockerAPI(CoreSysAttributes):
return container return container
def pull_image( async def pull_image(
self, self,
job_id: str, job_id: str,
repository: str, repository: str,
tag: str = "latest", tag: str = "latest",
platform: str | None = None, platform: str | None = None,
) -> Image: auth: dict[str, str] | None = None,
) -> dict[str, Any]:
"""Pull the specified image and return it. """Pull the specified image and return it.
This mimics the high level API of images.pull but provides better error handling by raising This mimics the high level API of images.pull but provides better error handling by raising
based on a docker error on pull. Whereas the high level API ignores all errors on pull and based on a docker error on pull. Whereas the high level API ignores all errors on pull and
raises only if the get fails afterwards. Additionally it fires progress reports for the pull raises only if the get fails afterwards. Additionally it fires progress reports for the pull
on the bus so listeners can use that to update status for users. on the bus so listeners can use that to update status for users.
Must be run in executor.
""" """
pull_log = self.docker.api.pull( async for e in self.images.pull(
repository, tag=tag, platform=platform, stream=True, decode=True repository, tag=tag, platform=platform, auth=auth, stream=True
) ):
for e in pull_log:
entry = PullLogEntry.from_pull_log_dict(job_id, e) entry = PullLogEntry.from_pull_log_dict(job_id, e)
if entry.error: if entry.error:
raise entry.exception raise entry.exception
self.sys_loop.call_soon_threadsafe( await asyncio.gather(
self.sys_bus.fire_event, BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry *self.sys_bus.fire_event(BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry)
) )
sep = "@" if tag.startswith("sha256:") else ":" sep = "@" if tag.startswith("sha256:") else ":"
return self.images.get(f"{repository}{sep}{tag}") return await self.images.inspect(f"{repository}{sep}{tag}")
def run_command( def run_command(
self, self,
@@ -459,7 +499,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Runing command '%s' on %s", command, image_with_tag) _LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
container = None container = None
try: try:
container = self.docker.containers.run( container = self.dockerpy.containers.run(
image_with_tag, image_with_tag,
command=command, command=command,
detach=True, detach=True,
@@ -487,35 +527,35 @@ class DockerAPI(CoreSysAttributes):
"""Repair local docker overlayfs2 issues.""" """Repair local docker overlayfs2 issues."""
_LOGGER.info("Prune stale containers") _LOGGER.info("Prune stale containers")
try: try:
output = self.docker.api.prune_containers() output = self.dockerpy.api.prune_containers()
_LOGGER.debug("Containers prune: %s", output) _LOGGER.debug("Containers prune: %s", output)
except docker_errors.APIError as err: except docker_errors.APIError as err:
_LOGGER.warning("Error for containers prune: %s", err) _LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale images") _LOGGER.info("Prune stale images")
try: try:
output = self.docker.api.prune_images(filters={"dangling": False}) output = self.dockerpy.api.prune_images(filters={"dangling": False})
_LOGGER.debug("Images prune: %s", output) _LOGGER.debug("Images prune: %s", output)
except docker_errors.APIError as err: except docker_errors.APIError as err:
_LOGGER.warning("Error for images prune: %s", err) _LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale builds") _LOGGER.info("Prune stale builds")
try: try:
output = self.docker.api.prune_builds() output = self.dockerpy.api.prune_builds()
_LOGGER.debug("Builds prune: %s", output) _LOGGER.debug("Builds prune: %s", output)
except docker_errors.APIError as err: except docker_errors.APIError as err:
_LOGGER.warning("Error for builds prune: %s", err) _LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale volumes") _LOGGER.info("Prune stale volumes")
try: try:
output = self.docker.api.prune_builds() output = self.dockerpy.api.prune_volumes()
_LOGGER.debug("Volumes prune: %s", output) _LOGGER.debug("Volumes prune: %s", output)
except docker_errors.APIError as err: except docker_errors.APIError as err:
_LOGGER.warning("Error for volumes prune: %s", err) _LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale networks") _LOGGER.info("Prune stale networks")
try: try:
output = self.docker.api.prune_networks() output = self.dockerpy.api.prune_networks()
_LOGGER.debug("Networks prune: %s", output) _LOGGER.debug("Networks prune: %s", output)
except docker_errors.APIError as err: except docker_errors.APIError as err:
_LOGGER.warning("Error for networks prune: %s", err) _LOGGER.warning("Error for networks prune: %s", err)
@@ -537,11 +577,11 @@ class DockerAPI(CoreSysAttributes):
Fix: https://github.com/moby/moby/issues/23302 Fix: https://github.com/moby/moby/issues/23302
""" """
network: Network = self.docker.networks.get(network_name) network: Network = self.dockerpy.networks.get(network_name)
for cid, data in network.attrs.get("Containers", {}).items(): for cid, data in network.attrs.get("Containers", {}).items():
try: try:
self.docker.containers.get(cid) self.dockerpy.containers.get(cid)
continue continue
except docker_errors.NotFound: except docker_errors.NotFound:
_LOGGER.debug( _LOGGER.debug(
@@ -556,22 +596,26 @@ class DockerAPI(CoreSysAttributes):
with suppress(docker_errors.DockerException, requests.RequestException): with suppress(docker_errors.DockerException, requests.RequestException):
network.disconnect(data.get("Name", cid), force=True) network.disconnect(data.get("Name", cid), force=True)
def container_is_initialized( async def container_is_initialized(
self, name: str, image: str, version: AwesomeVersion self, name: str, image: str, version: AwesomeVersion
) -> bool: ) -> bool:
"""Return True if docker container exists in good state and is built from expected image.""" """Return True if docker container exists in good state and is built from expected image."""
try: try:
docker_container = self.containers.get(name) docker_container = await self.sys_run_in_executor(self.containers.get, name)
docker_image = self.images.get(f"{image}:{version}") docker_image = await self.images.inspect(f"{image}:{version}")
except NotFound: except docker_errors.NotFound:
return False return False
except (DockerException, requests.RequestException) as err: except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return False
raise DockerError() from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
# Check the image is correct and state is good # Check the image is correct and state is good
return ( return (
docker_container.image is not None docker_container.image is not None
and docker_container.image.id == docker_image.id and docker_container.image.id == docker_image["Id"]
and docker_container.status in ("exited", "running", "created") and docker_container.status in ("exited", "running", "created")
) )
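The migration above repeatedly maps an `aiodocker.DockerError` carrying a 404 status to "object absent" rather than a hard failure, while re-raising anything else. A minimal standalone sketch of that pattern — the `DockerError` class here is an illustrative stand-in, not aiodocker's real one:

```python
from http import HTTPStatus


class DockerError(Exception):
    """Illustrative stand-in for aiodocker.DockerError (carries a status)."""

    def __init__(self, status: int, message: str = "") -> None:
        super().__init__(message)
        self.status = status


def exists(inspect, name: str) -> bool:
    """Return False on a 404 from the Docker API, re-raise anything else."""
    try:
        inspect(name)
    except DockerError as err:
        if err.status == HTTPStatus.NOT_FOUND:
            return False
        raise
    return True
```

Since `HTTPStatus` is an `IntEnum`, the comparison also works when the library reports the status as a plain `int`.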
@@ -581,18 +625,18 @@ class DockerAPI(CoreSysAttributes):
"""Stop/remove Docker container.""" """Stop/remove Docker container."""
try: try:
docker_container: Container = self.containers.get(name) docker_container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound() from None raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
if docker_container.status == "running": if docker_container.status == "running":
_LOGGER.info("Stopping %s application", name) _LOGGER.info("Stopping %s application", name)
with suppress(DockerException, requests.RequestException): with suppress(docker_errors.DockerException, requests.RequestException):
docker_container.stop(timeout=timeout) docker_container.stop(timeout=timeout)
if remove_container: if remove_container:
with suppress(DockerException, requests.RequestException): with suppress(docker_errors.DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name) _LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True, v=True) docker_container.remove(force=True, v=True)
@@ -604,11 +648,11 @@ class DockerAPI(CoreSysAttributes):
"""Start Docker container.""" """Start Docker container."""
try: try:
docker_container: Container = self.containers.get(name) docker_container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound( raise DockerNotFound(
f"{name} not found for starting up", _LOGGER.error f"{name} not found for starting up", _LOGGER.error
) from None ) from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Could not get {name} for starting up", _LOGGER.error f"Could not get {name} for starting up", _LOGGER.error
) from err ) from err
@@ -616,36 +660,36 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Starting %s", name) _LOGGER.info("Starting %s", name)
try: try:
docker_container.start() docker_container.start()
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't start {name}: {err}", _LOGGER.error) from err raise DockerError(f"Can't start {name}: {err}", _LOGGER.error) from err
def restart_container(self, name: str, timeout: int) -> None: def restart_container(self, name: str, timeout: int) -> None:
"""Restart docker container.""" """Restart docker container."""
try: try:
container: Container = self.containers.get(name) container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound() from None raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
_LOGGER.info("Restarting %s", name) _LOGGER.info("Restarting %s", name)
try: try:
container.restart(timeout=timeout) container.restart(timeout=timeout)
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't restart {name}: {err}", _LOGGER.warning) from err raise DockerError(f"Can't restart {name}: {err}", _LOGGER.warning) from err
def container_logs(self, name: str, tail: int = 100) -> bytes: def container_logs(self, name: str, tail: int = 100) -> bytes:
"""Return Docker logs of container.""" """Return Docker logs of container."""
try: try:
docker_container: Container = self.containers.get(name) docker_container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound() from None raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
try: try:
return docker_container.logs(tail=tail, stdout=True, stderr=True) return docker_container.logs(tail=tail, stdout=True, stderr=True)
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Can't grep logs from {name}: {err}", _LOGGER.warning f"Can't grep logs from {name}: {err}", _LOGGER.warning
) from err ) from err
@@ -654,9 +698,9 @@ class DockerAPI(CoreSysAttributes):
"""Read and return stats from container.""" """Read and return stats from container."""
try: try:
docker_container: Container = self.containers.get(name) docker_container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound() from None raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
# container is not running # container is not running
@@ -665,7 +709,7 @@ class DockerAPI(CoreSysAttributes):
try: try:
return docker_container.stats(stream=False) return docker_container.stats(stream=False)
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Can't read stats from {name}: {err}", _LOGGER.error f"Can't read stats from {name}: {err}", _LOGGER.error
) from err ) from err
@@ -674,61 +718,84 @@ class DockerAPI(CoreSysAttributes):
"""Execute a command inside Docker container.""" """Execute a command inside Docker container."""
try: try:
docker_container: Container = self.containers.get(name) docker_container: Container = self.containers.get(name)
except NotFound: except docker_errors.NotFound:
raise DockerNotFound() from None raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
# Execute # Execute
try: try:
code, output = docker_container.exec_run(command) code, output = docker_container.exec_run(command)
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err raise DockerError() from err
return CommandReturn(code, output) return CommandReturn(code, output)
def remove_image( async def remove_image(
self, image: str, version: AwesomeVersion, latest: bool = True self, image: str, version: AwesomeVersion, latest: bool = True
) -> None: ) -> None:
"""Remove a Docker image by version and latest.""" """Remove a Docker image by version and latest."""
try: try:
if latest: if latest:
_LOGGER.info("Removing image %s with latest", image) _LOGGER.info("Removing image %s with latest", image)
with suppress(ImageNotFound): try:
self.images.remove(image=f"{image}:latest", force=True) await self.images.delete(f"{image}:latest", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
_LOGGER.info("Removing image %s with %s", image, version) _LOGGER.info("Removing image %s with %s", image, version)
with suppress(ImageNotFound): try:
self.images.remove(image=f"{image}:{version!s}", force=True) await self.images.delete(f"{image}:{version!s}", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
except (DockerException, requests.RequestException) as err: except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Can't remove image {image}: {err}", _LOGGER.warning f"Can't remove image {image}: {err}", _LOGGER.warning
) from err ) from err
def import_image(self, tar_file: Path) -> Image | None: async def import_image(self, tar_file: Path) -> dict[str, Any] | None:
"""Import a tar file as image.""" """Import a tar file as image."""
try: try:
with tar_file.open("rb") as read_tar: with tar_file.open("rb") as read_tar:
docker_image_list: list[Image] = self.images.load(read_tar) # type: ignore resp: list[dict[str, Any]] = self.images.import_image(read_tar)
except (aiodocker.DockerError, OSError) as err:
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
return docker_image_list[0]
except (DockerException, OSError) as err:
raise DockerError( raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error f"Can't import image from tar: {err}", _LOGGER.error
) from err ) from err
docker_image_list: list[str] = []
for chunk in resp:
if "errorDetail" in chunk:
raise DockerError(
f"Can't import image from tar: {chunk['errorDetail']['message']}",
_LOGGER.error,
)
if "stream" in chunk:
if match := RE_IMPORT_IMAGE_STREAM.search(chunk["stream"]):
docker_image_list.append(match.group(2))
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
try:
return await self.images.inspect(docker_image_list[0])
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Could not inspect imported image due to: {err!s}", _LOGGER.error
) from err
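`RE_IMPORT_IMAGE_STREAM` is defined outside this hunk; assuming it matches Docker's `Loaded image: <ref>` / `Loaded image ID: <id>` stream lines (a guess, since the actual pattern isn't shown), the stream-parsing loop can be sketched as:

```python
import re
from typing import Any

# Hypothetical pattern; the real RE_IMPORT_IMAGE_STREAM lives elsewhere
# in the module. Docker's load/import stream emits lines such as
# "Loaded image: alpine:3.19" or "Loaded image ID: sha256:abc...".
RE_IMPORT_IMAGE_STREAM = re.compile(r"Loaded image( ID)?: (\S+)")


def parse_import_stream(chunks: list[dict[str, Any]]) -> list[str]:
    """Collect imported image references, failing fast on errorDetail."""
    images: list[str] = []
    for chunk in chunks:
        if "errorDetail" in chunk:
            raise RuntimeError(chunk["errorDetail"]["message"])
        if "stream" in chunk and (
            match := RE_IMPORT_IMAGE_STREAM.search(chunk["stream"])
        ):
            images.append(match.group(2))
    return images
```

As in the hunk, anything other than exactly one parsed reference would be treated as an unexpected import result by the caller.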
def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None: def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None:
"""Export current images into a tar file.""" """Export current images into a tar file."""
try: try:
docker_image = self.api.get_image(f"{image}:{version}") docker_image = self.api.get_image(f"{image}:{version}")
except (DockerException, requests.RequestException) as err: except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Can't fetch image {image}: {err}", _LOGGER.error f"Can't fetch image {image}: {err}", _LOGGER.error
) from err ) from err
@@ -745,7 +812,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Export image %s done", image) _LOGGER.info("Export image %s done", image)
def cleanup_old_images( async def cleanup_old_images(
self, self,
current_image: str, current_image: str,
current_version: AwesomeVersion, current_version: AwesomeVersion,
@@ -756,46 +823,57 @@ class DockerAPI(CoreSysAttributes):
"""Clean up old versions of an image.""" """Clean up old versions of an image."""
image = f"{current_image}:{current_version!s}" image = f"{current_image}:{current_version!s}"
try: try:
keep = {cast(str, self.images.get(image).id)} try:
except ImageNotFound: image_attr = await self.images.inspect(image)
raise DockerNotFound( except aiodocker.DockerError as err:
f"{current_image} not found for cleanup", _LOGGER.warning if err.status == HTTPStatus.NOT_FOUND:
) from None raise DockerNotFound(
except (DockerException, requests.RequestException) as err: f"{current_image} not found for cleanup", _LOGGER.warning
) from None
raise
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Can't get {current_image} for cleanup", _LOGGER.warning f"Can't get {current_image} for cleanup", _LOGGER.warning
) from err ) from err
keep = {cast(str, image_attr["Id"])}
if keep_images: if keep_images:
keep_images -= {image} keep_images -= {image}
try: results = await asyncio.gather(
for image in keep_images: *[self.images.inspect(image) for image in keep_images],
# If it's not found, no need to preserve it from getting removed return_exceptions=True,
with suppress(ImageNotFound): )
keep.add(cast(str, self.images.get(image).id)) for result in results:
except (DockerException, requests.RequestException) as err: # If it's not found, no need to preserve it from getting removed
raise DockerError( if (
f"Failed to get one or more images from {keep} during cleanup", isinstance(result, aiodocker.DockerError)
_LOGGER.warning, and result.status == HTTPStatus.NOT_FOUND
) from err ):
continue
if isinstance(result, BaseException):
raise DockerError(
f"Failed to get one or more images from {keep} during cleanup",
_LOGGER.warning,
) from result
keep.add(cast(str, result["Id"]))
# Cleanup old and current # Cleanup old and current
image_names = list( image_names = list(
old_images | {current_image} if old_images else {current_image} old_images | {current_image} if old_images else {current_image}
) )
try: try:
# This API accepts a list of image names. Tested and confirmed working on docker==7.1.0 images_list = await self.images.list(
# Its typing does say only `str` though. Bit concerning, could an update break this? filters=json.dumps({"reference": image_names})
images_list = self.images.list(name=image_names) # type: ignore )
except (DockerException, requests.RequestException) as err: except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError( raise DockerError(
f"Corrupt docker overlayfs found: {err}", _LOGGER.warning f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
) from err ) from err
for docker_image in images_list: for docker_image in images_list:
if docker_image.id in keep: if docker_image["Id"] in keep:
continue continue
with suppress(DockerException, requests.RequestException): with suppress(aiodocker.DockerError, requests.RequestException):
_LOGGER.info("Cleanup images: %s", docker_image.tags) _LOGGER.info("Cleanup images: %s", docker_image["RepoTags"])
self.images.remove(docker_image.id, force=True) await self.images.delete(docker_image["Id"], force=True)
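The `asyncio.gather(..., return_exceptions=True)` idiom in `cleanup_old_images` — tolerate 404s, abort on any other failure — shown in isolation, with a fake `inspect` standing in for `images.inspect`:

```python
import asyncio
from http import HTTPStatus


class DockerError(Exception):
    """Illustrative stand-in for aiodocker.DockerError."""

    def __init__(self, status: int) -> None:
        super().__init__(f"status {status}")
        self.status = status


async def collect_keep_ids(names: list[str], registry: dict[str, str]) -> set[str]:
    async def inspect(name: str) -> dict[str, str]:
        if name not in registry:
            raise DockerError(HTTPStatus.NOT_FOUND)
        return {"Id": registry[name]}

    # Inspect all images concurrently; with return_exceptions=True the
    # exceptions come back as results instead of cancelling the gather.
    results = await asyncio.gather(
        *(inspect(name) for name in names), return_exceptions=True
    )
    keep: set[str] = set()
    for result in results:
        # A missing image simply doesn't need protecting from cleanup.
        if isinstance(result, DockerError) and result.status == HTTPStatus.NOT_FOUND:
            continue
        if isinstance(result, BaseException):
            raise result
        keep.add(result["Id"])
    return keep
```

The `isinstance(result, BaseException)` check must come after the 404 filter, exactly as in the hunk, or the tolerated errors would be re-raised.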
@@ -89,7 +89,7 @@ class DockerMonitor(CoreSysAttributes, Thread):
DockerContainerStateEvent( DockerContainerStateEvent(
name=attributes["name"], name=attributes["name"],
state=container_state, state=container_state,
id=event["id"], id=event["Actor"]["ID"],
time=event["time"], time=event["time"],
), ),
) )
@@ -1,10 +1,12 @@
"""Init file for Supervisor Docker object.""" """Init file for Supervisor Docker object."""
import asyncio
from collections.abc import Awaitable from collections.abc import Awaitable
from ipaddress import IPv4Address from ipaddress import IPv4Address
import logging import logging
import os import os
import aiodocker
from awesomeversion.awesomeversion import AwesomeVersion from awesomeversion.awesomeversion import AwesomeVersion
import docker import docker
import requests import requests
@@ -112,19 +114,18 @@ class DockerSupervisor(DockerInterface):
name="docker_supervisor_update_start_tag", name="docker_supervisor_update_start_tag",
concurrency=JobConcurrency.GROUP_QUEUE, concurrency=JobConcurrency.GROUP_QUEUE,
) )
def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]: async def update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version.""" """Update start tag to new version."""
return self.sys_run_in_executor(self._update_start_tag, image, version)
def _update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version.
Need run inside executor.
"""
try: try:
docker_container = self.sys_docker.containers.get(self.name) docker_container = await self.sys_run_in_executor(
docker_image = self.sys_docker.images.get(f"{image}:{version!s}") self.sys_docker.containers.get, self.name
except (docker.errors.DockerException, requests.RequestException) as err: )
docker_image = await self.sys_docker.images.inspect(f"{image}:{version!s}")
except (
aiodocker.DockerError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError( raise DockerError(
f"Can't get image or container to fix start tag: {err}", _LOGGER.error f"Can't get image or container to fix start tag: {err}", _LOGGER.error
) from err ) from err
@@ -144,8 +145,14 @@ class DockerSupervisor(DockerInterface):
# If version tag # If version tag
if start_tag != "latest": if start_tag != "latest":
continue continue
docker_image.tag(start_image, start_tag) await asyncio.gather(
docker_image.tag(start_image, version.string) self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=start_tag
),
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=version.string
),
)
except (docker.errors.DockerException, requests.RequestException) as err: except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err
@@ -423,6 +423,12 @@ class APINotFound(APIError):
status = 404 status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APIAddonNotInstalled(APIError): class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API.""" """Not installed addon requested at addons API."""
@@ -577,21 +583,6 @@ class PwnedConnectivityError(PwnedError):
"""Connectivity errors while checking pwned passwords.""" """Connectivity errors while checking pwned passwords."""
# util/codenotary
class CodeNotaryError(HassioError):
"""Error general with CodeNotary."""
class CodeNotaryUntrusted(CodeNotaryError):
"""Error on untrusted content."""
class CodeNotaryBackendError(CodeNotaryError):
"""CodeNotary backend error happening."""
# util/whoami # util/whoami
@@ -648,9 +639,32 @@ class DockerLogOutOfOrder(DockerError):
class DockerNoSpaceOnDevice(DockerError): class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space.""" """Raise if a docker pull fails due to available space."""
error_key = "docker_no_space_on_device"
message_template = "No space left on disk"
def __init__(self, logger: Callable[..., None] | None = None) -> None: def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log.""" """Raise & log."""
super().__init__("No space left on disk", logger=logger) super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerError):
"""Raise for docker hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
message_template = (
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see {dockerhub_rate_limit_url}"
)
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(
None,
logger=logger,
extra_fields={
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
},
)
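The new `error_key`/`message_template` style shown for `DockerNoSpaceOnDevice` and `DockerHubRateLimitExceeded` can be reduced to a sketch; the actual template rendering presumably lives in the `HassioError` base class, which is not part of this diff:

```python
class TemplatedError(Exception):
    """Sketch: class-level template rendered from per-instance extra_fields."""

    message_template = "Rate limit hit, see {dockerhub_rate_limit_url}"

    def __init__(self, **extra_fields: str) -> None:
        self.extra_fields = extra_fields
        # Render the final message from the template so call sites never
        # hard-code URLs or wording, and frontends can re-render from
        # error_key plus extra_fields.
        super().__init__(self.message_template.format(**extra_fields))
```

This keeps user-facing strings (and documentation links) in one place per exception class.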
class DockerJobError(DockerError, JobException): class DockerJobError(DockerError, JobException):
@@ -428,13 +428,6 @@ class HomeAssistantCore(JobGroup):
""" """
return self.instance.logs() return self.instance.logs()
def check_trust(self) -> Awaitable[None]:
"""Calculate HomeAssistant docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
async def stats(self) -> DockerStats: async def stats(self) -> DockerStats:
"""Return stats of Home Assistant.""" """Return stats of Home Assistant."""
try: try:
@@ -9,7 +9,7 @@ from contextvars import Context, ContextVar, Token
from dataclasses import dataclass from dataclasses import dataclass
from datetime import datetime from datetime import datetime
import logging import logging
from typing import Any, Self from typing import Any, Self, cast
from uuid import uuid4 from uuid import uuid4
from attr.validators import gt, lt from attr.validators import gt, lt
@@ -98,7 +98,9 @@ class SupervisorJobError:
"""Representation of an error occurring during a supervisor job.""" """Representation of an error occurring during a supervisor job."""
type_: type[HassioError] = HassioError type_: type[HassioError] = HassioError
message: str = "Unknown error, see supervisor logs" message: str = (
"Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
)
stage: str | None = None stage: str | None = None
def as_dict(self) -> dict[str, str | None]: def as_dict(self) -> dict[str, str | None]:
@@ -194,7 +196,7 @@ class SupervisorJob:
self, self,
progress: float | None = None, progress: float | None = None,
stage: str | None = None, stage: str | None = None,
extra: dict[str, Any] | None = DEFAULT, # type: ignore extra: dict[str, Any] | None | type[DEFAULT] = DEFAULT,
done: bool | None = None, done: bool | None = None,
) -> None: ) -> None:
"""Update multiple fields with one on change event.""" """Update multiple fields with one on change event."""
@@ -205,8 +207,8 @@ class SupervisorJob:
self.progress = progress self.progress = progress
if stage is not None: if stage is not None:
self.stage = stage self.stage = stage
if extra != DEFAULT: if extra is not DEFAULT:
self.extra = extra self.extra = cast(dict[str, Any] | None, extra)
# Done has special event. use that to trigger on change if included # Done has special event. use that to trigger on change if included
# If not then just use any other field to trigger # If not then just use any other field to trigger
@@ -304,19 +306,21 @@ class JobManager(FileConfiguration, CoreSysAttributes):
reference: str | None = None, reference: str | None = None,
initial_stage: str | None = None, initial_stage: str | None = None,
internal: bool = False, internal: bool = False,
parent_id: str | None = DEFAULT, # type: ignore parent_id: str | None | type[DEFAULT] = DEFAULT,
child_job_syncs: list[ChildJobSyncFilter] | None = None, child_job_syncs: list[ChildJobSyncFilter] | None = None,
) -> SupervisorJob: ) -> SupervisorJob:
"""Create a new job.""" """Create a new job."""
job = SupervisorJob( kwargs: dict[str, Any] = {
name, "reference": reference,
reference=reference, "stage": initial_stage,
stage=initial_stage, "on_change": self._on_job_change,
on_change=self._on_job_change, "internal": internal,
internal=internal, "child_job_syncs": child_job_syncs,
child_job_syncs=child_job_syncs, }
**({} if parent_id == DEFAULT else {"parent_id": parent_id}), # type: ignore if parent_id is not DEFAULT:
) kwargs["parent_id"] = parent_id
job = SupervisorJob(name, **kwargs)
# Shouldn't happen but inability to find a parent for progress reporting # Shouldn't happen but inability to find a parent for progress reporting
# shouldn't raise and break the active job # shouldn't raise and break the active job
@@ -327,6 +331,17 @@ class JobManager(FileConfiguration, CoreSysAttributes):
if not curr_parent.child_job_syncs: if not curr_parent.child_job_syncs:
continue continue
# HACK: If parent trigger the same child job, we just skip this second
# sync. Maybe it would be better to have this reflected in the job stage
# and reset progress to 0 instead? There is no support for such stage
# information on Core update entities today though.
if curr_parent.done is True or curr_parent.progress >= 100:
_LOGGER.debug(
"Skipping parent job sync for done parent job %s",
curr_parent.name,
)
continue
# Break after first match at each parent as it doesn't make sense # Break after first match at each parent as it doesn't make sense
# to match twice. But it could match multiple parents # to match twice. But it could match multiple parents
for sync in curr_parent.child_job_syncs: for sync in curr_parent.child_job_syncs:
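The guard added above boils down to a predicate: a parent that is already done, or already at 100% progress, never accepts further child job syncs. Sketch:

```python
def parent_accepts_sync(done: bool, progress: float) -> bool:
    # Once a parent reported completion, a re-triggered child job must
    # not pull its progress backwards in the UI.
    return not done and progress < 100
```

As the HACK comment notes, resetting progress to 0 with a dedicated stage would be cleaner, but Core update entities have no such stage support today.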
@@ -34,6 +34,7 @@ class JobCondition(StrEnum):
PLUGINS_UPDATED = "plugins_updated" PLUGINS_UPDATED = "plugins_updated"
RUNNING = "running" RUNNING = "running"
SUPERVISOR_UPDATED = "supervisor_updated" SUPERVISOR_UPDATED = "supervisor_updated"
ARCHITECTURE_SUPPORTED = "architecture_supported"
class JobConcurrency(StrEnum): class JobConcurrency(StrEnum):
@@ -441,6 +441,14 @@ class Job(CoreSysAttributes):
raise JobConditionException( raise JobConditionException(
f"'{method_name}' blocked from execution, supervisor needs to be updated first" f"'{method_name}' blocked from execution, supervisor needs to be updated first"
) )
if (
JobCondition.ARCHITECTURE_SUPPORTED in used_conditions
and UnsupportedReason.SYSTEM_ARCHITECTURE
in coresys.sys_resolution.unsupported
):
raise JobConditionException(
f"'{method_name}' blocked from execution, unsupported system architecture"
)
if JobCondition.PLUGINS_UPDATED in used_conditions and ( if JobCondition.PLUGINS_UPDATED in used_conditions and (
out_of_date := [ out_of_date := [

# Not full startup - missing information # Not full startup - missing information
if coresys.core.state in (CoreState.INITIALIZE, CoreState.SETUP): if coresys.core.state in (CoreState.INITIALIZE, CoreState.SETUP):
# During SETUP, we have basic system info available for better debugging
if coresys.core.state == CoreState.SETUP:
event.setdefault("contexts", {}).update(
{
"versions": {
"docker": coresys.docker.info.version,
"supervisor": coresys.supervisor.version,
},
"host": {
"machine": coresys.machine,
},
}
)
return event return event
# List installed addons # List installed addons
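The `event.setdefault("contexts", {}).update(...)` idiom used here merges new context keys without clobbering anything an earlier Sentry filter may already have attached to the event. In isolation:

```python
from typing import Any


def enrich_event(event: dict[str, Any], versions: dict[str, str]) -> dict[str, Any]:
    # setdefault returns the existing "contexts" dict if present, else
    # inserts and returns a fresh one; update then merges in-place.
    event.setdefault("contexts", {}).update({"versions": versions})
    return event
```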
@@ -1,5 +1,6 @@
"""A collection of tasks.""" """A collection of tasks."""
from contextlib import suppress
from datetime import datetime, timedelta from datetime import datetime, timedelta
import logging import logging
from typing import cast from typing import cast
@@ -13,6 +14,7 @@ from ..exceptions import (
BackupFileNotFoundError, BackupFileNotFoundError,
HomeAssistantError, HomeAssistantError,
ObserverError, ObserverError,
SupervisorUpdateError,
) )
from ..homeassistant.const import LANDINGPAGE, WSType from ..homeassistant.const import LANDINGPAGE, WSType
from ..jobs.const import JobConcurrency from ..jobs.const import JobConcurrency
@@ -161,6 +163,7 @@ class Tasks(CoreSysAttributes):
JobCondition.INTERNET_HOST, JobCondition.INTERNET_HOST,
JobCondition.OS_SUPPORTED, JobCondition.OS_SUPPORTED,
JobCondition.RUNNING, JobCondition.RUNNING,
JobCondition.ARCHITECTURE_SUPPORTED,
], ],
concurrency=JobConcurrency.REJECT, concurrency=JobConcurrency.REJECT,
) )
@@ -173,7 +176,11 @@ class Tasks(CoreSysAttributes):
"Found new Supervisor version %s, updating", "Found new Supervisor version %s, updating",
self.sys_supervisor.latest_version, self.sys_supervisor.latest_version,
) )
await self.sys_supervisor.update()
# Errors are logged by the exceptions, we can't really do something
# if an update fails here.
with suppress(SupervisorUpdateError):
await self.sys_supervisor.update()
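`contextlib.suppress` here keeps the periodic task alive when the update raises, matching the comment that the failure is already logged at the raise site. The control flow, with a stand-in exception type:

```python
from contextlib import suppress


class SupervisorUpdateError(Exception):
    """Stand-in for the real exception class."""


def run_update_check(update) -> str:
    with suppress(SupervisorUpdateError):
        update()
        return "updated"
    # Reached only if update() raised SupervisorUpdateError: the error is
    # swallowed and the scheduler loop continues normally.
    return "skipped"
```

Any other exception type still propagates, so only the anticipated failure mode is silenced.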
async def _watchdog_homeassistant_api(self): async def _watchdog_homeassistant_api(self):
"""Create scheduler task for monitoring running state of API. """Create scheduler task for monitoring running state of API.
@@ -135,7 +135,7 @@ class Mount(CoreSysAttributes, ABC):
@property @property
def state(self) -> UnitActiveState | None: def state(self) -> UnitActiveState | None:
"""Get state of mount.""" """Get state of mount."""
return self._state return UnitActiveState(self._state) if self._state is not None else None
@cached_property @cached_property
def local_where(self) -> Path: def local_where(self) -> Path:
@@ -76,13 +76,6 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
"""Return True if a task is in progress.""" """Return True if a task is in progress."""
return self.instance.in_progress return self.instance.in_progress
def check_trust(self) -> Awaitable[None]:
"""Calculate plugin docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
def logs(self) -> Awaitable[bytes]: def logs(self) -> Awaitable[bytes]:
"""Get docker plugin logs. """Get docker plugin logs.
@@ -23,4 +23,5 @@ PLUGIN_UPDATE_CONDITIONS = [
JobCondition.HEALTHY, JobCondition.HEALTHY,
JobCondition.INTERNET_HOST, JobCondition.INTERNET_HOST,
JobCondition.SUPERVISOR_UPDATED, JobCondition.SUPERVISOR_UPDATED,
JobCondition.ARCHITECTURE_SUPPORTED,
] ]
@@ -1,59 +0,0 @@
-"""Helpers to check supervisor trust."""
-import logging
-
-from ...const import CoreState
-from ...coresys import CoreSys
-from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
-from ..const import ContextType, IssueType, UnhealthyReason
-from .base import CheckBase
-
-_LOGGER: logging.Logger = logging.getLogger(__name__)
-
-def setup(coresys: CoreSys) -> CheckBase:
-    """Check setup function."""
-    return CheckSupervisorTrust(coresys)
-
-class CheckSupervisorTrust(CheckBase):
-    """CheckSystemTrust class for check."""
-
-    async def run_check(self) -> None:
-        """Run check if not affected by issue."""
-        if not self.sys_security.content_trust:
-            _LOGGER.warning(
-                "Skipping %s, content_trust is globally disabled", self.slug
-            )
-            return
-
-        try:
-            await self.sys_supervisor.check_trust()
-        except CodeNotaryUntrusted:
-            self.sys_resolution.add_unhealthy_reason(UnhealthyReason.UNTRUSTED)
-            self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
-        except CodeNotaryError:
-            pass
-
-    async def approve_check(self, reference: str | None = None) -> bool:
-        """Approve check if it is affected by issue."""
-        try:
-            await self.sys_supervisor.check_trust()
-        except CodeNotaryError:
-            return True
-        return False
-
-    @property
-    def issue(self) -> IssueType:
-        """Return a IssueType enum."""
-        return IssueType.TRUST
-
-    @property
-    def context(self) -> ContextType:
-        """Return a ContextType enum."""
-        return ContextType.SUPERVISOR
-
-    @property
-    def states(self) -> list[CoreState]:
-        """Return a list of valid states when this check can run."""
-        return [CoreState.RUNNING, CoreState.STARTUP]

View File

@@ -39,7 +39,6 @@ class UnsupportedReason(StrEnum):
     APPARMOR = "apparmor"
     CGROUP_VERSION = "cgroup_version"
     CONNECTIVITY_CHECK = "connectivity_check"
-    CONTENT_TRUST = "content_trust"
     DBUS = "dbus"
     DNS_SERVER = "dns_server"
     DOCKER_CONFIGURATION = "docker_configuration"
@@ -54,12 +53,12 @@ class UnsupportedReason(StrEnum):
     PRIVILEGED = "privileged"
     RESTART_POLICY = "restart_policy"
     SOFTWARE = "software"
-    SOURCE_MODS = "source_mods"
     SUPERVISOR_VERSION = "supervisor_version"
     SYSTEMD = "systemd"
     SYSTEMD_JOURNAL = "systemd_journal"
     SYSTEMD_RESOLVED = "systemd_resolved"
     VIRTUALIZATION_IMAGE = "virtualization_image"
+    SYSTEM_ARCHITECTURE = "system_architecture"

 class UnhealthyReason(StrEnum):
@@ -103,7 +102,6 @@ class IssueType(StrEnum):
     PWNED = "pwned"
     REBOOT_REQUIRED = "reboot_required"
     SECURITY = "security"
-    TRUST = "trust"
     UPDATE_FAILED = "update_failed"
     UPDATE_ROLLBACK = "update_rollback"
@@ -115,7 +113,6 @@ class SuggestionType(StrEnum):
     CLEAR_FULL_BACKUP = "clear_full_backup"
     CREATE_FULL_BACKUP = "create_full_backup"
     DISABLE_BOOT = "disable_boot"
-    EXECUTE_INTEGRITY = "execute_integrity"
     EXECUTE_REBOOT = "execute_reboot"
     EXECUTE_REBUILD = "execute_rebuild"
     EXECUTE_RELOAD = "execute_reload"

View File

@@ -13,7 +13,6 @@ from .validate import get_valid_modules
 _LOGGER: logging.Logger = logging.getLogger(__name__)

 UNHEALTHY = [
-    UnsupportedReason.DOCKER_VERSION,
     UnsupportedReason.LXC,
     UnsupportedReason.PRIVILEGED,
 ]

View File

@@ -5,8 +5,6 @@ from ...coresys import CoreSys
 from ..const import UnsupportedReason
 from .base import EvaluateBase

-SUPPORTED_OS = ["Debian GNU/Linux 12 (bookworm)"]
-
 def setup(coresys: CoreSys) -> EvaluateBase:
     """Initialize evaluation-setup function."""
@@ -33,6 +31,4 @@ class EvaluateOperatingSystem(EvaluateBase):
     async def evaluate(self) -> bool:
         """Run evaluation."""
-        if self.sys_os.available:
-            return False
-        return self.sys_host.info.operating_system not in SUPPORTED_OS
+        return not self.sys_os.available

View File

@@ -1,72 +0,0 @@
-"""Evaluation class for Content Trust."""
-import errno
-import logging
-from pathlib import Path
-
-from ...const import CoreState
-from ...coresys import CoreSys
-from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
-from ...utils.codenotary import calc_checksum_path_sourcecode
-from ..const import ContextType, IssueType, UnhealthyReason, UnsupportedReason
-from .base import EvaluateBase
-
-_SUPERVISOR_SOURCE = Path("/usr/src/supervisor/supervisor")
-_LOGGER: logging.Logger = logging.getLogger(__name__)
-
-def setup(coresys: CoreSys) -> EvaluateBase:
-    """Initialize evaluation-setup function."""
-    return EvaluateSourceMods(coresys)
-
-class EvaluateSourceMods(EvaluateBase):
-    """Evaluate supervisor source modifications."""
-
-    @property
-    def reason(self) -> UnsupportedReason:
-        """Return a UnsupportedReason enum."""
-        return UnsupportedReason.SOURCE_MODS
-
-    @property
-    def on_failure(self) -> str:
-        """Return a string that is printed when self.evaluate is True."""
-        return "System detect unauthorized source code modifications."
-
-    @property
-    def states(self) -> list[CoreState]:
-        """Return a list of valid states when this evaluation can run."""
-        return [CoreState.RUNNING]
-
-    async def evaluate(self) -> bool:
-        """Run evaluation."""
-        if not self.sys_security.content_trust:
-            _LOGGER.warning("Disabled content-trust, skipping evaluation")
-            return False
-
-        # Calculate sume of the sourcecode
-        try:
-            checksum = await self.sys_run_in_executor(
-                calc_checksum_path_sourcecode, _SUPERVISOR_SOURCE
-            )
-        except OSError as err:
-            if err.errno == errno.EBADMSG:
-                self.sys_resolution.add_unhealthy_reason(
-                    UnhealthyReason.OSERROR_BAD_MESSAGE
-                )
-                self.sys_resolution.create_issue(
-                    IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM
-                )
-            _LOGGER.error("Can't calculate checksum of source code: %s", err)
-            return False
-
-        # Validate checksum
-        try:
-            await self.sys_security.verify_own_content(checksum)
-        except CodeNotaryUntrusted:
-            return True
-        except CodeNotaryError:
-            pass
-        return False

View File

@@ -1,4 +1,4 @@
-"""Evaluation class for Content Trust."""
+"""Evaluation class for system architecture support."""

 from ...const import CoreState
 from ...coresys import CoreSys
@@ -8,27 +8,31 @@ from .base import EvaluateBase

 def setup(coresys: CoreSys) -> EvaluateBase:
     """Initialize evaluation-setup function."""
-    return EvaluateContentTrust(coresys)
+    return EvaluateSystemArchitecture(coresys)

-class EvaluateContentTrust(EvaluateBase):
-    """Evaluate system content trust level."""
+class EvaluateSystemArchitecture(EvaluateBase):
+    """Evaluate if the current Supervisor architecture is supported."""

     @property
     def reason(self) -> UnsupportedReason:
         """Return a UnsupportedReason enum."""
-        return UnsupportedReason.CONTENT_TRUST
+        return UnsupportedReason.SYSTEM_ARCHITECTURE

     @property
     def on_failure(self) -> str:
         """Return a string that is printed when self.evaluate is True."""
-        return "System run with disabled trusted content security."
+        return "System architecture is no longer supported. Move to a supported system architecture."

     @property
     def states(self) -> list[CoreState]:
         """Return a list of valid states when this evaluation can run."""
-        return [CoreState.INITIALIZE, CoreState.SETUP, CoreState.RUNNING]
+        return [CoreState.INITIALIZE]

-    async def evaluate(self) -> bool:
+    async def evaluate(self):
         """Run evaluation."""
-        return not self.sys_security.content_trust
+        return self.sys_host.info.sys_arch.supervisor in {
+            "i386",
+            "armhf",
+            "armv7",
+        }
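The new evaluation reports `True` (unsupported) when the Supervisor architecture is one of the 32-bit variants listed in the diff. A minimal sketch of that membership check, with hypothetical helper names:

```python
# Architectures flagged as unsupported, as listed in the diff above
UNSUPPORTED_ARCHITECTURES = {"i386", "armhf", "armv7"}

def architecture_unsupported(supervisor_arch: str) -> bool:
    """Return True when the given Supervisor architecture is no longer supported."""
    return supervisor_arch in UNSUPPORTED_ARCHITECTURES
```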

View File

@@ -1,67 +0,0 @@
-"""Helpers to check and fix issues with free space."""
-from datetime import timedelta
-import logging
-
-from ...coresys import CoreSys
-from ...exceptions import ResolutionFixupError, ResolutionFixupJobError
-from ...jobs.const import JobCondition, JobThrottle
-from ...jobs.decorator import Job
-from ...security.const import ContentTrustResult
-from ..const import ContextType, IssueType, SuggestionType
-from .base import FixupBase
-
-_LOGGER: logging.Logger = logging.getLogger(__name__)
-
-def setup(coresys: CoreSys) -> FixupBase:
-    """Check setup function."""
-    return FixupSystemExecuteIntegrity(coresys)
-
-class FixupSystemExecuteIntegrity(FixupBase):
-    """Storage class for fixup."""
-
-    @Job(
-        name="fixup_system_execute_integrity_process",
-        conditions=[JobCondition.INTERNET_SYSTEM],
-        on_condition=ResolutionFixupJobError,
-        throttle_period=timedelta(hours=8),
-        throttle=JobThrottle.THROTTLE,
-    )
-    async def process_fixup(self, reference: str | None = None) -> None:
-        """Initialize the fixup class."""
-        result = await self.sys_security.integrity_check()
-
-        if ContentTrustResult.FAILED in (result.core, result.supervisor):
-            raise ResolutionFixupError()
-
-        for plugin in result.plugins:
-            if plugin != ContentTrustResult.FAILED:
-                continue
-            raise ResolutionFixupError()
-
-        for addon in result.addons:
-            if addon != ContentTrustResult.FAILED:
-                continue
-            raise ResolutionFixupError()
-
-    @property
-    def suggestion(self) -> SuggestionType:
-        """Return a SuggestionType enum."""
-        return SuggestionType.EXECUTE_INTEGRITY
-
-    @property
-    def context(self) -> ContextType:
-        """Return a ContextType enum."""
-        return ContextType.SYSTEM
-
-    @property
-    def issues(self) -> list[IssueType]:
-        """Return a IssueType enum list."""
-        return [IssueType.TRUST]
-
-    @property
-    def auto(self) -> bool:
-        """Return if a fixup can be apply as auto fix."""
-        return True

View File

@@ -1,24 +0,0 @@
-"""Security constants."""
-from enum import StrEnum
-
-import attr
-
-class ContentTrustResult(StrEnum):
-    """Content trust result enum."""
-
-    PASS = "pass"
-    ERROR = "error"
-    FAILED = "failed"
-    UNTESTED = "untested"
-
-@attr.s
-class IntegrityResult:
-    """Result of a full integrity check."""
-
-    supervisor: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
-    core: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
-    plugins: dict[str, ContentTrustResult] = attr.ib(default={})
-    addons: dict[str, ContentTrustResult] = attr.ib(default={})

View File

@@ -4,27 +4,12 @@ from __future__ import annotations

 import logging

-from ..const import (
-    ATTR_CONTENT_TRUST,
-    ATTR_FORCE_SECURITY,
-    ATTR_PWNED,
-    FILE_HASSIO_SECURITY,
-)
+from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED, FILE_HASSIO_SECURITY
 from ..coresys import CoreSys, CoreSysAttributes
-from ..exceptions import (
-    CodeNotaryError,
-    CodeNotaryUntrusted,
-    PwnedError,
-    SecurityJobError,
-)
-from ..jobs.const import JobConcurrency
-from ..jobs.decorator import Job, JobCondition
-from ..resolution.const import ContextType, IssueType, SuggestionType
-from ..utils.codenotary import cas_validate
+from ..exceptions import PwnedError
 from ..utils.common import FileConfiguration
 from ..utils.pwned import check_pwned_password
 from ..validate import SCHEMA_SECURITY_CONFIG
-from .const import ContentTrustResult, IntegrityResult

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -37,16 +22,6 @@ class Security(FileConfiguration, CoreSysAttributes):
         super().__init__(FILE_HASSIO_SECURITY, SCHEMA_SECURITY_CONFIG)
         self.coresys = coresys

-    @property
-    def content_trust(self) -> bool:
-        """Return if content trust is enabled/disabled."""
-        return self._data[ATTR_CONTENT_TRUST]
-
-    @content_trust.setter
-    def content_trust(self, value: bool) -> None:
-        """Set content trust is enabled/disabled."""
-        self._data[ATTR_CONTENT_TRUST] = value
-
     @property
     def force(self) -> bool:
         """Return if force security is enabled/disabled."""
@@ -67,30 +42,6 @@ class Security(FileConfiguration, CoreSysAttributes):
         """Set pwned is enabled/disabled."""
         self._data[ATTR_PWNED] = value

-    async def verify_content(self, signer: str, checksum: str) -> None:
-        """Verify content on CAS."""
-        if not self.content_trust:
-            _LOGGER.warning("Disabled content-trust, skip validation")
-            return
-
-        try:
-            await cas_validate(signer, checksum)
-        except CodeNotaryUntrusted:
-            raise
-        except CodeNotaryError:
-            if self.force:
-                raise
-            self.sys_resolution.create_issue(
-                IssueType.TRUST,
-                ContextType.SYSTEM,
-                suggestions=[SuggestionType.EXECUTE_INTEGRITY],
-            )
-            return
-
-    async def verify_own_content(self, checksum: str) -> None:
-        """Verify content from HA org."""
-        return await self.verify_content("notary@home-assistant.io", checksum)
-
     async def verify_secret(self, pwned_hash: str) -> None:
         """Verify pwned state of a secret."""
         if not self.pwned:
@@ -103,73 +54,3 @@ class Security(FileConfiguration, CoreSysAttributes):
         if self.force:
             raise
         return
-
-    @Job(
-        name="security_manager_integrity_check",
-        conditions=[JobCondition.INTERNET_SYSTEM],
-        on_condition=SecurityJobError,
-        concurrency=JobConcurrency.REJECT,
-    )
-    async def integrity_check(self) -> IntegrityResult:
-        """Run a full system integrity check of the platform.
-
-        We only allow to install trusted content.
-        This is a out of the band manual check.
-        """
-        result: IntegrityResult = IntegrityResult()
-        if not self.content_trust:
-            _LOGGER.warning(
-                "Skipping integrity check, content_trust is globally disabled"
-            )
-            return result
-
-        # Supervisor
-        try:
-            await self.sys_supervisor.check_trust()
-            result.supervisor = ContentTrustResult.PASS
-        except CodeNotaryUntrusted:
-            result.supervisor = ContentTrustResult.ERROR
-            self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
-        except CodeNotaryError:
-            result.supervisor = ContentTrustResult.FAILED
-
-        # Core
-        try:
-            await self.sys_homeassistant.core.check_trust()
-            result.core = ContentTrustResult.PASS
-        except CodeNotaryUntrusted:
-            result.core = ContentTrustResult.ERROR
-            self.sys_resolution.create_issue(IssueType.TRUST, ContextType.CORE)
-        except CodeNotaryError:
-            result.core = ContentTrustResult.FAILED
-
-        # Plugins
-        for plugin in self.sys_plugins.all_plugins:
-            try:
-                await plugin.check_trust()
-                result.plugins[plugin.slug] = ContentTrustResult.PASS
-            except CodeNotaryUntrusted:
-                result.plugins[plugin.slug] = ContentTrustResult.ERROR
-                self.sys_resolution.create_issue(
-                    IssueType.TRUST, ContextType.PLUGIN, reference=plugin.slug
-                )
-            except CodeNotaryError:
-                result.plugins[plugin.slug] = ContentTrustResult.FAILED
-
-        # Add-ons
-        for addon in self.sys_addons.installed:
-            if not addon.signed:
-                result.addons[addon.slug] = ContentTrustResult.UNTESTED
-                continue
-            try:
-                await addon.check_trust()
-                result.addons[addon.slug] = ContentTrustResult.PASS
-            except CodeNotaryUntrusted:
-                result.addons[addon.slug] = ContentTrustResult.ERROR
-                self.sys_resolution.create_issue(
-                    IssueType.TRUST, ContextType.ADDON, reference=addon.slug
-                )
-            except CodeNotaryError:
-                result.addons[addon.slug] = ContentTrustResult.FAILED
-
-        return result

View File

@@ -25,8 +25,6 @@ from .coresys import CoreSys, CoreSysAttributes
 from .docker.stats import DockerStats
 from .docker.supervisor import DockerSupervisor
 from .exceptions import (
-    CodeNotaryError,
-    CodeNotaryUntrusted,
     DockerError,
     HostAppArmorError,
     SupervisorAppArmorError,
@@ -37,7 +35,6 @@ from .exceptions import (
 from .jobs.const import JobCondition, JobThrottle
 from .jobs.decorator import Job
 from .resolution.const import ContextType, IssueType, UnhealthyReason
-from .utils.codenotary import calc_checksum
 from .utils.sentry import async_capture_exception

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -150,20 +147,6 @@ class Supervisor(CoreSysAttributes):
                 _LOGGER.error,
             ) from err

-        # Validate
-        try:
-            await self.sys_security.verify_own_content(calc_checksum(data))
-        except CodeNotaryUntrusted as err:
-            raise SupervisorAppArmorError(
-                "Content-Trust is broken for the AppArmor profile fetch!",
-                _LOGGER.critical,
-            ) from err
-        except CodeNotaryError as err:
-            raise SupervisorAppArmorError(
-                f"CodeNotary error while processing AppArmor fetch: {err!s}",
-                _LOGGER.error,
-            ) from err
-
         # Load
         temp_dir: TemporaryDirectory | None = None
@@ -273,13 +256,6 @@ class Supervisor(CoreSysAttributes):
         """
         return self.instance.logs()

-    def check_trust(self) -> Awaitable[None]:
-        """Calculate Supervisor docker content trust.
-
-        Return Coroutine.
-        """
-        return self.instance.check_trust()
-
     async def stats(self) -> DockerStats:
         """Return stats of Supervisor."""
         try:

View File

@@ -31,14 +31,8 @@ from .const import (
     UpdateChannel,
 )
 from .coresys import CoreSys, CoreSysAttributes
-from .exceptions import (
-    CodeNotaryError,
-    CodeNotaryUntrusted,
-    UpdaterError,
-    UpdaterJobError,
-)
+from .exceptions import UpdaterError, UpdaterJobError
 from .jobs.decorator import Job, JobCondition
-from .utils.codenotary import calc_checksum
 from .utils.common import FileConfiguration
 from .validate import SCHEMA_UPDATER_CONFIG
@@ -248,9 +242,10 @@ class Updater(FileConfiguration, CoreSysAttributes):
     @Job(
         name="updater_fetch_data",
         conditions=[
+            JobCondition.ARCHITECTURE_SUPPORTED,
             JobCondition.INTERNET_SYSTEM,
-            JobCondition.OS_SUPPORTED,
             JobCondition.HOME_ASSISTANT_CORE_SUPPORTED,
+            JobCondition.OS_SUPPORTED,
         ],
         on_condition=UpdaterJobError,
         throttle_period=timedelta(seconds=30),
@@ -289,19 +284,6 @@ class Updater(FileConfiguration, CoreSysAttributes):
             self.sys_bus.remove_listener(self._connectivity_listener)
             self._connectivity_listener = None

-        # Validate
-        try:
-            await self.sys_security.verify_own_content(calc_checksum(data))
-        except CodeNotaryUntrusted as err:
-            raise UpdaterError(
-                "Content-Trust is broken for the version file fetch!", _LOGGER.critical
-            ) from err
-        except CodeNotaryError as err:
-            raise UpdaterError(
-                f"CodeNotary error while processing version fetch: {err!s}",
-                _LOGGER.error,
-            ) from err
-
         # Parse data
         try:
             data = json.loads(data)

View File

@@ -1,109 +0,0 @@
-"""Small wrapper for CodeNotary."""
-from __future__ import annotations
-
-import asyncio
-import hashlib
-import json
-import logging
-from pathlib import Path
-import shlex
-from typing import Final
-
-from dirhash import dirhash
-
-from ..exceptions import CodeNotaryBackendError, CodeNotaryError, CodeNotaryUntrusted
-from . import clean_env
-
-_LOGGER: logging.Logger = logging.getLogger(__name__)
-
-_CAS_CMD: str = (
-    "cas authenticate --signerID {signer} --silent --output json --hash {sum}"
-)
-_CACHE: set[tuple[str, str]] = set()
-
-_ATTR_ERROR: Final = "error"
-_ATTR_STATUS: Final = "status"
-_FALLBACK_ERROR: Final = "Unknown CodeNotary backend issue"
-
-def calc_checksum(data: str | bytes) -> str:
-    """Generate checksum for CodeNotary."""
-    if isinstance(data, str):
-        return hashlib.sha256(data.encode()).hexdigest()
-    return hashlib.sha256(data).hexdigest()
-
-def calc_checksum_path_sourcecode(folder: Path) -> str:
-    """Calculate checksum for a path source code.
-
-    Need catch OSError.
-    """
-    return dirhash(folder.as_posix(), "sha256", match=["*.py"])
-
-# pylint: disable=unreachable
-async def cas_validate(
-    signer: str,
-    checksum: str,
-) -> None:
-    """Validate data against CodeNotary."""
-    return
-
-    if (checksum, signer) in _CACHE:
-        return
-
-    # Generate command for request
-    command = shlex.split(_CAS_CMD.format(signer=signer, sum=checksum))
-
-    # Request notary authorization
-    _LOGGER.debug("Send cas command: %s", command)
-    try:
-        proc = await asyncio.create_subprocess_exec(
-            *command,
-            stdin=asyncio.subprocess.DEVNULL,
-            stdout=asyncio.subprocess.PIPE,
-            stderr=asyncio.subprocess.PIPE,
-            env=clean_env(),
-        )
-
-        async with asyncio.timeout(15):
-            data, error = await proc.communicate()
-    except TimeoutError:
-        raise CodeNotaryBackendError(
-            "Timeout while processing CodeNotary", _LOGGER.warning
-        ) from None
-    except OSError as err:
-        raise CodeNotaryError(
-            f"CodeNotary fatal error: {err!s}", _LOGGER.critical
-        ) from err
-
-    # Check if Notarized
-    if proc.returncode != 0 and not data:
-        if error:
-            try:
-                error = error.decode("utf-8")
-            except UnicodeDecodeError as err:
-                raise CodeNotaryBackendError(_FALLBACK_ERROR, _LOGGER.warning) from err
-            if "not notarized" in error:
-                raise CodeNotaryUntrusted()
-        else:
-            error = _FALLBACK_ERROR
-        raise CodeNotaryBackendError(error, _LOGGER.warning)
-
-    # Parse data
-    try:
-        data_json = json.loads(data)
-        _LOGGER.debug("CodeNotary response with: %s", data_json)
-    except (json.JSONDecodeError, UnicodeDecodeError) as err:
-        raise CodeNotaryError(
-            f"Can't parse CodeNotary output: {data!s} - {err!s}", _LOGGER.error
-        ) from err
-
-    if _ATTR_ERROR in data_json:
-        raise CodeNotaryBackendError(data_json[_ATTR_ERROR], _LOGGER.warning)
-
-    if data_json[_ATTR_STATUS] == 0:
-        _CACHE.add((checksum, signer))
-    else:
-        raise CodeNotaryUntrusted()

View File

@@ -7,13 +7,7 @@ from collections.abc import Awaitable, Callable
 import logging
 from typing import Any, Protocol, cast

-from dbus_fast import (
-    ErrorType,
-    InvalidIntrospectionError,
-    Message,
-    MessageType,
-    Variant,
-)
+from dbus_fast import ErrorType, InvalidIntrospectionError, Message, MessageType
 from dbus_fast.aio.message_bus import MessageBus
 from dbus_fast.aio.proxy_object import ProxyInterface, ProxyObject
 from dbus_fast.errors import DBusError as DBusFastDBusError
@@ -265,7 +259,7 @@ class DBus:
         """
         async def sync_property_change(
-            prop_interface: str, changed: dict[str, Variant], invalidated: list[str]
+            prop_interface: str, changed: dict[str, Any], invalidated: list[str]
         ) -> None:
             """Sync property changes to cache."""
             if interface != prop_interface:

View File

@@ -5,12 +5,20 @@ from collections.abc import AsyncGenerator
 from datetime import UTC, datetime
 from functools import wraps
 import json
+import re

 from aiohttp import ClientResponse

 from supervisor.exceptions import MalformedBinaryEntryError
 from supervisor.host.const import LogFormatter

+_RE_ANSI_CSI_COLORS_PATTERN = re.compile(r"\x1B\[[0-9;]*m")
+
+def _strip_ansi_colors(message: str) -> str:
+    """Remove ANSI color codes from a message string."""
+    return _RE_ANSI_CSI_COLORS_PATTERN.sub("", message)
+
 def formatter(required_fields: list[str]):
     """Decorate journal entry formatters with list of required fields.
@@ -31,9 +39,9 @@ def formatter(required_fields: list[str]):
 @formatter(["MESSAGE"])
-def journal_plain_formatter(entries: dict[str, str]) -> str:
+def journal_plain_formatter(entries: dict[str, str], no_colors: bool = False) -> str:
     """Format parsed journal entries as a plain message."""
-    return entries["MESSAGE"]
+    return _strip_ansi_colors(entries["MESSAGE"]) if no_colors else entries["MESSAGE"]

 @formatter(
@@ -45,7 +53,7 @@ def journal_plain_formatter(entries: dict[str, str]) -> str:
         "MESSAGE",
     ]
 )
-def journal_verbose_formatter(entries: dict[str, str]) -> str:
+def journal_verbose_formatter(entries: dict[str, str], no_colors: bool = False) -> str:
     """Format parsed journal entries to a journalctl-like format."""
     ts = datetime.fromtimestamp(
         int(entries["__REALTIME_TIMESTAMP"]) / 1e6, UTC
@@ -58,14 +66,24 @@ def journal_verbose_formatter(entries: dict[str, str]) -> str:
         else entries.get("SYSLOG_IDENTIFIER", "_UNKNOWN_")
     )
-    return f"{ts} {entries.get('_HOSTNAME', '')} {identifier}: {entries.get('MESSAGE', '')}"
+    message = (
+        _strip_ansi_colors(entries.get("MESSAGE", ""))
+        if no_colors
+        else entries.get("MESSAGE", "")
+    )
+    return f"{ts} {entries.get('_HOSTNAME', '')} {identifier}: {message}"

 async def journal_logs_reader(
-    journal_logs: ClientResponse, log_formatter: LogFormatter = LogFormatter.PLAIN
+    journal_logs: ClientResponse,
+    log_formatter: LogFormatter = LogFormatter.PLAIN,
+    no_colors: bool = False,
 ) -> AsyncGenerator[tuple[str | None, str]]:
     """Read logs from systemd journal line by line, formatted using the given formatter.

+    Optionally strip ANSI color codes from the entries' messages.
+
     Returns a generator of (cursor, formatted_entry) tuples.
     """
     match log_formatter:
@@ -84,7 +102,10 @@ async def journal_logs_reader(
             # at EOF (likely race between at_eof and EOF check in readuntil)
             if line == b"\n" or not line:
                 if entries:
-                    yield entries.get("__CURSOR"), formatter_(entries)
+                    yield (
+                        entries.get("__CURSOR"),
+                        formatter_(entries, no_colors=no_colors),
+                    )
                     entries = {}
                 continue

View File

@@ -12,7 +12,6 @@ from .const import (
     ATTR_AUTO_UPDATE,
     ATTR_CHANNEL,
     ATTR_CLI,
-    ATTR_CONTENT_TRUST,
     ATTR_COUNTRY,
     ATTR_DEBUG,
     ATTR_DEBUG_BLOCK,
@@ -229,7 +228,6 @@ SCHEMA_INGRESS_CONFIG = vol.Schema(
 # pylint: disable=no-value-for-parameter
 SCHEMA_SECURITY_CONFIG = vol.Schema(
     {
-        vol.Optional(ATTR_CONTENT_TRUST, default=True): vol.Boolean(),
         vol.Optional(ATTR_PWNED, default=True): vol.Boolean(),
         vol.Optional(ATTR_FORCE_SECURITY, default=False): vol.Boolean(),
     },

View File

@@ -3,22 +3,25 @@
 import asyncio
 from datetime import timedelta
 import errno
+from http import HTTPStatus
 from pathlib import Path
-from unittest.mock import MagicMock, PropertyMock, patch
+from unittest.mock import MagicMock, PropertyMock, call, patch

+import aiodocker
 from awesomeversion import AwesomeVersion
-from docker.errors import DockerException, ImageNotFound, NotFound
+from docker.errors import APIError, DockerException, NotFound
 import pytest
 from securetar import SecureTarFile

 from supervisor.addons.addon import Addon
 from supervisor.addons.const import AddonBackupMode
 from supervisor.addons.model import AddonModel
+from supervisor.config import CoreConfig
 from supervisor.const import AddonBoot, AddonState, BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
-from supervisor.docker.manager import CommandReturn
+from supervisor.docker.manager import CommandReturn, DockerAPI
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
 from supervisor.hardware.helper import HwHelper
@@ -861,16 +864,14 @@ async def test_addon_loads_wrong_image(
     container.remove.assert_called_with(force=True, v=True)

     # one for removing the addon, one for removing the addon builder
-    assert coresys.docker.images.remove.call_count == 2
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": "local/aarch64-addon-ssh:latest",
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": "local/aarch64-addon-ssh:9.2.1",
-        "force": True,
-    }
+    assert coresys.docker.images.delete.call_count == 2
+    assert coresys.docker.images.delete.call_args_list[0] == call(
+        "local/aarch64-addon-ssh:latest", force=True
+    )
+    assert coresys.docker.images.delete.call_args_list[1] == call(
+        "local/aarch64-addon-ssh:9.2.1", force=True
+    )

     mock_run_command.assert_called_once()
     assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
     assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
@@ -894,7 +895,9 @@ async def test_addon_loads_missing_image(
     mock_amd64_arch_supported,
 ):
     """Test addon corrects a missing image on load."""
-    coresys.docker.images.get.side_effect = ImageNotFound("missing")
+    coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )

     with (
         patch("pathlib.Path.is_file", return_value=True),
@@ -926,41 +929,51 @@ async def test_addon_loads_missing_image(
     assert install_addon_ssh.image == "local/amd64-addon-ssh"

+@pytest.mark.parametrize(
+    "pull_image_exc",
+    [APIError("error"), aiodocker.DockerError(400, {"message": "error"})],
+)
+@pytest.mark.usefixtures("container", "mock_amd64_arch_supported")
 async def test_addon_load_succeeds_with_docker_errors(
     coresys: CoreSys,
     install_addon_ssh: Addon,
-    container: MagicMock,
     caplog: pytest.LogCaptureFixture,
-    mock_amd64_arch_supported,
+    pull_image_exc: Exception,
 ):
     """Docker errors while building/pulling an image during load should not raise and fail setup."""
     # Build env invalid failure
-    coresys.docker.images.get.side_effect = ImageNotFound("missing")
+    coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     caplog.clear()
     await install_addon_ssh.load()
     assert "Invalid build environment" in caplog.text

     # Image build failure
-    coresys.docker.images.build.side_effect = DockerException()
     caplog.clear()
     with (
         patch("pathlib.Path.is_file", return_value=True),
         patch.object(
-            type(coresys.config),
-            "local_to_extern_path",
-            return_value="/addon/path/on/host",
+            CoreConfig, "local_to_extern_path", return_value="/addon/path/on/host"
+        ),
+        patch.object(
+            DockerAPI,
+            "run_command",
+            return_value=MagicMock(exit_code=1, output=b"error"),
         ),
     ):
         await install_addon_ssh.load()
-    assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text
+    assert (
+        "Can't build local/amd64-addon-ssh:9.2.1: Docker build failed for local/amd64-addon-ssh:9.2.1 (exit code 1). Build output:\nerror"
+        in caplog.text
+    )

     # Image pull failure
     install_addon_ssh.data["image"] = "test/amd64-addon-ssh"
-    coresys.docker.images.build.reset_mock(side_effect=True)
-    coresys.docker.pull_image.side_effect = DockerException()
     caplog.clear()
-    await install_addon_ssh.load()
-    assert "Unknown error with test/amd64-addon-ssh:9.2.1" in caplog.text
+    with patch.object(DockerAPI, "pull_image", side_effect=pull_image_exc):
+        await install_addon_ssh.load()
+    assert "Can't install test/amd64-addon-ssh:9.2.1:" in caplog.text

 async def test_addon_manual_only_boot(coresys: CoreSys, install_addon_example: Addon):

View File

@@ -1,5 +1,8 @@
 """Test addon build."""

+import base64
+import json
+from pathlib import Path
 from unittest.mock import PropertyMock, patch

 from awesomeversion import AwesomeVersion
@@ -7,6 +10,7 @@ from awesomeversion import AwesomeVersion
 from supervisor.addons.addon import Addon
 from supervisor.addons.build import AddonBuild
 from supervisor.coresys import CoreSys
+from supervisor.docker.const import DOCKER_HUB

 from tests.common import is_in_list
@@ -29,7 +33,7 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
         ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
         )

     assert is_in_list(["--platform", "linux/amd64"], args["command"])
@@ -53,7 +57,7 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
         ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
         )

     assert is_in_list(["--file", "Dockerfile"], args["command"])
@@ -81,7 +85,7 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
         ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
        )

     assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
@@ -117,3 +121,158 @@ async def test_build_invalid(coresys: CoreSys, install_addon_ssh: Addon):
         ),
     ):
         assert not await build.is_valid()
async def test_docker_config_no_registries(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker config generation when no registries configured."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# No registries configured by default
assert build.get_docker_config_json() is None
async def test_docker_config_no_matching_registry(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker config generation when registry doesn't match base image."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure a registry that doesn't match the base image
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
"some.other.registry": {"username": "user", "password": "pass"}
}
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
):
# Base image is ghcr.io/home-assistant/... which doesn't match
assert build.get_docker_config_json() is None
async def test_docker_config_matching_registry(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker config generation when registry matches base image."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure ghcr.io registry which matches the default base image
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
"ghcr.io": {"username": "testuser", "password": "testpass"}
}
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
):
config_json = build.get_docker_config_json()
assert config_json is not None
config = json.loads(config_json)
assert "auths" in config
assert "ghcr.io" in config["auths"]
# Verify base64-encoded credentials
expected_auth = base64.b64encode(b"testuser:testpass").decode()
assert config["auths"]["ghcr.io"]["auth"] == expected_auth
async def test_docker_config_docker_hub(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker config generation for Docker Hub registry."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
# Configure Docker Hub registry
# pylint: disable-next=protected-access
coresys.docker.config._data["registries"] = {
DOCKER_HUB: {"username": "hubuser", "password": "hubpass"}
}
# Mock base_image to return a Docker Hub image (no registry prefix)
with patch.object(
type(build),
"base_image",
new=PropertyMock(return_value="library/alpine:latest"),
):
config_json = build.get_docker_config_json()
assert config_json is not None
config = json.loads(config_json)
# Docker Hub uses special URL as key
assert "https://index.docker.io/v1/" in config["auths"]
expected_auth = base64.b64encode(b"hubuser:hubpass").decode()
assert config["auths"]["https://index.docker.io/v1/"]["auth"] == expected_auth
async def test_docker_args_with_config_path(coresys: CoreSys, install_addon_ssh: Addon):
"""Test docker args include config volume when path provided."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
side_effect=lambda p: f"/extern{p}",
),
):
config_path = Path("/data/supervisor/tmp/config.json")
args = await coresys.run_in_executor(
build.get_docker_args,
AwesomeVersion("latest"),
"test-image:latest",
config_path,
)
# Check that config is mounted
assert "/extern/data/supervisor/tmp/config.json" in args["volumes"]
assert (
args["volumes"]["/extern/data/supervisor/tmp/config.json"]["bind"]
== "/root/.docker/config.json"
)
assert args["volumes"]["/extern/data/supervisor/tmp/config.json"]["mode"] == "ro"
async def test_docker_args_without_config_path(
coresys: CoreSys, install_addon_ssh: Addon
):
"""Test docker args don't include config volume when no path provided."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
),
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest", None
)
# Only docker socket and addon path should be mounted
assert len(args["volumes"]) == 2
# Verify no docker config mount
for bind in args["volumes"].values():
assert bind["bind"] != "/root/.docker/config.json"
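
The config-generation tests above all revolve around the same payload shape: a Docker `config.json` whose `auths` map holds base64-encoded `user:pass` credentials, keyed by registry hostname — or by the legacy index URL for Docker Hub. As a rough illustration (this mirrors what the assertions expect, not the actual Supervisor implementation; `build_docker_config_json` is a hypothetical helper):

```python
import base64
import json

# Docker Hub credentials are keyed by the legacy index URL rather than
# the registry hostname; other registries use their hostname directly.
DOCKER_HUB_AUTH_URL = "https://index.docker.io/v1/"


def build_docker_config_json(registry: str, username: str, password: str) -> str:
    """Return a Docker config.json string with base64 "user:pass" credentials."""
    key = DOCKER_HUB_AUTH_URL if registry == "docker.io" else registry
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {key: {"auth": auth}}})
```

Mounting such a file read-only at `/root/.docker/config.json` (as the volume tests check) is how the build container picks the credentials up.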

View File

@@ -4,7 +4,7 @@ import asyncio
 from collections.abc import AsyncGenerator, Generator
 from copy import deepcopy
 from pathlib import Path
-from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
+from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, call, patch

 from awesomeversion import AwesomeVersion
 import pytest
@@ -514,19 +514,13 @@ async def test_shared_image_kept_on_uninstall(
     latest = f"{install_addon_example.image}:latest"

     await coresys.addons.uninstall("local_example2")
-    coresys.docker.images.remove.assert_not_called()
+    coresys.docker.images.delete.assert_not_called()
     assert not coresys.addons.get("local_example2", local_only=True)

     await coresys.addons.uninstall("local_example")
-    assert coresys.docker.images.remove.call_count == 2
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": latest,
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": image,
-        "force": True,
-    }
+    assert coresys.docker.images.delete.call_count == 2
+    assert coresys.docker.images.delete.call_args_list[0] == call(latest, force=True)
+    assert coresys.docker.images.delete.call_args_list[1] == call(image, force=True)
     assert not coresys.addons.get("local_example", local_only=True)
@@ -554,19 +548,17 @@ async def test_shared_image_kept_on_update(
     assert example_2.version == "1.2.0"
     assert install_addon_example_image.version == "1.2.0"

-    image_new = MagicMock()
-    image_new.id = "image_new"
-    image_old = MagicMock()
-    image_old.id = "image_old"
-    docker.images.get.side_effect = [image_new, image_old]
+    image_new = {"Id": "image_new", "RepoTags": ["image_new:latest"]}
+    image_old = {"Id": "image_old", "RepoTags": ["image_old:latest"]}
+    docker.images.inspect.side_effect = [image_new, image_old]
     docker.images.list.return_value = [image_new, image_old]

     with patch.object(DockerAPI, "pull_image", return_value=image_new):
         await coresys.addons.update("local_example2")

-    docker.images.remove.assert_not_called()
+    docker.images.delete.assert_not_called()
     assert example_2.version == "1.3.0"

-    docker.images.get.side_effect = [image_new]
+    docker.images.inspect.side_effect = [image_new]
     await coresys.addons.update("local_example_image")
-    docker.images.remove.assert_called_once_with("image_old", force=True)
+    docker.images.delete.assert_called_once_with("image_old", force=True)
     assert install_addon_example_image.version == "1.3.0"

View File

@@ -1,95 +1 @@
 """Test for API calls."""
-
-from unittest.mock import AsyncMock, MagicMock
-
-from aiohttp.test_utils import TestClient
-
-from supervisor.coresys import CoreSys
-from supervisor.host.const import LogFormat
-
-DEFAULT_LOG_RANGE = "entries=:-99:100"
-DEFAULT_LOG_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
-
-
-async def common_test_api_advanced_logs(
-    path_prefix: str,
-    syslog_identifier: str,
-    api_client: TestClient,
-    journald_logs: MagicMock,
-    coresys: CoreSys,
-    os_available: None,
-):
-    """Template for tests of endpoints using advanced logs."""
-    resp = await api_client.get(f"{path_prefix}/logs")
-    assert resp.status == 200
-    assert resp.content_type == "text/plain"
-
-    journald_logs.assert_called_once_with(
-        params={"SYSLOG_IDENTIFIER": syslog_identifier},
-        range_header=DEFAULT_LOG_RANGE,
-        accept=LogFormat.JOURNAL,
-    )
-
-    journald_logs.reset_mock()
-
-    resp = await api_client.get(f"{path_prefix}/logs/follow")
-    assert resp.status == 200
-    assert resp.content_type == "text/plain"
-
-    journald_logs.assert_called_once_with(
-        params={"SYSLOG_IDENTIFIER": syslog_identifier, "follow": ""},
-        range_header=DEFAULT_LOG_RANGE_FOLLOW,
-        accept=LogFormat.JOURNAL,
-    )
-
-    journald_logs.reset_mock()
-
-    mock_response = MagicMock()
-    mock_response.text = AsyncMock(
-        return_value='{"CONTAINER_LOG_EPOCH": "12345"}\n{"CONTAINER_LOG_EPOCH": "12345"}\n'
-    )
-    journald_logs.return_value.__aenter__.return_value = mock_response
-
-    resp = await api_client.get(f"{path_prefix}/logs/latest")
-    assert resp.status == 200
-
-    assert journald_logs.call_count == 2
-
-    # Check the first call for getting epoch
-    epoch_call = journald_logs.call_args_list[0]
-    assert epoch_call[1]["params"] == {"CONTAINER_NAME": syslog_identifier}
-    assert epoch_call[1]["range_header"] == "entries=:-1:2"
-
-    # Check the second call for getting logs with the epoch
-    logs_call = journald_logs.call_args_list[1]
-    assert logs_call[1]["params"]["SYSLOG_IDENTIFIER"] == syslog_identifier
-    assert logs_call[1]["params"]["CONTAINER_LOG_EPOCH"] == "12345"
-    assert logs_call[1]["range_header"] == "entries=:0:18446744073709551615"
-
-    journald_logs.reset_mock()
-
-    resp = await api_client.get(f"{path_prefix}/logs/boots/0")
-    assert resp.status == 200
-    assert resp.content_type == "text/plain"
-
-    journald_logs.assert_called_once_with(
-        params={"SYSLOG_IDENTIFIER": syslog_identifier, "_BOOT_ID": "ccc"},
-        range_header=DEFAULT_LOG_RANGE,
-        accept=LogFormat.JOURNAL,
-    )
-
-    journald_logs.reset_mock()
-
-    resp = await api_client.get(f"{path_prefix}/logs/boots/0/follow")
-    assert resp.status == 200
-    assert resp.content_type == "text/plain"
-
-    journald_logs.assert_called_once_with(
-        params={
-            "SYSLOG_IDENTIFIER": syslog_identifier,
-            "_BOOT_ID": "ccc",
-            "follow": "",
-        },
-        range_header=DEFAULT_LOG_RANGE_FOLLOW,
-        accept=LogFormat.JOURNAL,
-    )

tests/api/conftest.py (new file, 149 lines)
View File

@@ -0,0 +1,149 @@
"""Fixtures for API tests."""
from collections.abc import Awaitable, Callable
from unittest.mock import ANY, AsyncMock, MagicMock
from aiohttp.test_utils import TestClient
import pytest
from supervisor.coresys import CoreSys
from supervisor.host.const import LogFormat, LogFormatter
DEFAULT_LOG_RANGE = "entries=:-99:100"
DEFAULT_LOG_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
async def _common_test_api_advanced_logs(
path_prefix: str,
syslog_identifier: str,
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available: None,
journal_logs_reader: MagicMock,
):
"""Template for tests of endpoints using advanced logs."""
resp = await api_client.get(f"{path_prefix}/logs")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, False)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs?no_colors")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, True)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "follow": ""},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, False)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
mock_response = MagicMock()
mock_response.text = AsyncMock(
return_value='{"CONTAINER_LOG_EPOCH": "12345"}\n{"CONTAINER_LOG_EPOCH": "12345"}\n'
)
journald_logs.return_value.__aenter__.return_value = mock_response
resp = await api_client.get(f"{path_prefix}/logs/latest")
assert resp.status == 200
assert journald_logs.call_count == 2
# Check the first call for getting epoch
epoch_call = journald_logs.call_args_list[0]
assert epoch_call[1]["params"] == {"CONTAINER_NAME": syslog_identifier}
assert epoch_call[1]["range_header"] == "entries=:-1:2"
# Check the second call for getting logs with the epoch
logs_call = journald_logs.call_args_list[1]
assert logs_call[1]["params"]["SYSLOG_IDENTIFIER"] == syslog_identifier
assert logs_call[1]["params"]["CONTAINER_LOG_EPOCH"] == "12345"
assert logs_call[1]["range_header"] == "entries=:0:18446744073709551615"
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, True)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "_BOOT_ID": "ccc"},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={
"SYSLOG_IDENTIFIER": syslog_identifier,
"_BOOT_ID": "ccc",
"follow": "",
},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)
@pytest.fixture
async def advanced_logs_tester(
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
journal_logs_reader: MagicMock,
) -> Callable[[str, str], Awaitable[None]]:
"""Fixture that returns a function to test advanced logs endpoints.
This allows tests to avoid explicitly passing all the required fixtures.
Usage:
async def test_my_logs(advanced_logs_tester):
await advanced_logs_tester("/path/prefix", "syslog_identifier")
"""
async def test_logs(path_prefix: str, syslog_identifier: str):
await _common_test_api_advanced_logs(
path_prefix,
syslog_identifier,
api_client,
journald_logs,
coresys,
os_available,
journal_logs_reader,
)
return test_logs
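
The `entries=:-99:100` constants above follow systemd-journal-gatewayd's `Range` header format, `entries=cursor[[:num_skip]:num_entries]`: an empty cursor with a negative skip addresses entries relative to the journal tail. A tiny helper sketching how such values are assembled (`journal_range` is an illustrative name, not a Supervisor function):

```python
def journal_range(num_skip: int, num_entries: int, cursor: str = "") -> str:
    """Build an "entries=" Range header value for systemd-journal-gatewayd.

    Format: entries=cursor[[:num_skip]:num_entries]. An empty cursor with a
    negative num_skip counts backwards from the end of the journal.
    """
    return f"entries={cursor}:{num_skip}:{num_entries}"


# "entries=:-99:100": start 99 entries before the tail, return up to 100 entries
DEFAULT_LOG_RANGE = journal_range(-99, 100)
```

The follow variant simply requests the maximum entry count (`2**64 - 1 = 18446744073709551615`) so the stream never runs out of its window.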

View File

@@ -20,7 +20,6 @@ from supervisor.exceptions import HassioError
 from supervisor.store.repository import Repository

 from ..const import TEST_ADDON_SLUG
-from . import common_test_api_advanced_logs


 def _create_test_event(name: str, state: ContainerState) -> DockerContainerStateEvent:
@@ -72,21 +71,11 @@ async def test_addons_info_not_installed(


 async def test_api_addon_logs(
-    api_client: TestClient,
-    journald_logs: MagicMock,
-    coresys: CoreSys,
-    os_available,
+    advanced_logs_tester,
     install_addon_ssh: Addon,
 ):
     """Test addon logs."""
-    await common_test_api_advanced_logs(
-        "/addons/local_ssh",
-        "addon_local_ssh",
-        api_client,
-        journald_logs,
-        coresys,
-        os_available,
-    )
+    await advanced_logs_tester("/addons/local_ssh", "addon_local_ssh")


 async def test_api_addon_logs_not_installed(api_client: TestClient):

View File

@@ -1,18 +1,6 @@
 """Test audio api."""

-from unittest.mock import MagicMock
-
-from aiohttp.test_utils import TestClient
-
-from supervisor.coresys import CoreSys
-from tests.api import common_test_api_advanced_logs
-
-
-async def test_api_audio_logs(
-    api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
-):
+
+async def test_api_audio_logs(advanced_logs_tester) -> None:
     """Test audio logs."""
-    await common_test_api_advanced_logs(
-        "/audio", "hassio_audio", api_client, journald_logs, coresys, os_available
-    )
+    await advanced_logs_tester("/audio", "hassio_audio")

View File

@@ -1,13 +1,12 @@
 """Test DNS API."""

-from unittest.mock import MagicMock, patch
+from unittest.mock import patch

 from aiohttp.test_utils import TestClient

 from supervisor.coresys import CoreSys
 from supervisor.dbus.resolved import Resolved
-from tests.api import common_test_api_advanced_logs
 from tests.dbus_service_mocks.base import DBusServiceMock
 from tests.dbus_service_mocks.resolved import Resolved as ResolvedService
@@ -66,15 +65,6 @@ async def test_options(api_client: TestClient, coresys: CoreSys):
     restart.assert_called_once()


-async def test_api_dns_logs(
-    api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
-):
+async def test_api_dns_logs(advanced_logs_tester):
     """Test dns logs."""
-    await common_test_api_advanced_logs(
-        "/dns",
-        "hassio_dns",
-        api_client,
-        journald_logs,
-        coresys,
-        os_available,
-    )
+    await advanced_logs_tester("/dns", "hassio_dns")

View File

@@ -4,6 +4,11 @@ from aiohttp.test_utils import TestClient
 import pytest

 from supervisor.coresys import CoreSys
+from supervisor.resolution.const import ContextType, IssueType, SuggestionType
+from supervisor.resolution.data import Issue, Suggestion
+from tests.dbus_service_mocks.agent_system import System as SystemService
+from tests.dbus_service_mocks.base import DBusServiceMock


 @pytest.mark.asyncio
@@ -84,3 +89,79 @@ async def test_registry_not_found(api_client: TestClient):
     assert resp.status == 404
     body = await resp.json()
     assert body["message"] == "Hostname bad does not exist in registries"
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver(
api_client: TestClient,
coresys: CoreSys,
os_agent_services: dict[str, DBusServiceMock],
os_available,
):
"""Test Docker storage driver migration."""
system_service: SystemService = os_agent_services["agent_system"]
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlayfs",)]
assert (
Issue(IssueType.REBOOT_REQUIRED, ContextType.SYSTEM)
in coresys.resolution.issues
)
assert (
Suggestion(SuggestionType.EXECUTE_REBOOT, ContextType.SYSTEM)
in coresys.resolution.suggestions
)
# Test migration back to overlay2 (graph driver)
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlay2"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlay2",)]
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver_invalid_backend(
api_client: TestClient,
os_available,
):
"""Test 400 is returned for invalid storage driver."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "invalid"},
)
assert resp.status == 400
async def test_api_migrate_docker_storage_driver_not_os(
api_client: TestClient,
coresys: CoreSys,
):
"""Test 404 is returned if not running on HAOS."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404
@pytest.mark.parametrize("os_available", ["16.2"], indirect=True)
async def test_api_migrate_docker_storage_driver_old_os(
api_client: TestClient,
coresys: CoreSys,
os_available,
):
"""Test 404 is returned if OS is older than 17.0."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404

View File

@@ -18,26 +18,18 @@ from supervisor.homeassistant.const import WSEvent
 from supervisor.homeassistant.core import HomeAssistantCore
 from supervisor.homeassistant.module import HomeAssistant

-from tests.api import common_test_api_advanced_logs
-from tests.common import load_json_fixture
+from tests.common import AsyncIterator, load_json_fixture


 @pytest.mark.parametrize("legacy_route", [True, False])
 async def test_api_core_logs(
-    api_client: TestClient,
-    journald_logs: MagicMock,
-    coresys: CoreSys,
-    os_available,
+    advanced_logs_tester: AsyncMock,
     legacy_route: bool,
 ):
     """Test core logs."""
-    await common_test_api_advanced_logs(
+    await advanced_logs_tester(
         f"/{'homeassistant' if legacy_route else 'core'}",
         "homeassistant",
-        api_client,
-        journald_logs,
-        coresys,
-        os_available,
     )
@@ -283,9 +275,9 @@ async def test_api_progress_updates_home_assistant_update(
     """Test progress updates sent to Home Assistant for updates."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     coresys.core.set_state(CoreState.RUNNING)
-    coresys.docker.docker.api.pull.return_value = load_json_fixture(
-        "docker_pull_image_log.json"
-    )
+    logs = load_json_fixture("docker_pull_image_log.json")
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)

     coresys.homeassistant.version = AwesomeVersion("2025.8.0")

     with (
@@ -331,29 +323,29 @@ async def test_api_progress_updates_home_assistant_update(
         },
         {
             "stage": None,
-            "progress": 1.2,
+            "progress": 1.7,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 2.8,
+            "progress": 4.0,
             "done": False,
         },
     ]
     assert events[-5:] == [
         {
             "stage": None,
-            "progress": 97.2,
+            "progress": 98.2,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 98.4,
+            "progress": 98.3,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 99.4,
+            "progress": 99.3,
             "done": False,
         },
         {
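
The updated expected values in this test come from the scaling described in the commit messages: weighted layer progress is additionally multiplied by the fraction of layers that have reported size information, so a few tiny layers that finish instantly cannot dominate the total. A minimal sketch of that idea, mirroring the described behaviour rather than the exact Supervisor code:

```python
def scaled_progress(layer_progress: list[float], total_layers: int) -> float:
    """Return overall pull progress (%) scaled by the fraction of reporting layers.

    layer_progress holds the per-layer progress (0-100) of layers that have
    reported size information so far; total_layers is the full layer count.
    """
    if not layer_progress or total_layers <= 0:
        return 0.0
    weighted = sum(layer_progress) / len(layer_progress)
    # Scale by reported/total so early small layers don't inflate the bar.
    return weighted * (len(layer_progress) / total_layers)
```

With 2 of 25 layers reporting at 70%, this yields 70 * 2/25 = 5.6% — the example from the commit message; once all layers report, the scaling factor is 1 and no correction is applied.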

View File

@@ -272,7 +272,7 @@ async def test_advaced_logs_query_parameters(
         range_header=DEFAULT_RANGE,
         accept=LogFormat.JOURNAL,
     )
-    journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE)
+    journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, False)

     journal_logs_reader.reset_mock()
     journald_logs.reset_mock()
@@ -290,7 +290,19 @@ async def test_advaced_logs_query_parameters(
         range_header="entries=:-52:53",
         accept=LogFormat.JOURNAL,
     )
-    journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE)
+    journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, False)
+
+    journal_logs_reader.reset_mock()
+    journald_logs.reset_mock()
+
+    # Check no_colors query parameter
+    await api_client.get("/host/logs?no_colors")
+    journald_logs.assert_called_once_with(
+        params={"SYSLOG_IDENTIFIER": coresys.host.logs.default_identifiers},
+        range_header=DEFAULT_RANGE,
+        accept=LogFormat.JOURNAL,
+    )
+    journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, True)


 async def test_advanced_logs_boot_id_offset(
@@ -343,24 +355,24 @@ async def test_advanced_logs_formatters(
     """Test advanced logs formatters varying on Accept header."""

     await api_client.get("/host/logs")
-    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
+    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)

     journal_logs_reader.reset_mock()

     headers = {"Accept": "text/x-log"}
     await api_client.get("/host/logs", headers=headers)
-    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
+    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)

     journal_logs_reader.reset_mock()

     await api_client.get("/host/logs/identifiers/test")
-    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.PLAIN)
+    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.PLAIN, False)

     journal_logs_reader.reset_mock()

     headers = {"Accept": "text/x-log"}
     await api_client.get("/host/logs/identifiers/test", headers=headers)
-    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
+    journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)


 async def test_advanced_logs_errors(coresys: CoreSys, api_client: TestClient):

View File

@@ -1,12 +1,28 @@
 """Test ingress API."""

-from unittest.mock import AsyncMock, patch
+from collections.abc import AsyncGenerator
+from unittest.mock import AsyncMock, MagicMock, patch

-from aiohttp.test_utils import TestClient
+import aiohttp
+from aiohttp import hdrs, web
+from aiohttp.test_utils import TestClient, TestServer
+import pytest

+from supervisor.addons.addon import Addon
 from supervisor.coresys import CoreSys


+@pytest.fixture(name="real_websession")
+async def fixture_real_websession(
+    coresys: CoreSys,
+) -> AsyncGenerator[aiohttp.ClientSession]:
+    """Fixture for real aiohttp ClientSession for ingress proxy tests."""
+    session = aiohttp.ClientSession()
+    coresys._websession = session  # pylint: disable=W0212
+    yield session
+    await session.close()
+
+
 async def test_validate_session(api_client: TestClient, coresys: CoreSys):
     """Test validating ingress session."""
     with patch("aiohttp.web_request.BaseRequest.__getitem__", return_value=None):
@@ -86,3 +102,126 @@ async def test_validate_session_with_user_id(
     assert (
         coresys.ingress.get_session_data(session).user.display_name == "Some Name"
     )
+
+
+async def test_ingress_proxy_no_content_type_for_empty_body_responses(
+    api_client: TestClient, coresys: CoreSys, real_websession: aiohttp.ClientSession
+):
+    """Test that empty body responses don't get Content-Type header."""
+
+    # Create a mock add-on backend server that returns various status codes
+    async def mock_addon_handler(request: web.Request) -> web.Response:
+        """Mock add-on handler that returns different status codes based on path."""
+        path = request.path
+        if path == "/204":
+            # 204 No Content - should not have Content-Type
+            return web.Response(status=204)
+        elif path == "/304":
+            # 304 Not Modified - should not have Content-Type
+            return web.Response(status=304)
+        elif path == "/100":
+            # 100 Continue - should not have Content-Type
+            return web.Response(status=100)
+        elif path == "/head":
+            # HEAD request - should have Content-Type (same as GET would)
+            return web.Response(body=b"test", content_type="text/html")
+        elif path == "/200":
+            # 200 OK with body - should have Content-Type
+            return web.Response(body=b"test content", content_type="text/plain")
+        elif path == "/200-no-content-type":
+            # 200 OK without explicit Content-Type - should get default
+            return web.Response(body=b"test content")
+        elif path == "/200-json":
+            # 200 OK with JSON - should preserve Content-Type
+            return web.Response(
+                body=b'{"key": "value"}', content_type="application/json"
+            )
+        else:
+            return web.Response(body=b"default", content_type="text/html")
+
+    # Create test server for mock add-on
+    app = web.Application()
+    app.router.add_route("*", "/{tail:.*}", mock_addon_handler)
+    addon_server = TestServer(app)
+    await addon_server.start_server()
+
+    try:
+        # Create ingress session
+        resp = await api_client.post("/ingress/session")
+        result = await resp.json()
+        session = result["data"]["session"]
+
+        # Create a mock add-on
+        mock_addon = MagicMock(spec=Addon)
+        mock_addon.slug = "test_addon"
+        mock_addon.ip_address = addon_server.host
+        mock_addon.ingress_port = addon_server.port
+        mock_addon.ingress_stream = False
+
+        # Generate an ingress token and register the add-on
+        ingress_token = coresys.ingress.create_session()
+
+        with patch.object(coresys.ingress, "get", return_value=mock_addon):
+            # Test 204 No Content - should NOT have Content-Type
+            resp = await api_client.get(
+                f"/ingress/{ingress_token}/204",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 204
+            assert hdrs.CONTENT_TYPE not in resp.headers
+
+            # Test 304 Not Modified - should NOT have Content-Type
+            resp = await api_client.get(
+                f"/ingress/{ingress_token}/304",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 304
+            assert hdrs.CONTENT_TYPE not in resp.headers
+
+            # Test HEAD request - SHOULD have Content-Type (same as GET)
+            # per RFC 9110: HEAD should return same headers as GET
+            resp = await api_client.head(
+                f"/ingress/{ingress_token}/head",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 200
+            assert hdrs.CONTENT_TYPE in resp.headers
+            assert "text/html" in resp.headers[hdrs.CONTENT_TYPE]
+            # Body should be empty for HEAD
+            body = await resp.read()
+            assert body == b""
+
+            # Test 200 OK with body - SHOULD have Content-Type
+            resp = await api_client.get(
+                f"/ingress/{ingress_token}/200",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 200
+            assert hdrs.CONTENT_TYPE in resp.headers
+            assert resp.headers[hdrs.CONTENT_TYPE] == "text/plain"
+            body = await resp.read()
+            assert body == b"test content"
+
+            # Test 200 OK without explicit Content-Type - SHOULD get default
+            resp = await api_client.get(
+                f"/ingress/{ingress_token}/200-no-content-type",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 200
+            assert hdrs.CONTENT_TYPE in resp.headers
+            # Should get application/octet-stream as default from aiohttp ClientResponse
+            assert "application/octet-stream" in resp.headers[hdrs.CONTENT_TYPE]
+
+            # Test 200 OK with JSON - SHOULD preserve Content-Type
+            resp = await api_client.get(
+                f"/ingress/{ingress_token}/200-json",
+                cookies={"ingress_session": session},
+            )
+            assert resp.status == 200
+            assert hdrs.CONTENT_TYPE in resp.headers
+            assert "application/json" in resp.headers[hdrs.CONTENT_TYPE]
+            body = await resp.read()
+            assert body == b'{"key": "value"}'
+    finally:
+        await addon_server.close()

View File

@@ -1,23 +1,6 @@
"""Test multicast api.""" """Test multicast api."""
from unittest.mock import MagicMock
from aiohttp.test_utils import TestClient async def test_api_multicast_logs(advanced_logs_tester):
from supervisor.coresys import CoreSys
from tests.api import common_test_api_advanced_logs
async def test_api_multicast_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
"""Test multicast logs.""" """Test multicast logs."""
await common_test_api_advanced_logs( await advanced_logs_tester("/multicast", "hassio_multicast")
"/multicast",
"hassio_multicast",
api_client,
journald_logs,
coresys,
os_available,
)

View File

@@ -17,16 +17,6 @@ async def test_api_security_options_force_security(api_client, coresys: CoreSys)
     assert coresys.security.force


-@pytest.mark.asyncio
-async def test_api_security_options_content_trust(api_client, coresys: CoreSys):
-    """Test security options content trust."""
-    assert coresys.security.content_trust
-    await api_client.post("/security/options", json={"content_trust": False})
-    assert not coresys.security.content_trust
-
-
 @pytest.mark.asyncio
 async def test_api_security_options_pwned(api_client, coresys: CoreSys):
     """Test security options pwned."""
@@ -41,11 +31,8 @@ async def test_api_security_options_pwned(api_client, coresys: CoreSys):
 async def test_api_integrity_check(
     api_client, coresys: CoreSys, supervisor_internet: AsyncMock
 ):
-    """Test security integrity check."""
-    coresys.security.content_trust = False
+    """Test security integrity check - now deprecated."""

     resp = await api_client.post("/security/integrity")
-    result = await resp.json()

-    assert result["data"]["core"] == "untested"
-    assert result["data"]["supervisor"] == "untested"
+    # CodeNotary integrity check has been removed, should return 410 Gone
+    assert resp.status == 410

View File

@@ -24,7 +24,7 @@ from supervisor.homeassistant.module import HomeAssistant
 from supervisor.store.addon import AddonStore
 from supervisor.store.repository import Repository

-from tests.common import load_json_fixture
+from tests.common import AsyncIterator, load_json_fixture
 from tests.const import TEST_ADDON_SLUG

 REPO_URL = "https://github.com/awesome-developer/awesome-repo"
@@ -732,9 +732,10 @@ async def test_api_progress_updates_addon_install_update(
     """Test progress updates sent to Home Assistant for installs/updates."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     coresys.core.set_state(CoreState.RUNNING)
-    coresys.docker.docker.api.pull.return_value = load_json_fixture(
-        "docker_pull_image_log.json"
-    )
+
+    logs = load_json_fixture("docker_pull_image_log.json")
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)
     coresys.arch._supported_arch = ["amd64"]  # pylint: disable=protected-access

     install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
@@ -772,29 +773,29 @@ async def test_api_progress_updates_addon_install_update(
         },
         {
             "stage": None,
-            "progress": 1.2,
+            "progress": 1.7,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 2.8,
+            "progress": 4.0,
             "done": False,
         },
     ]
     assert events[-5:] == [
         {
             "stage": None,
-            "progress": 97.2,
+            "progress": 98.2,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 98.4,
+            "progress": 98.3,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 99.4,
+            "progress": 99.3,
             "done": False,
         },
         {

View File

@@ -18,8 +18,7 @@ from supervisor.store.repository import Repository
 from supervisor.supervisor import Supervisor
 from supervisor.updater import Updater

-from tests.api import common_test_api_advanced_logs
-from tests.common import load_json_fixture
+from tests.common import AsyncIterator, load_json_fixture
 from tests.dbus_service_mocks.base import DBusServiceMock
 from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -155,18 +154,9 @@ async def test_api_supervisor_options_diagnostics(
     assert coresys.dbus.agent.diagnostics is False


-async def test_api_supervisor_logs(
-    api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
-):
+async def test_api_supervisor_logs(advanced_logs_tester):
     """Test supervisor logs."""
-    await common_test_api_advanced_logs(
-        "/supervisor",
-        "hassio_supervisor",
-        api_client,
-        journald_logs,
-        coresys,
-        os_available,
-    )
+    await advanced_logs_tester("/supervisor", "hassio_supervisor")


 async def test_api_supervisor_fallback(
@@ -332,9 +322,9 @@ async def test_api_progress_updates_supervisor_update(
     """Test progress updates sent to Home Assistant for updates."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     coresys.core.set_state(CoreState.RUNNING)
-    coresys.docker.docker.api.pull.return_value = load_json_fixture(
-        "docker_pull_image_log.json"
-    )
+
+    logs = load_json_fixture("docker_pull_image_log.json")
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)

     with (
         patch.object(
@@ -381,29 +371,29 @@ async def test_api_progress_updates_supervisor_update(
         },
         {
             "stage": None,
-            "progress": 1.2,
+            "progress": 1.7,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 2.8,
+            "progress": 4.0,
             "done": False,
         },
     ]
     assert events[-5:] == [
         {
             "stage": None,
-            "progress": 97.2,
+            "progress": 98.2,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 98.4,
+            "progress": 98.3,
             "done": False,
         },
         {
             "stage": None,
-            "progress": 99.4,
+            "progress": 99.3,
             "done": False,
         },
         {

View File

@@ -1,13 +1,14 @@
"""Common test functions.""" """Common test functions."""
import asyncio import asyncio
from collections.abc import Sequence
from datetime import datetime from datetime import datetime
from functools import partial from functools import partial
from importlib import import_module from importlib import import_module
from inspect import getclosurevars from inspect import getclosurevars
import json import json
from pathlib import Path from pathlib import Path
from typing import Any from typing import Any, Self
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
@@ -145,3 +146,22 @@ class MockResponse:
async def __aexit__(self, exc_type, exc, tb): async def __aexit__(self, exc_type, exc, tb):
"""Exit the context manager.""" """Exit the context manager."""
class AsyncIterator:
"""Make list/fixture into async iterator for test mocks."""
def __init__(self, seq: Sequence[Any]) -> None:
"""Initialize with sequence."""
self.iter = iter(seq)
def __aiter__(self) -> Self:
"""Implement aiter."""
return self
async def __anext__(self) -> Any:
"""Return next in sequence."""
try:
return next(self.iter)
except StopIteration:
raise StopAsyncIteration() from None
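
The `AsyncIterator` helper added above can be exercised on its own. The sketch below is a standalone illustration (the `consume` coroutine and the sample pull-log entries are invented for the example, not from the repository): wrapping a plain list this way is what lets a mock's return value be driven with `async for`, the same way the tests feed recorded `images.pull` progress entries to the consumer.

```python
import asyncio
from collections.abc import Sequence
from typing import Any


class AsyncIterator:
    """Wrap a plain sequence so it can stand in for an async iterator."""

    def __init__(self, seq: Sequence[Any]) -> None:
        self.iter = iter(seq)

    def __aiter__(self) -> "AsyncIterator":
        return self

    async def __anext__(self) -> Any:
        try:
            return next(self.iter)
        except StopIteration:
            # Translate sync exhaustion into async exhaustion
            raise StopAsyncIteration() from None


async def consume() -> list[dict]:
    """Collect entries the way a pull-progress consumer would."""
    entries = []
    async for entry in AsyncIterator(
        [{"status": "Pulling fs layer"}, {"status": "Download complete"}]
    ):
        entries.append(entry)
    return entries


print(asyncio.run(consume()))
# → [{'status': 'Pulling fs layer'}, {'status': 'Download complete'}]
```

Since the class only implements `__aiter__`/`__anext__`, it works anywhere an async iterable is expected, which is why a single `MagicMock` attribute like `pull.return_value = AsyncIterator(logs)` is enough to fake a streaming Docker pull.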

View File

@@ -9,6 +9,7 @@ import subprocess
 from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
 from uuid import uuid4

+from aiodocker.docker import DockerImages
 from aiohttp import ClientSession, web
 from aiohttp.test_utils import TestClient
 from awesomeversion import AwesomeVersion
@@ -55,6 +56,7 @@ from supervisor.store.repository import Repository
 from supervisor.utils.dt import utcnow

 from .common import (
+    AsyncIterator,
     MockResponse,
     load_binary_fixture,
     load_fixture,
@@ -112,40 +114,46 @@ async def supervisor_name() -> None:
 @pytest.fixture
 async def docker() -> DockerAPI:
     """Mock DockerAPI."""
-    images = [MagicMock(tags=["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"])]
-    image = MagicMock()
-    image.attrs = {"Os": "linux", "Architecture": "amd64"}
+    image_inspect = {
+        "Os": "linux",
+        "Architecture": "amd64",
+        "Id": "test123",
+        "RepoTags": ["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"],
+    }

     with (
         patch("supervisor.docker.manager.DockerClient", return_value=MagicMock()),
-        patch("supervisor.docker.manager.DockerAPI.images", return_value=MagicMock()),
         patch(
             "supervisor.docker.manager.DockerAPI.containers", return_value=MagicMock()
         ),
-        patch(
-            "supervisor.docker.manager.DockerAPI.api",
-            return_value=(api_mock := MagicMock()),
-        ),
-        patch("supervisor.docker.manager.DockerAPI.images.get", return_value=image),
-        patch("supervisor.docker.manager.DockerAPI.images.list", return_value=images),
-        patch(
-            "supervisor.docker.manager.DockerAPI.info",
-            return_value=MagicMock(),
-        ),
+        patch("supervisor.docker.manager.DockerAPI.api", return_value=MagicMock()),
+        patch("supervisor.docker.manager.DockerAPI.info", return_value=MagicMock()),
         patch("supervisor.docker.manager.DockerAPI.unload"),
+        patch("supervisor.docker.manager.aiodocker.Docker", return_value=MagicMock()),
+        patch(
+            "supervisor.docker.manager.DockerAPI.images",
+            new=PropertyMock(
+                return_value=(docker_images := MagicMock(spec=DockerImages))
+            ),
+        ),
     ):
         docker_obj = await DockerAPI(MagicMock()).post_init()
         docker_obj.config._data = {"registries": {}}
         with patch("supervisor.docker.monitor.DockerMonitor.load"):
             await docker_obj.load()

+        docker_images.inspect.return_value = image_inspect
+        docker_images.list.return_value = [image_inspect]
+        docker_images.import_image.return_value = [
+            {"stream": "Loaded image: test:latest\n"}
+        ]
+        docker_images.pull.return_value = AsyncIterator([{}])
+
         docker_obj.info.logging = "journald"
         docker_obj.info.storage = "overlay2"
         docker_obj.info.version = AwesomeVersion("1.0.0")
-        # Need an iterable for logs
-        api_mock.pull.return_value = []

         yield docker_obj
@@ -838,11 +846,9 @@ async def container(docker: DockerAPI) -> MagicMock:
     """Mock attrs and status for container on attach."""
     docker.containers.get.return_value = addon = MagicMock()
     docker.containers.create.return_value = addon
-    docker.images.build.return_value = (addon, "")
     addon.status = "stopped"
     addon.attrs = {"State": {"ExitCode": 0}}
-    with patch.object(DockerAPI, "pull_image", return_value=addon):
-        yield addon
+    yield addon


 @pytest.fixture

View File

@@ -184,3 +184,20 @@ async def test_interface_becomes_unmanaged(
     assert wireless.is_connected is False
     assert eth0.connection is None
     assert connection.is_connected is False
+
+
+async def test_unknown_device_type(
+    device_eth0_service: DeviceService, dbus_session_bus: MessageBus
+):
+    """Test unknown device types are handled gracefully."""
+    interface = NetworkInterface("/org/freedesktop/NetworkManager/Devices/1")
+    await interface.connect(dbus_session_bus)
+
+    # Emit an unknown device type (e.g., 1000 which doesn't exist in the enum)
+    device_eth0_service.emit_properties_changed({"DeviceType": 1000})
+    await device_eth0_service.ping()
+
+    # Should return UNKNOWN instead of crashing
+    assert interface.type == DeviceType.UNKNOWN
+    # Wireless should be None since it's not a wireless device
+    assert interface.wireless is None

View File

@@ -41,51 +41,51 @@ async def test_dbus_resolved_info(
     assert resolved.dns_over_tls == DNSOverTLSEnabled.NO

     assert len(resolved.dns) == 2
-    assert resolved.dns[0] == [0, 2, inet_aton("127.0.0.1")]
-    assert resolved.dns[1] == [0, 10, inet_pton(AF_INET6, "::1")]
+    assert resolved.dns[0] == (0, 2, inet_aton("127.0.0.1"))
+    assert resolved.dns[1] == (0, 10, inet_pton(AF_INET6, "::1"))
     assert len(resolved.dns_ex) == 2
-    assert resolved.dns_ex[0] == [0, 2, inet_aton("127.0.0.1"), 0, ""]
-    assert resolved.dns_ex[1] == [0, 10, inet_pton(AF_INET6, "::1"), 0, ""]
+    assert resolved.dns_ex[0] == (0, 2, inet_aton("127.0.0.1"), 0, "")
+    assert resolved.dns_ex[1] == (0, 10, inet_pton(AF_INET6, "::1"), 0, "")
     assert len(resolved.fallback_dns) == 2
-    assert resolved.fallback_dns[0] == [0, 2, inet_aton("1.1.1.1")]
-    assert resolved.fallback_dns[1] == [
+    assert resolved.fallback_dns[0] == (0, 2, inet_aton("1.1.1.1"))
+    assert resolved.fallback_dns[1] == (
         0,
         10,
         inet_pton(AF_INET6, "2606:4700:4700::1111"),
-    ]
+    )
     assert len(resolved.fallback_dns_ex) == 2
-    assert resolved.fallback_dns_ex[0] == [
+    assert resolved.fallback_dns_ex[0] == (
         0,
         2,
         inet_aton("1.1.1.1"),
         0,
         "cloudflare-dns.com",
-    ]
-    assert resolved.fallback_dns_ex[1] == [
+    )
+    assert resolved.fallback_dns_ex[1] == (
         0,
         10,
         inet_pton(AF_INET6, "2606:4700:4700::1111"),
         0,
         "cloudflare-dns.com",
-    ]
-    assert resolved.current_dns_server == [0, 2, inet_aton("127.0.0.1")]
-    assert resolved.current_dns_server_ex == [
+    )
+    assert resolved.current_dns_server == (0, 2, inet_aton("127.0.0.1"))
+    assert resolved.current_dns_server_ex == (
         0,
         2,
         inet_aton("127.0.0.1"),
         0,
         "",
-    ]
+    )
     assert len(resolved.domains) == 1
-    assert resolved.domains[0] == [0, "local.hass.io", False]
-    assert resolved.transaction_statistics == [0, 100000]
-    assert resolved.cache_statistics == [10, 50000, 10000]
+    assert resolved.domains[0] == (0, "local.hass.io", False)
+    assert resolved.transaction_statistics == (0, 100000)
+    assert resolved.cache_statistics == (10, 50000, 10000)
     assert resolved.dnssec == DNSSECValidation.NO
-    assert resolved.dnssec_statistics == [0, 0, 0, 0]
+    assert resolved.dnssec_statistics == (0, 0, 0, 0)
     assert resolved.dnssec_supported is False
     assert resolved.dnssec_negative_trust_anchors == [
         "168.192.in-addr.arpa",

View File

@@ -185,10 +185,10 @@ async def test_start_transient_unit(
"tmp-test.mount", "tmp-test.mount",
"fail", "fail",
[ [
["Description", Variant("s", "Test")], ("Description", Variant("s", "Test")),
["What", Variant("s", "//homeassistant/config")], ("What", Variant("s", "//homeassistant/config")),
["Type", Variant("s", "cifs")], ("Type", Variant("s", "cifs")),
["Options", Variant("s", "username=homeassistant,password=password")], ("Options", Variant("s", "username=homeassistant,password=password")),
], ],
[], [],
) )

View File

@@ -1,6 +1,6 @@
"""Mock of OS Agent System dbus service.""" """Mock of OS Agent System dbus service."""
from dbus_fast import DBusError from dbus_fast import DBusError, ErrorType
from .base import DBusServiceMock, dbus_method from .base import DBusServiceMock, dbus_method
@@ -21,6 +21,7 @@ class System(DBusServiceMock):
object_path = "/io/hass/os/System" object_path = "/io/hass/os/System"
interface = "io.hass.os.System" interface = "io.hass.os.System"
response_schedule_wipe_device: bool | DBusError = True response_schedule_wipe_device: bool | DBusError = True
response_migrate_docker_storage_driver: None | DBusError = None
@dbus_method() @dbus_method()
def ScheduleWipeDevice(self) -> "b": def ScheduleWipeDevice(self) -> "b":
@@ -28,3 +29,14 @@ class System(DBusServiceMock):
if isinstance(self.response_schedule_wipe_device, DBusError): if isinstance(self.response_schedule_wipe_device, DBusError):
raise self.response_schedule_wipe_device # pylint: disable=raising-bad-type raise self.response_schedule_wipe_device # pylint: disable=raising-bad-type
return self.response_schedule_wipe_device return self.response_schedule_wipe_device
@dbus_method()
def MigrateDockerStorageDriver(self, backend: "s") -> None:
"""Migrate Docker storage driver."""
if isinstance(self.response_migrate_docker_storage_driver, DBusError):
raise self.response_migrate_docker_storage_driver # pylint: disable=raising-bad-type
if backend not in ("overlayfs", "overlay2"):
raise DBusError(
ErrorType.FAILED,
f"unsupported driver: {backend} (only 'overlayfs' and 'overlay2' are supported)",
)

View File

@@ -45,8 +45,8 @@ class Resolved(DBusServiceMock):
     def DNS(self) -> "a(iiay)":
         """Get DNS."""
         return [
-            [0, 2, bytes([127, 0, 0, 1])],
-            [
+            (0, 2, bytes([127, 0, 0, 1])),
+            (
                 0,
                 10,
                 bytes(
@@ -69,15 +69,15 @@ class Resolved(DBusServiceMock):
                         0x1,
                     ]
                 ),
-            ],
+            ),
         ]

     @dbus_property(access=PropertyAccess.READ)
     def DNSEx(self) -> "a(iiayqs)":
         """Get DNSEx."""
         return [
-            [0, 2, bytes([127, 0, 0, 1]), 0, ""],
-            [
+            (0, 2, bytes([127, 0, 0, 1]), 0, ""),
+            (
                 0,
                 10,
                 bytes(
@@ -102,15 +102,15 @@ class Resolved(DBusServiceMock):
                 ),
                 0,
                 "",
-            ],
+            ),
         ]

     @dbus_property(access=PropertyAccess.READ)
     def FallbackDNS(self) -> "a(iiay)":
         """Get FallbackDNS."""
         return [
-            [0, 2, bytes([1, 1, 1, 1])],
-            [
+            (0, 2, bytes([1, 1, 1, 1])),
+            (
                 0,
                 10,
                 bytes(
@@ -133,15 +133,15 @@ class Resolved(DBusServiceMock):
                         0x11,
                     ]
                 ),
-            ],
+            ),
         ]

     @dbus_property(access=PropertyAccess.READ)
     def FallbackDNSEx(self) -> "a(iiayqs)":
         """Get FallbackDNSEx."""
         return [
-            [0, 2, bytes([1, 1, 1, 1]), 0, "cloudflare-dns.com"],
-            [
+            (0, 2, bytes([1, 1, 1, 1]), 0, "cloudflare-dns.com"),
+            (
                 0,
                 10,
                 bytes(
@@ -166,33 +166,33 @@ class Resolved(DBusServiceMock):
                 ),
                 0,
                 "cloudflare-dns.com",
-            ],
+            ),
         ]

     @dbus_property(access=PropertyAccess.READ)
     def CurrentDNSServer(self) -> "(iiay)":
         """Get CurrentDNSServer."""
-        return [0, 2, bytes([127, 0, 0, 1])]
+        return (0, 2, bytes([127, 0, 0, 1]))

     @dbus_property(access=PropertyAccess.READ)
     def CurrentDNSServerEx(self) -> "(iiayqs)":
         """Get CurrentDNSServerEx."""
-        return [0, 2, bytes([127, 0, 0, 1]), 0, ""]
+        return (0, 2, bytes([127, 0, 0, 1]), 0, "")

     @dbus_property(access=PropertyAccess.READ)
     def Domains(self) -> "a(isb)":
         """Get Domains."""
-        return [[0, "local.hass.io", False]]
+        return [(0, "local.hass.io", False)]

     @dbus_property(access=PropertyAccess.READ)
     def TransactionStatistics(self) -> "(tt)":
         """Get TransactionStatistics."""
-        return [0, 100000]
+        return (0, 100000)

     @dbus_property(access=PropertyAccess.READ)
     def CacheStatistics(self) -> "(ttt)":
         """Get CacheStatistics."""
-        return [10, 50000, 10000]
+        return (10, 50000, 10000)

     @dbus_property(access=PropertyAccess.READ)
     def DNSSEC(self) -> "s":
@@ -202,7 +202,7 @@ class Resolved(DBusServiceMock):
     @dbus_property(access=PropertyAccess.READ)
     def DNSSECStatistics(self) -> "(tttt)":
         """Get DNSSECStatistics."""
-        return [0, 0, 0, 0]
+        return (0, 0, 0, 0)

     @dbus_property(access=PropertyAccess.READ)
     def DNSSECSupported(self) -> "b":
@dbus_property(access=PropertyAccess.READ) @dbus_property(access=PropertyAccess.READ)
def DNSSECSupported(self) -> "b": def DNSSECSupported(self) -> "b":

View File

@@ -2,7 +2,8 @@
 # pylint: disable=protected-access
 from supervisor.coresys import CoreSys
-from supervisor.docker.interface import DOCKER_HUB, DockerInterface
+from supervisor.docker.const import DOCKER_HUB
+from supervisor.docker.interface import DockerInterface


 def test_no_credentials(coresys: CoreSys, test_docker_interface: DockerInterface):

View File

@@ -5,10 +5,10 @@ from pathlib import Path
 from typing import Any
 from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, call, patch

+import aiodocker
 from awesomeversion import AwesomeVersion
 from docker.errors import DockerException, NotFound
 from docker.models.containers import Container
-from docker.models.images import Image
 import pytest
 from requests import RequestException
@@ -16,7 +16,7 @@ from supervisor.addons.manager import Addon
 from supervisor.const import BusEvent, CoreState, CpuArch
 from supervisor.coresys import CoreSys
 from supervisor.docker.const import ContainerState
-from supervisor.docker.interface import DockerInterface
+from supervisor.docker.interface import DOCKER_HUB, DockerInterface
 from supervisor.docker.manager import PullLogEntry, PullProgressDetail
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import (
@@ -26,18 +26,12 @@ from supervisor.exceptions import (
     DockerNotFound,
     DockerRequestError,
 )
-from supervisor.jobs import JobSchedulerOptions, SupervisorJob
+from supervisor.homeassistant.const import WSEvent, WSType
+from supervisor.jobs import ChildJobSyncFilter, JobSchedulerOptions, SupervisorJob
+from supervisor.jobs.decorator import Job
+from supervisor.supervisor import Supervisor

-from tests.common import load_json_fixture
-
-
-@pytest.fixture(autouse=True)
-def mock_verify_content(coresys: CoreSys):
-    """Mock verify_content utility during tests."""
-    with patch.object(
-        coresys.security, "verify_content", return_value=None
-    ) as verify_content:
-        yield verify_content
+from tests.common import AsyncIterator, load_json_fixture


 @pytest.mark.parametrize(
@@ -57,35 +51,68 @@ async def test_docker_image_platform(
     platform: str,
 ):
     """Test platform set correctly from arch."""
-    with patch.object(
-        coresys.docker.images, "get", return_value=Mock(id="test:1.2.3")
-    ) as get:
-        await test_docker_interface.install(
-            AwesomeVersion("1.2.3"), "test", arch=cpu_arch
-        )
-        coresys.docker.docker.api.pull.assert_called_once_with(
-            "test", tag="1.2.3", platform=platform, stream=True, decode=True
-        )
-        get.assert_called_once_with("test:1.2.3")
+    coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
+    await test_docker_interface.install(AwesomeVersion("1.2.3"), "test", arch=cpu_arch)
+    coresys.docker.images.pull.assert_called_once_with(
+        "test", tag="1.2.3", platform=platform, auth=None, stream=True
+    )
+    coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")


 async def test_docker_image_default_platform(
     coresys: CoreSys, test_docker_interface: DockerInterface
 ):
     """Test platform set using supervisor arch when omitted."""
+    coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
     with (
         patch.object(
             type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
         ),
-        patch.object(
-            coresys.docker.images, "get", return_value=Mock(id="test:1.2.3")
-        ) as get,
     ):
         await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
-        coresys.docker.docker.api.pull.assert_called_once_with(
-            "test", tag="1.2.3", platform="linux/386", stream=True, decode=True
-        )
-        get.assert_called_once_with("test:1.2.3")
+        coresys.docker.images.pull.assert_called_once_with(
+            "test", tag="1.2.3", platform="linux/386", auth=None, stream=True
+        )
+        coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
+
+
+@pytest.mark.parametrize(
+    "image,registry_key",
+    [
+        ("homeassistant/amd64-supervisor", DOCKER_HUB),
+        ("ghcr.io/home-assistant/amd64-supervisor", "ghcr.io"),
+    ],
+)
+async def test_private_registry_credentials_passed_to_pull(
+    coresys: CoreSys,
+    test_docker_interface: DockerInterface,
+    image: str,
+    registry_key: str,
+):
+    """Test credentials for private registries are passed to aiodocker pull."""
+    coresys.docker.images.inspect.return_value = {"Id": f"{image}:1.2.3"}
+
+    # Configure registry credentials
+    coresys.docker.config._data["registries"] = {  # pylint: disable=protected-access
+        registry_key: {"username": "testuser", "password": "testpass"}
+    }
+
+    with patch.object(
+        type(coresys.supervisor), "arch", PropertyMock(return_value="amd64")
+    ):
+        await test_docker_interface.install(
+            AwesomeVersion("1.2.3"), image, arch=CpuArch.AMD64
+        )
+
+    # Verify credentials were passed to aiodocker
+    expected_auth = {"username": "testuser", "password": "testpass"}
+    if registry_key != DOCKER_HUB:
+        expected_auth["registry"] = registry_key
+
+    coresys.docker.images.pull.assert_called_once_with(
+        image, tag="1.2.3", platform="linux/amd64", auth=expected_auth, stream=True
+    )


 @pytest.mark.parametrize(
@@ -216,57 +243,40 @@ async def test_attach_existing_container(
async def test_attach_container_failure(coresys: CoreSys):
    """Test attach fails to find container but finds image."""
-    container_collection = MagicMock()
-    container_collection.get.side_effect = DockerException()
-    image_collection = MagicMock()
-    image_config = {"Image": "sha256:abc123"}
-    image_collection.get.return_value = Image({"Config": image_config})
-    with (
-        patch(
-            "supervisor.docker.manager.DockerAPI.containers",
-            new=PropertyMock(return_value=container_collection),
-        ),
-        patch(
-            "supervisor.docker.manager.DockerAPI.images",
-            new=PropertyMock(return_value=image_collection),
-        ),
-        patch.object(type(coresys.bus), "fire_event") as fire_event,
-    ):
+    coresys.docker.containers.get.side_effect = DockerException()
+    coresys.docker.images.inspect.return_value.setdefault("Config", {})["Image"] = (
+        "sha256:abc123"
+    )
+    with patch.object(type(coresys.bus), "fire_event") as fire_event:
        await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))

    assert not [
        event
        for event in fire_event.call_args_list
        if event.args[0] == BusEvent.DOCKER_CONTAINER_STATE_CHANGE
    ]
-    assert coresys.homeassistant.core.instance.meta_config == image_config
+    assert (
+        coresys.homeassistant.core.instance.meta_config["Image"] == "sha256:abc123"
+    )
async def test_attach_total_failure(coresys: CoreSys):
    """Test attach fails to find container or image."""
-    container_collection = MagicMock()
-    container_collection.get.side_effect = DockerException()
-    image_collection = MagicMock()
-    image_collection.get.side_effect = DockerException()
-    with (
-        patch(
-            "supervisor.docker.manager.DockerAPI.containers",
-            new=PropertyMock(return_value=container_collection),
-        ),
-        patch(
-            "supervisor.docker.manager.DockerAPI.images",
-            new=PropertyMock(return_value=image_collection),
-        ),
-        pytest.raises(DockerError),
-    ):
+    coresys.docker.containers.get.side_effect = DockerException
+    coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
+        400, {"message": ""}
+    )
+    with pytest.raises(DockerError):
        await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))
-@pytest.mark.parametrize("err", [DockerException(), RequestException()])
+@pytest.mark.parametrize(
+    "err", [aiodocker.DockerError(400, {"message": ""}), RequestException()]
+)
async def test_image_pull_fail(
    coresys: CoreSys, capture_exception: Mock, err: Exception
):
    """Test failure to pull image."""
-    coresys.docker.images.get.side_effect = err
+    coresys.docker.images.inspect.side_effect = err
    with pytest.raises(DockerError):
        await coresys.homeassistant.core.instance.install(
            AwesomeVersion("2022.7.3"), arch=CpuArch.AMD64
@@ -298,15 +308,16 @@ async def test_install_fires_progress_events(
    coresys: CoreSys, test_docker_interface: DockerInterface
):
    """Test progress events are fired during an install for listeners."""
    # This is from a sample pull. Filtered log to just one per unique status for test
-    coresys.docker.docker.api.pull.return_value = [
+    logs = [
        {
            "status": "Pulling from home-assistant/odroid-n2-homeassistant",
            "id": "2025.7.2",
        },
        {"status": "Already exists", "progressDetail": {}, "id": "6e771e15690e"},
        {"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
-        {"status": "Waiting", "progressDetail": {}, "id": "2488d0e401e1"},
+        {"status": "Waiting", "progressDetail": {}, "id": "1578b14a573c"},
        {
            "status": "Downloading",
            "progressDetail": {"current": 1378, "total": 1486},
@@ -321,7 +332,11 @@ async def test_install_fires_progress_events(
            "id": "1578b14a573c",
        },
        {"status": "Pull complete", "progressDetail": {}, "id": "1578b14a573c"},
-        {"status": "Verifying Checksum", "progressDetail": {}, "id": "6a1e931d8f88"},
+        {
+            "status": "Verifying Checksum",
+            "progressDetail": {},
+            "id": "6a1e931d8f88",
+        },
        {
            "status": "Digest: sha256:490080d7da0f385928022927990e04f604615f7b8c622ef3e58253d0f089881d"
        },
@@ -329,6 +344,7 @@ async def test_install_fires_progress_events(
        {
            "status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.2"
        },
    ]
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)
    events: list[PullLogEntry] = []
@@ -343,10 +359,10 @@ async def test_install_fires_progress_events(
        ),
    ):
        await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")

-    coresys.docker.docker.api.pull.assert_called_once_with(
-        "test", tag="1.2.3", platform="linux/386", stream=True, decode=True
+    coresys.docker.images.pull.assert_called_once_with(
+        "test", tag="1.2.3", platform="linux/386", auth=None, stream=True
    )
-    coresys.docker.images.get.assert_called_once_with("test:1.2.3")
+    coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")

    await asyncio.sleep(1)
    assert events == [
@@ -371,7 +387,7 @@ async def test_install_fires_progress_events(
            job_id=ANY,
            status="Waiting",
            progress_detail=PullProgressDetail(),
-            id="2488d0e401e1",
+            id="1578b14a573c",
        ),
        PullLogEntry(
            job_id=ANY,
@@ -424,10 +440,11 @@ async def test_install_progress_rounding_does_not_cause_misses(
):
    """Test extremely close progress events do not create rounding issues."""
    coresys.core.set_state(CoreState.RUNNING)

    # Current numbers chosen to create a rounding issue with original code
    # Where a progress update came in with a value between the actual previous
    # value and what it was rounded to. It should not raise an out of order exception
-    coresys.docker.docker.api.pull.return_value = [
+    logs = [
        {
            "status": "Pulling from home-assistant/odroid-n2-homeassistant",
            "id": "2025.7.1",
@@ -467,29 +484,25 @@ async def test_install_progress_rounding_does_not_cause_misses(
        {
            "status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
        },
    ]
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)

-    with (
-        patch.object(
-            type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
-        ),
-    ):
-        # Schedule job so we can listen for the end. Then we can assert against the WS mock
-        event = asyncio.Event()
-        job, install_task = coresys.jobs.schedule_job(
-            test_docker_interface.install,
-            JobSchedulerOptions(),
-            AwesomeVersion("1.2.3"),
-            "test",
-        )
+    # Schedule job so we can listen for the end. Then we can assert against the WS mock
+    event = asyncio.Event()
+    job, install_task = coresys.jobs.schedule_job(
+        test_docker_interface.install,
+        JobSchedulerOptions(),
+        AwesomeVersion("1.2.3"),
+        "test",
+    )

    async def listen_for_job_end(reference: SupervisorJob):
        if reference.uuid != job.uuid:
            return
        event.set()

    coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
    await install_task
    await event.wait()

    capture_exception.assert_not_called()
@@ -522,11 +535,13 @@ async def test_install_raises_on_pull_error(
    exc_msg: str,
):
    """Test exceptions raised from errors in pull log."""
-    coresys.docker.docker.api.pull.return_value = [
+    logs = [
        {
            "status": "Pulling from home-assistant/odroid-n2-homeassistant",
            "id": "2025.7.2",
        },
+        {"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
        {
            "status": "Downloading",
            "progressDetail": {"current": 1378, "total": 1486},
@@ -535,6 +550,7 @@ async def test_install_raises_on_pull_error(
        },
        error_log,
    ]
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)

    with pytest.raises(exc_type, match=exc_msg):
        await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
@@ -548,11 +564,11 @@ async def test_install_progress_handles_download_restart(
):
    """Test install handles docker progress events that include a download restart."""
    coresys.core.set_state(CoreState.RUNNING)

    # Fixture emulates a download restart as docker logs it
    # A log out of order exception should not be raised
-    coresys.docker.docker.api.pull.return_value = load_json_fixture(
-        "docker_pull_image_log_restart.json"
-    )
+    logs = load_json_fixture("docker_pull_image_log_restart.json")
+    coresys.docker.images.pull.return_value = AsyncIterator(logs)

    with (
        patch.object(
@@ -578,3 +594,358 @@ async def test_install_progress_handles_download_restart(
    await event.wait()

    capture_exception.assert_not_called()
@pytest.mark.parametrize(
"extract_log",
[
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
{
"status": "Extracting",
"progressDetail": {"current": 1, "units": "s"},
"progress": "1 s",
"id": "02a6e69d8d00",
},
],
ids=["normal_extract_log", "containerd_snapshot_extract_log"],
)
async def test_install_progress_handles_layers_skipping_download(
coresys: CoreSys,
test_docker_interface: DockerInterface,
capture_exception: Mock,
extract_log: dict[str, Any],
):
"""Test install handles small layers that skip the downloading phase and go directly to download complete.
Reproduces the real-world scenario from Supervisor issue #6286:
- Small layer (02a6e69d8d00) completes Download complete at 10:14:08 without ever Downloading
- Normal layer (3f4a84073184) starts Downloading at 10:14:09 with progress updates
Under containerd snapshotter this presumably can still occur and Supervisor will have even less info
since extract logs don't have a total. Supervisor should generally just ignore these and set progress
from the larger layers that take all the time.
"""
coresys.core.set_state(CoreState.RUNNING)
# Reproduce EXACT sequence from SupervisorNoUpdateProgressLogs.txt:
# Small layer (02a6e69d8d00) completes BEFORE normal layer (3f4a84073184) starts downloading
logs = [
{"status": "Pulling from test/image", "id": "latest"},
# Small layer that skips downloading (02a6e69d8d00 in logs, 96 bytes)
{"status": "Pulling fs layer", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "3f4a84073184"},
{"status": "Waiting", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Waiting", "progressDetail": {}, "id": "3f4a84073184"},
# Goes straight to Download complete (10:14:08 in logs) - THIS IS THE KEY MOMENT
{"status": "Download complete", "progressDetail": {}, "id": "02a6e69d8d00"},
# Normal layer that downloads (3f4a84073184 in logs, 25MB)
# Downloading starts (10:14:09 in logs) - progress updates should happen NOW!
{
"status": "Downloading",
"progressDetail": {"current": 260937, "total": 25371463},
"progress": "[> ] 260.9kB/25.37MB",
"id": "3f4a84073184",
},
{
"status": "Downloading",
"progressDetail": {"current": 5505024, "total": 25371463},
"progress": "[==========> ] 5.505MB/25.37MB",
"id": "3f4a84073184",
},
{
"status": "Downloading",
"progressDetail": {"current": 11272192, "total": 25371463},
"progress": "[======================> ] 11.27MB/25.37MB",
"id": "3f4a84073184",
},
{"status": "Download complete", "progressDetail": {}, "id": "3f4a84073184"},
{
"status": "Extracting",
"progressDetail": {"current": 25371463, "total": 25371463},
"progress": "[==================================================>] 25.37MB/25.37MB",
"id": "3f4a84073184",
},
{"status": "Pull complete", "progressDetail": {}, "id": "3f4a84073184"},
# Small layer finally extracts (10:14:58 in logs)
extract_log,
{"status": "Pull complete", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Digest: sha256:test"},
{"status": "Status: Downloaded newer image for test/image:latest"},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
# Capture immutable snapshots of install job progress using job.as_dict()
# This solves the mutable object problem - we snapshot state at call time
install_job_snapshots = []
original_on_job_change = coresys.jobs._on_job_change # pylint: disable=W0212
def capture_and_forward(job_obj, attribute, value):
# Capture immutable snapshot if this is the install job with progress
if job_obj.name == "docker_interface_install" and job_obj.progress > 0:
install_job_snapshots.append(job_obj.as_dict())
# Forward to original to maintain functionality
return original_on_job_change(job_obj, attribute, value)
with patch.object(coresys.jobs, "_on_job_change", side_effect=capture_and_forward):
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
# First update from layer download should have rather low progress ((260937/25371463) ~= 1%)
assert install_job_snapshots[0]["progress"] < 2
# Total 7 events should lead to a progress update on the install job:
# 3 Downloading events + Download complete (70%) + Extracting + Pull complete (100%) + stage change
# Note: The small placeholder layer ({1,1}) is excluded from progress calculation
assert len(install_job_snapshots) == 7
# Job should complete successfully
assert job.done is True
assert job.progress == 100
capture_exception.assert_not_called()
async def test_missing_total_handled_gracefully(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test missing 'total' fields in progress details handled gracefully."""
coresys.core.set_state(CoreState.RUNNING)
# Progress details with missing 'total' fields observed in real-world pulls
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.1",
},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Downloading",
"progressDetail": {"current": 436480882},
"progress": "[===================================================] 436.5MB/436.5MB",
"id": "1e214cd6d7d0",
},
{"status": "Verifying Checksum", "progressDetail": {}, "id": "1e214cd6d7d0"},
{"status": "Download complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Extracting",
"progressDetail": {"current": 436480882},
"progress": "[===================================================] 436.5MB/436.5MB",
"id": "1e214cd6d7d0",
},
{"status": "Pull complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Digest: sha256:7d97da645f232f82a768d0a537e452536719d56d484d419836e53dbe3e4ec736"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
capture_exception.assert_not_called()
async def test_install_progress_containerd_snapshot(
coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test install handles docker progress events using containerd snapshotter."""
coresys.core.set_state(CoreState.RUNNING)
class TestDockerInterface(DockerInterface):
"""Test interface for events."""
@property
def name(self) -> str:
"""Name of test interface."""
return "test_interface"
@Job(
name="mock_docker_interface_install",
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def mock_install(self) -> None:
"""Mock install."""
await super().install(
AwesomeVersion("1.2.3"), image="test", arch=CpuArch.I386
)
# Fixture emulates log as received when using containerd snapshotter
# Should not error but progress gets choppier once extraction starts
logs = load_json_fixture("docker_pull_image_log_containerd_snapshot.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
test_docker_interface = TestDockerInterface(coresys)
with patch.object(Supervisor, "arch", PropertyMock(return_value="i386")):
await test_docker_interface.mock_install()
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
await asyncio.sleep(1)
def job_event(progress: float, done: bool = False):
return {
"type": WSType.SUPERVISOR_EVENT,
"data": {
"event": WSEvent.JOB,
"data": {
"name": "mock_docker_interface_install",
"reference": "test_interface",
"uuid": ANY,
"progress": progress,
"stage": None,
"done": done,
"parent_id": None,
"errors": [],
"created": ANY,
"extra": None,
},
},
}
# Get progress values from the events
job_events = [
c.args[0]
for c in ha_ws_client.async_send_command.call_args_list
if c.args[0].get("data", {}).get("event") == WSEvent.JOB
and c.args[0].get("data", {}).get("data", {}).get("name")
== "mock_docker_interface_install"
]
progress_values = [e["data"]["data"]["progress"] for e in job_events]
# Should have multiple progress updates (not just 0 and 100)
assert len(progress_values) >= 10, (
f"Expected >=10 progress updates, got {len(progress_values)}"
)
# Progress should be monotonically increasing
for i in range(1, len(progress_values)):
assert progress_values[i] >= progress_values[i - 1], (
f"Progress decreased at index {i}: {progress_values[i - 1]} -> {progress_values[i]}"
)
# Should start at 0 and end at 100
assert progress_values[0] == 0
assert progress_values[-1] == 100
# Should have progress values in the downloading phase (< 70%)
# Note: with layer scaling, early progress may be lower than before
downloading_progress = [p for p in progress_values if 0 < p < 70]
assert len(downloading_progress) > 0, (
"Expected progress updates during downloading phase"
)
async def test_install_progress_containerd_snapshotter_real_world(
coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test install handles real-world containerd snapshotter events.
This test uses real pull events captured from a Home Assistant Core update
where some layers skip the Downloading phase entirely (going directly from
"Pulling fs layer" to "Download complete"). This causes the bug where progress
jumps from 0 to 100 without intermediate updates.
Root cause: _update_install_job_status() returns early if ANY layer has
extra=None. Layers that skip Downloading don't get extra until Download complete,
so progress cannot be calculated until ALL layers reach Download complete.
"""
coresys.core.set_state(CoreState.RUNNING)
class TestDockerInterface(DockerInterface):
"""Test interface for events."""
@property
def name(self) -> str:
"""Name of test interface."""
return "test_interface"
@Job(
name="mock_docker_interface_install_realworld",
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def mock_install(self) -> None:
"""Mock install."""
await super().install(
AwesomeVersion("1.2.3"), image="test", arch=CpuArch.I386
)
# Real-world fixture: 12 layers, 262 Downloading events
# Some layers skip Downloading entirely (small layers with containerd snapshotter)
logs = load_json_fixture("docker_pull_image_log_containerd_snapshotter_real.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
test_docker_interface = TestDockerInterface(coresys)
with patch.object(Supervisor, "arch", PropertyMock(return_value="i386")):
await test_docker_interface.mock_install()
await asyncio.sleep(1)
# Get progress events for the parent job (what UI sees)
job_events = [
c.args[0]
for c in ha_ws_client.async_send_command.call_args_list
if c.args[0].get("data", {}).get("event") == WSEvent.JOB
and c.args[0].get("data", {}).get("data", {}).get("name")
== "mock_docker_interface_install_realworld"
]
progress_values = [e["data"]["data"]["progress"] for e in job_events]
# We should have intermediate progress updates, not just 0 and 100
assert len(progress_values) > 3, (
f"BUG: Progress jumped 0->100 without intermediate updates. "
f"Got {len(progress_values)} updates: {progress_values}. "
f"Expected intermediate progress during the 262 Downloading events."
)
# Progress should be monotonically increasing
for i in range(1, len(progress_values)):
assert progress_values[i] >= progress_values[i - 1]
# Should see progress in downloading phase (0-70%)
downloading_progress = [p for p in progress_values if 0 < p < 70]
assert len(downloading_progress) > 0
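The layer-fraction scaling described in the commit message ("if 2/25 layers report at 70%, progress shows ~5.6%") can be sketched with a hypothetical helper, again not Supervisor's actual code: progress computed from the layers that have reported is damped by `reported / total`, so a few tiny layers finishing first cannot dominate the overall number.

```python
def scaled_progress(raw_progress: float, reported_layers: int, total_layers: int) -> float:
    """Scale raw progress by the fraction of layers that have reported."""
    if total_layers == 0:
        return 0.0  # nothing known yet
    return raw_progress * reported_layers / total_layers


# 2 of 25 layers reporting at 70% -> 5.6% overall, matching the example above
print(scaled_progress(70.0, 2, 25))  # 5.6
```

Once all layers have reported, `reported_layers == total_layers` and the scaling factor is 1, so no damping is applied.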


@@ -1,9 +1,10 @@
 """Test Docker manager."""

 import asyncio
+from pathlib import Path
 from unittest.mock import MagicMock, patch

-from docker.errors import DockerException
+from docker.errors import APIError, DockerException, NotFound
 import pytest
 from requests import RequestException
@@ -20,7 +21,7 @@ async def test_run_command_success(docker: DockerAPI):
    mock_container.logs.return_value = b"command output"

    # Mock docker containers.run to return our mock container
-    docker.docker.containers.run.return_value = mock_container
+    docker.dockerpy.containers.run.return_value = mock_container

    # Execute the command
    result = docker.run_command(
@@ -33,7 +34,7 @@ async def test_run_command_success(docker: DockerAPI):
    assert result.output == b"command output"

    # Verify docker.containers.run was called correctly
-    docker.docker.containers.run.assert_called_once_with(
+    docker.dockerpy.containers.run.assert_called_once_with(
        "alpine:3.18",
        command="echo hello",
        detach=True,
@@ -55,7 +56,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
    mock_container.logs.return_value = b"error output"

    # Mock docker containers.run to return our mock container
-    docker.docker.containers.run.return_value = mock_container
+    docker.dockerpy.containers.run.return_value = mock_container

    # Execute the command with minimal parameters
    result = docker.run_command(image="ubuntu")
@@ -66,7 +67,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
    assert result.output == b"error output"

    # Verify docker.containers.run was called with defaults
-    docker.docker.containers.run.assert_called_once_with(
+    docker.dockerpy.containers.run.assert_called_once_with(
        "ubuntu:latest",  # default tag
        command=None,  # default command
        detach=True,
@@ -81,7 +82,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
async def test_run_command_docker_exception(docker: DockerAPI):
    """Test command execution when Docker raises an exception."""
    # Mock docker containers.run to raise DockerException
-    docker.docker.containers.run.side_effect = DockerException("Docker error")
+    docker.dockerpy.containers.run.side_effect = DockerException("Docker error")

    # Execute the command and expect DockerError
    with pytest.raises(DockerError, match="Can't execute command: Docker error"):
@@ -91,7 +92,7 @@ async def test_run_command_docker_exception(docker: DockerAPI):
async def test_run_command_request_exception(docker: DockerAPI):
    """Test command execution when requests raises an exception."""
    # Mock docker containers.run to raise RequestException
-    docker.docker.containers.run.side_effect = RequestException("Connection error")
+    docker.dockerpy.containers.run.side_effect = RequestException("Connection error")

    # Execute the command and expect DockerError
    with pytest.raises(DockerError, match="Can't execute command: Connection error"):
@@ -104,7 +105,7 @@ async def test_run_command_cleanup_on_exception(docker: DockerAPI):
    mock_container = MagicMock()

    # Mock docker.containers.run to return container, but container.wait to raise exception
-    docker.docker.containers.run.return_value = mock_container
+    docker.dockerpy.containers.run.return_value = mock_container
    mock_container.wait.side_effect = DockerException("Wait failed")

    # Execute the command and expect DockerError
@@ -123,7 +124,7 @@ async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
    mock_container.logs.return_value = b"output"

    # Mock docker containers.run to return our mock container
-    docker.docker.containers.run.return_value = mock_container
+    docker.dockerpy.containers.run.return_value = mock_container

    # Execute the command with custom stdout/stderr
    result = docker.run_command(
@@ -150,7 +151,7 @@ async def test_run_container_with_cidfile(
    cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
    extern_cidfile_path = coresys.config.path_extern_cid_files / f"{container_name}.cid"

-    docker.docker.containers.run.return_value = mock_container
+    docker.dockerpy.containers.run.return_value = mock_container

    # Mock container creation
    with patch.object(
@@ -351,3 +352,101 @@ async def test_run_container_with_leftover_cidfile_directory(
    assert cidfile_path.read_text() == mock_container.id
    assert result == mock_container
async def test_repair(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair API."""
coresys.docker.dockerpy.networks.get.side_effect = [
hassio := MagicMock(
attrs={
"Containers": {
"good": {"Name": "good"},
"corrupt": {"Name": "corrupt"},
"fail": {"Name": "fail"},
}
}
),
host := MagicMock(attrs={"Containers": {}}),
]
coresys.docker.dockerpy.containers.get.side_effect = [
MagicMock(),
NotFound("corrupt"),
DockerException("fail"),
]
await coresys.run_in_executor(coresys.docker.repair)
coresys.docker.dockerpy.api.prune_containers.assert_called_once()
coresys.docker.dockerpy.api.prune_images.assert_called_once_with(
filters={"dangling": False}
)
coresys.docker.dockerpy.api.prune_builds.assert_called_once()
coresys.docker.dockerpy.api.prune_volumes.assert_called_once()
coresys.docker.dockerpy.api.prune_networks.assert_called_once()
hassio.disconnect.assert_called_once_with("corrupt", force=True)
host.disconnect.assert_not_called()
assert "Docker fatal error on container fail on hassio" in caplog.text
async def test_repair_failures(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair proceeds best it can through failures."""
coresys.docker.dockerpy.api.prune_containers.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_images.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_builds.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_volumes.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_networks.side_effect = APIError("fail")
coresys.docker.dockerpy.networks.get.side_effect = NotFound("missing")
await coresys.run_in_executor(coresys.docker.repair)
assert "Error for containers prune: fail" in caplog.text
assert "Error for images prune: fail" in caplog.text
assert "Error for builds prune: fail" in caplog.text
assert "Error for volumes prune: fail" in caplog.text
assert "Error for networks prune: fail" in caplog.text
assert "Error for networks hassio prune: missing" in caplog.text
assert "Error for networks host prune: missing" in caplog.text
@pytest.mark.parametrize("log_starter", [("Loaded image ID"), ("Loaded image")])
async def test_import_image(coresys: CoreSys, tmp_path: Path, log_starter: str):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": f"{log_starter}: imported"}
]
coresys.docker.images.inspect.return_value = {"Id": "imported"}
image = await coresys.docker.import_image(test_tar)
assert image["Id"] == "imported"
coresys.docker.images.inspect.assert_called_once_with("imported")


async def test_import_image_error(coresys: CoreSys, tmp_path: Path):
"""Test failure importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"errorDetail": {"message": "fail"}}
]
with pytest.raises(DockerError, match="Can't import image from tar: fail"):
await coresys.docker.import_image(test_tar)
coresys.docker.images.inspect.assert_not_called()


async def test_import_multiple_images_in_tar(
coresys: CoreSys, tmp_path: Path, caplog: pytest.LogCaptureFixture
):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": "Loaded image: imported-1"},
{"stream": "Loaded image: imported-2"},
]
assert await coresys.docker.import_image(test_tar) is None
assert "Unexpected image count 2 while importing image from tar" in caplog.text
coresys.docker.images.inspect.assert_not_called()
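The import tests above drive a parser that recognizes `Loaded image: <ref>` and `Loaded image ID: <id>` lines in the `docker load` stream and rejects tars containing more than one image. A minimal sketch of that parsing step (a hypothetical helper, not Supervisor's actual implementation):

```python
from __future__ import annotations

import re

# Matches both "Loaded image: <ref>" and "Loaded image ID: <id>" stream lines.
_LOADED_RE = re.compile(r"Loaded image(?: ID)?: (.+)")


def parse_loaded_images(stream_lines: list[str]) -> list[str]:
    """Extract image references from docker-load style stream output."""
    images: list[str] = []
    for line in stream_lines:
        if match := _LOADED_RE.match(line.strip()):
            images.append(match.group(1).strip())
    return images
```

A caller mirroring these tests would treat an empty result as an import failure and more than one entry as an unexpected image count.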


@@ -88,7 +88,7 @@ async def test_events(
 ):
     """Test events created from docker events."""
     event["Actor"]["Attributes"]["name"] = "some_container"
-    event["id"] = "abc123"
+    event["Actor"]["ID"] = "abc123"
     event["time"] = 123
     with (
         patch(
@@ -131,12 +131,12 @@ async def test_unlabeled_container(coresys: CoreSys):
             new=PropertyMock(
                 return_value=[
                     {
-                        "id": "abc123",
                         "time": 123,
                         "Type": "container",
                         "Action": "die",
                         "Actor": {
-                            "Attributes": {"name": "homeassistant", "exitCode": "137"}
+                            "ID": "abc123",
+                            "Attributes": {"name": "homeassistant", "exitCode": "137"},
                         },
                     }
                 ]


@@ -0,0 +1,196 @@
[
{
"status": "Pulling from home-assistant/home-assistant",
"id": "2025.12.0.dev202511080235"
},
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "eafecc6b43cc" },
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "333270549f95" },
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21863319 },
"progress": "[==\u003e ] 1.049MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21179924 },
"progress": "[==\u003e ] 1.049MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21863319 },
"progress": "[=========\u003e ] 4.194MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 2097152, "total": 21179924 },
"progress": "[====\u003e ] 2.097MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 7340032, "total": 21863319 },
"progress": "[================\u003e ] 7.34MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21179924 },
"progress": "[=========\u003e ] 4.194MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 13631488, "total": 21863319 },
"progress": "[===============================\u003e ] 13.63MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 8388608, "total": 21179924 },
"progress": "[===================\u003e ] 8.389MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 17825792, "total": 21863319 },
"progress": "[========================================\u003e ] 17.83MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 12582912, "total": 21179924 },
"progress": "[=============================\u003e ] 12.58MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21863319, "total": 21863319 },
"progress": "[==================================================\u003e] 21.86MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 16777216, "total": 21179924 },
"progress": "[=======================================\u003e ] 16.78MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21179924, "total": 21179924 },
"progress": "[==================================================\u003e] 21.18MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Digest: sha256:bfc9efc13552c0c228f3d9d35987331cce68b43c9bc79c80a57eeadadd44cccf"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/home-assistant:2025.12.0.dev202511080235"
}
]
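The fixture above feeds the scaled-progress change described in the first commit: layers that only report placeholder sizes are excluded, and the byte-weighted progress of the remaining layers is scaled by the fraction of layers that have reported. A minimal sketch of that weighting under simplified inputs (not Supervisor's actual code):

```python
from __future__ import annotations


def scaled_progress(total_layers: int, reported: dict[str, tuple[int, int]]) -> float:
    """Byte-weighted progress scaled by the fraction of layers reporting.

    `reported` maps layer id -> (current, total) for layers with real size
    info; placeholder layers (total <= 1) are assumed filtered out already.
    """
    if not reported or total_layers <= 0:
        return 0.0  # no real layer has reported size information yet
    current = sum(cur for cur, _ in reported.values())
    total = sum(tot for _, tot in reported.values())
    raw = current / total if total else 0.0
    # Scale by the reporting fraction so a few tiny layers finishing
    # first cannot inflate overall progress.
    return raw * (len(reported) / total_layers)
```

With 2 of 25 layers reporting at 70%, this yields roughly 5.6% rather than 70%, matching the example in the commit message.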

File diff suppressed because it is too large


@@ -1,11 +1,14 @@
 """Test Home Assistant core."""

 from datetime import datetime, timedelta
+from http import HTTPStatus
-from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
+from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch

+import aiodocker
 from awesomeversion import AwesomeVersion
-from docker.errors import APIError, DockerException, ImageNotFound, NotFound
+from docker.errors import APIError, DockerException, NotFound
 import pytest
+from requests import RequestException
 from time_machine import travel

 from supervisor.const import CpuArch
@@ -23,8 +26,12 @@ from supervisor.exceptions import (
 from supervisor.homeassistant.api import APIState
 from supervisor.homeassistant.core import HomeAssistantCore
 from supervisor.homeassistant.module import HomeAssistant
+from supervisor.resolution.const import ContextType, IssueType
+from supervisor.resolution.data import Issue
 from supervisor.updater import Updater

+from tests.common import AsyncIterator
+

 async def test_update_fails_if_out_of_date(coresys: CoreSys):
     """Test update of Home Assistant fails when supervisor or plugin is out of date."""
@@ -52,11 +59,23 @@ async def test_update_fails_if_out_of_date(coresys: CoreSys):
         await coresys.homeassistant.core.update()


-async def test_install_landingpage_docker_error(
-    coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
+@pytest.mark.parametrize(
+    "err",
+    [
+        aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
+        APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
+    ],
+)
+async def test_install_landingpage_docker_ratelimit_error(
+    coresys: CoreSys,
+    capture_exception: Mock,
+    caplog: pytest.LogCaptureFixture,
+    err: Exception,
 ):
-    """Test install landing page fails due to docker error."""
+    """Test install landing page fails due to docker ratelimit error."""
     coresys.security.force = True
+    coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]

     with (
         patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
         patch.object(
@@ -69,19 +88,35 @@ async def test_install_landingpage_docker_error(
         ),
         patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
     ):
-        coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
         await coresys.homeassistant.core.install_landingpage()
         sleep.assert_awaited_once_with(30)

     assert "Failed to install landingpage, retrying after 30sec" in caplog.text
     capture_exception.assert_not_called()
+    assert (
+        Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
+        in coresys.resolution.issues
+    )


+@pytest.mark.parametrize(
+    "err",
+    [
+        aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
+        APIError("fail"),
+        DockerException(),
+        RequestException(),
+        OSError(),
+    ],
+)
 async def test_install_landingpage_other_error(
-    coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
+    coresys: CoreSys,
+    capture_exception: Mock,
+    caplog: pytest.LogCaptureFixture,
+    err: Exception,
 ):
     """Test install landing page fails due to other error."""
-    coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
+    coresys.docker.images.inspect.side_effect = [err, MagicMock()]

     with (
         patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
@@ -102,11 +137,23 @@ async def test_install_landingpage_other_error(
     capture_exception.assert_called_once_with(err)


-async def test_install_docker_error(
-    coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
+@pytest.mark.parametrize(
+    "err",
+    [
+        aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
+        APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
+    ],
+)
+async def test_install_docker_ratelimit_error(
+    coresys: CoreSys,
+    capture_exception: Mock,
+    caplog: pytest.LogCaptureFixture,
+    err: Exception,
 ):
-    """Test install fails due to docker error."""
+    """Test install fails due to docker ratelimit error."""
     coresys.security.force = True
+    coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]

     with (
         patch.object(HomeAssistantCore, "start"),
         patch.object(DockerHomeAssistant, "cleanup"),
@@ -123,19 +170,35 @@ async def test_install_docker_error(
         ),
         patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
     ):
-        coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
         await coresys.homeassistant.core.install()
         sleep.assert_awaited_once_with(30)

     assert "Error on Home Assistant installation. Retrying in 30sec" in caplog.text
     capture_exception.assert_not_called()
+    assert (
+        Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
+        in coresys.resolution.issues
+    )


+@pytest.mark.parametrize(
+    "err",
+    [
+        aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
+        APIError("fail"),
+        DockerException(),
+        RequestException(),
+        OSError(),
+    ],
+)
 async def test_install_other_error(
-    coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
+    coresys: CoreSys,
+    capture_exception: Mock,
+    caplog: pytest.LogCaptureFixture,
+    err: Exception,
 ):
     """Test install fails due to other error."""
-    coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
+    coresys.docker.images.inspect.side_effect = [err, MagicMock()]

     with (
         patch.object(HomeAssistantCore, "start"),
@@ -161,21 +224,29 @@ async def test_install_other_error(
 @pytest.mark.parametrize(
-    "container_exists,image_exists", [(False, True), (True, False), (True, True)]
+    ("container_exc", "image_exc", "remove_calls"),
+    [
+        (NotFound("missing"), None, []),
+        (
+            None,
+            aiodocker.DockerError(404, {"message": "missing"}),
+            [call(force=True, v=True)],
+        ),
+        (None, None, [call(force=True, v=True)]),
+    ],
 )
+@pytest.mark.usefixtures("path_extern")
 async def test_start(
-    coresys: CoreSys, container_exists: bool, image_exists: bool, path_extern
+    coresys: CoreSys,
+    container_exc: DockerException | None,
+    image_exc: aiodocker.DockerError | None,
+    remove_calls: list[call],
 ):
     """Test starting Home Assistant."""
-    if image_exists:
-        coresys.docker.images.get.return_value.id = "123"
-    else:
-        coresys.docker.images.get.side_effect = ImageNotFound("missing")
-    if container_exists:
-        coresys.docker.containers.get.return_value.image.id = "123"
-    else:
-        coresys.docker.containers.get.side_effect = NotFound("missing")
+    coresys.docker.images.inspect.return_value = {"Id": "123"}
+    coresys.docker.images.inspect.side_effect = image_exc
+    coresys.docker.containers.get.return_value.id = "123"
+    coresys.docker.containers.get.side_effect = container_exc

     with (
         patch.object(
@@ -198,18 +269,14 @@ async def test_start(
         assert run.call_args.kwargs["hostname"] == "homeassistant"

     coresys.docker.containers.get.return_value.stop.assert_not_called()
-    if container_exists:
-        coresys.docker.containers.get.return_value.remove.assert_called_once_with(
-            force=True,
-            v=True,
-        )
-    else:
-        coresys.docker.containers.get.return_value.remove.assert_not_called()
+    assert (
+        coresys.docker.containers.get.return_value.remove.call_args_list == remove_calls
+    )


 async def test_start_existing_container(coresys: CoreSys, path_extern):
     """Test starting Home Assistant when container exists and is viable."""
-    coresys.docker.images.get.return_value.id = "123"
+    coresys.docker.images.inspect.return_value = {"Id": "123"}
     coresys.docker.containers.get.return_value.image.id = "123"
     coresys.docker.containers.get.return_value.status = "exited"
@@ -394,24 +461,33 @@ async def test_core_loads_wrong_image_for_machine(
     """Test core is loaded with wrong image for machine."""
     coresys.homeassistant.set_image("ghcr.io/home-assistant/odroid-n2-homeassistant")
     coresys.homeassistant.version = AwesomeVersion("2024.4.0")
-    container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}

-    await coresys.homeassistant.core.load()
+    with patch.object(
+        DockerAPI,
+        "pull_image",
+        return_value={
+            "Id": "abc123",
+            "Config": {"Labels": {"io.hass.version": "2024.4.0"}},
+        },
+    ) as pull_image:
+        container.attrs |= pull_image.return_value
+        await coresys.homeassistant.core.load()
+        pull_image.assert_called_once_with(
+            ANY,
+            "ghcr.io/home-assistant/qemux86-64-homeassistant",
+            "2024.4.0",
+            platform="linux/amd64",
+            auth=None,
+        )

     container.remove.assert_called_once_with(force=True, v=True)
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": "ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
-        "force": True,
-    }
-    coresys.docker.pull_image.assert_called_once_with(
-        ANY,
-        "ghcr.io/home-assistant/qemux86-64-homeassistant",
-        "2024.4.0",
-        platform="linux/amd64",
-    )
+    assert coresys.docker.images.delete.call_args_list[0] == call(
+        "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
+        force=True,
+    )
+    assert coresys.docker.images.delete.call_args_list[1] == call(
+        "ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
+        force=True,
+    )
     assert (
         coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"
@@ -428,8 +504,8 @@ async def test_core_load_allows_image_override(coresys: CoreSys, container: Magi
     await coresys.homeassistant.core.load()

     container.remove.assert_not_called()
-    coresys.docker.images.remove.assert_not_called()
-    coresys.docker.images.get.assert_not_called()
+    coresys.docker.images.delete.assert_not_called()
+    coresys.docker.images.inspect.assert_not_called()
     assert (
         coresys.homeassistant.image == "ghcr.io/home-assistant/odroid-n2-homeassistant"
     )
@@ -440,27 +516,37 @@ async def test_core_loads_wrong_image_for_architecture(
 ):
     """Test core is loaded with wrong image for architecture."""
     coresys.homeassistant.version = AwesomeVersion("2024.4.0")
-    container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
-    coresys.docker.images.get("ghcr.io/home-assistant/qemux86-64-homeassistant").attrs[
-        "Architecture"
-    ] = "arm64"
+    coresys.docker.images.inspect.return_value = img_data = (
+        coresys.docker.images.inspect.return_value
+        | {
+            "Architecture": "arm64",
+            "Config": {"Labels": {"io.hass.version": "2024.4.0"}},
+        }
+    )
+    container.attrs |= img_data

-    await coresys.homeassistant.core.load()
+    with patch.object(
+        DockerAPI,
+        "pull_image",
+        return_value=img_data | {"Architecture": "amd64"},
+    ) as pull_image:
+        await coresys.homeassistant.core.load()
+        pull_image.assert_called_once_with(
+            ANY,
+            "ghcr.io/home-assistant/qemux86-64-homeassistant",
+            "2024.4.0",
+            platform="linux/amd64",
+            auth=None,
+        )

     container.remove.assert_called_once_with(force=True, v=True)
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": "ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
-        "force": True,
-    }
-    coresys.docker.pull_image.assert_called_once_with(
-        ANY,
-        "ghcr.io/home-assistant/qemux86-64-homeassistant",
-        "2024.4.0",
-        platform="linux/amd64",
-    )
+    assert coresys.docker.images.delete.call_args_list[0] == call(
+        "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
+        force=True,
+    )
+    assert coresys.docker.images.delete.call_args_list[1] == call(
+        "ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
+        force=True,
+    )
     assert (
         coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"


@@ -7,8 +7,8 @@ import pytest
 from supervisor.coresys import CoreSys
 from supervisor.dbus.const import DeviceType
-from supervisor.host.configuration import Interface, VlanConfig
-from supervisor.host.const import InterfaceType
+from supervisor.host.configuration import Interface, VlanConfig, WifiConfig
+from supervisor.host.const import AuthMethod, InterfaceType, WifiMode
 from tests.dbus_service_mocks.base import DBusServiceMock

 from tests.dbus_service_mocks.network_connection_settings import (
@@ -291,3 +291,237 @@ async def test_equals_dbus_interface_eth0_10_real(
     # Test should pass with matching VLAN config
     assert test_vlan_interface.equals_dbus_interface(network_interface) is True
def test_map_nm_wifi_non_wireless_interface():
"""Test _map_nm_wifi returns None for non-wireless interface."""
# Mock non-wireless interface
mock_interface = Mock()
mock_interface.type = DeviceType.ETHERNET
mock_interface.settings = Mock()
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_no_settings():
"""Test _map_nm_wifi returns None when interface has no settings."""
# Mock wireless interface without settings
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = None
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_open_authentication():
"""Test _map_nm_wifi with open authentication (no security)."""
# Mock wireless interface with open authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.mode == WifiMode.INFRASTRUCTURE
assert result.ssid == "TestSSID"
assert result.auth == AuthMethod.OPEN
assert result.psk is None
assert result.signal is None
def test_map_nm_wifi_wep_authentication():
"""Test _map_nm_wifi with WEP authentication."""
# Mock wireless interface with WEP authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "none"
mock_interface.settings.wireless_security.psk = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "WEPNetwork"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.auth == AuthMethod.WEP
assert result.ssid == "WEPNetwork"
assert result.psk is None
def test_map_nm_wifi_wpa_psk_authentication():
"""Test _map_nm_wifi with WPA-PSK authentication."""
# Mock wireless interface with WPA-PSK authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "wpa-psk"
mock_interface.settings.wireless_security.psk = "SecretPassword123"
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "SecureNetwork"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.auth == AuthMethod.WPA_PSK
assert result.ssid == "SecureNetwork"
assert result.psk == "SecretPassword123"
def test_map_nm_wifi_unsupported_authentication():
"""Test _map_nm_wifi returns None for unsupported authentication method."""
# Mock wireless interface with unsupported authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "wpa-eap" # Unsupported
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "EnterpriseNetwork"
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_different_modes():
"""Test _map_nm_wifi with different wifi modes."""
modes_to_test = [
("infrastructure", WifiMode.INFRASTRUCTURE),
("mesh", WifiMode.MESH),
("adhoc", WifiMode.ADHOC),
("ap", WifiMode.AP),
]
for mode_value, expected_mode in modes_to_test:
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = mode_value
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.mode == expected_mode
def test_map_nm_wifi_with_signal():
"""Test _map_nm_wifi with wireless signal strength."""
# Mock wireless interface with active connection and signal
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = Mock()
mock_interface.wireless.active = Mock()
mock_interface.wireless.active.strength = 75
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal == 75
def test_map_nm_wifi_without_signal():
"""Test _map_nm_wifi without wireless signal (no active connection)."""
# Mock wireless interface without active connection
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal is None
def test_map_nm_wifi_wireless_no_active_ap():
"""Test _map_nm_wifi with wireless object but no active access point."""
# Mock wireless interface with wireless object but no active AP
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = Mock()
mock_interface.wireless.active = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal is None
def test_map_nm_wifi_no_wireless_settings():
"""Test _map_nm_wifi when wireless settings are missing."""
# Mock wireless interface without wireless settings
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = None
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.ssid == ""
assert result.mode == WifiMode.INFRASTRUCTURE # Default mode
def test_map_nm_wifi_no_wireless_mode():
"""Test _map_nm_wifi when wireless mode is not specified."""
# Mock wireless interface without mode specified
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = None
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.mode == WifiMode.INFRASTRUCTURE # Default mode
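The authentication cases exercised above amount to a small mapping from NetworkManager `key_mgmt` values to supported auth methods, with unsupported schemes (such as `wpa-eap`) rejected. A rough sketch of that mapping (hypothetical enum values, not Supervisor's actual definitions):

```python
from __future__ import annotations

from enum import Enum


class AuthMethod(str, Enum):
    """Subset of auth methods the tests above cover (assumed values)."""

    OPEN = "open"
    WEP = "wep"
    WPA_PSK = "wpa-psk"


# No wireless_security section means an open network; key_mgmt "none" is WEP
# in NetworkManager terms.
_KEY_MGMT_TO_AUTH = {
    None: AuthMethod.OPEN,
    "none": AuthMethod.WEP,
    "wpa-psk": AuthMethod.WPA_PSK,
}


def map_auth(key_mgmt):
    """Return the mapped auth method, or None for unsupported schemes."""
    return _KEY_MGMT_TO_AUTH.get(key_mgmt)
```

Returning None for an unknown `key_mgmt` mirrors the test where a `wpa-eap` interface maps to no WifiConfig at all.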


@@ -90,6 +90,49 @@ async def test_logs_coloured(journald_gateway: MagicMock, coresys: CoreSys):
)
async def test_logs_no_colors(journald_gateway: MagicMock, coresys: CoreSys):
"""Test ANSI color codes being stripped when no_colors=True."""
journald_gateway.content.feed_data(
load_fixture("logs_export_supervisor.txt").encode("utf-8")
)
journald_gateway.content.feed_eof()
async with coresys.host.logs.journald_logs() as resp:
cursor, line = await anext(journal_logs_reader(resp, no_colors=True))
assert (
cursor
== "s=83fee99ca0c3466db5fc120d52ca7dd8;i=2049389;b=f5a5c442fa6548cf97474d2d57c920b3;m=4263828e8c;t=612dda478b01b;x=9ae12394c9326930"
)
# Colors should be stripped
assert (
line == "24-03-04 23:56:56 INFO (MainThread) [__main__] Closing Supervisor"
)
async def test_logs_verbose_no_colors(journald_gateway: MagicMock, coresys: CoreSys):
"""Test ANSI color codes being stripped from verbose formatted logs when no_colors=True."""
journald_gateway.content.feed_data(
load_fixture("logs_export_supervisor.txt").encode("utf-8")
)
journald_gateway.content.feed_eof()
async with coresys.host.logs.journald_logs() as resp:
cursor, line = await anext(
journal_logs_reader(
resp, log_formatter=LogFormatter.VERBOSE, no_colors=True
)
)
assert (
cursor
== "s=83fee99ca0c3466db5fc120d52ca7dd8;i=2049389;b=f5a5c442fa6548cf97474d2d57c920b3;m=4263828e8c;t=612dda478b01b;x=9ae12394c9326930"
)
# Colors should be stripped in verbose format too
assert (
line
== "2024-03-04 22:56:56.709 ha-hloub hassio_supervisor[466]: 24-03-04 23:56:56 INFO (MainThread) [__main__] Closing Supervisor"
)
async def test_boot_ids(
    journald_gateway: MagicMock,
    coresys: CoreSys,


@@ -1179,7 +1179,6 @@ async def test_job_scheduled_delay(coresys: CoreSys):
 async def test_job_scheduled_at(coresys: CoreSys):
     """Test job that schedules a job to start at a specified time."""
-    dt = datetime.now()

     class TestClass:
         """Test class."""
@@ -1189,10 +1188,12 @@ async def test_job_scheduled_at(coresys: CoreSys):
             self.coresys = coresys

         @Job(name="test_job_scheduled_at_job_scheduler")
-        async def job_scheduler(self) -> tuple[SupervisorJob, asyncio.TimerHandle]:
+        async def job_scheduler(
+            self, scheduled_time: datetime
+        ) -> tuple[SupervisorJob, asyncio.TimerHandle]:
             """Schedule a job to run at specified time."""
             return self.coresys.jobs.schedule_job(
-                self.job_task, JobSchedulerOptions(start_at=dt + timedelta(seconds=0.1))
+                self.job_task, JobSchedulerOptions(start_at=scheduled_time)
             )

         @Job(name="test_job_scheduled_at_job_task")
@@ -1201,29 +1202,28 @@ async def test_job_scheduled_at(coresys: CoreSys):
             self.coresys.jobs.current.stage = "work"

     test = TestClass(coresys)
-    job_started = asyncio.Event()
-    job_ended = asyncio.Event()
+
+    # Schedule job to run 0.1 seconds from now
+    scheduled_time = datetime.now() + timedelta(seconds=0.1)
+    job, _ = await test.job_scheduler(scheduled_time)
+
+    started = False
+    ended = False

     async def start_listener(evt_job: SupervisorJob):
-        if evt_job.uuid == job.uuid:
-            job_started.set()
+        nonlocal started
+        started = started or evt_job.uuid == job.uuid

     async def end_listener(evt_job: SupervisorJob):
-        if evt_job.uuid == job.uuid:
-            job_ended.set()
+        nonlocal ended
+        ended = ended or evt_job.uuid == job.uuid

-    async with time_machine.travel(dt):
-        job, _ = await test.job_scheduler()
-        coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_START, start_listener)
-        coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, end_listener)
+    coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_START, start_listener)
+    coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, end_listener)
+    await asyncio.sleep(0.2)
# Advance time to exactly when job should start and wait for completion
async with time_machine.travel(dt + timedelta(seconds=0.1)):
await asyncio.wait_for(
asyncio.gather(job_started.wait(), job_ended.wait()), timeout=1.0
)
assert started
assert ended
assert job.done assert job.done
assert job.name == "test_job_scheduled_at_job_task" assert job.name == "test_job_scheduled_at_job_task"
assert job.stage == "work" assert job.stage == "work"
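
Outside the Supervisor's job manager, the same pattern being tested here, running a coroutine at a wall-clock `start_at` time, can be sketched with plain asyncio. `schedule_at` is a hypothetical helper for illustration; `schedule_job` and `JobSchedulerOptions` above are the real Supervisor APIs:

```python
import asyncio
from datetime import datetime, timedelta


def schedule_at(when: datetime, coro_factory) -> asyncio.TimerHandle:
    """Schedule coro_factory() to run as a task at wall-clock time `when`."""
    loop = asyncio.get_running_loop()
    # call_later takes a relative delay, so convert from the absolute time
    delay = max(0.0, (when - datetime.now()).total_seconds())
    return loop.call_later(delay, lambda: loop.create_task(coro_factory()))


async def main():
    ran = asyncio.Event()

    async def job_task():
        ran.set()

    schedule_at(datetime.now() + timedelta(seconds=0.1), job_task)
    await asyncio.wait_for(ran.wait(), timeout=1.0)
    print("job ran")


asyncio.run(main())
# job ran
```

Returning the `TimerHandle` mirrors the tuple returned by `schedule_job` above: the caller can cancel the pending job before it fires.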


@@ -198,7 +198,7 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
                 "errors": [
                     {
                         "type": "HassioError",
-                        "message": "Unknown error, see supervisor logs",
+                        "message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
                         "stage": "test",
                     }
                 ],
@@ -226,7 +226,7 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
                 "errors": [
                     {
                         "type": "HassioError",
-                        "message": "Unknown error, see supervisor logs",
+                        "message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
                         "stage": "test",
                     }
                 ],


@@ -115,7 +115,17 @@ async def test_not_started(coresys):
     assert filter_data(coresys, SAMPLE_EVENT, {}) == SAMPLE_EVENT
 
     await coresys.core.set_state(CoreState.SETUP)
-    assert filter_data(coresys, SAMPLE_EVENT, {}) == SAMPLE_EVENT
+    filtered = filter_data(coresys, SAMPLE_EVENT, {})
+    # During SETUP, we should have basic system info available
+    assert "contexts" in filtered
+    assert "versions" in filtered["contexts"]
+    assert "docker" in filtered["contexts"]["versions"]
+    assert "supervisor" in filtered["contexts"]["versions"]
+    assert "host" in filtered["contexts"]
+    assert "machine" in filtered["contexts"]["host"]
+    assert filtered["contexts"]["versions"]["docker"] == coresys.docker.info.version
+    assert filtered["contexts"]["versions"]["supervisor"] == coresys.supervisor.version
+    assert filtered["contexts"]["host"]["machine"] == coresys.machine
 
 
 async def test_defaults(coresys):
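
The event shape asserted in this test, a `contexts` mapping with `versions` and `host` sections, can be sketched as a plain dict transform. `attach_contexts` and the sample version strings are hypothetical; the real logic lives in the Supervisor's `filter_data`:

```python
def attach_contexts(event: dict, docker: str, supervisor: str, machine: str) -> dict:
    """Return a copy of a Sentry event with basic system context attached."""
    filtered = dict(event)  # leave the original event untouched
    filtered["contexts"] = {
        "versions": {"docker": docker, "supervisor": supervisor},
        "host": {"machine": machine},
    }
    return filtered


event = attach_contexts({"message": "boom"}, "27.3.1", "2025.11.0", "raspberrypi4-64")
assert event["contexts"]["versions"]["supervisor"] == "2025.11.0"
assert event["contexts"]["host"]["machine"] == "raspberrypi4-64"
```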

Some files were not shown because too many files have changed in this diff.