Compare commits

...

73 Commits

Author SHA1 Message Date
Mike Degatano
1dc84d6eff Fix stats test and add more for known errors 2025-11-19 22:46:25 +00:00
Mike Degatano
842baf689d Fix docker ratelimit exception and tests 2025-11-19 22:46:20 +00:00
Mike Degatano
772d074db9 Remove customized unknown error types 2025-11-19 22:46:19 +00:00
Mike Degatano
c399a3ef27 Remove unknown errors from addons 2025-11-19 22:46:17 +00:00
Stefan Agner
63a3dff118 Handle pull events with complete progress details only (#6320)
* Handle pull events with complete progress details only

Under certain circumstances, Docker seems to send pull events with
incomplete progress details (i.e., missing 'current' or 'total' fields).
In practice, we've observed an empty dictionary for progress details
as well as a missing 'total' field (while 'current' was present).
All events were observed on Docker 28.3.3 with the old, default Docker
graph backend.
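
A minimal sketch of the defensive parsing this implies, assuming a raw
event dict as Docker sends it (names here are illustrative, not the
Supervisor's actual code):

```python
def parse_progress_detail(event: dict) -> tuple[int, int] | None:
    """Return (current, total) only when both fields are present."""
    detail = event.get("progressDetail") or {}  # may be {} or absent
    current = detail.get("current")
    total = detail.get("total")
    if current is None or total is None:
        return None  # incomplete progress details: skip this event
    return current, total
```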

* Fix docstring/comment
2025-11-19 12:21:27 +01:00
dependabot[bot]
fc8fc171c1 Bump time-machine from 2.19.0 to 3.0.0 (#6321)
Bumps [time-machine](https://github.com/adamchainz/time-machine) from 2.19.0 to 3.0.0.
- [Changelog](https://github.com/adamchainz/time-machine/blob/main/docs/changelog.rst)
- [Commits](https://github.com/adamchainz/time-machine/compare/2.19.0...3.0.0)

---
updated-dependencies:
- dependency-name: time-machine
  dependency-version: 3.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 12:21:17 +01:00
Stefan Agner
72bbc50c83 Fix call_at to use event loop time base instead of Unix timestamp (#6324)
* Fix call_at to use event loop time base instead of Unix timestamp

The CoreSys.call_at method was incorrectly passing Unix timestamps
directly to asyncio.loop.call_at(), which expects times in the event
loop's monotonic time base. This caused scheduled jobs to be scheduled
approximately 55 years in the future (the difference between Unix epoch
time and monotonic time since boot).

The bug was masked by time-machine 2.19.0, which patched time.monotonic()
and caused loop.time() to return Unix timestamps. Time-machine 3.0.0
removed this patching (as it caused event loop freezes), exposing the bug.

Fix by converting the datetime to event loop time base:
- Calculate delay from current Unix time to scheduled Unix time
- Add delay to current event loop time to get scheduled loop time
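
A minimal sketch of that conversion, assuming a timezone-aware datetime
(illustrative, not the exact CoreSys code):

```python
import asyncio
from datetime import datetime, timezone

def to_loop_time(loop: asyncio.AbstractEventLoop, when: datetime) -> float:
    """Convert a wall-clock datetime into the loop's monotonic time base."""
    delay = when.timestamp() - datetime.now(timezone.utc).timestamp()
    return loop.time() + delay  # safe to pass to loop.call_at()
```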

Also simplify test_job_scheduled_at to avoid time-machine's async
context managers, following the pattern of test_job_scheduled_delay.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add comment about datetime in the past

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-19 11:49:05 +01:00
Jan Čermák
0837e05cb2 Strip ANSI escape color sequences from /latest log responses (#6319)
* Strip ANSI escape color sequences from /latest log responses

Strip ANSI sequences of CSI commands [1] used for log coloring from
the /latest log endpoints. These endpoints were primarily designed for
log downloads, and colors are mostly not wanted there. Add an optional
argument for stripping the colors from the logs and enable it for the
/latest endpoints.

[1] https://en.wikipedia.org/wiki/ANSI_escape_code#CSIsection
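
A minimal sketch of such stripping, using the CSI grammar from the
reference above (the regex is illustrative, not necessarily the one
shipped):

```python
import re

# CSI: ESC '[' + parameter bytes (0x30-0x3F) + intermediate bytes
# (0x20-0x2F) + one final byte (0x40-0x7E); covers colors like \x1b[31m
CSI_RE = re.compile(r"\x1b\[[0-?]*[ -/]*[@-~]")

def strip_csi(line: str) -> str:
    return CSI_RE.sub("", line)
```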

* Refactor advanced logs' tests to use fixture factory

Introduce `advanced_logs_tester` fixture to simplify testing of advanced logs
in the API tests, declaring all the needed fixtures in a single place.
2025-11-19 09:39:24 +01:00
dependabot[bot]
d3d652eba5 Bump sentry-sdk from 2.44.0 to 2.45.0 (#6322)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.44.0 to 2.45.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.44.0...2.45.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 09:27:59 +01:00
dependabot[bot]
2eea3c70eb Bump coverage from 7.11.3 to 7.12.0 (#6323)
Bumps [coverage](https://github.com/coveragepy/coveragepy) from 7.11.3 to 7.12.0.
- [Release notes](https://github.com/coveragepy/coveragepy/releases)
- [Changelog](https://github.com/coveragepy/coveragepy/blob/main/CHANGES.rst)
- [Commits](https://github.com/coveragepy/coveragepy/compare/7.11.3...7.12.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-19 09:27:45 +01:00
dependabot[bot]
95c106d502 Bump actions/checkout from 5.0.0 to 5.0.1 (#6318)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0 to 5.0.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](08c6903cd8...93cb6efe18)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-18 08:45:19 +01:00
dependabot[bot]
74f9431519 Bump ruff from 0.14.4 to 0.14.5 (#6314)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.4 to 0.14.5.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.4...0.14.5)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-14 09:06:58 +01:00
dependabot[bot]
0eef2169f7 Bump pylint from 4.0.2 to 4.0.3 (#6315) 2025-11-13 23:02:33 -08:00
dependabot[bot]
2656b451cd Bump pytest from 8.4.2 to 9.0.1 (#6309)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.4.2 to 9.0.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.4.2...9.0.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-13 09:45:51 +01:00
dependabot[bot]
af7a629dd4 Bump pytest-asyncio from 1.2.0 to 1.3.0 (#6310)
Bumps [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/pytest-dev/pytest-asyncio/releases)
- [Commits](https://github.com/pytest-dev/pytest-asyncio/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: pytest-asyncio
  dependency-version: 1.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-13 09:07:57 +01:00
Mike Degatano
30cc172199 Migrate images from dockerpy to aiodocker (#6252)
* Migrate images from dockerpy to aiodocker

* Add missing coverage and fix bug in repair

* Bind libraries to different files and refactor images.pull

* Use the same socket again

Try using the same socket again.

* Fix pytest

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-11-12 20:54:06 +01:00
Stefan Agner
69ae8db13c Add context to Sentry events during setup phase (#6308)
* Add context to Sentry events during setup phase

Since not all properties are safe to access, the current code avoids
adding any context during the initialization and setup phases. However,
quite a few reports occur during the setup phase. This change adds some
context to events during the setup phase as well, to make debugging
easier.

* Drop default arch (not available during setup)
2025-11-12 14:49:04 -05:00
Stefan Agner
d85aedc42b Avoid using deprecated 'id' field in Docker events (#6307) 2025-11-12 20:44:01 +01:00
dependabot[bot]
d541fe5c3a Bump sentry-sdk from 2.43.0 to 2.44.0 (#6306) 2025-11-11 22:28:34 -08:00
Stefan Agner
91a9cb98c3 Avoid adding Content-Type to non-body responses (#6266)
* Avoid adding Content-Type to non-body responses

The current code sets the content-type header for all responses
to the result's content_type property if upstream does not set a
content_type. The default value for content_type is
"application/octet-stream".

For responses that do not have a body (like 204 No Content or
304 Not Modified), setting a content-type header is unnecessary and
potentially misleading. Follow HTTP standards by only adding the
content-type header to responses that actually contain a body.
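
A minimal sketch of the rule (plain dict headers for illustration; the
real code operates on the aiohttp response object):

```python
from http import HTTPStatus

BODYLESS = {HTTPStatus.NO_CONTENT, HTTPStatus.NOT_MODIFIED}

def maybe_set_content_type(status: int, headers: dict, content_type: str) -> None:
    """Only add a Content-Type when the response can carry a body."""
    if status in BODYLESS or "Content-Type" in headers:
        return
    headers["Content-Type"] = content_type
```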

* Add pytest for ingress proxy

* Preserve Content-Type header for HEAD requests in ingress API
2025-11-10 17:39:10 +01:00
Stefan Agner
8f2b0763b7 Add zstd compression support (#6302)
Add zstd compression support to allow zstd-compressed proxying for
ingress. Zstd is automatically supported by aiohttp if the package
is present.
2025-11-10 17:04:06 +01:00
Stefan Agner
5018d5d04e Bump pytest-asyncio to 1.2.0 (#6301) 2025-11-10 12:00:25 +01:00
Stefan Agner
1ba1ad9fc7 Remove Docker version from unhealthy reasons (#6292)
Any unhealthy reason blocks Home Assistant OS updates. If the Docker
version on a system running Home Assistant OS is outdated, the user
needs to be able to update Home Assistant OS to get a supported Docker
version. Therefore, we should not mark the system as unhealthy due to
an outdated Docker version.
2025-11-10 10:23:12 +01:00
dependabot[bot]
f0ef40eb3e Bump astroid from 4.0.1 to 4.0.2 (#6297)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 09:55:16 +01:00
dependabot[bot]
6eed5b02b4 Bump coverage from 7.11.0 to 7.11.3 (#6298) 2025-11-09 23:24:55 -08:00
dependabot[bot]
e59dcf7089 Bump dbus-fast from 2.44.5 to 2.45.1 (#6299) 2025-11-09 23:15:39 -08:00
dependabot[bot]
48da3d8a8d Bump pre-commit from 4.3.0 to 4.4.0 (#6300) 2025-11-09 23:07:49 -08:00
dependabot[bot]
7b82ebe3aa Bump ruff from 0.14.3 to 0.14.4 (#6291)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-07 09:06:06 +01:00
Stefan Agner
d96ea9aef9 Fix docker image pull progress blocked by small layers (#6287)
* Fix docker image pull progress blocked by small layers

Small Docker layers (typically <100 bytes) can skip the downloading phase
entirely, going directly from "Pulling fs layer" to "Download complete"
without emitting any progress events with byte counts. This caused the
aggregate progress calculation to block indefinitely, as it required all
layer jobs to have their `extra` field populated with byte counts before
proceeding.

The issue manifested as parent job progress jumping from 0% to 97.9% after
long delays, as seen when a 96-byte layer held up progress reporting for
~50 seconds until it finally reached the "Extracting" phase.

Set a minimal `extra` field (current=1, total=1) when layers reach
"Download complete" without having gone through the downloading phase.
This allows the aggregate progress calculation to proceed immediately
while still correctly representing the layer as 100% downloaded.
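
A minimal sketch of the workaround, assuming a per-layer job object
with an `extra` mapping (illustrative, not the exact Supervisor code):

```python
def on_layer_status(job, status: str) -> None:
    if status == "Download complete" and not job.extra:
        # Layer skipped the downloading phase; mark it fully downloaded
        # so the aggregate progress calculation can proceed.
        job.extra = {"current": 1, "total": 1}
```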

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update test to capture issue correctly

* Improve pytest

* Fix pytest comment

* Fix pylint warning

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-06 09:04:55 +01:00
dependabot[bot]
4e5ec2d6be Bump brotli from 1.1.0 to 1.2.0 (#6288)
Bumps [brotli](https://github.com/google/brotli) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/google/brotli/releases)
- [Changelog](https://github.com/google/brotli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/brotli/compare/go/cbrotli/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: brotli
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 09:04:00 +01:00
dependabot[bot]
c9ceb4a4e3 Bump getsentry/action-release from 3.3.0 to 3.4.0 (#6284)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 12:17:51 +01:00
Ashton
d33305379f Improve error message clarity by specifying to check supervisor logs (#6250)
* Improve error message clarity by specifying to check supervisor logs with 'ha supervisor logs'

* Fix ruff, supervisor -> Supervisor

---------

Co-authored-by: Jan Čermák <sairon@sairon.cz>
2025-11-04 17:12:15 -05:00
Stefan Agner
1448a33dbf Remove Codenotary integrity check (#6236)
* Formally deprecate CodeNotary build config

* Remove CodeNotary specific integrity checking

The current code is specific to how CodeNotary was doing integrity
checking. A future integrity checking mechanism likely will work
differently (e.g. through EROFS based containers). Remove the current
code to make way for a future implementation.

* Drop CodeNotary integrity fixups

* Drop unused tests

* Fix pytest

* Fix pytest

* Remove CodeNotary related exceptions and handling

Remove CodeNotary related exceptions and handling from the Docker
interface.

* Drop unnecessary comment

* Remove Codenotary specific IssueType/SuggestionType

* Drop Codenotary specific environment and secret reference

* Remove unused constants

* Introduce APIGone exception for removed APIs

Introduce a new exception class APIGone to indicate that certain API
features have been removed and are no longer available. Update the
security integrity check endpoint to raise this new exception instead
of a generic APIError, providing clearer communication to clients that
the feature has been intentionally removed.

* Drop content trust

A cosign based signature verification will likely be named differently
to avoid confusion with existing implementations. For now, remove the
content trust option entirely.

* Drop code sign test

* Remove source_mods/content_trust evaluations

* Remove content_trust reference in bootstrap.py

* Fix security tests

* Drop unused tests

* Drop codenotary from schema

Since we have "remove extra" in voluptuous, we can remove the
codenotary field from the addon schema.

* Remove content_trust from tests

* Remove content_trust unsupported reason

* Remove unnecessary comment

* Remove unrelated pytest

* Remove unrelated fixtures
2025-11-03 20:13:15 +01:00
Stefan Agner
1657769044 Fix parent job sync when parent reaches 100% progress (#6282)
* Fix parent job sync when parent reaches 100% progress

When a child job completes and syncs its progress to a parent job,
it can set the parent to 100% progress. However, there are situations
where a second child job for the same parent job is created after the
parent has already reached 100%. This caused a ValueError when creating
a ParentJobSync for subsequent child jobs, as the validator required
starting_progress < 100.0.

This issue was introduced in #6207 (shipped in 2025.10.0) but only
became visible in 2025.10.1 with #6195, which started using progress
syncs. The bug manifests during Core update rollbacks when a second
docker_interface_install job is created after the parent already
reached 100% from the first install.

Fix by skipping parent job sync setup when the parent is already done
or at 100% progress, as there's no value in syncing progress to a
completed parent.
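
A minimal sketch of that guard (a stand-in class is used here for
illustration; the real ParentJobSync lives in the jobs module):

```python
from dataclasses import dataclass

@dataclass
class ParentJobSync:  # stand-in for illustration only
    starting_progress: float

def setup_parent_sync(parent_done: bool, parent_progress: float) -> ParentJobSync | None:
    """Skip sync setup for a parent that is already finished or at 100%."""
    if parent_done or parent_progress >= 100.0:
        return None  # would otherwise trip the starting_progress < 100 validator
    return ParentJobSync(starting_progress=parent_progress)
```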

* Update comment and add debug message
2025-11-03 17:58:40 +01:00
Copilot
a8b7923a42 Add test coverage for _map_nm_wifi method (#6275)
* Initial plan

* Add comprehensive test coverage for _map_nm_wifi method

Co-authored-by: agners <34061+agners@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: agners <34061+agners@users.noreply.github.com>
2025-11-03 10:04:01 +01:00
dependabot[bot]
b3b7bc29fa Bump ruff from 0.14.2 to 0.14.3 (#6280)
* Bump ruff from 0.14.2 to 0.14.3

Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.2 to 0.14.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.2...0.14.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update precommit ruff too

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-10-31 11:42:39 -04:00
dependabot[bot]
2098168d04 Bump sentry-sdk from 2.42.1 to 2.43.0 (#6279)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.42.1 to 2.43.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.42.1...2.43.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-30 14:24:50 -04:00
dependabot[bot]
02c4fd4a8c Bump aiohttp from 3.13.1 to 3.13.2 (#6273)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-29 10:36:28 +01:00
Erwin Douna
0bee5c6f37 Add exponential backoff strategy to watchdog restart (#6262) 2025-10-28 17:46:40 +01:00
dependabot[bot]
9c0174f1fd Bump actions/upload-artifact from 4.6.2 to 5.0.0 (#6269)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.2 to 5.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...330a01c490)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:43 +01:00
dependabot[bot]
dc3d8b9266 Bump actions/download-artifact from 5.0.0 to 6.0.0 (#6270)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](634f93cb29...018cc2cf5b)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:13 +01:00
dependabot[bot]
06d96db55b Bump orjson from 3.11.3 to 3.11.4 (#6271)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.3 to 3.11.4.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.3...3.11.4)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:27:39 +01:00
Mike Degatano
131cc3b6d1 Prevent float drift error on progress update (#6267) 2025-10-24 09:37:19 -04:00
Michel Nederlof
b92f5976a3 If D-Bus to UDisks2 is not connected, return None (#6265)
When fetching the host info without UDisks2 connected via D-Bus, it would raise an exception and disable managing add-ons via Home Assistant (and other GUI functions that need host info status).
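
A minimal sketch of the pattern, with assumed attribute names:

```python
def host_disk_info(udisks2) -> dict | None:
    """Return None instead of raising when UDisks2 is not connected."""
    if not udisks2.is_connected:  # attribute name assumed for illustration
        return None  # host info stays usable without UDisks2
    return {"free_space": udisks2.free_space}
```
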
2025-10-24 10:22:02 +02:00
dependabot[bot]
370c961c9e Bump ruff from 0.14.1 to 0.14.2 (#6268)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.1 to 0.14.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.1...0.14.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 09:46:04 +02:00
dependabot[bot]
b903e1196f Bump sigstore/cosign-installer from 3.10.0 to 4.0.0 (#6256)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.10.0 to 4.0.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d7543c93d8...faadad0cce)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-23 08:37:59 +02:00
dependabot[bot]
9f8e8ab15a Bump home-assistant/wheels from 2025.09.1 to 2025.10.0 (#6260)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:17:38 +02:00
dependabot[bot]
56bffc839b Bump aiohttp from 3.13.0 to 3.13.1 (#6261)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:14:01 +02:00
dependabot[bot]
952a553c3b Bump pylint from 4.0.1 to 4.0.2 (#6263) 2025-10-22 10:20:35 +02:00
dependabot[bot]
717f1c85f5 Bump sentry-sdk from 2.42.0 to 2.42.1 (#6264)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.42.0 to 2.42.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.42.0...2.42.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.42.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 09:25:14 +02:00
dependabot[bot]
ffd498a515 Bump sentry-sdk from 2.41.0 to 2.42.0 (#6253)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 23:30:45 +02:00
dependabot[bot]
35f0645cb9 Bump cryptography from 46.0.2 to 46.0.3 (#6255)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 22:04:32 +02:00
dependabot[bot]
15c6547382 Bump coverage from 7.10.7 to 7.11.0 (#6254)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 23:42:02 +02:00
dependabot[bot]
adefa242e5 Bump colorlog from 6.9.0 to 6.10.1 (#6258)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:14:38 +02:00
dependabot[bot]
583a8a82fb Bump ruff from 0.14.0 to 0.14.1 (#6257)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:12:47 +02:00
dependabot[bot]
322df15e73 Bump pylint from 4.0.0 to 4.0.1 (#6251)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-15 19:58:36 +02:00
dependabot[bot]
51490c8e41 Bump pylint from 3.3.9 to 4.0.0 and astroid from 3.3.11 to 4.0.1 (#6248)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-10-14 10:45:17 +02:00
dependabot[bot]
3c21a8b8ef Bump sentry-sdk from 2.40.0 to 2.41.0 (#6246)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.40.0 to 2.41.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.40.0...2.41.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-10 08:38:17 +02:00
Jan Čermák
ddb8588d77 Treat containerd snapshotter/overlayfs driver as supported (#6242)
* Treat containerd snapshotter/overlayfs driver as supported

With home-assistant/operating-system#4252 the storage driver would
change to "overlayfs". We don't want the system to be marked as
unsupported. It should be safe to treat it as supported even now, so add
it to the list of allowed values.

* Flip the logic

(note for self: don't forget to check for unstaged changes before push)

* Set valid storage for invalid logging test case
2025-10-09 18:47:06 +02:00
dependabot[bot]
81e46b20b8 Bump pyudev from 0.24.3 to 0.24.4 (#6244)
Bumps [pyudev](https://github.com/pyudev/pyudev) from 0.24.3 to 0.24.4.
- [Changelog](https://github.com/pyudev/pyudev/blob/master/CHANGES.rst)
- [Commits](https://github.com/pyudev/pyudev/compare/v0.24.3...v0.24.4)

---
updated-dependencies:
- dependency-name: pyudev
  dependency-version: 0.24.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:48:34 +02:00
dependabot[bot]
5041a1ed5c Bump types-docker from 7.1.0.20250916 to 7.1.0.20251009 (#6243)
Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20250916 to 7.1.0.20251009.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251009
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:38:12 +02:00
Stefan Agner
337731a55a Let only bugs go stale (#6239)
Let's apply the stale label only to issues which are bugs. Tasks are
longer-lived and should not be marked as stale automatically.
2025-10-08 15:56:35 +02:00
Stefan Agner
53a8044aff Add support for ulimit in addon config (#6206)
* Add support for ulimit in addon config

Similar to docker-compose, this adds support for setting ulimits
for addons via the addon config. This is useful e.g. for InfluxDB,
which on its own does not support setting higher open file descriptor
limits but recommends increasing limits on the host.

* Make soft and hard limit mandatory if ulimit is a dict
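
A minimal sketch of a matching config schema in voluptuous (field names
assumed from the description above, not copied from the validator):

```python
import voluptuous as vol

# A ulimit is either a single integer or a dict with mandatory soft and
# hard limits, similar to docker-compose.
SCHEMA_ULIMITS = vol.Schema(
    {
        str: vol.Any(
            vol.Coerce(int),
            vol.Schema(
                {
                    vol.Required("soft"): vol.Coerce(int),
                    vol.Required("hard"): vol.Coerce(int),
                }
            ),
        )
    }
)

# e.g. SCHEMA_ULIMITS({"nofile": 65535})
# e.g. SCHEMA_ULIMITS({"nofile": {"soft": 32768, "hard": 65535}})
```
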
2025-10-08 12:43:12 +02:00
Jan Čermák
c71553f37d Add AGENTS.md symlink (#6237)
Add AGENTS.md alongside CLAUDE.md for agents that support it. While
CLAUDE.md is still required and specific to Claude Code, AGENTS.md
covers various other agents that implement this proposed standard.

Core already adopted the same approach recently.
2025-10-08 10:44:49 +02:00
dependabot[bot]
c1eb97d8ab Bump ruff from 0.13.3 to 0.14.0 (#6238)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.3 to 0.14.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.3...0.14.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-08 09:47:19 +02:00
Mike Degatano
190b734332 Add progress reporting to addon, HA and Supervisor updates (#6195)
* Add progress reporting to addon, HA and Supervisor updates

* Fix assert in test

* Add progress to addon, core, supervisor updates/installs

* Fix double install bug in addons install

* Remove initial_install and re-arrange order of load
2025-10-07 16:54:11 +02:00
dependabot[bot]
559b6982a3 Bump aiohttp from 3.12.15 to 3.13.0 (#6234)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:47 +02:00
dependabot[bot]
301362e9e5 Bump attrs from 25.3.0 to 25.4.0 (#6235)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.3.0 to 25.4.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 25.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:03 +02:00
dependabot[bot]
fc928d294c Bump sentry-sdk from 2.39.0 to 2.40.0 (#6233)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.39.0 to 2.40.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.39.0...2.40.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 09:47:03 +02:00
dependabot[bot]
f42aeb4937 Bump dbus-fast from 2.44.3 to 2.44.5 (#6232)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.3 to 2.44.5.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.3...v2.44.5)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:37:22 +02:00
dependabot[bot]
fd21886de9 Bump pylint from 3.3.8 to 3.3.9 (#6230)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 3.3.8 to 3.3.9.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v3.3.8...v3.3.9)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 3.3.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:36:03 +02:00
dependabot[bot]
e4bb415e30 Bump actions/stale from 10.0.0 to 10.1.0 (#6229)
Bumps [actions/stale](https://github.com/actions/stale) from 10.0.0 to 10.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](3a9db7e6a4...5f858e3efb)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:35:49 +02:00
dependabot[bot]
622dda5382 Bump ruff from 0.13.2 to 0.13.3 (#6228)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.2 to 0.13.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.2...0.13.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-03 14:42:13 +02:00
102 changed files with 3353 additions and 2628 deletions

View File

@@ -53,7 +53,7 @@ jobs:
requirements: ${{ steps.requirements.outputs.changed }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
with:
fetch-depth: 0
@@ -92,7 +92,7 @@ jobs:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
with:
fetch-depth: 0
@@ -107,7 +107,7 @@ jobs:
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2025.09.1
uses: home-assistant/wheels@2025.10.0
with:
abi: cp313
tag: musllinux_1_2
@@ -132,7 +132,7 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"
@@ -170,8 +170,6 @@ jobs:
--target /data \
--cosign \
--generic ${{ needs.init.outputs.version }}
env:
CAS_API_KEY: ${{ secrets.CAS_TOKEN }}
version:
name: Update version
@@ -180,7 +178,7 @@ jobs:
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Initialize git
if: needs.init.outputs.publish == 'true'
@@ -205,7 +203,7 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
# home-assistant/builder doesn't support sha pinning
- name: Build the Supervisor
@@ -293,33 +291,6 @@ jobs:
exit 1
fi
- name: Check the Supervisor code sign
if: needs.init.outputs.publish == 'true'
run: |
echo "Enable Content-Trust"
test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
echo "Run supervisor health check"
test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
if [ "$test" != "ok" ]; then
exit 1
fi
echo "Check supervisor unhealthy"
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
if [ "$test" != "" ]; then
exit 1
fi
echo "Check supervisor supported"
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
if [[ "$test" =~ source_mods ]]; then
exit 1
fi
- name: Create full backup
id: backup
run: |

View File

@@ -26,7 +26,7 @@ jobs:
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python
id: python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
@@ -68,7 +68,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -111,7 +111,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -154,7 +154,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,7 +169,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -213,7 +213,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -257,7 +257,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -293,7 +293,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -339,14 +339,14 @@ jobs:
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"
- name: Restore Python virtual environment
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: coverage
path: .coverage
@@ -398,7 +398,7 @@ jobs:
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: coverage
path: coverage/

View File

@@ -11,7 +11,7 @@ jobs:
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
with:
fetch-depth: 0

View File

@@ -10,9 +10,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Sentry Release
uses: getsentry/action-release@4f502acc1df792390abe36f2dcb03612ef144818 # v3.3.0
uses: getsentry/action-release@128c5058bbbe93c8e02147fe0a9c713f166259a6 # v3.4.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@@ -9,13 +9,14 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30
days-before-close: 7
stale-issue-label: "stale"
exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
only-issue-types: "bug"
stale-issue-message: >
There hasn't been any activity on this issue recently. Due to the
high number of incoming GitHub notifications, we have to clean some

View File

@@ -14,7 +14,7 @@ jobs:
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
@@ -49,7 +49,7 @@ jobs:
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*

View File

@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.11.10
rev: v0.14.3
hooks:
- id: ruff
args:

AGENTS.md Symbolic link
View File

@@ -0,0 +1 @@
.github/copilot-instructions.md

View File

@@ -1,14 +1,16 @@
aiodns==3.5.0
aiohttp==3.12.15
aiodocker==0.24.0
aiohttp==3.13.2
atomicwrites-homeassistant==1.4.1
attrs==25.3.0
attrs==25.4.0
awesomeversion==25.8.0
backports.zstd==1.0.0
blockbuster==1.5.25
brotli==1.1.0
brotli==1.2.0
ciso8601==2.3.3
colorlog==6.9.0
colorlog==6.10.1
cpe==1.3.1
cryptography==46.0.2
cryptography==46.0.3
debugpy==1.8.17
deepmerge==2.0
dirhash==0.5.0
@@ -17,14 +19,14 @@ faust-cchardet==2.1.19
gitpython==3.1.45
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.3
orjson==3.11.4
pulsectl==24.12.0
pyudev==0.24.3
pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
securetar==2025.2.1
sentry-sdk==2.39.0
sentry-sdk==2.45.0
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.44.3
dbus-fast==2.45.1
zlib-fast==0.2.1

View File

@@ -1,16 +1,16 @@
astroid==3.3.11
coverage==7.10.7
astroid==4.0.2
coverage==7.12.0
mypy==1.18.2
pre-commit==4.3.0
pylint==3.3.8
pre-commit==4.4.0
pylint==4.0.3
pytest-aiohttp==1.1.0
pytest-asyncio==0.25.2
pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==8.4.2
ruff==0.13.2
time-machine==2.19.0
types-docker==7.1.0.20250916
pytest==9.0.1
ruff==0.14.5
time-machine==3.0.0
types-docker==7.1.0.20251009
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.5.0

View File

@@ -66,13 +66,21 @@ from ..docker.const import ContainerState
from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
AddonConfigurationError,
AddonBackupMetadataInvalidError,
AddonConfigurationInvalidError,
AddonNotRunningError,
AddonNotSupportedError,
AddonNotSupportedWriteStdinError,
AddonPrePostBackupCommandReturnedError,
AddonsError,
AddonsJobError,
AddonUnknownError,
BackupRestoreUnknownError,
ConfigurationFileError,
DockerBuildError,
DockerError,
HostAppArmorError,
StoreAddonNotFoundError,
)
from ..hardware.data import Device
from ..homeassistant.const import WSEvent
@@ -226,6 +234,7 @@ class Addon(AddonModel):
)
await self._check_ingress_port()
default_image = self._image(self.data)
try:
await self.instance.attach(version=self.version)
@@ -234,7 +243,7 @@ class Addon(AddonModel):
await self.instance.check_image(self.version, default_image, self.arch)
except DockerError:
_LOGGER.info("No %s addon Docker image %s found", self.slug, self.image)
with suppress(DockerError):
with suppress(DockerError, AddonNotSupportedError):
await self.instance.install(self.version, default_image, arch=self.arch)
self.persist[ATTR_IMAGE] = default_image
@@ -717,18 +726,16 @@ class Addon(AddonModel):
options = self.schema.validate(self.options)
await self.sys_run_in_executor(write_json_file, self.path_options, options)
except vol.Invalid as ex:
_LOGGER.error(
"Add-on %s has invalid options: %s",
self.slug,
humanize_error(self.options, ex),
)
except ConfigurationFileError:
raise AddonConfigurationInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(self.options, ex),
) from None
except ConfigurationFileError as err:
_LOGGER.error("Add-on %s can't write options", self.slug)
else:
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
return
raise AddonUnknownError(addon=self.slug) from err
raise AddonConfigurationError()
_LOGGER.debug("Add-on %s write options: %s", self.slug, options)
@Job(
name="addon_unload",
@@ -771,10 +778,9 @@ class Addon(AddonModel):
async def install(self) -> None:
"""Install and setup this addon."""
if not self.addon_store:
raise AddonsError("Missing from store, cannot install!")
raise StoreAddonNotFoundError(addon=self.slug)
await self.sys_addons.data.install(self.addon_store)
await self.load()
def setup_data():
if not self.path_data.is_dir():
@@ -793,9 +799,20 @@ class Addon(AddonModel):
await self.instance.install(
self.latest_version, self.addon_store.image, arch=self.arch
)
except DockerError as err:
except AddonsError:
await self.sys_addons.data.uninstall(self)
raise AddonsError() from err
raise
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
await self.sys_addons.data.uninstall(self)
raise AddonUnknownError(addon=self.slug) from err
# Finish initialization and set up listeners
await self.load()
# Add to addon manager
self.sys_addons.local[self.slug] = self
@@ -816,7 +833,8 @@ class Addon(AddonModel):
try:
await self.instance.remove(remove_image=remove_image)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
self.state = AddonState.UNKNOWN
@@ -881,7 +899,7 @@ class Addon(AddonModel):
if it was running. Else nothing is returned.
"""
if not self.addon_store:
raise AddonsError("Missing from store, cannot update!")
raise StoreAddonNotFoundError(addon=self.slug)
old_image = self.image
# Cache data to prevent races with other updates to global
@@ -889,8 +907,12 @@ class Addon(AddonModel):
try:
await self.instance.update(store.version, store.image, arch=self.arch)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not pull image to update addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
# Stop the addon if running
if (last_state := self.state) in {AddonState.STARTED, AddonState.STARTUP}:
@@ -932,12 +954,23 @@ class Addon(AddonModel):
"""
last_state: AddonState = self.state
try:
# remove docker container but not addon config
# remove docker container and image but not addon config
try:
await self.instance.remove()
await self.instance.install(self.version)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error("Could not remove image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
try:
await self.instance.install(self.version)
except DockerBuildError as err:
_LOGGER.error("Could not build image for addon %s: %s", self.slug, err)
raise AddonUnknownError(addon=self.slug) from err
except DockerError as err:
_LOGGER.error(
"Could not pull image to update addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
if self.addon_store:
await self.sys_addons.data.update(self.addon_store)
@@ -1108,8 +1141,9 @@ class Addon(AddonModel):
try:
await self.instance.run()
except DockerError as err:
_LOGGER.error("Could not start container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
return self.sys_create_task(self._wait_for_startup())
@@ -1124,8 +1158,9 @@ class Addon(AddonModel):
try:
await self.instance.stop()
except DockerError as err:
_LOGGER.error("Could not stop container for addon %s: %s", self.slug, err)
self.state = AddonState.ERROR
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_restart",
@@ -1158,9 +1193,15 @@ class Addon(AddonModel):
async def stats(self) -> DockerStats:
"""Return stats of container."""
try:
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
return await self.instance.stats()
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not get stats of container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_write_stdin",
@@ -1170,14 +1211,18 @@ class Addon(AddonModel):
async def write_stdin(self, data) -> None:
"""Write data to add-on stdin."""
if not self.with_stdin:
raise AddonNotSupportedError(
f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
)
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=self.slug)
try:
return await self.instance.write_stdin(data)
if not await self.is_running():
raise AddonNotRunningError(_LOGGER.warning, addon=self.slug)
await self.instance.write_stdin(data)
except DockerError as err:
raise AddonsError() from err
_LOGGER.error(
"Could not write stdin to container for addon %s: %s", self.slug, err
)
raise AddonUnknownError(addon=self.slug) from err
async def _backup_command(self, command: str) -> None:
try:
@@ -1186,15 +1231,14 @@ class Addon(AddonModel):
_LOGGER.debug(
"Pre-/Post backup command failed with: %s", command_return.output
)
raise AddonsError(
f"Pre-/Post backup command returned error code: {command_return.exit_code}",
_LOGGER.error,
raise AddonPrePostBackupCommandReturnedError(
_LOGGER.error, addon=self.slug, exit_code=command_return.exit_code
)
except DockerError as err:
raise AddonsError(
f"Failed running pre-/post backup command {command}: {str(err)}",
_LOGGER.error,
) from err
_LOGGER.error(
"Failed running pre-/post backup command %s: %s", command, err
)
raise AddonUnknownError(addon=self.slug) from err
@Job(
name="addon_begin_backup",
@@ -1283,15 +1327,14 @@ class Addon(AddonModel):
try:
self.instance.export_image(temp_path.joinpath("image.tar"))
except DockerError as err:
raise AddonsError() from err
raise BackupRestoreUnknownError() from err
# Store local configs/state
try:
write_json_file(temp_path.joinpath("addon.json"), metadata)
except ConfigurationFileError as err:
raise AddonsError(
f"Can't save meta for {self.slug}", _LOGGER.error
) from err
_LOGGER.error("Can't save meta for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
# Store AppArmor Profile
if apparmor_profile:
@@ -1301,9 +1344,7 @@ class Addon(AddonModel):
apparmor_profile, profile_backup_file
)
except HostAppArmorError as err:
raise AddonsError(
"Can't backup AppArmor profile", _LOGGER.error
) from err
raise BackupRestoreUnknownError() from err
# Write tarfile
with tar_file as backup:
@@ -1357,7 +1398,8 @@ class Addon(AddonModel):
)
_LOGGER.info("Finish backup for addon %s", self.slug)
except (tarfile.TarError, OSError, AddFileError) as err:
raise AddonsError(f"Can't write tarfile: {err}", _LOGGER.error) from err
_LOGGER.error("Can't write backup tarfile for addon %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
finally:
if was_running:
wait_for_start = await self.end_backup()
@@ -1399,28 +1441,24 @@ class Addon(AddonModel):
try:
tmp, data = await self.sys_run_in_executor(_extract_tarfile)
except tarfile.TarError as err:
raise AddonsError(
f"Can't read tarfile {tar_file}: {err}", _LOGGER.error
) from err
_LOGGER.error("Can't extract backup tarfile for %s: %s", self.slug, err)
raise BackupRestoreUnknownError() from err
except ConfigurationFileError as err:
raise AddonsError() from err
raise AddonUnknownError(addon=self.slug) from err
try:
# Validate
try:
data = SCHEMA_ADDON_BACKUP(data)
except vol.Invalid as err:
raise AddonsError(
f"Can't validate {self.slug}, backup data: {humanize_error(data, err)}",
raise AddonBackupMetadataInvalidError(
_LOGGER.error,
addon=self.slug,
validation_error=humanize_error(data, err),
) from err
# If available
if not self._available(data[ATTR_SYSTEM]):
raise AddonNotSupportedError(
f"Add-on {self.slug} is not available for this platform",
_LOGGER.error,
)
# Validate availability. Raises if not
self._validate_availability(data[ATTR_SYSTEM], logger=_LOGGER.error)
# Restore local add-on information
_LOGGER.info("Restore config for addon %s", self.slug)
@@ -1479,9 +1517,10 @@ class Addon(AddonModel):
try:
await self.sys_run_in_executor(_restore_data)
except shutil.Error as err:
raise AddonsError(
f"Can't restore origin data: {err}", _LOGGER.error
) from err
_LOGGER.error(
"Can't restore origin data for %s: %s", self.slug, err
)
raise BackupRestoreUnknownError() from err
# Restore AppArmor
profile_file = Path(tmp.name, "apparmor.txt")
@@ -1492,10 +1531,11 @@ class Addon(AddonModel):
)
except HostAppArmorError as err:
_LOGGER.error(
"Can't restore AppArmor profile for add-on %s",
"Can't restore AppArmor profile for add-on %s: %s",
self.slug,
err,
)
raise AddonsError() from err
raise BackupRestoreUnknownError() from err
finally:
# Is add-on loaded
@@ -1510,13 +1550,6 @@ class Addon(AddonModel):
_LOGGER.info("Finished restore for add-on %s", self.slug)
return wait_for_start
def check_trust(self) -> Awaitable[None]:
"""Calculate Addon docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
@Job(
name="addon_restart_after_problem",
throttle_period=WATCHDOG_THROTTLE_PERIOD,
@@ -1559,7 +1592,15 @@ class Addon(AddonModel):
)
break
await asyncio.sleep(WATCHDOG_RETRY_SECONDS)
# Exponential backoff to spread retries over the throttle window
delay = WATCHDOG_RETRY_SECONDS * (1 << max(attempts - 1, 0))
_LOGGER.debug(
"Watchdog will retry addon %s in %s seconds (attempt %s)",
self.name,
delay,
attempts + 1,
)
await asyncio.sleep(delay)
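A quick illustration of the backoff curve above, assuming WATCHDOG_RETRY_SECONDS is 10 (an assumed value for illustration only; the real constant is defined elsewhere in the module):

WATCHDOG_RETRY_SECONDS = 10  # assumed value for illustration

for attempts in range(1, 6):
    delay = WATCHDOG_RETRY_SECONDS * (1 << max(attempts - 1, 0))
    print(f"attempt {attempts}: retry in {delay}s")
# -> 10s, 20s, 40s, 80s, 160s: retries spread across the throttle window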
async def container_state_changed(self, event: DockerContainerStateEvent) -> None:
"""Set addon state from container state."""

View File

@@ -3,6 +3,7 @@
from __future__ import annotations
from functools import cached_property
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any
@@ -19,13 +20,20 @@ from ..const import (
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.interface import MAP_ARCH
from ..exceptions import ConfigurationFileError, HassioArchNotFound
from ..exceptions import (
AddonBuildArchitectureNotSupportedError,
AddonBuildDockerfileMissingError,
ConfigurationFileError,
HassioArchNotFound,
)
from ..utils.common import FileConfiguration, find_one_filetype
from .validate import SCHEMA_BUILD_CONFIG
if TYPE_CHECKING:
from .manager import AnyAddon
_LOGGER: logging.Logger = logging.getLogger(__name__)
class AddonBuild(FileConfiguration, CoreSysAttributes):
"""Handle build options for add-ons."""
@@ -106,7 +114,7 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
return self.addon.path_location.joinpath("Dockerfile")
async def is_valid(self) -> bool:
async def is_valid(self) -> None:
"""Return true if the build env is valid."""
def build_is_valid() -> bool:
@@ -118,9 +126,17 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
)
try:
return await self.sys_run_in_executor(build_is_valid)
if not await self.sys_run_in_executor(build_is_valid):
raise AddonBuildDockerfileMissingError(
_LOGGER.error, addon=self.addon.slug
)
except HassioArchNotFound:
return False
raise AddonBuildArchitectureNotSupportedError(
_LOGGER.error,
addon=self.addon.slug,
addon_arch_list=self.addon.supported_arch,
system_arch_list=self.sys_arch.supported,
) from None
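With is_valid() now raising instead of returning a bool, callers branch on the exception type; a sketch of the consuming side, using only the exception names imported above:

from supervisor.exceptions import (
    AddonBuildArchitectureNotSupportedError,
    AddonBuildDockerfileMissingError,
)

async def ensure_buildable(build_env) -> bool:
    """Sketch: fold the typed errors raised by is_valid() back into a bool."""
    try:
        await build_env.is_valid()
    except AddonBuildArchitectureNotSupportedError:
        return False  # add-on does not list this machine's architecture
    except AddonBuildDockerfileMissingError:
        return False  # no Dockerfile(.arch) found in the add-on folder
    return True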
def get_docker_args(
self, version: AwesomeVersion, image_tag: str

View File

@@ -9,8 +9,6 @@ from typing import Self, Union
from attr import evolve
from supervisor.jobs.const import JobConcurrency
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
@@ -21,6 +19,8 @@ from ..exceptions import (
DockerError,
HassioError,
)
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
@@ -182,6 +182,9 @@ class AddonManager(CoreSysAttributes):
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
concurrency=JobConcurrency.QUEUE,
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def install(
self, slug: str, *, validation_complete: asyncio.Event | None = None
@@ -229,6 +232,13 @@ class AddonManager(CoreSysAttributes):
name="addon_manager_update",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
# We assume for now the docker image pull is 100% of this task for progress
# allocation. But from a user perspective that isn't true. Other steps
# that take time which is not accounted for in progress include:
# partial backup, image cleanup, apparmor update, and addon restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(
self,
@@ -271,7 +281,10 @@ class AddonManager(CoreSysAttributes):
addons=[addon.slug],
)
return await addon.update()
task = await addon.update()
_LOGGER.info("Add-on '%s' successfully updated", slug)
return task
@Job(
name="addon_manager_rebuild",

View File

@@ -72,6 +72,7 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_VERSION,
@@ -102,7 +103,6 @@ from .configuration import FolderMapping
from .const import (
ATTR_BACKUP,
ATTR_BREAKING_VERSIONS,
ATTR_CODENOTARY,
ATTR_PATH,
ATTR_READ_ONLY,
AddonBackupMode,
@@ -462,6 +462,11 @@ class AddonModel(JobGroup, ABC):
"""Return True if the add-on have his own udev."""
return self.data[ATTR_UDEV]
@property
def ulimits(self) -> dict[str, Any]:
"""Return ulimits configuration."""
return self.data[ATTR_ULIMITS]
@property
def with_kernel_modules(self) -> bool:
"""Return True if the add-on access to kernel modules."""
@@ -626,13 +631,8 @@ class AddonModel(JobGroup, ABC):
@property
def signed(self) -> bool:
"""Return True if the image is signed."""
return ATTR_CODENOTARY in self.data
@property
def codenotary(self) -> str | None:
"""Return Signer email address for CAS."""
return self.data.get(ATTR_CODENOTARY)
"""Currently no signing support."""
return False
@property
def breaking_versions(self) -> list[AwesomeVersion]:

View File

@@ -88,6 +88,7 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_USER,
@@ -206,6 +207,12 @@ def _warn_addon_config(config: dict[str, Any]):
name,
)
if ATTR_CODENOTARY in config:
_LOGGER.warning(
"Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
name,
)
return config
@@ -416,13 +423,26 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
AddonBackupMode
),
vol.Optional(ATTR_CODENOTARY): vol.Email(),
vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
vol.Schema({str: SCHEMA_ELEMENT}),
False,
),
vol.Optional(ATTR_IMAGE): docker_image,
vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
{str: vol.Coerce(int)}, # Simple format: {name: limit}
{
str: vol.Any(
vol.Coerce(int), # Simple format for individual entries
vol.Schema(
{ # Detailed format for individual entries
vol.Required("soft"): vol.Coerce(int),
vol.Required("hard"): vol.Coerce(int),
}
),
)
},
),
vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
vol.Coerce(int), vol.Range(min=10, max=300)
),
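Both shapes the ulimits schema above accepts, shown as a self-contained voluptuous check (limit values are hypothetical):

import voluptuous as vol

# Standalone mirror of the ATTR_ULIMITS validator above
ULIMITS = vol.Any(
    {str: vol.Coerce(int)},  # simple format: {name: limit}
    {
        str: vol.Any(
            vol.Coerce(int),
            vol.Schema(
                {vol.Required("soft"): vol.Coerce(int), vol.Required("hard"): vol.Coerce(int)}
            ),
        )
    },
)

print(ULIMITS({"nofile": 65535}))  # simple: soft == hard
print(ULIMITS({"nofile": {"soft": 32768, "hard": 65535}, "core": 0}))  # mixed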

View File

@@ -152,6 +152,7 @@ class RestAPI(CoreSysAttributes):
self._api_host.advanced_logs,
identifier=syslog_identifier,
latest=True,
no_colors=True,
),
),
web.get(
@@ -449,6 +450,7 @@ class RestAPI(CoreSysAttributes):
await async_capture_exception(err)
kwargs.pop("follow", None) # Follow is not supported for Docker logs
kwargs.pop("latest", None) # Latest is not supported for Docker logs
kwargs.pop("no_colors", None) # no_colors not supported for Docker logs
return await api_supervisor.logs(*args, **kwargs)
self.webapp.add_routes(
@@ -460,7 +462,7 @@ class RestAPI(CoreSysAttributes):
),
web.get(
"/supervisor/logs/latest",
partial(get_supervisor_logs, latest=True),
partial(get_supervisor_logs, latest=True, no_colors=True),
),
web.get("/supervisor/logs/boots/{bootid}", get_supervisor_logs),
web.get(
@@ -576,7 +578,7 @@ class RestAPI(CoreSysAttributes):
),
web.get(
"/addons/{addon}/logs/latest",
partial(get_addon_logs, latest=True),
partial(get_addon_logs, latest=True, no_colors=True),
),
web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
web.get(

View File

@@ -100,6 +100,9 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..docker.stats import DockerStats
from ..exceptions import (
AddonBootConfigCannotChangeError,
AddonConfigurationInvalidError,
AddonNotSupportedWriteStdinError,
APIAddonNotInstalled,
APIError,
APIForbidden,
@@ -125,6 +128,7 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
vol.Optional(ATTR_INGRESS_PANEL): vol.Boolean(),
vol.Optional(ATTR_WATCHDOG): vol.Boolean(),
vol.Optional(ATTR_OPTIONS): vol.Maybe(dict),
}
)
@@ -300,19 +304,20 @@ class APIAddons(CoreSysAttributes):
# Update secrets for validation
await self.sys_homeassistant.secrets.reload()
# Extend schema with add-on specific validation
addon_schema = SCHEMA_OPTIONS.extend(
{vol.Optional(ATTR_OPTIONS): vol.Maybe(addon.schema)}
)
# Validate/Process Body
body = await api_validate(addon_schema, request)
body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_OPTIONS in body:
addon.options = body[ATTR_OPTIONS]
try:
addon.options = addon.schema(body[ATTR_OPTIONS])
except vol.Invalid as ex:
raise AddonConfigurationInvalidError(
addon=addon.slug,
validation_error=humanize_error(body[ATTR_OPTIONS], ex),
) from None
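humanize_error renders the voluptuous failure as a readable message for the API response; for illustration, with a made-up schema:

import voluptuous as vol
from voluptuous.humanize import humanize_error

schema = vol.Schema({vol.Required("port"): int})
data = {"port": "not-a-number"}
try:
    schema(data)
except vol.Invalid as ex:
    print(humanize_error(data, ex))
    # -> expected int for dictionary value @ data['port']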
if ATTR_BOOT in body:
if addon.boot_config == AddonBootConfig.MANUAL_ONLY:
raise APIError(
f"Addon {addon.slug} boot option is set to {addon.boot_config} so it cannot be changed"
raise AddonBootConfigCannotChangeError(
addon=addon.slug, boot_config=addon.boot_config.value
)
addon.boot = body[ATTR_BOOT]
if ATTR_AUTO_UPDATE in body:
@@ -476,7 +481,7 @@ class APIAddons(CoreSysAttributes):
"""Write to stdin of add-on."""
addon = self.get_addon_for_request(request)
if not addon.with_stdin:
raise APIError(f"STDIN not supported the {addon.slug} add-on")
raise AddonNotSupportedWriteStdinError(_LOGGER.error, addon=addon.slug)
data = await request.read()
await asyncio.shield(addon.write_stdin(data))

View File

@@ -15,7 +15,7 @@ import voluptuous as vol
from ..addons.addon import Addon
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
from .const import (
ATTR_GROUP_IDS,
ATTR_IS_ACTIVE,
@@ -69,7 +69,9 @@ class APIAuth(CoreSysAttributes):
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
raise AuthInvalidNonStringValueError(
_LOGGER.error, headers=REALM_HEADER
) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)

View File

@@ -206,6 +206,7 @@ class APIHost(CoreSysAttributes):
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs."""
log_formatter = LogFormatter.PLAIN
@@ -280,7 +281,9 @@ class APIHost(CoreSysAttributes):
response = web.StreamResponse()
response.content_type = CONTENT_TYPE_TEXT
headers_returned = False
async for cursor, line in journal_logs_reader(resp, log_formatter):
async for cursor, line in journal_logs_reader(
resp, log_formatter, no_colors
):
try:
if not headers_returned:
if cursor:
@@ -318,9 +321,12 @@ class APIHost(CoreSysAttributes):
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
no_colors: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs. Wrapped as standard API handler."""
return await self.advanced_logs_handler(request, identifier, follow, latest)
return await self.advanced_logs_handler(
request, identifier, follow, latest, no_colors
)
@api_process
async def disk_usage(self, request: web.Request) -> dict:

View File

@@ -253,18 +253,28 @@ class APIIngress(CoreSysAttributes):
skip_auto_headers={hdrs.CONTENT_TYPE},
) as result:
headers = _response_header(result)
# Avoid parsing content_type in simple cases for better performance
if maybe_content_type := result.headers.get(hdrs.CONTENT_TYPE):
content_type = (maybe_content_type.partition(";"))[0].strip()
else:
content_type = result.content_type
# Empty body responses (304, 204, HEAD, etc.) should not be streamed,
# otherwise aiohttp < 3.9.0 may generate an invalid "0\r\n\r\n" chunk
# This also avoids setting content_type for empty responses.
if must_be_empty_body(request.method, result.status):
# If upstream contains content-type, preserve it (e.g. for HEAD requests)
if maybe_content_type:
headers[hdrs.CONTENT_TYPE] = content_type
return web.Response(
headers=headers,
status=result.status,
)
# Simple request
if (
# empty body responses should not be streamed,
# otherwise aiohttp < 3.9.0 may generate
# an invalid "0\r\n\r\n" chunk instead of an empty response.
must_be_empty_body(request.method, result.status)
or hdrs.CONTENT_LENGTH in result.headers
hdrs.CONTENT_LENGTH in result.headers
and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
):
# Return Response

View File

@@ -1,24 +1,20 @@
"""Init file for Supervisor Security RESTful API."""
import asyncio
import logging
from typing import Any
from aiohttp import web
import attr
import voluptuous as vol
from ..const import ATTR_CONTENT_TRUST, ATTR_FORCE_SECURITY, ATTR_PWNED
from ..exceptions import APIGone
from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED
from ..coresys import CoreSysAttributes
from .utils import api_process, api_validate
_LOGGER: logging.Logger = logging.getLogger(__name__)
# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema(
{
vol.Optional(ATTR_PWNED): vol.Boolean(),
vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
}
)
@@ -31,7 +27,6 @@ class APISecurity(CoreSysAttributes):
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return Security information."""
return {
ATTR_CONTENT_TRUST: self.sys_security.content_trust,
ATTR_PWNED: self.sys_security.pwned,
ATTR_FORCE_SECURITY: self.sys_security.force,
}
@@ -43,8 +38,6 @@ class APISecurity(CoreSysAttributes):
if ATTR_PWNED in body:
self.sys_security.pwned = body[ATTR_PWNED]
if ATTR_CONTENT_TRUST in body:
self.sys_security.content_trust = body[ATTR_CONTENT_TRUST]
if ATTR_FORCE_SECURITY in body:
self.sys_security.force = body[ATTR_FORCE_SECURITY]
@@ -54,6 +47,9 @@ class APISecurity(CoreSysAttributes):
@api_process
async def integrity_check(self, request: web.Request) -> dict[str, Any]:
"""Run backend integrity check."""
result = await asyncio.shield(self.sys_security.integrity_check())
return attr.asdict(result)
"""Run backend integrity check.
CodeNotary integrity checking has been removed. This endpoint now returns
an error indicating the feature is gone.
"""
raise APIGone("Integrity check feature has been removed.")

View File

@@ -53,7 +53,7 @@ from ..const import (
REQUEST_FROM,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..exceptions import APIError, APIForbidden, APINotFound, StoreAddonNotFoundError
from ..store.addon import AddonStore
from ..store.repository import Repository
from ..store.validate import validate_repository
@@ -104,7 +104,7 @@ class APIStore(CoreSysAttributes):
addon_slug: str = request.match_info["addon"]
if not (addon := self.sys_addons.get(addon_slug)):
raise APINotFound(f"Addon {addon_slug} does not exist")
raise StoreAddonNotFoundError(addon=addon_slug)
if installed and not addon.is_installed:
raise APIError(f"Addon {addon_slug} is not installed")
@@ -112,7 +112,7 @@ class APIStore(CoreSysAttributes):
if not installed and addon.is_installed:
addon = cast(Addon, addon)
if not addon.addon_store:
raise APINotFound(f"Addon {addon_slug} does not exist in the store")
raise StoreAddonNotFoundError(addon=addon_slug)
return addon.addon_store
return addon

View File

@@ -16,14 +16,12 @@ from ..const import (
ATTR_BLK_READ,
ATTR_BLK_WRITE,
ATTR_CHANNEL,
ATTR_CONTENT_TRUST,
ATTR_COUNTRY,
ATTR_CPU_PERCENT,
ATTR_DEBUG,
ATTR_DEBUG_BLOCK,
ATTR_DETECT_BLOCKING_IO,
ATTR_DIAGNOSTICS,
ATTR_FORCE_SECURITY,
ATTR_HEALTHY,
ATTR_ICON,
ATTR_IP_ADDRESS,
@@ -69,8 +67,6 @@ SCHEMA_OPTIONS = vol.Schema(
vol.Optional(ATTR_DEBUG): vol.Boolean(),
vol.Optional(ATTR_DEBUG_BLOCK): vol.Boolean(),
vol.Optional(ATTR_DIAGNOSTICS): vol.Boolean(),
vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
vol.Optional(ATTR_DETECT_BLOCKING_IO): vol.Coerce(DetectBlockingIO),
vol.Optional(ATTR_COUNTRY): str,

View File

@@ -1,7 +1,7 @@
"""Init file for Supervisor util for RESTful API."""
import asyncio
from collections.abc import Callable
from collections.abc import Callable, Mapping
import json
from typing import Any, cast
@@ -26,7 +26,7 @@ from ..const import (
RESULT_OK,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIError, BackupFileNotFoundError, DockerAPIError, HassioError
from ..exceptions import APIError, DockerAPIError, HassioError
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
@@ -69,10 +69,10 @@ def api_process(method):
"""Return API information."""
try:
answer = await method(api, *args, **kwargs)
except BackupFileNotFoundError as err:
return api_return_error(err, status=404)
except APIError as err:
return api_return_error(err, status=err.status, job_id=err.job_id)
return api_return_error(
err, status=err.status, job_id=err.job_id, headers=err.headers
)
except HassioError as err:
return api_return_error(err)
@@ -143,6 +143,7 @@ def api_return_error(
error_type: str | None = None,
status: int = 400,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> web.Response:
"""Return an API error message."""
@@ -151,14 +152,19 @@ def api_return_error(
if check_exception_chain(error, DockerAPIError):
message = format_message(message)
if not message:
message = "Unknown error, see supervisor"
message = "Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
match error_type:
case const.CONTENT_TYPE_TEXT:
return web.Response(body=message, content_type=error_type, status=status)
return web.Response(
body=message, content_type=error_type, status=status, headers=headers
)
case const.CONTENT_TYPE_BINARY:
return web.Response(
body=message.encode(), content_type=error_type, status=status
body=message.encode(),
content_type=error_type,
status=status,
headers=headers,
)
case _:
result: dict[str, Any] = {
@@ -176,6 +182,7 @@ def api_return_error(
result,
status=status,
dumps=json_dumps,
headers=headers,
)
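End to end, the headers plumbing lets an error type carry response headers into api_return_error; a hedged sketch assuming only what the handler reads off the exception (.status, .job_id, .headers):

from aiohttp import hdrs

class UnauthorizedError(Exception):
    """Hypothetical APIError-style type, for illustration only."""

    status = 401
    job_id = None
    headers = {hdrs.WWW_AUTHENTICATE: 'Basic realm="Home Assistant"'}

# A handler raising UnauthorizedError() yields a 401 response that still
# carries the WWW-Authenticate header, as the auth endpoint above relies on.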

View File

@@ -9,8 +9,10 @@ from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthError,
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthListUsersNoneResponseError,
AuthPasswordResetError,
HomeAssistantAPIError,
HomeAssistantWSError,
@@ -83,10 +85,8 @@ class Auth(FileConfiguration, CoreSysAttributes):
self, addon: Addon, username: str | None, password: str | None
) -> bool:
"""Check username login."""
if password is None:
raise AuthError("None as password is not supported!", _LOGGER.error)
if username is None:
raise AuthError("None as username is not supported!", _LOGGER.error)
if username is None or password is None:
raise AuthInvalidNonStringValueError(_LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
@@ -137,7 +137,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
finally:
self._running.pop(username, None)
raise AuthError()
raise AuthHomeAssistantAPIValidationError()
async def change_password(self, username: str, password: str) -> None:
"""Change user password login."""
@@ -155,7 +155,7 @@ class Auth(FileConfiguration, CoreSysAttributes):
except HomeAssistantAPIError as err:
_LOGGER.error("Can't request password reset on Home Assistant: %s", err)
raise AuthPasswordResetError()
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
"""List users on the Home Assistant instance."""
@@ -166,15 +166,12 @@ class Auth(FileConfiguration, CoreSysAttributes):
{ATTR_TYPE: "config/auth/list"}
)
except HomeAssistantWSError as err:
raise AuthListUsersError(
f"Can't request listing users on Home Assistant: {err}", _LOGGER.error
) from err
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
if users is not None:
return users
raise AuthListUsersError(
"Can't request listing users on Home Assistant!", _LOGGER.error
)
raise AuthListUsersNoneResponseError(_LOGGER.error)
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:

View File

@@ -628,9 +628,6 @@ class Backup(JobGroup):
if start_task := await self._addon_save(addon):
start_tasks.append(start_task)
except BackupError as err:
err = BackupError(
f"Can't backup add-on {addon.slug}: {str(err)}", _LOGGER.error
)
self.sys_jobs.current.capture_error(err)
return start_tasks

View File

@@ -105,7 +105,6 @@ async def initialize_coresys() -> CoreSys:
if coresys.dev:
coresys.updater.channel = UpdateChannel.DEV
coresys.security.content_trust = False
# Convert datetime
logging.Formatter.converter = lambda *args: coresys.now().timetuple()

View File

@@ -2,6 +2,7 @@
from __future__ import annotations
from asyncio import Task
from collections.abc import Callable, Coroutine
import logging
from typing import Any
@@ -38,11 +39,13 @@ class Bus(CoreSysAttributes):
self._listeners.setdefault(event, []).append(listener)
return listener
def fire_event(self, event: BusEvent, reference: Any) -> None:
def fire_event(self, event: BusEvent, reference: Any) -> list[Task]:
"""Fire an event to the bus."""
_LOGGER.debug("Fire event '%s' with '%s'", event, reference)
tasks: list[Task] = []
for listener in self._listeners.get(event, []):
self.sys_create_task(listener.callback(reference))
tasks.append(self.sys_create_task(listener.callback(reference)))
return tasks
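Because fire_event now returns the created tasks, callers can await delivery instead of fire-and-forget; a toy analogue of the pattern:

import asyncio

class MiniBus:
    """Toy stand-in for Bus to show the returned-tasks pattern."""

    def __init__(self) -> None:
        self._listeners = []

    def listen(self, callback) -> None:
        self._listeners.append(callback)

    def fire_event(self, reference) -> list[asyncio.Task]:
        return [asyncio.create_task(cb(reference)) for cb in self._listeners]

async def main() -> None:
    bus = MiniBus()
    bus.listen(lambda ref: asyncio.sleep(0))
    # Awaiting the tasks guarantees every listener ran before continuing,
    # which is what pull_image does with DOCKER_IMAGE_PULL_UPDATE below.
    await asyncio.gather(*bus.fire_event("pull-entry"))

asyncio.run(main())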
def remove_listener(self, listener: EventListener) -> None:
"""Unregister an listener."""

View File

@@ -348,6 +348,7 @@ ATTR_TRANSLATIONS = "translations"
ATTR_TYPE = "type"
ATTR_UART = "uart"
ATTR_UDEV = "udev"
ATTR_ULIMITS = "ulimits"
ATTR_UNHEALTHY = "unhealthy"
ATTR_UNSAVED = "unsaved"
ATTR_UNSUPPORTED = "unsupported"

View File

@@ -9,6 +9,7 @@ from datetime import UTC, datetime, tzinfo
from functools import partial
import logging
import os
import time
from types import MappingProxyType
from typing import TYPE_CHECKING, Any, Self, TypeVar
@@ -655,8 +656,14 @@ class CoreSys:
if kwargs:
funct = partial(funct, **kwargs)
# Convert datetime to event loop time base
# If datetime is in the past, delay will be negative and call_at will
# schedule the call as soon as possible.
delay = when.timestamp() - time.time()
loop_time = self.loop.time() + delay
return self.loop.call_at(
when.timestamp(), funct, *args, context=self._create_context()
loop_time, funct, *args, context=self._create_context()
)
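The rebasing above in isolation: loop.call_at() takes the loop's monotonic clock, so the wall-clock delta is added onto loop.time(); a runnable sketch:

import asyncio
import time
from datetime import UTC, datetime, timedelta

async def main() -> None:
    loop = asyncio.get_running_loop()
    when = datetime.now(UTC) + timedelta(seconds=1)

    delay = when.timestamp() - time.time()  # seconds from now (negative if past)
    loop_time = loop.time() + delay         # same instant on the loop's clock

    done = loop.create_future()
    loop.call_at(loop_time, done.set_result, None)
    await done  # fires ~1s later instead of decades in the future

asyncio.run(main())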

View File

@@ -2,6 +2,7 @@
from __future__ import annotations
from collections.abc import Awaitable
from contextlib import suppress
from ipaddress import IPv4Address
import logging
@@ -9,6 +10,7 @@ import os
from pathlib import Path
from typing import TYPE_CHECKING, cast
import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
@@ -32,6 +34,7 @@ from ..coresys import CoreSys
from ..exceptions import (
CoreDNSError,
DBusError,
DockerBuildError,
DockerError,
DockerJobError,
DockerNotFound,
@@ -318,7 +321,18 @@ class DockerAddon(DockerInterface):
mem = 128 * 1024 * 1024
limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))
# Return None if no capabilities are present
# Add configurable ulimits from add-on config
for name, config in self.addon.ulimits.items():
if isinstance(config, int):
# Simple format: both soft and hard limits are the same
limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
elif isinstance(config, dict):
# Detailed format: both soft and hard limits are mandatory
soft = config["soft"]
hard = config["hard"]
limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))
# Return None if no ulimits are present
if limits:
return limits
return None
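Fed through the loop above, both config shapes collapse into the same docker.types.Ulimit list (example values are assumed):

import docker.types

config = {"nofile": 65535, "memlock": {"soft": 0, "hard": 0}}

limits = []
for name, entry in config.items():
    if isinstance(entry, int):
        # Simple format: one number serves as both soft and hard limit
        limits.append(docker.types.Ulimit(name=name, soft=entry, hard=entry))
    else:
        # Detailed format: independent soft and hard limits
        limits.append(docker.types.Ulimit(name=name, soft=entry["soft"], hard=entry["hard"]))
print(limits)  # two Ulimit entries, ready for the container's host config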
@@ -668,9 +682,8 @@ class DockerAddon(DockerInterface):
async def _build(self, version: AwesomeVersion, image: str | None = None) -> None:
"""Build a Docker container."""
build_env = await AddonBuild(self.coresys, self.addon).load_config()
if not await build_env.is_valid():
_LOGGER.error("Invalid build environment, can't build this add-on!")
raise DockerError()
# Check if the build environment is valid, raises if not
await build_env.is_valid()
_LOGGER.info("Starting build for %s:%s", self.image, version)
@@ -706,21 +719,24 @@ class DockerAddon(DockerInterface):
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
addon_image = self.sys_docker.images.get(addon_image_tag)
return addon_image, logs
return addon_image_tag, logs
try:
docker_image, log = await self.sys_run_in_executor(build_image)
addon_image_tag, log = await self.sys_run_in_executor(build_image)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, log)
# Update meta data
self._meta = docker_image.attrs
self._meta = await self.sys_docker.images.inspect(addon_image_tag)
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
raise DockerError() from err
except (
docker.errors.DockerException,
requests.RequestException,
aiodocker.DockerError,
) as err:
raise DockerBuildError(
f"Can't build {self.image}:{version}: {err!s}", _LOGGER.error
) from err
_LOGGER.info("Build %s:%s done", self.image, version)
@@ -740,11 +756,8 @@ class DockerAddon(DockerInterface):
)
async def import_image(self, tar_file: Path) -> None:
"""Import a tar file as image."""
docker_image = await self.sys_run_in_executor(
self.sys_docker.import_image, tar_file
)
if docker_image:
self._meta = docker_image.attrs
if docker_image := await self.sys_docker.import_image(tar_file):
self._meta = docker_image
_LOGGER.info("Importing image %s and version %s", tar_file, self.version)
with suppress(DockerError):
@@ -758,17 +771,21 @@ class DockerAddon(DockerInterface):
version: AwesomeVersion | None = None,
) -> None:
"""Check if old version exists and cleanup other versions of image not in use."""
await self.sys_run_in_executor(
self.sys_docker.cleanup_old_images,
(image := image or self.image),
version or self.version,
if not (use_image := image or self.image):
raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
if not (use_version := version or self.version):
raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
await self.sys_docker.cleanup_old_images(
use_image,
use_version,
{old_image} if old_image else None,
keep_images={
f"{addon.image}:{addon.version}"
for addon in self.sys_addons.installed
if addon.slug != self.addon.slug
and addon.image
and addon.image in {old_image, image}
and addon.image in {old_image, use_image}
},
)
@@ -777,12 +794,9 @@ class DockerAddon(DockerInterface):
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def write_stdin(self, data: bytes) -> None:
def write_stdin(self, data: bytes) -> Awaitable[None]:
"""Write to add-on stdin."""
if not await self.is_running():
raise DockerError()
await self.sys_run_in_executor(self._write_stdin, data)
return self.sys_run_in_executor(self._write_stdin, data)
def _write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.
@@ -835,16 +849,6 @@ class DockerAddon(DockerInterface):
):
self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)
async def _validate_trust(self, image_id: str) -> None:
"""Validate trust of content."""
if not self.addon.signed:
return
checksum = image_id.partition(":")[2]
return await self.sys_security.verify_content(
cast(str, self.addon.codenotary), checksum
)
@Job(
name="docker_addon_hardware_events",
conditions=[JobCondition.OS_AGENT],

View File

@@ -1,11 +1,10 @@
"""Init file for Supervisor Docker object."""
from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import re
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from awesomeversion import AwesomeVersion
from docker.types import Mount
from ..const import LABEL_MACHINE
@@ -236,21 +235,10 @@ class DockerHomeAssistant(DockerInterface):
environment={ENV_TIME: self.sys_timezone},
)
def is_initialize(self) -> Awaitable[bool]:
async def is_initialize(self) -> bool:
"""Return True if Docker container exists."""
return self.sys_run_in_executor(
self.sys_docker.container_is_initialized,
self.name,
self.image,
self.sys_homeassistant.version,
if not self.sys_homeassistant.version:
return False
return await self.sys_docker.container_is_initialized(
self.name, self.image, self.sys_homeassistant.version
)
async def _validate_trust(self, image_id: str) -> None:
"""Validate trust of content."""
try:
if self.version in {None, LANDINGPAGE} or self.version < _VERIFY_TRUST:
return
except AwesomeVersionCompareException:
return
await super()._validate_trust(image_id)

View File

@@ -6,17 +6,18 @@ from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Awaitable
from contextlib import suppress
from http import HTTPStatus
import logging
import re
from time import time
from typing import Any, cast
from uuid import uuid4
import aiodocker
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
from docker.models.images import Image
import requests
from ..bus import EventListener
@@ -31,15 +32,13 @@ from ..const import (
)
from ..coresys import CoreSys
from ..exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
DockerAPIError,
DockerError,
DockerHubRateLimitExceeded,
DockerJobError,
DockerLogOutOfOrder,
DockerNotFound,
DockerRequestError,
DockerTrustError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
@@ -218,12 +217,14 @@ class DockerInterface(JobGroup, ABC):
if not credentials:
return
await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)
await self.sys_run_in_executor(self.sys_docker.dockerpy.login, **credentials)
def _process_pull_image_log(self, job_id: str, reference: PullLogEntry) -> None:
def _process_pull_image_log( # noqa: C901
self, install_job_id: str, reference: PullLogEntry
) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
reference.job_id != job_id
reference.job_id != install_job_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
@@ -237,21 +238,22 @@ class DockerInterface(JobGroup, ABC):
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
parent_id=job_id,
parent_id=install_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
if j.parent_id == job_id and j.reference == reference.id:
if j.parent_id == install_job_id and j.reference == reference.id:
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {job_id} but could not find a matching job, skipping",
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
@@ -303,9 +305,13 @@ class DockerInterface(JobGroup, ABC):
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on later stages as the total bytes may be useful after that stage
# Enforce range to prevent float drift error
progress = max(0, min(progress, 100))
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
and reference.progress_detail.current is not None
and reference.progress_detail.total is not None
):
job.update(
progress=progress,
@@ -316,19 +322,69 @@ class DockerInterface(JobGroup, ABC):
},
)
else:
# If we reach DOWNLOAD_COMPLETE without ever having set extra (small layers that skip
# the downloading phase), set a minimal extra so aggregate progress calculation can proceed
extra = job.extra
if stage == PullImageLayerStage.DOWNLOAD_COMPLETE and not job.extra:
extra = {"current": 1, "total": 1}
job.update(
progress=progress,
stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None
if stage == PullImageLayerStage.RETRYING_DOWNLOAD
else job.extra,
extra=None if stage == PullImageLayerStage.RETRYING_DOWNLOAD else extra,
)
# Once we have received a progress update for every child job, start to set status of the main one
install_job = self.sys_jobs.get_job(install_job_id)
layer_jobs = [
job
for job in self.sys_jobs.jobs
if job.parent_id == install_job.uuid
and job.name == "Pulling container image layer"
]
# First set the total bytes to be downloaded/extracted on the main job
if not install_job.extra:
total = 0
for job in layer_jobs:
if not job.extra:
return
total += job.extra["total"]
install_job.extra = {"total": total}
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra:
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING:
stage = PullImageLayerStage.DOWNLOADING
elif (
stage == PullImageLayerStage.PULL_COMPLETE
and job_stage < PullImageLayerStage.PULL_COMPLETE
):
stage = PullImageLayerStage.EXTRACTING
# Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100
# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
if stage != install_job.stage or progress >= install_job.progress + 1:
install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
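The weighting above reduced to its arithmetic: each layer contributes its own progress scaled by its share of the total bytes; a toy example with assumed layer sizes:

layers = [
    {"progress": 50.0, "total": 80_000_000},   # big layer, halfway done
    {"progress": 100.0, "total": 20_000_000},  # small layer, finished
]
total = sum(layer["total"] for layer in layers)
progress = sum(layer["progress"] * (layer["total"] / total) for layer in layers)
print(f"{progress:.1f}%")  # 40 + 20 -> 60.0% overall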
@Job(
name="docker_interface_install",
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
internal=True,
)
async def install(
self,
@@ -351,11 +407,11 @@ class DockerInterface(JobGroup, ABC):
# Try login if we have defined credentials
await self._docker_login(image)
job_id = self.sys_jobs.current.uuid
curr_job_id = self.sys_jobs.current.uuid
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(job_id, reference)
self._process_pull_image_log(curr_job_id, reference)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
@@ -366,105 +422,94 @@ class DockerInterface(JobGroup, ABC):
)
# Pull new image
docker_image = await self.sys_run_in_executor(
self.sys_docker.pull_image,
docker_image = await self.sys_docker.pull_image(
self.sys_jobs.current.uuid,
image,
str(version),
platform=MAP_ARCH[image_arch],
)
# Validate content
try:
await self._validate_trust(cast(str, docker_image.id))
except CodeNotaryError:
with suppress(docker.errors.DockerException):
await self.sys_run_in_executor(
self.sys_docker.images.remove,
image=f"{image}:{version!s}",
force=True,
)
raise
# Tag latest
if latest:
_LOGGER.info(
"Tagging image %s with version %s as latest", image, version
)
await self.sys_run_in_executor(docker_image.tag, image, tag="latest")
await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except docker.errors.APIError as err:
if err.status_code == 429:
if err.status_code == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
_LOGGER.info(
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see https://www.home-assistant.io/more-info/dockerhub-rate-limit"
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (docker.errors.DockerException, requests.RequestException) as err:
except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (
docker.errors.DockerException,
requests.RequestException,
) as err:
await async_capture_exception(err)
raise DockerError(
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
) from err
except CodeNotaryUntrusted as err:
raise DockerTrustError(
f"Pulled image {image}:{version!s} failed on content-trust verification!",
_LOGGER.critical,
) from err
except CodeNotaryError as err:
raise DockerTrustError(
f"Error happened on Content-Trust check for {image}:{version!s}: {err!s}",
_LOGGER.error,
) from err
finally:
if listener:
self.sys_bus.remove_listener(listener)
self._meta = docker_image.attrs
self._meta = docker_image
async def exists(self) -> bool:
"""Return True if Docker image exists in local repository."""
with suppress(docker.errors.DockerException, requests.RequestException):
await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
with suppress(aiodocker.DockerError, requests.RequestException):
await self.sys_docker.images.inspect(f"{self.image}:{self.version!s}")
return True
return False
async def is_running(self) -> bool:
"""Return True if Docker is running."""
async def _get_container(self) -> Container | None:
"""Get docker container, returns None if not found."""
try:
docker_container = await self.sys_run_in_executor(
return await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return False
return None
except docker.errors.DockerException as err:
raise DockerAPIError() from err
raise DockerAPIError(
f"Docker API error occurred while getting container information: {err!s}"
) from err
except requests.RequestException as err:
raise DockerRequestError() from err
raise DockerRequestError(
f"Error communicating with Docker to get container information: {err!s}"
) from err
return docker_container.status == "running"
async def is_running(self) -> bool:
"""Return True if Docker is running."""
if docker_container := await self._get_container():
return docker_container.status == "running"
return False
async def current_state(self) -> ContainerState:
"""Return current state of container."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
return ContainerState.UNKNOWN
except docker.errors.DockerException as err:
raise DockerAPIError() from err
except requests.RequestException as err:
raise DockerRequestError() from err
return _container_state_from_model(docker_container)
if docker_container := await self._get_container():
return _container_state_from_model(docker_container)
return ContainerState.UNKNOWN
@Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
async def attach(
@@ -491,15 +536,17 @@ class DockerInterface(JobGroup, ABC):
),
)
with suppress(docker.errors.DockerException, requests.RequestException):
with suppress(aiodocker.DockerError, requests.RequestException):
if not self._meta and self.image:
self._meta = self.sys_docker.images.get(
self._meta = await self.sys_docker.images.inspect(
f"{self.image}:{version!s}"
).attrs
)
# Successful?
if not self._meta:
raise DockerError()
raise DockerError(
f"Could not get metadata on container or image for {self.name}"
)
_LOGGER.info("Attaching to %s with version %s", self.image, self.version)
@Job(
@@ -563,14 +610,17 @@ class DockerInterface(JobGroup, ABC):
)
async def remove(self, *, remove_image: bool = True) -> None:
"""Remove Docker images."""
if not self.image or not self.version:
raise DockerError(
"Cannot determine image and/or version from metadata!", _LOGGER.error
)
# Cleanup container
with suppress(DockerError):
await self.stop()
if remove_image:
await self.sys_run_in_executor(
self.sys_docker.remove_image, self.image, self.version
)
await self.sys_docker.remove_image(self.image, self.version)
self._meta = None
@@ -592,18 +642,16 @@ class DockerInterface(JobGroup, ABC):
image_name = f"{expected_image}:{version!s}"
if self.image == expected_image:
try:
image: Image = await self.sys_run_in_executor(
self.sys_docker.images.get, image_name
)
except (docker.errors.DockerException, requests.RequestException) as err:
image = await self.sys_docker.images.inspect(image_name)
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Could not get {image_name} for check due to: {err!s}",
_LOGGER.error,
) from err
image_arch = f"{image.attrs['Os']}/{image.attrs['Architecture']}"
if "Variant" in image.attrs:
image_arch = f"{image_arch}/{image.attrs['Variant']}"
image_arch = f"{image['Os']}/{image['Architecture']}"
if "Variant" in image:
image_arch = f"{image_arch}/{image['Variant']}"
# If we have an image and its the right arch, all set
# It seems that newer Docker version return a variant for arm64 images.
@@ -629,7 +677,10 @@ class DockerInterface(JobGroup, ABC):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False
self,
version: AwesomeVersion,
image: str | None = None,
latest: bool = False,
) -> None:
"""Update a Docker image."""
image = image or self.image
@@ -662,11 +713,13 @@ class DockerInterface(JobGroup, ABC):
version: AwesomeVersion | None = None,
) -> None:
"""Check if old version exists and cleanup."""
await self.sys_run_in_executor(
self.sys_docker.cleanup_old_images,
image or self.image,
version or self.version,
{old_image} if old_image else None,
if not (use_image := image or self.image):
raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
if not (use_version := version or self.version):
raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
await self.sys_docker.cleanup_old_images(
use_image, use_version, {old_image} if old_image else None
)
@Job(
@@ -698,14 +751,8 @@ class DockerInterface(JobGroup, ABC):
async def is_failed(self) -> bool:
"""Return True if Docker is failing state."""
try:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
except docker.errors.NotFound:
if not (docker_container := await self._get_container()):
return False
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# container is not running
if docker_container.status != "exited":
@@ -718,10 +765,10 @@ class DockerInterface(JobGroup, ABC):
"""Return latest version of local image."""
available_version: list[AwesomeVersion] = []
try:
for image in await self.sys_run_in_executor(
self.sys_docker.images.list, self.image
for image in await self.sys_docker.images.list(
filters=f'{{"reference": ["{self.image}"]}}'
):
for tag in image.tags:
for tag in image["RepoTags"]:
version = AwesomeVersion(tag.partition(":")[2])
if version.strategy == AwesomeVersionStrategy.UNKNOWN:
continue
@@ -730,7 +777,7 @@ class DockerInterface(JobGroup, ABC):
if not available_version:
raise ValueError()
except (docker.errors.DockerException, ValueError) as err:
except (aiodocker.DockerError, ValueError) as err:
raise DockerNotFound(
f"No version found for {self.image}", _LOGGER.info
) from err
@@ -755,24 +802,3 @@ class DockerInterface(JobGroup, ABC):
return self.sys_run_in_executor(
self.sys_docker.container_run_inside, self.name, command
)
async def _validate_trust(self, image_id: str) -> None:
"""Validate trust of content."""
checksum = image_id.partition(":")[2]
return await self.sys_security.verify_own_content(checksum)
@Job(
name="docker_interface_check_trust",
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def check_trust(self) -> None:
"""Check trust of exists Docker image."""
try:
image = await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
except (docker.errors.DockerException, requests.RequestException):
return
await self._validate_trust(cast(str, image.id))

View File

@@ -6,20 +6,24 @@ import asyncio
from contextlib import suppress
from dataclasses import dataclass
from functools import partial
from http import HTTPStatus
from ipaddress import IPv4Address
import json
import logging
import os
from pathlib import Path
import re
from typing import Any, Final, Self, cast
import aiodocker
from aiodocker.images import DockerImages
from aiohttp import ClientSession, ClientTimeout, UnixConnector
import attr
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors
from docker.api.client import APIClient
from docker.client import DockerClient
from docker.errors import DockerException, ImageNotFound, NotFound
from docker.models.containers import Container, ContainerCollection
from docker.models.images import Image, ImageCollection
from docker.models.networks import Network
from docker.types.daemon import CancellableStream
import requests
@@ -53,6 +57,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0")
DOCKER_NETWORK_HOST: Final = "host"
RE_IMPORT_IMAGE_STREAM = re.compile(r"(^Loaded image ID: |^Loaded image: )(.+)$")
@attr.s(frozen=True)
@@ -204,7 +209,15 @@ class DockerAPI(CoreSysAttributes):
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys = coresys
self._docker: DockerClient | None = None
# We keep both until we can fully refactor to aiodocker
self._dockerpy: DockerClient | None = None
self.docker: aiodocker.Docker = aiodocker.Docker(
url="unix://localhost", # dummy hostname for URL composition
connector=(connector := UnixConnector(SOCKET_DOCKER.as_posix())),
session=ClientSession(connector=connector, timeout=ClientTimeout(900)),
api_version="auto",
)
self._network: DockerNetwork | None = None
self._info: DockerInfo | None = None
self.config: DockerConfig = DockerConfig()
@@ -212,28 +225,28 @@ class DockerAPI(CoreSysAttributes):
async def post_init(self) -> Self:
"""Post init actions that must be done in event loop."""
self._docker = await asyncio.get_running_loop().run_in_executor(
self._dockerpy = await asyncio.get_running_loop().run_in_executor(
None,
partial(
DockerClient,
base_url=f"unix:/{str(SOCKET_DOCKER)}",
base_url=f"unix:/{SOCKET_DOCKER.as_posix()}",
version="auto",
timeout=900,
),
)
self._info = DockerInfo.new(self.docker.info())
self._info = DockerInfo.new(self.dockerpy.info())
await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init(
self._network = await DockerNetwork(self.dockerpy).post_init(
self.config.enable_ipv6, self.config.mtu
)
return self
@property
def docker(self) -> DockerClient:
def dockerpy(self) -> DockerClient:
"""Get docker API client."""
if not self._docker:
if not self._dockerpy:
raise RuntimeError("Docker API Client not initialized!")
return self._docker
return self._dockerpy
@property
def network(self) -> DockerNetwork:
@@ -243,19 +256,19 @@ class DockerAPI(CoreSysAttributes):
return self._network
@property
def images(self) -> ImageCollection:
def images(self) -> DockerImages:
"""Return API images."""
return self.docker.images
@property
def containers(self) -> ContainerCollection:
"""Return API containers."""
return self.docker.containers
return self.dockerpy.containers
@property
def api(self) -> APIClient:
"""Return API containers."""
return self.docker.api
return self.dockerpy.api
@property
def info(self) -> DockerInfo:
@@ -267,7 +280,7 @@ class DockerAPI(CoreSysAttributes):
@property
def events(self) -> CancellableStream:
"""Return docker event stream."""
return self.docker.events(decode=True)
return self.dockerpy.events(decode=True)
@property
def monitor(self) -> DockerMonitor:
@@ -383,7 +396,7 @@ class DockerAPI(CoreSysAttributes):
with suppress(DockerError):
self.network.detach_default_bridge(container)
else:
host_network: Network = self.docker.networks.get(DOCKER_NETWORK_HOST)
host_network: Network = self.dockerpy.networks.get(DOCKER_NETWORK_HOST)
# Check if container is registered on host
# https://github.com/moby/moby/issues/23302
@@ -410,35 +423,32 @@ class DockerAPI(CoreSysAttributes):
return container
def pull_image(
async def pull_image(
self,
job_id: str,
repository: str,
tag: str = "latest",
platform: str | None = None,
) -> Image:
) -> dict[str, Any]:
"""Pull the specified image and return it.
This mimics the high-level images.pull API but provides better error handling by raising
on a Docker error during the pull, whereas the high-level API ignores all pull errors and
raises only if the subsequent get fails. Additionally it fires progress reports for the pull
on the bus so listeners can use that to update status for users.
Must be run in executor.
"""
pull_log = self.docker.api.pull(
repository, tag=tag, platform=platform, stream=True, decode=True
)
for e in pull_log:
async for e in self.images.pull(
repository, tag=tag, platform=platform, stream=True
):
entry = PullLogEntry.from_pull_log_dict(job_id, e)
if entry.error:
raise entry.exception
self.sys_loop.call_soon_threadsafe(
self.sys_bus.fire_event, BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry
await asyncio.gather(
*self.sys_bus.fire_event(BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry)
)
sep = "@" if tag.startswith("sha256:") else ":"
return self.images.get(f"{repository}{sep}{tag}")
return await self.images.inspect(f"{repository}{sep}{tag}")
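Stripped of job bookkeeping and bus events, the pull above follows aiodocker's plain streaming pattern (a sketch; repository and tag are placeholders):

import aiodocker

async def pull_with_progress(repository: str, tag: str) -> dict:
    docker = aiodocker.Docker()  # connects to the default Docker socket
    try:
        # stream=True yields one status dict per pull event
        async for entry in docker.images.pull(repository, tag=tag, stream=True):
            if "error" in entry:
                raise RuntimeError(entry["error"])
            print(entry.get("status"), entry.get("progressDetail", {}))
        # Inspect the freshly pulled image for its metadata
        return await docker.images.inspect(f"{repository}:{tag}")
    finally:
        await docker.close()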
def run_command(
self,
@@ -459,7 +469,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
container = None
try:
container = self.docker.containers.run(
container = self.dockerpy.containers.run(
image_with_tag,
command=command,
detach=True,
@@ -487,35 +497,35 @@ class DockerAPI(CoreSysAttributes):
"""Repair local docker overlayfs2 issues."""
_LOGGER.info("Prune stale containers")
try:
output = self.docker.api.prune_containers()
output = self.dockerpy.api.prune_containers()
_LOGGER.debug("Containers prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = self.docker.api.prune_images(filters={"dangling": False})
output = self.dockerpy.api.prune_images(filters={"dangling": False})
_LOGGER.debug("Images prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = self.docker.api.prune_builds()
output = self.dockerpy.api.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = self.docker.api.prune_builds()
output = self.dockerpy.api.prune_volumes()
_LOGGER.debug("Volumes prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale networks")
try:
output = self.docker.api.prune_networks()
output = self.dockerpy.api.prune_networks()
_LOGGER.debug("Networks prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for networks prune: %s", err)
@@ -537,11 +547,11 @@ class DockerAPI(CoreSysAttributes):
Fix: https://github.com/moby/moby/issues/23302
"""
network: Network = self.docker.networks.get(network_name)
network: Network = self.dockerpy.networks.get(network_name)
for cid, data in network.attrs.get("Containers", {}).items():
try:
self.docker.containers.get(cid)
self.dockerpy.containers.get(cid)
continue
except docker_errors.NotFound:
_LOGGER.debug(
@@ -556,22 +566,32 @@ class DockerAPI(CoreSysAttributes):
with suppress(docker_errors.DockerException, requests.RequestException):
network.disconnect(data.get("Name", cid), force=True)
def container_is_initialized(
async def container_is_initialized(
self, name: str, image: str, version: AwesomeVersion
) -> bool:
"""Return True if docker container exists in good state and is built from expected image."""
try:
docker_container = self.containers.get(name)
docker_image = self.images.get(f"{image}:{version}")
except NotFound:
docker_container = await self.sys_run_in_executor(self.containers.get, name)
docker_image = await self.images.inspect(f"{image}:{version}")
except docker_errors.NotFound:
return False
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return False
raise DockerError(
f"Could not get container {name} or image {image}:{version} to check state: {err!s}",
_LOGGER.error,
) from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get container {name} or image {image}:{version} to check state: {err!s}",
_LOGGER.error,
) from err
# Check the image is correct and state is good
return (
docker_container.image is not None
and docker_container.image.id == docker_image.id
and docker_container.image.id == docker_image["Id"]
and docker_container.status in ("exited", "running", "created")
)
@@ -581,18 +601,22 @@ class DockerAPI(CoreSysAttributes):
"""Stop/remove Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
# Generally suppressed so we don't log this
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get container {name} for stopping: {err!s}",
_LOGGER.error,
) from err
if docker_container.status == "running":
_LOGGER.info("Stopping %s application", name)
with suppress(DockerException, requests.RequestException):
with suppress(docker_errors.DockerException, requests.RequestException):
docker_container.stop(timeout=timeout)
if remove_container:
with suppress(DockerException, requests.RequestException):
with suppress(docker_errors.DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True, v=True)
@@ -604,11 +628,11 @@ class DockerAPI(CoreSysAttributes):
"""Start Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound(
f"{name} not found for starting up", _LOGGER.error
) from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get {name} for starting up", _LOGGER.error
) from err
@@ -616,36 +640,44 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Starting %s", name)
try:
docker_container.start()
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't start {name}: {err}", _LOGGER.error) from err
def restart_container(self, name: str, timeout: int) -> None:
"""Restart docker container."""
try:
container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except docker_errors.NotFound:
raise DockerNotFound(
f"Container {name} not found for restarting", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get container {name} for restarting: {err!s}", _LOGGER.error
) from err
_LOGGER.info("Restarting %s", name)
try:
container.restart(timeout=timeout)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't restart {name}: {err}", _LOGGER.warning) from err
def container_logs(self, name: str, tail: int = 100) -> bytes:
"""Return Docker logs of container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except docker_errors.NotFound:
raise DockerNotFound(
f"Container {name} not found for logs", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get container {name} for logs: {err!s}", _LOGGER.error
) from err
try:
return docker_container.logs(tail=tail, stdout=True, stderr=True)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't grep logs from {name}: {err}", _LOGGER.warning
) from err
@@ -654,10 +686,14 @@ class DockerAPI(CoreSysAttributes):
"""Read and return stats from container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except docker_errors.NotFound:
raise DockerNotFound(
f"Container {name} not found for stats", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not inspect container '{name}': {err!s}", _LOGGER.error
) from err
# container is not running
if docker_container.status != "running":
@@ -665,7 +701,7 @@ class DockerAPI(CoreSysAttributes):
try:
return docker_container.stats(stream=False)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't read stats from {name}: {err}", _LOGGER.error
) from err
@@ -674,61 +710,90 @@ class DockerAPI(CoreSysAttributes):
"""Execute a command inside Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except docker_errors.NotFound:
raise DockerNotFound(
f"Container {name} not found for running command", _LOGGER.warning
) from None
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't get container {name} to run command: {err!s}"
) from err
# Execute
try:
code, output = docker_container.exec_run(command)
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't run command in container {name}: {err!s}"
) from err
return CommandReturn(code, output)
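Aside: docker-py's exec_run returns an (exit_code, output) pair, which maps directly onto CommandReturn. A rough illustration, with the command string being an example:

    code, output = docker_container.exec_run("ls -la /data")
    result = CommandReturn(code, output)  # e.g. CommandReturn(0, b"total 8\n...")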
def remove_image(
async def remove_image(
self, image: str, version: AwesomeVersion, latest: bool = True
) -> None:
"""Remove a Docker image by version and latest."""
try:
if latest:
_LOGGER.info("Removing image %s with latest", image)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:latest", force=True)
try:
await self.images.delete(f"{image}:latest", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
_LOGGER.info("Removing image %s with %s", image, version)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:{version!s}", force=True)
try:
await self.images.delete(f"{image}:{version!s}", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
except (DockerException, requests.RequestException) as err:
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Can't remove image {image}: {err}", _LOGGER.warning
) from err
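Aside: the delete-and-tolerate-404 pattern above now appears twice in this method. A small helper could factor it out; a minimal sketch under the same aiodocker/HTTPStatus imports (the helper name is hypothetical):

    from http import HTTPStatus

    import aiodocker

    async def _delete_image_if_exists(images, ref: str) -> None:
        """Force-delete an image reference, ignoring a missing image (404)."""
        try:
            await images.delete(ref, force=True)
        except aiodocker.DockerError as err:
            if err.status != HTTPStatus.NOT_FOUND:
                raise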
def import_image(self, tar_file: Path) -> Image | None:
async def import_image(self, tar_file: Path) -> dict[str, Any] | None:
"""Import a tar file as image."""
try:
with tar_file.open("rb") as read_tar:
docker_image_list: list[Image] = self.images.load(read_tar) # type: ignore
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
return docker_image_list[0]
except (DockerException, OSError) as err:
resp: list[dict[str, Any]] = await self.images.import_image(read_tar)
except (aiodocker.DockerError, OSError) as err:
raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error
) from err
docker_image_list: list[str] = []
for chunk in resp:
if "errorDetail" in chunk:
raise DockerError(
f"Can't import image from tar: {chunk['errorDetail']['message']}",
_LOGGER.error,
)
if "stream" in chunk:
if match := RE_IMPORT_IMAGE_STREAM.search(chunk["stream"]):
docker_image_list.append(match.group(2))
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
try:
return await self.images.inspect(docker_image_list[0])
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Could not inspect imported image due to: {err!s}", _LOGGER.error
) from err
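Aside: the image-load endpoint streams JSON chunks, and RE_IMPORT_IMAGE_STREAM (defined elsewhere in this module) is assumed to capture the image reference from the "Loaded image" line. Illustrative chunk shapes:

    # {"stream": "Loaded image: local/aarch64-addon-ssh:9.2.1\n"}
    # {"errorDetail": {"message": "archive/tar: invalid tar header"}}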
def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None:
"""Export current images into a tar file."""
try:
docker_image = self.api.get_image(f"{image}:{version}")
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't fetch image {image}: {err}", _LOGGER.error
) from err
@@ -745,7 +810,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Export image %s done", image)
def cleanup_old_images(
async def cleanup_old_images(
self,
current_image: str,
current_version: AwesomeVersion,
@@ -756,46 +821,57 @@ class DockerAPI(CoreSysAttributes):
"""Clean up old versions of an image."""
image = f"{current_image}:{current_version!s}"
try:
keep = {cast(str, self.images.get(image).id)}
except ImageNotFound:
raise DockerNotFound(
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
except (DockerException, requests.RequestException) as err:
try:
image_attr = await self.images.inspect(image)
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
raise DockerNotFound(
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
raise
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Can't get {current_image} for cleanup", _LOGGER.warning
) from err
keep = {cast(str, image_attr["Id"])}
if keep_images:
keep_images -= {image}
try:
for image in keep_images:
# If it's not found, no need to preserve it from getting removed
with suppress(ImageNotFound):
keep.add(cast(str, self.images.get(image).id))
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Failed to get one or more images from {keep} during cleanup",
_LOGGER.warning,
) from err
results = await asyncio.gather(
*[self.images.inspect(image) for image in keep_images],
return_exceptions=True,
)
for result in results:
# If it's not found, no need to preserve it from getting removed
if (
isinstance(result, aiodocker.DockerError)
and result.status == HTTPStatus.NOT_FOUND
):
continue
if isinstance(result, BaseException):
raise DockerError(
f"Failed to get one or more images from {keep} during cleanup",
_LOGGER.warning,
) from result
keep.add(cast(str, result["Id"]))
# Cleanup old and current
image_names = list(
old_images | {current_image} if old_images else {current_image}
)
try:
# This API accepts a list of image names. Tested and confirmed working on docker==7.1.0
# Its typing does say only `str` though. Bit concerning, could an update break this?
images_list = self.images.list(name=image_names) # type: ignore
except (DockerException, requests.RequestException) as err:
images_list = await self.images.list(
filters=json.dumps({"reference": image_names})
)
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
) from err
for docker_image in images_list:
if docker_image.id in keep:
if docker_image["Id"] in keep:
continue
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleanup images: %s", docker_image.tags)
self.images.remove(docker_image.id, force=True)
with suppress(aiodocker.DockerError, requests.RequestException):
_LOGGER.info("Cleanup images: %s", docker_image["RepoTags"])
await self.images.delete(docker_image["Id"], force=True)
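Aside: unlike docker-py, aiodocker takes `filters` as a JSON-encoded string, hence the json.dumps above. A minimal standalone sketch (the image reference is an example):

    import asyncio
    import json

    import aiodocker

    async def list_refs() -> None:
        docker = aiodocker.Docker()  # default local Docker socket
        try:
            images = await docker.images.list(
                filters=json.dumps({"reference": ["ghcr.io/home-assistant/amd64-supervisor"]})
            )
            for image in images:
                print(image["Id"], image.get("RepoTags"))
        finally:
            await docker.close()

    asyncio.run(list_refs())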

View File

@@ -89,7 +89,7 @@ class DockerMonitor(CoreSysAttributes, Thread):
DockerContainerStateEvent(
name=attributes["name"],
state=container_state,
id=event["id"],
id=event["Actor"]["ID"],
time=event["time"],
),
)
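Aside: the monitor now reads the container ID from the event's Actor block. A trimmed Docker `die` event looks roughly like this (fields beyond those used here omitted):

    # {"Type": "container", "Action": "die",
    #  "Actor": {"ID": "abc123...", "Attributes": {"name": "addon_local_ssh"}},
    #  "time": 1700000000}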

View File

@@ -1,10 +1,12 @@
"""Init file for Supervisor Docker object."""
import asyncio
from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import os
import aiodocker
from awesomeversion.awesomeversion import AwesomeVersion
import docker
import requests
@@ -112,19 +114,18 @@ class DockerSupervisor(DockerInterface):
name="docker_supervisor_update_start_tag",
concurrency=JobConcurrency.GROUP_QUEUE,
)
def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]:
async def update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version."""
return self.sys_run_in_executor(self._update_start_tag, image, version)
def _update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version.
Needs to run inside an executor.
"""
try:
docker_container = self.sys_docker.containers.get(self.name)
docker_image = self.sys_docker.images.get(f"{image}:{version!s}")
except (docker.errors.DockerException, requests.RequestException) as err:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
docker_image = await self.sys_docker.images.inspect(f"{image}:{version!s}")
except (
aiodocker.DockerError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(
f"Can't get image or container to fix start tag: {err}", _LOGGER.error
) from err
@@ -144,8 +145,14 @@ class DockerSupervisor(DockerInterface):
# If version tag
if start_tag != "latest":
continue
docker_image.tag(start_image, start_tag)
docker_image.tag(start_image, version.string)
await asyncio.gather(
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=start_tag
),
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=version.string
),
)
except (docker.errors.DockerException, requests.RequestException) as err:
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err

View File

@@ -1,25 +1,25 @@
"""Core Exceptions."""
from collections.abc import Callable
from collections.abc import Callable, Mapping
from typing import Any
MESSAGE_CHECK_SUPERVISOR_LOGS = (
"Check supervisor logs for details (check with '{logs_command}')"
)
EXTRA_FIELDS_LOGS_COMMAND = {"logs_command": "ha supervisor logs"}
class HassioError(Exception):
"""Root exception."""
error_key: str | None = None
message_template: str | None = None
extra_fields: dict[str, Any] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
extra_fields: dict[str, Any] | None = None,
self, message: str | None = None, logger: Callable[..., None] | None = None
) -> None:
"""Raise & log."""
self.extra_fields = extra_fields or {}
if not message and self.message_template:
message = (
self.message_template.format(**self.extra_fields)
@@ -41,6 +41,94 @@ class HassioNotSupportedError(HassioError):
"""Function is not supported."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
headers: Mapping[str, str] | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
job_id: str | None = None,
) -> None:
"""Raise & log, optionally with job."""
super().__init__(message, logger)
self.headers = headers
self.job_id = job_id
class APIUnauthorized(APIError):
"""API unauthorized error."""
status = 401
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIGone(APIError):
"""API is no longer available."""
status = 410
class APITooManyRequests(APIError):
"""API too many requests error."""
status = 429
class APIInternalServerError(APIError):
"""API internal server error."""
status = 500
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
class APIUnknownSupervisorError(APIError):
"""Unknown error occurred within supervisor. Adds supervisor check logs rider to mesage template."""
status = 500
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
) -> None:
"""Initialize exception."""
self.message_template = (
f"{self.message_template}. {MESSAGE_CHECK_SUPERVISOR_LOGS}"
)
self.extra_fields = (self.extra_fields or {}) | EXTRA_FIELDS_LOGS_COMMAND
super().__init__(None, logger, job_id=job_id)
# JobManager
@@ -122,6 +210,13 @@ class SupervisorAppArmorError(SupervisorError):
"""Supervisor AppArmor error."""
class SupervisorUnknownError(SupervisorError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs interacting with Supervisor or its container."""
error_key = "supervisor_unknown_error"
message_template = "An unknown error occurred with Supervisor"
class SupervisorJobError(SupervisorError, JobException):
"""Raise on job errors."""
@@ -250,6 +345,54 @@ class AddonConfigurationError(AddonsError):
"""Error with add-on configuration."""
class AddonConfigurationInvalidError(AddonConfigurationError, APIError):
"""Raise if invalid configuration provided for addon."""
error_key = "addon_configuration_invalid_error"
message_template = "Add-on {addon} has invalid options: {validation_error}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonBootConfigCannotChangeError(AddonsError, APIError):
"""Raise if user attempts to change addon boot config when it can't be changed."""
error_key = "addon_boot_config_cannot_change_error"
message_template = (
"Addon {addon} boot option is set to {boot_config} so it cannot be changed"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, boot_config: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "boot_config": boot_config}
super().__init__(None, logger)
class AddonNotRunningError(AddonsError, APIError):
"""Raise when an addon is not running."""
error_key = "addon_not_running_error"
message_template = "Add-on {addon} is not running"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
@@ -268,11 +411,8 @@ class AddonNotSupportedArchitectureError(AddonNotSupportedError):
architectures: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "architectures": ", ".join(architectures)},
)
self.extra_fields = {"slug": slug, "architectures": ", ".join(architectures)}
super().__init__(None, logger)
class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
@@ -289,11 +429,8 @@ class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
machine_types: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "machine_types": ", ".join(machine_types)},
)
self.extra_fields = {"slug": slug, "machine_types": ", ".join(machine_types)}
super().__init__(None, logger)
class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
@@ -310,11 +447,80 @@ class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
version: str,
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "version": version},
)
self.extra_fields = {"slug": slug, "version": version}
super().__init__(None, logger)
class AddonNotSupportedWriteStdinError(AddonNotSupportedError, APIError):
"""Addon does not support writing to stdin."""
error_key = "addon_not_supported_write_stdin_error"
message_template = "Add-on {addon} does not support writing to stdin"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class AddonBuildDockerfileMissingError(AddonNotSupportedError, APIError):
"""Raise when addon build invalid because dockerfile is missing."""
error_key = "addon_build_dockerfile_missing_error"
message_template = (
"Cannot build addon '{addon}' because dockerfile is missing. A repair "
"using '{repair_command}' will fix this if the cause is data "
"corruption. Otherwise please report this to the addon developer."
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "repair_command": "ha supervisor repair"}
super().__init__(None, logger)
class AddonBuildArchitectureNotSupportedError(AddonNotSupportedError, APIError):
"""Raise when addon cannot be built on system because it doesn't support its architecture."""
error_key = "addon_build_architecture_not_supported_error"
message_template = (
"Cannot build addon '{addon}' because its supported architectures "
"({addon_arches}) do not match the system supported architectures ({system_arches})"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
addon_arch_list: list[str],
system_arch_list: list[str],
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"addon_arches": ", ".join(addon_arch_list),
"system_arches": ", ".join(system_arch_list),
}
super().__init__(None, logger)
class AddonUnknownError(AddonsError, APIUnknownSupervisorError):
"""Raise when unknown error occurs taking an action for an addon."""
error_key = "addon_unknown_error"
message_template = "An unknown error occurred with addon {addon}"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(logger)
class AddonsJobError(AddonsError, JobException):
@@ -346,13 +552,68 @@ class AuthError(HassioError):
"""Auth errors."""
class AuthPasswordResetError(HassioError):
# This one uses the check logs rider even though it's not a 500 error because it
# is bad practice to return error specifics from a password reset API.
class AuthPasswordResetError(AuthError, APIError):
"""Auth error if password reset failed."""
error_key = "auth_password_reset_error"
message_template = (
f"Unable to reset password for '{{user}}'. {MESSAGE_CHECK_SUPERVISOR_LOGS}"
)
class AuthListUsersError(HassioError):
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
user: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"user": user} | EXTRA_FIELDS_LOGS_COMMAND
super().__init__(None, logger)
class AuthListUsersError(AuthError, APIUnknownSupervisorError):
"""Auth error if listing users failed."""
error_key = "auth_list_users_error"
message_template = "Can't request listing users on Home Assistant"
class AuthListUsersNoneResponseError(AuthError, APIInternalServerError):
"""Auth error if listing users returned invalid None response."""
error_key = "auth_list_users_none_response_error"
message_template = "Home Assistant returned invalid response of `{none}` instead of a list of users. Check Home Assistant logs for details (check with `{logs_command}`)"
extra_fields = {"none": "None", "logs_command": "ha core logs"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
error_key = "auth_invalid_non_string_value_error"
message_template = "Username and password must be strings"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
headers: Mapping[str, str] | None = None,
) -> None:
"""Initialize exception."""
super().__init__(None, logger, headers=headers)
class AuthHomeAssistantAPIValidationError(AuthError, APIUnknownSupervisorError):
"""Error encountered trying to validate auth details via Home Assistant API."""
error_key = "auth_home_assistant_api_validation_error"
message_template = "Unable to validate authentication details with Home Assistant"
# Host
@@ -385,54 +646,6 @@ class HostLogError(HostError):
"""Internal error with host log."""
# API
class APIError(HassioError, RuntimeError):
"""API errors."""
status = 400
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
error: HassioError | None = None,
) -> None:
"""Raise & log, optionally with job."""
# Allow these to be set from another error here since APIErrors essentially wrap others to add a status
self.error_key = error.error_key if error else None
self.message_template = error.message_template if error else None
super().__init__(
message, logger, extra_fields=error.extra_fields if error else None
)
self.job_id = job_id
class APIForbidden(APIError):
"""API forbidden error."""
status = 403
class APINotFound(APIError):
"""API not found error."""
status = 404
class APIAddonNotInstalled(APIError):
"""Not installed addon requested at addons API."""
class APIDBMigrationInProgress(APIError):
"""Service is unavailable due to an offline DB migration is in progress."""
status = 503
# Service / Discovery
@@ -577,21 +790,6 @@ class PwnedConnectivityError(PwnedError):
"""Connectivity errors while checking pwned passwords."""
# util/codenotary
class CodeNotaryError(HassioError):
"""Error general with CodeNotary."""
class CodeNotaryUntrusted(CodeNotaryError):
"""Error on untrusted content."""
class CodeNotaryBackendError(CodeNotaryError):
"""CodeNotary backend error happening."""
# util/whoami
@@ -625,6 +823,10 @@ class DockerError(HassioError):
"""Docker API/Transport errors."""
class DockerBuildError(DockerError):
"""Docker error during build."""
class DockerAPIError(DockerError):
"""Docker API error."""
@@ -648,9 +850,29 @@ class DockerLogOutOfOrder(DockerError):
class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space."""
error_key = "docker_no_space_on_device"
message_template = "No space left on disk"
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__("No space left on disk", logger=logger)
super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerError, APITooManyRequests):
"""Raise for docker hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
message_template = (
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see {dockerhub_rate_limit_url}"
)
extra_fields = {
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(None, logger=logger)
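Aside: the class-level extra_fields fill the template placeholder via HassioError.__init__, so the message should render roughly as below, and APITooManyRequests.status maps it to HTTP 429 at the API layer:

    # str(DockerHubRateLimitExceeded()) ==
    #   "Your IP address has made too many requests to Docker Hub which activated "
    #   "a rate limit. For more details see "
    #   "https://www.home-assistant.io/more-info/dockerhub-rate-limit"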
class DockerJobError(DockerError, JobException):
@@ -721,6 +943,20 @@ class StoreNotFound(StoreError):
"""Raise if slug is not known."""
class StoreAddonNotFoundError(StoreError, APINotFound):
"""Raise if a requested addon is not in the store."""
error_key = "store_addon_not_found_error"
message_template = "Addon {addon} does not exist in the store"
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon}
super().__init__(None, logger)
class StoreJobError(StoreError, JobException):
"""Raise on job error with git."""
@@ -756,7 +992,7 @@ class BackupJobError(BackupError, JobException):
"""Raise on Backup job error."""
class BackupFileNotFoundError(BackupError):
class BackupFileNotFoundError(BackupError, APINotFound):
"""Raise if the backup file hasn't been found."""
@@ -768,6 +1004,55 @@ class BackupFileExistError(BackupError):
"""Raise if the backup file already exists."""
class AddonBackupMetadataInvalidError(BackupError, APIError):
"""Raise if invalid metadata file provided for addon in backup."""
error_key = "addon_backup_metadata_invalid_error"
message_template = (
"Metadata file for add-on {addon} in backup is invalid: {validation_error}"
)
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
addon: str,
validation_error: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"addon": addon, "validation_error": validation_error}
super().__init__(None, logger)
class AddonPrePostBackupCommandReturnedError(BackupError, APIError):
"""Raise when addon's pre/post backup command returns an error."""
error_key = "addon_pre_post_backup_command_returned_error"
message_template = (
"Pre-/Post backup command for add-on {addon} returned error code: "
"{exit_code}. Please report this to the addon developer. Enable debug "
"logging to capture complete command output using {debug_logging_command}"
)
def __init__(
self, logger: Callable[..., None] | None = None, *, addon: str, exit_code: int
) -> None:
"""Initialize exception."""
self.extra_fields = {
"addon": addon,
"exit_code": exit_code,
"debug_logging_command": "ha supervisor options --logging debug",
}
super().__init__(None, logger)
class BackupRestoreUnknownError(BackupError, APIUnknownSupervisorError):
"""Raise when an unknown error occurs during backup or restore."""
error_key = "backup_restore_unknown_error"
message_template = "An unknown error occurred during backup/restore"
# Security

View File

@@ -9,7 +9,12 @@ from typing import Any
from supervisor.resolution.const import UnhealthyReason
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, DBusObjectError, HardwareNotFound
from ..exceptions import (
DBusError,
DBusNotConnectedError,
DBusObjectError,
HardwareNotFound,
)
from .const import UdevSubsystem
from .data import Device
@@ -207,6 +212,8 @@ class HwDisk(CoreSysAttributes):
try:
block_device = self.sys_dbus.udisks2.get_block_device_by_path(device_path)
drive = self.sys_dbus.udisks2.get_drive(block_device.drive)
except DBusNotConnectedError:
return None
except DBusObjectError:
_LOGGER.warning(
"Unable to find UDisks2 drive for device at %s", device_path.as_posix()

View File

@@ -28,6 +28,7 @@ from ..exceptions import (
HomeAssistantUpdateError,
JobException,
)
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup
@@ -224,6 +225,13 @@ class HomeAssistantCore(JobGroup):
],
on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
# We assume for now the docker image pull is 100% of this task. But from
# a user perspective that isn't true. Other steps that take time but
# are not accounted for in progress include: partial backup, image
# cleanup, and Home Assistant restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(
self,
@@ -420,13 +428,6 @@ class HomeAssistantCore(JobGroup):
"""
return self.instance.logs()
def check_trust(self) -> Awaitable[None]:
"""Calculate HomeAssistant docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
async def stats(self) -> DockerStats:
"""Return stats of Home Assistant."""
try:

View File

@@ -98,15 +98,21 @@ class SupervisorJobError:
"""Representation of an error occurring during a supervisor job."""
type_: type[HassioError] = HassioError
message: str = "Unknown error, see supervisor logs"
message: str = (
"Unknown error, see Supervisor logs (check with 'ha supervisor logs')"
)
stage: str | None = None
error_key: str | None = None
extra_fields: dict[str, Any] | None = None
def as_dict(self) -> dict[str, str | None]:
def as_dict(self) -> dict[str, Any]:
"""Return dictionary representation."""
return {
"type": self.type_.__name__,
"message": self.message,
"stage": self.stage,
"error_key": self.error_key,
"extra_fields": self.extra_fields,
}
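Aside: with the two new fields, a captured error serializes roughly as follows (values illustrative, constructor order as used in capture_error below):

    # SupervisorJobError(
    #     DockerHubRateLimitExceeded,
    #     "Your IP address has made too many requests to Docker Hub ...",
    #     None,
    #     "dockerhub_rate_limit_exceeded",
    #     {"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"},
    # ).as_dict()
    # -> {"type": "DockerHubRateLimitExceeded", "message": "...", "stage": None,
    #     "error_key": "dockerhub_rate_limit_exceeded", "extra_fields": {...}}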
@@ -156,7 +162,9 @@ class SupervisorJob:
def capture_error(self, err: HassioError | None = None) -> None:
"""Capture an error or record that an unknown error has occurred."""
if err:
new_error = SupervisorJobError(type(err), str(err), self.stage)
new_error = SupervisorJobError(
type(err), str(err), self.stage, err.error_key, err.extra_fields
)
else:
new_error = SupervisorJobError(stage=self.stage)
self.errors += [new_error]
@@ -282,8 +290,10 @@ class JobManager(FileConfiguration, CoreSysAttributes):
# reporting shouldn't raise and break the active job
continue
progress = sync.starting_progress + (
sync.progress_allocation * job_data["progress"]
progress = min(
100,
sync.starting_progress
+ (sync.progress_allocation * job_data["progress"]),
)
# Using max would always trigger on change even if progress was unchanged
# pylint: disable-next=R1731
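Aside: a quick worked example of the clamp: with starting_progress=60 and progress_allocation=0.5, a child reporting 90% would yield 60 + 0.5 * 90 = 105 unclamped; min(100, ...) caps the reported parent progress at 100.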
@@ -325,6 +335,17 @@ class JobManager(FileConfiguration, CoreSysAttributes):
if not curr_parent.child_job_syncs:
continue
# HACK: If the parent triggers the same child job again, we just skip this
# second sync. Maybe it would be better to have this reflected in the job stage
# and reset progress to 0 instead? There is no support for such stage
# information on Core update entities today though.
if curr_parent.done is True or curr_parent.progress >= 100:
_LOGGER.debug(
"Skipping parent job sync for done parent job %s",
curr_parent.name,
)
continue
# Break after first match at each parent as it doesn't make sense
# to match twice. But it could match multiple parents
for sync in curr_parent.child_job_syncs:

View File

@@ -64,6 +64,19 @@ def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
# Not full startup - missing information
if coresys.core.state in (CoreState.INITIALIZE, CoreState.SETUP):
# During SETUP, we have basic system info available for better debugging
if coresys.core.state == CoreState.SETUP:
event.setdefault("contexts", {}).update(
{
"versions": {
"docker": coresys.docker.info.version,
"supervisor": coresys.supervisor.version,
},
"host": {
"machine": coresys.machine,
},
}
)
return event
# List installed addons

View File

@@ -76,13 +76,6 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
"""Return True if a task is in progress."""
return self.instance.in_progress
def check_trust(self) -> Awaitable[None]:
"""Calculate plugin docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
def logs(self) -> Awaitable[bytes]:
"""Get docker plugin logs.

View File

@@ -1,59 +0,0 @@
"""Helpers to check supervisor trust."""
import logging
from ...const import CoreState
from ...coresys import CoreSys
from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
from ..const import ContextType, IssueType, UnhealthyReason
from .base import CheckBase
_LOGGER: logging.Logger = logging.getLogger(__name__)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckSupervisorTrust(coresys)
class CheckSupervisorTrust(CheckBase):
"""CheckSystemTrust class for check."""
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_security.content_trust:
_LOGGER.warning(
"Skipping %s, content_trust is globally disabled", self.slug
)
return
try:
await self.sys_supervisor.check_trust()
except CodeNotaryUntrusted:
self.sys_resolution.add_unhealthy_reason(UnhealthyReason.UNTRUSTED)
self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
except CodeNotaryError:
pass
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
try:
await self.sys_supervisor.check_trust()
except CodeNotaryError:
return True
return False
@property
def issue(self) -> IssueType:
"""Return a IssueType enum."""
return IssueType.TRUST
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.SUPERVISOR
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this check can run."""
return [CoreState.RUNNING, CoreState.STARTUP]

View File

@@ -39,7 +39,6 @@ class UnsupportedReason(StrEnum):
APPARMOR = "apparmor"
CGROUP_VERSION = "cgroup_version"
CONNECTIVITY_CHECK = "connectivity_check"
CONTENT_TRUST = "content_trust"
DBUS = "dbus"
DNS_SERVER = "dns_server"
DOCKER_CONFIGURATION = "docker_configuration"
@@ -54,7 +53,6 @@ class UnsupportedReason(StrEnum):
PRIVILEGED = "privileged"
RESTART_POLICY = "restart_policy"
SOFTWARE = "software"
SOURCE_MODS = "source_mods"
SUPERVISOR_VERSION = "supervisor_version"
SYSTEMD = "systemd"
SYSTEMD_JOURNAL = "systemd_journal"
@@ -103,7 +101,6 @@ class IssueType(StrEnum):
PWNED = "pwned"
REBOOT_REQUIRED = "reboot_required"
SECURITY = "security"
TRUST = "trust"
UPDATE_FAILED = "update_failed"
UPDATE_ROLLBACK = "update_rollback"
@@ -115,7 +112,6 @@ class SuggestionType(StrEnum):
CLEAR_FULL_BACKUP = "clear_full_backup"
CREATE_FULL_BACKUP = "create_full_backup"
DISABLE_BOOT = "disable_boot"
EXECUTE_INTEGRITY = "execute_integrity"
EXECUTE_REBOOT = "execute_reboot"
EXECUTE_REBUILD = "execute_rebuild"
EXECUTE_RELOAD = "execute_reload"

View File

@@ -13,7 +13,6 @@ from .validate import get_valid_modules
_LOGGER: logging.Logger = logging.getLogger(__name__)
UNHEALTHY = [
UnsupportedReason.DOCKER_VERSION,
UnsupportedReason.LXC,
UnsupportedReason.PRIVILEGED,
]

View File

@@ -1,34 +0,0 @@
"""Evaluation class for Content Trust."""
from ...const import CoreState
from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase
def setup(coresys: CoreSys) -> EvaluateBase:
"""Initialize evaluation-setup function."""
return EvaluateContentTrust(coresys)
class EvaluateContentTrust(EvaluateBase):
"""Evaluate system content trust level."""
@property
def reason(self) -> UnsupportedReason:
"""Return a UnsupportedReason enum."""
return UnsupportedReason.CONTENT_TRUST
@property
def on_failure(self) -> str:
"""Return a string that is printed when self.evaluate is True."""
return "System run with disabled trusted content security."
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE, CoreState.SETUP, CoreState.RUNNING]
async def evaluate(self) -> bool:
"""Run evaluation."""
return not self.sys_security.content_trust

View File

@@ -8,7 +8,7 @@ from ..const import UnsupportedReason
from .base import EvaluateBase
EXPECTED_LOGGING = "journald"
EXPECTED_STORAGE = "overlay2"
EXPECTED_STORAGE = ("overlay2", "overlayfs")
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -41,14 +41,18 @@ class EvaluateDockerConfiguration(EvaluateBase):
storage_driver = self.sys_docker.info.storage
logging_driver = self.sys_docker.info.logging
if storage_driver != EXPECTED_STORAGE:
is_unsupported = False
if storage_driver not in EXPECTED_STORAGE:
is_unsupported = True
_LOGGER.warning(
"Docker storage driver %s is not supported!", storage_driver
)
if logging_driver != EXPECTED_LOGGING:
is_unsupported = True
_LOGGER.warning(
"Docker logging driver %s is not supported!", logging_driver
)
return storage_driver != EXPECTED_STORAGE or logging_driver != EXPECTED_LOGGING
return is_unsupported

View File

@@ -1,72 +0,0 @@
"""Evaluation class for Content Trust."""
import errno
import logging
from pathlib import Path
from ...const import CoreState
from ...coresys import CoreSys
from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
from ...utils.codenotary import calc_checksum_path_sourcecode
from ..const import ContextType, IssueType, UnhealthyReason, UnsupportedReason
from .base import EvaluateBase
_SUPERVISOR_SOURCE = Path("/usr/src/supervisor/supervisor")
_LOGGER: logging.Logger = logging.getLogger(__name__)
def setup(coresys: CoreSys) -> EvaluateBase:
"""Initialize evaluation-setup function."""
return EvaluateSourceMods(coresys)
class EvaluateSourceMods(EvaluateBase):
"""Evaluate supervisor source modifications."""
@property
def reason(self) -> UnsupportedReason:
"""Return a UnsupportedReason enum."""
return UnsupportedReason.SOURCE_MODS
@property
def on_failure(self) -> str:
"""Return a string that is printed when self.evaluate is True."""
return "System detect unauthorized source code modifications."
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this evaluation can run."""
return [CoreState.RUNNING]
async def evaluate(self) -> bool:
"""Run evaluation."""
if not self.sys_security.content_trust:
_LOGGER.warning("Disabled content-trust, skipping evaluation")
return False
# Calculate sum of the source code
try:
checksum = await self.sys_run_in_executor(
calc_checksum_path_sourcecode, _SUPERVISOR_SOURCE
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
self.sys_resolution.create_issue(
IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM
)
_LOGGER.error("Can't calculate checksum of source code: %s", err)
return False
# Validate checksum
try:
await self.sys_security.verify_own_content(checksum)
except CodeNotaryUntrusted:
return True
except CodeNotaryError:
pass
return False

View File

@@ -1,67 +0,0 @@
"""Helpers to check and fix issues with free space."""
from datetime import timedelta
import logging
from ...coresys import CoreSys
from ...exceptions import ResolutionFixupError, ResolutionFixupJobError
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ...security.const import ContentTrustResult
from ..const import ContextType, IssueType, SuggestionType
from .base import FixupBase
_LOGGER: logging.Logger = logging.getLogger(__name__)
def setup(coresys: CoreSys) -> FixupBase:
"""Check setup function."""
return FixupSystemExecuteIntegrity(coresys)
class FixupSystemExecuteIntegrity(FixupBase):
"""Storage class for fixup."""
@Job(
name="fixup_system_execute_integrity_process",
conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=ResolutionFixupJobError,
throttle_period=timedelta(hours=8),
throttle=JobThrottle.THROTTLE,
)
async def process_fixup(self, reference: str | None = None) -> None:
"""Initialize the fixup class."""
result = await self.sys_security.integrity_check()
if ContentTrustResult.FAILED in (result.core, result.supervisor):
raise ResolutionFixupError()
for plugin in result.plugins:
if plugin != ContentTrustResult.FAILED:
continue
raise ResolutionFixupError()
for addon in result.addons:
if addon != ContentTrustResult.FAILED:
continue
raise ResolutionFixupError()
@property
def suggestion(self) -> SuggestionType:
"""Return a SuggestionType enum."""
return SuggestionType.EXECUTE_INTEGRITY
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.SYSTEM
@property
def issues(self) -> list[IssueType]:
"""Return a IssueType enum list."""
return [IssueType.TRUST]
@property
def auto(self) -> bool:
"""Return if a fixup can be apply as auto fix."""
return True

View File

@@ -1,24 +0,0 @@
"""Security constants."""
from enum import StrEnum
import attr
class ContentTrustResult(StrEnum):
"""Content trust result enum."""
PASS = "pass"
ERROR = "error"
FAILED = "failed"
UNTESTED = "untested"
@attr.s
class IntegrityResult:
"""Result of a full integrity check."""
supervisor: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
core: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
plugins: dict[str, ContentTrustResult] = attr.ib(default={})
addons: dict[str, ContentTrustResult] = attr.ib(default={})

View File

@@ -4,27 +4,12 @@ from __future__ import annotations
import logging
from ..const import (
ATTR_CONTENT_TRUST,
ATTR_FORCE_SECURITY,
ATTR_PWNED,
FILE_HASSIO_SECURITY,
)
from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED, FILE_HASSIO_SECURITY
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
PwnedError,
SecurityJobError,
)
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.codenotary import cas_validate
from ..exceptions import PwnedError
from ..utils.common import FileConfiguration
from ..utils.pwned import check_pwned_password
from ..validate import SCHEMA_SECURITY_CONFIG
from .const import ContentTrustResult, IntegrityResult
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -37,16 +22,6 @@ class Security(FileConfiguration, CoreSysAttributes):
super().__init__(FILE_HASSIO_SECURITY, SCHEMA_SECURITY_CONFIG)
self.coresys = coresys
@property
def content_trust(self) -> bool:
"""Return if content trust is enabled/disabled."""
return self._data[ATTR_CONTENT_TRUST]
@content_trust.setter
def content_trust(self, value: bool) -> None:
"""Set content trust is enabled/disabled."""
self._data[ATTR_CONTENT_TRUST] = value
@property
def force(self) -> bool:
"""Return if force security is enabled/disabled."""
@@ -67,30 +42,6 @@ class Security(FileConfiguration, CoreSysAttributes):
"""Set pwned is enabled/disabled."""
self._data[ATTR_PWNED] = value
async def verify_content(self, signer: str, checksum: str) -> None:
"""Verify content on CAS."""
if not self.content_trust:
_LOGGER.warning("Disabled content-trust, skip validation")
return
try:
await cas_validate(signer, checksum)
except CodeNotaryUntrusted:
raise
except CodeNotaryError:
if self.force:
raise
self.sys_resolution.create_issue(
IssueType.TRUST,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_INTEGRITY],
)
return
async def verify_own_content(self, checksum: str) -> None:
"""Verify content from HA org."""
return await self.verify_content("notary@home-assistant.io", checksum)
async def verify_secret(self, pwned_hash: str) -> None:
"""Verify pwned state of a secret."""
if not self.pwned:
@@ -103,73 +54,3 @@ class Security(FileConfiguration, CoreSysAttributes):
if self.force:
raise
return
@Job(
name="security_manager_integrity_check",
conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=SecurityJobError,
concurrency=JobConcurrency.REJECT,
)
async def integrity_check(self) -> IntegrityResult:
"""Run a full system integrity check of the platform.
We only allow trusted content to be installed.
This is an out-of-band manual check.
"""
result: IntegrityResult = IntegrityResult()
if not self.content_trust:
_LOGGER.warning(
"Skipping integrity check, content_trust is globally disabled"
)
return result
# Supervisor
try:
await self.sys_supervisor.check_trust()
result.supervisor = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.supervisor = ContentTrustResult.ERROR
self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
except CodeNotaryError:
result.supervisor = ContentTrustResult.FAILED
# Core
try:
await self.sys_homeassistant.core.check_trust()
result.core = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.core = ContentTrustResult.ERROR
self.sys_resolution.create_issue(IssueType.TRUST, ContextType.CORE)
except CodeNotaryError:
result.core = ContentTrustResult.FAILED
# Plugins
for plugin in self.sys_plugins.all_plugins:
try:
await plugin.check_trust()
result.plugins[plugin.slug] = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.plugins[plugin.slug] = ContentTrustResult.ERROR
self.sys_resolution.create_issue(
IssueType.TRUST, ContextType.PLUGIN, reference=plugin.slug
)
except CodeNotaryError:
result.plugins[plugin.slug] = ContentTrustResult.FAILED
# Add-ons
for addon in self.sys_addons.installed:
if not addon.signed:
result.addons[addon.slug] = ContentTrustResult.UNTESTED
continue
try:
await addon.check_trust()
result.addons[addon.slug] = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.addons[addon.slug] = ContentTrustResult.ERROR
self.sys_resolution.create_issue(
IssueType.TRUST, ContextType.ADDON, reference=addon.slug
)
except CodeNotaryError:
result.addons[addon.slug] = ContentTrustResult.FAILED
return result

View File

@@ -13,6 +13,8 @@ import aiohttp
from aiohttp.client_exceptions import ClientError
from awesomeversion import AwesomeVersion, AwesomeVersionException
from supervisor.jobs import ChildJobSyncFilter
from .const import (
ATTR_SUPERVISOR_INTERNET,
SUPERVISOR_VERSION,
@@ -23,19 +25,16 @@ from .coresys import CoreSys, CoreSysAttributes
from .docker.stats import DockerStats
from .docker.supervisor import DockerSupervisor
from .exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
DockerError,
HostAppArmorError,
SupervisorAppArmorError,
SupervisorError,
SupervisorJobError,
SupervisorUnknownError,
SupervisorUpdateError,
)
from .jobs.const import JobCondition, JobThrottle
from .jobs.decorator import Job
from .resolution.const import ContextType, IssueType, UnhealthyReason
from .utils.codenotary import calc_checksum
from .utils.sentry import async_capture_exception
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -148,20 +147,6 @@ class Supervisor(CoreSysAttributes):
_LOGGER.error,
) from err
# Validate
try:
await self.sys_security.verify_own_content(calc_checksum(data))
except CodeNotaryUntrusted as err:
raise SupervisorAppArmorError(
"Content-Trust is broken for the AppArmor profile fetch!",
_LOGGER.critical,
) from err
except CodeNotaryError as err:
raise SupervisorAppArmorError(
f"CodeNotary error while processing AppArmor fetch: {err!s}",
_LOGGER.error,
) from err
# Load
temp_dir: TemporaryDirectory | None = None
@@ -195,6 +180,15 @@ class Supervisor(CoreSysAttributes):
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
@Job(
name="supervisor_update",
# We assume for now the docker image pull is 100% of this task. But from
# a user perspective that isn't true. Other steps that take time but
# are not accounted for in progress include: AppArmor update and restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update Supervisor version."""
version = version or self.latest_version or self.version
@@ -221,6 +215,7 @@ class Supervisor(CoreSysAttributes):
# Update container
_LOGGER.info("Update Supervisor to version %s", version)
try:
await self.instance.install(version, image=image)
await self.instance.update_start_tag(image, version)
@@ -261,19 +256,12 @@ class Supervisor(CoreSysAttributes):
"""
return self.instance.logs()
def check_trust(self) -> Awaitable[None]:
"""Calculate Supervisor docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
async def stats(self) -> DockerStats:
"""Return stats of Supervisor."""
try:
return await self.instance.stats()
except DockerError as err:
raise SupervisorError() from err
raise SupervisorUnknownError() from err
async def repair(self):
"""Repair local Supervisor data."""

View File

@@ -31,14 +31,8 @@ from .const import (
UpdateChannel,
)
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
UpdaterError,
UpdaterJobError,
)
from .exceptions import UpdaterError, UpdaterJobError
from .jobs.decorator import Job, JobCondition
from .utils.codenotary import calc_checksum
from .utils.common import FileConfiguration
from .validate import SCHEMA_UPDATER_CONFIG
@@ -289,19 +283,6 @@ class Updater(FileConfiguration, CoreSysAttributes):
self.sys_bus.remove_listener(self._connectivity_listener)
self._connectivity_listener = None
# Validate
try:
await self.sys_security.verify_own_content(calc_checksum(data))
except CodeNotaryUntrusted as err:
raise UpdaterError(
"Content-Trust is broken for the version file fetch!", _LOGGER.critical
) from err
except CodeNotaryError as err:
raise UpdaterError(
f"CodeNotary error while processing version fetch: {err!s}",
_LOGGER.error,
) from err
# Parse data
try:
data = json.loads(data)

View File

@@ -1,109 +0,0 @@
"""Small wrapper for CodeNotary."""
from __future__ import annotations
import asyncio
import hashlib
import json
import logging
from pathlib import Path
import shlex
from typing import Final
from dirhash import dirhash
from ..exceptions import CodeNotaryBackendError, CodeNotaryError, CodeNotaryUntrusted
from . import clean_env
_LOGGER: logging.Logger = logging.getLogger(__name__)
_CAS_CMD: str = (
"cas authenticate --signerID {signer} --silent --output json --hash {sum}"
)
_CACHE: set[tuple[str, str]] = set()
_ATTR_ERROR: Final = "error"
_ATTR_STATUS: Final = "status"
_FALLBACK_ERROR: Final = "Unknown CodeNotary backend issue"
def calc_checksum(data: str | bytes) -> str:
"""Generate checksum for CodeNotary."""
if isinstance(data, str):
return hashlib.sha256(data.encode()).hexdigest()
return hashlib.sha256(data).hexdigest()
def calc_checksum_path_sourcecode(folder: Path) -> str:
"""Calculate checksum for a path source code.
Need catch OSError.
"""
return dirhash(folder.as_posix(), "sha256", match=["*.py"])
# pylint: disable=unreachable
async def cas_validate(
signer: str,
checksum: str,
) -> None:
"""Validate data against CodeNotary."""
return
if (checksum, signer) in _CACHE:
return
# Generate command for request
command = shlex.split(_CAS_CMD.format(signer=signer, sum=checksum))
# Request notary authorization
_LOGGER.debug("Send cas command: %s", command)
try:
proc = await asyncio.create_subprocess_exec(
*command,
stdin=asyncio.subprocess.DEVNULL,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
env=clean_env(),
)
async with asyncio.timeout(15):
data, error = await proc.communicate()
except TimeoutError:
raise CodeNotaryBackendError(
"Timeout while processing CodeNotary", _LOGGER.warning
) from None
except OSError as err:
raise CodeNotaryError(
f"CodeNotary fatal error: {err!s}", _LOGGER.critical
) from err
# Check if Notarized
if proc.returncode != 0 and not data:
if error:
try:
error = error.decode("utf-8")
except UnicodeDecodeError as err:
raise CodeNotaryBackendError(_FALLBACK_ERROR, _LOGGER.warning) from err
if "not notarized" in error:
raise CodeNotaryUntrusted()
else:
error = _FALLBACK_ERROR
raise CodeNotaryBackendError(error, _LOGGER.warning)
# Parse data
try:
data_json = json.loads(data)
_LOGGER.debug("CodeNotary response with: %s", data_json)
except (json.JSONDecodeError, UnicodeDecodeError) as err:
raise CodeNotaryError(
f"Can't parse CodeNotary output: {data!s} - {err!s}", _LOGGER.error
) from err
if _ATTR_ERROR in data_json:
raise CodeNotaryBackendError(data_json[_ATTR_ERROR], _LOGGER.warning)
if data_json[_ATTR_STATUS] == 0:
_CACHE.add((checksum, signer))
else:
raise CodeNotaryUntrusted()

View File

@@ -5,12 +5,20 @@ from collections.abc import AsyncGenerator
from datetime import UTC, datetime
from functools import wraps
import json
import re
from aiohttp import ClientResponse
from supervisor.exceptions import MalformedBinaryEntryError
from supervisor.host.const import LogFormatter
_RE_ANSI_CSI_COLORS_PATTERN = re.compile(r"\x1B\[[0-9;]*m")
def _strip_ansi_colors(message: str) -> str:
"""Remove ANSI color codes from a message string."""
return _RE_ANSI_CSI_COLORS_PATTERN.sub("", message)
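Aside: a quick sanity check of the pattern (the escape sequences are standard SGR color codes):

    # _strip_ansi_colors("\x1b[32mINFO\x1b[0m ready") == "INFO ready"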
def formatter(required_fields: list[str]):
"""Decorate journal entry formatters with list of required fields.
@@ -31,9 +39,9 @@ def formatter(required_fields: list[str]):
@formatter(["MESSAGE"])
def journal_plain_formatter(entries: dict[str, str]) -> str:
def journal_plain_formatter(entries: dict[str, str], no_colors: bool = False) -> str:
"""Format parsed journal entries as a plain message."""
return entries["MESSAGE"]
return _strip_ansi_colors(entries["MESSAGE"]) if no_colors else entries["MESSAGE"]
@formatter(
@@ -45,7 +53,7 @@ def journal_plain_formatter(entries: dict[str, str]) -> str:
"MESSAGE",
]
)
def journal_verbose_formatter(entries: dict[str, str]) -> str:
def journal_verbose_formatter(entries: dict[str, str], no_colors: bool = False) -> str:
"""Format parsed journal entries to a journalctl-like format."""
ts = datetime.fromtimestamp(
int(entries["__REALTIME_TIMESTAMP"]) / 1e6, UTC
@@ -58,14 +66,24 @@ def journal_verbose_formatter(entries: dict[str, str]) -> str:
else entries.get("SYSLOG_IDENTIFIER", "_UNKNOWN_")
)
return f"{ts} {entries.get('_HOSTNAME', '')} {identifier}: {entries.get('MESSAGE', '')}"
message = (
_strip_ansi_colors(entries.get("MESSAGE", ""))
if no_colors
else entries.get("MESSAGE", "")
)
return f"{ts} {entries.get('_HOSTNAME', '')} {identifier}: {message}"
async def journal_logs_reader(
journal_logs: ClientResponse, log_formatter: LogFormatter = LogFormatter.PLAIN
journal_logs: ClientResponse,
log_formatter: LogFormatter = LogFormatter.PLAIN,
no_colors: bool = False,
) -> AsyncGenerator[tuple[str | None, str]]:
"""Read logs from systemd journal line by line, formatted using the given formatter.
Optionally strip ANSI color codes from the entries' messages.
Returns a generator of (cursor, formatted_entry) tuples.
"""
match log_formatter:
@@ -84,7 +102,10 @@ async def journal_logs_reader(
# at EOF (likely race between at_eof and EOF check in readuntil)
if line == b"\n" or not line:
if entries:
yield entries.get("__CURSOR"), formatter_(entries)
yield (
entries.get("__CURSOR"),
formatter_(entries, no_colors=no_colors),
)
entries = {}
continue
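Aside: callers that want uncolored output pass the new flag through; a minimal sketch, assuming `resp` is a ClientResponse streaming journal export data and `sink` is a hypothetical consumer:

    # async for cursor, line in journal_logs_reader(
    #     resp, LogFormatter.VERBOSE, no_colors=True
    # ):
    #     await sink.send(line)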

View File

@@ -12,7 +12,6 @@ from .const import (
ATTR_AUTO_UPDATE,
ATTR_CHANNEL,
ATTR_CLI,
ATTR_CONTENT_TRUST,
ATTR_COUNTRY,
ATTR_DEBUG,
ATTR_DEBUG_BLOCK,
@@ -229,7 +228,6 @@ SCHEMA_INGRESS_CONFIG = vol.Schema(
# pylint: disable=no-value-for-parameter
SCHEMA_SECURITY_CONFIG = vol.Schema(
{
vol.Optional(ATTR_CONTENT_TRUST, default=True): vol.Boolean(),
vol.Optional(ATTR_PWNED, default=True): vol.Boolean(),
vol.Optional(ATTR_FORCE_SECURITY, default=False): vol.Boolean(),
},

View File

@@ -3,24 +3,34 @@
import asyncio
from datetime import timedelta
import errno
from http import HTTPStatus
from pathlib import Path
from unittest.mock import MagicMock, PropertyMock, patch
from typing import Any
from unittest.mock import MagicMock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import DockerException, ImageNotFound, NotFound
from docker.errors import APIError, DockerException, NotFound
import pytest
from securetar import SecureTarFile
from supervisor.addons.addon import Addon
from supervisor.addons.const import AddonBackupMode
from supervisor.addons.model import AddonModel
from supervisor.config import CoreConfig
from supervisor.const import AddonBoot, AddonState, BusEvent
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.manager import CommandReturn
from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
from supervisor.exceptions import (
AddonPrePostBackupCommandReturnedError,
AddonsJobError,
AddonUnknownError,
AudioUpdateError,
HassioError,
)
from supervisor.hardware.helper import HwHelper
from supervisor.ingress import Ingress
from supervisor.store.repository import Repository
@@ -499,31 +509,26 @@ async def test_backup_with_pre_post_command(
@pytest.mark.parametrize(
"get_error,exception_on_exec",
("container_get_side_effect", "exec_run_side_effect", "exc_type_raised"),
[
(NotFound("missing"), False),
(DockerException(), False),
(None, True),
(None, False),
(NotFound("missing"), [(1, None)], AddonUnknownError),
(DockerException(), [(1, None)], AddonUnknownError),
(None, DockerException(), AddonUnknownError),
(None, [(1, None)], AddonPrePostBackupCommandReturnedError),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
async def test_backup_with_pre_command_error(
coresys: CoreSys,
install_addon_ssh: Addon,
container: MagicMock,
get_error: DockerException | None,
exception_on_exec: bool,
tmp_supervisor_data,
path_extern,
container_get_side_effect: DockerException | None,
exec_run_side_effect: DockerException | list[tuple[int, Any]],
exc_type_raised: type[HassioError],
) -> None:
"""Test backing up an addon with error running pre command."""
if get_error:
coresys.docker.containers.get.side_effect = get_error
if exception_on_exec:
container.exec_run.side_effect = DockerException()
else:
container.exec_run.return_value = (1, None)
coresys.docker.containers.get.side_effect = container_get_side_effect
container.exec_run.side_effect = exec_run_side_effect
install_addon_ssh.path_data.mkdir()
await install_addon_ssh.load()
@@ -532,7 +537,7 @@ async def test_backup_with_pre_command_error(
with (
patch.object(DockerAddon, "is_running", return_value=True),
patch.object(Addon, "backup_pre", new=PropertyMock(return_value="backup_pre")),
pytest.raises(AddonsError),
pytest.raises(exc_type_raised),
):
assert await install_addon_ssh.backup(tarfile) is None
@@ -861,16 +866,14 @@ async def test_addon_loads_wrong_image(
container.remove.assert_called_with(force=True, v=True)
# one for removing the addon, one for removing the addon builder
assert coresys.docker.images.remove.call_count == 2
assert coresys.docker.images.delete.call_count == 2
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "local/aarch64-addon-ssh:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "local/aarch64-addon-ssh:9.2.1",
"force": True,
}
assert coresys.docker.images.delete.call_args_list[0] == call(
"local/aarch64-addon-ssh:latest", force=True
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"local/aarch64-addon-ssh:9.2.1", force=True
)
mock_run_command.assert_called_once()
assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
@@ -894,7 +897,9 @@ async def test_addon_loads_missing_image(
mock_amd64_arch_supported,
):
"""Test addon corrects a missing image on load."""
coresys.docker.images.get.side_effect = ImageNotFound("missing")
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
HTTPStatus.NOT_FOUND, {"message": "missing"}
)
with (
patch("pathlib.Path.is_file", return_value=True),
@@ -926,41 +931,51 @@ async def test_addon_loads_missing_image(
assert install_addon_ssh.image == "local/amd64-addon-ssh"
@pytest.mark.parametrize(
"pull_image_exc",
[APIError("error"), aiodocker.DockerError(400, {"message": "error"})],
)
@pytest.mark.usefixtures("container", "mock_amd64_arch_supported")
async def test_addon_load_succeeds_with_docker_errors(
coresys: CoreSys,
install_addon_ssh: Addon,
container: MagicMock,
caplog: pytest.LogCaptureFixture,
mock_amd64_arch_supported,
pull_image_exc: Exception,
):
"""Docker errors while building/pulling an image during load should not raise and fail setup."""
# Build env invalid failure
coresys.docker.images.get.side_effect = ImageNotFound("missing")
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
HTTPStatus.NOT_FOUND, {"message": "missing"}
)
caplog.clear()
await install_addon_ssh.load()
assert "Invalid build environment" in caplog.text
assert "Cannot build addon 'local_ssh' because dockerfile is missing" in caplog.text
# Image build failure
coresys.docker.images.build.side_effect = DockerException()
caplog.clear()
with (
patch("pathlib.Path.is_file", return_value=True),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
CoreConfig, "local_to_extern_path", return_value="/addon/path/on/host"
),
patch.object(
DockerAPI,
"run_command",
return_value=MagicMock(exit_code=1, output=b"error"),
),
):
await install_addon_ssh.load()
assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text
assert (
"Can't build local/amd64-addon-ssh:9.2.1: Docker build failed for local/amd64-addon-ssh:9.2.1 (exit code 1). Build output:\nerror"
in caplog.text
)
# Image pull failure
install_addon_ssh.data["image"] = "test/amd64-addon-ssh"
coresys.docker.images.build.reset_mock(side_effect=True)
coresys.docker.pull_image.side_effect = DockerException()
caplog.clear()
await install_addon_ssh.load()
assert "Unknown error with test/amd64-addon-ssh:9.2.1" in caplog.text
with patch.object(DockerAPI, "pull_image", side_effect=pull_image_exc):
await install_addon_ssh.load()
assert "Can't install test/amd64-addon-ssh:9.2.1:" in caplog.text
async def test_addon_manual_only_boot(coresys: CoreSys, install_addon_example: Addon):

View File

@@ -3,10 +3,12 @@
from unittest.mock import PropertyMock, patch
from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.addons.build import AddonBuild
from supervisor.coresys import CoreSys
from supervisor.exceptions import AddonBuildDockerfileMissingError
from tests.common import is_in_list
@@ -102,11 +104,11 @@ async def test_build_valid(coresys: CoreSys, install_addon_ssh: Addon):
type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
),
):
assert await build.is_valid()
assert (await build.is_valid()) is None
async def test_build_invalid(coresys: CoreSys, install_addon_ssh: Addon):
"""Test platform set in docker args."""
"""Test build not supported because Dockerfile missing for specified architecture."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
@@ -115,5 +117,6 @@ async def test_build_invalid(coresys: CoreSys, install_addon_ssh: Addon):
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
pytest.raises(AddonBuildDockerfileMissingError),
):
assert not await build.is_valid()
await build.is_valid()

View File

@@ -419,3 +419,71 @@ def test_valid_schema():
config["schema"] = {"field": "invalid"}
with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config)
def test_ulimits_simple_format():
"""Test ulimits simple format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {"nofile": 65535, "nproc": 32768, "memlock": 134217728}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"] == 65535
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"] == 134217728
def test_ulimits_detailed_format():
"""Test ulimits detailed format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"nproc": 32768, # Mixed format should work
"memlock": {"soft": 67108864, "hard": 134217728},
}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"]["soft"] == 20000
assert valid_config["ulimits"]["nofile"]["hard"] == 40000
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"]["soft"] == 67108864
assert valid_config["ulimits"]["memlock"]["hard"] == 134217728
def test_ulimits_empty_dict():
"""Test ulimits with empty dict (default)."""
config = load_json_fixture("basic-addon-config.json")
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"] == {}
def test_ulimits_invalid_values():
"""Test ulimits with invalid values."""
config = load_json_fixture("basic-addon-config.json")
# Invalid string values
config["ulimits"] = {"nofile": "invalid"}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Invalid detailed format
config["ulimits"] = {"nofile": {"invalid_key": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing hard value in detailed format
config["ulimits"] = {"nofile": {"soft": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing soft value in detailed format
config["ulimits"] = {"nofile": {"hard": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Empty dict in detailed format
config["ulimits"] = {"nofile": {}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
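For reference, the mixed simple/detailed format these tests exercise can be modeled with a voluptuous schema roughly like the following sketch. The names here are hypothetical; the actual definition lives in the add-on validation module the tests import as vd.
import voluptuous as vol
# Hypothetical reconstruction of the mixed ulimits schema exercised above.
_ULIMIT_DETAILED = vol.Schema(
    {
        vol.Required("soft"): vol.Coerce(int),  # both keys must be present
        vol.Required("hard"): vol.Coerce(int),
    }
)
SCHEMA_ULIMITS = vol.Schema(
    # Each limit name maps to either a bare integer (soft == hard) or a
    # {"soft": ..., "hard": ...} dict; the two formats can be mixed freely.
    {str: vol.Any(vol.Coerce(int), _ULIMIT_DETAILED)}
)
# Mirrors test_ulimits_detailed_format: mixed formats validate together.
SCHEMA_ULIMITS({"nofile": {"soft": 20000, "hard": 40000}, "nproc": 32768})
Under this sketch the failure cases above fall out naturally: a non-numeric string fails both branches of vol.Any, and a detailed dict with a missing or unknown key fails the Required-key schema.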

View File

@@ -4,7 +4,7 @@ import asyncio
from collections.abc import AsyncGenerator, Generator
from copy import deepcopy
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, call, patch
from awesomeversion import AwesomeVersion
import pytest
@@ -514,19 +514,13 @@ async def test_shared_image_kept_on_uninstall(
latest = f"{install_addon_example.image}:latest"
await coresys.addons.uninstall("local_example2")
coresys.docker.images.remove.assert_not_called()
coresys.docker.images.delete.assert_not_called()
assert not coresys.addons.get("local_example2", local_only=True)
await coresys.addons.uninstall("local_example")
assert coresys.docker.images.remove.call_count == 2
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": latest,
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": image,
"force": True,
}
assert coresys.docker.images.delete.call_count == 2
assert coresys.docker.images.delete.call_args_list[0] == call(latest, force=True)
assert coresys.docker.images.delete.call_args_list[1] == call(image, force=True)
assert not coresys.addons.get("local_example", local_only=True)
@@ -554,19 +548,17 @@ async def test_shared_image_kept_on_update(
assert example_2.version == "1.2.0"
assert install_addon_example_image.version == "1.2.0"
image_new = MagicMock()
image_new.id = "image_new"
image_old = MagicMock()
image_old.id = "image_old"
docker.images.get.side_effect = [image_new, image_old]
image_new = {"Id": "image_new", "RepoTags": ["image_new:latest"]}
image_old = {"Id": "image_old", "RepoTags": ["image_old:latest"]}
docker.images.inspect.side_effect = [image_new, image_old]
docker.images.list.return_value = [image_new, image_old]
with patch.object(DockerAPI, "pull_image", return_value=image_new):
await coresys.addons.update("local_example2")
docker.images.remove.assert_not_called()
docker.images.delete.assert_not_called()
assert example_2.version == "1.3.0"
docker.images.get.side_effect = [image_new]
docker.images.inspect.side_effect = [image_new]
await coresys.addons.update("local_example_image")
docker.images.remove.assert_called_once_with("image_old", force=True)
docker.images.delete.assert_called_once_with("image_old", force=True)
assert install_addon_example_image.version == "1.3.0"

View File

@@ -1,95 +1 @@
"""Test for API calls."""
from unittest.mock import AsyncMock, MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from supervisor.host.const import LogFormat
DEFAULT_LOG_RANGE = "entries=:-99:100"
DEFAULT_LOG_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
async def common_test_api_advanced_logs(
path_prefix: str,
syslog_identifier: str,
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available: None,
):
"""Template for tests of endpoints using advanced logs."""
resp = await api_client.get(f"{path_prefix}/logs")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "follow": ""},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
mock_response = MagicMock()
mock_response.text = AsyncMock(
return_value='{"CONTAINER_LOG_EPOCH": "12345"}\n{"CONTAINER_LOG_EPOCH": "12345"}\n'
)
journald_logs.return_value.__aenter__.return_value = mock_response
resp = await api_client.get(f"{path_prefix}/logs/latest")
assert resp.status == 200
assert journald_logs.call_count == 2
# Check the first call for getting epoch
epoch_call = journald_logs.call_args_list[0]
assert epoch_call[1]["params"] == {"CONTAINER_NAME": syslog_identifier}
assert epoch_call[1]["range_header"] == "entries=:-1:2"
# Check the second call for getting logs with the epoch
logs_call = journald_logs.call_args_list[1]
assert logs_call[1]["params"]["SYSLOG_IDENTIFIER"] == syslog_identifier
assert logs_call[1]["params"]["CONTAINER_LOG_EPOCH"] == "12345"
assert logs_call[1]["range_header"] == "entries=:0:18446744073709551615"
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "_BOOT_ID": "ccc"},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={
"SYSLOG_IDENTIFIER": syslog_identifier,
"_BOOT_ID": "ccc",
"follow": "",
},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)

tests/api/conftest.py (new file, 133 lines)
View File

@@ -0,0 +1,133 @@
"""Fixtures for API tests."""
from collections.abc import Awaitable, Callable
from unittest.mock import ANY, AsyncMock, MagicMock
from aiohttp.test_utils import TestClient
import pytest
from supervisor.coresys import CoreSys
from supervisor.host.const import LogFormat, LogFormatter
DEFAULT_LOG_RANGE = "entries=:-99:100"
DEFAULT_LOG_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
async def _common_test_api_advanced_logs(
path_prefix: str,
syslog_identifier: str,
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available: None,
journal_logs_reader: MagicMock,
):
"""Template for tests of endpoints using advanced logs."""
resp = await api_client.get(f"{path_prefix}/logs")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "follow": ""},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, False)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
mock_response = MagicMock()
mock_response.text = AsyncMock(
return_value='{"CONTAINER_LOG_EPOCH": "12345"}\n{"CONTAINER_LOG_EPOCH": "12345"}\n'
)
journald_logs.return_value.__aenter__.return_value = mock_response
resp = await api_client.get(f"{path_prefix}/logs/latest")
assert resp.status == 200
assert journald_logs.call_count == 2
# Check the first call for getting epoch
epoch_call = journald_logs.call_args_list[0]
assert epoch_call[1]["params"] == {"CONTAINER_NAME": syslog_identifier}
assert epoch_call[1]["range_header"] == "entries=:-1:2"
# Check the second call for getting logs with the epoch
logs_call = journald_logs.call_args_list[1]
assert logs_call[1]["params"]["SYSLOG_IDENTIFIER"] == syslog_identifier
assert logs_call[1]["params"]["CONTAINER_LOG_EPOCH"] == "12345"
assert logs_call[1]["range_header"] == "entries=:0:18446744073709551615"
journal_logs_reader.assert_called_with(ANY, LogFormatter.PLAIN, True)
journald_logs.reset_mock()
journal_logs_reader.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={"SYSLOG_IDENTIFIER": syslog_identifier, "_BOOT_ID": "ccc"},
range_header=DEFAULT_LOG_RANGE,
accept=LogFormat.JOURNAL,
)
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0/follow")
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.assert_called_once_with(
params={
"SYSLOG_IDENTIFIER": syslog_identifier,
"_BOOT_ID": "ccc",
"follow": "",
},
range_header=DEFAULT_LOG_RANGE_FOLLOW,
accept=LogFormat.JOURNAL,
)
@pytest.fixture
async def advanced_logs_tester(
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
journal_logs_reader: MagicMock,
) -> Callable[[str, str], Awaitable[None]]:
"""Fixture that returns a function to test advanced logs endpoints.
This allows tests to avoid explicitly passing all the required fixtures.
Usage:
async def test_my_logs(advanced_logs_tester):
await advanced_logs_tester("/path/prefix", "syslog_identifier")
"""
async def test_logs(path_prefix: str, syslog_identifier: str):
await _common_test_api_advanced_logs(
path_prefix,
syslog_identifier,
api_client,
journald_logs,
coresys,
os_available,
journal_logs_reader,
)
return test_logs

View File

@@ -20,7 +20,6 @@ from supervisor.exceptions import HassioError
from supervisor.store.repository import Repository
from ..const import TEST_ADDON_SLUG
from . import common_test_api_advanced_logs
def _create_test_event(name: str, state: ContainerState) -> DockerContainerStateEvent:
@@ -72,21 +71,11 @@ async def test_addons_info_not_installed(
async def test_api_addon_logs(
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
advanced_logs_tester,
install_addon_ssh: Addon,
):
"""Test addon logs."""
await common_test_api_advanced_logs(
"/addons/local_ssh",
"addon_local_ssh",
api_client,
journald_logs,
coresys,
os_available,
)
await advanced_logs_tester("/addons/local_ssh", "addon_local_ssh")
async def test_api_addon_logs_not_installed(api_client: TestClient):
@@ -482,6 +471,11 @@ async def test_addon_options_boot_mode_manual_only_invalid(
body["message"]
== "Addon local_example boot option is set to manual_only so it cannot be changed"
)
assert body["error_key"] == "addon_boot_config_cannot_change_error"
assert body["extra_fields"] == {
"addon": "local_example",
"boot_config": "manual_only",
}
async def get_message(resp: ClientResponse, json_expected: bool) -> str:
@@ -550,3 +544,106 @@ async def test_addon_not_installed(
resp = await api_client.request(method, url)
assert resp.status == 400
assert await get_message(resp, json_expected) == "Addon is not installed"
async def test_addon_set_options(api_client: TestClient, install_addon_example: Addon):
"""Test setting options for an addon."""
resp = await api_client.post(
"/addons/local_example/options", json={"options": {"message": "test"}}
)
assert resp.status == 200
assert install_addon_example.options == {"message": "test"}
async def test_addon_set_options_error(
api_client: TestClient, install_addon_example: Addon
):
"""Test setting options for an addon."""
resp = await api_client.post(
"/addons/local_example/options", json={"options": {"message": True}}
)
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Add-on local_example has invalid options: not a valid value. Got {'message': True}"
)
assert body["error_key"] == "addon_configuration_invalid_error"
assert body["extra_fields"] == {
"addon": "local_example",
"validation_error": "not a valid value. Got {'message': True}",
}
async def test_addon_start_options_error(
api_client: TestClient,
install_addon_example: Addon,
caplog: pytest.LogCaptureFixture,
):
"""Test error writing options when trying to start addon."""
install_addon_example.options = {"message": "hello"}
# Simulate OS error trying to write the file
with patch("supervisor.utils.json.atomic_write", side_effect=OSError("fail")):
resp = await api_client.post("/addons/local_example/start")
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "An unknown error occurred with addon local_example. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "addon_unknown_error"
assert body["extra_fields"] == {
"addon": "local_example",
"logs_command": "ha supervisor logs",
}
assert "Add-on local_example can't write options" in caplog.text
# Simulate an update with a breaking change for options schema creating failure on start
caplog.clear()
install_addon_example.data["schema"] = {"message": "bool"}
resp = await api_client.post("/addons/local_example/start")
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Add-on local_example has invalid options: expected boolean. Got {'message': 'hello'}"
)
assert body["error_key"] == "addon_configuration_invalid_error"
assert body["extra_fields"] == {
"addon": "local_example",
"validation_error": "expected boolean. Got {'message': 'hello'}",
}
assert (
"Add-on local_example has invalid options: expected boolean. Got {'message': 'hello'}"
in caplog.text
)
@pytest.mark.parametrize(("method", "action"), [("get", "stats"), ("post", "stdin")])
@pytest.mark.usefixtures("install_addon_example")
async def test_addon_not_running_error(
api_client: TestClient, method: str, action: str
):
"""Test addon not running error for endpoints that require that."""
with patch.object(
Addon, "with_stdin", return_value=PropertyMock(return_value=True)
):
resp = await api_client.request(method, f"/addons/local_example/{action}")
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Add-on local_example is not running"
assert body["error_key"] == "addon_not_running_error"
assert body["extra_fields"] == {"addon": "local_example"}
@pytest.mark.usefixtures("install_addon_example")
async def test_addon_write_stdin_not_supported_error(api_client: TestClient):
"""Test error when trying to write stdin to addon that does not support it."""
resp = await api_client.post("/addons/local_example/stdin")
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Add-on local_example does not support writing to stdin"
assert body["error_key"] == "addon_not_supported_write_stdin_error"
assert body["extra_fields"] == {"addon": "local_example"}

View File

@@ -1,18 +1,6 @@
"""Test audio api."""
from unittest.mock import MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from tests.api import common_test_api_advanced_logs
async def test_api_audio_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
async def test_api_audio_logs(advanced_logs_tester) -> None:
"""Test audio logs."""
await common_test_api_advanced_logs(
"/audio", "hassio_audio", api_client, journald_logs, coresys, os_available
)
await advanced_logs_tester("/audio", "hassio_audio")

View File

@@ -6,9 +6,12 @@ from unittest.mock import AsyncMock, MagicMock, patch
from aiohttp.hdrs import WWW_AUTHENTICATE
from aiohttp.test_utils import TestClient
import pytest
from typing import Any
from supervisor.addons.addon import Addon
from supervisor.coresys import CoreSys
from supervisor.exceptions import HomeAssistantAPIError, HomeAssistantWSError
from supervisor.homeassistant.api import HomeAssistantAPI
from tests.common import MockResponse
from tests.const import TEST_ADDON_SLUG
@@ -100,6 +103,52 @@ async def test_password_reset(
assert "Successful password reset for 'john'" in caplog.text
@pytest.mark.parametrize(
("post_mock", "expected_log"),
[
(
MagicMock(return_value=MockResponse(status=400)),
"The user 'john' is not registered",
),
(
MagicMock(side_effect=HomeAssistantAPIError("fail")),
"Can't request password reset on Home Assistant: fail",
),
],
)
async def test_failed_password_reset(
api_client: TestClient,
coresys: CoreSys,
caplog: pytest.LogCaptureFixture,
websession: MagicMock,
post_mock: MagicMock,
expected_log: str,
):
"""Test failed password reset."""
coresys.homeassistant.api.access_token = "abc123"
# pylint: disable-next=protected-access
coresys.homeassistant.api._access_token_expires = datetime.now(tz=UTC) + timedelta(
days=1
)
websession.post = post_mock
resp = await api_client.post(
"/auth/reset", json={"username": "john", "password": "doe"}
)
assert resp.status == 400
body = await resp.json()
assert (
body["message"]
== "Unable to reset password for 'john'. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "auth_password_reset_error"
assert body["extra_fields"] == {
"user": "john",
"logs_command": "ha supervisor logs",
}
assert expected_log in caplog.text
async def test_list_users(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
@@ -120,6 +169,48 @@ async def test_list_users(
]
@pytest.mark.parametrize(
("send_command_mock", "error_response", "expected_log"),
[
(
AsyncMock(return_value=None),
{
"result": "error",
"message": "Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
"error_key": "auth_list_users_none_response_error",
"extra_fields": {"none": "None", "logs_command": "ha core logs"},
},
"Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
),
(
AsyncMock(side_effect=HomeAssistantWSError("fail")),
{
"result": "error",
"message": "Can't request listing users on Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')",
"error_key": "auth_list_users_error",
"extra_fields": {"logs_command": "ha supervisor logs"},
},
"Can't request listing users on Home Assistant: fail",
),
],
)
async def test_list_users_failure(
api_client: TestClient,
ha_ws_client: AsyncMock,
caplog: pytest.LogCaptureFixture,
send_command_mock: AsyncMock,
error_response: dict[str, Any],
expected_log: str,
):
"""Test failure listing users via API."""
ha_ws_client.async_send_command = send_command_mock
resp = await api_client.get("/auth/list")
assert resp.status == 500
result = await resp.json()
assert result == error_response
assert expected_log in caplog.text
@pytest.mark.parametrize(
("field", "api_client"),
[("username", TEST_ADDON_SLUG), ("user", TEST_ADDON_SLUG)],
@@ -156,6 +247,13 @@ async def test_auth_json_failure_none(
mock_check_login.return_value = True
resp = await api_client.post("/auth", json={"username": user, "password": password})
assert resp.status == 401
assert (
resp.headers["WWW-Authenticate"]
== 'Basic realm="Home Assistant Authentication"'
)
body = await resp.json()
assert body["message"] == "Username and password must be strings"
assert body["error_key"] == "auth_invalid_non_string_value_error"
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -267,3 +365,26 @@ async def test_non_addon_token_no_auth_access(api_client: TestClient):
"""Test auth where add-on is not allowed to access auth API."""
resp = await api_client.post("/auth", json={"username": "test", "password": "pass"})
assert resp.status == 403
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@pytest.mark.usefixtures("install_addon_ssh")
async def test_auth_backend_login_failure(api_client: TestClient):
"""Test backend login failure on auth."""
with (
patch.object(HomeAssistantAPI, "check_api_state", return_value=True),
patch.object(
HomeAssistantAPI, "make_request", side_effect=HomeAssistantAPIError("fail")
),
):
resp = await api_client.post(
"/auth", json={"username": "test", "password": "pass"}
)
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "Unable to validate authentication details with Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "auth_home_assistant_api_validation_error"
assert body["extra_fields"] == {"logs_command": "ha supervisor logs"}

View File

@@ -17,6 +17,7 @@ from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.manager import DockerAPI
from supervisor.exceptions import (
AddonPrePostBackupCommandReturnedError,
AddonsError,
BackupInvalidError,
HomeAssistantBackupError,
@@ -24,6 +25,7 @@ from supervisor.exceptions import (
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from supervisor.homeassistant.websocket import HomeAssistantWebSocket
from supervisor.jobs import SupervisorJob
from supervisor.mounts.mount import Mount
from supervisor.supervisor import Supervisor
@@ -401,6 +403,8 @@ async def test_api_backup_errors(
"type": "BackupError",
"message": str(err),
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
assert job["child_jobs"][2]["name"] == "backup_store_folders"
@@ -437,6 +441,8 @@ async def test_api_backup_errors(
"type": "HomeAssistantBackupError",
"message": "Backup error",
"stage": "home_assistant",
"error_key": None,
"extra_fields": None,
}
]
assert job["child_jobs"][0]["name"] == "backup_store_homeassistant"
@@ -445,6 +451,8 @@ async def test_api_backup_errors(
"type": "HomeAssistantBackupError",
"message": "Backup error",
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
assert len(job["child_jobs"]) == 1
@@ -749,6 +757,8 @@ async def test_backup_to_multiple_locations_error_on_copy(
"type": "BackupError",
"message": "Could not copy backup to .cloud_backup due to: ",
"stage": None,
"error_key": None,
"extra_fields": None,
}
]
@@ -1483,3 +1493,44 @@ async def test_immediate_list_after_missing_file_restore(
result = await resp.json()
assert len(result["data"]["backups"]) == 1
assert result["data"]["backups"][0]["slug"] == "93b462f8"
@pytest.mark.parametrize("command", ["backup_pre", "backup_post"])
@pytest.mark.usefixtures("install_addon_example", "tmp_supervisor_data")
async def test_pre_post_backup_command_error(
api_client: TestClient, coresys: CoreSys, container: MagicMock, command: str
):
"""Test pre/post backup command error."""
await coresys.core.set_state(CoreState.RUNNING)
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
container.status = "running"
container.exec_run.return_value = (1, b"")
with patch.object(Addon, command, return_value=PropertyMock(return_value="test")):
resp = await api_client.post(
"/backups/new/partial", json={"addons": ["local_example"]}
)
assert resp.status == 200
body = await resp.json()
job_id = body["data"]["job_id"]
job: SupervisorJob | None = None
for j in coresys.jobs.jobs:
if j.name == "backup_store_addons" and j.parent_id == job_id:
job = j
break
assert job
assert job.done is True
assert job.errors[0].type_ == AddonPrePostBackupCommandReturnedError
assert job.errors[0].message == (
"Pre-/Post backup command for add-on local_example returned error code: "
"1. Please report this to the addon developer. Enable debug "
"logging to capture complete command output using ha supervisor options --logging debug"
)
assert job.errors[0].error_key == "addon_pre_post_backup_command_returned_error"
assert job.errors[0].extra_fields == {
"addon": "local_example",
"exit_code": 1,
"debug_logging_command": "ha supervisor options --logging debug",
}

View File

@@ -1,13 +1,12 @@
"""Test DNS API."""
from unittest.mock import MagicMock, patch
from unittest.mock import patch
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from supervisor.dbus.resolved import Resolved
from tests.api import common_test_api_advanced_logs
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.resolved import Resolved as ResolvedService
@@ -66,15 +65,6 @@ async def test_options(api_client: TestClient, coresys: CoreSys):
restart.assert_called_once()
async def test_api_dns_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
async def test_api_dns_logs(advanced_logs_tester):
"""Test dns logs."""
await common_test_api_advanced_logs(
"/dns",
"hassio_dns",
api_client,
journald_logs,
coresys,
os_available,
)
await advanced_logs_tester("/dns", "hassio_dns")

View File

@@ -2,39 +2,34 @@
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, PropertyMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.backups.manager import BackupManager
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.homeassistant import DockerHomeAssistant
from supervisor.docker.interface import DockerInterface
from supervisor.homeassistant.api import APIState
from supervisor.homeassistant.api import APIState, HomeAssistantAPI
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from tests.api import common_test_api_advanced_logs
from tests.common import load_json_fixture
from tests.common import AsyncIterator, load_json_fixture
@pytest.mark.parametrize("legacy_route", [True, False])
async def test_api_core_logs(
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
advanced_logs_tester: AsyncMock,
legacy_route: bool,
):
"""Test core logs."""
await common_test_api_advanced_logs(
await advanced_logs_tester(
f"/{'homeassistant' if legacy_route else 'core'}",
"homeassistant",
api_client,
journald_logs,
coresys,
os_available,
)
@@ -271,3 +266,96 @@ async def test_background_home_assistant_update_fails_fast(
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Version 2025.8.3 is already installed"
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_home_assistant_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
    await coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
coresys.homeassistant.version = AwesomeVersion("2025.8.0")
with (
patch.object(
DockerHomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
),
patch.object(
HomeAssistantAPI, "get_config", return_value={"components": ["frontend"]}
),
):
resp = await api_client.post("/core/update", json={"version": "2025.8.3"})
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
]
assert events[:5] == [
{
"stage": None,
"progress": 0,
"done": None,
},
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]

View File

@@ -272,7 +272,7 @@ async def test_advaced_logs_query_parameters(
range_header=DEFAULT_RANGE,
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, False)
journal_logs_reader.reset_mock()
journald_logs.reset_mock()
@@ -290,7 +290,7 @@ async def test_advaced_logs_query_parameters(
range_header="entries=:-52:53",
accept=LogFormat.JOURNAL,
)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE)
journal_logs_reader.assert_called_with(ANY, LogFormatter.VERBOSE, False)
async def test_advanced_logs_boot_id_offset(
@@ -343,24 +343,24 @@ async def test_advanced_logs_formatters(
"""Test advanced logs formatters varying on Accept header."""
await api_client.get("/host/logs")
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)
journal_logs_reader.reset_mock()
headers = {"Accept": "text/x-log"}
await api_client.get("/host/logs", headers=headers)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)
journal_logs_reader.reset_mock()
await api_client.get("/host/logs/identifiers/test")
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.PLAIN)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.PLAIN, False)
journal_logs_reader.reset_mock()
headers = {"Accept": "text/x-log"}
await api_client.get("/host/logs/identifiers/test", headers=headers)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE)
journal_logs_reader.assert_called_once_with(ANY, LogFormatter.VERBOSE, False)
async def test_advanced_logs_errors(coresys: CoreSys, api_client: TestClient):

View File

@@ -1,12 +1,28 @@
"""Test ingress API."""
from unittest.mock import AsyncMock, patch
from collections.abc import AsyncGenerator
from unittest.mock import AsyncMock, MagicMock, patch
from aiohttp.test_utils import TestClient
import aiohttp
from aiohttp import hdrs, web
from aiohttp.test_utils import TestClient, TestServer
import pytest
from supervisor.addons.addon import Addon
from supervisor.coresys import CoreSys
@pytest.fixture(name="real_websession")
async def fixture_real_websession(
coresys: CoreSys,
) -> AsyncGenerator[aiohttp.ClientSession]:
"""Fixture for real aiohttp ClientSession for ingress proxy tests."""
session = aiohttp.ClientSession()
coresys._websession = session # pylint: disable=W0212
yield session
await session.close()
async def test_validate_session(api_client: TestClient, coresys: CoreSys):
"""Test validating ingress session."""
with patch("aiohttp.web_request.BaseRequest.__getitem__", return_value=None):
@@ -86,3 +102,126 @@ async def test_validate_session_with_user_id(
assert (
coresys.ingress.get_session_data(session).user.display_name == "Some Name"
)
async def test_ingress_proxy_no_content_type_for_empty_body_responses(
api_client: TestClient, coresys: CoreSys, real_websession: aiohttp.ClientSession
):
"""Test that empty body responses don't get Content-Type header."""
# Create a mock add-on backend server that returns various status codes
async def mock_addon_handler(request: web.Request) -> web.Response:
"""Mock add-on handler that returns different status codes based on path."""
path = request.path
if path == "/204":
# 204 No Content - should not have Content-Type
return web.Response(status=204)
elif path == "/304":
# 304 Not Modified - should not have Content-Type
return web.Response(status=304)
elif path == "/100":
# 100 Continue - should not have Content-Type
return web.Response(status=100)
elif path == "/head":
# HEAD request - should have Content-Type (same as GET would)
return web.Response(body=b"test", content_type="text/html")
elif path == "/200":
# 200 OK with body - should have Content-Type
return web.Response(body=b"test content", content_type="text/plain")
elif path == "/200-no-content-type":
# 200 OK without explicit Content-Type - should get default
return web.Response(body=b"test content")
elif path == "/200-json":
# 200 OK with JSON - should preserve Content-Type
return web.Response(
body=b'{"key": "value"}', content_type="application/json"
)
else:
return web.Response(body=b"default", content_type="text/html")
# Create test server for mock add-on
app = web.Application()
app.router.add_route("*", "/{tail:.*}", mock_addon_handler)
addon_server = TestServer(app)
await addon_server.start_server()
try:
# Create ingress session
resp = await api_client.post("/ingress/session")
result = await resp.json()
session = result["data"]["session"]
# Create a mock add-on
mock_addon = MagicMock(spec=Addon)
mock_addon.slug = "test_addon"
mock_addon.ip_address = addon_server.host
mock_addon.ingress_port = addon_server.port
mock_addon.ingress_stream = False
        # Create a session token for the ingress URL and patch the add-on lookup
ingress_token = coresys.ingress.create_session()
with patch.object(coresys.ingress, "get", return_value=mock_addon):
# Test 204 No Content - should NOT have Content-Type
resp = await api_client.get(
f"/ingress/{ingress_token}/204",
cookies={"ingress_session": session},
)
assert resp.status == 204
assert hdrs.CONTENT_TYPE not in resp.headers
# Test 304 Not Modified - should NOT have Content-Type
resp = await api_client.get(
f"/ingress/{ingress_token}/304",
cookies={"ingress_session": session},
)
assert resp.status == 304
assert hdrs.CONTENT_TYPE not in resp.headers
# Test HEAD request - SHOULD have Content-Type (same as GET)
# per RFC 9110: HEAD should return same headers as GET
resp = await api_client.head(
f"/ingress/{ingress_token}/head",
cookies={"ingress_session": session},
)
assert resp.status == 200
assert hdrs.CONTENT_TYPE in resp.headers
assert "text/html" in resp.headers[hdrs.CONTENT_TYPE]
# Body should be empty for HEAD
body = await resp.read()
assert body == b""
# Test 200 OK with body - SHOULD have Content-Type
resp = await api_client.get(
f"/ingress/{ingress_token}/200",
cookies={"ingress_session": session},
)
assert resp.status == 200
assert hdrs.CONTENT_TYPE in resp.headers
assert resp.headers[hdrs.CONTENT_TYPE] == "text/plain"
body = await resp.read()
assert body == b"test content"
# Test 200 OK without explicit Content-Type - SHOULD get default
resp = await api_client.get(
f"/ingress/{ingress_token}/200-no-content-type",
cookies={"ingress_session": session},
)
assert resp.status == 200
assert hdrs.CONTENT_TYPE in resp.headers
# Should get application/octet-stream as default from aiohttp ClientResponse
assert "application/octet-stream" in resp.headers[hdrs.CONTENT_TYPE]
# Test 200 OK with JSON - SHOULD preserve Content-Type
resp = await api_client.get(
f"/ingress/{ingress_token}/200-json",
cookies={"ingress_session": session},
)
assert resp.status == 200
assert hdrs.CONTENT_TYPE in resp.headers
assert "application/json" in resp.headers[hdrs.CONTENT_TYPE]
body = await resp.read()
assert body == b'{"key": "value"}'
finally:
await addon_server.close()
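The rule this test pins down follows RFC 9110: 1xx, 204, and 304 responses carry no body, so the proxy must not forward a Content-Type header for them, while HEAD keeps the same headers GET would return. A minimal, hypothetical helper illustrating just the filtering rule (the real logic lives in Supervisor's ingress handler):
from aiohttp import hdrs
from multidict import CIMultiDict
def filter_empty_body_headers(status: int, headers: CIMultiDict) -> CIMultiDict:
    """Drop Content-Type for status codes that must not carry a body."""
    filtered = CIMultiDict(headers)
    if status in (204, 304) or 100 <= status < 200:
        # Remove every Content-Type entry; other headers pass through untouched.
        filtered.popall(hdrs.CONTENT_TYPE, None)
    return filtered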

View File

@@ -374,6 +374,8 @@ async def test_job_with_error(
"type": "SupervisorError",
"message": "bad",
"stage": "test",
"error_key": None,
"extra_fields": None,
}
],
"child_jobs": [
@@ -391,6 +393,8 @@ async def test_job_with_error(
"type": "SupervisorError",
"message": "bad",
"stage": None,
"error_key": None,
"extra_fields": None,
}
],
"child_jobs": [],

View File

@@ -1,23 +1,6 @@
"""Test multicast api."""
from unittest.mock import MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from tests.api import common_test_api_advanced_logs
async def test_api_multicast_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
async def test_api_multicast_logs(advanced_logs_tester):
"""Test multicast logs."""
await common_test_api_advanced_logs(
"/multicast",
"hassio_multicast",
api_client,
journald_logs,
coresys,
os_available,
)
await advanced_logs_tester("/multicast", "hassio_multicast")

View File

@@ -17,16 +17,6 @@ async def test_api_security_options_force_security(api_client, coresys: CoreSys)
assert coresys.security.force
@pytest.mark.asyncio
async def test_api_security_options_content_trust(api_client, coresys: CoreSys):
"""Test security options content trust."""
assert coresys.security.content_trust
await api_client.post("/security/options", json={"content_trust": False})
assert not coresys.security.content_trust
@pytest.mark.asyncio
async def test_api_security_options_pwned(api_client, coresys: CoreSys):
"""Test security options pwned."""
@@ -41,11 +31,8 @@ async def test_api_security_options_pwned(api_client, coresys: CoreSys):
async def test_api_integrity_check(
api_client, coresys: CoreSys, supervisor_internet: AsyncMock
):
"""Test security integrity check."""
coresys.security.content_trust = False
"""Test security integrity check - now deprecated."""
resp = await api_client.post("/security/integrity")
result = await resp.json()
assert result["data"]["core"] == "untested"
assert result["data"]["supervisor"] == "untested"
# CodeNotary integrity check has been removed, should return 410 Gone
assert resp.status == 410

View File

@@ -4,7 +4,6 @@ import asyncio
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp import ClientResponse
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
@@ -13,17 +12,18 @@ from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.backups.manager import BackupManager
from supervisor.config import CoreConfig
from supervisor.const import AddonState
from supervisor.const import AddonState, CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.interface import DockerInterface
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.module import HomeAssistant
from supervisor.store.addon import AddonStore
from supervisor.store.repository import Repository
from tests.common import load_json_fixture
from tests.common import AsyncIterator, load_json_fixture
from tests.const import TEST_ADDON_SLUG
REPO_URL = "https://github.com/awesome-developer/awesome-repo"
@@ -289,14 +289,6 @@ async def test_api_detached_addon_documentation(
assert result == "Addon local_ssh does not exist in the store"
async def get_message(resp: ClientResponse, json_expected: bool) -> str:
"""Get message from response based on response type."""
if json_expected:
body = await resp.json()
return body["message"]
return await resp.text()
@pytest.mark.parametrize(
("method", "url", "json_expected"),
[
@@ -322,7 +314,13 @@ async def test_store_addon_not_found(
"""Test store addon not found error."""
resp = await api_client.request(method, url)
assert resp.status == 404
assert await get_message(resp, json_expected) == "Addon bad does not exist"
if json_expected:
body = await resp.json()
assert body["message"] == "Addon bad does not exist in the store"
assert body["error_key"] == "store_addon_not_found_error"
assert body["extra_fields"] == {"addon": "bad"}
else:
assert await resp.text() == "Addon bad does not exist in the store"
@pytest.mark.parametrize(
@@ -709,3 +707,102 @@ async def test_api_store_addons_addon_availability_installed_addon(
assert (
"requires Home Assistant version 2023.1.1 or greater" in result["message"]
)
@pytest.mark.parametrize(
("action", "job_name", "addon_slug"),
[
("install", "addon_manager_install", "local_ssh"),
("update", "addon_manager_update", "local_example"),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_addon_install_update(
api_client: TestClient,
coresys: CoreSys,
ha_ws_client: AsyncMock,
install_addon_example: Addon,
action: str,
job_name: str,
addon_slug: str,
):
"""Test progress updates sent to Home Assistant for installs/updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
    await coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
coresys.arch._supported_arch = ["amd64"] # pylint: disable=protected-access
install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
with (
patch.object(Addon, "load"),
patch.object(Addon, "need_build", new=PropertyMock(return_value=False)),
patch.object(Addon, "latest_need_build", new=PropertyMock(return_value=False)),
):
resp = await api_client.post(f"/store/addons/{addon_slug}/{action}")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == job_name
and evt.args[0]["data"]["data"]["reference"] == addon_slug
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]

View File

@@ -2,17 +2,24 @@
# pylint: disable=protected-access
import time
from unittest.mock import AsyncMock, MagicMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockingError
from docker.errors import DockerException
import pytest
from supervisor.const import CoreState
from supervisor.core import Core
from supervisor.coresys import CoreSys
from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
from supervisor.homeassistant.const import WSEvent
from supervisor.store.repository import Repository
from supervisor.supervisor import Supervisor
from supervisor.updater import Updater
from tests.api import common_test_api_advanced_logs
from tests.common import AsyncIterator, load_json_fixture
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -148,18 +155,9 @@ async def test_api_supervisor_options_diagnostics(
assert coresys.dbus.agent.diagnostics is False
async def test_api_supervisor_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
async def test_api_supervisor_logs(advanced_logs_tester):
"""Test supervisor logs."""
await common_test_api_advanced_logs(
"/supervisor",
"hassio_supervisor",
api_client,
journald_logs,
coresys,
os_available,
)
await advanced_logs_tester("/supervisor", "hassio_supervisor")
async def test_api_supervisor_fallback(
@@ -316,3 +314,131 @@ async def test_api_supervisor_options_blocking_io(
# This should not raise blocking error anymore
time.sleep(0)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_supervisor_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
    await coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with (
patch.object(
Supervisor,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.08.0")),
),
patch.object(
Updater,
"version_supervisor",
new=PropertyMock(return_value=AwesomeVersion("2025.08.3")),
),
patch.object(
Updater, "image_supervisor", new=PropertyMock(return_value="supervisor")
),
patch.object(Supervisor, "update_apparmor"),
patch.object(Core, "stop"),
):
resp = await api_client.post("/supervisor/update")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]
async def test_api_supervisor_stats(api_client: TestClient, coresys: CoreSys):
"""Test supervisor stats."""
coresys.docker.containers.get.return_value.status = "running"
coresys.docker.containers.get.return_value.stats.return_value = load_json_fixture(
"container_stats.json"
)
resp = await api_client.get("/supervisor/stats")
assert resp.status == 200
result = await resp.json()
assert result["data"]["cpu_percent"] == 90.0
assert result["data"]["memory_usage"] == 59700000
assert result["data"]["memory_limit"] == 4000000000
assert result["data"]["memory_percent"] == 1.49
async def test_supervisor_api_stats_failure(
api_client: TestClient, coresys: CoreSys, caplog: pytest.LogCaptureFixture
):
"""Test supervisor stats failure."""
coresys.docker.containers.get.side_effect = DockerException("fail")
resp = await api_client.get("/supervisor/stats")
assert resp.status == 500
body = await resp.json()
assert (
body["message"]
== "An unknown error occurred with Supervisor. Check supervisor logs for details (check with 'ha supervisor logs')"
)
assert body["error_key"] == "supervisor_unknown_error"
assert body["extra_fields"] == {"logs_command": "ha supervisor logs"}
assert "Could not inspect container 'hassio_supervisor': fail" in caplog.text

View File

@@ -1,13 +1,14 @@
"""Common test functions."""
import asyncio
from collections.abc import Sequence
from datetime import datetime
from functools import partial
from importlib import import_module
from inspect import getclosurevars
import json
from pathlib import Path
from typing import Any
from typing import Any, Self
from dbus_fast.aio.message_bus import MessageBus
@@ -145,3 +146,22 @@ class MockResponse:
async def __aexit__(self, exc_type, exc, tb):
"""Exit the context manager."""
class AsyncIterator:
"""Make list/fixture into async iterator for test mocks."""
def __init__(self, seq: Sequence[Any]) -> None:
"""Initialize with sequence."""
self.iter = iter(seq)
def __aiter__(self) -> Self:
"""Implement aiter."""
return self
async def __anext__(self) -> Any:
"""Return next in sequence."""
try:
return next(self.iter)
except StopIteration:
raise StopAsyncIteration() from None
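As the fixtures above show, this helper lets a plain list of dicts stand in for aiodocker's async pull stream. A minimal usage sketch with made-up event data:
import asyncio
from unittest.mock import MagicMock
# Canned pull events; real tests load these from docker_pull_image_log.json.
events = [{"status": "Pulling fs layer"}, {"status": "Download complete"}]
images = MagicMock()
images.pull.return_value = AsyncIterator(events)
async def consume() -> None:
    # Iterates the mocked stream exactly like code consuming aiodocker would.
    async for event in images.pull("test:latest"):
        print(event["status"])
asyncio.run(consume())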

View File

@@ -9,6 +9,7 @@ import subprocess
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from uuid import uuid4
from aiodocker.docker import DockerImages
from aiohttp import ClientSession, web
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
@@ -55,6 +56,7 @@ from supervisor.store.repository import Repository
from supervisor.utils.dt import utcnow
from .common import (
AsyncIterator,
MockResponse,
load_binary_fixture,
load_fixture,
@@ -112,40 +114,46 @@ async def supervisor_name() -> None:
@pytest.fixture
async def docker() -> DockerAPI:
"""Mock DockerAPI."""
images = [MagicMock(tags=["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"])]
image = MagicMock()
image.attrs = {"Os": "linux", "Architecture": "amd64"}
image_inspect = {
"Os": "linux",
"Architecture": "amd64",
"Id": "test123",
"RepoTags": ["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"],
}
with (
patch("supervisor.docker.manager.DockerClient", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.images", return_value=MagicMock()),
patch(
"supervisor.docker.manager.DockerAPI.containers", return_value=MagicMock()
),
patch(
"supervisor.docker.manager.DockerAPI.api",
return_value=(api_mock := MagicMock()),
),
patch("supervisor.docker.manager.DockerAPI.images.get", return_value=image),
patch("supervisor.docker.manager.DockerAPI.images.list", return_value=images),
patch(
"supervisor.docker.manager.DockerAPI.info",
return_value=MagicMock(),
),
patch("supervisor.docker.manager.DockerAPI.api", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.info", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.unload"),
patch("supervisor.docker.manager.aiodocker.Docker", return_value=MagicMock()),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(
return_value=(docker_images := MagicMock(spec=DockerImages))
),
),
):
docker_obj = await DockerAPI(MagicMock()).post_init()
docker_obj.config._data = {"registries": {}}
with patch("supervisor.docker.monitor.DockerMonitor.load"):
await docker_obj.load()
docker_images.inspect.return_value = image_inspect
docker_images.list.return_value = [image_inspect]
docker_images.import_image.return_value = [
{"stream": "Loaded image: test:latest\n"}
]
docker_images.pull.return_value = AsyncIterator([{}])
docker_obj.info.logging = "journald"
docker_obj.info.storage = "overlay2"
docker_obj.info.version = AwesomeVersion("1.0.0")
# Need an iterable for logs
api_mock.pull.return_value = []
yield docker_obj
@@ -838,11 +846,9 @@ async def container(docker: DockerAPI) -> MagicMock:
"""Mock attrs and status for container on attach."""
docker.containers.get.return_value = addon = MagicMock()
docker.containers.create.return_value = addon
docker.images.build.return_value = (addon, "")
addon.status = "stopped"
addon.attrs = {"State": {"ExitCode": 0}}
with patch.object(DockerAPI, "pull_image", return_value=addon):
yield addon
yield addon
@pytest.fixture

View File

@@ -503,3 +503,93 @@ async def test_addon_new_device_no_haos(
await install_addon_ssh.stop()
assert coresys.resolution.issues == []
assert coresys.resolution.suggestions == []
async def test_ulimits_integration(
coresys: CoreSys,
install_addon_ssh: Addon,
):
"""Test ulimits integration with Docker addon."""
docker_addon = DockerAddon(coresys, install_addon_ssh)
# Test default case (no ulimits, no realtime)
assert docker_addon.ulimits is None
# Test with realtime enabled (should have built-in ulimits)
install_addon_ssh.data["realtime"] = True
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
# Check for rtprio limit
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
assert rtprio_limit.soft == 90
assert rtprio_limit.hard == 99
# Check for memlock limit
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 128 * 1024 * 1024
assert memlock_limit.hard == 128 * 1024 * 1024
# Test with configurable ulimits (simple format)
install_addon_ssh.data["realtime"] = False
install_addon_ssh.data["ulimits"] = {"nofile": 65535, "nproc": 32768}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
nproc_limit = next((u for u in ulimits if u.name == "nproc"), None)
assert nproc_limit is not None
assert nproc_limit.soft == 32768
assert nproc_limit.hard == 32768
# Test with configurable ulimits (detailed format)
install_addon_ssh.data["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"memlock": {"soft": 67108864, "hard": 134217728},
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 20000
assert nofile_limit.hard == 40000
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 67108864
assert memlock_limit.hard == 134217728
# Test mixed format and realtime (realtime + custom ulimits)
install_addon_ssh.data["realtime"] = True
install_addon_ssh.data["ulimits"] = {
"nofile": 65535,
"core": {"soft": 0, "hard": 0}, # Disable core dumps
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert (
len(ulimits) == 4
) # rtprio, memlock (from realtime) + nofile, core (from config)
# Check realtime limits still present
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
# Check custom limits added
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
core_limit = next((u for u in ulimits if u.name == "core"), None)
assert core_limit is not None
assert core_limit.soft == 0
assert core_limit.hard == 0
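For reference, a minimal sketch of the simple-vs-detailed mapping this test exercises, assuming docker-py's docker.types.Ulimit; the real logic lives in DockerAddon.ulimits and may differ in detail:

from docker.types import Ulimit


def map_ulimits(config: dict) -> list[Ulimit]:
    """Map an addon's ulimits config to docker Ulimit objects (sketch only)."""
    limits = []
    for name, value in config.items():
        if isinstance(value, int):
            # Simple format: a single number is used for both soft and hard limit
            limits.append(Ulimit(name=name, soft=value, hard=value))
        else:
            # Detailed format: explicit soft/hard values
            limits.append(Ulimit(name=name, soft=value["soft"], hard=value["hard"]))
    return limits


assert [u.name for u in map_ulimits({"nofile": 65535})] == ["nofile"]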

View File

@@ -5,10 +5,10 @@ from pathlib import Path
from typing import Any
from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import DockerException, NotFound
from docker.models.containers import Container
from docker.models.images import Image
import pytest
from requests import RequestException
@@ -26,19 +26,9 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.homeassistant.const import WSEvent
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from tests.common import load_json_fixture
@pytest.fixture(autouse=True)
def mock_verify_content(coresys: CoreSys):
"""Mock verify_content utility during tests."""
with patch.object(
coresys.security, "verify_content", return_value=None
) as verify_content:
yield verify_content
from tests.common import AsyncIterator, load_json_fixture
@pytest.mark.parametrize(
@@ -58,35 +48,30 @@ async def test_docker_image_platform(
platform: str,
):
"""Test platform set correctly from arch."""
with patch.object(
coresys.docker.images, "get", return_value=Mock(id="test:1.2.3")
) as get:
await test_docker_interface.install(
AwesomeVersion("1.2.3"), "test", arch=cpu_arch
)
coresys.docker.docker.api.pull.assert_called_once_with(
"test", tag="1.2.3", platform=platform, stream=True, decode=True
)
get.assert_called_once_with("test:1.2.3")
coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test", arch=cpu_arch)
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform=platform, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
async def test_docker_image_default_platform(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test platform set using supervisor arch when omitted."""
coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
    ):
        await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
    coresys.docker.images.pull.assert_called_once_with(
        "test", tag="1.2.3", platform="linux/386", stream=True
    )
    coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
@pytest.mark.parametrize(
@@ -217,57 +202,40 @@ async def test_attach_existing_container(
async def test_attach_container_failure(coresys: CoreSys):
"""Test attach fails to find container but finds image."""
container_collection = MagicMock()
container_collection.get.side_effect = DockerException()
image_collection = MagicMock()
image_config = {"Image": "sha256:abc123"}
image_collection.get.return_value = Image({"Config": image_config})
with (
patch(
"supervisor.docker.manager.DockerAPI.containers",
new=PropertyMock(return_value=container_collection),
),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(return_value=image_collection),
),
patch.object(type(coresys.bus), "fire_event") as fire_event,
):
coresys.docker.containers.get.side_effect = DockerException()
coresys.docker.images.inspect.return_value.setdefault("Config", {})["Image"] = (
"sha256:abc123"
)
with patch.object(type(coresys.bus), "fire_event") as fire_event:
await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))
assert not [
event
for event in fire_event.call_args_list
if event.args[0] == BusEvent.DOCKER_CONTAINER_STATE_CHANGE
]
assert coresys.homeassistant.core.instance.meta_config == image_config
assert (
coresys.homeassistant.core.instance.meta_config["Image"] == "sha256:abc123"
)
async def test_attach_total_failure(coresys: CoreSys):
"""Test attach fails to find container or image."""
container_collection = MagicMock()
container_collection.get.side_effect = DockerException()
image_collection = MagicMock()
image_collection.get.side_effect = DockerException()
with (
patch(
"supervisor.docker.manager.DockerAPI.containers",
new=PropertyMock(return_value=container_collection),
),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(return_value=image_collection),
),
pytest.raises(DockerError),
):
coresys.docker.containers.get.side_effect = DockerException
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
400, {"message": ""}
)
with pytest.raises(DockerError):
await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))
@pytest.mark.parametrize("err", [DockerException(), RequestException()])
@pytest.mark.parametrize(
"err", [aiodocker.DockerError(400, {"message": ""}), RequestException()]
)
async def test_image_pull_fail(
coresys: CoreSys, capture_exception: Mock, err: Exception
):
"""Test failure to pull image."""
coresys.docker.images.get.side_effect = err
coresys.docker.images.inspect.side_effect = err
with pytest.raises(DockerError):
await coresys.homeassistant.core.instance.install(
AwesomeVersion("2022.7.3"), arch=CpuArch.AMD64
@@ -299,8 +267,9 @@ async def test_install_fires_progress_events(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test progress events are fired during an install for listeners."""
    # This is from a sample pull, filtered to one entry per unique status for the test
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
@@ -322,7 +291,11 @@ async def test_install_fires_progress_events(
"id": "1578b14a573c",
},
{"status": "Pull complete", "progressDetail": {}, "id": "1578b14a573c"},
{"status": "Verifying Checksum", "progressDetail": {}, "id": "6a1e931d8f88"},
{
"status": "Verifying Checksum",
"progressDetail": {},
"id": "6a1e931d8f88",
},
{
"status": "Digest: sha256:490080d7da0f385928022927990e04f604615f7b8c622ef3e58253d0f089881d"
},
@@ -330,6 +303,7 @@ async def test_install_fires_progress_events(
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.2"
},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
events: list[PullLogEntry] = []
@@ -344,10 +318,10 @@ async def test_install_fires_progress_events(
),
):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
        coresys.docker.images.pull.assert_called_once_with(
            "test", tag="1.2.3", platform="linux/386", stream=True
        )
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
await asyncio.sleep(1)
assert events == [
@@ -417,197 +391,19 @@ async def test_install_fires_progress_events(
]
async def test_install_sends_progress_to_home_assistant(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
):
"""Test progress events are sent as job updates to Home Assistant."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
):
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
]
assert events[0]["name"] == "docker_interface_install"
assert events[0]["uuid"] == job.uuid
assert events[0]["done"] is None
assert events[1]["name"] == "docker_interface_install"
assert events[1]["uuid"] == job.uuid
assert events[1]["done"] is False
assert events[-1]["name"] == "docker_interface_install"
assert events[-1]["uuid"] == job.uuid
assert events[-1]["done"] is True
def make_sub_log(layer_id: str):
return [
{
"stage": evt["stage"],
"progress": evt["progress"],
"done": evt["done"],
"extra": evt["extra"],
}
for evt in events
if evt["name"] == "Pulling container image layer"
and evt["reference"] == layer_id
and evt["parent_id"] == job.uuid
]
layer_1_log = make_sub_log("1e214cd6d7d0")
layer_2_log = make_sub_log("1a38e1d5e18d")
assert len(layer_1_log) == 20
assert len(layer_2_log) == 19
assert len(events) == 42
assert layer_1_log == [
{"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
"extra": {"current": 539462, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.6,
"done": False,
"extra": {"current": 4864838, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.9,
"done": False,
"extra": {"current": 7552896, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
"extra": {"current": 10252544, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 2.9,
"done": False,
"extra": {"current": 25369792, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Verifying Checksum",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Download complete",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 50.1,
"done": False,
"extra": {"current": 557056, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 60.3,
"done": False,
"extra": {"current": 89686016, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 70.0,
"done": False,
"extra": {"current": 174358528, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 80.0,
"done": False,
"extra": {"current": 261816320, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 88.4,
"done": False,
"extra": {"current": 334790656, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 94.0,
"done": False,
"extra": {"current": 383811584, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 99.9,
"done": False,
"extra": {"current": 435617792, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 100.0,
"done": False,
"extra": {"current": 436480882, "total": 436480882},
},
{
"stage": "Pull complete",
"progress": 100.0,
"done": True,
"extra": {"current": 436480882, "total": 436480882},
},
]
async def test_install_progress_rounding_does_not_cause_misses(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test extremely close progress events do not create rounding issues."""
coresys.core.set_state(CoreState.RUNNING)
    # Current numbers chosen to create a rounding issue with the original code,
    # where a progress update came in with a value between the actual previous
    # value and what it was rounded to. It should not raise an out-of-order exception
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.1",
@@ -647,89 +443,27 @@ async def test_install_progress_rounding_does_not_cause_misses(
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
    # Schedule job so we can listen for the end. Then we can assert against the WS mock
    event = asyncio.Event()
    job, install_task = coresys.jobs.schedule_job(
        test_docker_interface.install,
        JobSchedulerOptions(),
        AwesomeVersion("1.2.3"),
        "test",
    )

    async def listen_for_job_end(reference: SupervisorJob):
        if reference.uuid != job.uuid:
            return
        event.set()

    coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
    await install_task
    await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["reference"] == "1e214cd6d7d0"
and evt.args[0]["data"]["data"]["stage"] in {"Downloading", "Extracting"}
]
assert events == [
{
"name": "Pulling container image layer",
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 432700000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 432800000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Extracting",
"progress": 99.6,
"done": False,
"extra": {"current": 432700000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Extracting",
"progress": 99.6,
"done": False,
"extra": {"current": 432800000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
]
capture_exception.assert_not_called()
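The rounding hazard above can be reproduced in isolation. A toy illustration, not Supervisor's actual code: two raw byte counts that round to the same percentage must compare as equal rather than trigger an out-of-order error.

def rounded_progress(current: int, total: int) -> float:
    # Downloading spans the first half of a layer's progress, hence * 50
    return round(current / total * 50, 1)


# Both values from the test round to 49.6; the second update is not "backwards"
assert rounded_progress(432700000, 436480882) == rounded_progress(432800000, 436480882) == 49.6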
@pytest.mark.parametrize(
@@ -760,7 +494,8 @@ async def test_install_raises_on_pull_error(
exc_msg: str,
):
"""Test exceptions raised from errors in pull log."""
coresys.docker.docker.api.pull.return_value = [
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
@@ -773,19 +508,25 @@ async def test_install_raises_on_pull_error(
},
error_log,
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with pytest.raises(exc_type, match=exc_msg):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
async def test_install_progress_handles_download_restart(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test install handles docker progress events that include a download restart."""
coresys.core.set_state(CoreState.RUNNING)
    # Fixture emulates a download restart as it appears in docker logs
    # An out-of-order log exception should not be raised
logs = load_json_fixture("docker_pull_image_log_restart.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with (
patch.object(
@@ -810,106 +551,172 @@ async def test_install_progress_handles_download_restart(
await install_task
await event.wait()
    capture_exception.assert_not_called()
async def test_install_progress_handles_layers_skipping_download(
coresys: CoreSys,
test_docker_interface: DockerInterface,
capture_exception: Mock,
):
"""Test install handles small layers that skip downloading phase and go directly to download complete.
Reproduces the real-world scenario from Supervisor issue #6286:
- Small layer (02a6e69d8d00) completes Download complete at 10:14:08 without ever Downloading
- Normal layer (3f4a84073184) starts Downloading at 10:14:09 with progress updates
"""
coresys.core.set_state(CoreState.RUNNING)
# Reproduce EXACT sequence from SupervisorNoUpdateProgressLogs.txt:
# Small layer (02a6e69d8d00) completes BEFORE normal layer (3f4a84073184) starts downloading
logs = [
{"status": "Pulling from test/image", "id": "latest"},
# Small layer that skips downloading (02a6e69d8d00 in logs, 96 bytes)
{"status": "Pulling fs layer", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "3f4a84073184"},
{"status": "Waiting", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Waiting", "progressDetail": {}, "id": "3f4a84073184"},
# Goes straight to Download complete (10:14:08 in logs) - THIS IS THE KEY MOMENT
{"status": "Download complete", "progressDetail": {}, "id": "02a6e69d8d00"},
# Normal layer that downloads (3f4a84073184 in logs, 25MB)
# Downloading starts (10:14:09 in logs) - progress updates should happen NOW!
{
"status": "Downloading",
"progressDetail": {"current": 260937, "total": 25371463},
"progress": "[> ] 260.9kB/25.37MB",
"id": "3f4a84073184",
},
{
"status": "Downloading",
"progressDetail": {"current": 5505024, "total": 25371463},
"progress": "[==========> ] 5.505MB/25.37MB",
"id": "3f4a84073184",
},
{
"status": "Downloading",
"progressDetail": {"current": 11272192, "total": 25371463},
"progress": "[======================> ] 11.27MB/25.37MB",
"id": "3f4a84073184",
},
{"status": "Download complete", "progressDetail": {}, "id": "3f4a84073184"},
{
"status": "Extracting",
"progressDetail": {"current": 25371463, "total": 25371463},
"progress": "[==================================================>] 25.37MB/25.37MB",
"id": "3f4a84073184",
},
{"status": "Pull complete", "progressDetail": {}, "id": "3f4a84073184"},
# Small layer finally extracts (10:14:58 in logs)
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
{"status": "Pull complete", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Digest: sha256:test"},
{"status": "Status: Downloaded newer image for test/image:latest"},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
    # Capture immutable snapshots of install job progress using job.as_dict()
    # This solves the mutable object problem - we snapshot state at call time
    install_job_snapshots = []
    original_on_job_change = coresys.jobs._on_job_change  # pylint: disable=W0212
def capture_and_forward(job_obj, attribute, value):
# Capture immutable snapshot if this is the install job with progress
if job_obj.name == "docker_interface_install" and job_obj.progress > 0:
install_job_snapshots.append(job_obj.as_dict())
# Forward to original to maintain functionality
return original_on_job_change(job_obj, attribute, value)
with patch.object(coresys.jobs, "_on_job_change", side_effect=capture_and_forward):
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
# First update from layer download should have rather low progress ((260937/25445459) / 2 ~ 0.5%)
assert install_job_snapshots[0]["progress"] < 1
# Total 8 events should lead to a progress update on the install job
assert len(install_job_snapshots) == 8
# Job should complete successfully
assert job.done is True
assert job.progress == 100
capture_exception.assert_not_called()
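A minimal sketch of the 50/50 stage weighting the snapshot assertions above rely on (an assumption drawn from the expected values, not a copy of the implementation): downloading covers the first half of a layer's progress and extraction the second.

def layer_progress(stage: str, current: int, total: int) -> float:
    fraction = current / total if total else 0.0
    if stage == "Downloading":
        return fraction * 50
    if stage == "Extracting":
        return 50 + fraction * 50
    return 100.0 if stage == "Pull complete" else 0.0


# First downloading event of the 25MB layer: (260937 / 25371463) / 2 is ~0.5%
assert layer_progress("Downloading", 260937, 25371463) < 1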
async def test_missing_total_handled_gracefully(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test missing 'total' fields in progress details handled gracefully."""
coresys.core.set_state(CoreState.RUNNING)
# Progress details with missing 'total' fields observed in real-world pulls
    logs = [
        {
            "status": "Pulling from home-assistant/odroid-n2-homeassistant",
            "id": "2025.7.1",
        },
        {"status": "Pulling fs layer", "progressDetail": {}, "id": "1e214cd6d7d0"},
        {
            "status": "Downloading",
            "progressDetail": {"current": 436480882},
            "progress": "[===================================================] 436.5MB/436.5MB",
            "id": "1e214cd6d7d0",
        },
        {"status": "Verifying Checksum", "progressDetail": {}, "id": "1e214cd6d7d0"},
        {"status": "Download complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
        {
            "status": "Extracting",
            "progressDetail": {"current": 436480882},
            "progress": "[===================================================] 436.5MB/436.5MB",
            "id": "1e214cd6d7d0",
        },
        {"status": "Pull complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
        {
            "status": "Digest: sha256:7d97da645f232f82a768d0a537e452536719d56d484d419836e53dbe3e4ec736"
        },
        {
            "status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
        },
    ]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
capture_exception.assert_not_called()
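A minimal sketch, independent of Supervisor's internals, of the guard this test expects: when 'current' or 'total' is missing from progressDetail, skip the progress math instead of raising.

def layer_progress_pct(event: dict) -> float | None:
    detail = event.get("progressDetail") or {}
    current, total = detail.get("current"), detail.get("total")
    if current is None or total is None or total == 0:
        return None  # incomplete detail: ignore the update rather than fail
    return min(100.0, current / total * 100)


assert layer_progress_pct({"progressDetail": {"current": 436480882}}) is None
assert layer_progress_pct({"progressDetail": {"current": 50, "total": 100}}) == 50.0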

View File

@@ -1,9 +1,10 @@
"""Test Docker manager."""
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, patch
from docker.errors import DockerException
from docker.errors import APIError, DockerException, NotFound
import pytest
from requests import RequestException
@@ -20,7 +21,7 @@ async def test_run_command_success(docker: DockerAPI):
mock_container.logs.return_value = b"command output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command
result = docker.run_command(
@@ -33,7 +34,7 @@ async def test_run_command_success(docker: DockerAPI):
assert result.output == b"command output"
# Verify docker.containers.run was called correctly
docker.docker.containers.run.assert_called_once_with(
docker.dockerpy.containers.run.assert_called_once_with(
"alpine:3.18",
command="echo hello",
detach=True,
@@ -55,7 +56,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
mock_container.logs.return_value = b"error output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command with minimal parameters
result = docker.run_command(image="ubuntu")
@@ -66,7 +67,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
assert result.output == b"error output"
# Verify docker.containers.run was called with defaults
docker.docker.containers.run.assert_called_once_with(
docker.dockerpy.containers.run.assert_called_once_with(
"ubuntu:latest", # default tag
command=None, # default command
detach=True,
@@ -81,7 +82,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
async def test_run_command_docker_exception(docker: DockerAPI):
"""Test command execution when Docker raises an exception."""
# Mock docker containers.run to raise DockerException
docker.docker.containers.run.side_effect = DockerException("Docker error")
docker.dockerpy.containers.run.side_effect = DockerException("Docker error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Docker error"):
@@ -91,7 +92,7 @@ async def test_run_command_docker_exception(docker: DockerAPI):
async def test_run_command_request_exception(docker: DockerAPI):
"""Test command execution when requests raises an exception."""
# Mock docker containers.run to raise RequestException
docker.docker.containers.run.side_effect = RequestException("Connection error")
docker.dockerpy.containers.run.side_effect = RequestException("Connection error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Connection error"):
@@ -104,7 +105,7 @@ async def test_run_command_cleanup_on_exception(docker: DockerAPI):
mock_container = MagicMock()
# Mock docker.containers.run to return container, but container.wait to raise exception
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
mock_container.wait.side_effect = DockerException("Wait failed")
# Execute the command and expect DockerError
@@ -123,7 +124,7 @@ async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
mock_container.logs.return_value = b"output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command with custom stdout/stderr
result = docker.run_command(
@@ -150,7 +151,7 @@ async def test_run_container_with_cidfile(
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
extern_cidfile_path = coresys.config.path_extern_cid_files / f"{container_name}.cid"
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Mock container creation
with patch.object(
@@ -351,3 +352,101 @@ async def test_run_container_with_leftover_cidfile_directory(
assert cidfile_path.read_text() == mock_container.id
assert result == mock_container
async def test_repair(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair API."""
coresys.docker.dockerpy.networks.get.side_effect = [
hassio := MagicMock(
attrs={
"Containers": {
"good": {"Name": "good"},
"corrupt": {"Name": "corrupt"},
"fail": {"Name": "fail"},
}
}
),
host := MagicMock(attrs={"Containers": {}}),
]
coresys.docker.dockerpy.containers.get.side_effect = [
MagicMock(),
NotFound("corrupt"),
DockerException("fail"),
]
await coresys.run_in_executor(coresys.docker.repair)
coresys.docker.dockerpy.api.prune_containers.assert_called_once()
coresys.docker.dockerpy.api.prune_images.assert_called_once_with(
filters={"dangling": False}
)
coresys.docker.dockerpy.api.prune_builds.assert_called_once()
coresys.docker.dockerpy.api.prune_volumes.assert_called_once()
coresys.docker.dockerpy.api.prune_networks.assert_called_once()
hassio.disconnect.assert_called_once_with("corrupt", force=True)
host.disconnect.assert_not_called()
assert "Docker fatal error on container fail on hassio" in caplog.text
async def test_repair_failures(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair proceeds best it can through failures."""
coresys.docker.dockerpy.api.prune_containers.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_images.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_builds.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_volumes.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_networks.side_effect = APIError("fail")
coresys.docker.dockerpy.networks.get.side_effect = NotFound("missing")
await coresys.run_in_executor(coresys.docker.repair)
assert "Error for containers prune: fail" in caplog.text
assert "Error for images prune: fail" in caplog.text
assert "Error for builds prune: fail" in caplog.text
assert "Error for volumes prune: fail" in caplog.text
assert "Error for networks prune: fail" in caplog.text
assert "Error for networks hassio prune: missing" in caplog.text
assert "Error for networks host prune: missing" in caplog.text
@pytest.mark.parametrize("log_starter", [("Loaded image ID"), ("Loaded image")])
async def test_import_image(coresys: CoreSys, tmp_path: Path, log_starter: str):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": f"{log_starter}: imported"}
]
coresys.docker.images.inspect.return_value = {"Id": "imported"}
image = await coresys.docker.import_image(test_tar)
assert image["Id"] == "imported"
coresys.docker.images.inspect.assert_called_once_with("imported")
async def test_import_image_error(coresys: CoreSys, tmp_path: Path):
"""Test failure importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"errorDetail": {"message": "fail"}}
]
with pytest.raises(DockerError, match="Can't import image from tar: fail"):
await coresys.docker.import_image(test_tar)
coresys.docker.images.inspect.assert_not_called()
async def test_import_multiple_images_in_tar(
coresys: CoreSys, tmp_path: Path, caplog: pytest.LogCaptureFixture
):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": "Loaded image: imported-1"},
{"stream": "Loaded image: imported-2"},
]
assert await coresys.docker.import_image(test_tar) is None
assert "Unexpected image count 2 while importing image from tar" in caplog.text
coresys.docker.images.inspect.assert_not_called()
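A sketch of the stream parsing these import tests assume, under the assumption that docker's image-load output yields JSON lines whose "stream" field names each loaded image:

def parse_loaded_images(log: list[dict]) -> list[str]:
    images = []
    for entry in log:
        stream = entry.get("stream", "")
        for prefix in ("Loaded image ID: ", "Loaded image: "):
            if stream.startswith(prefix):
                images.append(stream.removeprefix(prefix).strip())
    return images


assert parse_loaded_images([{"stream": "Loaded image: test:latest\n"}]) == ["test:latest"]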

View File

@@ -88,7 +88,7 @@ async def test_events(
):
"""Test events created from docker events."""
event["Actor"]["Attributes"]["name"] = "some_container"
event["id"] = "abc123"
event["Actor"]["ID"] = "abc123"
event["time"] = 123
with (
patch(
@@ -131,12 +131,12 @@ async def test_unlabeled_container(coresys: CoreSys):
new=PropertyMock(
return_value=[
{
"id": "abc123",
"time": 123,
"Type": "container",
"Action": "die",
"Actor": {
"Attributes": {"name": "homeassistant", "exitCode": "137"}
"ID": "abc123",
"Attributes": {"name": "homeassistant", "exitCode": "137"},
},
}
]

View File

@@ -376,3 +376,14 @@ async def test_try_get_nvme_life_time_missing_percent_used(
coresys.config.path_supervisor
)
assert lifetime is None
async def test_try_get_nvme_life_time_dbus_not_connected(coresys: CoreSys):
"""Test getting lifetime info from an NVMe when DBUS is not connected."""
    # Set the udisks2 dbus connection to None to force a disconnected state.
coresys.dbus.udisks2.dbus = None
lifetime = await coresys.hardware.disk.get_disk_life_time(
coresys.config.path_supervisor
)
assert lifetime is None

View File

@@ -1,11 +1,14 @@
"""Test Home Assistant core."""
from datetime import datetime, timedelta
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
from http import HTTPStatus
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import APIError, DockerException, ImageNotFound, NotFound
from docker.errors import APIError, DockerException, NotFound
import pytest
from requests import RequestException
from time_machine import travel
from supervisor.const import CpuArch
@@ -23,8 +26,12 @@ from supervisor.exceptions import (
from supervisor.homeassistant.api import APIState
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from supervisor.resolution.const import ContextType, IssueType
from supervisor.resolution.data import Issue
from supervisor.updater import Updater
from tests.common import AsyncIterator
async def test_update_fails_if_out_of_date(coresys: CoreSys):
"""Test update of Home Assistant fails when supervisor or plugin is out of date."""
@@ -52,11 +59,23 @@ async def test_update_fails_if_out_of_date(coresys: CoreSys):
await coresys.homeassistant.core.update()
async def test_install_landingpage_docker_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
],
)
async def test_install_landingpage_docker_ratelimit_error(
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install landing page fails due to docker error."""
"""Test install landing page fails due to docker ratelimit error."""
coresys.security.force = True
coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]
with (
patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
patch.object(
@@ -69,19 +88,35 @@ async def test_install_landingpage_docker_error(
),
patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
):
coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
await coresys.homeassistant.core.install_landingpage()
sleep.assert_awaited_once_with(30)
assert "Failed to install landingpage, retrying after 30sec" in caplog.text
capture_exception.assert_not_called()
assert (
Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
in coresys.resolution.issues
)
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
APIError("fail"),
DockerException(),
RequestException(),
OSError(),
],
)
async def test_install_landingpage_other_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install landing page fails due to other error."""
coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
coresys.docker.images.inspect.side_effect = [err, MagicMock()]
with (
patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
@@ -102,11 +137,23 @@ async def test_install_landingpage_other_error(
capture_exception.assert_called_once_with(err)
async def test_install_docker_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
],
)
async def test_install_docker_ratelimit_error(
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install fails due to docker error."""
"""Test install fails due to docker ratelimit error."""
coresys.security.force = True
coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]
with (
patch.object(HomeAssistantCore, "start"),
patch.object(DockerHomeAssistant, "cleanup"),
@@ -123,19 +170,35 @@ async def test_install_docker_error(
),
patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
):
coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
await coresys.homeassistant.core.install()
sleep.assert_awaited_once_with(30)
assert "Error on Home Assistant installation. Retrying in 30sec" in caplog.text
capture_exception.assert_not_called()
assert (
Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
in coresys.resolution.issues
)
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
APIError("fail"),
DockerException(),
RequestException(),
OSError(),
],
)
async def test_install_other_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install fails due to other error."""
coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
coresys.docker.images.inspect.side_effect = [err, MagicMock()]
with (
patch.object(HomeAssistantCore, "start"),
@@ -161,21 +224,29 @@ async def test_install_other_error(
@pytest.mark.parametrize(
"container_exists,image_exists", [(False, True), (True, False), (True, True)]
("container_exc", "image_exc", "remove_calls"),
[
(NotFound("missing"), None, []),
(
None,
aiodocker.DockerError(404, {"message": "missing"}),
[call(force=True, v=True)],
),
(None, None, [call(force=True, v=True)]),
],
)
@pytest.mark.usefixtures("path_extern")
async def test_start(
coresys: CoreSys, container_exists: bool, image_exists: bool, path_extern
coresys: CoreSys,
container_exc: DockerException | None,
image_exc: aiodocker.DockerError | None,
remove_calls: list[call],
):
"""Test starting Home Assistant."""
if image_exists:
coresys.docker.images.get.return_value.id = "123"
else:
coresys.docker.images.get.side_effect = ImageNotFound("missing")
if container_exists:
coresys.docker.containers.get.return_value.image.id = "123"
else:
coresys.docker.containers.get.side_effect = NotFound("missing")
coresys.docker.images.inspect.return_value = {"Id": "123"}
coresys.docker.images.inspect.side_effect = image_exc
coresys.docker.containers.get.return_value.id = "123"
coresys.docker.containers.get.side_effect = container_exc
with (
patch.object(
@@ -198,18 +269,14 @@ async def test_start(
assert run.call_args.kwargs["hostname"] == "homeassistant"
coresys.docker.containers.get.return_value.stop.assert_not_called()
if container_exists:
coresys.docker.containers.get.return_value.remove.assert_called_once_with(
force=True,
v=True,
)
else:
coresys.docker.containers.get.return_value.remove.assert_not_called()
assert (
coresys.docker.containers.get.return_value.remove.call_args_list == remove_calls
)
async def test_start_existing_container(coresys: CoreSys, path_extern):
"""Test starting Home Assistant when container exists and is viable."""
coresys.docker.images.get.return_value.id = "123"
coresys.docker.images.inspect.return_value = {"Id": "123"}
coresys.docker.containers.get.return_value.image.id = "123"
coresys.docker.containers.get.return_value.status = "exited"
@@ -394,24 +461,32 @@ async def test_core_loads_wrong_image_for_machine(
"""Test core is loaded with wrong image for machine."""
coresys.homeassistant.set_image("ghcr.io/home-assistant/odroid-n2-homeassistant")
coresys.homeassistant.version = AwesomeVersion("2024.4.0")
container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
await coresys.homeassistant.core.load()
with patch.object(
DockerAPI,
"pull_image",
return_value={
"Id": "abc123",
"Config": {"Labels": {"io.hass.version": "2024.4.0"}},
},
) as pull_image:
container.attrs |= pull_image.return_value
await coresys.homeassistant.core.load()
pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
)
container.remove.assert_called_once_with(force=True, v=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
"force": True,
}
coresys.docker.pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
assert coresys.docker.images.delete.call_args_list[0] == call(
"ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
force=True,
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
force=True,
)
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"
@@ -428,8 +503,8 @@ async def test_core_load_allows_image_override(coresys: CoreSys, container: Magi
await coresys.homeassistant.core.load()
container.remove.assert_not_called()
coresys.docker.images.remove.assert_not_called()
coresys.docker.images.get.assert_not_called()
coresys.docker.images.delete.assert_not_called()
coresys.docker.images.inspect.assert_not_called()
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/odroid-n2-homeassistant"
)
@@ -440,27 +515,36 @@ async def test_core_loads_wrong_image_for_architecture(
):
"""Test core is loaded with wrong image for architecture."""
coresys.homeassistant.version = AwesomeVersion("2024.4.0")
container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
coresys.docker.images.get("ghcr.io/home-assistant/qemux86-64-homeassistant").attrs[
"Architecture"
] = "arm64"
coresys.docker.images.inspect.return_value = img_data = (
coresys.docker.images.inspect.return_value
| {
"Architecture": "arm64",
"Config": {"Labels": {"io.hass.version": "2024.4.0"}},
}
)
container.attrs |= img_data
await coresys.homeassistant.core.load()
with patch.object(
DockerAPI,
"pull_image",
return_value=img_data | {"Architecture": "amd64"},
) as pull_image:
await coresys.homeassistant.core.load()
pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
)
container.remove.assert_called_once_with(force=True, v=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
"force": True,
}
coresys.docker.pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
assert coresys.docker.images.delete.call_args_list[0] == call(
"ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
force=True,
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
force=True,
)
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"

View File

@@ -7,8 +7,8 @@ import pytest
from supervisor.coresys import CoreSys
from supervisor.dbus.const import DeviceType
from supervisor.host.configuration import Interface, VlanConfig
from supervisor.host.const import InterfaceType
from supervisor.host.configuration import Interface, VlanConfig, WifiConfig
from supervisor.host.const import AuthMethod, InterfaceType, WifiMode
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.network_connection_settings import (
@@ -291,3 +291,237 @@ async def test_equals_dbus_interface_eth0_10_real(
# Test should pass with matching VLAN config
assert test_vlan_interface.equals_dbus_interface(network_interface) is True
def test_map_nm_wifi_non_wireless_interface():
"""Test _map_nm_wifi returns None for non-wireless interface."""
# Mock non-wireless interface
mock_interface = Mock()
mock_interface.type = DeviceType.ETHERNET
mock_interface.settings = Mock()
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_no_settings():
"""Test _map_nm_wifi returns None when interface has no settings."""
# Mock wireless interface without settings
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = None
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_open_authentication():
"""Test _map_nm_wifi with open authentication (no security)."""
# Mock wireless interface with open authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.mode == WifiMode.INFRASTRUCTURE
assert result.ssid == "TestSSID"
assert result.auth == AuthMethod.OPEN
assert result.psk is None
assert result.signal is None
def test_map_nm_wifi_wep_authentication():
"""Test _map_nm_wifi with WEP authentication."""
# Mock wireless interface with WEP authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "none"
mock_interface.settings.wireless_security.psk = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "WEPNetwork"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.auth == AuthMethod.WEP
assert result.ssid == "WEPNetwork"
assert result.psk is None
def test_map_nm_wifi_wpa_psk_authentication():
"""Test _map_nm_wifi with WPA-PSK authentication."""
# Mock wireless interface with WPA-PSK authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "wpa-psk"
mock_interface.settings.wireless_security.psk = "SecretPassword123"
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "SecureNetwork"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert isinstance(result, WifiConfig)
assert result.auth == AuthMethod.WPA_PSK
assert result.ssid == "SecureNetwork"
assert result.psk == "SecretPassword123"
def test_map_nm_wifi_unsupported_authentication():
"""Test _map_nm_wifi returns None for unsupported authentication method."""
# Mock wireless interface with unsupported authentication
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = Mock()
mock_interface.settings.wireless_security.key_mgmt = "wpa-eap" # Unsupported
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "EnterpriseNetwork"
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is None
def test_map_nm_wifi_different_modes():
"""Test _map_nm_wifi with different wifi modes."""
modes_to_test = [
("infrastructure", WifiMode.INFRASTRUCTURE),
("mesh", WifiMode.MESH),
("adhoc", WifiMode.ADHOC),
("ap", WifiMode.AP),
]
for mode_value, expected_mode in modes_to_test:
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = mode_value
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.mode == expected_mode
def test_map_nm_wifi_with_signal():
"""Test _map_nm_wifi with wireless signal strength."""
# Mock wireless interface with active connection and signal
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = Mock()
mock_interface.wireless.active = Mock()
mock_interface.wireless.active.strength = 75
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal == 75
def test_map_nm_wifi_without_signal():
"""Test _map_nm_wifi without wireless signal (no active connection)."""
# Mock wireless interface without active connection
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal is None
def test_map_nm_wifi_wireless_no_active_ap():
"""Test _map_nm_wifi with wireless object but no active access point."""
# Mock wireless interface with wireless object but no active AP
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = "infrastructure"
mock_interface.wireless = Mock()
mock_interface.wireless.active = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.signal is None
def test_map_nm_wifi_no_wireless_settings():
"""Test _map_nm_wifi when wireless settings are missing."""
# Mock wireless interface without wireless settings
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = None
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.ssid == ""
assert result.mode == WifiMode.INFRASTRUCTURE # Default mode
def test_map_nm_wifi_no_wireless_mode():
"""Test _map_nm_wifi when wireless mode is not specified."""
# Mock wireless interface without mode specified
mock_interface = Mock()
mock_interface.type = DeviceType.WIRELESS
mock_interface.settings = Mock()
mock_interface.settings.wireless_security = None
mock_interface.settings.wireless = Mock()
mock_interface.settings.wireless.ssid = "TestSSID"
mock_interface.settings.wireless.mode = None
mock_interface.wireless = None
mock_interface.interface_name = "wlan0"
result = Interface._map_nm_wifi(mock_interface)
assert result is not None
assert result.mode == WifiMode.INFRASTRUCTURE # Default mode
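The key_mgmt handling these tests imply can be summarized as a small lookup; the auth values here are assumptions matching the AuthMethod names, not a copy of the actual _map_nm_wifi body:

KEY_MGMT_TO_AUTH = {
    None: "open",  # no wireless_security section at all
    "none": "wep",  # NetworkManager spells WEP as key_mgmt "none"
    "wpa-psk": "wpa-psk",
}


def map_auth(key_mgmt: str | None) -> str | None:
    # Unsupported schemes such as "wpa-eap" map to None, so the interface is skipped
    return KEY_MGMT_TO_AUTH.get(key_mgmt)


assert map_auth("wpa-eap") is None
assert map_auth(None) == "open"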

View File

@@ -90,6 +90,49 @@ async def test_logs_coloured(journald_gateway: MagicMock, coresys: CoreSys):
)
async def test_logs_no_colors(journald_gateway: MagicMock, coresys: CoreSys):
"""Test ANSI color codes being stripped when no_colors=True."""
journald_gateway.content.feed_data(
load_fixture("logs_export_supervisor.txt").encode("utf-8")
)
journald_gateway.content.feed_eof()
async with coresys.host.logs.journald_logs() as resp:
cursor, line = await anext(journal_logs_reader(resp, no_colors=True))
assert (
cursor
== "s=83fee99ca0c3466db5fc120d52ca7dd8;i=2049389;b=f5a5c442fa6548cf97474d2d57c920b3;m=4263828e8c;t=612dda478b01b;x=9ae12394c9326930"
)
# Colors should be stripped
assert (
line == "24-03-04 23:56:56 INFO (MainThread) [__main__] Closing Supervisor"
)
async def test_logs_verbose_no_colors(journald_gateway: MagicMock, coresys: CoreSys):
"""Test ANSI color codes being stripped from verbose formatted logs when no_colors=True."""
journald_gateway.content.feed_data(
load_fixture("logs_export_supervisor.txt").encode("utf-8")
)
journald_gateway.content.feed_eof()
async with coresys.host.logs.journald_logs() as resp:
cursor, line = await anext(
journal_logs_reader(
resp, log_formatter=LogFormatter.VERBOSE, no_colors=True
)
)
assert (
cursor
== "s=83fee99ca0c3466db5fc120d52ca7dd8;i=2049389;b=f5a5c442fa6548cf97474d2d57c920b3;m=4263828e8c;t=612dda478b01b;x=9ae12394c9326930"
)
# Colors should be stripped in verbose format too
assert (
line
== "2024-03-04 22:56:56.709 ha-hloub hassio_supervisor[466]: 24-03-04 23:56:56 INFO (MainThread) [__main__] Closing Supervisor"
)
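For illustration, a minimal sketch of stripping ANSI SGR color sequences as these tests expect; journal_logs_reader may strip a broader set of CSI sequences:

import re

ANSI_COLOR_RE = re.compile(r"\x1b\[[0-9;]*m")


def strip_colors(line: str) -> str:
    # Remove color codes like \x1b[32m ... \x1b[0m, keeping the text between them
    return ANSI_COLOR_RE.sub("", line)


assert strip_colors("\x1b[32mINFO\x1b[0m Closing Supervisor") == "INFO Closing Supervisor"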
async def test_boot_ids(
journald_gateway: MagicMock,
coresys: CoreSys,

View File

@@ -1179,7 +1179,6 @@ async def test_job_scheduled_delay(coresys: CoreSys):
async def test_job_scheduled_at(coresys: CoreSys):
"""Test job that schedules a job to start at a specified time."""
class TestClass:
"""Test class."""
@@ -1189,10 +1188,12 @@ async def test_job_scheduled_at(coresys: CoreSys):
self.coresys = coresys
@Job(name="test_job_scheduled_at_job_scheduler")
        async def job_scheduler(
            self, scheduled_time: datetime
        ) -> tuple[SupervisorJob, asyncio.TimerHandle]:
"""Schedule a job to run at specified time."""
return self.coresys.jobs.schedule_job(
self.job_task, JobSchedulerOptions(start_at=dt + timedelta(seconds=0.1))
self.job_task, JobSchedulerOptions(start_at=scheduled_time)
)
@Job(name="test_job_scheduled_at_job_task")
@@ -1201,29 +1202,28 @@ async def test_job_scheduled_at(coresys: CoreSys):
self.coresys.jobs.current.stage = "work"
test = TestClass(coresys)
job_started = asyncio.Event()
job_ended = asyncio.Event()
# Schedule job to run 0.1 seconds from now
scheduled_time = datetime.now() + timedelta(seconds=0.1)
job, _ = await test.job_scheduler(scheduled_time)
started = False
ended = False
async def start_listener(evt_job: SupervisorJob):
if evt_job.uuid == job.uuid:
job_started.set()
nonlocal started
started = started or evt_job.uuid == job.uuid
async def end_listener(evt_job: SupervisorJob):
if evt_job.uuid == job.uuid:
job_ended.set()
nonlocal ended
ended = ended or evt_job.uuid == job.uuid
async with time_machine.travel(dt):
job, _ = await test.job_scheduler()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_START, start_listener)
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, end_listener)
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_START, start_listener)
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, end_listener)
# Advance time to exactly when job should start and wait for completion
async with time_machine.travel(dt + timedelta(seconds=0.1)):
await asyncio.wait_for(
asyncio.gather(job_started.wait(), job_ended.wait()), timeout=1.0
)
await asyncio.sleep(0.2)
assert started
assert ended
assert job.done
assert job.name == "test_job_scheduled_at_job_task"
assert job.stage == "work"
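The reworked test passes a real wall-clock `start_at` and waits in real time, which only works if the scheduler translates the datetime into the event loop's monotonic time base. A minimal sketch of that translation, assuming plain asyncio (the helper name is illustrative):

import asyncio
from datetime import datetime, timedelta


def call_at_datetime(
    loop: asyncio.AbstractEventLoop, when: datetime, callback
) -> asyncio.TimerHandle:
    """Schedule callback at wall-clock `when`.

    loop.call_at() expects loop.time() units (a monotonic clock), so the
    Unix timestamp is converted into a delay first; past times fire now.
    """
    delay = max(0.0, when.timestamp() - datetime.now().timestamp())
    return loop.call_at(loop.time() + delay, callback)


async def main() -> None:
    done = asyncio.Event()
    loop = asyncio.get_running_loop()
    call_at_datetime(loop, datetime.now() + timedelta(seconds=0.1), done.set)
    await done.wait()


asyncio.run(main())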

View File

@@ -198,8 +198,10 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
             "errors": [
                 {
                     "type": "HassioError",
-                    "message": "Unknown error, see supervisor logs",
+                    "message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
                     "stage": "test",
+                    "error_key": None,
+                    "extra_fields": None,
                 }
             ],
             "created": ANY,
@@ -226,8 +228,10 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
             "errors": [
                 {
                     "type": "HassioError",
-                    "message": "Unknown error, see supervisor logs",
+                    "message": "Unknown error, see Supervisor logs (check with 'ha supervisor logs')",
                     "stage": "test",
+                    "error_key": None,
+                    "extra_fields": None,
                 }
             ],
             "created": ANY,

View File

@@ -115,7 +115,17 @@ async def test_not_started(coresys):
     assert filter_data(coresys, SAMPLE_EVENT, {}) == SAMPLE_EVENT

     await coresys.core.set_state(CoreState.SETUP)
-    assert filter_data(coresys, SAMPLE_EVENT, {}) == SAMPLE_EVENT
+    filtered = filter_data(coresys, SAMPLE_EVENT, {})
+    # During SETUP, we should have basic system info available
+    assert "contexts" in filtered
+    assert "versions" in filtered["contexts"]
+    assert "docker" in filtered["contexts"]["versions"]
+    assert "supervisor" in filtered["contexts"]["versions"]
+    assert "host" in filtered["contexts"]
+    assert "machine" in filtered["contexts"]["host"]
+    assert filtered["contexts"]["versions"]["docker"] == coresys.docker.info.version
+    assert filtered["contexts"]["versions"]["supervisor"] == coresys.supervisor.version
+    assert filtered["contexts"]["host"]["machine"] == coresys.machine
async def test_defaults(coresys):

View File

@@ -181,7 +181,6 @@ async def test_reload_updater_triggers_supervisor_update(
     """Test an updater reload triggers a supervisor update if there is one."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     await coresys.core.set_state(CoreState.RUNNING)
-    coresys.security.content_trust = False

     with (
         patch.object(
View File

@@ -2,7 +2,7 @@
 import asyncio
 from pathlib import Path
-from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
+from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch

 from awesomeversion import AwesomeVersion
 import pytest
@@ -11,13 +11,13 @@ from supervisor.const import BusEvent, CpuArch
 from supervisor.coresys import CoreSys
 from supervisor.docker.const import ContainerState
 from supervisor.docker.interface import DockerInterface
+from supervisor.docker.manager import DockerAPI
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import (
     AudioError,
     AudioJobError,
     CliError,
     CliJobError,
-    CodeNotaryUntrusted,
     CoreDNSError,
     CoreDNSJobError,
     DockerError,
@@ -337,14 +337,12 @@ async def test_repair_failed(
         patch.object(
             DockerInterface, "arch", new=PropertyMock(return_value=CpuArch.AMD64)
         ),
-        patch(
-            "supervisor.security.module.cas_validate", side_effect=CodeNotaryUntrusted
-        ),
+        patch.object(DockerInterface, "install", side_effect=DockerError),
     ):
         await plugin.repair()

     capture_exception.assert_called_once()
-    assert check_exception_chain(capture_exception.call_args[0][0], CodeNotaryUntrusted)
+    assert check_exception_chain(capture_exception.call_args[0][0], DockerError)
@pytest.mark.parametrize(
@@ -362,21 +360,26 @@ async def test_load_with_incorrect_image(
     plugin.version = AwesomeVersion("2024.4.0")

     container.status = "running"
     container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
     coresys.docker.images.inspect.return_value = img_data = (
         coresys.docker.images.inspect.return_value
         | {"Config": {"Labels": {"io.hass.version": "2024.4.0"}}}
     )
     container.attrs |= img_data

-    await plugin.load()
+    with patch.object(DockerAPI, "pull_image", return_value=img_data) as pull_image:
+        await plugin.load()
+        pull_image.assert_called_once_with(
+            ANY, correct_image, "2024.4.0", platform="linux/amd64"
+        )

     container.remove.assert_called_once_with(force=True, v=True)
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": f"{old_image}:latest",
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": f"{old_image}:2024.4.0",
-        "force": True,
-    }
-    coresys.docker.pull_image.assert_called_once_with(
-        ANY, correct_image, "2024.4.0", platform="linux/amd64"
-    )
+    assert coresys.docker.images.delete.call_args_list[0] == call(
+        f"{old_image}:latest",
+        force=True,
+    )
+    assert coresys.docker.images.delete.call_args_list[1] == call(
+        f"{old_image}:2024.4.0",
+        force=True,
+    )

     assert plugin.image == correct_image
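The assertions now match positional `call(name, force=True)` invocations against `images.delete`, which mirrors aiodocker's image API rather than docker-py's keyword-style `remove(image=..., force=...)`. A minimal standalone sketch of the corresponding client calls, assuming a reachable Docker socket and an existing tag:

import asyncio

import aiodocker


async def inspect_then_delete(ref: str) -> None:
    """Inspect an image tag, then force-delete it (sketch only)."""
    docker = aiodocker.Docker()
    try:
        info = await docker.images.inspect(ref)  # raises DockerError if missing
        print(info["Id"])
        await docker.images.delete(ref, force=True)  # positional name, kwarg force
    finally:
        await docker.close()


asyncio.run(inspect_then_delete("alpine:3.20"))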

View File

@@ -51,7 +51,6 @@ async def test_if_check_make_issue(coresys: CoreSys):
     """Test check for setup."""
     free_space = Issue(IssueType.FREE_SPACE, ContextType.SYSTEM)
     await coresys.core.set_state(CoreState.RUNNING)
-    coresys.security.content_trust = False

     with patch("shutil.disk_usage", return_value=(1, 1, 1)):
         await coresys.resolution.check.check_system()
@@ -63,7 +62,6 @@ async def test_if_check_cleanup_issue(coresys: CoreSys):
     """Test check for setup."""
     free_space = Issue(IssueType.FREE_SPACE, ContextType.SYSTEM)
     await coresys.core.set_state(CoreState.RUNNING)
-    coresys.security.content_trust = False

     with patch("shutil.disk_usage", return_value=(1, 1, 1)):
         await coresys.resolution.check.check_system()

View File

@@ -1,96 +0,0 @@
"""Test Check Supervisor trust."""

# pylint: disable=import-error,protected-access
from unittest.mock import AsyncMock, patch

from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.resolution.checks.supervisor_trust import CheckSupervisorTrust
from supervisor.resolution.const import IssueType, UnhealthyReason


async def test_base(coresys: CoreSys):
    """Test check basics."""
    supervisor_trust = CheckSupervisorTrust(coresys)
    assert supervisor_trust.slug == "supervisor_trust"
    assert supervisor_trust.enabled


async def test_check(coresys: CoreSys):
    """Test check."""
    supervisor_trust = CheckSupervisorTrust(coresys)
    await coresys.core.set_state(CoreState.RUNNING)

    assert len(coresys.resolution.issues) == 0

    coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryError)
    await supervisor_trust.run_check()
    assert coresys.supervisor.check_trust.called

    coresys.supervisor.check_trust = AsyncMock(return_value=None)
    await supervisor_trust.run_check()
    assert coresys.supervisor.check_trust.called
    assert len(coresys.resolution.issues) == 0

    coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
    await supervisor_trust.run_check()
    assert coresys.supervisor.check_trust.called

    assert len(coresys.resolution.issues) == 1
    assert coresys.resolution.issues[-1].type == IssueType.TRUST
    assert UnhealthyReason.UNTRUSTED in coresys.resolution.unhealthy


async def test_approve(coresys: CoreSys):
    """Test check."""
    supervisor_trust = CheckSupervisorTrust(coresys)
    await coresys.core.set_state(CoreState.RUNNING)

    coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
    assert await supervisor_trust.approve_check()

    coresys.supervisor.check_trust = AsyncMock(return_value=None)
    assert not await supervisor_trust.approve_check()


async def test_with_global_disable(coresys: CoreSys, caplog):
    """Test when pwned is globally disabled."""
    coresys.security.content_trust = False
    supervisor_trust = CheckSupervisorTrust(coresys)
    await coresys.core.set_state(CoreState.RUNNING)

    assert len(coresys.resolution.issues) == 0
    coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryUntrusted)
    await supervisor_trust.run_check()
    assert not coresys.security.verify_own_content.called
    assert (
        "Skipping supervisor_trust, content_trust is globally disabled" in caplog.text
    )


async def test_did_run(coresys: CoreSys):
    """Test that the check ran as expected."""
    supervisor_trust = CheckSupervisorTrust(coresys)
    should_run = supervisor_trust.states
    should_not_run = [state for state in CoreState if state not in should_run]
    assert len(should_run) != 0
    assert len(should_not_run) != 0

    with patch(
        "supervisor.resolution.checks.supervisor_trust.CheckSupervisorTrust.run_check",
        return_value=None,
    ) as check:
        for state in should_run:
            await coresys.core.set_state(state)
            await supervisor_trust()
            check.assert_called_once()
            check.reset_mock()

        for state in should_not_run:
            await coresys.core.set_state(state)
            await supervisor_trust()
            check.assert_not_called()
            check.reset_mock()

View File

@@ -1,46 +0,0 @@
"""Test evaluation base."""

# pylint: disable=import-error,protected-access
from unittest.mock import patch

from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.resolution.evaluations.content_trust import EvaluateContentTrust


async def test_evaluation(coresys: CoreSys):
    """Test evaluation."""
    job_conditions = EvaluateContentTrust(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    await job_conditions()
    assert job_conditions.reason not in coresys.resolution.unsupported

    coresys.security.content_trust = False
    await job_conditions()
    assert job_conditions.reason in coresys.resolution.unsupported


async def test_did_run(coresys: CoreSys):
    """Test that the evaluation ran as expected."""
    job_conditions = EvaluateContentTrust(coresys)
    should_run = job_conditions.states
    should_not_run = [state for state in CoreState if state not in should_run]
    assert len(should_run) != 0
    assert len(should_not_run) != 0

    with patch(
        "supervisor.resolution.evaluations.content_trust.EvaluateContentTrust.evaluate",
        return_value=None,
    ) as evaluate:
        for state in should_run:
            await coresys.core.set_state(state)
            await job_conditions()
            evaluate.assert_called_once()
            evaluate.reset_mock()

        for state in should_not_run:
            await coresys.core.set_state(state)
            await job_conditions()
            evaluate.assert_not_called()
            evaluate.reset_mock()

View File

@@ -25,13 +25,18 @@ async def test_evaluation(coresys: CoreSys):
     assert docker_configuration.reason in coresys.resolution.unsupported
     coresys.resolution.unsupported.clear()

-    coresys.docker.info.storage = EXPECTED_STORAGE
+    coresys.docker.info.storage = EXPECTED_STORAGE[0]
     coresys.docker.info.logging = "unsupported"
     await docker_configuration()
     assert docker_configuration.reason in coresys.resolution.unsupported
     coresys.resolution.unsupported.clear()

-    coresys.docker.info.storage = EXPECTED_STORAGE
+    coresys.docker.info.storage = "overlay2"
     coresys.docker.info.logging = EXPECTED_LOGGING
     await docker_configuration()
     assert docker_configuration.reason not in coresys.resolution.unsupported

+    coresys.docker.info.storage = "overlayfs"
+    coresys.docker.info.logging = EXPECTED_LOGGING
+    await docker_configuration()
+    assert docker_configuration.reason not in coresys.resolution.unsupported
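The extra case documents that two storage drivers now count as supported. A minimal sketch of the rule these cases encode, assuming the expected logging driver is journald (constants here are illustrative, not Supervisor's):

# Illustrative constants; Supervisor derives these from its own definitions.
SUPPORTED_STORAGE = ("overlay2", "overlayfs")
EXPECTED_LOGGING = "journald"


def docker_config_supported(storage: str, logging: str) -> bool:
    """Return True when both storage and logging drivers are supported."""
    return storage in SUPPORTED_STORAGE and logging == EXPECTED_LOGGING


assert docker_config_supported("overlay2", "journald")
assert docker_config_supported("overlayfs", "journald")
assert not docker_config_supported("overlay2", "unsupported")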

View File

@@ -1,89 +0,0 @@
"""Test evaluation base."""

# pylint: disable=import-error,protected-access
import errno
import os
from pathlib import Path
from unittest.mock import AsyncMock, patch

from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.resolution.const import ContextType, IssueType
from supervisor.resolution.data import Issue
from supervisor.resolution.evaluations.source_mods import EvaluateSourceMods


async def test_evaluation(coresys: CoreSys):
    """Test evaluation."""
    with patch(
        "supervisor.resolution.evaluations.source_mods._SUPERVISOR_SOURCE",
        Path(f"{os.getcwd()}/supervisor"),
    ):
        sourcemods = EvaluateSourceMods(coresys)
        await coresys.core.set_state(CoreState.RUNNING)

        assert sourcemods.reason not in coresys.resolution.unsupported
        coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryUntrusted)
        await sourcemods()
        assert sourcemods.reason in coresys.resolution.unsupported

        coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryError)
        await sourcemods()
        assert sourcemods.reason not in coresys.resolution.unsupported

        coresys.security.verify_own_content = AsyncMock()
        await sourcemods()
        assert sourcemods.reason not in coresys.resolution.unsupported


async def test_did_run(coresys: CoreSys):
    """Test that the evaluation ran as expected."""
    sourcemods = EvaluateSourceMods(coresys)
    should_run = sourcemods.states
    should_not_run = [state for state in CoreState if state not in should_run]
    assert len(should_run) != 0
    assert len(should_not_run) != 0

    with patch(
        "supervisor.resolution.evaluations.source_mods.EvaluateSourceMods.evaluate",
        return_value=None,
    ) as evaluate:
        for state in should_run:
            await coresys.core.set_state(state)
            await sourcemods()
            evaluate.assert_called_once()
            evaluate.reset_mock()

        for state in should_not_run:
            await coresys.core.set_state(state)
            await sourcemods()
            evaluate.assert_not_called()
            evaluate.reset_mock()


async def test_evaluation_error(coresys: CoreSys):
    """Test error reading file during evaluation."""
    sourcemods = EvaluateSourceMods(coresys)
    await coresys.core.set_state(CoreState.RUNNING)
    corrupt_fs = Issue(IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM)

    assert sourcemods.reason not in coresys.resolution.unsupported
    assert corrupt_fs not in coresys.resolution.issues

    with patch(
        "supervisor.utils.codenotary.dirhash",
        side_effect=(err := OSError()),
    ):
        err.errno = errno.EBUSY
        await sourcemods()
        assert sourcemods.reason not in coresys.resolution.unsupported
        assert corrupt_fs in coresys.resolution.issues
        assert coresys.core.healthy is True

        coresys.resolution.dismiss_issue(corrupt_fs)
        err.errno = errno.EBADMSG
        await sourcemods()
        assert sourcemods.reason not in coresys.resolution.unsupported
        assert corrupt_fs in coresys.resolution.issues
        assert coresys.core.healthy is False

View File

@@ -1,8 +1,9 @@
 """Test fixup addon execute repair."""

-from unittest.mock import MagicMock, patch
+from http import HTTPStatus
+from unittest.mock import patch

-from docker.errors import NotFound
+import aiodocker
 import pytest

 from supervisor.addons.addon import Addon
@@ -17,7 +18,9 @@ from supervisor.resolution.fixups.addon_execute_repair import FixupAddonExecuteRepair
 async def test_fixup(docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon):
     """Test fixup rebuilds addon's container."""
-    docker.images.get.side_effect = NotFound("missing")
+    docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     install_addon_ssh.data["image"] = "test_image"

     addon_execute_repair = FixupAddonExecuteRepair(coresys)
@@ -41,7 +44,9 @@ async def test_fixup_max_auto_attempts(
     docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon
 ):
     """Test fixup stops being auto-applied after 5 failures."""
-    docker.images.get.side_effect = NotFound("missing")
+    docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     install_addon_ssh.data["image"] = "test_image"

     addon_execute_repair = FixupAddonExecuteRepair(coresys)
@@ -82,8 +87,6 @@ async def test_fixup_image_exists(
     docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon
 ):
     """Test fixup dismisses if image exists."""
-    docker.images.get.return_value = MagicMock()
-
     addon_execute_repair = FixupAddonExecuteRepair(coresys)

     assert addon_execute_repair.auto is True
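With docker-py's `NotFound` gone, "image missing" becomes an HTTP-status check on `aiodocker.DockerError`, which is exactly the error shape these tests simulate. A minimal sketch of how such a probe typically looks (an illustrative helper, not Supervisor code):

from http import HTTPStatus

import aiodocker


async def image_exists(docker: aiodocker.Docker, ref: str) -> bool:
    """Return False when inspect reports 404; re-raise anything else."""
    try:
        await docker.images.inspect(ref)
    except aiodocker.DockerError as err:
        if err.status == HTTPStatus.NOT_FOUND:
            return False
        raise
    return True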

View File

@@ -1,69 +0,0 @@
"""Test evaluation base."""

# pylint: disable=import-error,protected-access
from datetime import timedelta
from unittest.mock import AsyncMock

import time_machine

from supervisor.coresys import CoreSys
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from supervisor.resolution.data import Issue, Suggestion
from supervisor.resolution.fixups.system_execute_integrity import (
    FixupSystemExecuteIntegrity,
)
from supervisor.security.const import ContentTrustResult, IntegrityResult
from supervisor.utils.dt import utcnow


async def test_fixup(coresys: CoreSys, supervisor_internet: AsyncMock):
    """Test fixup."""
    system_execute_integrity = FixupSystemExecuteIntegrity(coresys)

    assert system_execute_integrity.auto

    coresys.resolution.add_suggestion(
        Suggestion(SuggestionType.EXECUTE_INTEGRITY, ContextType.SYSTEM)
    )
    coresys.resolution.add_issue(Issue(IssueType.TRUST, ContextType.SYSTEM))

    coresys.security.integrity_check = AsyncMock(
        return_value=IntegrityResult(
            ContentTrustResult.PASS,
            ContentTrustResult.PASS,
            {"audio": ContentTrustResult.PASS},
        )
    )

    await system_execute_integrity()

    assert coresys.security.integrity_check.called
    assert len(coresys.resolution.suggestions) == 0
    assert len(coresys.resolution.issues) == 0


async def test_fixup_error(coresys: CoreSys, supervisor_internet: AsyncMock):
    """Test fixup."""
    system_execute_integrity = FixupSystemExecuteIntegrity(coresys)

    assert system_execute_integrity.auto

    coresys.resolution.add_suggestion(
        Suggestion(SuggestionType.EXECUTE_INTEGRITY, ContextType.SYSTEM)
    )
    coresys.resolution.add_issue(Issue(IssueType.TRUST, ContextType.SYSTEM))

    coresys.security.integrity_check = AsyncMock(
        return_value=IntegrityResult(
            ContentTrustResult.FAILED,
            ContentTrustResult.PASS,
            {"audio": ContentTrustResult.PASS},
        )
    )

    with time_machine.travel(utcnow() + timedelta(hours=24)):
        await system_execute_integrity()

    assert coresys.security.integrity_check.called
    assert len(coresys.resolution.suggestions) == 1
    assert len(coresys.resolution.issues) == 1

View File

@@ -1,21 +1,15 @@
 """Test evaluations."""

-from unittest.mock import Mock, patch
+from unittest.mock import Mock

 from supervisor.const import CoreState
 from supervisor.coresys import CoreSys
-from supervisor.utils import check_exception_chain


 async def test_evaluate_system_error(coresys: CoreSys, capture_exception: Mock):
     """Test error while evaluating system."""
     await coresys.core.set_state(CoreState.RUNNING)

-    with patch(
-        "supervisor.resolution.evaluations.source_mods.calc_checksum_path_sourcecode",
-        side_effect=RuntimeError,
-    ):
-        await coresys.resolution.evaluate.evaluate_system()
+    await coresys.resolution.evaluate.evaluate_system()

-    capture_exception.assert_called_once()
-    assert check_exception_chain(capture_exception.call_args[0][0], RuntimeError)
+    capture_exception.assert_not_called()

View File

@@ -1,127 +0,0 @@
"""Testing handling with Security."""

from unittest.mock import AsyncMock, patch

import pytest

from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.security.const import ContentTrustResult


async def test_content_trust(coresys: CoreSys):
    """Test Content-Trust."""
    with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
        await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
        assert cas_validate.called
        cas_validate.assert_called_once_with("test@mail.com", "ffffffffffffff")

    with patch(
        "supervisor.security.module.cas_validate", AsyncMock()
    ) as cas_validate:
        await coresys.security.verify_own_content("ffffffffffffff")
        assert cas_validate.called
        cas_validate.assert_called_once_with(
            "notary@home-assistant.io", "ffffffffffffff"
        )


async def test_disabled_content_trust(coresys: CoreSys):
    """Test Content-Trust."""
    coresys.security.content_trust = False

    with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
        await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
        assert not cas_validate.called

    with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
        await coresys.security.verify_own_content("ffffffffffffff")
        assert not cas_validate.called


async def test_force_content_trust(coresys: CoreSys):
    """Force Content-Trust tests."""
    with patch(
        "supervisor.security.module.cas_validate",
        AsyncMock(side_effect=CodeNotaryError),
    ) as cas_validate:
        await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
        assert cas_validate.called
        cas_validate.assert_called_once_with("test@mail.com", "ffffffffffffff")

    coresys.security.force = True

    with (
        patch(
            "supervisor.security.module.cas_validate",
            AsyncMock(side_effect=CodeNotaryError),
        ) as cas_validate,
        pytest.raises(CodeNotaryError),
    ):
        await coresys.security.verify_content("test@mail.com", "ffffffffffffff")


async def test_integrity_check_disabled(coresys: CoreSys):
    """Test integrity check with disabled content trust."""
    coresys.security.content_trust = False

    result = await coresys.security.integrity_check.__wrapped__(coresys.security)

    assert result.core == ContentTrustResult.UNTESTED
    assert result.supervisor == ContentTrustResult.UNTESTED


async def test_integrity_check(coresys: CoreSys, install_addon_ssh):
    """Test integrity check with content trust."""
    coresys.homeassistant.core.check_trust = AsyncMock()
    coresys.supervisor.check_trust = AsyncMock()
    install_addon_ssh.check_trust = AsyncMock()
    install_addon_ssh.data["codenotary"] = "test@example.com"

    result = await coresys.security.integrity_check.__wrapped__(coresys.security)

    assert result.core == ContentTrustResult.PASS
    assert result.supervisor == ContentTrustResult.PASS
    assert result.addons[install_addon_ssh.slug] == ContentTrustResult.PASS


async def test_integrity_check_error(coresys: CoreSys, install_addon_ssh):
    """Test integrity check with content trust issues."""
    coresys.homeassistant.core.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
    coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
    install_addon_ssh.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
    install_addon_ssh.data["codenotary"] = "test@example.com"

    result = await coresys.security.integrity_check.__wrapped__(coresys.security)

    assert result.core == ContentTrustResult.ERROR
    assert result.supervisor == ContentTrustResult.ERROR
    assert result.addons[install_addon_ssh.slug] == ContentTrustResult.ERROR


async def test_integrity_check_failed(coresys: CoreSys, install_addon_ssh):
    """Test integrity check with content trust failed."""
    coresys.homeassistant.core.check_trust = AsyncMock(side_effect=CodeNotaryError)
    coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryError)
    install_addon_ssh.check_trust = AsyncMock(side_effect=CodeNotaryError)
    install_addon_ssh.data["codenotary"] = "test@example.com"

    result = await coresys.security.integrity_check.__wrapped__(coresys.security)

    assert result.core == ContentTrustResult.FAILED
    assert result.supervisor == ContentTrustResult.FAILED
    assert result.addons[install_addon_ssh.slug] == ContentTrustResult.FAILED


async def test_integrity_check_addon(coresys: CoreSys, install_addon_ssh):
    """Test integrity check with content trust but no signed add-ons."""
    coresys.homeassistant.core.check_trust = AsyncMock()
    coresys.supervisor.check_trust = AsyncMock()

    result = await coresys.security.integrity_check.__wrapped__(coresys.security)

    assert result.core == ContentTrustResult.PASS
    assert result.supervisor == ContentTrustResult.PASS
    assert result.addons[install_addon_ssh.slug] == ContentTrustResult.UNTESTED

View File

@@ -86,10 +86,9 @@ async def test_os_update_path(
     """Test OS upgrade path across major versions."""
     coresys.os._board = "rpi4"  # pylint: disable=protected-access
     coresys.os._version = AwesomeVersion(version)  # pylint: disable=protected-access
-    with patch.object(type(coresys.security), "verify_own_content"):
-        await coresys.updater.fetch_data()
+    await coresys.updater.fetch_data()

-        assert coresys.updater.version_hassos == AwesomeVersion(expected)
+    assert coresys.updater.version_hassos == AwesomeVersion(expected)
@pytest.mark.usefixtures("no_job_throttle")
@@ -105,7 +104,6 @@ async def test_delayed_fetch_for_connectivity(
         load_binary_fixture("version_stable.json")
     )
     coresys.websession.head = AsyncMock()
-    coresys.security.verify_own_content = AsyncMock()

     # Network connectivity change causes a series of async tasks to eventually do a version fetch
     # Rather than use some kind of sleep loop, set up listener for start of fetch data job
Some files were not shown because too many files have changed in this diff.