Compare commits


57 Commits

Author SHA1 Message Date
Mike Degatano
041bd4536e Fix typo
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-18 10:38:39 -04:00
Mike Degatano
12799d6632 Fix assert in test 2025-09-18 10:38:39 -04:00
Mike Degatano
e5aa998a47 Add progress reporting to addon, HA and Supervisor updates 2025-09-18 10:38:39 -04:00
dependabot[bot]
c1ccb00946 Bump debugpy from 1.8.16 to 1.8.17 (#6196)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.16 to 1.8.17.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.16...v1.8.17)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-18 10:40:31 +02:00
dependabot[bot]
5693a5be0d Bump cryptography from 45.0.7 to 46.0.1 (#6194)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-17 09:32:47 +02:00
Mike Degatano
01911a44cd Persistent notifications to repairs and fix free_space check (#6179)
* Persistent notifications to repairs and fix free_space check

* Fix tests mocking too little free space
2025-09-16 11:22:59 -04:00
Lukas Waslowski
857dae7736 Allow adding translations for nested fields to support home-assistant/frontend#26997 (#6180) 2025-09-16 11:36:45 +02:00
Stefan Agner
d2ddd9579c Cancel Supervisor startup task on early shutdown signal (#6175)
When receiving a shutdown signal during startup, the Supervisor should
cancel its startup task to ensure a graceful shutdown. This prevents the
Supervisor from accidentally accessing the event loop after it has been
closed by the stop procedure:

RuntimeError: Event loop stopped before Future completed.
2025-09-16 11:32:43 +02:00
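A minimal sketch of the pattern this commit describes, assuming a plain asyncio program; the names are illustrative, not the Supervisor's actual API:

```python
import asyncio
import signal


async def start() -> None:
    """Stand-in for the Supervisor's long-running startup routine."""
    await asyncio.sleep(3600)


async def stop() -> None:
    """Stand-in for the graceful stop procedure that closes the loop."""


async def main() -> None:
    loop = asyncio.get_running_loop()
    startup_task = asyncio.create_task(start())

    def shutdown_handler() -> None:
        # Cancel startup first so nothing touches the event loop after stop()
        if not startup_task.done():
            startup_task.cancel()
        asyncio.ensure_future(stop())

    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(sig, shutdown_handler)

    try:
        await startup_task
    except asyncio.CancelledError:
        print("Supervisor startup cancelled")


asyncio.run(main())
```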
Lukas Waslowski
ac9947d599 Allow deeply nested dicts and lists in addon config schemas (#6171)
* Allow arbitrarily nested addon config schemas

* Disallow lists directly nested in another list in addon schema

* Handle arbitrarily nested addon schemas in UiOptions class

* Handle arbitrarily nested addon schemas in AddonOptions class

* Add tests for addon config schemas

* Add tests for addon option validation
2025-09-16 11:32:28 +02:00
Jan Čermák
2e22e1e884 Add endpoint for complete logs of the latest container startup (#6163)
* Add endpoint for complete logs of the latest container startup

Add an endpoint that returns the complete logs of the latest startup of a
container, which can be used for downloading Core logs in the frontend.

A realtime filtering header is used for the Journal API, with the StartedAt
parameter from the Docker API as the reference point. This means that any
other Range header is ignored for this endpoint, yet the "lines" query
argument can still be used to limit the number of lines. By default an
"infinite" number of lines is returned.

Closes #6147

* Implement fallback for latest logs for OS older than 16.0

Implement a fallback which uses the internal CONTAINER_LOG_EPOCH metadata
added to logs created by the Docker logger. Still prefer the time-based
method, as it has lower overhead and uses public APIs.

* Address review comments

* Only use CONTAINER_LOG_EPOCH for latest logs

As pointed out in the review comments, we might not be able to get the
StartedAt for add-ons that are not running. Thus we need to use the only
reliable mechanism available now, which is the container log epoch.

* Remove dead code for 'Range: realtime' header handling
2025-09-16 11:29:28 +02:00
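A rough sketch of the time-based method described above: read StartedAt from the Docker API and use it as the lower bound for a journal query. The realtime Range syntax and the URLs here are assumptions for illustration, not the Supervisor's actual implementation:

```python
from datetime import datetime, timezone

import aiohttp


async def latest_logs(docker_url: str, journal_url: str, container: str) -> str:
    """Fetch journal entries since the container's latest startup (sketch)."""
    async with aiohttp.ClientSession() as session:
        # StartedAt from the Docker API marks the latest container startup
        async with session.get(f"{docker_url}/containers/{container}/json") as resp:
            started_at: str = (await resp.json())["State"]["StartedAt"]

        # Docker reports nanosecond precision; trim to whole seconds here
        started = datetime.fromisoformat(started_at.split(".")[0]).replace(
            tzinfo=timezone.utc
        )
        realtime_us = int(started.timestamp() * 1_000_000)

        headers = {
            "Accept": "text/plain",
            # Assumed realtime-based Range syntax for the journal gateway
            "Range": f"realtime={realtime_us}:",
        }
        async with session.get(
            f"{journal_url}/entries",
            params={"CONTAINER_NAME": container},
            headers=headers,
        ) as resp:
            return await resp.text()
```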
dependabot[bot]
e7f3573e32 Bump sentry-sdk from 2.37.1 to 2.38.0 (#6192)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-16 10:00:50 +02:00
dependabot[bot]
b26451a59a Bump types-docker from 7.1.0.20250907 to 7.1.0.20250916 (#6191)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-16 09:57:24 +02:00
dependabot[bot]
4e882f7c76 Bump home-assistant/builder from 2025.03.0 to 2025.09.0 (#6190) 2025-09-16 09:06:10 +02:00
dependabot[bot]
5fa50ccf05 Bump types-pyyaml from 6.0.12.20250822 to 6.0.12.20250915 (#6189)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 11:36:09 +02:00
dependabot[bot]
3891df5266 Bump sigstore/cosign-installer from 3.9.2 to 3.10.0 (#6188)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 09:44:35 +02:00
dependabot[bot]
5aad32c15b Bump types-requests from 2.32.4.20250809 to 2.32.4.20250913 (#6187) 2025-09-15 08:36:51 +02:00
Simon Lamon
4a40490af7 Pin SHA for all Github Actions (#6186) 2025-09-14 17:54:02 +02:00
Stefan Agner
0a46e030f5 Bump minimal Docker to 24.0.0 (#6178)
* Bump minimal Docker to 23.0.0

Home Assistant OS 10.0 updated to Docker 23.0.3; let's make this
Docker version the minimum we support. This will allow us to use
zstd compression for layers (see https://github.com/home-assistant/builder/pull/245).

* Bump minimal Docker version to 24.0.0
2025-09-12 15:00:59 +02:00
dependabot[bot]
bd00f90304 Bump mypy from 1.17.1 to 1.18.1 (#6184)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-12 09:39:13 +02:00
dependabot[bot]
819f097f01 Bump ruff from 0.12.12 to 0.13.0 (#6181)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.12.12 to 0.13.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.12.12...0.13.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-11 09:08:07 +02:00
dependabot[bot]
4513592993 Bump sentry-sdk from 2.37.0 to 2.37.1 (#6177)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.37.0 to 2.37.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.37.0...2.37.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.37.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 09:00:16 +02:00
dependabot[bot]
7e526a26af Bump pytest-cov from 6.3.0 to 7.0.0 (#6176)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.3.0 to 7.0.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.3.0...v7.0.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-10 09:00:09 +02:00
Petar Petrov
b3af22f048 Ignore missing files when counting used space (#6174) 2025-09-09 13:39:06 +02:00
Jan Čermák
bbb9469c1c Write cidfiles of Docker containers and mount them individually to /run/cid (#6154)
* Write cidfiles of Docker containers and mount them individually to /run/cid

There is no standard way to get the container ID in the container
itself, which can be needed, for instance, for #6006. The usual pattern is
to use the --cidfile argument of the Docker CLI and mount the generated file
into the container. However, this is a feature of the Docker CLI and we
can't use it when creating the containers via the API. To get the container
ID to implement native logging in e.g. Core as well, we need the help of the
Supervisor.

This change implements a similar feature fully in Supervisor's DockerAPI
class, which orchestrates the lifetime of all containers managed by the
Supervisor. The files are created in the SUPERVISOR_DATA directory, as
they need to be persisted between reboots, just as the instances of
Docker containers are.

Supervisor's cidfile must be created when starting the Supervisor
itself, for that see home-assistant/operating-system#4276.

* Address review comments, fix mounting of the cidfile
2025-09-09 13:38:31 +02:00
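A hedged sketch of the cidfile flow with the Docker SDK: since the container ID only exists after creation, the file is pre-created, passed as a bind mount, and filled in before the container starts. Paths and names are illustrative:

```python
from pathlib import Path

import docker

SUPERVISOR_DATA = Path("/data")  # illustrative location


def run_with_cidfile(client: docker.DockerClient, name: str, image: str):
    """Create a container with its own ID mounted at /run/cid (sketch)."""
    cidfile = SUPERVISOR_DATA / "cid" / f"{name}.cid"
    cidfile.parent.mkdir(parents=True, exist_ok=True)
    # Pre-create the file so Docker bind-mounts a file, not a directory
    cidfile.touch()

    container = client.containers.create(
        image,
        name=name,
        volumes={str(cidfile): {"bind": "/run/cid", "mode": "ro"}},
    )
    # The ID is only known after creation, so write it before starting
    cidfile.write_text(container.id)
    container.start()
    return container
```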
Stefan Agner
859c32a706 Drop unused cloud backup dir constant (#6172)
The constant PATH_CLOUD_BACKUP is not used anywhere in the codebase.
Remove it to clean up the code. It is a leftover from a removed initial
cloud backup support implementation that was missed in #5464.
2025-09-08 15:04:24 -04:00
Stefan Agner
87fc84c65c Avoid setup failure on missing timedatectl (#6169)
When timedatectl is not available (e.g. in minimal devcontainers),
setup currently fails due to the missing timedate service on D-Bus.
This change makes the code more robust by only checking for the
presence of the service if we are actually going to use it.
2025-09-08 11:29:44 +02:00
dependabot[bot]
e38ca5acb4 Bump pytest-cov from 6.2.1 to 6.3.0 (#6167)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.2.1 to 6.3.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.2.1...v6.3.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 6.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 09:28:31 +02:00
dependabot[bot]
09cd8eede2 Bump types-docker from 7.1.0.20250822 to 7.1.0.20250907 (#6166)
Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20250822 to 7.1.0.20250907.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20250907
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 09:28:24 +02:00
dependabot[bot]
d1c537b280 Bump sentry-sdk from 2.36.0 to 2.37.0 (#6168)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.36.0 to 2.37.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.36.0...2.37.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.37.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 09:28:15 +02:00
dependabot[bot]
e6785d6a89 Bump ruff from 0.12.11 to 0.12.12 (#6161)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.12.11 to 0.12.12.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.12.11...0.12.12)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 10:31:12 +02:00
Stefan Agner
59e051ad93 Avoid duplicate evaluate_system() calls during resolution manager setup (#6133)
* Avoid duplicate evaluate_system() calls during resolution manager setup

During resolution manager initialization, both the initial healthcheck call
and the subsequent setup() call would trigger evaluate_system(), causing
redundant system evaluation. Since all following calls in healthcheck() are
already suppressed during the setup stage, we can optimize this by
calling check_system() directly during load() instead of the full
healthcheck().

This reduces unnecessary processing during supervisor startup while
maintaining the same functional behavior.

* Call full healthcheck on setup and move diagnostics to core start

The OS Agent diagnostics if statement already accesses the OS Agent
through D-Bus, which makes the exception handling inside the if
statement not particularly useful.

Move the OS Agent diagnostics setting to core start so we can leverage
the existing global exception handling in start() instead of
having to add another try/except block in setup(). It also covers the
if statement itself.
2025-09-05 10:16:06 +02:00
dependabot[bot]
3397def8b9 Bump actions/github-script from 7 to 8 (#6158)
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:55:13 +02:00
dependabot[bot]
b832edc10d Bump codecov/codecov-action from 5.5.0 to 5.5.1 (#6159)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.0 to 5.5.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v5.5.0...v5.5.1)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:54:33 +02:00
dependabot[bot]
f69071878c Bump sentry-sdk from 2.35.2 to 2.36.0 (#6162)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.35.2 to 2.36.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.35.2...2.36.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.36.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:54:09 +02:00
dependabot[bot]
e065ba6081 Bump pytest from 8.4.1 to 8.4.2 (#6160)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.4.1 to 8.4.2.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.4.1...8.4.2)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 8.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:52:53 +02:00
dependabot[bot]
38611ad12f Bump actions/stale from 9.1.0 to 10.0.0 (#6155)
Bumps [actions/stale](https://github.com/actions/stale) from 9.1.0 to 10.0.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v9.1.0...v10.0.0)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:40:56 +02:00
dependabot[bot]
8beb66d46c Bump actions/setup-python from 5.6.0 to 6.0.0 (#6156)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.6.0 to 6.0.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.6.0...v6.0.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-04 14:40:42 +02:00
Stefan Agner
c277f3cad6 Store and persist OS upgrade map to fix update path evaluation (#6152)
* Store and persist OS upgrade map to fix update path evaluation

The existing logic calculated OS upgrade paths inline during fetch_data,
which would not get reevaluated when the current OS is unsupported
(JobCondition.OS_SUPPORTED). E.g. after updating from 11.4 to 11.5, the
system wouldn't offer the next available update (15.2) because the
upgrade path calculation relied on fresh data from the blocked fetch
operation.

Changes:
- Add ATTR_HASSOS_UPGRADE constant and schema validation
- Store hassos-upgrade map from version JSON in updater data
- Refactor version_hassos property to use stored upgrade map instead of
  inline calculation during fetch_data
- Maintain upgrade path logic: upgrade within major version first, then
  jump to next major version when at the latest in current major
- Add type safety checks for version.major access

This ensures upgrade paths work correctly even when update data refresh
is blocked due to unsupported OS versions, fixing the scenario where
HAOS 11.5 wouldn't show 15.2 as the next available update.

* Update supervisor/updater.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Address mypy issue

* Fix pytest

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-04 13:19:31 +02:00
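A minimal sketch of the upgrade-path selection described above, assuming the stored map keys major versions to the latest version within each major (the function and map shape are illustrative, not the actual updater code):

```python
def _key(version: str) -> tuple[int, ...]:
    """Turn '11.5' into (11, 5) for comparisons."""
    return tuple(int(part) for part in version.split("."))


def next_os_version(current: str, upgrade_map: dict[str, str]) -> str | None:
    """Pick the next OS version from the stored per-major upgrade map."""
    latest_in_major = upgrade_map.get(current.split(".")[0])
    # Upgrade within the current major version first
    if latest_in_major and _key(latest_in_major) > _key(current):
        return latest_in_major
    # At the latest of this major: jump to the next major that has an entry
    newer = sorted(int(m) for m in upgrade_map if int(m) > _key(current)[0])
    return upgrade_map[str(newer[0])] if newer else None


# The scenario from the commit message: HAOS 11.5 should offer 15.2 next
assert next_os_version("11.5", {"11": "11.5", "15": "15.2"}) == "15.2"
```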
Igor Yamolov
236c39cbb0 Add network interface settings for mDNS/LLMNR (#5520) 2025-09-04 13:18:11 +02:00
Mike Degatano
7ed83a15fe Add availability API for addons (#6140)
* Add availability API for addons

* Add cast back and test for latest version of installed addon

* Make error responses more translation/client library friendly

* Add test cases for install/update APIs
2025-09-04 11:14:42 +02:00
Felipe Santos
a3a5f6ba98 Fix WebSocket proxy for add-ons not forwarding ping/pong frame data (#6144)
* Fix proxied add-on WebSocket connections closing after 40 seconds

* Undo autoping=True

* Add debug logging for ping/pong frames

* Forward ping and pong msg.data too

* Add temporary workaround for devcontainer issue

* Forward 8000 through docker devcontainer

* Remove debug code
2025-09-04 11:12:42 +02:00
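A minimal sketch of the forwarding fix with aiohttp, assuming autoping is disabled so ping/pong frames actually reach the proxy loop (names are illustrative):

```python
from aiohttp import WSMsgType, web


async def forward(source: web.WebSocketResponse, target) -> None:
    """Relay frames from one WebSocket to the other, payloads included."""
    async for msg in source:
        if msg.type == WSMsgType.TEXT:
            await target.send_str(msg.data)
        elif msg.type == WSMsgType.BINARY:
            await target.send_bytes(msg.data)
        # Ping/pong frames can carry data; dropping it broke keep-alives
        elif msg.type == WSMsgType.PING:
            await target.ping(msg.data)
        elif msg.type == WSMsgType.PONG:
            await target.pong(msg.data)
        elif msg.type in (WSMsgType.CLOSE, WSMsgType.CLOSING):
            break
```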
Stefan Agner
8d3ededf2f Update NM developer page URL (#6151)
* Update NM developer page URL

* Update remaining NetworkManager URLs to new location
2025-09-04 11:02:34 +02:00
Jan Čermák
3d62c9afb1 Make test_job_decorator tests timezone agnostic (#6153)
Running tests in a UTC+2 timezone makes some of the tests fail because the
mocked time in the future is actually in the past, as UTC is used as the
new reference point. Adjust the tests to also mock the time when the
first execution of the function happens.

Instances where the second execution happened "immediately" were mocked
to happen 1ms later. The 1ms delta also needs to be added when
mocking time 1h in the future, otherwise it will be throttled too.
2025-09-03 17:55:28 +02:00
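A self-contained sketch of the approach using time-machine (which is in the test requirements); the toy throttle below stands in for the real job throttling:

```python
from datetime import datetime, timedelta, timezone

import time_machine

THROTTLE = timedelta(hours=1)
FIRST_RUN = datetime(2025, 9, 3, 12, 0, tzinfo=timezone.utc)
_last_run: datetime | None = None


def throttled_job() -> bool:
    """Toy stand-in for a method throttled to run at most once per hour."""
    global _last_run
    now = datetime.now(timezone.utc)
    if _last_run is not None and now - _last_run < THROTTLE:
        return False  # throttled
    _last_run = now
    return True


def test_throttle_is_timezone_agnostic() -> None:
    # Pin the first execution instead of relying on the runner's local clock
    with time_machine.travel(FIRST_RUN, tick=False):
        assert throttled_job()

    # 1h alone lands exactly on the throttle boundary; add the 1ms delta
    later = FIRST_RUN + THROTTLE + timedelta(milliseconds=1)
    with time_machine.travel(later, tick=False):
        assert throttled_job()
```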
dependabot[bot]
ef313d1fb5 Bump sentry-sdk from 2.35.1 to 2.35.2 (#6150)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.35.1 to 2.35.2.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.35.1...2.35.2)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 13:50:48 +02:00
dependabot[bot]
cae31637ae Bump cryptography from 45.0.6 to 45.0.7 (#6149)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 11:21:12 +02:00
Mike Degatano
9392d10625 Add background option to update/install APIs (#6134)
* Add background option to update/install APIs

* Refactor to use common background_task utility in backups too

* Use a validation_complete event rather than looking for bus events
2025-09-03 08:33:00 +02:00
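A hedged sketch of the validation_complete pattern: the operation runs as a task, signals an event once validation has passed, and the API handler returns early when a background run was requested. This mirrors the background_task utility shown in the diffs below, but simplified:

```python
import asyncio


async def do_update(slug: str, *, validation_complete: asyncio.Event) -> None:
    """Stand-in for an update: validate first, then do the slow work."""
    await asyncio.sleep(0)  # stand-in for availability validation
    validation_complete.set()
    await asyncio.sleep(5)  # stand-in for the actual image pull


async def api_update(slug: str, *, background: bool) -> asyncio.Task | None:
    """Return once validation has passed when a background run is requested."""
    validation_complete = asyncio.Event()
    update_task = asyncio.create_task(
        do_update(slug, validation_complete=validation_complete)
    )
    event_task = asyncio.create_task(validation_complete.wait())

    # Wait until validation passed, or the update failed before reaching it
    _, pending = await asyncio.wait(
        (update_task, event_task), return_when=asyncio.FIRST_COMPLETED
    )
    if event_task in pending:
        event_task.cancel()  # update ended first; avoid a dangling waiter

    if background and not update_task.done():
        return update_task  # caller tracks progress via the job ID
    await update_task  # foreground: propagate errors to the API response
    return None
```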
dependabot[bot]
5ce62f324f Bump coverage from 7.10.5 to 7.10.6 (#6145)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-01 14:05:57 +02:00
dependabot[bot]
f84d514958 Bump ruff from 0.12.10 to 0.12.11 (#6141)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-29 11:15:52 +02:00
dependabot[bot]
3c39f2f785 Bump sentry-sdk from 2.35.0 to 2.35.1 (#6139)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-28 10:31:03 +02:00
Stefan Agner
30db72df78 Add WebSocket proxy timeout handling (#6138)
Add TimeoutError handling for WebSocket connections to add-ons. Also
log debug information for WebSocket proxy connections.
2025-08-28 10:18:46 +02:00
Stefan Agner
00a78f372b Fix ConnectionResetError during ingress proxying (#6137)
Under certain (timing) conditions ConnectionResetError can be raised
when the client closes the connection while we are still writing to it.
Make sure to handle the appropriate exceptions to avoid flooding the
logs with stack traces.
2025-08-28 10:15:32 +02:00
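A minimal sketch of the guarded write loop, assuming an aiohttp streaming response (illustrative names, not the actual ingress code):

```python
import asyncio
import logging

from aiohttp import web

_LOGGER = logging.getLogger(__name__)


async def relay(reader: asyncio.StreamReader, response: web.StreamResponse) -> None:
    """Stream upstream data to the client, tolerating early disconnects."""
    try:
        while data := await reader.read(4096):
            await response.write(data)
    except (BrokenPipeError, ConnectionResetError):
        # The client hung up while we were still writing; a stack trace
        # here would only flood the logs, so note it quietly and move on.
        _LOGGER.debug("Client closed connection during ingress proxying")
```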
dependabot[bot]
b69546f2c1 Bump orjson from 3.11.2 to 3.11.3 (#6135)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.2 to 3.11.3.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.2...3.11.3)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 16:13:52 +02:00
Mike Degatano
78be155b94 Handle download restart in pull progress log (#6131) 2025-08-25 23:20:00 +02:00
Mike Degatano
9900dfc8ca Do not skip messages in pull progress log due to rounding (#6129) 2025-08-25 22:25:38 +02:00
Stefan Agner
3a1ebc9d37 Handle malformed addon map entries gracefully (#6126)
* Handle missing type attribute in add-on map config

Handle missing type attribute in the add-on `map` configuration key.

* Make sure wrong volumes are cleared in any case

Also add a warning when string mapping is rejected.

* Add unit tests

* Improve test coverage
2025-08-25 22:24:46 +02:00
Jan Čermák
580c3273dc Fix guarding of timezone setting for older OS 16.2 dev builds (#6127)
As some 16.2 dev versions did not yet support setting the timezone,
systems that were not updated before Supervisor #6099 was merged could
end up unhealthy, because setting the timezone during setup fails there.
This would prevent such systems from being updated to the new OS
version.

Now that we know the exact OS version with TZ setting support, only
attempt it if it's supported.
2025-08-25 19:47:47 +02:00
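A tiny sketch of such a guard, assuming awesomeversion for the comparison; the threshold build string is purely illustrative:

```python
from awesomeversion import AwesomeVersion

# First OS build known to support timezone setting -- illustrative value only
TIMEZONE_SUPPORT_SINCE = AwesomeVersion("16.2.dev20250814")


def os_supports_timezone(os_version: str | None) -> bool:
    """Only attempt setting the timezone when the OS version supports it."""
    if os_version is None:
        return False
    return AwesomeVersion(os_version) >= TIMEZONE_SUPPORT_SINCE
```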
dependabot[bot]
b889f94ca4 Bump coverage from 7.10.4 to 7.10.5 (#6125)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.4 to 7.10.5.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.4...7.10.5)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-25 14:13:13 +02:00
85 changed files with 2518 additions and 668 deletions

View File

@@ -53,7 +53,7 @@ jobs:
requirements: ${{ steps.requirements.outputs.changed }}
steps:
- name: Checkout the repository
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -70,7 +70,7 @@ jobs:
- name: Get changed files
id: changed_files
if: steps.version.outputs.publish == 'false'
uses: masesgroup/retrieve-changed-files@v3.0.0
uses: masesgroup/retrieve-changed-files@491e80760c0e28d36ca6240a27b1ccb8e1402c13 # v3.0.0
- name: Check if requirements files changed
id: requirements
@@ -92,7 +92,7 @@ jobs:
arch: ${{ fromJson(needs.init.outputs.architectures) }}
steps:
- name: Checkout the repository
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -104,6 +104,7 @@ jobs:
echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
) > .env_file
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2025.07.0
@@ -125,13 +126,13 @@ jobs:
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
if: needs.init.outputs.publish == 'true'
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@v3.9.2
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
with:
cosign-release: "v2.4.3"
@@ -149,7 +150,7 @@ jobs:
- name: Login to GitHub Container Registry
if: needs.init.outputs.publish == 'true'
uses: docker/login-action@v3.5.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -159,8 +160,9 @@ jobs:
if: needs.init.outputs.publish == 'false'
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.03.0
uses: home-assistant/builder@2025.09.0
with:
args: |
$BUILD_ARGS \
@@ -178,7 +180,7 @@ jobs:
steps:
- name: Checkout the repository
if: needs.init.outputs.publish == 'true'
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize git
if: needs.init.outputs.publish == 'true'
@@ -203,11 +205,12 @@ jobs:
timeout-minutes: 60
steps:
- name: Checkout the repository
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# home-assistant/builder doesn't support sha pinning
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.03.0
uses: home-assistant/builder@2025.09.0
with:
args: |
--test \

View File

@@ -26,15 +26,15 @@ jobs:
name: Prepare Python dependencies
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python
id: python
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -48,7 +48,7 @@ jobs:
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -68,15 +68,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -88,7 +88,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -111,15 +111,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -131,7 +131,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -154,7 +154,7 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Register hadolint problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -169,15 +169,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -189,7 +189,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -213,15 +213,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -233,7 +233,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -257,15 +257,15 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -293,9 +293,9 @@ jobs:
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
@@ -307,7 +307,7 @@ jobs:
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: >-
@@ -318,7 +318,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: .mypy_cache
key: >-
@@ -339,19 +339,19 @@ jobs:
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@v3.9.2
uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
with:
cosign-release: "v2.4.3"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@v4.6.2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: coverage
path: .coverage
@@ -398,15 +398,15 @@ jobs:
needs: ["pytest", "prepare"]
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.4
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: venv
key: |
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@v5.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: coverage
path: coverage/
@@ -428,4 +428,4 @@ jobs:
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5.5.0
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1

View File

@@ -9,7 +9,7 @@ jobs:
lock:
runs-on: ubuntu-latest
steps:
- uses: dessant/lock-threads@v5.0.1
- uses: dessant/lock-threads@1bf7ec25051fe7c00bdd17e6a7cf3d7bfb7dc771 # v5.0.1
with:
github-token: ${{ github.token }}
issue-inactive-days: "30"

View File

@@ -11,7 +11,7 @@ jobs:
name: Release Drafter
steps:
- name: Checkout the repository
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
@@ -36,7 +36,7 @@ jobs:
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
uses: release-drafter/release-drafter@v6.1.0
uses: release-drafter/release-drafter@b1476f6e6eb133afa41ed8589daba6dc69b4d3f5 # v6.1.0
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}

View File

@@ -12,7 +12,7 @@ jobs:
if: github.event.issue.type.name == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@v7
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const issueAuthor = context.payload.issue.user.login;

View File

@@ -10,9 +10,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code from GitHub
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Sentry Release
uses: getsentry/action-release@v3.2.0
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3.2.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9.1.0
- uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30

View File

@@ -14,10 +14,10 @@ jobs:
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
steps:
- name: Checkout code
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Get latest frontend release
id: latest_frontend_version
uses: abatilo/release-info-action@v1.3.3
uses: abatilo/release-info-action@32cb932219f1cee3fc4f4a298fd65ead5d35b661 # v1.3.3
with:
owner: home-assistant
repo: frontend
@@ -49,7 +49,7 @@ jobs:
if: needs.check-version.outputs.skip != 'true'
steps:
- name: Checkout code
uses: actions/checkout@v5.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Clear www folder
run: |
rm -rf supervisor/api/panel/*
@@ -57,7 +57,7 @@ jobs:
run: |
echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
- name: Download release assets
uses: robinraju/release-downloader@v1
uses: robinraju/release-downloader@daf26c55d821e836577a15f77d86ddc078948b05 # v1.12
with:
repository: 'home-assistant/frontend'
tag: ${{ needs.check-version.outputs.latest_version }}
@@ -68,7 +68,7 @@ jobs:
run: |
rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
- name: Create PR
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@271a8d0340265f705b14b6d32b9829c1cb33d45e # v7.0.8
with:
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
branch: autoupdate-frontend

View File

@@ -8,8 +8,8 @@ brotli==1.1.0
ciso8601==2.3.3
colorlog==6.9.0
cpe==1.3.1
cryptography==45.0.6
debugpy==1.8.16
cryptography==46.0.1
debugpy==1.8.17
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
@@ -17,13 +17,13 @@ faust-cchardet==2.1.19
gitpython==3.1.45
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.2
orjson==3.11.3
pulsectl==24.12.0
pyudev==0.24.3
PyYAML==6.0.2
requests==2.32.5
securetar==2025.2.1
sentry-sdk==2.35.0
sentry-sdk==2.38.0
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.44.3

View File

@@ -1,16 +1,16 @@
astroid==3.3.11
coverage==7.10.4
mypy==1.17.1
coverage==7.10.6
mypy==1.18.1
pre-commit==4.3.0
pylint==3.3.8
pytest-aiohttp==1.1.0
pytest-asyncio==0.25.2
pytest-cov==6.2.1
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==8.4.1
ruff==0.12.10
pytest==8.4.2
ruff==0.13.0
time-machine==2.19.0
types-docker==7.1.0.20250822
types-pyyaml==6.0.12.20250822
types-requests==2.32.4.20250809
types-docker==7.1.0.20250916
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.5.0

View File

@@ -66,10 +66,23 @@ if __name__ == "__main__":
_LOGGER.info("Setting up Supervisor")
loop.run_until_complete(coresys.core.setup())
bootstrap.register_signal_handlers(loop, coresys)
# Create startup task that can be cancelled gracefully
startup_task = loop.create_task(coresys.core.start())
def shutdown_handler() -> None:
"""Handle shutdown signals gracefully during startup."""
if not startup_task.done():
_LOGGER.warning("Supervisor startup interrupted by shutdown signal")
startup_task.cancel()
coresys.create_task(coresys.core.stop())
bootstrap.register_signal_handlers(loop, shutdown_handler)
try:
loop.run_until_complete(coresys.core.start())
loop.run_until_complete(startup_task)
except asyncio.CancelledError:
_LOGGER.warning("Supervisor startup cancelled")
except Exception as err: # pylint: disable=broad-except
# Supervisor itself is running at this point, just something didn't
# start as expected. Log with traceback to get more insights for

View File

@@ -67,9 +67,9 @@ from ..docker.monitor import DockerContainerStateEvent
from ..docker.stats import DockerStats
from ..exceptions import (
AddonConfigurationError,
AddonNotSupportedError,
AddonsError,
AddonsJobError,
AddonsNotSupportedError,
ConfigurationFileError,
DockerError,
HomeAssistantAPIError,
@@ -769,7 +769,7 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def install(self) -> None:
async def install(self, *, progress_job_id: str | None = None) -> None:
"""Install and setup this addon."""
if not self.addon_store:
raise AddonsError("Missing from store, cannot install!")
@@ -792,7 +792,10 @@ class Addon(AddonModel):
# Install image
try:
await self.instance.install(
self.latest_version, self.addon_store.image, arch=self.arch
self.latest_version,
self.addon_store.image,
arch=self.arch,
progress_job_id=progress_job_id,
)
except DockerError as err:
await self.sys_addons.data.uninstall(self)
@@ -876,7 +879,9 @@ class Addon(AddonModel):
on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
)
async def update(self) -> asyncio.Task | None:
async def update(
self, *, progress_job_id: str | None = None
) -> asyncio.Task | None:
"""Update this addon to latest version.
Returns a Task that completes when addon has state 'started' (see start)
@@ -890,7 +895,12 @@ class Addon(AddonModel):
store = self.addon_store.clone()
try:
await self.instance.update(store.version, store.image, arch=self.arch)
await self.instance.update(
store.version,
store.image,
arch=self.arch,
progress_job_id=progress_job_id,
)
except DockerError as err:
raise AddonsError() from err
@@ -1172,7 +1182,7 @@ class Addon(AddonModel):
async def write_stdin(self, data) -> None:
"""Write data to add-on stdin."""
if not self.with_stdin:
raise AddonsNotSupportedError(
raise AddonNotSupportedError(
f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
)
@@ -1419,7 +1429,7 @@ class Addon(AddonModel):
# If available
if not self._available(data[ATTR_SYSTEM]):
raise AddonsNotSupportedError(
raise AddonNotSupportedError(
f"Add-on {self.slug} is not available for this platform",
_LOGGER.error,
)

View File

@@ -9,19 +9,18 @@ from typing import Self, Union
from attr import evolve
from supervisor.jobs.const import JobConcurrency
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
AddonNotSupportedError,
AddonsError,
AddonsJobError,
AddonsNotSupportedError,
CoreDNSError,
DockerError,
HassioError,
HomeAssistantAPIError,
)
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
@@ -184,7 +183,9 @@ class AddonManager(CoreSysAttributes):
on_condition=AddonsJobError,
concurrency=JobConcurrency.QUEUE,
)
async def install(self, slug: str) -> None:
async def install(
self, slug: str, *, validation_complete: asyncio.Event | None = None
) -> None:
"""Install an add-on."""
self.sys_jobs.current.reference = slug
@@ -197,7 +198,17 @@ class AddonManager(CoreSysAttributes):
store.validate_availability()
await Addon(self.coresys, slug).install()
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
await Addon(self.coresys, slug).install(
progress_job_id=self.sys_jobs.current.uuid
)
# Make sure our job finishes at 100% so users aren't confused
if self.sys_jobs.current.progress != 100:
self.sys_jobs.current.progress = 100
_LOGGER.info("Add-on '%s' successfully installed", slug)
@@ -226,7 +237,11 @@ class AddonManager(CoreSysAttributes):
on_condition=AddonsJobError,
)
async def update(
self, slug: str, backup: bool | None = False
self,
slug: str,
backup: bool | None = False,
*,
validation_complete: asyncio.Event | None = None,
) -> asyncio.Task | None:
"""Update add-on.
@@ -251,6 +266,10 @@ class AddonManager(CoreSysAttributes):
# Check if available, Maybe something have changed
store.validate_availability()
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
if backup:
await self.sys_backups.do_backup_partial(
name=f"addon_{addon.slug}_{addon.version}",
@@ -258,7 +277,18 @@ class AddonManager(CoreSysAttributes):
addons=[addon.slug],
)
return await addon.update()
# Assume for now the docker image pull is 100% of this task. But from a user
# perspective that isn't true. Other steps we could consider allocating a fixed
# amount of progress for to improve accuracy include: partial backup, image
# cleanup, apparmor update, and addon restart
task = await addon.update(progress_job_id=self.sys_jobs.current.uuid)
# Make sure our job finishes at 100% so users aren't confused
if self.sys_jobs.current.progress != 100:
self.sys_jobs.current.progress = 100
_LOGGER.info("Add-on '%s' successfully updated", slug)
return task
@Job(
name="addon_manager_rebuild",
@@ -293,7 +323,7 @@ class AddonManager(CoreSysAttributes):
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not force and not addon.need_build:
raise AddonsNotSupportedError(
raise AddonNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)

View File

@@ -89,7 +89,12 @@ from ..const import (
)
from ..coresys import CoreSys
from ..docker.const import Capabilities
from ..exceptions import AddonsNotSupportedError
from ..exceptions import (
AddonNotSupportedArchitectureError,
AddonNotSupportedError,
AddonNotSupportedHomeAssistantVersionError,
AddonNotSupportedMachineTypeError,
)
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
from ..utils import version_is_new_enough
@@ -680,9 +685,8 @@ class AddonModel(JobGroup, ABC):
"""Validate if addon is available for current system."""
# Architecture
if not self.sys_arch.is_supported(config[ATTR_ARCH]):
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this platform, supported architectures: {', '.join(config[ATTR_ARCH])}",
logger,
raise AddonNotSupportedArchitectureError(
logger, slug=self.slug, architectures=config[ATTR_ARCH]
)
# Machine / Hardware
@@ -690,9 +694,8 @@ class AddonModel(JobGroup, ABC):
if machine and (
f"!{self.sys_machine}" in machine or self.sys_machine not in machine
):
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this machine, supported machine types: {', '.join(machine)}",
logger,
raise AddonNotSupportedMachineTypeError(
logger, slug=self.slug, machine_types=machine
)
# Home Assistant
@@ -701,16 +704,15 @@ class AddonModel(JobGroup, ABC):
if version and not version_is_new_enough(
self.sys_homeassistant.version, version
):
raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this system, requires Home Assistant version {version} or greater",
logger,
raise AddonNotSupportedHomeAssistantVersionError(
logger, slug=self.slug, version=str(version)
)
def _available(self, config) -> bool:
"""Return True if this add-on is available on this platform."""
try:
self._validate_availability(config)
except AddonsNotSupportedError:
except AddonNotSupportedError:
return False
return True

View File

@@ -93,15 +93,7 @@ class AddonOptions(CoreSysAttributes):
typ = self.raw_schema[key]
try:
if isinstance(typ, list):
# nested value list
options[key] = self._nested_validate_list(typ[0], value, key)
elif isinstance(typ, dict):
# nested value dict
options[key] = self._nested_validate_dict(typ, value, key)
else:
# normal value
options[key] = self._single_validate(typ, value, key)
options[key] = self._validate_element(typ, value, key)
except (IndexError, KeyError):
raise vol.Invalid(
f"Type error for option '{key}' in {self._name} ({self._slug})"
@@ -111,7 +103,20 @@ class AddonOptions(CoreSysAttributes):
return options
# pylint: disable=no-value-for-parameter
def _single_validate(self, typ: str, value: Any, key: str):
def _validate_element(self, typ: Any, value: Any, key: str) -> Any:
"""Validate a value against a type specification."""
if isinstance(typ, list):
# nested value list
return self._nested_validate_list(typ[0], value, key)
elif isinstance(typ, dict):
# nested value dict
return self._nested_validate_dict(typ, value, key)
else:
# normal value
return self._single_validate(typ, value, key)
# pylint: disable=no-value-for-parameter
def _single_validate(self, typ: str, value: Any, key: str) -> Any:
"""Validate a single element."""
# if required argument
if value is None:
@@ -188,7 +193,9 @@ class AddonOptions(CoreSysAttributes):
f"Fatal error for option '{key}' with type '{typ}' in {self._name} ({self._slug})"
) from None
def _nested_validate_list(self, typ: Any, data_list: list[Any], key: str):
def _nested_validate_list(
self, typ: Any, data_list: list[Any], key: str
) -> list[Any]:
"""Validate nested items."""
options = []
@@ -201,17 +208,13 @@ class AddonOptions(CoreSysAttributes):
# Process list
for element in data_list:
# Nested?
if isinstance(typ, dict):
c_options = self._nested_validate_dict(typ, element, key)
options.append(c_options)
else:
options.append(self._single_validate(typ, element, key))
options.append(self._validate_element(typ, element, key))
return options
def _nested_validate_dict(
self, typ: dict[Any, Any], data_dict: dict[Any, Any], key: str
):
) -> dict[Any, Any]:
"""Validate nested items."""
options = {}
@@ -231,12 +234,7 @@ class AddonOptions(CoreSysAttributes):
continue
# Nested?
if isinstance(typ[c_key], list):
options[c_key] = self._nested_validate_list(
typ[c_key][0], c_value, c_key
)
else:
options[c_key] = self._single_validate(typ[c_key], c_value, c_key)
options[c_key] = self._validate_element(typ[c_key], c_value, c_key)
self._check_missing_options(typ, options, key)
return options
@@ -274,18 +272,28 @@ class UiOptions(CoreSysAttributes):
# read options
for key, value in raw_schema.items():
if isinstance(value, list):
# nested value list
self._nested_ui_list(ui_schema, value, key)
elif isinstance(value, dict):
# nested value dict
self._nested_ui_dict(ui_schema, value, key)
else:
# normal value
self._single_ui_option(ui_schema, value, key)
self._ui_schema_element(ui_schema, value, key)
return ui_schema
def _ui_schema_element(
self,
ui_schema: list[dict[str, Any]],
value: str,
key: str,
multiple: bool = False,
):
if isinstance(value, list):
# nested value list
assert not multiple
self._nested_ui_list(ui_schema, value, key)
elif isinstance(value, dict):
# nested value dict
self._nested_ui_dict(ui_schema, value, key, multiple)
else:
# normal value
self._single_ui_option(ui_schema, value, key, multiple)
def _single_ui_option(
self,
ui_schema: list[dict[str, Any]],
@@ -377,10 +385,7 @@ class UiOptions(CoreSysAttributes):
_LOGGER.error("Invalid schema %s", key)
return
if isinstance(element, dict):
self._nested_ui_dict(ui_schema, element, key, multiple=True)
else:
self._single_ui_option(ui_schema, element, key, multiple=True)
self._ui_schema_element(ui_schema, element, key, multiple=True)
def _nested_ui_dict(
self,
@@ -399,11 +404,7 @@ class UiOptions(CoreSysAttributes):
nested_schema: list[dict[str, Any]] = []
for c_key, c_value in option_dict.items():
# Nested?
if isinstance(c_value, list):
self._nested_ui_list(nested_schema, c_value, c_key)
else:
self._single_ui_option(nested_schema, c_value, c_key)
self._ui_schema_element(nested_schema, c_value, c_key)
ui_node["schema"] = nested_schema
ui_schema.append(ui_node)

View File

@@ -32,6 +32,7 @@ from ..const import (
ATTR_DISCOVERY,
ATTR_DOCKER_API,
ATTR_ENVIRONMENT,
ATTR_FIELDS,
ATTR_FULL_ACCESS,
ATTR_GPIO,
ATTR_HASSIO_API,
@@ -137,7 +138,19 @@ RE_DOCKER_IMAGE_BUILD = re.compile(
r"^([a-zA-Z\-\.:\d{}]+/)*?([\-\w{}]+)/([\-\w{}]+)(:[\.\-\w{}]+)?$"
)
SCHEMA_ELEMENT = vol.Match(RE_SCHEMA_ELEMENT)
SCHEMA_ELEMENT = vol.Schema(
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
[
# A list may not directly contain another list
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
{str: vol.Self},
)
],
{str: vol.Self},
)
)
RE_MACHINE = re.compile(
r"^!?(?:"
@@ -266,10 +279,23 @@ def _migrate_addon_config(protocol=False):
volumes = []
for entry in config.get(ATTR_MAP, []):
if isinstance(entry, dict):
# Validate that dict entries have required 'type' field
if ATTR_TYPE not in entry:
_LOGGER.warning(
"Add-on config has invalid map entry missing 'type' field: %s. Skipping invalid entry for %s",
entry,
name,
)
continue
volumes.append(entry)
if isinstance(entry, str):
result = RE_VOLUME.match(entry)
if not result:
_LOGGER.warning(
"Add-on config has invalid map entry: %s. Skipping invalid entry for %s",
entry,
name,
)
continue
volumes.append(
{
@@ -278,8 +304,8 @@ def _migrate_addon_config(protocol=False):
}
)
if volumes:
config[ATTR_MAP] = volumes
# Always update config to clear potentially malformed ones
config[ATTR_MAP] = volumes
# 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
@@ -393,20 +419,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_CODENOTARY): vol.Email(),
vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
vol.Schema(
{
str: vol.Any(
SCHEMA_ELEMENT,
[
vol.Any(
SCHEMA_ELEMENT,
{str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])},
)
],
vol.Schema({str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])}),
)
}
),
vol.Schema({str: SCHEMA_ELEMENT}),
False,
),
vol.Optional(ATTR_IMAGE): docker_image,
@@ -442,6 +455,7 @@ SCHEMA_TRANSLATION_CONFIGURATION = vol.Schema(
{
vol.Required(ATTR_NAME): str,
vol.Optional(ATTR_DESCRIPTON): vol.Maybe(str),
vol.Optional(ATTR_FIELDS): {str: vol.Self},
},
extra=vol.REMOVE_EXTRA,
)

View File

@@ -146,6 +146,14 @@ class RestAPI(CoreSysAttributes):
follow=True,
),
),
web.get(
f"{path}/logs/latest",
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
latest=True,
),
),
web.get(
f"{path}/logs/boots/{{bootid}}",
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
@@ -440,6 +448,7 @@ class RestAPI(CoreSysAttributes):
# is known and reported to the user using the resolution center.
await async_capture_exception(err)
kwargs.pop("follow", None) # Follow is not supported for Docker logs
kwargs.pop("latest", None) # Latest is not supported for Docker logs
return await api_supervisor.logs(*args, **kwargs)
self.webapp.add_routes(
@@ -449,6 +458,10 @@ class RestAPI(CoreSysAttributes):
"/supervisor/logs/follow",
partial(get_supervisor_logs, follow=True),
),
web.get(
"/supervisor/logs/latest",
partial(get_supervisor_logs, latest=True),
),
web.get("/supervisor/logs/boots/{bootid}", get_supervisor_logs),
web.get(
"/supervisor/logs/boots/{bootid}/follow",
@@ -561,6 +574,10 @@ class RestAPI(CoreSysAttributes):
"/addons/{addon}/logs/follow",
partial(get_addon_logs, follow=True),
),
web.get(
"/addons/{addon}/logs/latest",
partial(get_addon_logs, latest=True),
),
web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
web.get(
"/addons/{addon}/logs/boots/{bootid}/follow",
@@ -735,6 +752,10 @@ class RestAPI(CoreSysAttributes):
"/store/addons/{addon}/documentation",
api_store.addons_addon_documentation,
),
web.get(
"/store/addons/{addon}/availability",
api_store.addons_addon_availability,
),
web.post(
"/store/addons/{addon}/install", api_store.addons_addon_install
),
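The latest-logs routes registered above behave like the other journal endpoints; a hypothetical client sketch, where the host, path, and token handling are assumptions based on the routes as registered:

import asyncio

import aiohttp

async def fetch_latest_core_logs(token: str) -> str:
    """Download the complete log of Core's most recent container startup."""
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "http://supervisor/core/logs/latest",
            headers={"Authorization": f"Bearer {token}"},
        ) as resp:
            resp.raise_for_status()
            return await resp.text()

# asyncio.run(fetch_latest_core_logs("<SUPERVISOR_TOKEN>"))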


@@ -3,7 +3,6 @@
from __future__ import annotations
import asyncio
from collections.abc import Callable
import errno
from io import IOBase
import logging
@@ -46,12 +45,9 @@ from ..const import (
ATTR_TYPE,
ATTR_VERSION,
REQUEST_FROM,
BusEvent,
CoreState,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APIForbidden, APINotFound
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..mounts.const import MountUsage
from ..resolution.const import UnhealthyReason
from .const import (
@@ -61,7 +57,7 @@ from .const import (
ATTR_LOCATIONS,
CONTENT_TYPE_TAR,
)
from .utils import api_process, api_validate
from .utils import api_process, api_validate, background_task
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -289,41 +285,6 @@ class APIBackups(CoreSysAttributes):
f"Location {LOCATION_CLOUD_BACKUP} is only available for Home Assistant"
)
async def _background_backup_task(
self, backup_method: Callable, *args, **kwargs
) -> tuple[asyncio.Task, str]:
"""Start backup task in background and return task and job ID."""
event = asyncio.Event()
job, backup_task = cast(
tuple[SupervisorJob, asyncio.Task],
self.sys_jobs.schedule_job(
backup_method, JobSchedulerOptions(), *args, **kwargs
),
)
async def release_on_freeze(new_state: CoreState):
if new_state == CoreState.FREEZE:
event.set()
# Wait for system to get into freeze state before returning
# If the backup fails validation it will raise before getting there
listener = self.sys_bus.register_event(
BusEvent.SUPERVISOR_STATE_CHANGE, release_on_freeze
)
try:
event_task = self.sys_create_task(event.wait())
_, pending = await asyncio.wait(
(backup_task, event_task),
return_when=asyncio.FIRST_COMPLETED,
)
# It seems backup returned early (error or something), make sure to cancel
# the event task to avoid "Task was destroyed but it is pending!" errors.
if event_task in pending:
event_task.cancel()
return (backup_task, job.uuid)
finally:
self.sys_bus.remove_listener(listener)
@api_process
async def backup_full(self, request: web.Request):
"""Create full backup."""
@@ -342,8 +303,8 @@ class APIBackups(CoreSysAttributes):
body[ATTR_ADDITIONAL_LOCATIONS] = locations
background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_full, **body
backup_task, job_id = await background_task(
self, self.sys_backups.do_backup_full, **body
)
if background and not backup_task.done():
@@ -378,8 +339,8 @@ class APIBackups(CoreSysAttributes):
body[ATTR_ADDONS] = list(self.sys_addons.local)
background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_partial, **body
backup_task, job_id = await background_task(
self, self.sys_backups.do_backup_partial, **body
)
if background and not backup_task.done():
@@ -402,8 +363,8 @@ class APIBackups(CoreSysAttributes):
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_full, backup, **body
restore_task, job_id = await background_task(
self, self.sys_backups.do_restore_full, backup, **body
)
if background and not restore_task.done() or await restore_task:
@@ -422,8 +383,8 @@ class APIBackups(CoreSysAttributes):
request, body.get(ATTR_LOCATION, backup.location)
)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_partial, backup, **body
restore_task, job_id = await background_task(
self, self.sys_backups.do_restore_partial, backup, **body
)
if background and not restore_task.done() or await restore_task:


@@ -20,6 +20,7 @@ from ..const import (
ATTR_CPU_PERCENT,
ATTR_IMAGE,
ATTR_IP_ADDRESS,
ATTR_JOB_ID,
ATTR_MACHINE,
ATTR_MEMORY_LIMIT,
ATTR_MEMORY_PERCENT,
@@ -37,8 +38,8 @@ from ..const import (
from ..coresys import CoreSysAttributes
from ..exceptions import APIDBMigrationInProgress, APIError
from ..validate import docker_image, network_port, version_tag
from .const import ATTR_FORCE, ATTR_SAFE_MODE
from .utils import api_process, api_validate
from .const import ATTR_BACKGROUND, ATTR_FORCE, ATTR_SAFE_MODE
from .utils import api_process, api_validate, background_task
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -61,6 +62,7 @@ SCHEMA_UPDATE = vol.Schema(
{
vol.Optional(ATTR_VERSION): version_tag,
vol.Optional(ATTR_BACKUP): bool,
vol.Optional(ATTR_BACKGROUND, default=False): bool,
}
)
@@ -170,18 +172,24 @@ class APIHomeAssistant(CoreSysAttributes):
}
@api_process
async def update(self, request: web.Request) -> None:
async def update(self, request: web.Request) -> dict[str, str] | None:
"""Update Home Assistant."""
body = await api_validate(SCHEMA_UPDATE, request)
await self._check_offline_migration()
await asyncio.shield(
self.sys_homeassistant.core.update(
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
backup=body.get(ATTR_BACKUP),
)
background = body[ATTR_BACKGROUND]
update_task, job_id = await background_task(
self,
self.sys_homeassistant.core.update,
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
backup=body.get(ATTR_BACKUP),
)
if background and not update_task.done():
return {ATTR_JOB_ID: job_id}
return await update_task
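With the new flag the handler returns as soon as validation has passed; an invented exchange for illustration:

# POST /core/update with body {"version": "2025.9.1", "background": true}
# answers immediately once validation completes, e.g.:
#     {"result": "ok", "data": {"job_id": "7f9c7f04c2d94a3c8c5e1a0c1b2d3e4f"}}
# so the client can follow progress via the jobs API instead of holding the
# HTTP request open for the whole image pull.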
@api_process
async def stop(self, request: web.Request) -> Awaitable[None]:
"""Stop Home Assistant."""


@@ -2,10 +2,17 @@
import asyncio
from contextlib import suppress
import json
import logging
from typing import Any
from aiohttp import ClientConnectionResetError, ClientPayloadError, web
from aiohttp import (
ClientConnectionResetError,
ClientError,
ClientPayloadError,
ClientTimeout,
web,
)
from aiohttp.hdrs import ACCEPT, RANGE
import voluptuous as vol
from voluptuous.error import CoerceInvalid
@@ -194,7 +201,11 @@ class APIHost(CoreSysAttributes):
return possible_offset
async def advanced_logs_handler(
self, request: web.Request, identifier: str | None = None, follow: bool = False
self,
request: web.Request,
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs."""
log_formatter = LogFormatter.PLAIN
@@ -213,6 +224,20 @@ class APIHost(CoreSysAttributes):
if follow:
params[PARAM_FOLLOW] = ""
if latest:
if not identifier:
raise APIError(
"Latest logs can only be fetched for a specific identifier."
)
try:
epoch = await self._get_container_last_epoch(identifier)
params["CONTAINER_LOG_EPOCH"] = epoch
except HostLogError as err:
raise APIError(
f"Cannot determine CONTAINER_LOG_EPOCH of {identifier}, latest logs not available."
) from err
if ACCEPT in request.headers and request.headers[ACCEPT] not in [
CONTENT_TYPE_TEXT,
CONTENT_TYPE_X_LOG,
@@ -241,6 +266,8 @@ class APIHost(CoreSysAttributes):
lines = max(2, lines)
# entries=cursor[[:num_skip]:num_entries]
range_header = f"entries=:-{lines - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else lines}"
elif latest:
range_header = f"entries=0:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX}"
elif RANGE in request.headers:
range_header = request.headers[RANGE]
else:
@@ -286,10 +313,14 @@ class APIHost(CoreSysAttributes):
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
async def advanced_logs(
self, request: web.Request, identifier: str | None = None, follow: bool = False
self,
request: web.Request,
identifier: str | None = None,
follow: bool = False,
latest: bool = False,
) -> web.StreamResponse:
"""Return systemd-journald logs. Wrapped as standard API handler."""
return await self.advanced_logs_handler(request, identifier, follow)
return await self.advanced_logs_handler(request, identifier, follow, latest)
@api_process
async def disk_usage(self, request: web.Request) -> dict:
@@ -336,3 +367,27 @@ class APIHost(CoreSysAttributes):
*known_paths,
],
}
async def _get_container_last_epoch(self, identifier: str) -> str | None:
"""Get Docker's internal log epoch of the latest log entry for the given identifier."""
try:
async with self.sys_host.logs.journald_logs(
params={"CONTAINER_NAME": identifier},
range_header="entries=:-1:2", # -1 = next to the last entry
accept=LogFormat.JSON,
timeout=ClientTimeout(total=10),
) as resp:
text = await resp.text()
except (ClientError, TimeoutError) as err:
raise HostLogError(
"Could not get last container epoch from systemd-journal-gatewayd",
_LOGGER.error,
) from err
try:
return json.loads(text.strip().split("\n")[-1])["CONTAINER_LOG_EPOCH"]
except (json.JSONDecodeError, KeyError, IndexError) as err:
raise HostLogError(
f"Failed to parse CONTAINER_LOG_EPOCH of {identifier} container, got: {text}",
_LOGGER.error,
) from err
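What the parser above expects from systemd-journal-gatewayd, shown with an invented JSON entry; only CONTAINER_LOG_EPOCH is consumed:

import json

text = '{"MESSAGE": "...", "CONTAINER_NAME": "homeassistant", "CONTAINER_LOG_EPOCH": "1758200000000000"}'
epoch = json.loads(text.strip().split("\n")[-1])["CONTAINER_LOG_EPOCH"]
assert epoch == "1758200000000000"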


@@ -199,21 +199,25 @@ class APIIngress(CoreSysAttributes):
url = f"{url}?{request.query_string}"
# Start proxy
async with self.sys_websession.ws_connect(
url,
headers=source_header,
protocols=req_protocols,
autoclose=False,
autoping=False,
) as ws_client:
# Proxy requests
await asyncio.wait(
[
self.sys_create_task(_websocket_forward(ws_server, ws_client)),
self.sys_create_task(_websocket_forward(ws_client, ws_server)),
],
return_when=asyncio.FIRST_COMPLETED,
)
try:
_LOGGER.debug("Proxing WebSocket to %s, upstream url: %s", addon.slug, url)
async with self.sys_websession.ws_connect(
url,
headers=source_header,
protocols=req_protocols,
autoclose=False,
autoping=False,
) as ws_client:
# Proxy requests
await asyncio.wait(
[
self.sys_create_task(_websocket_forward(ws_server, ws_client)),
self.sys_create_task(_websocket_forward(ws_client, ws_server)),
],
return_when=asyncio.FIRST_COMPLETED,
)
except TimeoutError:
_LOGGER.warning("WebSocket proxy to %s timed out", addon.slug)
return ws_server
@@ -286,6 +290,7 @@ class APIIngress(CoreSysAttributes):
aiohttp.ClientError,
aiohttp.ClientPayloadError,
ConnectionResetError,
ConnectionError,
) as err:
_LOGGER.error("Stream error with %s: %s", url, err)
@@ -386,9 +391,9 @@ async def _websocket_forward(ws_from, ws_to):
elif msg.type == aiohttp.WSMsgType.BINARY:
await ws_to.send_bytes(msg.data)
elif msg.type == aiohttp.WSMsgType.PING:
await ws_to.ping()
await ws_to.ping(msg.data)
elif msg.type == aiohttp.WSMsgType.PONG:
await ws_to.pong()
await ws_to.pong(msg.data)
elif ws_to.closed:
await ws_to.close(code=ws_to.close_code, message=msg.extra)
except RuntimeError:


@@ -26,7 +26,9 @@ from ..const import (
ATTR_IP6_PRIVACY,
ATTR_IPV4,
ATTR_IPV6,
ATTR_LLMNR,
ATTR_MAC,
ATTR_MDNS,
ATTR_METHOD,
ATTR_MODE,
ATTR_NAMESERVERS,
@@ -54,6 +56,7 @@ from ..host.configuration import (
Ip6Setting,
IpConfig,
IpSetting,
MulticastDnsMode,
VlanConfig,
WifiConfig,
)
@@ -97,6 +100,8 @@ SCHEMA_UPDATE = vol.Schema(
vol.Optional(ATTR_IPV6): _SCHEMA_IPV6_CONFIG,
vol.Optional(ATTR_WIFI): _SCHEMA_WIFI_CONFIG,
vol.Optional(ATTR_ENABLED): vol.Boolean(),
vol.Optional(ATTR_MDNS): vol.Coerce(MulticastDnsMode),
vol.Optional(ATTR_LLMNR): vol.Coerce(MulticastDnsMode),
}
)
@@ -160,6 +165,8 @@ def interface_struct(interface: Interface) -> dict[str, Any]:
else None,
ATTR_WIFI: wifi_struct(interface.wifi) if interface.wifi else None,
ATTR_VLAN: vlan_struct(interface.vlan) if interface.vlan else None,
ATTR_MDNS: interface.mdns,
ATTR_LLMNR: interface.llmnr,
}
@@ -260,6 +267,10 @@ class APINetwork(CoreSysAttributes):
)
elif key == ATTR_ENABLED:
interface.enabled = config
elif key == ATTR_MDNS:
interface.mdns = config
elif key == ATTR_LLMNR:
interface.llmnr = config
await asyncio.shield(self.sys_host.network.apply_changes(interface))
@@ -300,6 +311,15 @@ class APINetwork(CoreSysAttributes):
vlan_config = VlanConfig(vlan, interface.name)
mdns_mode = MulticastDnsMode.DEFAULT
llmnr_mode = MulticastDnsMode.DEFAULT
if ATTR_MDNS in body:
mdns_mode = body[ATTR_MDNS]
if ATTR_LLMNR in body:
llmnr_mode = body[ATTR_LLMNR]
ipv4_setting = None
if ATTR_IPV4 in body:
ipv4_setting = IpSetting(
@@ -338,5 +358,7 @@ class APINetwork(CoreSysAttributes):
ipv6_setting,
None,
vlan_config,
mdns=mdns_mode,
llmnr=llmnr_mode,
)
await asyncio.shield(self.sys_host.network.create_vlan(vlan_interface))
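A hypothetical API payload using the new fields; the interface name is invented, and the schema above coerces the strings into MulticastDnsMode:

# POST /network/interface/end0/update
payload = {
    "mdns": "announce",  # resolve and announce via mDNS
    "llmnr": "off",  # disable LLMNR entirely
}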


@@ -1,7 +1,6 @@
"""Init file for Supervisor Home Assistant RESTful API."""
import asyncio
from collections.abc import Awaitable
from pathlib import Path
from typing import Any, cast
@@ -36,6 +35,7 @@ from ..const import (
ATTR_ICON,
ATTR_INGRESS,
ATTR_INSTALLED,
ATTR_JOB_ID,
ATTR_LOGO,
ATTR_LONG_DESCRIPTION,
ATTR_MAINTAINER,
@@ -57,11 +57,13 @@ from ..exceptions import APIError, APIForbidden, APINotFound
from ..store.addon import AddonStore
from ..store.repository import Repository
from ..store.validate import validate_repository
from .const import CONTENT_TYPE_PNG, CONTENT_TYPE_TEXT
from .const import ATTR_BACKGROUND, CONTENT_TYPE_PNG, CONTENT_TYPE_TEXT
from .utils import background_task
SCHEMA_UPDATE = vol.Schema(
{
vol.Optional(ATTR_BACKUP): bool,
vol.Optional(ATTR_BACKGROUND, default=False): bool,
}
)
@@ -69,6 +71,12 @@ SCHEMA_ADD_REPOSITORY = vol.Schema(
{vol.Required(ATTR_REPOSITORY): vol.All(str, validate_repository)}
)
SCHEMA_INSTALL = vol.Schema(
{
vol.Optional(ATTR_BACKGROUND, default=False): bool,
}
)
def _read_static_text_file(path: Path) -> Any:
"""Read in a static text file asset for API output.
@@ -217,24 +225,45 @@ class APIStore(CoreSysAttributes):
}
@api_process
def addons_addon_install(self, request: web.Request) -> Awaitable[None]:
async def addons_addon_install(self, request: web.Request) -> dict[str, str] | None:
"""Install add-on."""
addon = self._extract_addon(request)
return asyncio.shield(self.sys_addons.install(addon.slug))
body = await api_validate(SCHEMA_INSTALL, request)
background = body[ATTR_BACKGROUND]
install_task, job_id = await background_task(
self, self.sys_addons.install, addon.slug
)
if background and not install_task.done():
return {ATTR_JOB_ID: job_id}
return await install_task
@api_process
async def addons_addon_update(self, request: web.Request) -> None:
async def addons_addon_update(self, request: web.Request) -> dict[str, str] | None:
"""Update add-on."""
addon = self._extract_addon(request, installed=True)
if addon == request.get(REQUEST_FROM):
raise APIForbidden(f"Add-on {addon.slug} can't update itself!")
body = await api_validate(SCHEMA_UPDATE, request)
background = body[ATTR_BACKGROUND]
if start_task := await asyncio.shield(
self.sys_addons.update(addon.slug, backup=body.get(ATTR_BACKUP))
):
update_task, job_id = await background_task(
self,
self.sys_addons.update,
addon.slug,
backup=body.get(ATTR_BACKUP),
)
if background and not update_task.done():
return {ATTR_JOB_ID: job_id}
if start_task := await update_task:
await start_task
return None
@api_process
async def addons_addon_info(self, request: web.Request) -> dict[str, Any]:
@@ -297,6 +326,12 @@ class APIStore(CoreSysAttributes):
_read_static_text_file, addon.path_documentation
)
@api_process
async def addons_addon_availability(self, request: web.Request) -> None:
"""Check add-on availability for current system."""
addon = cast(AddonStore, self._extract_addon(request))
addon.validate_availability()
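In practice this endpoint is a yes/no probe; a sketch of the expected responses, with shapes inferred from the api_return_error changes elsewhere in this diff:

# GET /store/addons/{addon}/availability answers {"result": "ok"} when the
# add-on can be installed on this system; otherwise the validation error is
# serialized with error_key, message_template and extra_fields so the
# frontend can translate it.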
@api_process
async def repositories_list(self, request: web.Request) -> list[dict[str, Any]]:
"""Return all repositories."""


@@ -1,7 +1,9 @@
"""Init file for Supervisor util for RESTful API."""
import asyncio
from collections.abc import Callable
import json
from typing import Any
from typing import Any, cast
from aiohttp import web
from aiohttp.hdrs import AUTHORIZATION
@@ -14,8 +16,11 @@ from ..const import (
HEADER_TOKEN,
HEADER_TOKEN_OLD,
JSON_DATA,
JSON_ERROR_KEY,
JSON_EXTRA_FIELDS,
JSON_JOB_ID,
JSON_MESSAGE,
JSON_MESSAGE_TEMPLATE,
JSON_RESULT,
REQUEST_FROM,
RESULT_ERROR,
@@ -23,6 +28,7 @@ from ..const import (
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIError, BackupFileNotFoundError, DockerAPIError, HassioError
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
from ..utils.log_format import format_message
@@ -133,10 +139,11 @@ def api_process_raw(content, *, error_type=None):
def api_return_error(
error: Exception | None = None,
error: HassioError | None = None,
message: str | None = None,
error_type: str | None = None,
status: int = 400,
*,
job_id: str | None = None,
) -> web.Response:
"""Return an API error message."""
@@ -155,12 +162,18 @@ def api_return_error(
body=message.encode(), content_type=error_type, status=status
)
case _:
result = {
result: dict[str, Any] = {
JSON_RESULT: RESULT_ERROR,
JSON_MESSAGE: message,
}
if job_id:
result[JSON_JOB_ID] = job_id
if error and error.error_key:
result[JSON_ERROR_KEY] = error.error_key
if error and error.message_template:
result[JSON_MESSAGE_TEMPLATE] = error.message_template
if error and error.extra_fields:
result[JSON_EXTRA_FIELDS] = error.extra_fields
return web.json_response(
result,
@@ -198,3 +211,47 @@ async def api_validate(
data_validated[origin_value] = data[origin_value]
return data_validated
async def background_task(
coresys_obj: CoreSysAttributes,
task_method: Callable,
*args,
**kwargs,
) -> tuple[asyncio.Task, str]:
"""Start task in background and return task and job ID.
Args:
coresys_obj: Instance that accesses coresys data using CoreSysAttributes
task_method: The method to execute in the background. Must include a keyword arg 'validation_complete' of type asyncio.Event. Should set it after any initial validation has completed
*args: Arguments to pass to task_method
**kwargs: Keyword arguments to pass to task_method
Returns:
Tuple of (task, job_id)
"""
event = asyncio.Event()
job, task = cast(
tuple[SupervisorJob, asyncio.Task],
coresys_obj.sys_jobs.schedule_job(
task_method,
JobSchedulerOptions(),
*args,
validation_complete=event,
**kwargs,
),
)
# Wait for provided event before returning
# If the task fails validation it should raise before getting there
event_task = coresys_obj.sys_create_task(event.wait())
_, pending = await asyncio.wait(
(task, event_task),
return_when=asyncio.FIRST_COMPLETED,
)
# It seems task returned early (error or something), make sure to cancel
# the event task to avoid "Task was destroyed but it is pending!" errors.
if event_task in pending:
event_task.cancel()
return (task, job.uuid)
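A minimal usage sketch mirroring the callers above; the handler and attribute names are invented:

@api_process
async def thing_update(self, request: web.Request) -> dict[str, str] | None:
    """Invented handler showing the background-task pattern."""
    body = await api_validate(SCHEMA_UPDATE, request)
    task, job_id = await background_task(self, self.sys_thing.update)
    if body[ATTR_BACKGROUND] and not task.done():
        return {ATTR_JOB_ID: job_id}  # validation passed, work continues
    return await task  # foreground call: wait for the full result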


@@ -598,6 +598,7 @@ class BackupManager(FileConfiguration, JobGroup):
homeassistant_exclude_database: bool | None = None,
extra: dict | None = None,
additional_locations: list[LOCATION_TYPE] | None = None,
validation_complete: asyncio.Event | None = None,
) -> Backup | None:
"""Create a full backup."""
await self._check_location(location)
@@ -614,6 +615,10 @@ class BackupManager(FileConfiguration, JobGroup):
name, filename, BackupType.FULL, password, compressed, location, extra
)
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
_LOGGER.info("Creating new full backup with slug %s", new_backup.slug)
backup = await self._do_backup(
new_backup,
@@ -648,6 +653,7 @@ class BackupManager(FileConfiguration, JobGroup):
homeassistant_exclude_database: bool | None = None,
extra: dict | None = None,
additional_locations: list[LOCATION_TYPE] | None = None,
validation_complete: asyncio.Event | None = None,
) -> Backup | None:
"""Create a partial backup."""
await self._check_location(location)
@@ -684,6 +690,10 @@ class BackupManager(FileConfiguration, JobGroup):
continue
_LOGGER.warning("Add-on %s not found/installed", addon_slug)
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
backup = await self._do_backup(
new_backup,
addon_list,
@@ -817,8 +827,10 @@ class BackupManager(FileConfiguration, JobGroup):
async def do_restore_full(
self,
backup: Backup,
*,
password: str | None = None,
location: str | None | type[DEFAULT] = DEFAULT,
validation_complete: asyncio.Event | None = None,
) -> bool:
"""Restore a backup."""
# Add backup ID to job
@@ -838,6 +850,10 @@ class BackupManager(FileConfiguration, JobGroup):
_LOGGER.error,
)
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
_LOGGER.info("Full-Restore %s start", backup.slug)
await self.sys_core.set_state(CoreState.FREEZE)
@@ -876,11 +892,13 @@ class BackupManager(FileConfiguration, JobGroup):
async def do_restore_partial(
self,
backup: Backup,
*,
homeassistant: bool = False,
addons: list[str] | None = None,
folders: list[str] | None = None,
password: str | None = None,
location: str | None | type[DEFAULT] = DEFAULT,
validation_complete: asyncio.Event | None = None,
) -> bool:
"""Restore a backup."""
# Add backup ID to job
@@ -908,6 +926,10 @@ class BackupManager(FileConfiguration, JobGroup):
_LOGGER.error,
)
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
_LOGGER.info("Partial-Restore %s start", backup.slug)
await self.sys_core.set_state(CoreState.FREEZE)


@@ -2,6 +2,7 @@
# ruff: noqa: T100
import asyncio
from collections.abc import Callable
from importlib import import_module
import logging
import os
@@ -226,6 +227,10 @@ def initialize_system(coresys: CoreSys) -> None:
)
config.path_addon_configs.mkdir()
if not config.path_cid_files.is_dir():
_LOGGER.debug("Creating Docker cidfiles folder at '%s'", config.path_cid_files)
config.path_cid_files.mkdir()
def warning_handler(message, category, filename, lineno, file=None, line=None):
"""Warning handler which logs warnings using the logging module."""
@@ -285,26 +290,22 @@ def check_environment() -> None:
_LOGGER.critical("Can't find Docker socket!")
def register_signal_handlers(loop: asyncio.AbstractEventLoop, coresys: CoreSys) -> None:
def register_signal_handlers(
loop: asyncio.AbstractEventLoop, shutdown_handler: Callable[[], None]
) -> None:
"""Register SIGTERM, SIGHUP and SIGKILL to stop the Supervisor."""
try:
loop.add_signal_handler(
signal.SIGTERM, lambda: loop.create_task(coresys.core.stop())
)
loop.add_signal_handler(signal.SIGTERM, shutdown_handler)
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGTERM")
try:
loop.add_signal_handler(
signal.SIGHUP, lambda: loop.create_task(coresys.core.stop())
)
loop.add_signal_handler(signal.SIGHUP, shutdown_handler)
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGHUP")
try:
loop.add_signal_handler(
signal.SIGINT, lambda: loop.create_task(coresys.core.stop())
)
loop.add_signal_handler(signal.SIGINT, shutdown_handler)
except (ValueError, RuntimeError):
_LOGGER.warning("Could not bind to SIGINT")


@@ -54,6 +54,7 @@ MOUNTS_CREDENTIALS = PurePath(".mounts_credentials")
EMERGENCY_DATA = PurePath("emergency")
ADDON_CONFIGS = PurePath("addon_configs")
CORE_BACKUP_DATA = PurePath("core/backup")
CID_FILES = PurePath("cid_files")
DEFAULT_BOOT_TIME = datetime.fromtimestamp(0, UTC).isoformat()
@@ -399,6 +400,16 @@ class CoreConfig(FileConfiguration):
"""Return root media data folder external for Docker."""
return PurePath(self.path_extern_supervisor, MEDIA_DATA)
@property
def path_cid_files(self) -> Path:
"""Return CID files folder."""
return self.path_supervisor / CID_FILES
@property
def path_extern_cid_files(self) -> PurePath:
"""Return CID files folder."""
return PurePath(self.path_extern_supervisor, CID_FILES)
@property
def addons_repositories(self) -> list[str]:
"""Return list of custom Add-on repositories."""


@@ -76,6 +76,9 @@ JSON_DATA = "data"
JSON_MESSAGE = "message"
JSON_RESULT = "result"
JSON_JOB_ID = "job_id"
JSON_ERROR_KEY = "error_key"
JSON_MESSAGE_TEMPLATE = "message_template"
JSON_EXTRA_FIELDS = "extra_fields"
RESULT_ERROR = "error"
RESULT_OK = "ok"
@@ -186,6 +189,7 @@ ATTR_EVENT = "event"
ATTR_EXCLUDE_DATABASE = "exclude_database"
ATTR_EXTRA = "extra"
ATTR_FEATURES = "features"
ATTR_FIELDS = "fields"
ATTR_FILENAME = "filename"
ATTR_FLAGS = "flags"
ATTR_FOLDERS = "folders"
@@ -199,6 +203,7 @@ ATTR_HASSIO_API = "hassio_api"
ATTR_HASSIO_ROLE = "hassio_role"
ATTR_HASSOS = "hassos"
ATTR_HASSOS_UNRESTRICTED = "hassos_unrestricted"
ATTR_HASSOS_UPGRADE = "hassos_upgrade"
ATTR_HEALTHY = "healthy"
ATTR_HEARTBEAT_LED = "heartbeat_led"
ATTR_HOMEASSISTANT = "homeassistant"
@@ -242,6 +247,7 @@ ATTR_KERNEL_MODULES = "kernel_modules"
ATTR_LABELS = "labels"
ATTR_LAST_BOOT = "last_boot"
ATTR_LEGACY = "legacy"
ATTR_LLMNR = "llmnr"
ATTR_LOCALS = "locals"
ATTR_LOCATION = "location"
ATTR_LOGGING = "logging"
@@ -252,6 +258,7 @@ ATTR_MACHINE = "machine"
ATTR_MACHINE_ID = "machine_id"
ATTR_MAINTAINER = "maintainer"
ATTR_MAP = "map"
ATTR_MDNS = "mdns"
ATTR_MEMORY_LIMIT = "memory_limit"
ATTR_MEMORY_PERCENT = "memory_percent"
ATTR_MEMORY_USAGE = "memory_usage"


@@ -196,30 +196,20 @@ class Core(CoreSysAttributes):
self.sys_resolution.add_unhealthy_reason(UnhealthyReason.SETUP)
await async_capture_exception(err)
# Set OS Agent diagnostics if needed
if (
self.sys_config.diagnostics is not None
and self.sys_dbus.agent.diagnostics != self.sys_config.diagnostics
and not self.sys_dev
and self.supported
):
try:
await self.sys_dbus.agent.set_diagnostics(self.sys_config.diagnostics)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning(
"Could not set diagnostics to %s due to %s",
self.sys_config.diagnostics,
err,
)
await async_capture_exception(err)
# Evaluate the system
await self.sys_resolution.evaluate.evaluate_system()
async def start(self) -> None:
"""Start Supervisor orchestration."""
await self.set_state(CoreState.STARTUP)
# Set OS Agent diagnostics if needed
if (
self.sys_dbus.agent.is_connected
and self.sys_config.diagnostics is not None
and self.sys_dbus.agent.diagnostics != self.sys_config.diagnostics
and self.supported
):
_LOGGER.debug("Set OS Agent diagnostics to %s", self.sys_config.diagnostics)
await self.sys_dbus.agent.set_diagnostics(self.sys_config.diagnostics)
# Check if system is healthy
if not self.supported:
_LOGGER.warning("System running in a unsupported environment!")


@@ -253,7 +253,7 @@ class ConnectionType(StrEnum):
class ConnectionStateType(IntEnum):
"""Connection states.
https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActiveConnectionState
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMActiveConnectionState
"""
UNKNOWN = 0
@@ -266,7 +266,7 @@ class ConnectionStateType(IntEnum):
class ConnectionStateFlags(IntEnum):
"""Connection state flags.
https://developer-old.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActivationStateFlags
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMActivationStateFlags
"""
NONE = 0
@@ -283,7 +283,7 @@ class ConnectionStateFlags(IntEnum):
class ConnectivityState(IntEnum):
"""Network connectvity.
https://developer.gnome.org/NetworkManager/unstable/nm-dbus-types.html#NMConnectivityState
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMConnectivityState
"""
CONNECTIVITY_UNKNOWN = 0
@@ -296,7 +296,7 @@ class ConnectivityState(IntEnum):
class DeviceType(IntEnum):
"""Device types.
https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMDeviceType
https://networkmanager.dev/docs/api/latest/nm-dbus-types.html#NMDeviceType
"""
UNKNOWN = 0
@@ -333,6 +333,15 @@ class MulticastProtocolEnabled(StrEnum):
RESOLVE = "resolve"
class MulticastDnsValue(IntEnum):
"""Connection MulticastDNS (mdns/llmnr) values."""
DEFAULT = -1
OFF = 0
RESOLVE = 1
ANNOUNCE = 2
class DNSOverTLSEnabled(StrEnum):
"""DNS over TLS enabled."""


@@ -44,7 +44,7 @@ MINIMAL_VERSION = AwesomeVersion("1.14.6")
class NetworkManager(DBusInterfaceProxy):
"""Handle D-Bus interface for Network Manager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.html
"""
name: str = DBUS_NAME_NM


@@ -15,7 +15,7 @@ from ..interface import DBusInterfaceProxy, dbus_property
class NetworkWirelessAP(DBusInterfaceProxy):
"""NetworkWireless AP object for Network Manager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.AccessPoint.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.AccessPoint.html
"""
bus_name: str = DBUS_NAME_NM


@@ -24,6 +24,8 @@ class ConnectionProperties:
uuid: str | None
type: str | None
interface_name: str | None
mdns: int | None
llmnr: int | None
@dataclass(slots=True)


@@ -27,7 +27,7 @@ from .ip_configuration import IpConfiguration
class NetworkConnection(DBusInterfaceProxy):
"""Active network connection object for Network Manager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.Connection.Active.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.Connection.Active.html
"""
bus_name: str = DBUS_NAME_NM


@@ -32,7 +32,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class NetworkManagerDNS(DBusInterfaceProxy):
"""Handle D-Bus interface for NM DnsManager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.DnsManager.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.DnsManager.html
"""
bus_name: str = DBUS_NAME_NM


@@ -27,7 +27,7 @@ from .wireless import NetworkWireless
class NetworkInterface(DBusInterfaceProxy):
"""NetworkInterface object represents Network Manager Device objects.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.Device.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.Device.html
"""
bus_name: str = DBUS_NAME_NM


@@ -6,7 +6,7 @@ from typing import Any
from dbus_fast import Variant
from dbus_fast.aio.message_bus import MessageBus
from ...const import DBUS_NAME_NM
from ...const import DBUS_NAME_NM, MulticastDnsValue
from ...interface import DBusInterface
from ...utils import dbus_connected
from ..configuration import (
@@ -225,7 +225,7 @@ class NetworkSetting(DBusInterface):
data = await self.get_settings()
# Get configuration settings we care about
# See: https://developer-old.gnome.org/NetworkManager/stable/ch01.html
# See: https://networkmanager.dev/docs/api/latest/nm-settings-dbus.html
if CONF_ATTR_CONNECTION in data:
self._connection = ConnectionProperties(
id=data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_ID),
@@ -234,6 +234,12 @@ class NetworkSetting(DBusInterface):
interface_name=data[CONF_ATTR_CONNECTION].get(
CONF_ATTR_CONNECTION_INTERFACE_NAME
),
mdns=data[CONF_ATTR_CONNECTION].get(
CONF_ATTR_CONNECTION_MDNS, MulticastDnsValue.DEFAULT.value
),
llmnr=data[CONF_ATTR_CONNECTION].get(
CONF_ATTR_CONNECTION_LLMNR, MulticastDnsValue.DEFAULT.value
),
)
if CONF_ATTR_802_ETHERNET in data:

View File

@@ -14,7 +14,9 @@ from ....host.const import (
InterfaceIp6Privacy,
InterfaceMethod,
InterfaceType,
MulticastDnsMode,
)
from ...const import MulticastDnsValue
from .. import NetworkManager
from . import (
CONF_ATTR_802_ETHERNET,
@@ -58,6 +60,14 @@ if TYPE_CHECKING:
from ....host.configuration import Interface
MULTICAST_DNS_MODE_VALUE_MAPPING = {
MulticastDnsMode.DEFAULT: MulticastDnsValue.DEFAULT,
MulticastDnsMode.OFF: MulticastDnsValue.OFF,
MulticastDnsMode.RESOLVE: MulticastDnsValue.RESOLVE,
MulticastDnsMode.ANNOUNCE: MulticastDnsValue.ANNOUNCE,
}
def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
ipv4 = {}
if not ipv4setting or ipv4setting.method == InterfaceMethod.AUTO:
@@ -163,6 +173,13 @@ def _get_ipv6_connection_settings(
return ipv6
def _map_mdns_setting(mode: MulticastDnsMode | None) -> MulticastDnsValue:
if mode is None:
return MulticastDnsValue.DEFAULT
return MULTICAST_DNS_MODE_VALUE_MAPPING.get(mode, MulticastDnsValue.DEFAULT)
def get_connection_from_interface(
interface: Interface,
network_manager: NetworkManager,
@@ -189,13 +206,16 @@ def get_connection_from_interface(
if not uuid:
uuid = str(uuid4())
llmnr = _map_mdns_setting(interface.llmnr)
mdns = _map_mdns_setting(interface.mdns)
conn: dict[str, dict[str, Variant]] = {
CONF_ATTR_CONNECTION: {
CONF_ATTR_CONNECTION_ID: Variant("s", name),
CONF_ATTR_CONNECTION_UUID: Variant("s", uuid),
CONF_ATTR_CONNECTION_TYPE: Variant("s", iftype),
CONF_ATTR_CONNECTION_LLMNR: Variant("i", 2),
CONF_ATTR_CONNECTION_MDNS: Variant("i", 2),
CONF_ATTR_CONNECTION_LLMNR: Variant("i", int(llmnr)),
CONF_ATTR_CONNECTION_MDNS: Variant("i", int(mdns)),
CONF_ATTR_CONNECTION_AUTOCONNECT: Variant("b", True),
},
}


@@ -17,7 +17,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class NetworkManagerSettings(DBusInterface):
"""Handle D-Bus interface for Network Manager Connection Settings Profile Manager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.Settings.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.Settings.html
"""
bus_name: str = DBUS_NAME_NM


@@ -21,7 +21,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class NetworkWireless(DBusInterfaceProxy):
"""Wireless object for Network Manager.
https://developer.gnome.org/NetworkManager/stable/gdbus-org.freedesktop.NetworkManager.Device.Wireless.html
https://networkmanager.dev/docs/api/latest/gdbus-org.freedesktop.NetworkManager.Device.Wireless.html
"""
bus_name: str = DBUS_NAME_NM


@@ -628,6 +628,8 @@ class DockerAddon(DockerInterface):
image: str | None = None,
latest: bool = False,
arch: CpuArch | None = None,
*,
progress_job_id: str | None = None,
) -> None:
"""Update a docker image."""
image = image or self.image
@@ -643,6 +645,7 @@ class DockerAddon(DockerInterface):
latest=latest,
arch=arch,
need_build=self.addon.latest_need_build,
progress_job_id=progress_job_id,
)
@Job(
@@ -658,12 +661,15 @@ class DockerAddon(DockerInterface):
arch: CpuArch | None = None,
*,
need_build: bool | None = None,
progress_job_id: str | None = None,
) -> None:
"""Pull Docker image or build it."""
if need_build is None and self.addon.need_build or need_build:
await self._build(version, image)
else:
await super().install(version, image, latest, arch)
await super().install(
version, image, latest, arch, progress_job_id=progress_job_id
)
async def _build(self, version: AwesomeVersion, image: str | None = None) -> None:
"""Build a Docker container."""


@@ -1,15 +1,20 @@
"""Docker constants."""
from __future__ import annotations
from contextlib import suppress
from enum import Enum, StrEnum
from functools import total_ordering
from pathlib import PurePath
from typing import Self, cast
import re
from typing import cast
from docker.types import Mount
from ..const import MACHINE_ID
RE_RETRYING_DOWNLOAD_STATUS = re.compile(r"Retrying in \d+ seconds?")
class Capabilities(StrEnum):
"""Linux Capabilities."""
@@ -79,6 +84,7 @@ class PullImageLayerStage(Enum):
"""
PULLING_FS_LAYER = 1, "Pulling fs layer"
RETRYING_DOWNLOAD = 2, "Retrying download"
DOWNLOADING = 2, "Downloading"
VERIFYING_CHECKSUM = 3, "Verifying Checksum"
DOWNLOAD_COMPLETE = 4, "Download complete"
@@ -107,11 +113,16 @@ class PullImageLayerStage(Enum):
return hash(self.status)
@classmethod
def from_status(cls, status: str) -> Self | None:
def from_status(cls, status: str) -> PullImageLayerStage | None:
"""Return stage instance from pull log status."""
for i in cls:
if i.status == status:
return i
# This one includes the number of seconds until the next download attempt, so it's not constant
if RE_RETRYING_DOWNLOAD_STATUS.match(status):
return cls.RETRYING_DOWNLOAD
return None
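A quick self-contained check of the retry regex; the status strings mimic Docker's backoff messages:

import re

RE_RETRYING = re.compile(r"Retrying in \d+ seconds?")
assert RE_RETRYING.match("Retrying in 1 second")
assert RE_RETRYING.match("Retrying in 15 seconds")
assert not RE_RETRYING.match("Download complete")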
@@ -154,7 +165,6 @@ PATH_LOCAL_ADDONS = PurePath("/addons")
PATH_BACKUP = PurePath("/backup")
PATH_SHARE = PurePath("/share")
PATH_MEDIA = PurePath("/media")
PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
# https://hub.docker.com/_/docker
ADDON_BUILDER_IMAGE = "docker.io/library/docker"


@@ -220,10 +220,12 @@ class DockerInterface(JobGroup, ABC):
await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)
def _process_pull_image_log(self, job_id: str, reference: PullLogEntry) -> None:
def _process_pull_image_log(
self, reference_id: str, progress_job_id: str, reference: PullLogEntry
) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
reference.job_id != job_id
reference.job_id != reference_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
@@ -237,21 +239,22 @@ class DockerInterface(JobGroup, ABC):
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
parent_id=job_id,
parent_id=progress_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
if j.parent_id == job_id and j.reference == reference.id:
if j.parent_id == progress_job_id and j.reference == reference.id:
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {job_id} but could not find a matching job, skipping",
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {progress_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
@@ -291,8 +294,10 @@ class DockerInterface(JobGroup, ABC):
progress = 50
case PullImageLayerStage.PULL_COMPLETE:
progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0
if progress < job.progress:
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
_LOGGER.debug,
@@ -300,7 +305,7 @@ class DockerInterface(JobGroup, ABC):
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on other stages as the total bytes may be useful after that stage
# We'll leave it around on later stages as the total bytes may be useful after that stage
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
@@ -318,12 +323,61 @@ class DockerInterface(JobGroup, ABC):
progress=progress,
stage=stage.status,
done=stage == PullImageLayerStage.PULL_COMPLETE,
extra=None
if stage == PullImageLayerStage.RETRYING_DOWNLOAD
else job.extra,
)
# Once we have received a progress update for every child job, start to set status of the main one
install_job = self.sys_jobs.get_job(progress_job_id)
layer_jobs = [
job
for job in self.sys_jobs.jobs
if job.parent_id == install_job.uuid
and job.name == "Pulling container image layer"
]
# First set the total bytes to be downloaded/extracted on the main job
if not install_job.extra:
total = 0
for job in layer_jobs:
if not job.extra:
return
total += job.extra["total"]
install_job.extra = {"total": total}
else:
total = install_job.extra["total"]
# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra:
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
if job_stage < PullImageLayerStage.EXTRACTING:
stage = PullImageLayerStage.DOWNLOADING
elif (
stage == PullImageLayerStage.PULL_COMPLETE
and job_stage < PullImageLayerStage.PULL_COMPLETE
):
stage = PullImageLayerStage.EXTRACTING
# Ensure progress is 100 at this point to prevent float drift
if stage == PullImageLayerStage.PULL_COMPLETE:
progress = 100
# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
if stage != install_job.stage or progress >= install_job.progress + 1:
install_job.update(stage=stage.status, progress=progress)
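A worked example of the weighting, with invented layer sizes:

# Two layers: 80 MB extracting at 60%, 20 MB already pull-complete (100%).
total = 80 + 20
progress = 60 * (80 / total) + 100 * (20 / total)
assert progress == 68.0  # overall stage stays at "Extracting" until both finish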
@Job(
name="docker_interface_install",
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
internal=True,
)
async def install(
self,
@@ -331,6 +385,8 @@ class DockerInterface(JobGroup, ABC):
image: str | None = None,
latest: bool = False,
arch: CpuArch | None = None,
*,
progress_job_id: str | None = None,
) -> None:
"""Pull docker image."""
image = image or self.image
@@ -346,11 +402,15 @@ class DockerInterface(JobGroup, ABC):
# Try login if we have defined credentials
await self._docker_login(image)
job_id = self.sys_jobs.current.uuid
reference_id = self.sys_jobs.current.uuid
if not progress_job_id:
progress_job_id = reference_id
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(job_id, reference)
self._process_pull_image_log(
reference_id, progress_job_id, reference
)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
@@ -624,7 +684,12 @@ class DockerInterface(JobGroup, ABC):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False
self,
version: AwesomeVersion,
image: str | None = None,
latest: bool = False,
*,
progress_job_id: str | None = None,
) -> None:
"""Update a Docker image."""
image = image or self.image
@@ -634,7 +699,9 @@ class DockerInterface(JobGroup, ABC):
)
# Update docker image
await self.install(version, image=image, latest=latest)
await self.install(
version, image=image, latest=latest, progress_job_id=progress_job_id
)
# Stop container & cleanup
with suppress(DockerError):


@@ -51,7 +51,7 @@ from .network import DockerNetwork
_LOGGER: logging.Logger = logging.getLogger(__name__)
MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("20.10.1")
MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0")
DOCKER_NETWORK_HOST: Final = "host"
@@ -321,11 +321,36 @@ class DockerAPI(CoreSysAttributes):
if not network_mode:
kwargs["network"] = None
# Set up the cidfile and bind mount it
cidfile_path = None
if name:
cidfile_path = self.coresys.config.path_cid_files / f"{name}.cid"
# Remove the file if it exists e.g. as a leftover from unclean shutdown
if cidfile_path.is_file():
with suppress(OSError):
cidfile_path.unlink(missing_ok=True)
extern_cidfile_path = (
self.coresys.config.path_extern_cid_files / f"{name}.cid"
)
# Bind mount to /run/cid in container
if "volumes" not in kwargs:
kwargs["volumes"] = {}
kwargs["volumes"][str(extern_cidfile_path)] = {
"bind": "/run/cid",
"mode": "ro",
}
# Create container
try:
container = self.containers.create(
f"{image}:{tag}", use_config_proxy=False, **kwargs
)
if cidfile_path:
with cidfile_path.open("w", encoding="ascii") as cidfile:
cidfile.write(str(container.id))
except docker_errors.NotFound as err:
raise DockerNotFound(
f"Image {image}:{tag} does not exist for {name}", _LOGGER.error
@@ -563,6 +588,10 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True, v=True)
cidfile_path = self.coresys.config.path_cid_files / f"{name}.cid"
with suppress(OSError):
cidfile_path.unlink(missing_ok=True)
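On the consumer side, a process inside a container started this way can read its own container ID from the read-only mount; a sketch using the path bound above:

with open("/run/cid", encoding="ascii") as cid_file:
    container_id = cid_file.read().strip()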
def start_container(self, name: str) -> None:
"""Start Docker container."""
try:


@@ -1,17 +1,32 @@
"""Core Exceptions."""
from collections.abc import Callable
from typing import Any
class HassioError(Exception):
"""Root exception."""
error_key: str | None = None
message_template: str | None = None
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
extra_fields: dict[str, Any] | None = None,
) -> None:
"""Raise & log."""
self.extra_fields = extra_fields or {}
if not message and self.message_template:
message = (
self.message_template.format(**self.extra_fields)
if self.extra_fields
else self.message_template
)
if logger is not None and message is not None:
logger(message)
@@ -235,8 +250,71 @@ class AddonConfigurationError(AddonsError):
"""Error with add-on configuration."""
class AddonsNotSupportedError(HassioNotSupportedError):
"""Addons don't support a function."""
class AddonNotSupportedError(HassioNotSupportedError):
"""Addon doesn't support a function."""
class AddonNotSupportedArchitectureError(AddonNotSupportedError):
"""Addon does not support system due to architecture."""
error_key = "addon_not_supported_architecture_error"
message_template = "Add-on {slug} not supported on this platform, supported architectures: {architectures}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
slug: str,
architectures: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "architectures": ", ".join(architectures)},
)
class AddonNotSupportedMachineTypeError(AddonNotSupportedError):
"""Addon does not support system due to machine type."""
error_key = "addon_not_supported_machine_type_error"
message_template = "Add-on {slug} not supported on this machine, supported machine types: {machine_types}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
slug: str,
machine_types: list[str],
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "machine_types": ", ".join(machine_types)},
)
class AddonNotSupportedHomeAssistantVersionError(AddonNotSupportedError):
"""Addon does not support system due to Home Assistant version."""
error_key = "addon_not_supported_home_assistant_version_error"
message_template = "Add-on {slug} not supported on this system, requires Home Assistant version {version} or greater"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
slug: str,
version: str,
) -> None:
"""Initialize exception."""
super().__init__(
None,
logger,
extra_fields={"slug": slug, "version": version},
)
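How these render, with invented values, assuming the base class formats message_template with extra_fields as shown above:

err = AddonNotSupportedArchitectureError(slug="my_addon", architectures=["amd64", "aarch64"])
assert err.error_key == "addon_not_supported_architecture_error"
assert err.extra_fields == {"slug": "my_addon", "architectures": "amd64, aarch64"}
# api_return_error can then ship error_key, message_template and extra_fields
# to the frontend for client-side translation of the message.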
class AddonsJobError(AddonsError, JobException):
@@ -319,10 +397,17 @@ class APIError(HassioError, RuntimeError):
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
*,
job_id: str | None = None,
error: HassioError | None = None,
) -> None:
"""Raise & log, optionally with job."""
super().__init__(message, logger)
# Allow these to be set from another error here since APIErrors essentially wrap others to add a status
self.error_key = error.error_key if error else None
self.message_template = error.message_template if error else None
super().__init__(
message, logger, extra_fields=error.extra_fields if error else None
)
self.job_id = job_id


@@ -99,18 +99,16 @@ class HwDisk(CoreSysAttributes):
root_device = path.stat().st_dev
for child in path.iterdir():
if not child.is_dir():
size += child.stat(follow_symlinks=False).st_size
continue
# Skip symlinks to avoid infinite loops
if child.is_symlink():
continue
try:
# Skip if not on same device (external mount)
if child.stat().st_dev != root_device:
continue
stat = child.stat(follow_symlinks=False)
except FileNotFoundError:
# File might disappear between listing and stat, ignore
_LOGGER.warning("File not found: %s", child.as_posix())
continue
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
@@ -119,6 +117,13 @@ class HwDisk(CoreSysAttributes):
break
continue
if stat.st_dev != root_device:
continue
if not child.is_dir():
size += stat.st_size
continue
child_result = self.get_dir_structure_sizes(child, max_depth - 1)
if child_result["used_bytes"] > 0:
size += child_result["used_bytes"]


@@ -229,6 +229,7 @@ class HomeAssistantCore(JobGroup):
self,
version: AwesomeVersion | None = None,
backup: bool | None = False,
validation_complete: asyncio.Event | None = None,
) -> None:
"""Update HomeAssistant version."""
to_version = version or self.sys_homeassistant.latest_version
@@ -248,6 +249,10 @@ class HomeAssistantCore(JobGroup):
f"Version {to_version!s} is already installed", _LOGGER.warning
)
# If being run in the background, notify caller that validation has completed
if validation_complete:
validation_complete.set()
if backup:
await self.sys_backups.do_backup_partial(
name=f"core_{self.instance.version}",
@@ -256,12 +261,18 @@ class HomeAssistantCore(JobGroup):
)
# process an update
# Assume for now the docker image pull is 100% of this task. But from a user
# perspective that isn't true. Other steps we could consider allocating a fixed
# amount of progress for to improve accuracy include: partial backup, image
# cleanup, and Home Assistant restart
async def _update(to_version: AwesomeVersion) -> None:
"""Run Home Assistant update."""
_LOGGER.info("Updating Home Assistant to version %s", to_version)
try:
await self.instance.update(
to_version, image=self.sys_updater.image_homeassistant
to_version,
image=self.sys_updater.image_homeassistant,
progress_job_id=self.sys_jobs.current.uuid,
)
except DockerError as err:
raise HomeAssistantUpdateError(
@@ -280,6 +291,11 @@ class HomeAssistantCore(JobGroup):
with suppress(DockerError):
await self.instance.cleanup(old_image=old_image)
# If the user never left the update screen they may actually see the progress bar again depending
# on how the frontend works. Just in case (and as good practice) set the job to 100% at successful end
if self.sys_jobs.current.progress != 100:
self.sys_jobs.current.progress = 100
# Update Home Assistant
with suppress(HomeAssistantError):
await _update(to_version)


@@ -12,6 +12,7 @@ from ..dbus.const import (
InterfaceAddrGenMode as NMInterfaceAddrGenMode,
InterfaceIp6Privacy as NMInterfaceIp6Privacy,
InterfaceMethod as NMInterfaceMethod,
MulticastDnsValue,
)
from ..dbus.network.connection import NetworkConnection
from ..dbus.network.interface import NetworkInterface
@@ -21,11 +22,19 @@ from .const import (
InterfaceIp6Privacy,
InterfaceMethod,
InterfaceType,
MulticastDnsMode,
WifiMode,
)
_LOGGER: logging.Logger = logging.getLogger(__name__)
_MULTICAST_DNS_VALUE_MODE_MAPPING: dict[int, MulticastDnsMode] = {
MulticastDnsValue.DEFAULT.value: MulticastDnsMode.DEFAULT,
MulticastDnsValue.OFF.value: MulticastDnsMode.OFF,
MulticastDnsValue.RESOLVE.value: MulticastDnsMode.RESOLVE,
MulticastDnsValue.ANNOUNCE.value: MulticastDnsMode.ANNOUNCE,
}
@dataclass(slots=True)
class AccessPoint:
@@ -107,6 +116,8 @@ class Interface:
ipv6setting: Ip6Setting | None
wifi: WifiConfig | None
vlan: VlanConfig | None
mdns: MulticastDnsMode | None
llmnr: MulticastDnsMode | None
def equals_dbus_interface(self, inet: NetworkInterface) -> bool:
"""Return true if this represents the dbus interface."""
@@ -198,6 +209,13 @@ class Interface:
and ConnectionStateFlags.IP6_READY in inet.connection.state_flags
)
if inet.settings and inet.settings.connection:
mdns = inet.settings.connection.mdns
llmnr = inet.settings.connection.llmnr
else:
mdns = None
llmnr = None
return Interface(
name=inet.interface_name,
mac=inet.hw_address,
@@ -234,6 +252,8 @@ class Interface:
ipv6setting=ipv6_setting,
wifi=Interface._map_nm_wifi(inet),
vlan=Interface._map_nm_vlan(inet),
mdns=Interface._map_nm_multicast_dns(mdns),
llmnr=Interface._map_nm_multicast_dns(llmnr),
)
@staticmethod
@@ -340,3 +360,10 @@ class Interface:
return None
return VlanConfig(inet.settings.vlan.id, inet.settings.vlan.parent)
@staticmethod
def _map_nm_multicast_dns(mode: int | None) -> MulticastDnsMode | None:
if mode is None:
return None
return _MULTICAST_DNS_VALUE_MODE_MAPPING.get(mode)


@@ -89,3 +89,12 @@ class LogFormatter(StrEnum):
PLAIN = "plain"
VERBOSE = "verbose"
class MulticastDnsMode(StrEnum):
"""Multicast DNS (MDNS/LLMNR) mode."""
DEFAULT = "default"
OFF = "off"
RESOLVE = "resolve"
ANNOUNCE = "announce"


@@ -85,14 +85,13 @@ class SystemControl(CoreSysAttributes):
async def set_timezone(self, timezone: str) -> None:
"""Set timezone on host."""
self._check_dbus(HostFeature.TIMEDATE)
# /etc/localtime is not writable on OS older than 16.2
if (
self.coresys.os.available
and self.coresys.os.version is not None
and self.sys_os.version >= AwesomeVersion("16.2.dev0")
and self.sys_os.version >= AwesomeVersion("16.2.dev20250814")
):
self._check_dbus(HostFeature.TIMEDATE)
_LOGGER.info("Setting host timezone: %s", timezone)
await self.sys_dbus.timedate.set_timezone(timezone)
await self.sys_dbus.timedate.update()


@@ -97,7 +97,6 @@ class SupervisorJob:
default=0,
validator=[ge(0), le(100), _invalid_if_done],
on_setattr=_on_change,
converter=lambda val: round(val, 1),
)
stage: str | None = field(
default=None, validator=[_invalid_if_done], on_setattr=_on_change
@@ -118,7 +117,7 @@ class SupervisorJob:
"name": self.name,
"reference": self.reference,
"uuid": self.uuid,
"progress": self.progress,
"progress": round(self.progress, 1),
"stage": self.stage,
"done": self.done,
"parent_id": self.parent_id,
@@ -175,12 +174,15 @@ class SupervisorJob:
self.stage = stage
if extra != DEFAULT:
self.extra = extra
if done is not None:
self.done = done
self.on_change = on_change
# Just triggers the normal on change
self.reference = self.reference
if done is not None:
# Done has a special event, use it to trigger on change if included
self.done = done
else:
# Just set something to trigger the normal on change
self.reference = self.reference
class JobManager(FileConfiguration, CoreSysAttributes):
@@ -230,12 +232,14 @@ class JobManager(FileConfiguration, CoreSysAttributes):
self, job: SupervisorJob, attribute: Attribute, value: Any
) -> None:
"""Notify Home Assistant of a change to a job and bus on job start/end."""
# Send out job data as a dictionary to prevent concurrency issues with the shared job object
# Plus we can then fold in the newly updated value
if attribute.name == "errors":
value = [err.as_dict() for err in value]
job_data = job.as_dict() | {attribute.name: value}
self.sys_homeassistant.websocket.supervisor_event(
WSEvent.JOB, job.as_dict() | {attribute.name: value}
)
if not job.internal:
self.sys_homeassistant.websocket.supervisor_event(WSEvent.JOB, job_data)
if attribute.name == "done":
if value is False:
@@ -256,7 +260,7 @@ class JobManager(FileConfiguration, CoreSysAttributes):
name,
reference=reference,
stage=initial_stage,
on_change=None if internal else self._notify_on_job_change,
on_change=self._notify_on_job_change,
internal=internal,
**({} if parent_id == DEFAULT else {"parent_id": parent_id}), # type: ignore
)


@@ -1,15 +1,8 @@
"""Helpers to check and fix issues with free space."""
from ...backups.const import BackupType
from ...const import CoreState
from ...coresys import CoreSys
from ..const import (
MINIMUM_FREE_SPACE_THRESHOLD,
MINIMUM_FULL_BACKUPS,
ContextType,
IssueType,
SuggestionType,
)
from ..const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
from .base import CheckBase
@@ -23,31 +16,12 @@ class CheckFreeSpace(CheckBase):
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if await self.sys_host.info.free_space() > MINIMUM_FREE_SPACE_THRESHOLD:
return
suggestions: list[SuggestionType] = []
if (
len(
[
backup
for backup in self.sys_backups.list_backups
if backup.sys_type == BackupType.FULL
]
)
> MINIMUM_FULL_BACKUPS
):
suggestions.append(SuggestionType.CLEAR_FULL_BACKUP)
self.sys_resolution.create_issue(
IssueType.FREE_SPACE, ContextType.SYSTEM, suggestions=suggestions
)
if await self.approve_check():
self.sys_resolution.create_issue(IssueType.FREE_SPACE, ContextType.SYSTEM)
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
if await self.sys_host.info.free_space() > MINIMUM_FREE_SPACE_THRESHOLD:
return False
return True
return await self.sys_host.info.free_space() <= MINIMUM_FREE_SPACE_THRESHOLD
@property
def issue(self) -> IssueType:


@@ -9,7 +9,7 @@ FILE_CONFIG_RESOLUTION = Path(SUPERVISOR_DATA, "resolution.json")
SCHEDULED_HEALTHCHECK = 3600
MINIMUM_FREE_SPACE_THRESHOLD = 1
MINIMUM_FREE_SPACE_THRESHOLD = 2
MINIMUM_FULL_BACKUPS = 2
DNS_CHECK_HOST = "_checkdns.home-assistant.io"


@@ -215,7 +215,7 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
async def load(self):
"""Load the resoulution manager."""
# Initial healthcheck when the manager is loaded
# Initial healthcheck
await self.healthcheck()
# Schedule the healthcheck


@@ -10,9 +10,16 @@ from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HomeAssistantAPIError
from .checks.core_security import SecurityReference
from .const import ContextType, IssueType
from .data import Issue
_LOGGER: logging.Logger = logging.getLogger(__name__)
ISSUE_SECURITY_CUSTOM_COMP_2021_1_5 = Issue(
IssueType.SECURITY,
ContextType.CORE,
reference=SecurityReference.CUSTOM_COMPONENTS_BELOW_2021_1_5,
)
class ResolutionNotify(CoreSysAttributes):
"""Notify class for resolution."""
@@ -29,44 +36,17 @@ class ResolutionNotify(CoreSysAttributes):
):
return
messages = []
for issue in self.sys_resolution.issues:
if issue.type == IssueType.FREE_SPACE:
messages.append(
{
"title": "Available space is less than 1GB!",
"message": f"Available space is {await self.sys_host.info.free_space()}GB, see https://www.home-assistant.io/more-info/free-space for more information.",
"notification_id": "supervisor_issue_free_space",
}
)
if issue.type == IssueType.SECURITY and issue.context == ContextType.CORE:
if (
issue.reference
== SecurityReference.CUSTOM_COMPONENTS_BELOW_2021_1_5
):
messages.append(
{
"title": "Security notification",
"message": "The Supervisor detected that this version of Home Assistant could be insecure in combination with custom integrations. [Update as soon as possible.](/hassio/dashboard)\n\nFor more information see the [Security alert](https://www.home-assistant.io/latest-security-alert).",
"notification_id": "supervisor_update_home_assistant_2021_1_5",
}
)
if issue.type == IssueType.PWNED and issue.context == ContextType.ADDON:
messages.append(
{
"title": f"Insecure secrets in {issue.reference}",
"message": f"The add-on {issue.reference} uses secrets which are detected as not secure, see https://www.home-assistant.io/more-info/pwned-passwords for more information.",
"notification_id": f"supervisor_issue_pwned_{issue.reference}",
}
)
for message in messages:
# This one issue must remain a persistent notification rather than a repair because repairs didn't exist in HA 2021.1.5
if ISSUE_SECURITY_CUSTOM_COMP_2021_1_5 in self.sys_resolution.issues:
try:
async with self.sys_homeassistant.api.make_request(
"post",
"api/services/persistent_notification/create",
json=message,
json={
"title": "Security notification",
"message": "The Supervisor detected that this version of Home Assistant could be insecure in combination with custom integrations. [Update as soon as possible.](/hassio/dashboard)\n\nFor more information see the [Security alert](https://www.home-assistant.io/latest-security-alert).",
"notification_id": "supervisor_update_home_assistant_2021_1_5",
},
) as resp:
if resp.status in (200, 201):
_LOGGER.debug("Successfully created persistent_notification")

View File

@@ -195,6 +195,7 @@ class Supervisor(CoreSysAttributes):
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
@Job(name="supervisor_update")
async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update Supervisor version."""
version = version or self.latest_version or self.version
@@ -221,8 +222,14 @@ class Supervisor(CoreSysAttributes):
# Update container
_LOGGER.info("Update Supervisor to version %s", version)
# Assume for now the docker image pull is 100% of this task. But from a user
# perspective that isn't true. Should consider allocating a fixed amount of
# progress to the AppArmor update and restart to improve accuracy
try:
await self.instance.install(version, image=image)
await self.instance.install(
version, image=image, progress_job_id=self.sys_jobs.current.uuid
)
await self.instance.update_start_tag(image, version)
except DockerError as err:
self.sys_resolution.create_issue(
@@ -237,6 +244,8 @@ class Supervisor(CoreSysAttributes):
self.sys_config.image = image
await self.sys_config.save_data()
# Normally we'd set the progress bar to 100% at the end. But once Supervisor stops
# it's gone, so for this one we'll just leave it wherever it was after the image pull
self.sys_create_task(self.sys_core.stop())
@Job(

View File

@@ -17,8 +17,8 @@ from .const import (
ATTR_CHANNEL,
ATTR_CLI,
ATTR_DNS,
ATTR_HASSOS,
ATTR_HASSOS_UNRESTRICTED,
ATTR_HASSOS_UPGRADE,
ATTR_HOMEASSISTANT,
ATTR_IMAGE,
ATTR_MULTICAST,
@@ -93,13 +93,46 @@ class Updater(FileConfiguration, CoreSysAttributes):
@property
def version_hassos(self) -> AwesomeVersion | None:
"""Return latest version of HassOS."""
return self._data.get(ATTR_HASSOS)
upgrade_map = self.upgrade_map_hassos
unrestricted = self.version_hassos_unrestricted
# If no upgrade map exists, fall back to unrestricted version
if not upgrade_map:
return unrestricted
# If we have no unrestricted version or no current OS version, return unrestricted
if (
not unrestricted
or not self.sys_os.version
or self.sys_os.version.major is None
):
return unrestricted
current_major = str(self.sys_os.version.major)
# Check if there's an upgrade path for current major version
if current_major in upgrade_map:
last_in_major = AwesomeVersion(upgrade_map[current_major])
# If we're not at the last version in our major, upgrade to that first
if self.sys_os.version != last_in_major:
return last_in_major
# If we are at the last version in our major, check for next major
next_major = str(int(self.sys_os.version.major) + 1)
if next_major in upgrade_map:
return AwesomeVersion(upgrade_map[next_major])
# Fall back to unrestricted version
return unrestricted
@property
def version_hassos_unrestricted(self) -> AwesomeVersion | None:
"""Return latest version of HassOS ignoring upgrade restrictions."""
return self._data.get(ATTR_HASSOS_UNRESTRICTED)
@property
def upgrade_map_hassos(self) -> dict[str, str] | None:
"""Return HassOS upgrade map."""
return self._data.get(ATTR_HASSOS_UPGRADE)
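A worked example of the resolution order above, as a sketch with assumed values rather than code from this change: with an upgrade map of {"15": "15.2", "16": "16.1"}, a system on 15.0 is first offered 15.2, a system already on 15.2 is offered 16.1, and a board without a map entry falls back to the unrestricted version.
# Hedged sketch of the upgrade-map walk; names and values are illustrative,
# not the Supervisor implementation.
upgrade_map = {"15": "15.2", "16": "16.1"}
unrestricted = "16.1"

def resolve(current: str) -> str:
    major = current.split(".")[0]
    # Finish the current major series before crossing a major boundary
    if major in upgrade_map and current != upgrade_map[major]:
        return upgrade_map[major]
    # At the end of the series, offer the next major if the map lists one
    return upgrade_map.get(str(int(major) + 1), unrestricted)

assert resolve("15.0") == "15.2"  # not yet at the last 15.x release
assert resolve("15.2") == "16.1"  # step up to the next major
assert resolve("17.0") == "16.1"  # no map entry: unrestricted fallback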
@property
def version_cli(self) -> AwesomeVersion | None:
"""Return latest version of CLI."""
@@ -291,18 +324,10 @@ class Updater(FileConfiguration, CoreSysAttributes):
if self.sys_os.board:
self._data[ATTR_OTA] = data["ota"]
if version := data["hassos"].get(self.sys_os.board):
self._data[ATTR_HASSOS_UNRESTRICTED] = version
self._data[ATTR_HASSOS_UNRESTRICTED] = AwesomeVersion(version)
# Store the upgrade map for persistent access
self._data[ATTR_HASSOS_UPGRADE] = data.get("hassos-upgrade", {})
events.append("os")
upgrade_map = data.get("hassos-upgrade", {})
if last_in_major := upgrade_map.get(str(self.sys_os.version.major)):
if self.sys_os.version != AwesomeVersion(last_in_major):
version = last_in_major
elif last_in_next_major := upgrade_map.get(
str(int(self.sys_os.version.major) + 1)
):
version = last_in_next_major
self._data[ATTR_HASSOS] = AwesomeVersion(version)
else:
_LOGGER.warning(
"Board '%s' not found in version file. No OS updates.",

View File

@@ -24,6 +24,7 @@ from .const import (
ATTR_FORCE_SECURITY,
ATTR_HASSOS,
ATTR_HASSOS_UNRESTRICTED,
ATTR_HASSOS_UPGRADE,
ATTR_HOMEASSISTANT,
ATTR_ID,
ATTR_IMAGE,
@@ -129,6 +130,9 @@ SCHEMA_UPDATER_CONFIG = vol.Schema(
vol.Optional(ATTR_SUPERVISOR): version_tag,
vol.Optional(ATTR_HASSOS): version_tag,
vol.Optional(ATTR_HASSOS_UNRESTRICTED): version_tag,
vol.Optional(ATTR_HASSOS_UPGRADE): vol.Schema(
{vol.Extra: version_tag}, extra=vol.ALLOW_EXTRA
),
vol.Optional(ATTR_CLI): version_tag,
vol.Optional(ATTR_DNS): version_tag,
vol.Optional(ATTR_AUDIO): version_tag,

View File

@@ -140,6 +140,46 @@ def test_valid_map():
vd.SCHEMA_ADDON_CONFIG(config)
def test_malformed_map_entries():
"""Test that malformed map entries are handled gracefully (issue #6124)."""
config = load_json_fixture("basic-addon-config.json")
# Test case 1: Empty dict in map (should be skipped with warning)
config["map"] = [{}]
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["map"] == []
# Test case 2: Dict missing required 'type' field (should be skipped with warning)
config["map"] = [{"read_only": False, "path": "/custom"}]
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["map"] == []
# Test case 3: Invalid string format that doesn't match regex
config["map"] = ["invalid_format", "not:a:valid:mapping", "share:invalid_mode"]
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["map"] == []
# Test case 4: Mix of valid and invalid entries (invalid should be filtered out)
config["map"] = [
"share:rw", # Valid string format
"invalid_string", # Invalid string format
{}, # Invalid empty dict
{"type": "config", "read_only": True}, # Valid dict format
{"read_only": False}, # Invalid - missing type
]
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
# Should only keep the valid entries
assert len(valid_config["map"]) == 2
assert any(entry["type"] == "share" for entry in valid_config["map"])
assert any(entry["type"] == "config" for entry in valid_config["map"])
# Test case 5: The specific case from the UplandJacob repo (malformed YAML format)
# This simulates what YAML "- addon_config: rw" creates
config["map"] = [{"addon_config": "rw"}] # Wrong structure, missing 'type' key
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["map"] == []
def test_valid_basic_build():
"""Validate basic build config."""
config = load_json_fixture("basic-build-config.json")
@@ -285,3 +325,97 @@ def test_valid_slug():
config["slug"] = "complemento telefónico"
with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config)
def test_valid_schema():
"""Test valid and invalid addon slugs."""
config = load_json_fixture("basic-addon-config.json")
# Basic types
config["schema"] = {
"bool_basic": "bool",
"mail_basic": "email",
"url_basic": "url",
"port_basic": "port",
"match_basic": "match(.*@.*)",
"list_basic": "list(option1|option2|option3)",
# device
"device_basic": "device",
"device_filter": "device(subsystem=tty)",
# str
"str_basic": "str",
"str_basic2": "str(,)",
"str_min": "str(5,)",
"str_max": "str(,10)",
"str_minmax": "str(5,10)",
# password
"password_basic": "password",
"password_basic2": "password(,)",
"password_min": "password(5,)",
"password_max": "password(,10)",
"password_minmax": "password(5,10)",
# int
"int_basic": "int",
"int_basic2": "int(,)",
"int_min": "int(5,)",
"int_max": "int(,10)",
"int_minmax": "int(5,10)",
# float
"float_basic": "float",
"float_basic2": "float(,)",
"float_min": "float(5,)",
"float_max": "float(,10)",
"float_minmax": "float(5,10)",
}
assert vd.SCHEMA_ADDON_CONFIG(config)
# Different valid ways of nesting dicts and lists
config["schema"] = {
"str_list": ["str"],
"dict_in_list": [
{
"required": "str",
"optional": "str?",
}
],
"dict": {
"required": "str",
"optional": "str?",
"str_list_in_dict": ["str"],
"dict_in_list_in_dict": [
{
"required": "str",
"optional": "str?",
"str_list_in_dict_in_list_in_dict": ["str"],
}
],
"dict_in_dict": {
"str_list_in_dict_in_dict": ["str"],
"dict_in_list_in_dict_in_dict": [
{
"required": "str",
"optional": "str?",
}
],
"dict_in_dict_in_dict": {
"required": "str",
"optional": "str",
},
},
},
}
assert vd.SCHEMA_ADDON_CONFIG(config)
# List nested within dict within list
config["schema"] = {"field": [{"subfield": ["str"]}]}
assert vd.SCHEMA_ADDON_CONFIG(config)
# No lists directly nested within each other
config["schema"] = {"field": [["str"]]}
with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config)
# Field types must be valid
config["schema"] = {"field": "invalid"}
with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config)

View File

@@ -4,7 +4,7 @@ import asyncio
from collections.abc import AsyncGenerator, Generator
from copy import deepcopy
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, patch
from awesomeversion import AwesomeVersion
import pytest
@@ -102,7 +102,11 @@ async def test_image_added_removed_on_update(
await coresys.addons.update(TEST_ADDON_SLUG)
build.assert_not_called()
install.assert_called_once_with(
AwesomeVersion("10.0.0"), "test/amd64-my-ssh-addon", False, "amd64"
AwesomeVersion("10.0.0"),
"test/amd64-my-ssh-addon",
False,
"amd64",
progress_job_id=ANY,
)
assert install_addon_ssh.need_update is False

View File

@@ -129,6 +129,64 @@ def test_complex_schema_dict(coresys):
)({"name": "Pascal", "password": "1234", "extend": "test"})
def test_complex_schema_dict_and_list(coresys):
"""Test with complex dict/list nested schema."""
assert AddonOptions(
coresys,
{
"name": "str",
"packages": [
{
"name": "str",
"options": {"optional": "bool"},
"dependencies": [{"name": "str"}],
}
],
},
MOCK_ADDON_NAME,
MOCK_ADDON_SLUG,
)(
{
"name": "Pascal",
"packages": [
{
"name": "core",
"options": {"optional": False},
"dependencies": [{"name": "supervisor"}, {"name": "audio"}],
}
],
}
)
with pytest.raises(vol.error.Invalid):
assert AddonOptions(
coresys,
{
"name": "str",
"packages": [
{
"name": "str",
"options": {"optional": "bool"},
"dependencies": [{"name": "str"}],
}
],
},
MOCK_ADDON_NAME,
MOCK_ADDON_SLUG,
)(
{
"name": "Pascal",
"packages": [
{
"name": "core",
"options": {"optional": False},
"dependencies": [{"name": "supervisor"}, "wrong"],
}
],
}
)
def test_simple_device_schema(coresys):
"""Test with simple schema."""
for device in (

View File

@@ -1,9 +1,10 @@
"""Test for API calls."""
from unittest.mock import MagicMock
from unittest.mock import AsyncMock, MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from supervisor.host.const import LogFormat
DEFAULT_LOG_RANGE = "entries=:-99:100"
@@ -15,6 +16,8 @@ async def common_test_api_advanced_logs(
syslog_identifier: str,
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available: None,
):
"""Template for tests of endpoints using advanced logs."""
resp = await api_client.get(f"{path_prefix}/logs")
@@ -41,6 +44,30 @@ async def common_test_api_advanced_logs(
journald_logs.reset_mock()
mock_response = MagicMock()
mock_response.text = AsyncMock(
return_value='{"CONTAINER_LOG_EPOCH": "12345"}\n{"CONTAINER_LOG_EPOCH": "12345"}\n'
)
journald_logs.return_value.__aenter__.return_value = mock_response
resp = await api_client.get(f"{path_prefix}/logs/latest")
assert resp.status == 200
assert journald_logs.call_count == 2
# Check the first call for getting epoch
epoch_call = journald_logs.call_args_list[0]
assert epoch_call[1]["params"] == {"CONTAINER_NAME": syslog_identifier}
assert epoch_call[1]["range_header"] == "entries=:-1:2"
# Check the second call for getting logs with the epoch
logs_call = journald_logs.call_args_list[1]
assert logs_call[1]["params"]["SYSLOG_IDENTIFIER"] == syslog_identifier
assert logs_call[1]["params"]["CONTAINER_LOG_EPOCH"] == "12345"
assert logs_call[1]["range_header"] == "entries=0:18446744073709551615"
journald_logs.reset_mock()
resp = await api_client.get(f"{path_prefix}/logs/boots/0")
assert resp.status == 200
assert resp.content_type == "text/plain"

View File

@@ -72,11 +72,20 @@ async def test_addons_info_not_installed(
async def test_api_addon_logs(
api_client: TestClient, journald_logs: MagicMock, install_addon_ssh: Addon
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
install_addon_ssh: Addon,
):
"""Test addon logs."""
await common_test_api_advanced_logs(
"/addons/local_ssh", "addon_local_ssh", api_client, journald_logs
"/addons/local_ssh",
"addon_local_ssh",
api_client,
journald_logs,
coresys,
os_available,
)

View File

@@ -4,11 +4,15 @@ from unittest.mock import MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from tests.api import common_test_api_advanced_logs
async def test_api_audio_logs(api_client: TestClient, journald_logs: MagicMock):
async def test_api_audio_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
"""Test audio logs."""
await common_test_api_advanced_logs(
"/audio", "hassio_audio", api_client, journald_logs
"/audio", "hassio_audio", api_client, journald_logs, coresys, os_available
)

View File

@@ -66,6 +66,15 @@ async def test_options(api_client: TestClient, coresys: CoreSys):
restart.assert_called_once()
async def test_api_dns_logs(api_client: TestClient, journald_logs: MagicMock):
async def test_api_dns_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
"""Test dns logs."""
await common_test_api_advanced_logs("/dns", "hassio_dns", api_client, journald_logs)
await common_test_api_advanced_logs(
"/dns",
"hassio_dns",
api_client,
journald_logs,
coresys,
os_available,
)

View File

@@ -1,14 +1,20 @@
"""Test homeassistant api."""
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.backups.manager import BackupManager
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.homeassistant.api import APIState
from supervisor.docker.homeassistant import DockerHomeAssistant
from supervisor.docker.interface import DockerInterface
from supervisor.homeassistant.api import APIState, HomeAssistantAPI
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
@@ -18,7 +24,11 @@ from tests.common import load_json_fixture
@pytest.mark.parametrize("legacy_route", [True, False])
async def test_api_core_logs(
api_client: TestClient, journald_logs: MagicMock, legacy_route: bool
api_client: TestClient,
journald_logs: MagicMock,
coresys: CoreSys,
os_available,
legacy_route: bool,
):
"""Test core logs."""
await common_test_api_advanced_logs(
@@ -26,6 +36,8 @@ async def test_api_core_logs(
"homeassistant",
api_client,
journald_logs,
coresys,
os_available,
)
@@ -188,3 +200,170 @@ async def test_force_stop_during_migration(api_client: TestClient, coresys: Core
with patch.object(HomeAssistantCore, "stop") as stop:
await api_client.post("/homeassistant/stop", json={"force": True})
stop.assert_called_once()
@pytest.mark.parametrize(
("make_backup", "backup_called", "update_called"),
[(True, True, False), (False, False, True)],
)
async def test_home_assistant_background_update(
api_client: TestClient,
coresys: CoreSys,
make_backup: bool,
backup_called: bool,
update_called: bool,
):
"""Test background update of Home Assistant."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
event = asyncio.Event()
mock_update_called = mock_backup_called = False
# Mock backup/update as long-running tasks
async def mock_docker_interface_update(*args, **kwargs):
nonlocal mock_update_called
mock_update_called = True
await event.wait()
async def mock_partial_backup(*args, **kwargs):
nonlocal mock_backup_called
mock_backup_called = True
await event.wait()
with (
patch.object(DockerInterface, "update", new=mock_docker_interface_update),
patch.object(BackupManager, "do_backup_partial", new=mock_partial_backup),
patch.object(
DockerInterface,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
),
):
resp = await api_client.post(
"/core/update",
json={"background": True, "backup": make_backup, "version": "2025.8.3"},
)
assert mock_backup_called is backup_called
assert mock_update_called is update_called
assert resp.status == 200
body = await resp.json()
assert (job := coresys.jobs.get_job(body["data"]["job_id"]))
assert job.name == "home_assistant_core_update"
event.set()
async def test_background_home_assistant_update_fails_fast(
api_client: TestClient, coresys: CoreSys
):
"""Test background Home Assistant update returns error not job if validation doesn't succeed."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
with (
patch.object(
DockerInterface,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.3")),
),
):
resp = await api_client.post(
"/core/update",
json={"background": True, "version": "2025.8.3"},
)
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Version 2025.8.3 is already installed"
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_home_assistant_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
coresys.homeassistant.version = AwesomeVersion("2025.8.0")
with (
patch.object(
DockerHomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
),
patch.object(
HomeAssistantAPI, "get_config", return_value={"components": ["frontend"]}
),
):
resp = await api_client.post("/core/update", json={"version": "2025.8.3"})
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
]
assert events[:5] == [
{
"stage": None,
"progress": 0,
"done": None,
},
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
},
{
"stage": "Downloading",
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": "Extracting",
"progress": 97.2,
"done": False,
},
{
"stage": "Extracting",
"progress": 98.4,
"done": False,
},
{
"stage": "Extracting",
"progress": 99.4,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": True,
},
]

View File

@@ -243,6 +243,10 @@ async def test_advanced_logs(
accept=LogFormat.JOURNAL,
)
# Host logs don't have a /latest endpoint
resp = await api_client.get("/host/logs/latest")
assert resp.status == 404
async def test_advanced_logs_query_parameters(
api_client: TestClient,

View File

@@ -4,11 +4,20 @@ from unittest.mock import MagicMock
from aiohttp.test_utils import TestClient
from supervisor.coresys import CoreSys
from tests.api import common_test_api_advanced_logs
async def test_api_multicast_logs(api_client: TestClient, journald_logs: MagicMock):
async def test_api_multicast_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
"""Test multicast logs."""
await common_test_api_advanced_logs(
"/multicast", "hassio_multicast", api_client, journald_logs
"/multicast",
"hassio_multicast",
api_client,
journald_logs,
coresys,
os_available,
)

View File

@@ -88,6 +88,8 @@ async def test_api_network_interface_info(api_client: TestClient, interface_id:
]
assert result["data"]["ipv6"]["ready"] is True
assert result["data"]["interface"] == TEST_INTERFACE_ETH_NAME
assert result["data"]["mdns"] == "announce"
assert result["data"]["llmnr"] == "announce"
async def test_api_network_interface_info_default(api_client: TestClient):
@@ -109,6 +111,8 @@ async def test_api_network_interface_info_default(api_client: TestClient):
]
assert result["data"]["ipv6"]["ready"] is True
assert result["data"]["interface"] == TEST_INTERFACE_ETH_NAME
assert result["data"]["mdns"] == "announce"
assert result["data"]["llmnr"] == "announce"
@pytest.mark.parametrize(
@@ -278,6 +282,33 @@ async def test_api_network_interface_update_wifi_error(api_client: TestClient):
)
async def test_api_network_interface_update_mdns(
api_client: TestClient,
coresys: CoreSys,
network_manager_service: NetworkManagerService,
connection_settings_service: ConnectionSettingsService,
):
"""Test network manager API update with mDNS/LLMNR mode."""
network_manager_service.CheckConnectivity.calls.clear()
connection_settings_service.Update.calls.clear()
resp = await api_client.post(
f"/network/interface/{TEST_INTERFACE_ETH_NAME}/update",
json={
"mdns": "resolve",
"llmnr": "off",
},
)
result = await resp.json()
assert result["result"] == "ok"
assert len(connection_settings_service.Update.calls) == 1
settings = connection_settings_service.Update.calls[0][0]
assert "connection" in settings
assert settings["connection"]["mdns"] == Variant("i", 1)
assert settings["connection"]["llmnr"] == Variant("i", 0)
async def test_api_network_interface_update_remove(api_client: TestClient):
"""Test network manager api."""
resp = await api_client.post(
@@ -380,7 +411,7 @@ async def test_api_network_vlan(
settings_service.AddConnection.calls.clear()
resp = await api_client.post(
f"/network/interface/{TEST_INTERFACE_ETH_NAME}/vlan/1",
json={"ipv4": {"method": "auto"}},
json={"ipv4": {"method": "auto"}, "llmnr": "off"},
)
result = await resp.json()
assert result["result"] == "ok"
@@ -391,8 +422,8 @@ async def test_api_network_vlan(
assert connection["connection"] == {
"id": Variant("s", "Supervisor eth0.1"),
"type": Variant("s", "vlan"),
"llmnr": Variant("i", 2),
"mdns": Variant("i", 2),
"mdns": Variant("i", -1), # Default mode
"llmnr": Variant("i", 0),
"autoconnect": Variant("b", True),
"uuid": connection["connection"]["uuid"],
}
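Collected from the assertions in these tests, the integer Variants map to the multicast DNS modes as follows (a summary, not code from this change):
# connection.mdns / connection.llmnr Variant("i", ...) values seen above:
MULTICAST_DNS_MODES = {
    -1: "default",  # defer to NetworkManager's global setting
    0: "off",
    1: "resolve",   # resolve only, no announcing
    2: "announce",  # resolve and announce
}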

View File

@@ -48,7 +48,7 @@ async def test_api_available_updates(
"version_latest": "9.2.1",
}
coresys.updater._data["hassos"] = "321"
coresys.updater._data["hassos_unrestricted"] = "321"
coresys.os._version = "123"
updates = await available_updates()
assert len(updates) == 2

View File

@@ -6,17 +6,21 @@ from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp import ClientResponse
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.backups.manager import BackupManager
from supervisor.config import CoreConfig
from supervisor.const import AddonState
from supervisor.const import AddonState, CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.interface import DockerInterface
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.module import HomeAssistant
from supervisor.store.addon import AddonStore
from supervisor.store.repository import Repository
@@ -305,6 +309,7 @@ async def get_message(resp: ClientResponse, json_expected: bool) -> str:
("post", "/store/addons/bad/install/1", True),
("post", "/store/addons/bad/update", True),
("post", "/store/addons/bad/update/1", True),
("get", "/store/addons/bad/availability", True),
# Legacy paths
("get", "/addons/bad/icon", False),
("get", "/addons/bad/logo", False),
@@ -390,3 +395,425 @@ async def test_api_store_addons_changelog_corrupted(
assert resp.status == 200
result = await resp.text()
assert result == "Text with an invalid UTF-8 char: <20>"
@pytest.mark.usefixtures("test_repository", "tmp_supervisor_data")
async def test_addon_install_in_background(api_client: TestClient, coresys: CoreSys):
"""Test installing an addon in the background."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
event = asyncio.Event()
# Mock a long-running install task
async def mock_addon_install(*args, **kwargs):
await event.wait()
with patch.object(Addon, "install", new=mock_addon_install):
resp = await api_client.post(
"/store/addons/local_ssh/install", json={"background": True}
)
assert resp.status == 200
body = await resp.json()
assert (job := coresys.jobs.get_job(body["data"]["job_id"]))
assert job.name == "addon_manager_install"
event.set()
@pytest.mark.usefixtures("install_addon_ssh")
async def test_background_addon_install_fails_fast(
api_client: TestClient, coresys: CoreSys
):
"""Test background addon install returns error not job if validation fails."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
resp = await api_client.post(
"/store/addons/local_ssh/install", json={"background": True}
)
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Add-on local_ssh is already installed"
@pytest.mark.parametrize(
("make_backup", "backup_called", "update_called"),
[(True, True, False), (False, False, True)],
)
@pytest.mark.usefixtures("test_repository", "tmp_supervisor_data")
async def test_addon_update_in_background(
api_client: TestClient,
coresys: CoreSys,
install_addon_ssh: Addon,
make_backup: bool,
backup_called: bool,
update_called: bool,
):
"""Test updating an addon in the background."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
install_addon_ssh.data_store["version"] = "10.0.0"
event = asyncio.Event()
mock_update_called = mock_backup_called = False
# Mock backup/update as long-running tasks
async def mock_addon_update(*args, **kwargs):
nonlocal mock_update_called
mock_update_called = True
await event.wait()
async def mock_partial_backup(*args, **kwargs):
nonlocal mock_backup_called
mock_backup_called = True
await event.wait()
with (
patch.object(Addon, "update", new=mock_addon_update),
patch.object(BackupManager, "do_backup_partial", new=mock_partial_backup),
):
resp = await api_client.post(
"/store/addons/local_ssh/update",
json={"background": True, "backup": make_backup},
)
assert mock_backup_called is backup_called
assert mock_update_called is update_called
assert resp.status == 200
body = await resp.json()
assert (job := coresys.jobs.get_job(body["data"]["job_id"]))
assert job.name == "addon_manager_update"
event.set()
@pytest.mark.usefixtures("install_addon_ssh")
async def test_background_addon_update_fails_fast(
api_client: TestClient, coresys: CoreSys
):
"""Test background addon update returns error not job if validation doesn't succeed."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
resp = await api_client.post(
"/store/addons/local_ssh/update", json={"background": True}
)
assert resp.status == 400
body = await resp.json()
assert body["message"] == "No update available for add-on local_ssh"
async def test_api_store_addons_addon_availability_success(
api_client: TestClient, store_addon: AddonStore
):
"""Test /store/addons/{addon}/availability REST API - success case."""
resp = await api_client.get(f"/store/addons/{store_addon.slug}/availability")
assert resp.status == 200
@pytest.mark.parametrize(
("supported_architectures", "api_action", "api_method", "installed"),
[
(["i386"], "availability", "get", False),
(["i386", "aarch64"], "availability", "get", False),
(["i386"], "install", "post", False),
(["i386", "aarch64"], "install", "post", False),
(["i386"], "update", "post", True),
(["i386", "aarch64"], "update", "post", True),
],
)
async def test_api_store_addons_addon_availability_arch_not_supported(
api_client: TestClient,
coresys: CoreSys,
supported_architectures: list[str],
api_action: str,
api_method: str,
installed: bool,
):
"""Test availability errors for /store/addons/{addon}/* REST APIs - architecture not supported."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
# Create an addon with unsupported architecture
addon_obj = AddonStore(coresys, "test_arch_addon")
coresys.addons.store[addon_obj.slug] = addon_obj
# Set addon config with unsupported architecture
addon_config = {
"advanced": False,
"arch": supported_architectures,
"slug": "test_arch_addon",
"description": "Test arch add-on",
"name": "Test Arch Add-on",
"repository": "test",
"stage": "stable",
"version": "1.0.0",
}
coresys.store.data.addons[addon_obj.slug] = addon_config
if installed:
coresys.addons.local[addon_obj.slug] = Addon(coresys, addon_obj.slug)
coresys.addons.data.user[addon_obj.slug] = {"version": AwesomeVersion("0.0.1")}
# Mock the system architecture to be different
with patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])):
resp = await api_client.request(
api_method, f"/store/addons/{addon_obj.slug}/{api_action}"
)
assert resp.status == 400
result = await resp.json()
assert result["error_key"] == "addon_not_supported_architecture_error"
assert (
result["message_template"]
== "Add-on {slug} not supported on this platform, supported architectures: {architectures}"
)
assert result["extra_fields"] == {
"slug": "test_arch_addon",
"architectures": ", ".join(supported_architectures),
}
assert result["message"] == result["message_template"].format(
**result["extra_fields"]
)
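Put together, these assertions describe an error body of the following shape; values are taken from the first parametrization (supported_architectures=["i386"]):
# Illustrative error body assembled from the assertions above:
{
    "error_key": "addon_not_supported_architecture_error",
    "message_template": (
        "Add-on {slug} not supported on this platform, "
        "supported architectures: {architectures}"
    ),
    "extra_fields": {"slug": "test_arch_addon", "architectures": "i386"},
    "message": (
        "Add-on test_arch_addon not supported on this platform, "
        "supported architectures: i386"
    ),
}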
@pytest.mark.parametrize(
("supported_machines", "api_action", "api_method", "installed"),
[
(["odroid-n2"], "availability", "get", False),
(["!qemux86-64"], "availability", "get", False),
(["a", "b"], "availability", "get", False),
(["odroid-n2"], "install", "post", False),
(["!qemux86-64"], "install", "post", False),
(["a", "b"], "install", "post", False),
(["odroid-n2"], "update", "post", True),
(["!qemux86-64"], "update", "post", True),
(["a", "b"], "update", "post", True),
],
)
async def test_api_store_addons_addon_availability_machine_not_supported(
api_client: TestClient,
coresys: CoreSys,
supported_machines: list[str],
api_action: str,
api_method: str,
installed: bool,
):
"""Test availability errors for /store/addons/{addon}/* REST APIs - machine not supported."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
# Create an addon with unsupported machine type
addon_obj = AddonStore(coresys, "test_machine_addon")
coresys.addons.store[addon_obj.slug] = addon_obj
# Set addon config with unsupported machine
addon_config = {
"advanced": False,
"arch": ["amd64"],
"machine": supported_machines,
"slug": "test_machine_addon",
"description": "Test machine add-on",
"name": "Test Machine Add-on",
"repository": "test",
"stage": "stable",
"version": "1.0.0",
}
coresys.store.data.addons[addon_obj.slug] = addon_config
if installed:
coresys.addons.local[addon_obj.slug] = Addon(coresys, addon_obj.slug)
coresys.addons.data.user[addon_obj.slug] = {"version": AwesomeVersion("0.0.1")}
# Mock the system machine to be different
with patch.object(CoreSys, "machine", new=PropertyMock(return_value="qemux86-64")):
resp = await api_client.request(
api_method, f"/store/addons/{addon_obj.slug}/{api_action}"
)
assert resp.status == 400
result = await resp.json()
assert result["error_key"] == "addon_not_supported_machine_type_error"
assert (
result["message_template"]
== "Add-on {slug} not supported on this machine, supported machine types: {machine_types}"
)
assert result["extra_fields"] == {
"slug": "test_machine_addon",
"machine_types": ", ".join(supported_machines),
}
assert result["message"] == result["message_template"].format(
**result["extra_fields"]
)
@pytest.mark.parametrize(
("api_action", "api_method", "installed"),
[
("availability", "get", False),
("install", "post", False),
("update", "post", True),
],
)
async def test_api_store_addons_addon_availability_homeassistant_version_too_old(
api_client: TestClient,
coresys: CoreSys,
api_action: str,
api_method: str,
installed: bool,
):
"""Test availability errors for /store/addons/{addon}/* REST APIs - Home Assistant version too old."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
# Create an addon that requires newer Home Assistant version
addon_obj = AddonStore(coresys, "test_version_addon")
coresys.addons.store[addon_obj.slug] = addon_obj
# Set addon config with minimum Home Assistant version requirement
addon_config = {
"advanced": False,
"arch": ["amd64"],
"homeassistant": "2023.1.1", # Requires newer version than current
"slug": "test_version_addon",
"description": "Test version add-on",
"name": "Test Version Add-on",
"repository": "test",
"stage": "stable",
"version": "1.0.0",
}
coresys.store.data.addons[addon_obj.slug] = addon_config
if installed:
coresys.addons.local[addon_obj.slug] = Addon(coresys, addon_obj.slug)
coresys.addons.data.user[addon_obj.slug] = {"version": AwesomeVersion("0.0.1")}
# Mock the Home Assistant version to be older
with patch.object(
HomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2022.1.1")),
):
resp = await api_client.request(
api_method, f"/store/addons/{addon_obj.slug}/{api_action}"
)
assert resp.status == 400
result = await resp.json()
assert result["error_key"] == "addon_not_supported_home_assistant_version_error"
assert (
result["message_template"]
== "Add-on {slug} not supported on this system, requires Home Assistant version {version} or greater"
)
assert result["extra_fields"] == {
"slug": "test_version_addon",
"version": "2023.1.1",
}
assert result["message"] == result["message_template"].format(
**result["extra_fields"]
)
async def test_api_store_addons_addon_availability_installed_addon(
api_client: TestClient, install_addon_ssh: Addon
):
"""Test /store/addons/{addon}/availability REST API - installed addon checks against latest version."""
resp = await api_client.get("/store/addons/local_ssh/availability")
assert resp.status == 200
install_addon_ssh.data_store["version"] = AwesomeVersion("10.0.0")
install_addon_ssh.data_store["homeassistant"] = AwesomeVersion("2023.1.1")
# Mock the Home Assistant version to be older
with patch.object(
HomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2022.1.1")),
):
resp = await api_client.get("/store/addons/local_ssh/availability")
assert resp.status == 400
result = await resp.json()
assert (
"requires Home Assistant version 2023.1.1 or greater" in result["message"]
)
@pytest.mark.parametrize(
("action", "job_name", "addon_slug"),
[
("install", "addon_manager_install", "local_ssh"),
("update", "addon_manager_update", "local_example"),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_addon_install_update(
api_client: TestClient,
coresys: CoreSys,
ha_ws_client: AsyncMock,
install_addon_example: Addon,
action: str,
job_name: str,
addon_slug: str,
):
"""Test progress updates sent to Home Assistant for installs/updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
coresys.arch._supported_arch = ["amd64"] # pylint: disable=protected-access
install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
with (
patch.object(Addon, "load"),
patch.object(Addon, "need_build", new=PropertyMock(return_value=False)),
patch.object(Addon, "latest_need_build", new=PropertyMock(return_value=False)),
):
resp = await api_client.post(f"/store/addons/{addon_slug}/{action}")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == job_name
and evt.args[0]["data"]["data"]["reference"] == addon_slug
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
},
{
"stage": "Downloading",
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": "Extracting",
"progress": 97.2,
"done": False,
},
{
"stage": "Extracting",
"progress": 98.4,
"done": False,
},
{
"stage": "Extracting",
"progress": 99.4,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": True,
},
]

View File

@@ -2,17 +2,24 @@
# pylint: disable=protected-access
import time
from unittest.mock import AsyncMock, MagicMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockingError
import pytest
from supervisor.const import CoreState
from supervisor.core import Core
from supervisor.coresys import CoreSys
from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
from supervisor.homeassistant.const import WSEvent
from supervisor.store.repository import Repository
from supervisor.supervisor import Supervisor
from supervisor.updater import Updater
from tests.api import common_test_api_advanced_logs
from tests.common import load_json_fixture
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -148,10 +155,17 @@ async def test_api_supervisor_options_diagnostics(
assert coresys.dbus.agent.diagnostics is False
async def test_api_supervisor_logs(api_client: TestClient, journald_logs: MagicMock):
async def test_api_supervisor_logs(
api_client: TestClient, journald_logs: MagicMock, coresys: CoreSys, os_available
):
"""Test supervisor logs."""
await common_test_api_advanced_logs(
"/supervisor", "hassio_supervisor", api_client, journald_logs
"/supervisor",
"hassio_supervisor",
api_client,
journald_logs,
coresys,
os_available,
)
@@ -175,7 +189,7 @@ async def test_api_supervisor_fallback(
b"\x1b[36m22-10-11 14:04:23 DEBUG (MainThread) [supervisor.utils.dbus] D-Bus call - org.freedesktop.DBus.Properties.call_get_all on /io/hass/os/AppArmor\x1b[0m",
]
# check fallback also works for the follow endpoint (no mock reset needed)
# check fallback also works for the /follow endpoint (no mock reset needed)
with patch("supervisor.api._LOGGER.exception") as logger:
resp = await api_client.get("/supervisor/logs/follow")
@@ -186,7 +200,16 @@ async def test_api_supervisor_fallback(
assert resp.status == 200
assert resp.content_type == "text/plain"
journald_logs.reset_mock()
# check the /latest endpoint as well
with patch("supervisor.api._LOGGER.exception") as logger:
resp = await api_client.get("/supervisor/logs/latest")
logger.assert_called_once_with(
"Failed to get supervisor logs using advanced_logs API"
)
assert resp.status == 200
assert resp.content_type == "text/plain"
# also check generic Python error
journald_logs.side_effect = OSError("Something bad happened!")
@@ -300,3 +323,97 @@ async def test_api_supervisor_options_blocking_io(
# This should not raise blocking error anymore
time.sleep(0)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_supervisor_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
with (
patch.object(
Supervisor,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.08.0")),
),
patch.object(
Updater,
"version_supervisor",
new=PropertyMock(return_value=AwesomeVersion("2025.08.3")),
),
patch.object(
Updater, "image_supervisor", new=PropertyMock(return_value="supervisor")
),
patch.object(Supervisor, "update_apparmor"),
patch.object(Core, "stop"),
):
resp = await api_client.post("/supervisor/update")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
},
{
"stage": "Downloading",
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": "Extracting",
"progress": 97.2,
"done": False,
},
{
"stage": "Extracting",
"progress": 98.4,
"done": False,
},
{
"stage": "Extracting",
"progress": 99.4,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": False,
},
{
"stage": "Pull complete",
"progress": 100,
"done": True,
},
]

View File

@@ -488,6 +488,7 @@ async def tmp_supervisor_data(coresys: CoreSys, tmp_path: Path) -> Path:
coresys.config.path_addon_configs.mkdir(parents=True)
coresys.config.path_ssl.mkdir()
coresys.config.path_core_backup.mkdir(parents=True)
coresys.config.path_cid_files.mkdir()
yield tmp_path
@@ -803,7 +804,7 @@ async def os_available(request: pytest.FixtureRequest) -> None:
version = (
AwesomeVersion(request.param)
if hasattr(request, "param")
else AwesomeVersion("10.2")
else AwesomeVersion("16.2")
)
with (
patch.object(OSManager, "available", new=PropertyMock(return_value=True)),

View File

@@ -6,7 +6,7 @@ from supervisor.dbus.network import NetworkManager
from supervisor.dbus.network.interface import NetworkInterface
from supervisor.dbus.network.setting.generate import get_connection_from_interface
from supervisor.host.configuration import Ip6Setting, IpConfig, IpSetting, VlanConfig
from supervisor.host.const import InterfaceMethod, InterfaceType
from supervisor.host.const import InterfaceMethod, InterfaceType, MulticastDnsMode
from supervisor.host.network import Interface
from tests.const import TEST_INTERFACE_ETH_NAME
@@ -22,6 +22,8 @@ async def test_get_connection_from_interface(network_manager: NetworkManager):
assert "interface-name" not in connection_payload["connection"]
assert connection_payload["connection"]["type"].value == "802-3-ethernet"
assert connection_payload["connection"]["mdns"].value == 2
assert connection_payload["connection"]["llmnr"].value == 2
assert connection_payload["match"]["path"].value == ["platform-ff3f0000.ethernet"]
assert connection_payload["ipv4"]["method"].value == "auto"
@@ -61,11 +63,15 @@ async def test_generate_from_vlan(network_manager: NetworkManager):
ipv6setting=Ip6Setting(InterfaceMethod.AUTO, [], None, []),
wifi=None,
vlan=VlanConfig(1, "eth0"),
mdns=MulticastDnsMode.RESOLVE,
llmnr=MulticastDnsMode.OFF,
)
connection_payload = get_connection_from_interface(vlan_interface, network_manager)
assert connection_payload["connection"]["id"].value == "Supervisor eth0.1"
assert connection_payload["connection"]["type"].value == "vlan"
assert connection_payload["connection"]["mdns"].value == 1 # resolve
assert connection_payload["connection"]["llmnr"].value == 0 # off
assert "uuid" in connection_payload["connection"]
assert "match" not in connection_payload["connection"]
assert "interface-name" not in connection_payload["connection"]

View File

@@ -347,6 +347,7 @@ async def test_addon_run_add_host_error(
addonsdata_system: dict[str, Data],
capture_exception: Mock,
path_extern,
tmp_supervisor_data: Path,
):
"""Test error adding host when addon is run."""
await coresys.dbus.timedate.connect(coresys.dbus.bus)
@@ -433,6 +434,7 @@ async def test_addon_new_device(
dev_path: str,
cgroup: str,
is_os: bool,
tmp_supervisor_data: Path,
):
"""Test new device that is listed in static devices."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
@@ -463,6 +465,7 @@ async def test_addon_new_device_no_haos(
install_addon_ssh: Addon,
docker: DockerAPI,
dev_path: str,
tmp_supervisor_data: Path,
):
"""Test new device that is listed in static devices on non HAOS system with CGroup V2."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000

View File

@@ -25,7 +25,6 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.homeassistant.const import WSEvent
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from tests.common import load_json_fixture
@@ -415,14 +414,57 @@ async def test_install_fires_progress_events(
]
async def test_install_sends_progress_to_home_assistant(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
async def test_install_progress_rounding_does_not_cause_misses(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test progress events are sent as job updates to Home Assistant."""
"""Test extremely close progress events do not create rounding issues."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
# Current numbers chosen to recreate a rounding issue in the original code,
# where a progress update came in with a value between the actual previous
# value and what it was rounded to. It should not raise an out-of-order exception
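# Worked numbers, assuming the download phase spans 0-50% of the layer job
# as in the fixture above:
#   432_700_000 / 436_480_882 * 50 = 49.5669...  -> rounded for display to 49.6
#   432_800_000 / 436_480_882 * 50 = 49.5783...  -> raw value still below 49.6
# so the new raw value lands between the previous raw value and its rounded form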
coresys.docker.docker.api.pull.return_value = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.1",
},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Downloading",
"progressDetail": {"current": 432700000, "total": 436480882},
"progress": "[=================================================> ] 432.7MB/436.5MB",
"id": "1e214cd6d7d0",
},
{
"status": "Downloading",
"progressDetail": {"current": 432800000, "total": 436480882},
"progress": "[=================================================> ] 432.8MB/436.5MB",
"id": "1e214cd6d7d0",
},
{"status": "Verifying Checksum", "progressDetail": {}, "id": "1e214cd6d7d0"},
{"status": "Download complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Extracting",
"progressDetail": {"current": 432700000, "total": 436480882},
"progress": "[=================================================> ] 432.7MB/436.5MB",
"id": "1e214cd6d7d0",
},
{
"status": "Extracting",
"progressDetail": {"current": 432800000, "total": 436480882},
"progress": "[=================================================> ] 432.8MB/436.5MB",
"id": "1e214cd6d7d0",
},
{"status": "Pull complete", "progressDetail": {}, "id": "1e214cd6d7d0"},
{
"status": "Digest: sha256:7d97da645f232f82a768d0a537e452536719d56d484d419836e53dbe3e4ec736"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
},
]
with (
patch.object(
@@ -447,157 +489,7 @@ async def test_install_sends_progress_to_home_assistant(
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
]
assert events[0]["name"] == "docker_interface_install"
assert events[0]["uuid"] == job.uuid
assert events[0]["done"] is None
assert events[1]["name"] == "docker_interface_install"
assert events[1]["uuid"] == job.uuid
assert events[1]["done"] is False
assert events[-1]["name"] == "docker_interface_install"
assert events[-1]["uuid"] == job.uuid
assert events[-1]["done"] is True
def make_sub_log(layer_id: str):
return [
{
"stage": evt["stage"],
"progress": evt["progress"],
"done": evt["done"],
"extra": evt["extra"],
}
for evt in events
if evt["name"] == "Pulling container image layer"
and evt["reference"] == layer_id
and evt["parent_id"] == job.uuid
]
layer_1_log = make_sub_log("1e214cd6d7d0")
layer_2_log = make_sub_log("1a38e1d5e18d")
assert len(layer_1_log) == 20
assert len(layer_2_log) == 19
assert len(events) == 42
assert layer_1_log == [
{"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
"extra": {"current": 539462, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.6,
"done": False,
"extra": {"current": 4864838, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.9,
"done": False,
"extra": {"current": 7552896, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
"extra": {"current": 10252544, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 2.9,
"done": False,
"extra": {"current": 25369792, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Verifying Checksum",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Download complete",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 50.1,
"done": False,
"extra": {"current": 557056, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 60.3,
"done": False,
"extra": {"current": 89686016, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 70.0,
"done": False,
"extra": {"current": 174358528, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 80.0,
"done": False,
"extra": {"current": 261816320, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 88.4,
"done": False,
"extra": {"current": 334790656, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 94.0,
"done": False,
"extra": {"current": 383811584, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 99.9,
"done": False,
"extra": {"current": 435617792, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 100.0,
"done": False,
"extra": {"current": 436480882, "total": 436480882},
},
{
"stage": "Pull complete",
"progress": 100.0,
"done": True,
"extra": {"current": 436480882, "total": 436480882},
},
]
capture_exception.assert_not_called()
@pytest.mark.parametrize(
@@ -644,3 +536,43 @@ async def test_install_raises_on_pull_error(
with pytest.raises(exc_type, match=exc_msg):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
async def test_install_progress_handles_download_restart(
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test install handles docker progress events that include a download restart."""
coresys.core.set_state(CoreState.RUNNING)
# Fixture emulates a download restart as Docker logs it;
# a log out-of-order exception should not be raised
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log_restart.json"
)
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
):
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
capture_exception.assert_not_called()

View File

@@ -1,11 +1,13 @@
"""Test Docker manager."""
from unittest.mock import MagicMock
import asyncio
from unittest.mock import MagicMock, patch
from docker.errors import DockerException
import pytest
from requests import RequestException
from supervisor.coresys import CoreSys
from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.exceptions import DockerError
@@ -134,3 +136,173 @@ async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
# Verify the result
assert result.exit_code == 0
assert result.output == b"output"
async def test_run_container_with_cidfile(
coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
):
"""Test container creation with cidfile and bind mount."""
# Mock container
mock_container = MagicMock()
mock_container.id = "test_container_id_12345"
container_name = "test_container"
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
extern_cidfile_path = coresys.config.path_extern_cid_files / f"{container_name}.cid"
docker.docker.containers.run.return_value = mock_container
# Mock container creation
with patch.object(
docker.containers, "create", return_value=mock_container
) as create_mock:
# Execute run with a container name
loop = asyncio.get_event_loop()
result = await loop.run_in_executor(
None,
lambda kwargs: docker.run(**kwargs),
{"image": "test_image", "tag": "latest", "name": container_name},
)
# Check the container creation parameters
create_mock.assert_called_once()
kwargs = create_mock.call_args[1]
assert "volumes" in kwargs
assert str(extern_cidfile_path) in kwargs["volumes"]
assert kwargs["volumes"][str(extern_cidfile_path)]["bind"] == "/run/cid"
assert kwargs["volumes"][str(extern_cidfile_path)]["mode"] == "ro"
# Verify container start was called
mock_container.start.assert_called_once()
# Verify cidfile was written with container ID
assert cidfile_path.exists()
assert cidfile_path.read_text() == mock_container.id
assert result == mock_container
async def test_run_container_with_leftover_cidfile(
coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
):
"""Test container creation removes leftover cidfile before creating new one."""
# Mock container
mock_container = MagicMock()
mock_container.id = "test_container_id_new"
container_name = "test_container"
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
# Create a leftover cidfile
cidfile_path.touch()
# Mock container creation
with patch.object(
docker.containers, "create", return_value=mock_container
) as create_mock:
# Execute run with a container name
loop = asyncio.get_event_loop()
result = await loop.run_in_executor(
None,
lambda kwargs: docker.run(**kwargs),
{"image": "test_image", "tag": "latest", "name": container_name},
)
# Verify container was created
create_mock.assert_called_once()
# Verify new cidfile was written with container ID
assert cidfile_path.exists()
assert cidfile_path.read_text() == mock_container.id
assert result == mock_container
async def test_stop_container_with_cidfile_cleanup(
coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
):
"""Test container stop with cidfile cleanup."""
# Mock container
mock_container = MagicMock()
mock_container.status = "running"
container_name = "test_container"
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
# Create a cidfile
cidfile_path.touch()
# Mock the containers.get method (cidfile cleanup runs for real)
with (
patch.object(docker.containers, "get", return_value=mock_container),
):
# Call stop_container with remove_container=True
loop = asyncio.get_running_loop()
await loop.run_in_executor(
None,
lambda kwargs: docker.stop_container(**kwargs),
{"timeout": 10, "remove_container": True, "name": container_name},
)
# Verify container operations
mock_container.stop.assert_called_once_with(timeout=10)
mock_container.remove.assert_called_once_with(force=True, v=True)
assert not cidfile_path.exists()
async def test_stop_container_without_removal_no_cidfile_cleanup(docker: DockerAPI):
"""Test container stop without removal doesn't clean up cidfile."""
# Mock container
mock_container = MagicMock()
mock_container.status = "running"
container_name = "test_container"
# Mock the containers.get method and cidfile cleanup
with (
patch.object(docker.containers, "get", return_value=mock_container),
patch("pathlib.Path.unlink") as mock_unlink,
):
# Call stop_container with remove_container=False
docker.stop_container(container_name, timeout=10, remove_container=False)
# Verify container operations
mock_container.stop.assert_called_once_with(timeout=10)
mock_container.remove.assert_not_called()
# Verify cidfile cleanup was NOT called
mock_unlink.assert_not_called()
async def test_cidfile_cleanup_handles_oserror(
coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
):
"""Test that cidfile cleanup handles OSError gracefully."""
# Mock container
mock_container = MagicMock()
mock_container.status = "running"
container_name = "test_container"
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
# Create a cidfile
cidfile_path.touch()
# Mock the containers.get method and cidfile cleanup to raise OSError
with (
patch.object(docker.containers, "get", return_value=mock_container),
patch(
"pathlib.Path.unlink", side_effect=OSError("File not found")
) as mock_unlink,
):
# Call stop_container - should not raise exception
docker.stop_container(container_name, timeout=10, remove_container=True)
# Verify container operations completed
mock_container.stop.assert_called_once_with(timeout=10)
mock_container.remove.assert_called_once_with(force=True, v=True)
# Verify cidfile cleanup was attempted
mock_unlink.assert_called_once_with(missing_ok=True)
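Taken together, these four tests pin down a small cidfile contract: write the container ID on create (replacing any stale file), bind-mount it read-only at /run/cid, and best-effort-delete it on stop-with-remove. A minimal sketch of that contract in isolation, assuming hypothetical helper names (_write_cidfile and _cleanup_cidfile are illustrative, not Supervisor internals):
from pathlib import Path
def _write_cidfile(cid_dir: Path, name: str, container_id: str) -> Path:
    """Replace any leftover cidfile, then persist the new container ID."""
    cidfile = cid_dir / f"{name}.cid"
    cidfile.unlink(missing_ok=True)  # stale file from a previous run
    cidfile.write_text(container_id)
    return cidfile
def _cleanup_cidfile(cid_dir: Path, name: str) -> None:
    """Best-effort removal; an OSError must not abort the container stop."""
    try:
        (cid_dir / f"{name}.cid").unlink(missing_ok=True)
    except OSError:
        pass  # the tests assert stop/remove still complete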

View File

@@ -0,0 +1,134 @@
[
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.1"
},
{
"status": "Already exists",
"progressDetail": {},
"id": "6e771e15690e"
},
{
"status": "Already exists",
"progressDetail": {},
"id": "58da640818f4"
},
{
"status": "Pulling fs layer",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Already exists",
"progressDetail": {},
"id": "1a38e1d5e18d"
},
{
"status": "Waiting",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 103619904,
"total": 436480882
},
"progress": "[===========> ] 103.6MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 227726144,
"total": 436480882
},
"progress": "[==========================> ] 227.7MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 433170048,
"total": 436480882
},
"progress": "[=================================================> ] 433.2MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Retrying in 2 seconds",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Retrying in 1 seconds",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 103619904,
"total": 436480882
},
"progress": "[===========> ] 103.6MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 227726144,
"total": 436480882
},
"progress": "[==========================> ] 227.7MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Downloading",
"progressDetail": {
"current": 433170048,
"total": 436480882
},
"progress": "[=================================================> ] 433.2MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Verifying Checksum",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Download complete",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Extracting",
"progressDetail": {
"current": 261816320,
"total": 436480882
},
"progress": "[=============================> ] 261.8MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Extracting",
"progressDetail": {
"current": 436480882,
"total": 436480882
},
"progress": "[==================================================>] 436.5MB/436.5MB",
"id": "1e214cd6d7d0"
},
{
"status": "Pull complete",
"progressDetail": {},
"id": "1e214cd6d7d0"
},
{
"status": "Digest: sha256:7d97da645f232f82a768d0a537e452536719d56d484d419836e53dbe3e4ec736"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
}
]
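This fixture mirrors a realistic Docker pull stream, including a mid-download retry that rewinds the layer's byte counters. As a rough illustration only (not the Supervisor's actual progress algorithm), such events could be folded into one percentage by weighting download and extraction half each:
def pull_progress(events: list[dict]) -> float:
    """Fold Docker pull events into a rough 0-100 figure."""
    layers: dict[str, float] = {}
    for ev in events:
        layer, detail = ev.get("id"), ev.get("progressDetail") or {}
        if not layer or not detail.get("total"):
            continue  # status-only events ("Waiting", "Retrying ...") carry no bytes
        frac = detail["current"] / detail["total"]
        if ev["status"] == "Downloading":
            layers[layer] = 50 * frac  # a retry simply overwrites with a lower value
        elif ev["status"] == "Extracting":
            layers[layer] = 50 + 50 * frac
    return sum(layers.values()) / len(layers) if layers else 0.0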

View File

@@ -36,6 +36,8 @@ async def test_equals_dbus_interface_no_settings(coresys: CoreSys):
vlan=None,
path="platform-ff3f0000.ethernet",
mac="AA:BB:CC:DD:EE:FF",
mdns=None,
llmnr=None,
)
# Get network interface and remove its connection to simulate no settings
@@ -64,6 +66,8 @@ async def test_equals_dbus_interface_connection_name_match(coresys: CoreSys):
vlan=None,
path="platform-ff3f0000.ethernet",
mac="AA:BB:CC:DD:EE:FF",
mdns=None,
llmnr=None,
)
# Get the network interface - this should have connection settings with interface-name = "eth0"
@@ -90,6 +94,8 @@ def test_equals_dbus_interface_connection_name_no_match():
vlan=None,
path="platform-ff3f0000.ethernet",
mac="AA:BB:CC:DD:EE:FF",
mdns=None,
llmnr=None,
)
# Mock network interface with different connection name
@@ -125,6 +131,8 @@ async def test_equals_dbus_interface_path_match(
vlan=None,
path="platform-ff3f0000.ethernet",
mac="AA:BB:CC:DD:EE:FF",
mdns=None,
llmnr=None,
)
# Add match settings with path and remove interface name to force path matching
@@ -156,6 +164,8 @@ def test_equals_dbus_interface_vlan_type_mismatch():
vlan=VlanConfig(id=10, interface="0c23631e-2118-355c-bbb0-8943229cb0d6"),
path="",
mac="52:54:00:2B:36:80",
mdns=None,
llmnr=None,
)
# Mock non-VLAN NetworkInterface - should return False immediately
@@ -185,6 +195,8 @@ def test_equals_dbus_interface_vlan_missing_info():
vlan=None, # Missing VLAN config!
path="",
mac="52:54:00:2B:36:80",
mdns=None,
llmnr=None,
)
# Mock VLAN NetworkInterface
@@ -218,6 +230,8 @@ def test_equals_dbus_interface_vlan_no_vlan_settings():
vlan=VlanConfig(id=10, interface="0c23631e-2118-355c-bbb0-8943229cb0d6"),
path="",
mac="52:54:00:2B:36:80",
mdns=None,
llmnr=None,
)
# Mock VLAN NetworkInterface without VLAN settings
@@ -271,6 +285,8 @@ async def test_equals_dbus_interface_eth0_10_real(
),
path="",
mac="52:54:00:2B:36:80",
mdns=None,
llmnr=None,
)
# Test should pass with matching VLAN config
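Every hunk in this file adds the same two keyword arguments, i.e. the settings object under test grew mdns and llmnr fields. A simplified, hypothetical shape (the class name is illustrative; NetworkManager models both properties as small integer enums, with None meaning unset):
from dataclasses import dataclass
from typing import Any
@dataclass
class ConnectionMatch:  # illustrative stand-in for the real settings class
    vlan: Any | None
    path: str
    mac: str
    mdns: int | None   # new: multicast DNS mode
    llmnr: int | None  # new: link-local name resolution mode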

View File

@@ -139,10 +139,10 @@ async def test_free_space(coresys: CoreSys):
return True
test = TestClass(coresys)
with patch("shutil.disk_usage", return_value=(42, 42, (1024.0**3))):
with patch("shutil.disk_usage", return_value=(42, 42, (2048.0**3))):
assert await test.execute()
with patch("shutil.disk_usage", return_value=(42, 42, (512.0**3))):
with patch("shutil.disk_usage", return_value=(42, 42, (1024.0**3))):
assert not await test.execute()
coresys.jobs.ignore_conditions = [JobCondition.FREE_SPACE]
@@ -366,15 +366,21 @@ async def test_throttle_rate_limit(coresys: CoreSys, error: JobException | None)
test = TestClass(coresys)
await asyncio.gather(*[test.execute(), test.execute()])
start = utcnow()
with time_machine.travel(start):
await asyncio.gather(*[test.execute(), test.execute()])
assert test.call == 2
with pytest.raises(JobException if error is None else error):
with (
time_machine.travel(start + timedelta(milliseconds=1)),
pytest.raises(JobException if error is None else error),
):
await test.execute()
assert test.call == 2
with time_machine.travel(utcnow() + timedelta(hours=1)):
with time_machine.travel(start + timedelta(hours=1, milliseconds=1)):
await test.execute()
assert test.call == 3
@@ -830,15 +836,18 @@ async def test_group_throttle(coresys: CoreSys):
test1 = TestClass(coresys, "test1")
test2 = TestClass(coresys, "test2")
start = utcnow()
# One call of each should work. The subsequent calls will be silently throttled due to the throttle period
await asyncio.gather(
test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)
)
with time_machine.travel(start):
await asyncio.gather(
test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)
)
assert test1.call == 1
assert test2.call == 1
# First call to each works again since the period has cleared. The second is throttled once more as they don't wait
with time_machine.travel(utcnow() + timedelta(milliseconds=100)):
with time_machine.travel(start + timedelta(milliseconds=100)):
await asyncio.gather(
test1.execute(0.1),
test1.execute(0.1),
@@ -878,15 +887,18 @@ async def test_group_throttle_with_queue(coresys: CoreSys):
test1 = TestClass(coresys, "test1")
test2 = TestClass(coresys, "test2")
start = utcnow()
# One call of each should work. The subsequent calls will be silently throttled after waiting, due to the throttle period
await asyncio.gather(
*[test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)]
)
with time_machine.travel(start):
await asyncio.gather(
*[test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)]
)
assert test1.call == 1
assert test2.call == 1
# All calls should work as we cleared the period, and the tasks take longer than the period so they are queued
with time_machine.travel(utcnow() + timedelta(milliseconds=100)):
with time_machine.travel(start + timedelta(milliseconds=100)):
await asyncio.gather(
*[
test1.execute(0.1),
@@ -927,21 +939,25 @@ async def test_group_throttle_rate_limit(coresys: CoreSys, error: JobException |
test1 = TestClass(coresys, "test1")
test2 = TestClass(coresys, "test2")
await asyncio.gather(
*[test1.execute(), test1.execute(), test2.execute(), test2.execute()]
)
start = utcnow()
with time_machine.travel(start):
await asyncio.gather(
*[test1.execute(), test1.execute(), test2.execute(), test2.execute()]
)
assert test1.call == 2
assert test2.call == 2
with pytest.raises(JobException if error is None else error):
await test1.execute()
with pytest.raises(JobException if error is None else error):
await test2.execute()
with time_machine.travel(start + timedelta(milliseconds=1)):
with pytest.raises(JobException if error is None else error):
await test1.execute()
with pytest.raises(JobException if error is None else error):
await test2.execute()
assert test1.call == 2
assert test2.call == 2
with time_machine.travel(utcnow() + timedelta(hours=1)):
with time_machine.travel(start + timedelta(hours=1, milliseconds=1)):
await test1.execute()
await test2.execute()
@@ -1285,20 +1301,26 @@ async def test_concurency_reject_and_rate_limit(
test = TestClass(coresys)
results = await asyncio.gather(
*[test.execute(0.1), test.execute(), test.execute()], return_exceptions=True
)
start = utcnow()
with time_machine.travel(start):
results = await asyncio.gather(
*[test.execute(0.1), test.execute(), test.execute()], return_exceptions=True
)
assert results[0] is None
assert isinstance(results[1], JobException)
assert isinstance(results[2], JobException)
assert test.call == 1
with pytest.raises(JobException if error is None else error):
with (
time_machine.travel(start + timedelta(milliseconds=1)),
pytest.raises(JobException if error is None else error),
):
await test.execute()
assert test.call == 1
with time_machine.travel(utcnow() + timedelta(hours=1)):
with time_machine.travel(start + timedelta(hours=1, milliseconds=1)):
await test.execute()
assert test.call == 2
@@ -1342,18 +1364,22 @@ async def test_group_concurrency_with_group_throttling(coresys: CoreSys):
test = TestClass(coresys)
start = utcnow()
# First call should work
await test.main_method()
with time_machine.travel(start):
await test.main_method()
assert test.call_count == 1
assert test.nested_call_count == 1
# Second call should be throttled (not execute due to throttle period)
await test.main_method()
with time_machine.travel(start + timedelta(milliseconds=1)):
await test.main_method()
assert test.call_count == 1 # Still 1, throttled
assert test.nested_call_count == 1 # Still 1, throttled
# Wait for throttle period to pass and try again
with time_machine.travel(utcnow() + timedelta(milliseconds=60)):
with time_machine.travel(start + timedelta(milliseconds=60)):
await test.main_method()
assert test.call_count == 2 # Should execute now
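A theme across these job-decorator hunks: each test now captures start = utcnow() once and travels relative to that single reference, rather than calling utcnow() again mid-test, which made the throttle offsets race the wall clock. The pattern, condensed (do_work stands in for the throttled methods; the utcnow import is assumed to match the tests'):
from datetime import timedelta
import time_machine
from supervisor.utils.dt import utcnow
async def run_throttle_scenario(do_work) -> None:
    """Drive a throttled callable at exact offsets from one pinned clock."""
    start = utcnow()
    with time_machine.travel(start):
        await do_work()  # first call, inside a fresh throttle window
    with time_machine.travel(start + timedelta(milliseconds=1)):
        await do_work()  # deterministically still inside the window
    with time_machine.travel(start + timedelta(hours=1, milliseconds=1)):
        await do_work()  # window elapsed, executes again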

View File

@@ -1,6 +1,7 @@
"""Test base plugin functionality."""
import asyncio
from pathlib import Path
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
from awesomeversion import AwesomeVersion
@@ -165,6 +166,8 @@ async def test_plugin_watchdog_max_failed_attempts(
error: PluginError,
container: MagicMock,
caplog: pytest.LogCaptureFixture,
tmp_supervisor_data: Path,
path_extern,
) -> None:
"""Test plugin watchdog gives up after max failed attempts."""
with patch.object(type(plugin.instance), "attach"):

View File

@@ -70,7 +70,7 @@ async def test_if_check_cleanup_issue(coresys: CoreSys):
assert free_space in coresys.resolution.issues
with patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))):
with patch("shutil.disk_usage", return_value=(42, 42, 3 * (1024.0**3))):
await coresys.resolution.check.check_system()
assert free_space not in coresys.resolution.issues

View File

@@ -1,33 +1,12 @@
"""Test check free space fixup."""
# pylint: disable=import-error,protected-access
from unittest.mock import MagicMock, PropertyMock, patch
from unittest.mock import patch
import pytest
from supervisor.backups.const import BackupType
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.resolution.checks.free_space import CheckFreeSpace
from supervisor.resolution.const import IssueType, SuggestionType
@pytest.fixture(name="suggestion")
async def fixture_suggestion(
coresys: CoreSys, request: pytest.FixtureRequest
) -> SuggestionType | None:
"""Set up test for suggestion."""
if request.param == SuggestionType.CLEAR_FULL_BACKUP:
backup = MagicMock()
backup.sys_type = BackupType.FULL
with patch.object(
type(coresys.backups),
"list_backups",
new=PropertyMock(return_value=[backup, backup, backup]),
):
yield SuggestionType.CLEAR_FULL_BACKUP
else:
yield request.param
from supervisor.resolution.const import IssueType
async def test_base(coresys: CoreSys):
@@ -37,19 +16,14 @@ async def test_base(coresys: CoreSys):
assert free_space.enabled
@pytest.mark.parametrize(
"suggestion",
[None, SuggestionType.CLEAR_FULL_BACKUP],
indirect=True,
)
async def test_check(coresys: CoreSys, suggestion: SuggestionType | None):
async def test_check(coresys: CoreSys):
"""Test check."""
free_space = CheckFreeSpace(coresys)
await coresys.core.set_state(CoreState.RUNNING)
assert len(coresys.resolution.issues) == 0
with patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))):
with patch("shutil.disk_usage", return_value=(42, 42, 3 * (1024.0**3))):
await free_space.run_check()
assert len(coresys.resolution.issues) == 0
@@ -58,11 +32,7 @@ async def test_check(coresys: CoreSys, suggestion: SuggestionType | None):
await free_space.run_check()
assert coresys.resolution.issues[-1].type == IssueType.FREE_SPACE
if suggestion:
assert coresys.resolution.suggestions[-1].type == suggestion
else:
assert len(coresys.resolution.suggestions) == 0
assert len(coresys.resolution.suggestions) == 0
async def test_approve(coresys: CoreSys):
@@ -73,7 +43,7 @@ async def test_approve(coresys: CoreSys):
with patch("shutil.disk_usage", return_value=(1, 1, 1)):
assert await free_space.approve_check()
with patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))):
with patch("shutil.disk_usage", return_value=(42, 42, 3 * (1024.0**3))):
assert not await free_space.approve_check()
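For context, shutil.disk_usage returns a (total, used, free) tuple in bytes, so the patches in these hunks only vary the third element; the actual threshold lives in the check itself, and the passing fixtures simply moved comfortably above it. Distilled:
import shutil
from unittest.mock import patch
GIB = 1024.0**3
# Only the free-bytes element matters to the free-space check.
with patch("shutil.disk_usage", return_value=(42, 42, 3 * GIB)):
    assert shutil.disk_usage("/")[2] == 3 * GIB  # the check would see ~3 GiB free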

View File

@@ -12,7 +12,7 @@ from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.backups.manager import BackupManager
from supervisor.coresys import CoreSys
from supervisor.exceptions import AddonsNotSupportedError, StoreJobError
from supervisor.exceptions import AddonNotSupportedError, StoreJobError
from supervisor.homeassistant.module import HomeAssistant
from supervisor.store import StoreManager
from supervisor.store.addon import AddonStore
@@ -170,9 +170,9 @@ async def test_update_unavailable_addon(
"version",
new=PropertyMock(return_value=AwesomeVersion("2022.1.1")),
),
patch("shutil.disk_usage", return_value=(42, 42, (1024.0**3))),
patch("shutil.disk_usage", return_value=(42, 42, (5120.0**3))),
):
with pytest.raises(AddonsNotSupportedError):
with pytest.raises(AddonNotSupportedError):
await coresys.addons.update("local_ssh", backup=True)
backup.assert_not_called()
@@ -226,8 +226,8 @@ async def test_install_unavailable_addon(
"version",
new=PropertyMock(return_value=AwesomeVersion("2022.1.1")),
),
patch("shutil.disk_usage", return_value=(42, 42, (1024.0**3))),
pytest.raises(AddonsNotSupportedError),
patch("shutil.disk_usage", return_value=(42, 42, (5120.0**3))),
pytest.raises(AddonNotSupportedError),
):
await coresys.addons.install("local_ssh")

View File

@@ -20,7 +20,16 @@ def test_loading_traslations(coresys: CoreSys, tmp_path: Path):
for file in ("en.json", "es.json"):
write_json_or_yaml_file(
tmp_path / "translations" / file,
{"configuration": {"test": {"name": "test", "test": "test"}}},
{
"configuration": {
"test": {
"name": "test",
"description": "test",
"test": "test",
"fields": {"test2": {"name": "test2"}},
}
}
},
)
for file in ("no.yaml", "de.yaml"):
@@ -39,6 +48,18 @@ def test_loading_traslations(coresys: CoreSys, tmp_path: Path):
assert translations["no"]["configuration"]["test"]["name"] == "test"
assert translations["de"]["configuration"]["test"]["name"] == "test"
assert translations["en"]["configuration"]["test"]["description"] == "test"
assert translations["es"]["configuration"]["test"]["description"] == "test"
assert (
translations["en"]["configuration"]["test"]["fields"]["test2"]["name"]
== "test2"
)
assert (
translations["es"]["configuration"]["test"]["fields"]["test2"]["name"]
== "test2"
)
assert "test" not in translations["en"]["configuration"]["test"]
assert translations["no"]["network"]["80/tcp"] == "Webserver port"