Compare commits

..

73 Commits

Author SHA1 Message Date
Stefan Agner
9bee58a8b1
Migrate to JobConcurrency and JobThrottle parameters (#6065) 2025-08-05 13:24:44 +02:00
Mike Degatano
8a1e6b0895
Add unsupported reason os_version and evaluation (#6041)
* Add unsupported reason os_version and evaluation

* Order enum and add tests

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-08-05 13:23:42 +02:00
Stefan Agner
f150d1b287
Return optimistic life time estimate for eMMC storage (#6063)
This avoids displaying 10% lifetime used for a brand-new
eMMC storage. The values are estimates anyway, and there is a
separate value which represents lifetime completely used (100%).
2025-08-05 10:43:57 +02:00
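A minimal sketch of what "optimistic" means here, assuming the eMMC life-time estimate is reported in the usual 10% bins (0x01 = 0-10% used): report the lower bound of the bin instead of the upper bound, so a brand-new device shows 0% rather than 10%. This is an illustration of the intent, not the Supervisor code itself.

```python
# Hypothetical illustration: eMMC DEVICE_LIFE_TIME_EST values are 10 % bins
# (0x01 = 0-10 % used, 0x02 = 10-20 % used, ..., 0x0B = life time exceeded).
def life_time_percent(est_bin: int) -> float | None:
    """Return an optimistic (lower-bound) life time estimate in percent."""
    if est_bin == 0x00:  # not defined / unsupported by the device
        return None
    if est_bin >= 0x0B:  # device has exceeded its estimated life time
        return 100.0
    # Lower bound of the bin: a brand-new device (bin 0x01) reports 0 %, not 10 %.
    return (est_bin - 1) * 10.0

assert life_time_percent(0x01) == 0.0   # new device
assert life_time_percent(0x02) == 10.0
```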
dependabot[bot]
628a18c6b8
Bump coverage from 7.10.1 to 7.10.2 (#6062)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.1 to 7.10.2.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.1...7.10.2)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:38 +02:00
dependabot[bot]
74e43411e5
Bump dbus-fast from 2.44.2 to 2.44.3 (#6061)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.2 to 2.44.3.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.2...v2.44.3)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:25 +02:00
dependabot[bot]
e6b0d4144c
Bump awesomeversion from 25.5.0 to 25.8.0 (#6060)
Bumps [awesomeversion](https://github.com/ludeeus/awesomeversion) from 25.5.0 to 25.8.0.
- [Release notes](https://github.com/ludeeus/awesomeversion/releases)
- [Commits](https://github.com/ludeeus/awesomeversion/compare/25.5.0...25.8.0)

---
updated-dependencies:
- dependency-name: awesomeversion
  dependency-version: 25.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:11 +02:00
Mike Degatano
033896480d
Fix backup equal and add hash to objects with eq (#6059)
* Fix backup equal and add hash to objects with eq

* Add test for failed consolidate
2025-08-04 14:19:33 +02:00
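For context on the "add hash to objects with eq" part: in Python, a class that defines `__eq__` without `__hash__` becomes unhashable (its implicit `__hash__` is set to `None`), which breaks set and dict membership. A generic, hedged illustration of the pattern (the class name is made up, not the actual backup classes):

```python
from dataclasses import dataclass

# frozen=True makes the dataclass generate both __eq__ and __hash__,
# so equal objects also hash equally.
@dataclass(frozen=True)
class BackupLocation:
    """Illustrative value object used only to demonstrate the eq/hash pairing."""
    name: str
    protected: bool

# Equal objects now deduplicate in sets, e.g. when consolidating backups:
locations = {BackupLocation("local", False), BackupLocation("local", False)}
assert len(locations) == 1
```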
dependabot[bot]
478e00c0fe
Bump home-assistant/wheels from 2025.03.0 to 2025.07.0 (#6057)
Bumps [home-assistant/wheels](https://github.com/home-assistant/wheels) from 2025.03.0 to 2025.07.0.
- [Release notes](https://github.com/home-assistant/wheels/releases)
- [Commits](https://github.com/home-assistant/wheels/compare/2025.03.0...2025.07.0)

---
updated-dependencies:
- dependency-name: home-assistant/wheels
  dependency-version: 2025.07.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:17:42 +02:00
dependabot[bot]
6f2ba7d68c
Bump mypy from 1.17.0 to 1.17.1 (#6058)
Bumps [mypy](https://github.com/python/mypy) from 1.17.0 to 1.17.1.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.17.0...v1.17.1)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:17:13 +02:00
Mike Degatano
22afa60f55
Get lifetime info for NVMe devices (#6056)
* Get lifetime info for NVMe devices

* Fix lint and test issues

* Update tests/dbus_service_mocks/udisks2_manager.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-04 13:53:56 +02:00
Stefan Agner
9f2fda5dc7
Update to Alpine 3.22 (#6054)
Update the Supervisor base image to Alpine 3.22 for all architectures.
2025-08-04 09:50:27 +02:00
Stefan Agner
27b092aed0
Block OS updates when the system is unhealthy (#6053)
* Block OS updates when the system is unhealthy

In #6024 we marked a system as unhealthy when multiple OS installations
were found. The idea was to block OS updates in this case. However, it
turns out that the OS update job was not checking the system health
and thus allowed updates even when the system was marked as unhealthy.

This commit adds the `JobCondition.HEALTHY` condition to the OS update
job, ensuring that OS updates are only performed when the system is
healthy.

Users can still force an OS update by using
`ha jobs options --ignore-conditions healthy`.

* Add test for update of unhealthy system

---------

Co-authored-by: Jan Čermák <sairon@sairon.cz>
2025-07-31 11:23:57 +02:00
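A self-contained toy sketch of the mechanism described above: a job declares a health condition and is skipped while the system is unhealthy, unless the condition is explicitly ignored. The decorator, flags, and job name below are stand-ins for illustration, not Supervisor's actual `Job` implementation.

```python
import asyncio
from enum import Enum
from functools import wraps

class JobCondition(Enum):
    HEALTHY = "healthy"

# Toy system state; Supervisor tracks this on the resolution/core objects.
system = {"healthy": False, "ignore_conditions": set()}

def job(conditions: list[JobCondition]):
    """Toy stand-in for Supervisor's @Job decorator: block when conditions fail."""
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            if (JobCondition.HEALTHY in conditions
                    and not system["healthy"]
                    and JobCondition.HEALTHY not in system["ignore_conditions"]):
                raise RuntimeError("blocked: system is unhealthy")
            return await func(*args, **kwargs)
        return wrapper
    return decorator

@job(conditions=[JobCondition.HEALTHY])
async def os_update() -> str:
    return "updating"

# Blocked while unhealthy; allowed after ignoring the condition, analogous to
# `ha jobs options --ignore-conditions healthy`.
try:
    asyncio.run(os_update())
except RuntimeError as err:
    print(err)
system["ignore_conditions"].add(JobCondition.HEALTHY)
print(asyncio.run(os_update()))
```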
dependabot[bot]
3af13cb7e2
Bump sentry-sdk from 2.34.0 to 2.34.1 (#6052)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.34.0 to 2.34.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.34.0...2.34.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.34.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-31 10:43:49 +02:00
Stefan Agner
6871ea4b81
Split execution limit in concurrency and throttle parameters (#6013)
* Split execution limit in concurrency and throttle parameters

Currently the execution limit combines two orthogonal features: limiting
concurrency and throttling execution. This change separates the two
features, allowing for more flexible configuration of job execution.

Ultimately I want to get rid of the old limit parameter. But for ease
of review and migration, I'd like to do this in two steps: First
introduce the new parameters, and map the old limit parameters to the
new parameters. Then, in a second step, remove the old limit parameter
and migrate all users to the new concurrency and throttle parameters
as needed.

* Introduce common lock release method

* Fix THROTTLE_WAIT behavior

The concurrency QUEUE does not really QUEUE throttle limits.

* Add documentation for new concurrency/throttle Job options

* Handle group options for concurrency and throttle separately

* Fix GROUP_THROTTLE_WAIT concurrency setting

We need to use the QUEUE concurrency setting instead of GROUP_QUEUE
for the GROUP_THROTTLE_WAIT execution limit. Otherwise the
test_jobs_decorator.py::test_execution_limit_group_throttle_wait
test deadlocks.

This deadlocks because GROUP_QUEUE concurrency doesn't really work:
we can only release a group lock if the job is actually running.

Or put differently, throttling isn't supported with GROUP_*
concurrency options.

* Prevent using any throttling with group concurrency

The group concurrency modes (reject and queue) are not compatible with
any throttling, since we currently can't unlock the group lock when
a job doesn't get started (which is the case when throttling is
applied).

* Fix commit in group rate limit

* Explain the deadlock issue with group locks in code

* Handle locking correctly on throttle limit exceptions

* Introduce pytest for new job decorator combinations
2025-07-30 22:12:14 +02:00
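The add-on jobs diff at the bottom of this compare shows the mapping in practice: `limit=JobExecutionLimit.GROUP_ONCE` becomes `concurrency=JobConcurrency.GROUP_REJECT`. A hedged before/after sketch with minimal stubs so it runs standalone; the real classes live in `supervisor/jobs/`, and the throttle option names are not shown here.

```python
from enum import Enum

# Minimal stand-ins so the decorator usage below is runnable on its own.
class JobExecutionLimit(Enum):
    GROUP_ONCE = "group_once"

class JobConcurrency(Enum):
    GROUP_REJECT = "group_reject"

class AddonsJobError(Exception):
    pass

def Job(**options):  # stand-in for the real @Job decorator
    def decorator(func):
        func.job_options = options
        return func
    return decorator

# Before: one combined execution-limit parameter.
@Job(name="addon_install", limit=JobExecutionLimit.GROUP_ONCE, on_condition=AddonsJobError)
async def install_old(self) -> None: ...

# After (as in the addons/addon.py diff below): concurrency is its own parameter;
# throttling, when needed, is configured separately via the new throttle options.
@Job(name="addon_install", on_condition=AddonsJobError, concurrency=JobConcurrency.GROUP_REJECT)
async def install_new(self) -> None: ...

print(install_old.job_options, install_new.job_options)
```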
dependabot[bot]
cf77ab2290
Bump aiohttp from 3.12.14 to 3.12.15 (#6049)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:34:14 +02:00
dependabot[bot]
ceeffa3284
Bump ruff from 0.12.5 to 0.12.7 (#6051)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.12.5 to 0.12.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.12.5...0.12.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:33:07 +02:00
dependabot[bot]
31f2f70cd9
Bump sentry-sdk from 2.33.2 to 2.34.0 (#6050)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.2 to 2.34.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.2...2.34.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:32:11 +02:00
Stefan Agner
deac85bddb
Scrub WiFi fields from Sentry events (#6048)
Make sure WiFi fields are scrubbed from Sentry events to prevent
accidental exposure of sensitive information.
2025-07-29 17:42:43 +02:00
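Field scrubbing in sentry-sdk is typically done in a `before_send` hook; a minimal, hedged sketch of the idea follows. The field names (`ssid`, `psk`, `wifi`) are assumptions for illustration only, not necessarily the fields Supervisor scrubs.

```python
from typing import Any

WIFI_FIELDS = {"ssid", "psk", "wifi"}  # assumed field names, illustration only

def scrub_wifi(data: Any) -> Any:
    """Recursively replace WiFi-related fields before an event leaves the host."""
    if isinstance(data, dict):
        return {
            key: "<scrubbed>" if key.lower() in WIFI_FIELDS else scrub_wifi(value)
            for key, value in data.items()
        }
    if isinstance(data, list):
        return [scrub_wifi(item) for item in data]
    return data

def before_send(event: dict, hint: dict) -> dict:
    """sentry-sdk before_send hook: scrub the event payload before sending."""
    return scrub_wifi(event)

# sentry_sdk.init(dsn=..., before_send=before_send)
```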
Stefan Agner
7dcf5ba631
Enable IPv6 for containers on new installations (#6029)
* Enable IPv6 by default for new installations

Enable IPv6 by default for new Supervisor installations. Let's also
make the `enable_ipv6` attribute nullable, so we can distinguish
between "not set" and "set to false".

* Add pytest

* Add log message that system restart is required for IPv6 changes

* Fix API pytest

* Create resolution center issue when reboot is required

* Order log after actual setter call
2025-07-29 15:59:03 +02:00
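A hedged sketch of how a nullable `enable_ipv6` setting can be interpreted so that unset (`None`) defaults to enabled on new installations while an explicit `False` from an existing installation is respected. The helper name and policy below are illustrative, derived from the commit description rather than the actual code.

```python
def effective_enable_ipv6(stored: bool | None, new_installation: bool) -> bool:
    """Resolve the nullable enable_ipv6 setting.

    None        -> value was never set; new installations default to IPv6 on.
    True/False  -> explicit choice, always respected.
    """
    if stored is None:
        return new_installation  # illustrative policy matching the commit intent
    return stored

assert effective_enable_ipv6(None, new_installation=True) is True
assert effective_enable_ipv6(False, new_installation=True) is False
```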
dependabot[bot]
a004830131
Bump orjson from 3.11.0 to 3.11.1 (#6045)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.0 to 3.11.1.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.0...3.11.1)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:42 +02:00
dependabot[bot]
a8cc6c416d
Bump coverage from 7.10.0 to 7.10.1 (#6044)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.0 to 7.10.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.0...7.10.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:19 +02:00
dependabot[bot]
74b26642b0
Bump ruff from 0.12.4 to 0.12.5 (#6042)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:20:27 +02:00
dependabot[bot]
5e26ab5f4a
Bump gitpython from 3.1.44 to 3.1.45 (#6039)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:14:24 +02:00
dependabot[bot]
a841cb8282
Bump coverage from 7.9.2 to 7.10.0 (#6043) 2025-07-27 10:31:48 +02:00
dependabot[bot]
3b1b03c8a7
Bump dbus-fast from 2.44.1 to 2.44.2 (#6038)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.1 to 2.44.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.1...v2.44.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 16:06:19 -04:00
dependabot[bot]
680428f304
Bump sentry-sdk from 2.33.0 to 2.33.2 (#6037)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.0 to 2.33.2.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.0...2.33.2)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:44:35 -04:00
dependabot[bot]
f34128c37e
Bump ruff from 0.12.3 to 0.12.4 (#6031)
---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:43:56 -04:00
dependabot[bot]
2ed0682b34
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2 (#6032) 2025-07-18 10:00:58 +02:00
Stefan Agner
fbb0915ef8
Mark system as unhealthy if multiple OS installations are found (#6024)
* Add resolution check for duplicate OS installations

* Only create single issue/use separate unhealthy type

* Check MBR partition UUIDs as well

* Use partlabel

* Use generator to avoid code duplication

* Add list of devices, avoid unnecessary exception handling

* Run check only on HAOS

* Fix message formatting

* Fix and simplify pytests

* Fix UnhealthyReason sort order
2025-07-17 10:06:35 +02:00
Stefan Agner
780ae1e15c
Check for duplicate data disks only when the OS is available (#6025)
* Check for duplicate data disks only when the OS is available

Supervised installations do not have a specific data disk, so only
check for duplicate data disks on Home Assistant OS.

* Enable OS for multiple data disks check test
2025-07-16 10:43:15 +02:00
dependabot[bot]
c617358855
Bump orjson from 3.10.18 to 3.11.0 (#6028)
Bumps [orjson](https://github.com/ijl/orjson) from 3.10.18 to 3.11.0.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.10.18...3.11.0)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:24:34 +02:00
dependabot[bot]
b679c4f4d8
Bump sentry-sdk from 2.32.0 to 2.33.0 (#6027)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.32.0 to 2.33.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.32.0...2.33.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:20:28 +02:00
dependabot[bot]
c946c421f2
Bump debugpy from 1.8.14 to 1.8.15 (#6026)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.14 to 1.8.15.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.14...v1.8.15)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:19:44 +02:00
dependabot[bot]
aeabf7ea25
Bump blockbuster from 1.5.24 to 1.5.25 (#6020)
Bumps [blockbuster](https://github.com/cbornet/blockbuster) from 1.5.24 to 1.5.25.
- [Release notes](https://github.com/cbornet/blockbuster/releases)
- [Commits](https://github.com/cbornet/blockbuster/commits/v1.5.25)

---
updated-dependencies:
- dependency-name: blockbuster
  dependency-version: 1.5.25
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:18:57 +02:00
dependabot[bot]
365b838abf
Bump mypy from 1.16.1 to 1.17.0 (#6019)
Bumps [mypy](https://github.com/python/mypy) from 1.16.1 to 1.17.0.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.16.1...v1.17.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:08:57 +02:00
Stefan Agner
99c040520e
Drop ensure_builtin_repositories() (#6012)
* Drop ensure_builtin_repositories

With the new Repository classes we have the is_builtin property, so we
can easily make sure that built-ins are not removed. This allows us to
further clean up the code by removing the ensure_builtin_repositories
function and the ALL_BUILTIN_REPOSITORIES constant.

* Make sure we add built-ins on load

* Reuse default set and avoid unnecessary copy

Reuse default set and avoid unnecessary copying during validation if
the default is not being used.
2025-07-14 22:19:06 +02:00
dependabot[bot]
eefe2f2e06
Bump aiohttp from 3.12.13 to 3.12.14 (#6014)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:43:55 +02:00
dependabot[bot]
a366e36b37
Bump ruff from 0.12.2 to 0.12.3 (#6016)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:19:08 +02:00
dependabot[bot]
27a2fde9e1
Bump astroid from 3.3.10 to 3.3.11 (#6017)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:18:54 +02:00
Stefan Agner
9a0f530a2f
Add Supervisor connectivity check after DNS restart (#6005)
* Add Supervisor connectivity check after DNS restart

When the DNS plug-in got restarted, check Supervisor connectivity
in case the DNS plug-in configuration change influenced Supervisor
connectivity. This is helpful when a DHCP server gets started after
Home Assistant is up. In that case the network-provided DNS server
(local DNS server) becomes available only after the DNS plug-in restart.

Without this change, Supervisor connectivity will remain false
until a Job triggers a connectivity check, for example the
periodic update check (which causes an updater and store reload) by
Core.

* Fix pytest and add coverage for new functionality
2025-07-10 11:08:10 +02:00
Stefan Agner
baf9695cf7
Refactoring around add-on store Repository classes (#5990)
* Rename repository fixture to test_repository

Also don't remove the built-in repositories. The list was incomplete,
and tests don't seem to require that anymore.

* Get rid of StoreType

The type doesn't have much value, we have constant strings anyways.

* Introduce types.py

* Use slug to determine which repository urls to return

* Simplify BuiltinRepository enum

* Mock GitRepo load

* Improve URL handling and repository creation logic

* Refactor update_repositories

* Get rid of get_from_url

It is no longer used in production code.

* More refactoring

* Address pylint

* Introduce is_git_based property to Repository class

Return all git based URLs, including the Core repository.

* Revert "Introduce is_git_based property to Repository class"

This reverts commit dfd5ad79bf23e0e127fc45d97d6f8de0e796faa0.

* Fold type.py into const.py

Align more with how Supervisor code is typically structured.

* Update supervisor/store/__init__.py

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Apply repository remove suggestion

* Fix tests

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-10 11:07:53 +02:00
Stefan Agner
7873c457d5
Small improvement to Copilot instructions (#6011) 2025-07-10 11:05:59 +02:00
Stefan Agner
cbc48c381f
Return 401 Unauthorized when using json/url encoded auth fails (#5844)
When authentication using JSON payload or URL encoded payload fails,
use the generic HTTP response code 401 Unauthorized instead of 400
Bad Request.

This is a more appropriate response code for authentication errors
and is consistent with the behavior of other authentication methods.
2025-07-10 08:38:00 +02:00
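In aiohttp terms the change amounts to raising `HTTPUnauthorized` instead of `HTTPBadRequest` when supplied credentials are rejected; a minimal, hedged sketch follows. The handler, route, and credential backend are illustrative, not Supervisor's actual auth API.

```python
from aiohttp import web

async def check_credentials(username: str, password: str) -> bool:
    """Stand-in for the real authentication backend."""
    return False

async def auth(request: web.Request) -> web.Response:
    data = await request.post()  # URL-encoded form; JSON payloads use request.json()
    username, password = data.get("username", ""), data.get("password", "")
    if not username or not password:
        raise web.HTTPBadRequest()      # malformed request: still 400
    if not await check_credentials(username, password):
        raise web.HTTPUnauthorized()    # failed authentication: now 401, not 400
    return web.json_response({"result": "ok"})

app = web.Application()
app.router.add_post("/auth", auth)
```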
Franck Nijhof
11e37011bd
Add Task issue form (#6007) 2025-07-09 16:58:10 +02:00
Franck Nijhof
cfda559a90
Adjust feature request links in issue reporting (#6009) 2025-07-09 16:44:35 +02:00
Mike Degatano
806bd9f52c
Apply store reload suggestion automatically on connectivity change (#6004)
* Apply store reload suggestion automatically on connectivity change

* Use sys_bus not coresys.bus

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-09 16:43:51 +02:00
Stefan Agner
953f7d01d7
Improve DNS plug-in restart (#5999)
* Improve DNS plug-in restart

Instead of simply going by the PrimaryConnection change, use the DnsManager
Configuration property. This property is ultimately used to write the
DNS plug-in configuration, so it is really the relevant information
we pass on to the plug-in.

* Check for changes and restart DNS plugin

* Check for changes in plug-in DNS

Cache the last local (NetworkManager-provided) DNS servers. Check against
this DNS server list when deciding whether to restart the DNS plug-in.

* Check connectivity unthrottled in certain situations

* Fix pytest

* Fix pytest

* Improve test coverage for DNS plugins restart functionality

* Apply suggestions from code review

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Debounce local DNS changes and event based connectivity checks

* Remove connection check logic

* Remove unthrottled connectivity check

* Fix delayed call

* Store restart task and cancel in case a restart is running

* Improve DNS configuration change tests

* Remove stale code

* Improve DNS plug-in tests, less mocking

* Cover multiple private functions at once

Improve tests around notify_locals_changed() to cover multiple
functions at once.

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-09 11:35:03 +02:00
Felipe Santos
381e719a0e
Allow to force rebuild of add-ons (#6002) 2025-07-07 21:41:18 +02:00
Ruben van Dijk
296071067d
Fix multiple set-cookie headers with addons ingress (#5996) 2025-07-07 19:27:39 +02:00
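Multiple `Set-Cookie` headers must be forwarded individually, not merged into one header; a hedged sketch of that part of the ingress proxying using aiohttp's multidict API (the surrounding proxy code and the exclusion list are illustrative):

```python
from aiohttp import ClientResponse
from multidict import CIMultiDict

def copy_response_headers(upstream: ClientResponse) -> CIMultiDict:
    """Copy add-on response headers, preserving every Set-Cookie header."""
    headers = CIMultiDict()
    for name, value in upstream.headers.items():
        if name.lower() in ("transfer-encoding", "content-length"):
            continue  # recomputed by aiohttp for the proxied response
        headers.add(name, value)  # .add() keeps duplicates such as Set-Cookie
    return headers
```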
dependabot[bot]
8336537f51
Bump types-docker from 7.1.0.20250523 to 7.1.0.20250705 (#6003)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 10:00:26 +02:00
Stefan Agner
5c90a00263
Force reload of /etc/resolv.conf on WebSession init (#6000) 2025-07-05 12:18:02 +02:00
dependabot[bot]
1f2bf77784
Bump coverage from 7.9.1 to 7.9.2 (#5992)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:54:36 +02:00
dependabot[bot]
9aa4f381b8
Bump ruff from 0.12.1 to 0.12.2 (#5993)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:47:35 +02:00
Mike Degatano
ae036ceffe
Don't backup uninstalled addons (#5988)
* Don't backup uninstalled addons

* Remove hash in backup
2025-07-04 07:05:53 +02:00
Stefan Agner
f0ea0d4a44
Add GitHub Copilot/Claude instruction (#5986)
* Add GitHub Copilot/Claude instruction

This adds an initial instruction file for GitHub Copilot and Claude
(CLAUDE.md symlinked to the same file).

* Add --ignore-missing-imports to mypy, add note to run pre-commit
2025-07-04 07:05:05 +02:00
Mike Degatano
abc44946bb
Refactor addon git repo (#5987)
* Refactor Repository into setup with inheritance

* Remove subclasses of GitRepo
2025-07-03 13:53:52 +02:00
dependabot[bot]
3e20a0937d
Bump cryptography from 45.0.4 to 45.0.5 (#5989)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.4 to 45.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.4...45.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-03 09:52:50 +02:00
Mike Degatano
6cebf52249
Store reset only deletes git cache after clone was successful (#5984)
* Store reset only deletes git cache after clone was successful

* Add test and fix fallback error handling

* Fix when lock is grabbed
2025-07-02 14:34:18 -04:00
Felipe Santos
bc57deb474
Use Docker BuildKit to build addons (#5974)
* Use Docker BuildKit to build addons

* Improve error message as suggested by CodeRabbit

* Fix container.remove() tests missing v=True

* Ignore squash rather than falling back to legacy builder

* Use version rather than tag to avoid confusion in run_command()

* Fix tests differently

* Use PropertyMock like other tests

* Restore position of fix_label fn

* Exempt addon builder image from unsupported checks

* Refactor tests

* Fix tests expecting wrong builder image

* Remove harcoded paths

* Fix tests

* Remove get_addon_host_path() function

* Use docker buildx build rather than docker build

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-07-02 17:33:41 +02:00
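A hedged sketch of the switch described above: the add-on build is invoked through `docker buildx build` (BuildKit), tagging the image with the add-on version. The function, paths, and exact argument set are illustrative and do not reflect Supervisor's actual `run_command()` helper.

```python
import subprocess

def build_addon_image(image: str, version: str, context_dir: str,
                      dockerfile: str = "Dockerfile") -> None:
    """Build an add-on image with BuildKit via `docker buildx build` (illustrative)."""
    cmd = [
        "docker", "buildx", "build",
        "--tag", f"{image}:{version}",   # tag with the add-on version
        "--file", dockerfile,
        "--load",                        # keep the result in the local image store
        context_dir,
    ]
    subprocess.run(cmd, check=True)

# build_addon_image("local/my-addon", "1.2.3", "/data/addons/local/my-addon")
```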
Mike Degatano
38750d74a8
Refactor builtin repositories to enum (#5976) 2025-06-30 13:22:00 -04:00
Felipe Santos
d1c1a2d418
Fix docker.run_command() needing detach but not enforcing it (#5979)
* Fix `docker.run_command()` needing `detach` but not enforcing it

* Fix test
2025-06-30 16:09:19 +02:00
Felipe Santos
cf32f036c0
Fix docker_home_assistant_execute_command not honoring HA version (#5978)
* Fix `docker_home_assistant_execute_command` not honoring HA version

* Change variable name to image_with_tag

* Fix test
2025-06-30 16:08:05 +02:00
Felipe Santos
b8852872fe
Remove anonymous volumes when removing containers (#5977)
* Remove anonymous volumes when removing containers

* Add tests for docker.run_command()
2025-06-30 13:31:41 +02:00
dependabot[bot]
779f47e25d
Bump sentry-sdk from 2.31.0 to 2.32.0 (#5982)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-30 10:16:41 +02:00
dependabot[bot]
be8b36b560
Bump ruff from 0.12.0 to 0.12.1 (#5981)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 09:08:50 +02:00
dependabot[bot]
8378d434d4
Bump sentry-sdk from 2.30.0 to 2.31.0 (#5975)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.30.0 to 2.31.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.30.0...2.31.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 08:57:12 +02:00
Stefan Agner
0b79e09bc0
Add code documentation for Jobs decorator (#5965)
Add basic code documentation to the Jobs decorator.
2025-06-24 15:48:04 +02:00
Stefan Agner
d747a59696
Fix CLI/Observer access token property (#5973)
The token_validation() code in the security middleware
potentially accesses the access token property before the Supervisor
starts the CLI/Observer plugins, which leads to a KeyError when
trying to access the `access_token` property. This change ensures
that no KeyError is raised and None is returned instead.
2025-06-24 12:10:36 +02:00
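The fix pattern is the standard "return None instead of raising KeyError" guard; a generic, hedged sketch (the real property lives on the CLI/Observer plugin classes, and the attribute layout here is illustrative):

```python
class PluginBase:
    """Illustrative plugin with persisted data that may not be populated yet."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}  # token not stored until the plugin starts

    @property
    def access_token(self) -> str | None:
        # Before: self._data["access_token"] raised KeyError when accessed too early.
        # After: return None until the Supervisor has started the plugin.
        return self._data.get("access_token")

assert PluginBase().access_token is None
```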
Mike Degatano
3ee7c082ec
Add mypy to ci and precommit (#5969)
* Add mypy to ci and precommit

* Run precommit mypy in venv

* Fix issues raised in latest version of mypy
2025-06-24 11:48:03 +02:00
dependabot[bot]
3f921e50b3
Bump getsentry/action-release from 3.1.2 to 3.2.0 (#5972)
Bumps [getsentry/action-release](https://github.com/getsentry/action-release) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/getsentry/action-release/releases)
- [Changelog](https://github.com/getsentry/action-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/action-release/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: getsentry/action-release
  dependency-version: 3.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:27 +02:00
dependabot[bot]
0370320f75
Bump sigstore/cosign-installer from 3.9.0 to 3.9.1 (#5971)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.0 to 3.9.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.9.0...v3.9.1)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:19 +02:00
Stefan Agner
1e19e26ef3
Update request feature link (#5968)
Feature requests are now collected using the org-wide GitHub Community.
Update the link accordingly.

While at it, also remove the unused ISSUE_TEMPLATE.md and align the
issue creation title with the one used in Home Assistant Core's
template.
2025-06-23 13:00:55 +02:00
Stefan Agner
e1a18eeba8
Use aiodns explicit close method (#5966) 2025-06-23 10:13:43 +02:00
133 changed files with 4131 additions and 1078 deletions

View File

@ -1,69 +0,0 @@
---
name: Report a bug with the Supervisor on a supported System
about: Report an issue related to the Home Assistant Supervisor.
labels: bug
---
<!-- READ THIS FIRST:
- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/
- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
- Provide as many details as possible. Paste logs, configuration sample and code into the backticks. Do not delete any text from this template!
- If you have a problem with an add-on, make an issue in its repository.
-->
<!--
Important: You can only file a bug report for a supported system! If you run an unsupported installation, this report will be closed without comment.
-->
### Describe the issue
<!-- Provide as many details as possible. -->
### Steps to reproduce
<!-- What do you do to encounter the issue. -->
1. ...
2. ...
3. ...
### Environment details
<!-- You can find these details in the system tab of the supervisor panel, or by using the `ha` CLI. -->
- **Operating System**: xxx
- **Supervisor version**: xxx
- **Home Assistant version**: xxx
### Supervisor logs
<details>
<summary>Supervisor logs</summary>
<!--
- Frontend -> Supervisor -> System
- Or use this command: ha supervisor logs
- Logs are more than just errors, even if you don't think it's important, it is.
-->
```
Paste supervisor logs here
```
</details>
### System Information
<details>
<summary>System Information</summary>
<!--
- Use this command: ha info
-->
```
Paste system info here
```
</details>

View File

@ -1,4 +1,4 @@
-name: Bug Report Form
+name: Report an issue with Home Assistant Supervisor
 description: Report an issue related to the Home Assistant Supervisor.
 body:
   - type: markdown
@ -8,7 +8,7 @@ body:
         If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].
-        [fr]: https://community.home-assistant.io/c/feature-requests
+        [fr]: https://github.com/orgs/home-assistant/discussions
   - type: textarea
     validations:
       required: true

View File

@ -13,7 +13,7 @@ contact_links:
     about: Our documentation has its own issue tracker. Please report issues with the website there.
   - name: Request a feature for the Supervisor
-    url: https://community.home-assistant.io/c/feature-requests
+    url: https://github.com/orgs/home-assistant/discussions
     about: Request an new feature for the Supervisor.
   - name: I have a question or need support

53
.github/ISSUE_TEMPLATE/task.yml vendored Normal file
View File

@ -0,0 +1,53 @@
name: Task
description: For staff only - Create a task
type: Task
body:
- type: markdown
attributes:
value: |
## ⚠️ RESTRICTED ACCESS
**This form is restricted to Open Home Foundation staff and authorized contributors only.**
If you are a community member wanting to contribute, please:
- For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
- For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)
---
### For authorized contributors
Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
- type: textarea
id: description
attributes:
label: Description
description: |
Provide a clear and detailed description of the task that needs to be accomplished.
Be specific about what needs to be done, why it's important, and any constraints or requirements.
placeholder: |
Describe the task, including:
- What needs to be done
- Why this task is needed
- Expected outcome
- Any constraints or requirements
validations:
required: true
- type: textarea
id: additional_context
attributes:
label: Additional context
description: |
Any additional information, links, research, or context that would be helpful.
Include links to related issues, research, prototypes, roadmap opportunities etc.
placeholder: |
- Roadmap opportunity: [link]
- Epic: [link]
- Feature request: [link]
- Technical design documents: [link]
- Prototype/mockup: [link]
- Dependencies: [links]
validations:
required: false

288
.github/copilot-instructions.md vendored Normal file
View File

@ -0,0 +1,288 @@
# GitHub Copilot & Claude Code Instructions
This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.
## Supervisor Capabilities & Features
### Architecture Overview
Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.
**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
connectivity
- **Storage & Backup**: Data persistence and backup management across all containers
**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System
**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.
**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development
**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.
**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System
**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.
**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.
**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
### Backup & Recovery System
**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations
**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues
---
## Supervisor Development
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
### Code Quality Standards
- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization
**Core Structure**:
```
supervisor/
├── __init__.py # Package initialization
├── const.py # Constants and enums
├── coresys.py # Core system management
├── bootstrap.py # System initialization
├── exceptions.py # Custom exception classes
├── api/ # REST API endpoints
├── addons/ # Add-on management
├── backups/ # Backup system
├── docker/ # Docker integration
├── host/ # Host system interface
├── homeassistant/ # Home Assistant Core management
├── dbus/ # D-Bus system integration
├── hardware/ # Hardware detection and management
├── plugins/ # Plugin system
├── resolution/ # Issue detection and resolution
├── security/ # Security management
├── services/ # Service discovery and management
├── store/ # Add-on store management
└── utils/ # Utility functions
```
**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
### Supervisor Architecture Patterns
**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.
```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
"""Manage my functionality."""
def __init__(self, coresys: CoreSys):
"""Initialize manager."""
self.coresys: CoreSys = coresys
self._component: MyComponent = MyComponent(coresys)
@property
def component(self) -> MyComponent:
"""Return component handler."""
return self._component
# Access system components via inherited properties
async def do_something(self):
await self.sys_docker.containers.get("my_container")
self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```
**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface
**Load Pattern**: Many components implement a `load()` method which effectively
initialize the component from external sources (containers, files, D-Bus services).
### API Development
**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
`{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`
**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.
```python
from ..api.utils import api_process, api_validate
@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
"""Create full backup."""
body = await api_validate(SCHEMA_BACKUP_FULL, request)
job = await self.sys_backups.do_backup_full(**body)
return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration
- **Container Management**: Use Supervisor's Docker manager instead of direct
Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
isolation
- **Health Checks**: Implement health monitoring for all managed containers
### D-Bus Integration
- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions
### Async Programming
- **All I/O operations must be async**: File operations, network calls, subprocess
execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
initialization
### Testing
- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
### Error Handling
- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
### Security Considerations
- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
input validation
### Development Commands
```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/
# Linting and formatting
ruff check supervisor/
ruff format supervisor/
# Type checking
mypy --ignore-missing-imports supervisor/
# Pre-commit hooks
pre-commit run --all-files
```
Always run the pre-commit hooks at the end of code editing.
### Common Patterns to Follow
**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.

View File

@ -106,7 +106,7 @@ jobs:
       - name: Build wheels
         if: needs.init.outputs.requirements == 'true'
-        uses: home-assistant/wheels@2025.03.0
+        uses: home-assistant/wheels@2025.07.0
         with:
           abi: cp313
           tag: musllinux_1_2
@ -131,7 +131,7 @@ jobs:
       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.9.0
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"

View File

@ -10,6 +10,7 @@ on:
 env:
   DEFAULT_PYTHON: "3.13"
   PRE_COMMIT_CACHE: ~/.cache/pre-commit
+  MYPY_CACHE_VERSION: 1
 concurrency:
   group: "${{ github.workflow }}-${{ github.ref }}"
@ -286,6 +287,52 @@ jobs:
           . venv/bin/activate
           pylint supervisor tests
mypy:
name: Check mypy
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Generate partial mypy restore key
id: generate-mypy-key
run: |
mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
echo "version=$mypy_version" >> $GITHUB_OUTPUT
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@v4.2.3
with:
path: .mypy_cache
key: >-
${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
restore-keys: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
- name: Register mypy problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/mypy.json"
- name: Run mypy
run: |
. venv/bin/activate
mypy --ignore-missing-imports supervisor
   pytest:
     runs-on: ubuntu-latest
     needs: prepare
@ -299,7 +346,7 @@ jobs:
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.9.0
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
       - name: Restore Python virtual environment

16
.github/workflows/matchers/mypy.json vendored Normal file
View File

@ -0,0 +1,16 @@
{
"problemMatcher": [
{
"owner": "mypy",
"pattern": [
{
"regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
}
]
}
]
}

View File

@ -0,0 +1,58 @@
name: Restrict task creation
# yamllint disable-line rule:truthy
on:
issues:
types: [opened]
jobs:
check-authorization:
runs-on: ubuntu-latest
# Only run if this is a Task issue type (from the issue form)
if: github.event.issue.issue_type == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@v7
with:
script: |
const issueAuthor = context.payload.issue.user.login;
// Check if user is an organization member
try {
await github.rest.orgs.checkMembershipForUser({
org: 'home-assistant',
username: issueAuthor
});
console.log(`✅ ${issueAuthor} is an organization member`);
return; // Authorized
} catch (error) {
console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
}
// Close the issue with a comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
`Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
`If you would like to:\n` +
`- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
`- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
`If you believe you should have access to create Task issues, please contact the maintainers.`
});
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
state: 'closed'
});
// Add a label to indicate this was auto-closed
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['auto-closed']
});

View File

@ -12,7 +12,7 @@ jobs:
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Sentry Release
-        uses: getsentry/action-release@v3.1.2
+        uses: getsentry/action-release@v3.2.0
         env:
           SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
           SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@ -13,3 +13,15 @@ repos:
       - id: check-executables-have-shebangs
         stages: [manual]
       - id: check-json
- repo: local
hooks:
# Run mypy through our wrapper script in order to get the possible
# pyenv and/or virtualenv activated; it may not have been e.g. if
# committing from a GUI tool that was not launched from an activated
# shell.
- id: mypy
name: mypy
entry: script/run-in-env.sh mypy --ignore-missing-imports
language: script
types_or: [python, pyi]
files: ^supervisor/.+\.(py|pyi)$

1
CLAUDE.md Symbolic link
View File

@ -0,0 +1 @@
.github/copilot-instructions.md

View File

@ -1,10 +1,10 @@
 image: ghcr.io/home-assistant/{arch}-hassio-supervisor
 build_from:
-  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.21
-  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.21
-  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.21
-  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.21
-  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.21
+  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22
+  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22
+  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22
+  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22
+  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22
 codenotary:
   signer: notary@home-assistant.io
   base_image: notary@home-assistant.io

View File

@ -1,30 +1,30 @@
 aiodns==3.5.0
-aiohttp==3.12.13
+aiohttp==3.12.15
 atomicwrites-homeassistant==1.4.1
 attrs==25.3.0
-awesomeversion==25.5.0
+awesomeversion==25.8.0
-blockbuster==1.5.24
+blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.2
 colorlog==6.9.0
 cpe==1.3.1
-cryptography==45.0.4
+cryptography==45.0.5
-debugpy==1.8.14
+debugpy==1.8.15
 deepmerge==2.0
 dirhash==0.5.0
 docker==7.1.0
 faust-cchardet==2.1.19
-gitpython==3.1.44
+gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.10.18
+orjson==3.11.1
 pulsectl==24.12.0
 pyudev==0.24.3
 PyYAML==6.0.2
 requests==2.32.4
 securetar==2025.2.1
-sentry-sdk==2.30.0
+sentry-sdk==2.34.1
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.1
+dbus-fast==2.44.3
 zlib-fast==0.2.1

View File

@ -1,5 +1,6 @@
-astroid==3.3.10
+astroid==3.3.11
-coverage==7.9.1
+coverage==7.10.2
+mypy==1.17.1
 pre-commit==4.2.0
 pylint==3.3.7
 pytest-aiohttp==1.1.0
@ -7,6 +8,9 @@ pytest-asyncio==0.25.2
 pytest-cov==6.2.1
 pytest-timeout==2.4.0
 pytest==8.4.1
-ruff==0.12.0
+ruff==0.12.7
 time-machine==2.16.0
+types-docker==7.1.0.20250705
+types-pyyaml==6.0.12.20250516
+types-requests==2.32.4.20250611
 urllib3==2.5.0

30
script/run-in-env.sh Executable file
View File

@ -0,0 +1,30 @@
#!/usr/bin/env sh
set -eu
# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"
# Activate pyenv and virtualenv if present, then run the specified command
# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
PYENV_VERSION=$(head -n 1 .python-version)
export PYENV_VERSION
fi
if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
. "${VIRTUAL_ENV}/bin/activate"
else
# other common virtualenvs
my_path=$(git rev-parse --show-toplevel)
for venv in venv .venv .; do
if [ -f "${my_path}/${venv}/bin/activate" ]; then
. "${my_path}/${venv}/bin/activate"
break
fi
done
fi
exec "$@"

View File

@ -77,7 +77,7 @@ from ..exceptions import (
) )
from ..hardware.data import Device from ..hardware.data import Device
from ..homeassistant.const import WSEvent from ..homeassistant.const import WSEvent
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, UnhealthyReason from ..resolution.const import ContextType, IssueType, UnhealthyReason
from ..resolution.data import Issue from ..resolution.data import Issue
@ -360,7 +360,7 @@ class Addon(AddonModel):
@property @property
def auto_update(self) -> bool: def auto_update(self) -> bool:
"""Return if auto update is enable.""" """Return if auto update is enable."""
return self.persist.get(ATTR_AUTO_UPDATE, super().auto_update) return self.persist.get(ATTR_AUTO_UPDATE, False)
@auto_update.setter @auto_update.setter
def auto_update(self, value: bool) -> None: def auto_update(self, value: bool) -> None:
@ -733,8 +733,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_unload", name="addon_unload",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def unload(self) -> None: async def unload(self) -> None:
"""Unload add-on and remove data.""" """Unload add-on and remove data."""
@ -766,8 +766,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_install", name="addon_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def install(self) -> None: async def install(self) -> None:
"""Install and setup this addon.""" """Install and setup this addon."""
@ -807,8 +807,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_uninstall", name="addon_uninstall",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def uninstall( async def uninstall(
self, *, remove_config: bool, remove_image: bool = True self, *, remove_config: bool, remove_image: bool = True
@ -873,8 +873,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_update", name="addon_update",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def update(self) -> asyncio.Task | None: async def update(self) -> asyncio.Task | None:
"""Update this addon to latest version. """Update this addon to latest version.
@ -923,8 +923,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_rebuild", name="addon_rebuild",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def rebuild(self) -> asyncio.Task | None: async def rebuild(self) -> asyncio.Task | None:
"""Rebuild this addons container and image. """Rebuild this addons container and image.
@ -1068,8 +1068,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_start", name="addon_start",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def start(self) -> asyncio.Task: async def start(self) -> asyncio.Task:
"""Set options and start add-on. """Set options and start add-on.
@ -1117,8 +1117,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_stop", name="addon_stop",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def stop(self) -> None: async def stop(self) -> None:
"""Stop add-on.""" """Stop add-on."""
@ -1131,8 +1131,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_restart", name="addon_restart",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def restart(self) -> asyncio.Task: async def restart(self) -> asyncio.Task:
"""Restart add-on. """Restart add-on.
@ -1166,8 +1166,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_write_stdin", name="addon_write_stdin",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def write_stdin(self, data) -> None: async def write_stdin(self, data) -> None:
"""Write data to add-on stdin.""" """Write data to add-on stdin."""
@ -1200,8 +1200,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_begin_backup", name="addon_begin_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def begin_backup(self) -> bool: async def begin_backup(self) -> bool:
"""Execute pre commands or stop addon if necessary. """Execute pre commands or stop addon if necessary.
@ -1222,8 +1222,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_end_backup", name="addon_end_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def end_backup(self) -> asyncio.Task | None: async def end_backup(self) -> asyncio.Task | None:
"""Execute post commands or restart addon if necessary. """Execute post commands or restart addon if necessary.
@ -1260,8 +1260,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_backup", name="addon_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None: async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
"""Backup state of an add-on. """Backup state of an add-on.
@ -1368,8 +1368,8 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_restore", name="addon_restore",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError, on_condition=AddonsJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None: async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
"""Restore state of an add-on. """Restore state of an add-on.
@ -1521,10 +1521,10 @@ class Addon(AddonModel):
@Job( @Job(
name="addon_restart_after_problem", name="addon_restart_after_problem",
limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=AddonsJobError, on_condition=AddonsJobError,
throttle=JobThrottle.GROUP_RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed addon.""" """Restart unhealthy or failed addon."""

View File

@ -15,6 +15,7 @@ from ..const import (
ATTR_SQUASH, ATTR_SQUASH,
FILE_SUFFIX_CONFIGURATION, FILE_SUFFIX_CONFIGURATION,
META_ADDON, META_ADDON,
SOCKET_DOCKER,
) )
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..docker.interface import MAP_ARCH from ..docker.interface import MAP_ARCH
@ -121,39 +122,64 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
except HassioArchNotFound: except HassioArchNotFound:
return False return False
def get_docker_args(self, version: AwesomeVersion, image: str | None = None): def get_docker_args(
"""Create a dict with Docker build arguments. self, version: AwesomeVersion, image_tag: str
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
Must be run in executor. build_cmd = [
""" "docker",
args: dict[str, Any] = { "buildx",
"path": str(self.addon.path_location), "build",
"tag": f"{image or self.addon.image}:{version!s}", ".",
"dockerfile": str(self.get_dockerfile()), "--tag",
"pull": True, image_tag,
"forcerm": not self.sys_dev, "--file",
"squash": self.squash, str(dockerfile_path),
"platform": MAP_ARCH[self.arch], "--platform",
"labels": { MAP_ARCH[self.arch],
"--pull",
]
labels = {
"io.hass.version": version, "io.hass.version": version,
"io.hass.arch": self.arch, "io.hass.arch": self.arch,
"io.hass.type": META_ADDON, "io.hass.type": META_ADDON,
"io.hass.name": self._fix_label("name"), "io.hass.name": self._fix_label("name"),
"io.hass.description": self._fix_label("description"), "io.hass.description": self._fix_label("description"),
**self.additional_labels, **self.additional_labels,
}, }
"buildargs": {
if self.addon.url:
labels["io.hass.url"] = self.addon.url
for key, value in labels.items():
build_cmd.extend(["--label", f"{key}={value}"])
build_args = {
"BUILD_FROM": self.base_image, "BUILD_FROM": self.base_image,
"BUILD_VERSION": version, "BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default, "BUILD_ARCH": self.sys_arch.default,
**self.additional_args, **self.additional_args,
},
} }
if self.addon.url: for key, value in build_args.items():
args["labels"]["io.hass.url"] = self.addon.url build_cmd.extend(["--build-arg", f"{key}={value}"])
return args # The addon path will be mounted from the host system
addon_extern_path = self.sys_config.local_to_extern_path(
self.addon.path_location
)
return {
"command": build_cmd,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"working_dir": "/addon",
}
def _fix_label(self, label_name: str) -> str: def _fix_label(self, label_name: str) -> str:
"""Remove characters they are not supported.""" """Remove characters they are not supported."""

View File

@ -266,7 +266,7 @@ class AddonManager(CoreSysAttributes):
], ],
on_condition=AddonsJobError, on_condition=AddonsJobError,
) )
async def rebuild(self, slug: str) -> asyncio.Task | None: async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
"""Perform a rebuild of local build add-on. """Perform a rebuild of local build add-on.
Returns a Task that completes when addon has state 'started' (see addon.start) Returns a Task that completes when addon has state 'started' (see addon.start)
@ -289,7 +289,7 @@ class AddonManager(CoreSysAttributes):
raise AddonsError( raise AddonsError(
"Version changed, use Update instead Rebuild", _LOGGER.error "Version changed, use Update instead Rebuild", _LOGGER.error
) )
if not addon.need_build: if not force and not addon.need_build:
raise AddonsNotSupportedError( raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error "Can't rebuild a image based add-on", _LOGGER.error
) )

View File

@ -664,12 +664,16 @@ class AddonModel(JobGroup, ABC):
"""Validate if addon is available for current system.""" """Validate if addon is available for current system."""
return self._validate_availability(self.data, logger=_LOGGER.error) return self._validate_availability(self.data, logger=_LOGGER.error)
def __eq__(self, other): def __eq__(self, other: Any) -> bool:
"""Compaired add-on objects.""" """Compare add-on objects."""
if not isinstance(other, AddonModel): if not isinstance(other, AddonModel):
return False return False
return self.slug == other.slug return self.slug == other.slug
def __hash__(self) -> int:
"""Hash for add-on objects."""
return hash(self.slug)
def _validate_availability( def _validate_availability(
self, config, *, logger: Callable[..., None] | None = None self, config, *, logger: Callable[..., None] | None = None
) -> None: ) -> None:

View File

@ -36,6 +36,7 @@ from ..const import (
ATTR_DNS, ATTR_DNS,
ATTR_DOCKER_API, ATTR_DOCKER_API,
ATTR_DOCUMENTATION, ATTR_DOCUMENTATION,
ATTR_FORCE,
ATTR_FULL_ACCESS, ATTR_FULL_ACCESS,
ATTR_GPIO, ATTR_GPIO,
ATTR_HASSIO_API, ATTR_HASSIO_API,
@ -139,6 +140,8 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
SCHEMA_UNINSTALL = vol.Schema( SCHEMA_UNINSTALL = vol.Schema(
{vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()} {vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
) )
SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
# pylint: enable=no-value-for-parameter # pylint: enable=no-value-for-parameter
@ -461,7 +464,11 @@ class APIAddons(CoreSysAttributes):
async def rebuild(self, request: web.Request) -> None: async def rebuild(self, request: web.Request) -> None:
"""Rebuild local build add-on.""" """Rebuild local build add-on."""
addon = self.get_addon_for_request(request) addon = self.get_addon_for_request(request)
if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)): body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
if start_task := await asyncio.shield(
self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
):
await start_task await start_task
@api_process @api_process
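Editor's note: the new SCHEMA_REBUILD makes force an optional boolean defaulting to False, so existing callers of the rebuild endpoint keep their behavior while new callers can opt into rebuilding image-based add-ons. A quick standalone voluptuous sketch of how that default behaves (not Supervisor code):

import voluptuous as vol

ATTR_FORCE = "force"
SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})

print(SCHEMA_REBUILD({}))               # {'force': False} - default applied when absent
print(SCHEMA_REBUILD({"force": True}))  # {'force': True}
print(SCHEMA_REBUILD({"force": "yes"})) # {'force': True} - Boolean() coerces common strings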

View File

@ -3,11 +3,13 @@
import asyncio import asyncio
from collections.abc import Awaitable from collections.abc import Awaitable
import logging import logging
from typing import Any from typing import Any, cast
from aiohttp import BasicAuth, web from aiohttp import BasicAuth, web
from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
from aiohttp.web import FileField
from aiohttp.web_exceptions import HTTPUnauthorized from aiohttp.web_exceptions import HTTPUnauthorized
from multidict import MultiDictProxy
import voluptuous as vol import voluptuous as vol
from ..addons.addon import Addon from ..addons.addon import Addon
@ -51,7 +53,10 @@ class APIAuth(CoreSysAttributes):
return self.sys_auth.check_login(addon, auth.login, auth.password) return self.sys_auth.check_login(addon, auth.login, auth.password)
def _process_dict( def _process_dict(
self, request: web.Request, addon: Addon, data: dict[str, str] self,
request: web.Request,
addon: Addon,
data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
) -> Awaitable[bool]: ) -> Awaitable[bool]:
"""Process login with dict data. """Process login with dict data.
@ -60,7 +65,15 @@ class APIAuth(CoreSysAttributes):
username = data.get("username") or data.get("user") username = data.get("username") or data.get("user")
password = data.get("password") password = data.get("password")
return self.sys_auth.check_login(addon, username, password) # Test that we did receive strings and not something else, raise if so
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)
)
@api_process @api_process
async def auth(self, request: web.Request) -> bool: async def auth(self, request: web.Request) -> bool:
@ -79,13 +92,18 @@ class APIAuth(CoreSysAttributes):
# Json # Json
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON: if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
data = await request.json(loads=json_loads) data = await request.json(loads=json_loads)
return await self._process_dict(request, addon, data) if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
# URL encoded # URL encoded
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL: if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
data = await request.post() data = await request.post()
return await self._process_dict(request, addon, data) if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
# Advertise Basic authentication by default
raise HTTPUnauthorized(headers=REALM_HEADER) raise HTTPUnauthorized(headers=REALM_HEADER)
@api_process @api_process
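Editor's note: URL-encoded form posts arrive as a MultiDictProxy that can contain FileField or bytes entries, so the handler above duck-types the extracted credentials before passing them on; anything without a usable .encode attribute now results in a 401 instead of an unhandled error. A tiny standalone sketch of the same guard (names are illustrative, not the Supervisor API):

def require_str(value) -> str:
    """Accept only str values; reject bytes, file uploads, or None."""
    try:
        value.encode  # plain strings expose .encode; bytes, FileField and None do not
    except AttributeError:
        raise ValueError("credential must be a string") from None
    return value

print(require_str("secret"))   # ok
try:
    require_str(None)          # e.g. a missing form field
except ValueError as err:
    print(err)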

View File

@ -6,6 +6,8 @@ from typing import Any
from aiohttp import web from aiohttp import web
import voluptuous as vol import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from ..const import ( from ..const import (
ATTR_ENABLE_IPV6, ATTR_ENABLE_IPV6,
ATTR_HOSTNAME, ATTR_HOSTNAME,
@ -32,7 +34,7 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
) )
# pylint: disable=no-value-for-parameter # pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()}) SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})
class APIDocker(CoreSysAttributes): class APIDocker(CoreSysAttributes):
@ -59,8 +61,17 @@ class APIDocker(CoreSysAttributes):
"""Set docker options.""" """Set docker options."""
body = await api_validate(SCHEMA_OPTIONS, request) body = await api_validate(SCHEMA_OPTIONS, request)
if ATTR_ENABLE_IPV6 in body: if (
ATTR_ENABLE_IPV6 in body
and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
):
self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6] self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
_LOGGER.info("Host system reboot required to apply new IPv6 configuration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)
await self.sys_docker.config.save_data() await self.sys_docker.config.save_data()

View File

@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):
def _init_header( def _init_header(
request: web.Request, addon: Addon, session_data: IngressSessionData | None request: web.Request, addon: Addon, session_data: IngressSessionData | None
) -> CIMultiDict | dict[str, str]: ) -> CIMultiDict[str]:
"""Create initial header.""" """Create initial header."""
headers = {} headers = CIMultiDict[str]()
if session_data is not None: if session_data is not None:
headers[HEADER_REMOTE_USER_ID] = session_data.user.id headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@ -337,7 +337,7 @@ def _init_header(
istr(HEADER_REMOTE_USER_DISPLAY_NAME), istr(HEADER_REMOTE_USER_DISPLAY_NAME),
): ):
continue continue
headers[name] = value headers.add(name, value)
# Update X-Forwarded-For # Update X-Forwarded-For
if request.transport: if request.transport:
@ -348,9 +348,9 @@ def _init_header(
return headers return headers
def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]: def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
"""Create response header.""" """Create response header."""
headers = {} headers = CIMultiDict[str]()
for name, value in response.headers.items(): for name, value in response.headers.items():
if name in ( if name in (
@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
hdrs.CONTENT_ENCODING, hdrs.CONTENT_ENCODING,
): ):
continue continue
headers[name] = value headers.add(name, value)
return headers return headers
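Editor's note: switching the proxied headers from a plain dict to CIMultiDict and using .add() matters for headers that may legitimately appear more than once, such as Set-Cookie: a dict silently keeps only the last value, while a multidict preserves every occurrence and is case-insensitive on lookup. A small standalone illustration with the multidict package:

from multidict import CIMultiDict

plain: dict[str, str] = {}
plain["Set-Cookie"] = "a=1"
plain["Set-Cookie"] = "b=2"           # overwrites the first cookie
print(plain)                          # {'Set-Cookie': 'b=2'}

headers = CIMultiDict[str]()
headers.add("Set-Cookie", "a=1")
headers.add("Set-Cookie", "b=2")      # both values kept
print(headers.getall("set-cookie"))   # ['a=1', 'b=2'] - case-insensitive lookup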

View File

@ -63,6 +63,8 @@ from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from .utils import password_to_key from .utils import password_to_key
from .validate import SCHEMA_BACKUP from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
@ -260,41 +262,35 @@ class Backup(JobGroup):
def __eq__(self, other: Any) -> bool: def __eq__(self, other: Any) -> bool:
"""Return true if backups have same metadata.""" """Return true if backups have same metadata."""
if not isinstance(other, Backup): return isinstance(other, Backup) and self.slug == other.slug
return False
# Compare all fields except ones about protection. Current encryption status does not affect equality def __hash__(self) -> int:
keys = self._data.keys() | other._data.keys() """Return hash of backup."""
for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}: return hash(self.slug)
if (
k not in self._data
or k not in other._data
or self._data[k] != other._data[k]
):
_LOGGER.info(
"Backup %s and %s not equal because %s field has different value: %s and %s",
self.slug,
other.slug,
k,
self._data.get(k),
other._data.get(k),
)
return False
return True
def consolidate(self, backup: Self) -> None: def consolidate(self, backup: Self) -> None:
"""Consolidate two backups with same slug in different locations.""" """Consolidate two backups with same slug in different locations."""
if self.slug != backup.slug: if self != backup:
raise ValueError( raise ValueError(
f"Backup {self.slug} and {backup.slug} are not the same backup" f"Backup {self.slug} and {backup.slug} are not the same backup"
) )
if self != backup:
# Compare all fields except ones about protection. Current encryption status does not affect equality
other_data = backup._data # pylint: disable=protected-access
keys = self._data.keys() | other_data.keys()
for k in keys - IGNORED_COMPARISON_FIELDS:
if (
k not in self._data
or k not in other_data
or self._data[k] != other_data[k]
):
raise BackupInvalidError( raise BackupInvalidError(
f"Backup in {backup.location} and {self.location} both have slug {self.slug} but are not the same!" f"Cannot consolidate backups in {backup.location} and {self.location} with slug {self.slug} "
f"because field {k} has different values: {self._data.get(k)} and {other_data.get(k)}!",
_LOGGER.error,
) )
# In case of conflict we always ignore the ones from the first one. But log them to let the user know # In case of conflict we always ignore the ones from the first one. But log them to let the user know
if conflict := { if conflict := {
loc: val.path loc: val.path
for loc, val in self.all_locations.items() for loc, val in self.all_locations.items()
@ -577,13 +573,21 @@ class Backup(JobGroup):
@Job(name="backup_addon_save", cleanup=False) @Job(name="backup_addon_save", cleanup=False)
async def _addon_save(self, addon: Addon) -> asyncio.Task | None: async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
"""Store an add-on into backup.""" """Store an add-on into backup."""
self.sys_jobs.current.reference = addon.slug self.sys_jobs.current.reference = slug = addon.slug
if not self._outer_secure_tarfile: if not self._outer_secure_tarfile:
raise RuntimeError( raise RuntimeError(
"Cannot backup components without initializing backup tar" "Cannot backup components without initializing backup tar"
) )
tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}" # Ensure it is still installed and get current data before proceeding
if not (curr_addon := self.sys_addons.get_local_only(slug)):
_LOGGER.warning(
"Skipping backup of add-on %s because it has been uninstalled",
slug,
)
return None
tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar( addon_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}", f"./{tar_name}",
@ -592,16 +596,16 @@ class Backup(JobGroup):
) )
# Take backup # Take backup
try: try:
start_task = await addon.backup(addon_file) start_task = await curr_addon.backup(addon_file)
except AddonsError as err: except AddonsError as err:
raise BackupError(str(err)) from err raise BackupError(str(err)) from err
# Store to config # Store to config
self._data[ATTR_ADDONS].append( self._data[ATTR_ADDONS].append(
{ {
ATTR_SLUG: addon.slug, ATTR_SLUG: slug,
ATTR_NAME: addon.name, ATTR_NAME: curr_addon.name,
ATTR_VERSION: addon.version, ATTR_VERSION: curr_addon.version,
# Bug - addon_file.size used to give us this information # Bug - addon_file.size used to give us this information
# It always returns 0 in current securetar. Skipping until fixed # It always returns 0 in current securetar. Skipping until fixed
ATTR_SIZE: 0, ATTR_SIZE: 0,
@ -921,5 +925,5 @@ class Backup(JobGroup):
Return a coroutine. Return a coroutine.
""" """
return self.sys_store.update_repositories( return self.sys_store.update_repositories(
self.repositories, add_with_errors=True, replace=replace set(self.repositories), issue_on_error=True, replace=replace
) )
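Editor's note: the __eq__/__hash__ change earlier in this file narrows backup equality to the slug and adds a matching __hash__; in Python, defining __eq__ on a class implicitly sets __hash__ to None unless you define it too, which would make the objects unusable in sets and as dict keys. A standalone illustration of that rule (not Supervisor code):

class WithoutHash:
    def __init__(self, slug: str) -> None:
        self.slug = slug

    def __eq__(self, other) -> bool:
        return isinstance(other, WithoutHash) and self.slug == other.slug
    # __hash__ is implicitly set to None because __eq__ is defined

class WithHash(WithoutHash):
    def __hash__(self) -> int:
        return hash(self.slug)

try:
    {WithoutHash("abc123")}            # TypeError: unhashable type
except TypeError as err:
    print(err)

print({WithHash("abc123"), WithHash("abc123")})  # one element: equal objects hash alike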

View File

@ -27,7 +27,7 @@ from ..exceptions import (
BackupJobError, BackupJobError,
BackupMountDownError, BackupMountDownError,
) )
from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobCondition, JobExecutionLimit from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobConcurrency, JobCondition
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup from ..jobs.job_group import JobGroup
from ..mounts.mount import Mount from ..mounts.mount import Mount
@ -583,9 +583,9 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_full_backup", name="backup_manager_full_backup",
conditions=[JobCondition.RUNNING], conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def do_backup_full( async def do_backup_full(
self, self,
@ -630,9 +630,9 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_partial_backup", name="backup_manager_partial_backup",
conditions=[JobCondition.RUNNING], conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def do_backup_partial( async def do_backup_partial(
self, self,
@ -810,9 +810,9 @@ class BackupManager(FileConfiguration, JobGroup):
JobCondition.INTERNET_SYSTEM, JobCondition.INTERNET_SYSTEM,
JobCondition.RUNNING, JobCondition.RUNNING,
], ],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def do_restore_full( async def do_restore_full(
self, self,
@ -869,9 +869,9 @@ class BackupManager(FileConfiguration, JobGroup):
JobCondition.INTERNET_SYSTEM, JobCondition.INTERNET_SYSTEM,
JobCondition.RUNNING, JobCondition.RUNNING,
], ],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def do_restore_partial( async def do_restore_partial(
self, self,
@ -930,8 +930,8 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_freeze_all", name="backup_manager_freeze_all",
conditions=[JobCondition.RUNNING], conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def freeze_all(self, timeout: float = DEFAULT_FREEZE_TIMEOUT) -> None: async def freeze_all(self, timeout: float = DEFAULT_FREEZE_TIMEOUT) -> None:
"""Freeze system to prepare for an external backup such as an image snapshot.""" """Freeze system to prepare for an external backup such as an image snapshot."""
@ -999,9 +999,9 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_signal_thaw", name="backup_manager_signal_thaw",
conditions=[JobCondition.FROZEN], conditions=[JobCondition.FROZEN],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
internal=True, internal=True,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def thaw_all(self) -> None: async def thaw_all(self) -> None:
"""Signal thaw task to begin unfreezing the system.""" """Signal thaw task to begin unfreezing the system."""

View File

@ -188,6 +188,7 @@ ATTR_FEATURES = "features"
ATTR_FILENAME = "filename" ATTR_FILENAME = "filename"
ATTR_FLAGS = "flags" ATTR_FLAGS = "flags"
ATTR_FOLDERS = "folders" ATTR_FOLDERS = "folders"
ATTR_FORCE = "force"
ATTR_FORCE_SECURITY = "force_security" ATTR_FORCE_SECURITY = "force_security"
ATTR_FREQUENCY = "frequency" ATTR_FREQUENCY = "frequency"
ATTR_FULL_ACCESS = "full_access" ATTR_FULL_ACCESS = "full_access"

View File

@ -124,7 +124,10 @@ class CoreSys:
resolver: aiohttp.abc.AbstractResolver resolver: aiohttp.abc.AbstractResolver
try: try:
resolver = aiohttp.AsyncResolver(loop=self.loop) # Use "unused" kwargs to force dedicated resolver instance. Otherwise
# aiodns won't reload /etc/resolv.conf which we need to make our connection
# check work in all cases.
resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
# pylint: disable=protected-access # pylint: disable=protected-access
_LOGGER.debug( _LOGGER.debug(
"Initializing ClientSession with AsyncResolver. Using nameservers %s", "Initializing ClientSession with AsyncResolver. Using nameservers %s",
@ -587,7 +590,7 @@ class CoreSys:
return self._machine_id return self._machine_id
@machine_id.setter @machine_id.setter
def machine_id(self, value: str) -> None: def machine_id(self, value: str | None) -> None:
"""Set a machine-id type string.""" """Set a machine-id type string."""
if self._machine_id: if self._machine_id:
raise RuntimeError("Machine-ID type already set!") raise RuntimeError("Machine-ID type already set!")

View File

@ -32,6 +32,7 @@ DBUS_IFACE_HOSTNAME = "org.freedesktop.hostname1"
DBUS_IFACE_IP4CONFIG = "org.freedesktop.NetworkManager.IP4Config" DBUS_IFACE_IP4CONFIG = "org.freedesktop.NetworkManager.IP4Config"
DBUS_IFACE_IP6CONFIG = "org.freedesktop.NetworkManager.IP6Config" DBUS_IFACE_IP6CONFIG = "org.freedesktop.NetworkManager.IP6Config"
DBUS_IFACE_NM = "org.freedesktop.NetworkManager" DBUS_IFACE_NM = "org.freedesktop.NetworkManager"
DBUS_IFACE_NVME_CONTROLLER = "org.freedesktop.UDisks2.NVMe.Controller"
DBUS_IFACE_PARTITION = "org.freedesktop.UDisks2.Partition" DBUS_IFACE_PARTITION = "org.freedesktop.UDisks2.Partition"
DBUS_IFACE_PARTITION_TABLE = "org.freedesktop.UDisks2.PartitionTable" DBUS_IFACE_PARTITION_TABLE = "org.freedesktop.UDisks2.PartitionTable"
DBUS_IFACE_RAUC_INSTALLER = "de.pengutronix.rauc.Installer" DBUS_IFACE_RAUC_INSTALLER = "de.pengutronix.rauc.Installer"
@ -87,6 +88,7 @@ DBUS_ATTR_CONNECTIVITY = "Connectivity"
DBUS_ATTR_CURRENT_DEVICE = "CurrentDevice" DBUS_ATTR_CURRENT_DEVICE = "CurrentDevice"
DBUS_ATTR_CURRENT_DNS_SERVER = "CurrentDNSServer" DBUS_ATTR_CURRENT_DNS_SERVER = "CurrentDNSServer"
DBUS_ATTR_CURRENT_DNS_SERVER_EX = "CurrentDNSServerEx" DBUS_ATTR_CURRENT_DNS_SERVER_EX = "CurrentDNSServerEx"
DBUS_ATTR_CONTROLLER_ID = "ControllerID"
DBUS_ATTR_DEFAULT = "Default" DBUS_ATTR_DEFAULT = "Default"
DBUS_ATTR_DEPLOYMENT = "Deployment" DBUS_ATTR_DEPLOYMENT = "Deployment"
DBUS_ATTR_DESCRIPTION = "Description" DBUS_ATTR_DESCRIPTION = "Description"
@ -111,6 +113,7 @@ DBUS_ATTR_DRIVER = "Driver"
DBUS_ATTR_EJECTABLE = "Ejectable" DBUS_ATTR_EJECTABLE = "Ejectable"
DBUS_ATTR_FALLBACK_DNS = "FallbackDNS" DBUS_ATTR_FALLBACK_DNS = "FallbackDNS"
DBUS_ATTR_FALLBACK_DNS_EX = "FallbackDNSEx" DBUS_ATTR_FALLBACK_DNS_EX = "FallbackDNSEx"
DBUS_ATTR_FGUID = "FGUID"
DBUS_ATTR_FINISH_TIMESTAMP = "FinishTimestamp" DBUS_ATTR_FINISH_TIMESTAMP = "FinishTimestamp"
DBUS_ATTR_FIRMWARE_TIMESTAMP_MONOTONIC = "FirmwareTimestampMonotonic" DBUS_ATTR_FIRMWARE_TIMESTAMP_MONOTONIC = "FirmwareTimestampMonotonic"
DBUS_ATTR_FREQUENCY = "Frequency" DBUS_ATTR_FREQUENCY = "Frequency"
@ -147,6 +150,7 @@ DBUS_ATTR_NAMESERVERS = "Nameservers"
DBUS_ATTR_NTP = "NTP" DBUS_ATTR_NTP = "NTP"
DBUS_ATTR_NTPSYNCHRONIZED = "NTPSynchronized" DBUS_ATTR_NTPSYNCHRONIZED = "NTPSynchronized"
DBUS_ATTR_NUMBER = "Number" DBUS_ATTR_NUMBER = "Number"
DBUS_ATTR_NVME_REVISION = "NVMeRevision"
DBUS_ATTR_OFFSET = "Offset" DBUS_ATTR_OFFSET = "Offset"
DBUS_ATTR_OPERATING_SYSTEM_PRETTY_NAME = "OperatingSystemPrettyName" DBUS_ATTR_OPERATING_SYSTEM_PRETTY_NAME = "OperatingSystemPrettyName"
DBUS_ATTR_OPERATION = "Operation" DBUS_ATTR_OPERATION = "Operation"
@ -161,15 +165,24 @@ DBUS_ATTR_REMOVABLE = "Removable"
DBUS_ATTR_RESOLV_CONF_MODE = "ResolvConfMode" DBUS_ATTR_RESOLV_CONF_MODE = "ResolvConfMode"
DBUS_ATTR_REVISION = "Revision" DBUS_ATTR_REVISION = "Revision"
DBUS_ATTR_RCMANAGER = "RcManager" DBUS_ATTR_RCMANAGER = "RcManager"
DBUS_ATTR_SANITIZE_PERCENT_REMAINING = "SanitizePercentRemaining"
DBUS_ATTR_SANITIZE_STATUS = "SanitizeStatus"
DBUS_ATTR_SEAT = "Seat" DBUS_ATTR_SEAT = "Seat"
DBUS_ATTR_SERIAL = "Serial" DBUS_ATTR_SERIAL = "Serial"
DBUS_ATTR_SIZE = "Size" DBUS_ATTR_SIZE = "Size"
DBUS_ATTR_SMART_CRITICAL_WARNING = "SmartCriticalWarning"
DBUS_ATTR_SMART_POWER_ON_HOURS = "SmartPowerOnHours"
DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING = "SmartSelftestPercentRemaining"
DBUS_ATTR_SMART_SELFTEST_STATUS = "SmartSelftestStatus"
DBUS_ATTR_SMART_TEMPERATURE = "SmartTemperature"
DBUS_ATTR_SMART_UPDATED = "SmartUpdated"
DBUS_ATTR_SSID = "Ssid" DBUS_ATTR_SSID = "Ssid"
DBUS_ATTR_STATE = "State" DBUS_ATTR_STATE = "State"
DBUS_ATTR_STATE_FLAGS = "StateFlags" DBUS_ATTR_STATE_FLAGS = "StateFlags"
DBUS_ATTR_STATIC_HOSTNAME = "StaticHostname" DBUS_ATTR_STATIC_HOSTNAME = "StaticHostname"
DBUS_ATTR_STATIC_OPERATING_SYSTEM_CPE_NAME = "OperatingSystemCPEName" DBUS_ATTR_STATIC_OPERATING_SYSTEM_CPE_NAME = "OperatingSystemCPEName"
DBUS_ATTR_STRENGTH = "Strength" DBUS_ATTR_STRENGTH = "Strength"
DBUS_ATTR_SUBSYSTEM_NQN = "SubsystemNQN"
DBUS_ATTR_SUPPORTED_FILESYSTEMS = "SupportedFilesystems" DBUS_ATTR_SUPPORTED_FILESYSTEMS = "SupportedFilesystems"
DBUS_ATTR_SYMLINKS = "Symlinks" DBUS_ATTR_SYMLINKS = "Symlinks"
DBUS_ATTR_SWAP_SIZE = "SwapSize" DBUS_ATTR_SWAP_SIZE = "SwapSize"
@ -180,6 +193,7 @@ DBUS_ATTR_TIMEUSEC = "TimeUSec"
DBUS_ATTR_TIMEZONE = "Timezone" DBUS_ATTR_TIMEZONE = "Timezone"
DBUS_ATTR_TRANSACTION_STATISTICS = "TransactionStatistics" DBUS_ATTR_TRANSACTION_STATISTICS = "TransactionStatistics"
DBUS_ATTR_TYPE = "Type" DBUS_ATTR_TYPE = "Type"
DBUS_ATTR_UNALLOCATED_CAPACITY = "UnallocatedCapacity"
DBUS_ATTR_USER_LED = "UserLED" DBUS_ATTR_USER_LED = "UserLED"
DBUS_ATTR_USERSPACE_TIMESTAMP_MONOTONIC = "UserspaceTimestampMonotonic" DBUS_ATTR_USERSPACE_TIMESTAMP_MONOTONIC = "UserspaceTimestampMonotonic"
DBUS_ATTR_UUID_UPPERCASE = "UUID" DBUS_ATTR_UUID_UPPERCASE = "UUID"

View File

@ -259,7 +259,7 @@ class NetworkManager(DBusInterfaceProxy):
else: else:
interface.primary = False interface.primary = False
interfaces[interface.name] = interface interfaces[interface.interface_name] = interface
interfaces[interface.hw_address] = interface interfaces[interface.hw_address] = interface
# Disconnect removed devices # Disconnect removed devices

View File

@ -49,7 +49,7 @@ class NetworkInterface(DBusInterfaceProxy):
@property @property
@dbus_property @dbus_property
def name(self) -> str: def interface_name(self) -> str:
"""Return interface name.""" """Return interface name."""
return self.properties[DBUS_ATTR_DEVICE_INTERFACE] return self.properties[DBUS_ATTR_DEVICE_INTERFACE]

View File

@ -2,6 +2,7 @@
import asyncio import asyncio
import logging import logging
from pathlib import Path
from typing import Any from typing import Any
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
@ -132,7 +133,10 @@ class UDisks2Manager(DBusInterfaceProxy):
for drive in drives for drive in drives
} }
# Update existing drives # For existing drives, need to check their type and call update
await asyncio.gather(
*[self._drives[path].check_type() for path in unchanged_drives]
)
await asyncio.gather( await asyncio.gather(
*[self._drives[path].update() for path in unchanged_drives] *[self._drives[path].update() for path in unchanged_drives]
) )
@ -160,20 +164,33 @@ class UDisks2Manager(DBusInterfaceProxy):
return list(self._drives.values()) return list(self._drives.values())
@dbus_connected @dbus_connected
def get_drive(self, drive_path: str) -> UDisks2Drive: def get_drive(self, object_path: str) -> UDisks2Drive:
"""Get additional info on drive from object path.""" """Get additional info on drive from object path."""
if drive_path not in self._drives: if object_path not in self._drives:
raise DBusObjectError(f"Drive {drive_path} not found") raise DBusObjectError(f"Drive {object_path} not found")
return self._drives[drive_path] return self._drives[object_path]
@dbus_connected @dbus_connected
def get_block_device(self, device_path: str) -> UDisks2Block: def get_block_device(self, object_path: str) -> UDisks2Block:
"""Get additional info on block device from object path.""" """Get additional info on block device from object path."""
if device_path not in self._block_devices: if object_path not in self._block_devices:
raise DBusObjectError(f"Block device {device_path} not found") raise DBusObjectError(f"Block device {object_path} not found")
return self._block_devices[device_path] return self._block_devices[object_path]
@dbus_connected
def get_block_device_by_path(self, device_path: Path) -> UDisks2Block:
"""Get additional info on block device from device path.
Uses cache only. Use `resolve_device` to force a call for fresh data.
"""
for device in self._block_devices.values():
if device.device == device_path:
return device
raise DBusObjectError(
f"Block device not found with device path {device_path.as_posix()}"
)
@dbus_connected @dbus_connected
async def resolve_device(self, devspec: DeviceSpecification) -> list[UDisks2Block]: async def resolve_device(self, devspec: DeviceSpecification) -> list[UDisks2Block]:

View File

@ -28,6 +28,8 @@ class DeviceSpecificationDataType(TypedDict, total=False):
path: str path: str
label: str label: str
uuid: str uuid: str
partuuid: str
partlabel: str
@dataclass(slots=True) @dataclass(slots=True)
@ -40,6 +42,8 @@ class DeviceSpecification:
path: Path | None = None path: Path | None = None
label: str | None = None label: str | None = None
uuid: str | None = None uuid: str | None = None
partuuid: str | None = None
partlabel: str | None = None
@staticmethod @staticmethod
def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification": def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
@ -48,6 +52,8 @@ class DeviceSpecification:
path=Path(data["path"]) if "path" in data else None, path=Path(data["path"]) if "path" in data else None,
label=data.get("label"), label=data.get("label"),
uuid=data.get("uuid"), uuid=data.get("uuid"),
partuuid=data.get("partuuid"),
partlabel=data.get("partlabel"),
) )
def to_dict(self) -> dict[str, Variant]: def to_dict(self) -> dict[str, Variant]:
@ -56,6 +62,8 @@ class DeviceSpecification:
"path": Variant("s", self.path.as_posix()) if self.path else None, "path": Variant("s", self.path.as_posix()) if self.path else None,
"label": _optional_variant("s", self.label), "label": _optional_variant("s", self.label),
"uuid": _optional_variant("s", self.uuid), "uuid": _optional_variant("s", self.uuid),
"partuuid": _optional_variant("s", self.partuuid),
"partlabel": _optional_variant("s", self.partlabel),
} }
return {k: v for k, v in data.items() if v} return {k: v for k, v in data.items() if v}
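Editor's note: the two new fields follow the same pattern as path, label and uuid: each set field is wrapped in a D-Bus string Variant and unset fields are dropped before the dict is handed to the resolve_device call shown later. A standalone sketch of that filtering; "hassos-data" is just an example partition label:

from dbus_fast import Variant

def spec_to_variants(spec: dict[str, str | None]) -> dict[str, Variant]:
    """Drop unset fields and wrap the rest as D-Bus string variants."""
    return {key: Variant("s", value) for key, value in spec.items() if value}

print(spec_to_variants({"partlabel": "hassos-data", "partuuid": None}))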

View File

@ -1,6 +1,7 @@
"""Interface to UDisks2 Drive over D-Bus.""" """Interface to UDisks2 Drive over D-Bus."""
from datetime import UTC, datetime from datetime import UTC, datetime
from typing import Any
from dbus_fast.aio import MessageBus from dbus_fast.aio import MessageBus
@ -18,11 +19,13 @@ from ..const import (
DBUS_ATTR_VENDOR, DBUS_ATTR_VENDOR,
DBUS_ATTR_WWN, DBUS_ATTR_WWN,
DBUS_IFACE_DRIVE, DBUS_IFACE_DRIVE,
DBUS_IFACE_NVME_CONTROLLER,
DBUS_NAME_UDISKS2, DBUS_NAME_UDISKS2,
) )
from ..interface import DBusInterfaceProxy, dbus_property from ..interface import DBusInterfaceProxy, dbus_property
from ..utils import dbus_connected from ..utils import dbus_connected
from .const import UDISKS2_DEFAULT_OPTIONS from .const import UDISKS2_DEFAULT_OPTIONS
from .nvme_controller import UDisks2NVMeController
class UDisks2Drive(DBusInterfaceProxy): class UDisks2Drive(DBusInterfaceProxy):
@ -35,11 +38,18 @@ class UDisks2Drive(DBusInterfaceProxy):
bus_name: str = DBUS_NAME_UDISKS2 bus_name: str = DBUS_NAME_UDISKS2
properties_interface: str = DBUS_IFACE_DRIVE properties_interface: str = DBUS_IFACE_DRIVE
_nvme_controller: UDisks2NVMeController | None = None
def __init__(self, object_path: str) -> None: def __init__(self, object_path: str) -> None:
"""Initialize object.""" """Initialize object."""
self._object_path = object_path self._object_path = object_path
super().__init__() super().__init__()
async def connect(self, bus: MessageBus) -> None:
"""Connect to bus."""
await super().connect(bus)
await self._reload_interfaces()
@staticmethod @staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2Drive": async def new(object_path: str, bus: MessageBus) -> "UDisks2Drive":
"""Create and connect object.""" """Create and connect object."""
@ -52,6 +62,11 @@ class UDisks2Drive(DBusInterfaceProxy):
"""Object path for dbus object.""" """Object path for dbus object."""
return self._object_path return self._object_path
@property
def nvme_controller(self) -> UDisks2NVMeController | None:
"""NVMe controller interface if drive is one."""
return self._nvme_controller
@property @property
@dbus_property @dbus_property
def vendor(self) -> str: def vendor(self) -> str:
@ -130,3 +145,40 @@ class UDisks2Drive(DBusInterfaceProxy):
async def eject(self) -> None: async def eject(self) -> None:
"""Eject media from drive.""" """Eject media from drive."""
await self.connected_dbus.Drive.call("eject", UDISKS2_DEFAULT_OPTIONS) await self.connected_dbus.Drive.call("eject", UDISKS2_DEFAULT_OPTIONS)
@dbus_connected
async def update(self, changed: dict[str, Any] | None = None) -> None:
"""Update properties via D-Bus."""
await super().update(changed)
if not changed and self.nvme_controller:
await self.nvme_controller.update()
@dbus_connected
async def check_type(self) -> None:
"""Check if type of drive has changed and adjust interfaces if so."""
introspection = await self.connected_dbus.introspect()
interfaces = {intr.name for intr in introspection.interfaces}
# If interfaces changed, update the proxy from introspection and reload interfaces
if interfaces != set(self.connected_dbus.proxies.keys()):
await self.connected_dbus.init_proxy(introspection=introspection)
await self._reload_interfaces()
@dbus_connected
async def _reload_interfaces(self) -> None:
"""Reload interfaces from introspection as necessary."""
# Check if drive is an nvme controller
if (
not self.nvme_controller
and DBUS_IFACE_NVME_CONTROLLER in self.connected_dbus.proxies
):
self._nvme_controller = UDisks2NVMeController(self.object_path)
await self._nvme_controller.initialize(self.connected_dbus)
elif (
self.nvme_controller
and DBUS_IFACE_NVME_CONTROLLER not in self.connected_dbus.proxies
):
self.nvme_controller.stop_sync_property_changes()
self._nvme_controller = None

View File

@ -0,0 +1,200 @@
"""Interface to UDisks2 NVME Controller over D-Bus."""
from dataclasses import dataclass
from datetime import UTC, datetime
from typing import Any, cast
from dbus_fast.aio import MessageBus
from ..const import (
DBUS_ATTR_CONTROLLER_ID,
DBUS_ATTR_FGUID,
DBUS_ATTR_NVME_REVISION,
DBUS_ATTR_SANITIZE_PERCENT_REMAINING,
DBUS_ATTR_SANITIZE_STATUS,
DBUS_ATTR_SMART_CRITICAL_WARNING,
DBUS_ATTR_SMART_POWER_ON_HOURS,
DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING,
DBUS_ATTR_SMART_SELFTEST_STATUS,
DBUS_ATTR_SMART_TEMPERATURE,
DBUS_ATTR_SMART_UPDATED,
DBUS_ATTR_STATE,
DBUS_ATTR_SUBSYSTEM_NQN,
DBUS_ATTR_UNALLOCATED_CAPACITY,
DBUS_IFACE_NVME_CONTROLLER,
DBUS_NAME_UDISKS2,
)
from ..interface import DBusInterfaceProxy, dbus_property
from ..utils import dbus_connected
from .const import UDISKS2_DEFAULT_OPTIONS
@dataclass(frozen=True, slots=True)
class SmartStatus:
"""Smart status information for NVMe devices.
https://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.NVMe.Controller.html#gdbus-method-org-freedesktop-UDisks2-NVMe-Controller.SmartGetAttributes
"""
available_spare: int
spare_threshold: int
percent_used: int
total_data_read: int
total_data_written: int
controller_busy_minutes: int
power_cycles: int
unsafe_shutdowns: int
media_errors: int
number_error_log_entries: int
temperature_sensors: list[int]
warning_composite_temperature: int
critical_composite_temperature: int
warning_temperature_minutes: int
critical_temperature_minutes: int
@classmethod
def from_smart_get_attributes_resp(cls, resp: dict[str, Any]):
"""Convert SmartGetAttributes response dictionary to instance."""
return cls(
available_spare=resp["avail_spare"],
spare_threshold=resp["spare_thresh"],
percent_used=resp["percent_used"],
total_data_read=resp["total_data_read"],
total_data_written=resp["total_data_written"],
controller_busy_minutes=resp["ctrl_busy_time"],
power_cycles=resp["power_cycles"],
unsafe_shutdowns=resp["unsafe_shutdowns"],
media_errors=resp["media_errors"],
number_error_log_entries=resp["num_err_log_entries"],
temperature_sensors=resp["temp_sensors"],
warning_composite_temperature=resp["wctemp"],
critical_composite_temperature=resp["cctemp"],
warning_temperature_minutes=resp["warning_temp_time"],
critical_temperature_minutes=resp["critical_temp_time"],
)
class UDisks2NVMeController(DBusInterfaceProxy):
"""Handle D-Bus interface for NVMe Controller object.
https://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.NVMe.Controller.html
"""
name: str = DBUS_IFACE_NVME_CONTROLLER
bus_name: str = DBUS_NAME_UDISKS2
properties_interface: str = DBUS_IFACE_NVME_CONTROLLER
def __init__(self, object_path: str) -> None:
"""Initialize object."""
self._object_path = object_path
super().__init__()
@staticmethod
async def new(object_path: str, bus: MessageBus) -> "UDisks2NVMeController":
"""Create and connect object."""
obj = UDisks2NVMeController(object_path)
await obj.connect(bus)
return obj
@property
def object_path(self) -> str:
"""Object path for dbus object."""
return self._object_path
@property
@dbus_property
def state(self) -> str:
"""Return NVMe controller state."""
return self.properties[DBUS_ATTR_STATE]
@property
@dbus_property
def controller_id(self) -> int:
"""Return controller ID."""
return self.properties[DBUS_ATTR_CONTROLLER_ID]
@property
@dbus_property
def subsystem_nqn(self) -> str:
"""Return NVM Subsystem NVMe Qualified Name."""
return cast(bytes, self.properties[DBUS_ATTR_SUBSYSTEM_NQN]).decode("utf-8")
@property
@dbus_property
def fguid(self) -> str:
"""Return FRU GUID."""
return self.properties[DBUS_ATTR_FGUID]
@property
@dbus_property
def nvme_revision(self) -> str:
"""Return NVMe version information."""
return self.properties[DBUS_ATTR_NVME_REVISION]
@property
@dbus_property
def unallocated_capacity(self) -> int:
"""Return unallocated capacity."""
return self.properties[DBUS_ATTR_UNALLOCATED_CAPACITY]
@property
@dbus_property
def smart_updated(self) -> datetime | None:
"""Return last time smart information was updated (or None if it hasn't been).
If this is None, other smart properties are not meaningful.
"""
if not (ts := self.properties[DBUS_ATTR_SMART_UPDATED]):
return None
return datetime.fromtimestamp(ts, UTC)
@property
@dbus_property
def smart_critical_warning(self) -> list[str]:
"""Return critical warnings issued for current state of controller."""
return self.properties[DBUS_ATTR_SMART_CRITICAL_WARNING]
@property
@dbus_property
def smart_power_on_hours(self) -> int:
"""Return hours the disk has been powered on."""
return self.properties[DBUS_ATTR_SMART_POWER_ON_HOURS]
@property
@dbus_property
def smart_temperature(self) -> int:
"""Return current composite temperature of controller in Kelvin."""
return self.properties[DBUS_ATTR_SMART_TEMPERATURE]
@property
@dbus_property
def smart_selftest_status(self) -> str:
"""Return status of last sel-test."""
return self.properties[DBUS_ATTR_SMART_SELFTEST_STATUS]
@property
@dbus_property
def smart_selftest_percent_remaining(self) -> int:
"""Return percent remaining of self-test."""
return self.properties[DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING]
@property
@dbus_property
def sanitize_status(self) -> str:
"""Return status of last sanitize operation."""
return self.properties[DBUS_ATTR_SANITIZE_STATUS]
@property
@dbus_property
def sanitize_percent_remaining(self) -> int:
"""Return percent remaining of sanitize operation."""
return self.properties[DBUS_ATTR_SANITIZE_PERCENT_REMAINING]
@dbus_connected
async def smart_get_attributes(self) -> SmartStatus:
"""Return smart/health information of controller."""
return SmartStatus.from_smart_get_attributes_resp(
await self.connected_dbus.NVMe.Controller.call(
"smart_get_attributes", UDISKS2_DEFAULT_OPTIONS
)
)
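Editor's note: as the smart_temperature docstring above notes, the NVMe composite temperature is reported in Kelvin, so consumers that want Celsius have to convert it themselves; the SmartGetAttributes response fields are likewise raw integers. A tiny standalone helper for the conversion, illustrative only:

def kelvin_to_celsius(kelvin: int) -> float:
    """Convert an NVMe SMART composite temperature from Kelvin to Celsius."""
    return kelvin - 273.15

print(kelvin_to_celsius(318))   # 44.85 degrees C, a typical warm SSD reading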

View File

@ -12,6 +12,7 @@ from typing import TYPE_CHECKING, cast
from attr import evolve from attr import evolve
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
import docker import docker
import docker.errors
from docker.types import Mount from docker.types import Mount
import requests import requests
@ -38,11 +39,12 @@ from ..exceptions import (
) )
from ..hardware.const import PolicyGroup from ..hardware.const import PolicyGroup
from ..hardware.data import Device from ..hardware.data import Device
from ..jobs.const import JobCondition, JobExecutionLimit from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from .const import ( from .const import (
ADDON_BUILDER_IMAGE,
ENV_TIME, ENV_TIME,
ENV_TOKEN, ENV_TOKEN,
ENV_TOKEN_OLD, ENV_TOKEN_OLD,
@ -551,8 +553,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_run", name="docker_addon_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
@ -617,8 +619,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_update", name="docker_addon_update",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def update( async def update(
self, self,
@ -645,8 +647,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_install", name="docker_addon_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def install( async def install(
self, self,
@ -673,10 +675,41 @@ class DockerAddon(DockerInterface):
_LOGGER.info("Starting build for %s:%s", self.image, version) _LOGGER.info("Starting build for %s:%s", self.image, version)
def build_image(): def build_image():
return self.sys_docker.images.build( if build_env.squash:
use_config_proxy=False, **build_env.get_docker_args(version, image) _LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
) )
addon_image_tag = f"{image or self.addon.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
with suppress(docker.errors.NotFound):
self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(version, addon_image_tag),
)
logs = result.output.decode("utf-8")
if result.exit_code != 0:
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
addon_image = self.sys_docker.images.get(addon_image_tag)
return addon_image, logs
try: try:
docker_image, log = await self.sys_run_in_executor(build_image) docker_image, log = await self.sys_run_in_executor(build_image)
@ -687,15 +720,6 @@ class DockerAddon(DockerInterface):
except (docker.errors.DockerException, requests.RequestException) as err: except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err) _LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
if hasattr(err, "build_log"):
log = "\n".join(
[
x["stream"]
for x in err.build_log # pylint: disable=no-member
if isinstance(x, dict) and "stream" in x
]
)
_LOGGER.error("Build log: \n%s", log)
raise DockerError() from err raise DockerError() from err
_LOGGER.info("Build %s:%s done", self.image, version) _LOGGER.info("Build %s:%s done", self.image, version)
@ -711,8 +735,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_import_image", name="docker_addon_import_image",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def import_image(self, tar_file: Path) -> None: async def import_image(self, tar_file: Path) -> None:
"""Import a tar file as image.""" """Import a tar file as image."""
@ -726,7 +750,7 @@ class DockerAddon(DockerInterface):
with suppress(DockerError): with suppress(DockerError):
await self.cleanup() await self.cleanup()
@Job(name="docker_addon_cleanup", limit=JobExecutionLimit.GROUP_WAIT) @Job(name="docker_addon_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
async def cleanup( async def cleanup(
self, self,
old_image: str | None = None, old_image: str | None = None,
@ -750,8 +774,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_write_stdin", name="docker_addon_write_stdin",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def write_stdin(self, data: bytes) -> None: async def write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.""" """Write to add-on stdin."""
@ -784,8 +808,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_stop", name="docker_addon_stop",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def stop(self, remove_container: bool = True) -> None: async def stop(self, remove_container: bool = True) -> None:
"""Stop/remove Docker container.""" """Stop/remove Docker container."""
@ -824,8 +848,8 @@ class DockerAddon(DockerInterface):
@Job( @Job(
name="docker_addon_hardware_events", name="docker_addon_hardware_events",
conditions=[JobCondition.OS_AGENT], conditions=[JobCondition.OS_AGENT],
limit=JobExecutionLimit.SINGLE_WAIT,
internal=True, internal=True,
concurrency=JobConcurrency.QUEUE,
) )
async def _hardware_events(self, device: Device) -> None: async def _hardware_events(self, device: Device) -> None:
"""Process Hardware events for adjust device access.""" """Process Hardware events for adjust device access."""

View File

@ -9,7 +9,7 @@ from ..const import DOCKER_CPU_RUNTIME_ALLOCATION
from ..coresys import CoreSysAttributes from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..hardware.const import PolicyGroup from ..hardware.const import PolicyGroup
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ( from .const import (
ENV_TIME, ENV_TIME,
@ -89,8 +89,8 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
@Job( @Job(
name="docker_audio_run", name="docker_audio_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""

View File

@ -4,7 +4,7 @@ import logging
from ..coresys import CoreSysAttributes from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ENV_TIME, ENV_TOKEN from .const import ENV_TIME, ENV_TOKEN
from .interface import DockerInterface from .interface import DockerInterface
@ -29,8 +29,8 @@ class DockerCli(DockerInterface, CoreSysAttributes):
@Job( @Job(
name="docker_cli_run", name="docker_cli_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""

View File

@ -107,3 +107,6 @@ PATH_BACKUP = PurePath("/backup")
PATH_SHARE = PurePath("/share") PATH_SHARE = PurePath("/share")
PATH_MEDIA = PurePath("/media") PATH_MEDIA = PurePath("/media")
PATH_CLOUD_BACKUP = PurePath("/cloud_backup") PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
# https://hub.docker.com/_/docker
ADDON_BUILDER_IMAGE = "docker.io/library/docker"

View File

@ -6,7 +6,7 @@ from docker.types import Mount
from ..coresys import CoreSysAttributes from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ENV_TIME, MOUNT_DBUS, MountType from .const import ENV_TIME, MOUNT_DBUS, MountType
from .interface import DockerInterface from .interface import DockerInterface
@ -31,8 +31,8 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
@Job( @Job(
name="docker_dns_run", name="docker_dns_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""

View File

@ -12,7 +12,7 @@ from ..const import LABEL_MACHINE
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..hardware.const import PolicyGroup from ..hardware.const import PolicyGroup
from ..homeassistant.const import LANDINGPAGE from ..homeassistant.const import LANDINGPAGE
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ( from .const import (
ENV_TIME, ENV_TIME,
@ -161,8 +161,8 @@ class DockerHomeAssistant(DockerInterface):
@Job( @Job(
name="docker_home_assistant_run", name="docker_home_assistant_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self, *, restore_job_id: str | None = None) -> None: async def run(self, *, restore_job_id: str | None = None) -> None:
"""Run Docker image.""" """Run Docker image."""
@ -200,8 +200,8 @@ class DockerHomeAssistant(DockerInterface):
@Job( @Job(
name="docker_home_assistant_execute_command", name="docker_home_assistant_execute_command",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def execute_command(self, command: str) -> CommandReturn: async def execute_command(self, command: str) -> CommandReturn:
"""Create a temporary container and run command.""" """Create a temporary container and run command."""
@ -213,9 +213,6 @@ class DockerHomeAssistant(DockerInterface):
privileged=True, privileged=True,
init=True, init=True,
entrypoint=[], entrypoint=[],
detach=True,
stdout=True,
stderr=True,
mounts=[ mounts=[
Mount( Mount(
type=MountType.BIND.value, type=MountType.BIND.value,

View File

@ -39,7 +39,7 @@ from ..exceptions import (
DockerRequestError, DockerRequestError,
DockerTrustError, DockerTrustError,
) )
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobExecutionLimit from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType, SuggestionType from ..resolution.const import ContextType, IssueType, SuggestionType
@ -219,8 +219,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_install", name="docker_interface_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def install( async def install(
self, self,
@ -338,7 +338,7 @@ class DockerInterface(JobGroup, ABC):
return _container_state_from_model(docker_container) return _container_state_from_model(docker_container)
@Job(name="docker_interface_attach", limit=JobExecutionLimit.GROUP_WAIT) @Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
async def attach( async def attach(
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None: ) -> None:
@ -376,8 +376,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_run", name="docker_interface_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
@ -406,8 +406,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_stop", name="docker_interface_stop",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def stop(self, remove_container: bool = True) -> None: async def stop(self, remove_container: bool = True) -> None:
"""Stop/remove Docker container.""" """Stop/remove Docker container."""
@ -421,8 +421,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_start", name="docker_interface_start",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
def start(self) -> Awaitable[None]: def start(self) -> Awaitable[None]:
"""Start Docker container.""" """Start Docker container."""
@ -430,8 +430,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_remove", name="docker_interface_remove",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def remove(self, *, remove_image: bool = True) -> None: async def remove(self, *, remove_image: bool = True) -> None:
"""Remove Docker images.""" """Remove Docker images."""
@ -448,8 +448,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_check_image", name="docker_interface_check_image",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def check_image( async def check_image(
self, self,
@ -497,8 +497,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_update", name="docker_interface_update",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def update( async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False self, version: AwesomeVersion, image: str | None = None, latest: bool = False
@ -526,7 +526,7 @@ class DockerInterface(JobGroup, ABC):
return b"" return b""
@Job(name="docker_interface_cleanup", limit=JobExecutionLimit.GROUP_WAIT) @Job(name="docker_interface_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
async def cleanup( async def cleanup(
self, self,
old_image: str | None = None, old_image: str | None = None,
@ -543,8 +543,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_restart", name="docker_interface_restart",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
def restart(self) -> Awaitable[None]: def restart(self) -> Awaitable[None]:
"""Restart docker container.""" """Restart docker container."""
@ -554,8 +554,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_execute_command", name="docker_interface_execute_command",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def execute_command(self, command: str) -> CommandReturn: async def execute_command(self, command: str) -> CommandReturn:
"""Create a temporary container and run command.""" """Create a temporary container and run command."""
@ -619,8 +619,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_run_inside", name="docker_interface_run_inside",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
def run_inside(self, command: str) -> Awaitable[CommandReturn]: def run_inside(self, command: str) -> Awaitable[CommandReturn]:
"""Execute a command inside Docker container.""" """Execute a command inside Docker container."""
@ -635,8 +635,8 @@ class DockerInterface(JobGroup, ABC):
@Job( @Job(
name="docker_interface_check_trust", name="docker_interface_check_trust",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def check_trust(self) -> None: async def check_trust(self) -> None:
"""Check trust of exists Docker image.""" """Check trust of exists Docker image."""

View File

@ -95,12 +95,12 @@ class DockerConfig(FileConfiguration):
super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG) super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)
@property @property
def enable_ipv6(self) -> bool: def enable_ipv6(self) -> bool | None:
"""Return IPv6 configuration for docker network.""" """Return IPv6 configuration for docker network."""
return self._data.get(ATTR_ENABLE_IPV6, False) return self._data.get(ATTR_ENABLE_IPV6, None)
@enable_ipv6.setter @enable_ipv6.setter
def enable_ipv6(self, value: bool) -> None: def enable_ipv6(self, value: bool | None) -> None:
"""Set IPv6 configuration for docker network.""" """Set IPv6 configuration for docker network."""
self._data[ATTR_ENABLE_IPV6] = value self._data[ATTR_ENABLE_IPV6] = value
@ -294,8 +294,8 @@ class DockerAPI:
def run_command( def run_command(
self, self,
image: str, image: str,
tag: str = "latest", version: str = "latest",
command: str | None = None, command: str | list[str] | None = None,
**kwargs: Any, **kwargs: Any,
) -> CommandReturn: ) -> CommandReturn:
"""Create a temporary container and run command. """Create a temporary container and run command.
@ -305,12 +305,15 @@ class DockerAPI:
stdout = kwargs.get("stdout", True) stdout = kwargs.get("stdout", True)
stderr = kwargs.get("stderr", True) stderr = kwargs.get("stderr", True)
_LOGGER.info("Runing command '%s' on %s", command, image) image_with_tag = f"{image}:{version}"
_LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
container = None container = None
try: try:
container = self.docker.containers.run( container = self.docker.containers.run(
f"{image}:{tag}", image_with_tag,
command=command, command=command,
detach=True,
network=self.network.name, network=self.network.name,
use_config_proxy=False, use_config_proxy=False,
**kwargs, **kwargs,
@ -327,9 +330,9 @@ class DockerAPI:
# cleanup container # cleanup container
if container: if container:
with suppress(docker_errors.DockerException, requests.RequestException): with suppress(docker_errors.DockerException, requests.RequestException):
container.remove(force=True) container.remove(force=True, v=True)
return CommandReturn(result.get("StatusCode"), output) return CommandReturn(result["StatusCode"], output)
def repair(self) -> None: def repair(self) -> None:
"""Repair local docker overlayfs2 issues.""" """Repair local docker overlayfs2 issues."""
@ -442,7 +445,7 @@ class DockerAPI:
if remove_container: if remove_container:
with suppress(DockerException, requests.RequestException): with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name) _LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True) docker_container.remove(force=True, v=True)
def start_container(self, name: str) -> None: def start_container(self, name: str) -> None:
"""Start Docker container.""" """Start Docker container."""

View File

@ -4,7 +4,7 @@ import logging
from ..coresys import CoreSysAttributes from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ENV_TIME, Capabilities from .const import ENV_TIME, Capabilities
from .interface import DockerInterface from .interface import DockerInterface
@ -34,8 +34,8 @@ class DockerMulticast(DockerInterface, CoreSysAttributes):
@Job( @Job(
name="docker_multicast_run", name="docker_multicast_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""

View File

@ -47,6 +47,8 @@ DOCKER_NETWORK_PARAMS = {
"options": {"com.docker.network.bridge.name": DOCKER_NETWORK}, "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
} }
DOCKER_ENABLE_IPV6_DEFAULT = True
class DockerNetwork: class DockerNetwork:
"""Internal Supervisor Network. """Internal Supervisor Network.
@ -59,7 +61,7 @@ class DockerNetwork:
self.docker: docker.DockerClient = docker_client self.docker: docker.DockerClient = docker_client
self._network: docker.models.networks.Network self._network: docker.models.networks.Network
async def post_init(self, enable_ipv6: bool = False) -> Self: async def post_init(self, enable_ipv6: bool | None = None) -> Self:
"""Post init actions that must be done in event loop.""" """Post init actions that must be done in event loop."""
self._network = await asyncio.get_running_loop().run_in_executor( self._network = await asyncio.get_running_loop().run_in_executor(
None, self._get_network, enable_ipv6 None, self._get_network, enable_ipv6
@ -111,16 +113,24 @@ class DockerNetwork:
"""Return observer of the network.""" """Return observer of the network."""
return DOCKER_IPV4_NETWORK_MASK[6] return DOCKER_IPV4_NETWORK_MASK[6]
def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network: def _get_network(
self, enable_ipv6: bool | None = None
) -> docker.models.networks.Network:
"""Get supervisor network.""" """Get supervisor network."""
try: try:
if network := self.docker.networks.get(DOCKER_NETWORK): if network := self.docker.networks.get(DOCKER_NETWORK):
if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6: current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
# If the network exists and we don't have an explicit setting,
# simply stick with what we have.
if enable_ipv6 is None or current_ipv6 == enable_ipv6:
return network return network
# We have an explicit setting which differs from the current state.
_LOGGER.info( _LOGGER.info(
"Migrating Supervisor network to %s", "Migrating Supervisor network to %s",
"IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only", "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
) )
if (containers := network.containers) and ( if (containers := network.containers) and (
containers_all := all( containers_all := all(
container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
@ -134,6 +144,7 @@ class DockerNetwork:
requests.RequestException, requests.RequestException,
): ):
network.disconnect(container, force=True) network.disconnect(container, force=True)
if not containers or containers_all: if not containers or containers_all:
try: try:
network.remove() network.remove()
@ -151,7 +162,9 @@ class DockerNetwork:
_LOGGER.info("Can't find Supervisor network, creating a new network") _LOGGER.info("Can't find Supervisor network, creating a new network")
network_params = DOCKER_NETWORK_PARAMS.copy() network_params = DOCKER_NETWORK_PARAMS.copy()
network_params[ATTR_ENABLE_IPV6] = enable_ipv6 network_params[ATTR_ENABLE_IPV6] = (
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
)
try: try:
self._network = self.docker.networks.create(**network_params) # type: ignore self._network = self.docker.networks.create(**network_params) # type: ignore
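
enable_ipv6 is now tri-state: None means there is no explicit user setting. An existing network is then left exactly as it is, and only a newly created network falls back to DOCKER_ENABLE_IPV6_DEFAULT. A condensed, illustrative helper that mirrors the decision _get_network makes (not part of the change itself):

    def resolve_ipv6(requested: bool | None, current: bool | None) -> bool:
        """Sketch of the tri-state handling; current is None when no network exists."""
        if current is not None and (requested is None or requested == current):
            # Existing network and no conflicting explicit setting: keep it.
            return current
        if requested is None:
            # Creating a new network without an explicit setting.
            return DOCKER_ENABLE_IPV6_DEFAULT
        # Explicit setting wins and may trigger a network migration.
        return requested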

View File

@ -5,7 +5,7 @@ import logging
from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME
from ..coresys import CoreSysAttributes from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError from ..exceptions import DockerJobError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import ENV_TIME, ENV_TOKEN, MOUNT_DOCKER, RestartPolicy from .const import ENV_TIME, ENV_TOKEN, MOUNT_DOCKER, RestartPolicy
from .interface import DockerInterface from .interface import DockerInterface
@ -30,8 +30,8 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
@Job( @Job(
name="docker_observer_run", name="docker_observer_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError, on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def run(self) -> None: async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""

View File

@ -10,7 +10,7 @@ import docker
import requests import requests
from ..exceptions import DockerError from ..exceptions import DockerError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from .const import PropagationMode from .const import PropagationMode
from .interface import DockerInterface from .interface import DockerInterface
@ -45,7 +45,7 @@ class DockerSupervisor(DockerInterface):
if mount.get("Destination") == "/data" if mount.get("Destination") == "/data"
) )
@Job(name="docker_supervisor_attach", limit=JobExecutionLimit.GROUP_WAIT) @Job(name="docker_supervisor_attach", concurrency=JobConcurrency.GROUP_QUEUE)
async def attach( async def attach(
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None: ) -> None:
@ -77,7 +77,7 @@ class DockerSupervisor(DockerInterface):
ipv4=self.sys_docker.network.supervisor, ipv4=self.sys_docker.network.supervisor,
) )
@Job(name="docker_supervisor_retag", limit=JobExecutionLimit.GROUP_WAIT) @Job(name="docker_supervisor_retag", concurrency=JobConcurrency.GROUP_QUEUE)
def retag(self) -> Awaitable[None]: def retag(self) -> Awaitable[None]:
"""Retag latest image to version.""" """Retag latest image to version."""
return self.sys_run_in_executor(self._retag) return self.sys_run_in_executor(self._retag)
@ -108,7 +108,10 @@ class DockerSupervisor(DockerInterface):
f"Can't retag Supervisor version: {err}", _LOGGER.error f"Can't retag Supervisor version: {err}", _LOGGER.error
) from err ) from err
@Job(name="docker_supervisor_update_start_tag", limit=JobExecutionLimit.GROUP_WAIT) @Job(
name="docker_supervisor_update_start_tag",
concurrency=JobConcurrency.GROUP_QUEUE,
)
def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]: def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]:
"""Update start tag to new version.""" """Update start tag to new version."""
return self.sys_run_in_executor(self._update_start_tag, image, version) return self.sys_run_in_executor(self._update_start_tag, image, version)

View File

@ -5,7 +5,7 @@ from pathlib import Path
import shutil import shutil
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HardwareNotFound from ..exceptions import DBusError, DBusObjectError, HardwareNotFound
from .const import UdevSubsystem from .const import UdevSubsystem
from .data import Device from .data import Device
@ -14,6 +14,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
_MOUNTINFO: Path = Path("/proc/self/mountinfo") _MOUNTINFO: Path = Path("/proc/self/mountinfo")
_BLOCK_DEVICE_CLASS = "/sys/class/block/{}" _BLOCK_DEVICE_CLASS = "/sys/class/block/{}"
_BLOCK_DEVICE_EMMC_LIFE_TIME = "/sys/block/{}/device/life_time" _BLOCK_DEVICE_EMMC_LIFE_TIME = "/sys/block/{}/device/life_time"
_DEVICE_PATH = "/dev/{}"
class HwDisk(CoreSysAttributes): class HwDisk(CoreSysAttributes):
@ -92,8 +93,67 @@ class HwDisk(CoreSysAttributes):
optionsep += 1 optionsep += 1
return mountinfoarr[optionsep + 2] return mountinfoarr[optionsep + 2]
def _get_mount_source_device_name(self, path: str | Path) -> str | None:
"""Get mount source device name.
Must be run in executor.
"""
mount_source = self._get_mount_source(str(path))
if not mount_source or mount_source == "overlay":
return None
mount_source_path = Path(mount_source)
if not mount_source_path.is_block_device():
return None
# This looks a bit funky but it is more or less what lsblk is doing to get
# the parent dev reliably
# Get class device...
mount_source_device_part = Path(
_BLOCK_DEVICE_CLASS.format(mount_source_path.name)
)
# ... resolve symlink and get parent device from that path.
return mount_source_device_part.resolve().parts[-2]
async def _try_get_nvme_lifetime(self, device_name: str) -> float | None:
"""Get NVMe device lifetime."""
device_path = Path(_DEVICE_PATH.format(device_name))
try:
block_device = self.sys_dbus.udisks2.get_block_device_by_path(device_path)
drive = self.sys_dbus.udisks2.get_drive(block_device.drive)
except DBusObjectError:
_LOGGER.warning(
"Unable to find UDisks2 drive for device at %s", device_path.as_posix()
)
return None
# Exit if this isn't an NVMe device
if not drive.nvme_controller:
return None
try:
smart_log = await drive.nvme_controller.smart_get_attributes()
except DBusError as err:
_LOGGER.warning(
"Unable to get smart log for drive %s due to %s", drive.id, err
)
return None
# UDisks2 documentation specifies that value can exceed 100
if smart_log.percent_used >= 100:
_LOGGER.warning(
"NVMe controller reports that its estimated life-time has been exceeded!"
)
return 100.0
return smart_log.percent_used
def _try_get_emmc_life_time(self, device_name: str) -> float | None: def _try_get_emmc_life_time(self, device_name: str) -> float | None:
# Get eMMC life_time """Get eMMC life_time.
Must be run in executor.
"""
life_time_path = Path(_BLOCK_DEVICE_EMMC_LIFE_TIME.format(device_name)) life_time_path = Path(_BLOCK_DEVICE_EMMC_LIFE_TIME.format(device_name))
if not life_time_path.exists(): if not life_time_path.exists():
@ -118,32 +178,23 @@ class HwDisk(CoreSysAttributes):
) )
return 100.0 return 100.0
# Return the pessimistic estimate (0x02 -> 10%-20%, return 20%) # Return the optimistic estimate (0x02 -> 10%-20%, return 10%)
return life_time_value * 10.0 return (life_time_value - 1) * 10.0
def get_disk_life_time(self, path: str | Path) -> float | None: async def get_disk_life_time(self, path: str | Path) -> float | None:
"""Return life time estimate of the underlying SSD drive. """Return life time estimate of the underlying SSD drive."""
mount_source_device_name = await self.sys_run_in_executor(
Must be run in executor. self._get_mount_source_device_name, path
"""
mount_source = self._get_mount_source(str(path))
if not mount_source or mount_source == "overlay":
return None
mount_source_path = Path(mount_source)
if not mount_source_path.is_block_device():
return None
# This looks a bit funky but it is more or less what lsblk is doing to get
# the parent dev reliably
# Get class device...
mount_source_device_part = Path(
_BLOCK_DEVICE_CLASS.format(mount_source_path.name)
) )
if mount_source_device_name is None:
return None
# ... resolve symlink and get parent device from that path. # First check if its an NVMe device and get lifetime information that way
mount_source_device_name = mount_source_device_part.resolve().parts[-2] nvme_lifetime = await self._try_get_nvme_lifetime(mount_source_device_name)
if nvme_lifetime is not None:
return nvme_lifetime
# Currently only eMMC block devices supported # Else try to get lifetime information for eMMC devices. Other types of devices will return None
return self._try_get_emmc_life_time(mount_source_device_name) return await self.sys_run_in_executor(
self._try_get_emmc_life_time, mount_source_device_name
)
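
Two things change for the lifetime estimate: NVMe drives are now queried through UDisks2 (percent_used from the SMART log, capped at 100), and the eMMC value is reported as the optimistic lower bound of the range instead of the upper bound. get_disk_life_time is also a coroutine now, so callers simply await it. A worked example of the eMMC arithmetic with an illustrative register value:

    # sysfs life_time value 0x02 means "10% - 20% of device life time used"
    life_time_value = 0x02
    pessimistic = life_time_value * 10.0        # 20.0, what was returned before
    optimistic = (life_time_value - 1) * 10.0   # 10.0, what is returned now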

View File

@ -15,7 +15,7 @@ from multidict import MultiMapping
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..utils import check_port, version_is_new_enough from ..utils import check_port, version_is_new_enough
from .const import LANDINGPAGE from .const import LANDINGPAGE
@ -46,8 +46,8 @@ class HomeAssistantAPI(CoreSysAttributes):
@Job( @Job(
name="home_assistant_api_ensure_access_token", name="home_assistant_api_ensure_access_token",
limit=JobExecutionLimit.SINGLE_WAIT,
internal=True, internal=True,
concurrency=JobConcurrency.QUEUE,
) )
async def ensure_access_token(self) -> None: async def ensure_access_token(self) -> None:
"""Ensure there is an access token.""" """Ensure there is an access token."""

View File

@ -28,7 +28,7 @@ from ..exceptions import (
HomeAssistantUpdateError, HomeAssistantUpdateError,
JobException, JobException,
) )
from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobExecutionLimit from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
from ..jobs.decorator import Job, JobCondition from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType from ..resolution.const import ContextType, IssueType
@ -87,19 +87,19 @@ class HomeAssistantCore(JobGroup):
try: try:
# Evaluate Version if we lost this information # Evaluate Version if we lost this information
if not self.sys_homeassistant.version: if self.sys_homeassistant.version:
version = self.sys_homeassistant.version
else:
self.sys_homeassistant.version = ( self.sys_homeassistant.version = (
await self.instance.get_latest_version() version
) ) = await self.instance.get_latest_version()
await self.instance.attach( await self.instance.attach(version=version, skip_state_event_if_down=True)
version=self.sys_homeassistant.version, skip_state_event_if_down=True
)
# Ensure we are using correct image for this system (unless user has overridden it) # Ensure we are using correct image for this system (unless user has overridden it)
if not self.sys_homeassistant.override_image: if not self.sys_homeassistant.override_image:
await self.instance.check_image( await self.instance.check_image(
self.sys_homeassistant.version, self.sys_homeassistant.default_image version, self.sys_homeassistant.default_image
) )
self.sys_homeassistant.set_image(self.sys_homeassistant.default_image) self.sys_homeassistant.set_image(self.sys_homeassistant.default_image)
except DockerError: except DockerError:
@ -108,7 +108,7 @@ class HomeAssistantCore(JobGroup):
) )
await self.install_landingpage() await self.install_landingpage()
else: else:
self.sys_homeassistant.version = self.instance.version self.sys_homeassistant.version = self.instance.version or version
self.sys_homeassistant.set_image(self.instance.image) self.sys_homeassistant.set_image(self.instance.image)
await self.sys_homeassistant.save_data() await self.sys_homeassistant.save_data()
@ -123,8 +123,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_install_landing_page", name="home_assistant_core_install_landing_page",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def install_landingpage(self) -> None: async def install_landingpage(self) -> None:
"""Install a landing page.""" """Install a landing page."""
@ -171,8 +171,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_install", name="home_assistant_core_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def install(self) -> None: async def install(self) -> None:
"""Install a landing page.""" """Install a landing page."""
@ -182,12 +182,13 @@ class HomeAssistantCore(JobGroup):
if not self.sys_homeassistant.latest_version: if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload() await self.sys_updater.reload()
if self.sys_homeassistant.latest_version: if to_version := self.sys_homeassistant.latest_version:
try: try:
await self.instance.update( await self.instance.update(
self.sys_homeassistant.latest_version, to_version,
image=self.sys_updater.image_homeassistant, image=self.sys_updater.image_homeassistant,
) )
self.sys_homeassistant.version = self.instance.version or to_version
break break
except (DockerError, JobException): except (DockerError, JobException):
pass pass
@ -198,7 +199,6 @@ class HomeAssistantCore(JobGroup):
await asyncio.sleep(30) await asyncio.sleep(30)
_LOGGER.info("Home Assistant docker now installed") _LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant) self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
await self.sys_homeassistant.save_data() await self.sys_homeassistant.save_data()
@ -222,8 +222,8 @@ class HomeAssistantCore(JobGroup):
JobCondition.PLUGINS_UPDATED, JobCondition.PLUGINS_UPDATED,
JobCondition.SUPERVISOR_UPDATED, JobCondition.SUPERVISOR_UPDATED,
], ],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def update( async def update(
self, self,
@ -231,8 +231,8 @@ class HomeAssistantCore(JobGroup):
backup: bool | None = False, backup: bool | None = False,
) -> None: ) -> None:
"""Update HomeAssistant version.""" """Update HomeAssistant version."""
version = version or self.sys_homeassistant.latest_version to_version = version or self.sys_homeassistant.latest_version
if not version: if not to_version:
raise HomeAssistantUpdateError( raise HomeAssistantUpdateError(
"Cannot determine latest version of Home Assistant for update", "Cannot determine latest version of Home Assistant for update",
_LOGGER.error, _LOGGER.error,
@ -243,9 +243,9 @@ class HomeAssistantCore(JobGroup):
running = await self.instance.is_running() running = await self.instance.is_running()
exists = await self.instance.exists() exists = await self.instance.exists()
if exists and version == self.instance.version: if exists and to_version == self.instance.version:
raise HomeAssistantUpdateError( raise HomeAssistantUpdateError(
f"Version {version!s} is already installed", _LOGGER.warning f"Version {to_version!s} is already installed", _LOGGER.warning
) )
if backup: if backup:
@ -268,7 +268,7 @@ class HomeAssistantCore(JobGroup):
"Updating Home Assistant image failed", _LOGGER.warning "Updating Home Assistant image failed", _LOGGER.warning
) from err ) from err
self.sys_homeassistant.version = self.instance.version self.sys_homeassistant.version = self.instance.version or to_version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant) self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
if running: if running:
@ -282,7 +282,7 @@ class HomeAssistantCore(JobGroup):
# Update Home Assistant # Update Home Assistant
with suppress(HomeAssistantError): with suppress(HomeAssistantError):
await _update(version) await _update(to_version)
if not self.error_state and rollback: if not self.error_state and rollback:
try: try:
@ -324,8 +324,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_start", name="home_assistant_core_start",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def start(self) -> None: async def start(self) -> None:
"""Run Home Assistant docker.""" """Run Home Assistant docker."""
@ -359,8 +359,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_stop", name="home_assistant_core_stop",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def stop(self, *, remove_container: bool = False) -> None: async def stop(self, *, remove_container: bool = False) -> None:
"""Stop Home Assistant Docker.""" """Stop Home Assistant Docker."""
@ -371,8 +371,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_restart", name="home_assistant_core_restart",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def restart(self, *, safe_mode: bool = False) -> None: async def restart(self, *, safe_mode: bool = False) -> None:
"""Restart Home Assistant Docker.""" """Restart Home Assistant Docker."""
@ -392,8 +392,8 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_rebuild", name="home_assistant_core_rebuild",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError, on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def rebuild(self, *, safe_mode: bool = False) -> None: async def rebuild(self, *, safe_mode: bool = False) -> None:
"""Rebuild Home Assistant Docker container.""" """Rebuild Home Assistant Docker container."""
@ -546,9 +546,9 @@ class HomeAssistantCore(JobGroup):
@Job( @Job(
name="home_assistant_core_restart_after_problem", name="home_assistant_core_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed Home Assistant.""" """Restart unhealthy or failed Home Assistant."""

View File

@ -46,7 +46,8 @@ from ..exceptions import (
) )
from ..hardware.const import PolicyGroup from ..hardware.const import PolicyGroup
from ..hardware.data import Device from ..hardware.data import Device
from ..jobs.decorator import Job, JobExecutionLimit from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason from ..resolution.const import UnhealthyReason
from ..utils import remove_folder, remove_folder_with_excludes from ..utils import remove_folder, remove_folder_with_excludes
from ..utils.common import FileConfiguration from ..utils.common import FileConfiguration
@ -551,9 +552,10 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
@Job( @Job(
name="home_assistant_get_users", name="home_assistant_get_users",
limit=JobExecutionLimit.THROTTLE_WAIT,
throttle_period=timedelta(minutes=5), throttle_period=timedelta(minutes=5),
internal=True, internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
) )
async def get_users(self) -> list[IngressSessionDataUser]: async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users.""" """Get list of all configured users."""

View File

@ -6,7 +6,7 @@ from pathlib import Path
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import YamlFileError from ..exceptions import YamlFileError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..utils.yaml import read_yaml_file from ..utils.yaml import read_yaml_file
@ -43,9 +43,10 @@ class HomeAssistantSecrets(CoreSysAttributes):
@Job( @Job(
name="home_assistant_secrets_read", name="home_assistant_secrets_read",
limit=JobExecutionLimit.THROTTLE_WAIT,
throttle_period=timedelta(seconds=60), throttle_period=timedelta(seconds=60),
internal=True, internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
) )
async def _read_secrets(self): async def _read_secrets(self):
"""Read secrets.yaml into memory.""" """Read secrets.yaml into memory."""

View File

@ -175,7 +175,7 @@ class Interface:
) )
return Interface( return Interface(
name=inet.name, name=inet.interface_name,
mac=inet.hw_address, mac=inet.hw_address,
path=inet.path, path=inet.path,
enabled=inet.settings is not None, enabled=inet.settings is not None,
@ -286,7 +286,7 @@ class Interface:
_LOGGER.warning( _LOGGER.warning(
"Auth method %s for network interface %s unsupported, skipping", "Auth method %s for network interface %s unsupported, skipping",
inet.settings.wireless_security.key_mgmt, inet.settings.wireless_security.key_mgmt,
inet.name, inet.interface_name,
) )
return None return None

View File

@ -135,9 +135,8 @@ class InfoCenter(CoreSysAttributes):
async def disk_life_time(self) -> float | None: async def disk_life_time(self) -> float | None:
"""Return the estimated life-time usage (in %) of the SSD storing the data directory.""" """Return the estimated life-time usage (in %) of the SSD storing the data directory."""
return await self.sys_run_in_executor( return await self.sys_hardware.disk.get_disk_life_time(
self.sys_hardware.disk.get_disk_life_time, self.coresys.config.path_supervisor
self.coresys.config.path_supervisor,
) )
async def get_dmesg(self) -> bytes: async def get_dmesg(self) -> bytes:

View File

@ -8,11 +8,11 @@ from typing import Any
from ..const import ATTR_HOST_INTERNET from ..const import ATTR_HOST_INTERNET
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import ( from ..dbus.const import (
DBUS_ATTR_CONFIGURATION,
DBUS_ATTR_CONNECTION_ENABLED, DBUS_ATTR_CONNECTION_ENABLED,
DBUS_ATTR_CONNECTIVITY, DBUS_ATTR_CONNECTIVITY,
DBUS_ATTR_PRIMARY_CONNECTION, DBUS_IFACE_DNS,
DBUS_IFACE_NM, DBUS_IFACE_NM,
DBUS_OBJECT_BASE,
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED, DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
ConnectionStateType, ConnectionStateType,
ConnectivityState, ConnectivityState,
@ -46,6 +46,8 @@ class NetworkManager(CoreSysAttributes):
"""Initialize system center handling.""" """Initialize system center handling."""
self.coresys: CoreSys = coresys self.coresys: CoreSys = coresys
self._connectivity: bool | None = None self._connectivity: bool | None = None
# No event needed on initial change (NetworkManager initializes with empty list)
self._dns_configuration: list = []
@property @property
def connectivity(self) -> bool | None: def connectivity(self) -> bool | None:
@ -142,6 +144,10 @@ class NetworkManager(CoreSysAttributes):
"properties_changed", self._check_connectivity_changed "properties_changed", self._check_connectivity_changed
) )
self.sys_dbus.network.dns.dbus.properties.on(
"properties_changed", self._check_dns_changed
)
async def _check_connectivity_changed( async def _check_connectivity_changed(
self, interface: str, changed: dict[str, Any], invalidated: list[str] self, interface: str, changed: dict[str, Any], invalidated: list[str]
): ):
@ -152,16 +158,6 @@ class NetworkManager(CoreSysAttributes):
connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED) connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY) connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)
# This potentially updated the DNS configuration. Make sure the DNS plug-in
# picks up the latest settings.
if (
DBUS_ATTR_PRIMARY_CONNECTION in changed
and changed[DBUS_ATTR_PRIMARY_CONNECTION]
and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
and await self.sys_plugins.dns.is_running()
):
await self.sys_plugins.dns.restart()
if ( if (
connectivity_check is True connectivity_check is True
or DBUS_ATTR_CONNECTION_ENABLED in invalidated or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@ -175,6 +171,20 @@ class NetworkManager(CoreSysAttributes):
elif connectivity is not None: elif connectivity is not None:
self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL
async def _check_dns_changed(
self, interface: str, changed: dict[str, Any], invalidated: list[str]
):
"""Check if DNS properties have changed."""
if interface != DBUS_IFACE_DNS:
return
if (
DBUS_ATTR_CONFIGURATION in changed
and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
):
self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
self.sys_plugins.dns.notify_locals_changed()
async def update(self, *, force_connectivity_check: bool = False): async def update(self, *, force_connectivity_check: bool = False):
"""Update properties over dbus.""" """Update properties over dbus."""
_LOGGER.info("Updating local network information") _LOGGER.info("Updating local network information")

View File

@ -9,7 +9,7 @@ from pulsectl import Pulse, PulseError, PulseIndexError, PulseOperationFailed
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import PulseAudioError from ..exceptions import PulseAudioError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
@ -236,8 +236,9 @@ class SoundControl(CoreSysAttributes):
@Job( @Job(
name="sound_control_update", name="sound_control_update",
limit=JobExecutionLimit.THROTTLE_WAIT,
throttle_period=timedelta(seconds=2), throttle_period=timedelta(seconds=2),
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
) )
async def update(self, reload_pulse: bool = False): async def update(self, reload_pulse: bool = False):
"""Update properties over dbus.""" """Update properties over dbus."""

View File

@ -34,16 +34,53 @@ class JobCondition(StrEnum):
SUPERVISOR_UPDATED = "supervisor_updated" SUPERVISOR_UPDATED = "supervisor_updated"
class JobExecutionLimit(StrEnum): class JobConcurrency(StrEnum):
"""Job Execution limits.""" """Job concurrency control.
ONCE = "once" Controls how many instances of a job can run simultaneously.
SINGLE_WAIT = "single_wait"
THROTTLE = "throttle" Individual Concurrency (applies to each method separately):
THROTTLE_WAIT = "throttle_wait" - REJECT: Fail immediately if another instance is already running
THROTTLE_RATE_LIMIT = "throttle_rate_limit" - QUEUE: Wait for the current instance to finish, then run
GROUP_ONCE = "group_once"
GROUP_WAIT = "group_wait" Group Concurrency (applies across all methods on a JobGroup):
GROUP_THROTTLE = "group_throttle" - GROUP_REJECT: Fail if ANY job is running on the JobGroup
GROUP_THROTTLE_WAIT = "group_throttle_wait" - GROUP_QUEUE: Wait for ANY running job on the JobGroup to finish
GROUP_THROTTLE_RATE_LIMIT = "group_throttle_rate_limit"
JobGroup Behavior:
- All methods on the same JobGroup instance share a single lock
- Methods can call other methods on the same group without deadlock
- Uses the JobGroup.group_name for coordination
- Requires the class to inherit from JobGroup
"""
REJECT = "reject" # Fail if already running (was ONCE)
QUEUE = "queue" # Wait if already running (was SINGLE_WAIT)
GROUP_REJECT = "group_reject" # Was GROUP_ONCE
GROUP_QUEUE = "group_queue" # Was GROUP_WAIT
class JobThrottle(StrEnum):
"""Job throttling control.
Controls how frequently jobs can be executed.
Individual Throttling (each method has its own throttle state):
- THROTTLE: Skip execution if called within throttle_period
- RATE_LIMIT: Allow up to throttle_max_calls within throttle_period, then fail
Group Throttling (all methods on a JobGroup share throttle state):
- GROUP_THROTTLE: Skip if ANY method was called within throttle_period
- GROUP_RATE_LIMIT: Allow up to throttle_max_calls total across ALL methods
JobGroup Behavior:
- All methods on the same JobGroup instance share throttle counters/timers
- Uses the JobGroup.group_name as the key for tracking state
- If one method is throttled, other methods may also be throttled
- Requires the class to inherit from JobGroup
"""
THROTTLE = "throttle" # Skip if called too frequently
RATE_LIMIT = "rate_limit" # Rate limiting with max calls per period
GROUP_THROTTLE = "group_throttle" # Group version of THROTTLE
GROUP_RATE_LIMIT = "group_rate_limit" # Group version of RATE_LIMIT

View File

@ -20,7 +20,7 @@ from ..host.const import HostFeature
from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from . import SupervisorJob from . import SupervisorJob
from .const import JobCondition, JobExecutionLimit from .const import JobConcurrency, JobCondition, JobThrottle
from .job_group import JobGroup from .job_group import JobGroup
_LOGGER: logging.Logger = logging.getLogger(__package__) _LOGGER: logging.Logger = logging.getLogger(__package__)
@ -36,14 +36,31 @@ class Job(CoreSysAttributes):
conditions: list[JobCondition] | None = None, conditions: list[JobCondition] | None = None,
cleanup: bool = True, cleanup: bool = True,
on_condition: type[JobException] | None = None, on_condition: type[JobException] | None = None,
limit: JobExecutionLimit | None = None, concurrency: JobConcurrency | None = None,
throttle: JobThrottle | None = None,
throttle_period: timedelta throttle_period: timedelta
| Callable[[CoreSys, datetime, list[datetime] | None], timedelta] | Callable[[CoreSys, datetime, list[datetime] | None], timedelta]
| None = None, | None = None,
throttle_max_calls: int | None = None, throttle_max_calls: int | None = None,
internal: bool = False, internal: bool = False,
): ): # pylint: disable=too-many-positional-arguments
"""Initialize the Job class.""" """Initialize the Job decorator.
Args:
name (str): Unique name for the job. Must not be duplicated.
conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
concurrency (JobConcurrency | None): Concurrency control policy (e.g., reject, queue, group-based).
throttle (JobThrottle | None): Throttling policy (e.g., throttle, rate_limit, group-based).
throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for throttled jobs).
throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
Raises:
RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected throttle policy.
"""
if name in _JOB_NAMES: if name in _JOB_NAMES:
raise RuntimeError(f"A job already exists with name {name}!") raise RuntimeError(f"A job already exists with name {name}!")
@ -52,7 +69,6 @@ class Job(CoreSysAttributes):
self.conditions = conditions self.conditions = conditions
self.cleanup = cleanup self.cleanup = cleanup
self.on_condition = on_condition self.on_condition = on_condition
self.limit = limit
self._throttle_period = throttle_period self._throttle_period = throttle_period
self._throttle_max_calls = throttle_max_calls self._throttle_max_calls = throttle_max_calls
self._lock: asyncio.Semaphore | None = None self._lock: asyncio.Semaphore | None = None
@ -60,34 +76,49 @@ class Job(CoreSysAttributes):
self._rate_limited_calls: dict[str | None, list[datetime]] | None = None self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
self._internal = internal self._internal = internal
self.concurrency = concurrency
self.throttle = throttle
# Validate Options # Validate Options
self._validate_parameters()
def _validate_parameters(self) -> None:
"""Validate job parameters."""
# Validate throttle parameters
if ( if (
self.limit self.throttle
in ( in (
JobExecutionLimit.THROTTLE, JobThrottle.THROTTLE,
JobExecutionLimit.THROTTLE_WAIT, JobThrottle.GROUP_THROTTLE,
JobExecutionLimit.THROTTLE_RATE_LIMIT, JobThrottle.RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE, JobThrottle.GROUP_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
) )
and self._throttle_period is None and self._throttle_period is None
): ):
raise RuntimeError( raise RuntimeError(
f"Job {name} is using execution limit {limit} without a throttle period!" f"Job {self.name} is using throttle {self.throttle} without a throttle period!"
) )
if self.limit in ( if self.throttle in (
JobExecutionLimit.THROTTLE_RATE_LIMIT, JobThrottle.RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT, JobThrottle.GROUP_RATE_LIMIT,
): ):
if self._throttle_max_calls is None: if self._throttle_max_calls is None:
raise RuntimeError( raise RuntimeError(
f"Job {name} is using execution limit {limit} without throttle max calls!" f"Job {self.name} is using throttle {self.throttle} without throttle max calls!"
) )
self._rate_limited_calls = {} self._rate_limited_calls = {}
if self.throttle is not None and self.concurrency in (
JobConcurrency.GROUP_REJECT,
JobConcurrency.GROUP_QUEUE,
):
# We cannot release group locks when Job is not running (e.g. throttled)
# which makes these combinations impossible to use currently.
raise RuntimeError(
f"Job {self.name} is using throttling ({self.throttle}) with group concurrency ({self.concurrency}), which is not allowed!"
)
@property @property
def throttle_max_calls(self) -> int: def throttle_max_calls(self) -> int:
"""Return max calls for throttle.""" """Return max calls for throttle."""
@ -116,7 +147,7 @@ class Job(CoreSysAttributes):
"""Return rate limited calls if used.""" """Return rate limited calls if used."""
if self._rate_limited_calls is None: if self._rate_limited_calls is None:
raise RuntimeError( raise RuntimeError(
f"Rate limited calls not available for limit type {self.limit}" "Rate limited calls not available for this throttle type"
) )
return self._rate_limited_calls.get(group_name, []) return self._rate_limited_calls.get(group_name, [])
@ -127,7 +158,7 @@ class Job(CoreSysAttributes):
"""Add a rate limited call to list if used.""" """Add a rate limited call to list if used."""
if self._rate_limited_calls is None: if self._rate_limited_calls is None:
raise RuntimeError( raise RuntimeError(
f"Rate limited calls not available for limit type {self.limit}" "Rate limited calls not available for this throttle type"
) )
if group_name in self._rate_limited_calls: if group_name in self._rate_limited_calls:
@ -141,7 +172,7 @@ class Job(CoreSysAttributes):
"""Set rate limited calls if used.""" """Set rate limited calls if used."""
if self._rate_limited_calls is None: if self._rate_limited_calls is None:
raise RuntimeError( raise RuntimeError(
f"Rate limited calls not available for limit type {self.limit}" "Rate limited calls not available for this throttle type"
) )
self._rate_limited_calls[group_name] = value self._rate_limited_calls[group_name] = value
@ -178,15 +209,23 @@ class Job(CoreSysAttributes):
if obj.acquire and obj.release: # type: ignore if obj.acquire and obj.release: # type: ignore
job_group = cast(JobGroup, obj) job_group = cast(JobGroup, obj)
if not job_group and self.limit in ( # Check for group-based parameters
JobExecutionLimit.GROUP_ONCE, if not job_group:
JobExecutionLimit.GROUP_WAIT, if self.concurrency in (
JobExecutionLimit.GROUP_THROTTLE, JobConcurrency.GROUP_REJECT,
JobExecutionLimit.GROUP_THROTTLE_WAIT, JobConcurrency.GROUP_QUEUE,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
): ):
raise RuntimeError( raise RuntimeError(
f"Job on {self.name} need to be a JobGroup to use group based limits!" f"Job {self.name} uses group concurrency ({self.concurrency}) but is not on a JobGroup! "
f"The class must inherit from JobGroup to use GROUP_REJECT or GROUP_QUEUE."
) from None
if self.throttle in (
JobThrottle.GROUP_THROTTLE,
JobThrottle.GROUP_RATE_LIMIT,
):
raise RuntimeError(
f"Job {self.name} uses group throttling ({self.throttle}) but is not on a JobGroup! "
f"The class must inherit from JobGroup to use GROUP_THROTTLE or GROUP_RATE_LIMIT."
) from None ) from None
return job_group return job_group
@ -240,71 +279,15 @@ class Job(CoreSysAttributes):
except JobConditionException as err: except JobConditionException as err:
return self._handle_job_condition_exception(err) return self._handle_job_condition_exception(err)
# Handle exection limits # Handle execution limits
if self.limit in ( await self._handle_concurrency_control(job_group, job)
JobExecutionLimit.SINGLE_WAIT,
JobExecutionLimit.ONCE,
):
await self._acquire_exection_limit()
elif self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
):
try: try:
await cast(JobGroup, job_group).acquire( if not await self._handle_throttling(group_name):
job, self.limit == JobExecutionLimit.GROUP_WAIT self._release_concurrency_control(job_group)
) return # Job was throttled, exit early
except JobGroupExecutionLimitExceeded as err: except Exception:
if self.on_condition: self._release_concurrency_control(job_group)
raise self.on_condition(str(err)) from err raise
raise err
elif self.limit in (
JobExecutionLimit.THROTTLE,
JobExecutionLimit.GROUP_THROTTLE,
):
time_since_last_call = datetime.now() - self.last_call(group_name)
if time_since_last_call < self.throttle_period(group_name):
return
elif self.limit in (
JobExecutionLimit.THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
):
await self._acquire_exection_limit()
time_since_last_call = datetime.now() - self.last_call(group_name)
if time_since_last_call < self.throttle_period(group_name):
self._release_exception_limits()
return
elif self.limit in (
JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
# Only reprocess array when necessary (at limit)
if (
len(self.rate_limited_calls(group_name))
>= self.throttle_max_calls
):
self.set_rate_limited_calls(
[
call
for call in self.rate_limited_calls(group_name)
if call
> datetime.now() - self.throttle_period(group_name)
],
group_name,
)
if (
len(self.rate_limited_calls(group_name))
>= self.throttle_max_calls
):
on_condition = (
JobException
if self.on_condition is None
else self.on_condition
)
raise on_condition(
f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
)
# Execute Job # Execute Job
with job.start(): with job.start():
@ -330,12 +313,7 @@ class Job(CoreSysAttributes):
await async_capture_exception(err) await async_capture_exception(err)
raise JobException() from err raise JobException() from err
finally: finally:
self._release_exception_limits() self._release_concurrency_control(job_group)
if job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
):
job_group.release()
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required # Jobs that weren't started are always cleaned up. Also clean up done jobs if required
finally: finally:
@@ -477,31 +455,75 @@ class Job(CoreSysAttributes):
                f"'{method_name}' blocked from execution, mounting not supported on system"
            )

-    async def _acquire_exection_limit(self) -> None:
-        """Process exection limits."""
-        if self.limit not in (
-            JobExecutionLimit.SINGLE_WAIT,
-            JobExecutionLimit.ONCE,
-            JobExecutionLimit.THROTTLE_WAIT,
-            JobExecutionLimit.GROUP_THROTTLE_WAIT,
-        ):
-            return
-
-        if self.limit == JobExecutionLimit.ONCE and self.lock.locked():
-            on_condition = (
-                JobException if self.on_condition is None else self.on_condition
-            )
-            raise on_condition("Another job is running")
-
-        await self.lock.acquire()
-
-    def _release_exception_limits(self) -> None:
-        """Release possible exception limits."""
-        if self.limit not in (
-            JobExecutionLimit.SINGLE_WAIT,
-            JobExecutionLimit.ONCE,
-            JobExecutionLimit.THROTTLE_WAIT,
-            JobExecutionLimit.GROUP_THROTTLE_WAIT,
-        ):
-            return
-        self.lock.release()
+    def _release_concurrency_control(self, job_group: JobGroup | None) -> None:
+        """Release concurrency control locks."""
+        if self.concurrency == JobConcurrency.REJECT:
+            if self.lock.locked():
+                self.lock.release()
+        elif self.concurrency == JobConcurrency.QUEUE:
+            if self.lock.locked():
+                self.lock.release()
+        elif self.concurrency in (
+            JobConcurrency.GROUP_REJECT,
+            JobConcurrency.GROUP_QUEUE,
+        ):
+            if job_group and job_group.has_lock:
+                job_group.release()
+
+    async def _handle_concurrency_control(
+        self, job_group: JobGroup | None, job: SupervisorJob
+    ) -> None:
+        """Handle concurrency control limits."""
+        if self.concurrency == JobConcurrency.REJECT:
+            if self.lock.locked():
+                on_condition = (
+                    JobException if self.on_condition is None else self.on_condition
+                )
+                raise on_condition("Another job is running")
+            await self.lock.acquire()
+        elif self.concurrency == JobConcurrency.QUEUE:
+            await self.lock.acquire()
+        elif self.concurrency == JobConcurrency.GROUP_REJECT:
+            try:
+                await cast(JobGroup, job_group).acquire(job, wait=False)
+            except JobGroupExecutionLimitExceeded as err:
+                if self.on_condition:
+                    raise self.on_condition(str(err)) from err
+                raise err
+        elif self.concurrency == JobConcurrency.GROUP_QUEUE:
+            try:
+                await cast(JobGroup, job_group).acquire(job, wait=True)
+            except JobGroupExecutionLimitExceeded as err:
+                if self.on_condition:
+                    raise self.on_condition(str(err)) from err
+                raise err
+
+    async def _handle_throttling(self, group_name: str | None) -> bool:
+        """Handle throttling limits. Returns True if job should continue, False if throttled."""
+        if self.throttle in (JobThrottle.THROTTLE, JobThrottle.GROUP_THROTTLE):
+            time_since_last_call = datetime.now() - self.last_call(group_name)
+            throttle_period = self.throttle_period(group_name)
+            if time_since_last_call < throttle_period:
+                # Always return False when throttled (skip execution)
+                return False
+        elif self.throttle in (JobThrottle.RATE_LIMIT, JobThrottle.GROUP_RATE_LIMIT):
+            # Only reprocess array when necessary (at limit)
+            if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
+                self.set_rate_limited_calls(
+                    [
+                        call
+                        for call in self.rate_limited_calls(group_name)
+                        if call > datetime.now() - self.throttle_period(group_name)
+                    ],
+                    group_name,
+                )
+            if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
+                on_condition = (
+                    JobException if self.on_condition is None else self.on_condition
+                )
+                raise on_condition(
+                    f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
+                )
+        return True
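The net effect of this refactor is that a job's concurrency policy (REJECT or QUEUE, with GROUP_ variants) and its throttling policy (THROTTLE or RATE_LIMIT, with GROUP_ variants) are now configured independently. A self-contained sketch of the two non-group concurrency modes, using a plain asyncio.Lock and a stand-in JobError instead of the real Job machinery (names here are illustrative only):

import asyncio


class JobError(Exception):
    """Stand-in for Supervisor's JobException (simplified assumption)."""


async def run_with_concurrency(lock: asyncio.Lock, mode: str, job):
    """Run job() under REJECT or QUEUE semantics, mirroring the logic above."""
    if mode == "reject":
        if lock.locked():
            raise JobError("Another job is running")
        await lock.acquire()
    elif mode == "queue":
        await lock.acquire()
    try:
        return await job()
    finally:
        if lock.locked():
            lock.release()


async def main() -> None:
    lock = asyncio.Lock()

    async def slow_job() -> str:
        await asyncio.sleep(0.1)
        return "done"

    first = asyncio.create_task(run_with_concurrency(lock, "queue", slow_job))
    await asyncio.sleep(0)  # let the first job take the lock
    try:
        await run_with_concurrency(lock, "reject", slow_job)
    except JobError as err:
        print("rejected:", err)  # second job fails fast instead of waiting
    print(await first)  # -> done


asyncio.run(main())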


@ -15,7 +15,8 @@ from ..exceptions import (
ObserverError, ObserverError,
) )
from ..homeassistant.const import LANDINGPAGE, WSType from ..homeassistant.const import LANDINGPAGE, WSType
from ..jobs.decorator import Job, JobCondition, JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..plugins.const import PLUGIN_UPDATE_CONDITIONS from ..plugins.const import PLUGIN_UPDATE_CONDITIONS
from ..utils.dt import utcnow from ..utils.dt import utcnow
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
@ -160,7 +161,7 @@ class Tasks(CoreSysAttributes):
JobCondition.INTERNET_HOST, JobCondition.INTERNET_HOST,
JobCondition.RUNNING, JobCondition.RUNNING,
], ],
limit=JobExecutionLimit.ONCE, concurrency=JobConcurrency.REJECT,
) )
async def _update_supervisor(self): async def _update_supervisor(self):
"""Check and run update of Supervisor Supervisor.""" """Check and run update of Supervisor Supervisor."""


@@ -164,10 +164,14 @@ class Mount(CoreSysAttributes, ABC):
        """Return true if successfully mounted and available."""
        return self.state == UnitActiveState.ACTIVE

-    def __eq__(self, other):
+    def __eq__(self, other: object) -> bool:
        """Return true if mounts are the same."""
        return isinstance(other, Mount) and self.name == other.name

+    def __hash__(self) -> int:
+        """Return hash of mount."""
+        return hash(self.name)
+
    async def load(self) -> None:
        """Initialize object."""
        # If there's no mount unit, mount it to make one
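Adding __hash__ alongside __eq__ keeps Mount usable in sets and as dict keys (defining __eq__ alone sets __hash__ to None). A minimal standalone sketch of the contract, using a hypothetical class rather than the real Mount:

class NamedThing:
    """Illustrative stand-in: equality and hash both derive from the name."""

    def __init__(self, name: str) -> None:
        self.name = name

    def __eq__(self, other: object) -> bool:
        return isinstance(other, NamedThing) and self.name == other.name

    def __hash__(self) -> int:
        return hash(self.name)


mounts = {NamedThing("backup"), NamedThing("backup"), NamedThing("media")}
assert len(mounts) == 2  # duplicates collapse because eq and hash agree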


@ -22,7 +22,7 @@ from ..exceptions import (
HassOSJobError, HassOSJobError,
HostError, HostError,
) )
from ..jobs.const import JobCondition, JobExecutionLimit from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.checks.base import CheckBase from ..resolution.checks.base import CheckBase
from ..resolution.checks.disabled_data_disk import CheckDisabledDataDisk from ..resolution.checks.disabled_data_disk import CheckDisabledDataDisk
@ -205,8 +205,8 @@ class DataDisk(CoreSysAttributes):
@Job( @Job(
name="data_disk_migrate", name="data_disk_migrate",
conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY], conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY],
limit=JobExecutionLimit.ONCE,
on_condition=HassOSJobError, on_condition=HassOSJobError,
concurrency=JobConcurrency.REJECT,
) )
async def migrate_disk(self, new_disk: str) -> None: async def migrate_disk(self, new_disk: str) -> None:
"""Move data partition to a new disk.""" """Move data partition to a new disk."""
@ -305,8 +305,8 @@ class DataDisk(CoreSysAttributes):
@Job( @Job(
name="data_disk_wipe", name="data_disk_wipe",
conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY], conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY],
limit=JobExecutionLimit.ONCE,
on_condition=HassOSJobError, on_condition=HassOSJobError,
concurrency=JobConcurrency.REJECT,
) )
async def wipe_disk(self) -> None: async def wipe_disk(self) -> None:
"""Wipe the current data disk.""" """Wipe the current data disk."""


@ -21,7 +21,7 @@ from ..exceptions import (
HassOSSlotUpdateError, HassOSSlotUpdateError,
HassOSUpdateError, HassOSUpdateError,
) )
from ..jobs.const import JobCondition, JobExecutionLimit from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason from ..resolution.const import UnhealthyReason
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
@ -272,12 +272,13 @@ class OSManager(CoreSysAttributes):
name="os_manager_update", name="os_manager_update",
conditions=[ conditions=[
JobCondition.HAOS, JobCondition.HAOS,
JobCondition.HEALTHY,
JobCondition.INTERNET_SYSTEM, JobCondition.INTERNET_SYSTEM,
JobCondition.RUNNING, JobCondition.RUNNING,
JobCondition.SUPERVISOR_UPDATED, JobCondition.SUPERVISOR_UPDATED,
], ],
limit=JobExecutionLimit.ONCE,
on_condition=HassOSJobError, on_condition=HassOSJobError,
concurrency=JobConcurrency.REJECT,
) )
async def update(self, version: AwesomeVersion | None = None) -> None: async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update HassOS system.""" """Update HassOS system."""


@ -22,8 +22,9 @@ from ..exceptions import (
AudioUpdateError, AudioUpdateError,
ConfigurationFileError, ConfigurationFileError,
DockerError, DockerError,
PluginError,
) )
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason from ..resolution.const import UnhealthyReason
from ..utils.json import write_json_file from ..utils.json import write_json_file
@ -127,7 +128,7 @@ class PluginAudio(PluginBase):
"""Update Audio plugin.""" """Update Audio plugin."""
try: try:
await super().update(version) await super().update(version)
except DockerError as err: except (DockerError, PluginError) as err:
raise AudioUpdateError("Audio update failed", _LOGGER.error) from err raise AudioUpdateError("Audio update failed", _LOGGER.error) from err
async def restart(self) -> None: async def restart(self) -> None:
@ -204,10 +205,10 @@ class PluginAudio(PluginBase):
@Job( @Job(
name="plugin_audio_restart_after_problem", name="plugin_audio_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=AudioJobError, on_condition=AudioJobError,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed plugin.""" """Restart unhealthy or failed plugin."""


@ -168,14 +168,14 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
# Check plugin state # Check plugin state
try: try:
# Evaluate Version if we lost this information # Evaluate Version if we lost this information
if not self.version: if self.version:
self.version = await self.instance.get_latest_version() version = self.version
else:
self.version = version = await self.instance.get_latest_version()
await self.instance.attach( await self.instance.attach(version=version, skip_state_event_if_down=True)
version=self.version, skip_state_event_if_down=True
)
await self.instance.check_image(self.version, self.default_image) await self.instance.check_image(version, self.default_image)
except DockerError: except DockerError:
_LOGGER.info( _LOGGER.info(
"No %s plugin Docker image %s found.", self.slug, self.instance.image "No %s plugin Docker image %s found.", self.slug, self.instance.image
@ -185,7 +185,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
with suppress(PluginError): with suppress(PluginError):
await self.install() await self.install()
else: else:
self.version = self.instance.version self.version = self.instance.version or version
self.image = self.default_image self.image = self.default_image
await self.save_data() await self.save_data()
@ -202,11 +202,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
if not self.latest_version: if not self.latest_version:
await self.sys_updater.reload() await self.sys_updater.reload()
if self.latest_version: if to_version := self.latest_version:
with suppress(DockerError): with suppress(DockerError):
await self.instance.install( await self.instance.install(to_version, image=self.default_image)
self.latest_version, image=self.default_image self.version = self.instance.version or to_version
)
break break
_LOGGER.warning( _LOGGER.warning(
"Error on installing %s plugin, retrying in 30sec", self.slug "Error on installing %s plugin, retrying in 30sec", self.slug
@ -214,23 +213,28 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
await asyncio.sleep(30) await asyncio.sleep(30)
_LOGGER.info("%s plugin now installed", self.slug) _LOGGER.info("%s plugin now installed", self.slug)
self.version = self.instance.version
self.image = self.default_image self.image = self.default_image
await self.save_data() await self.save_data()
async def update(self, version: str | None = None) -> None: async def update(self, version: str | None = None) -> None:
"""Update system plugin.""" """Update system plugin."""
version = version or self.latest_version to_version = AwesomeVersion(version) if version else self.latest_version
if not to_version:
raise PluginError(
f"Cannot determine latest version of plugin {self.slug} for update",
_LOGGER.error,
)
old_image = self.image old_image = self.image
if version == self.version: if to_version == self.version:
_LOGGER.warning( _LOGGER.warning(
"Version %s is already installed for %s", version, self.slug "Version %s is already installed for %s", to_version, self.slug
) )
return return
await self.instance.update(version, image=self.default_image) await self.instance.update(to_version, image=self.default_image)
self.version = self.instance.version self.version = self.instance.version or to_version
self.image = self.default_image self.image = self.default_image
await self.save_data() await self.save_data()
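The update path now normalizes the requested version to an AwesomeVersion before comparing it with the installed one, so a caller-supplied string cannot slip past the "already installed" check. A small sketch of that normalization, assuming only the awesomeversion library and made-up version numbers:

from awesomeversion import AwesomeVersion

installed = AwesomeVersion("2025.8.0")
requested: str | None = "2025.8.0"  # e.g. value passed into update()

to_version = AwesomeVersion(requested) if requested else installed
if to_version == installed:
    print(f"Version {to_version} is already installed")  # update is skipped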


@ -6,7 +6,6 @@ Code: https://github.com/home-assistant/plugin-cli
from collections.abc import Awaitable from collections.abc import Awaitable
import logging import logging
import secrets import secrets
from typing import cast
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
@ -15,8 +14,8 @@ from ..coresys import CoreSys
from ..docker.cli import DockerCli from ..docker.cli import DockerCli
from ..docker.const import ContainerState from ..docker.const import ContainerState
from ..docker.stats import DockerStats from ..docker.stats import DockerStats
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from .base import PluginBase from .base import PluginBase
@ -54,9 +53,9 @@ class PluginCli(PluginBase):
return self.sys_updater.version_cli return self.sys_updater.version_cli
@property @property
def supervisor_token(self) -> str: def supervisor_token(self) -> str | None:
"""Return an access token for the Supervisor API.""" """Return an access token for the Supervisor API."""
return cast(str, self._data[ATTR_ACCESS_TOKEN]) return self._data.get(ATTR_ACCESS_TOKEN)
@Job( @Job(
name="plugin_cli_update", name="plugin_cli_update",
@ -67,7 +66,7 @@ class PluginCli(PluginBase):
"""Update local HA cli.""" """Update local HA cli."""
try: try:
await super().update(version) await super().update(version)
except DockerError as err: except (DockerError, PluginError) as err:
raise CliUpdateError("CLI update failed", _LOGGER.error) from err raise CliUpdateError("CLI update failed", _LOGGER.error) from err
async def start(self) -> None: async def start(self) -> None:
@ -119,10 +118,10 @@ class PluginCli(PluginBase):
@Job( @Job(
name="plugin_cli_restart_after_problem", name="plugin_cli_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=CliJobError, on_condition=CliJobError,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed plugin.""" """Restart unhealthy or failed plugin."""


@ -15,7 +15,8 @@ from awesomeversion import AwesomeVersion
import jinja2 import jinja2
import voluptuous as vol import voluptuous as vol
from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel from ..bus import EventListener
from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
from ..coresys import CoreSys from ..coresys import CoreSys
from ..dbus.const import MulticastProtocolEnabled from ..dbus.const import MulticastProtocolEnabled
from ..docker.const import ContainerState from ..docker.const import ContainerState
@ -28,8 +29,9 @@ from ..exceptions import (
CoreDNSJobError, CoreDNSJobError,
CoreDNSUpdateError, CoreDNSUpdateError,
DockerError, DockerError,
PluginError,
) )
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.json import write_json_file from ..utils.json import write_json_file
@ -76,6 +78,12 @@ class PluginDns(PluginBase):
self._hosts: list[HostEntry] = [] self._hosts: list[HostEntry] = []
self._loop: bool = False self._loop: bool = False
self._cached_locals: list[str] | None = None
# Debouncing system for rapid local changes
self._locals_changed_handle: asyncio.TimerHandle | None = None
self._restart_after_locals_change_handle: asyncio.Task | None = None
self._connectivity_check_listener: EventListener | None = None
@property @property
def hosts(self) -> Path: def hosts(self) -> Path:
@ -90,6 +98,12 @@ class PluginDns(PluginBase):
@property @property
def locals(self) -> list[str]: def locals(self) -> list[str]:
"""Return list of local system DNS servers.""" """Return list of local system DNS servers."""
if self._cached_locals is None:
self._cached_locals = self._compute_locals()
return self._cached_locals
def _compute_locals(self) -> list[str]:
"""Compute list of local system DNS servers."""
servers: list[str] = [] servers: list[str] = []
for server in [ for server in [
f"dns://{server!s}" for server in self.sys_host.network.dns_servers f"dns://{server!s}" for server in self.sys_host.network.dns_servers
@ -99,6 +113,52 @@ class PluginDns(PluginBase):
return servers return servers
async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
"""Handle DNS container state change to running and trigger connectivity check."""
if event.name == self.instance.name and event.state == ContainerState.RUNNING:
# Wait before CoreDNS actually becomes available
await asyncio.sleep(5)
_LOGGER.debug("CoreDNS started, checking connectivity")
await self.sys_supervisor.check_connectivity()
async def _restart_dns_after_locals_change(self) -> None:
"""Restart DNS after a debounced delay for local changes."""
old_locals = self._cached_locals
new_locals = self._compute_locals()
if old_locals == new_locals:
return
_LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
self._cached_locals = new_locals
if not await self.instance.is_running():
return
await self.restart()
self._restart_after_locals_change_handle = None
def _trigger_restart_dns_after_locals_change(self) -> None:
"""Trigger a restart of DNS after local changes."""
# Cancel existing restart task if any
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = self.sys_create_task(
self._restart_dns_after_locals_change()
)
self._locals_changed_handle = None
def notify_locals_changed(self) -> None:
"""Schedule a debounced DNS restart for local changes."""
# Cancel existing timer if any
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
# Schedule new timer with 1 second delay
self._locals_changed_handle = self.sys_call_later(
1.0, self._trigger_restart_dns_after_locals_change
)
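The debounce pattern above (cancel any pending timer, then reschedule with a one second delay) collapses a burst of DNS server changes into a single restart. A self-contained sketch of the same pattern outside Supervisor, with made-up names and a print in place of the actual restart:

import asyncio


class Debouncer:
    """Illustrative stand-in for the locals-changed debouncing above."""

    def __init__(self, delay: float = 1.0) -> None:
        self._delay = delay
        self._handle: asyncio.TimerHandle | None = None

    def notify_changed(self) -> None:
        if self._handle:
            self._handle.cancel()  # a newer change supersedes the pending one
        loop = asyncio.get_running_loop()
        self._handle = loop.call_later(self._delay, self._restart)

    def _restart(self) -> None:
        self._handle = None
        print("restarting DNS plugin (once per burst of changes)")


async def main() -> None:
    debouncer = Debouncer(delay=0.2)
    for _ in range(5):          # rapid burst of changes
        debouncer.notify_changed()
        await asyncio.sleep(0.05)
    await asyncio.sleep(0.5)    # only a single restart fires


asyncio.run(main())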
@property @property
def servers(self) -> list[str]: def servers(self) -> list[str]:
"""Return list of DNS servers.""" """Return list of DNS servers."""
@ -187,6 +247,13 @@ class PluginDns(PluginBase):
_LOGGER.error("Can't read hosts.tmpl: %s", err) _LOGGER.error("Can't read hosts.tmpl: %s", err)
await self._init_hosts() await self._init_hosts()
# Register Docker event listener for connectivity checks
if not self._connectivity_check_listener:
self._connectivity_check_listener = self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
)
await super().load() await super().load()
# Update supervisor # Update supervisor
@ -217,7 +284,7 @@ class PluginDns(PluginBase):
"""Update CoreDNS plugin.""" """Update CoreDNS plugin."""
try: try:
await super().update(version) await super().update(version)
except DockerError as err: except (DockerError, PluginError) as err:
raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err
async def restart(self) -> None: async def restart(self) -> None:
@ -242,6 +309,16 @@ class PluginDns(PluginBase):
async def stop(self) -> None: async def stop(self) -> None:
"""Stop CoreDNS.""" """Stop CoreDNS."""
# Cancel any pending locals change timer
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
self._locals_changed_handle = None
# Wait for any pending restart before stopping
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = None
_LOGGER.info("Stopping CoreDNS plugin") _LOGGER.info("Stopping CoreDNS plugin")
try: try:
await self.instance.stop() await self.instance.stop()
@ -274,10 +351,10 @@ class PluginDns(PluginBase):
@Job( @Job(
name="plugin_dns_restart_after_problem", name="plugin_dns_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=CoreDNSJobError, on_condition=CoreDNSJobError,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed plugin.""" """Restart unhealthy or failed plugin."""


@ -16,8 +16,9 @@ from ..exceptions import (
MulticastError, MulticastError,
MulticastJobError, MulticastJobError,
MulticastUpdateError, MulticastUpdateError,
PluginError,
) )
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from .base import PluginBase from .base import PluginBase
@ -63,7 +64,7 @@ class PluginMulticast(PluginBase):
"""Update Multicast plugin.""" """Update Multicast plugin."""
try: try:
await super().update(version) await super().update(version)
except DockerError as err: except (DockerError, PluginError) as err:
raise MulticastUpdateError( raise MulticastUpdateError(
"Multicast update failed", _LOGGER.error "Multicast update failed", _LOGGER.error
) from err ) from err
@ -113,10 +114,10 @@ class PluginMulticast(PluginBase):
@Job( @Job(
name="plugin_multicast_restart_after_problem", name="plugin_multicast_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=MulticastJobError, on_condition=MulticastJobError,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed plugin.""" """Restart unhealthy or failed plugin."""


@ -5,7 +5,6 @@ Code: https://github.com/home-assistant/plugin-observer
import logging import logging
import secrets import secrets
from typing import cast
import aiohttp import aiohttp
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
@ -20,8 +19,9 @@ from ..exceptions import (
ObserverError, ObserverError,
ObserverJobError, ObserverJobError,
ObserverUpdateError, ObserverUpdateError,
PluginError,
) )
from ..jobs.const import JobExecutionLimit from ..jobs.const import JobThrottle
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from .base import PluginBase from .base import PluginBase
@ -59,9 +59,9 @@ class PluginObserver(PluginBase):
return self.sys_updater.version_observer return self.sys_updater.version_observer
@property @property
def supervisor_token(self) -> str: def supervisor_token(self) -> str | None:
"""Return an access token for the Observer API.""" """Return an access token for the Observer API."""
return cast(str, self._data[ATTR_ACCESS_TOKEN]) return self._data.get(ATTR_ACCESS_TOKEN)
@Job( @Job(
name="plugin_observer_update", name="plugin_observer_update",
@ -72,7 +72,7 @@ class PluginObserver(PluginBase):
"""Update local HA observer.""" """Update local HA observer."""
try: try:
await super().update(version) await super().update(version)
except DockerError as err: except (DockerError, PluginError) as err:
raise ObserverUpdateError( raise ObserverUpdateError(
"HA observer update failed", _LOGGER.error "HA observer update failed", _LOGGER.error
) from err ) from err
@ -130,10 +130,10 @@ class PluginObserver(PluginBase):
@Job( @Job(
name="plugin_observer_restart_after_problem", name="plugin_observer_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD, throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS, throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=ObserverJobError, on_condition=ObserverJobError,
throttle=JobThrottle.RATE_LIMIT,
) )
async def _restart_after_problem(self, state: ContainerState): async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed plugin.""" """Restart unhealthy or failed plugin."""


@ -6,7 +6,7 @@ import logging
from ...const import AddonState, CoreState from ...const import AddonState, CoreState
from ...coresys import CoreSys from ...coresys import CoreSys
from ...exceptions import PwnedConnectivityError, PwnedError, PwnedSecret from ...exceptions import PwnedConnectivityError, PwnedError, PwnedSecret
from ...jobs.const import JobCondition, JobExecutionLimit from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job from ...jobs.decorator import Job
from ..const import ContextType, IssueType, SuggestionType from ..const import ContextType, IssueType, SuggestionType
from .base import CheckBase from .base import CheckBase
@ -25,8 +25,8 @@ class CheckAddonPwned(CheckBase):
@Job( @Job(
name="check_addon_pwned_run", name="check_addon_pwned_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
throttle=JobThrottle.THROTTLE,
) )
async def run_check(self) -> None: async def run_check(self) -> None:
"""Run check if not affected by issue.""" """Run check if not affected by issue."""


@ -9,7 +9,7 @@ from aiodns.error import DNSError
from ...const import CoreState from ...const import CoreState
from ...coresys import CoreSys from ...coresys import CoreSys
from ...jobs.const import JobCondition, JobExecutionLimit from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job from ...jobs.decorator import Job
from ...utils.sentry import async_capture_exception from ...utils.sentry import async_capture_exception
from ..const import DNS_CHECK_HOST, ContextType, IssueType from ..const import DNS_CHECK_HOST, ContextType, IssueType
@ -21,17 +21,8 @@ async def check_server(
) -> None: ) -> None:
"""Check a DNS server and report issues.""" """Check a DNS server and report issues."""
ip_addr = server[6:] if server.startswith("dns://") else server ip_addr = server[6:] if server.startswith("dns://") else server
resolver = DNSResolver(loop=loop, nameservers=[ip_addr]) async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
try:
await resolver.query(DNS_CHECK_HOST, qtype) await resolver.query(DNS_CHECK_HOST, qtype)
finally:
def _delete_resolver():
"""Close resolver to avoid memory leaks."""
nonlocal resolver
del resolver
loop.call_later(1, _delete_resolver)
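The check now relies on DNSResolver acting as an async context manager instead of manually deleting the resolver after a delay. A usage sketch of that pattern, assuming an aiodns release that supports the async context-manager protocol (as the updated code requires); the hostname and nameserver below are placeholders, not the real DNS_CHECK_HOST:

import asyncio

from aiodns import DNSResolver


async def check(server_ip: str) -> None:
    # "example.com" stands in for the constant the real check queries
    async with DNSResolver(nameservers=[server_ip]) as resolver:
        print(await resolver.query("example.com", "A"))


asyncio.run(check("1.1.1.1"))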
def setup(coresys: CoreSys) -> CheckBase: def setup(coresys: CoreSys) -> CheckBase:
@ -45,8 +36,8 @@ class CheckDNSServer(CheckBase):
@Job( @Job(
name="check_dns_server_run", name="check_dns_server_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
throttle=JobThrottle.THROTTLE,
) )
async def run_check(self) -> None: async def run_check(self) -> None:
"""Run check if not affected by issue.""" """Run check if not affected by issue."""


@ -7,7 +7,7 @@ from aiodns.error import DNSError
from ...const import CoreState from ...const import CoreState
from ...coresys import CoreSys from ...coresys import CoreSys
from ...jobs.const import JobCondition, JobExecutionLimit from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job from ...jobs.decorator import Job
from ...utils.sentry import async_capture_exception from ...utils.sentry import async_capture_exception
from ..const import DNS_ERROR_NO_DATA, ContextType, IssueType from ..const import DNS_ERROR_NO_DATA, ContextType, IssueType
@ -26,8 +26,8 @@ class CheckDNSServerIPv6(CheckBase):
@Job( @Job(
name="check_dns_server_ipv6_run", name="check_dns_server_ipv6_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
throttle=JobThrottle.THROTTLE,
) )
async def run_check(self) -> None: async def run_check(self) -> None:
"""Run check if not affected by issue.""" """Run check if not affected by issue."""


@ -0,0 +1,108 @@
"""Helpers to check for duplicate OS installations."""
import logging
from ...const import CoreState
from ...coresys import CoreSys
from ...dbus.udisks2.data import DeviceSpecification
from ..const import ContextType, IssueType, UnhealthyReason
from .base import CheckBase
_LOGGER: logging.Logger = logging.getLogger(__name__)
# Partition labels to check for duplicates (GPT-based installations)
HAOS_PARTITIONS = [
"hassos-boot",
"hassos-kernel0",
"hassos-kernel1",
"hassos-system0",
"hassos-system1",
]
# Partition UUIDs to check for duplicates (MBR-based installations)
HAOS_PARTITION_UUIDS = [
"48617373-01", # hassos-boot
"48617373-05", # hassos-kernel0
"48617373-06", # hassos-system0
"48617373-07", # hassos-kernel1
"48617373-08", # hassos-system1
]
def _get_device_specifications():
"""Generate DeviceSpecification objects for both GPT and MBR partitions."""
# GPT-based installations (partition labels)
for partition_label in HAOS_PARTITIONS:
yield (
DeviceSpecification(partlabel=partition_label),
"partition",
partition_label,
)
# MBR-based installations (partition UUIDs)
for partition_uuid in HAOS_PARTITION_UUIDS:
yield (
DeviceSpecification(partuuid=partition_uuid),
"partition UUID",
partition_uuid,
)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckDuplicateOSInstallation(coresys)
class CheckDuplicateOSInstallation(CheckBase):
"""CheckDuplicateOSInstallation class for check."""
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_os.available:
_LOGGER.debug(
"Skipping duplicate OS installation check, OS is not available"
)
return
for device_spec, spec_type, identifier in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
_LOGGER.warning(
"Found duplicate OS installation: %s %s exists on %d devices (%s)",
identifier,
spec_type,
len(resolved),
", ".join(str(device.device) for device in resolved),
)
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.DUPLICATE_OS_INSTALLATION
)
self.sys_resolution.create_issue(
IssueType.DUPLICATE_OS_INSTALLATION,
ContextType.SYSTEM,
)
return
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
# Check all partitions for duplicates since issue is created without reference
for device_spec, _, _ in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
return True
return False
@property
def issue(self) -> IssueType:
"""Return a IssueType enum."""
return IssueType.DUPLICATE_OS_INSTALLATION
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.SYSTEM
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this check can run."""
return [CoreState.SETUP]


@ -21,6 +21,9 @@ class CheckMultipleDataDisks(CheckBase):
async def run_check(self) -> None: async def run_check(self) -> None:
"""Run check if not affected by issue.""" """Run check if not affected by issue."""
if not self.sys_os.available:
return
for block_device in self.sys_dbus.udisks2.block_devices: for block_device in self.sys_dbus.udisks2.block_devices:
if self._block_device_has_name_issue(block_device): if self._block_device_has_name_issue(block_device):
self.sys_resolution.create_issue( self.sys_resolution.create_issue(


@ -19,12 +19,12 @@ class CheckNetworkInterfaceIPV4(CheckBase):
async def run_check(self) -> None: async def run_check(self) -> None:
"""Run check if not affected by issue.""" """Run check if not affected by issue."""
for interface in self.sys_dbus.network.interfaces: for inet in self.sys_dbus.network.interfaces:
if CheckNetworkInterfaceIPV4.check_interface(interface): if CheckNetworkInterfaceIPV4.check_interface(inet):
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.IPV4_CONNECTION_PROBLEM, IssueType.IPV4_CONNECTION_PROBLEM,
ContextType.SYSTEM, ContextType.SYSTEM,
interface.name, inet.interface_name,
) )
async def approve_check(self, reference: str | None = None) -> bool: async def approve_check(self, reference: str | None = None) -> bool:


@ -49,6 +49,7 @@ class UnsupportedReason(StrEnum):
NETWORK_MANAGER = "network_manager" NETWORK_MANAGER = "network_manager"
OS = "os" OS = "os"
OS_AGENT = "os_agent" OS_AGENT = "os_agent"
OS_VERSION = "os_version"
PRIVILEGED = "privileged" PRIVILEGED = "privileged"
RESTART_POLICY = "restart_policy" RESTART_POLICY = "restart_policy"
SOFTWARE = "software" SOFTWARE = "software"
@ -64,10 +65,11 @@ class UnhealthyReason(StrEnum):
"""Reasons for unsupported status.""" """Reasons for unsupported status."""
DOCKER = "docker" DOCKER = "docker"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
OSERROR_BAD_MESSAGE = "oserror_bad_message" OSERROR_BAD_MESSAGE = "oserror_bad_message"
PRIVILEGED = "privileged" PRIVILEGED = "privileged"
SUPERVISOR = "supervisor"
SETUP = "setup" SETUP = "setup"
SUPERVISOR = "supervisor"
UNTRUSTED = "untrusted" UNTRUSTED = "untrusted"
@ -83,6 +85,7 @@ class IssueType(StrEnum):
DEVICE_ACCESS_MISSING = "device_access_missing" DEVICE_ACCESS_MISSING = "device_access_missing"
DISABLED_DATA_DISK = "disabled_data_disk" DISABLED_DATA_DISK = "disabled_data_disk"
DNS_LOOP = "dns_loop" DNS_LOOP = "dns_loop"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
DNS_SERVER_FAILED = "dns_server_failed" DNS_SERVER_FAILED = "dns_server_failed"
DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error" DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error"
DOCKER_CONFIG = "docker_config" DOCKER_CONFIG = "docker_config"


@ -5,6 +5,8 @@ import logging
from docker.errors import DockerException from docker.errors import DockerException
from requests import RequestException from requests import RequestException
from supervisor.docker.const import ADDON_BUILDER_IMAGE
from ...const import CoreState from ...const import CoreState
from ...coresys import CoreSys from ...coresys import CoreSys
from ..const import ( from ..const import (
@ -63,6 +65,7 @@ class EvaluateContainer(EvaluateBase):
self.sys_supervisor.image or self.sys_supervisor.default_image, self.sys_supervisor.image or self.sys_supervisor.default_image,
*(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image), *(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image),
*(addon.image for addon in self.sys_addons.installed if addon.image), *(addon.image for addon in self.sys_addons.installed if addon.image),
ADDON_BUILDER_IMAGE,
} }
async def evaluate(self) -> bool: async def evaluate(self) -> bool:


@ -0,0 +1,51 @@
"""Evaluation class for OS version."""
from awesomeversion import AwesomeVersion, AwesomeVersionException
from ...const import CoreState
from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase
def setup(coresys: CoreSys) -> EvaluateBase:
"""Initialize evaluation-setup function."""
return EvaluateOSVersion(coresys)
class EvaluateOSVersion(EvaluateBase):
"""Evaluate the OS version."""
@property
def reason(self) -> UnsupportedReason:
"""Return a UnsupportedReason enum."""
return UnsupportedReason.OS_VERSION
@property
def on_failure(self) -> str:
"""Return a string that is printed when self.evaluate is True."""
return f"OS version '{self.sys_os.version}' is more than 4 versions behind the latest '{self.sys_os.latest_version}'!"
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this evaluation can run."""
# Technically there's no reason to run this after STARTUP as update requires
# a reboot. But if network is down we won't have latest version info then.
return [CoreState.RUNNING, CoreState.STARTUP]
async def evaluate(self) -> bool:
"""Run evaluation."""
if (
not self.sys_os.available
or not (current := self.sys_os.version)
or not (latest := self.sys_os.latest_version)
or not latest.major
):
return False
# If current is more than 4 major versions behind latest, mark as unsupported
last_supported_version = AwesomeVersion(f"{int(latest.major) - 4}.0")
try:
return current < last_supported_version
except AwesomeVersionException:
return True
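A worked example of the "more than four major versions behind" rule, using awesomeversion directly with made-up version numbers:

from awesomeversion import AwesomeVersion

current = AwesomeVersion("11.5")
latest = AwesomeVersion("16.0")

last_supported = AwesomeVersion(f"{int(latest.major) - 4}.0")  # -> 12.0
print(current < last_supported)  # True -> flagged as unsupported os_version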


@ -3,6 +3,7 @@
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
import logging import logging
from ...const import BusEvent
from ...coresys import CoreSys, CoreSysAttributes from ...coresys import CoreSys, CoreSysAttributes
from ...exceptions import ResolutionFixupError from ...exceptions import ResolutionFixupError
from ..const import ContextType, IssueType, SuggestionType from ..const import ContextType, IssueType, SuggestionType
@ -66,6 +67,11 @@ class FixupBase(ABC, CoreSysAttributes):
"""Return if a fixup can be apply as auto fix.""" """Return if a fixup can be apply as auto fix."""
return False return False
@property
def bus_event(self) -> BusEvent | None:
"""Return the BusEvent that triggers this fixup, or None if not event-based."""
return None
@property @property
def all_suggestions(self) -> list[Suggestion]: def all_suggestions(self) -> list[Suggestion]:
"""List of all suggestions which when applied run this fixup.""" """List of all suggestions which when applied run this fixup."""


@ -2,6 +2,7 @@
import logging import logging
from ...const import BusEvent
from ...coresys import CoreSys from ...coresys import CoreSys
from ...exceptions import ( from ...exceptions import (
ResolutionFixupError, ResolutionFixupError,
@ -68,3 +69,8 @@ class FixupStoreExecuteReload(FixupBase):
def auto(self) -> bool: def auto(self) -> bool:
"""Return if a fixup can be apply as auto fix.""" """Return if a fixup can be apply as auto fix."""
return True return True
@property
def bus_event(self) -> BusEvent | None:
"""Return the BusEvent that triggers this fixup, or None if not event-based."""
return BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE


@ -1,6 +1,5 @@
"""Helpers to check and fix issues with free space.""" """Helpers to check and fix issues with free space."""
from functools import partial
import logging import logging
from ...coresys import CoreSys from ...coresys import CoreSys
@ -12,7 +11,6 @@ from ...exceptions import (
) )
from ...jobs.const import JobCondition from ...jobs.const import JobCondition
from ...jobs.decorator import Job from ...jobs.decorator import Job
from ...utils import remove_folder
from ..const import ContextType, IssueType, SuggestionType from ..const import ContextType, IssueType, SuggestionType
from .base import FixupBase from .base import FixupBase
@ -44,15 +42,8 @@ class FixupStoreExecuteReset(FixupBase):
_LOGGER.warning("Can't find store %s for fixup", reference) _LOGGER.warning("Can't find store %s for fixup", reference)
return return
# Local add-ons are not a git repo, can't remove and re-pull
if repository.git:
await self.sys_run_in_executor(
partial(remove_folder, folder=repository.git.path, content_only=True)
)
# Load data again
try: try:
await repository.load() await repository.reset()
except StoreError: except StoreError:
raise ResolutionFixupError() from None raise ResolutionFixupError() from None


@ -5,7 +5,7 @@ import logging
from ...coresys import CoreSys from ...coresys import CoreSys
from ...exceptions import ResolutionFixupError, ResolutionFixupJobError from ...exceptions import ResolutionFixupError, ResolutionFixupJobError
from ...jobs.const import JobCondition, JobExecutionLimit from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job from ...jobs.decorator import Job
from ...security.const import ContentTrustResult from ...security.const import ContentTrustResult
from ..const import ContextType, IssueType, SuggestionType from ..const import ContextType, IssueType, SuggestionType
@ -26,8 +26,8 @@ class FixupSystemExecuteIntegrity(FixupBase):
name="fixup_system_execute_integrity_process", name="fixup_system_execute_integrity_process",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=ResolutionFixupJobError, on_condition=ResolutionFixupJobError,
limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=8), throttle_period=timedelta(hours=8),
throttle=JobThrottle.THROTTLE,
) )
async def process_fixup(self, reference: str | None = None) -> None: async def process_fixup(self, reference: str | None = None) -> None:
"""Initialize the fixup class.""" """Initialize the fixup class."""


@ -5,6 +5,7 @@ from typing import Any
import attr import attr
from ..bus import EventListener
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ResolutionError, ResolutionNotFound from ..exceptions import ResolutionError, ResolutionNotFound
from ..homeassistant.const import WSEvent from ..homeassistant.const import WSEvent
@ -46,6 +47,9 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
self._unsupported: list[UnsupportedReason] = [] self._unsupported: list[UnsupportedReason] = []
self._unhealthy: list[UnhealthyReason] = [] self._unhealthy: list[UnhealthyReason] = []
# Map suggestion UUID to event listeners (list)
self._suggestion_listeners: dict[str, list[EventListener]] = {}
async def load_modules(self): async def load_modules(self):
"""Load resolution evaluation, check and fixup modules.""" """Load resolution evaluation, check and fixup modules."""
@ -105,6 +109,19 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
) )
self._suggestions.append(suggestion) self._suggestions.append(suggestion)
# Register event listeners if fixups have a bus_event
listeners: list[EventListener] = []
for fixup in self.fixup.fixes_for_suggestion(suggestion):
if fixup.auto and fixup.bus_event:
def event_callback(reference, fixup=fixup):
return fixup(suggestion)
listener = self.sys_bus.register_event(fixup.bus_event, event_callback)
listeners.append(listener)
if listeners:
self._suggestion_listeners[suggestion.uuid] = listeners
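The callback above binds the loop variable through a default argument (fixup=fixup) on purpose. A generic Python sketch (not Supervisor-specific) of why that matters, since closures in a loop otherwise all see the final value:

def make_callbacks_broken(items):
    return [lambda: item for item in items]            # late binding

def make_callbacks_fixed(items):
    return [lambda item=item: item for item in items]  # bound at definition time

print([cb() for cb in make_callbacks_broken(["a", "b", "c"])])  # ['c', 'c', 'c']
print([cb() for cb in make_callbacks_fixed(["a", "b", "c"])])   # ['a', 'b', 'c']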
# Event on suggestion added to issue # Event on suggestion added to issue
for issue in self.issues_for_suggestion(suggestion): for issue in self.issues_for_suggestion(suggestion):
self.sys_homeassistant.websocket.supervisor_event( self.sys_homeassistant.websocket.supervisor_event(
@ -233,6 +250,11 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
) )
self._suggestions.remove(suggestion) self._suggestions.remove(suggestion)
# Remove event listeners if present
listeners = self._suggestion_listeners.pop(suggestion.uuid, [])
for listener in listeners:
self.sys_bus.remove_listener(listener)
# Event on suggestion removed from issues # Event on suggestion removed from issues
for issue in self.issues_for_suggestion(suggestion): for issue in self.issues_for_suggestion(suggestion):
self.sys_homeassistant.websocket.supervisor_event( self.sys_homeassistant.websocket.supervisor_event(


@ -17,7 +17,8 @@ from ..exceptions import (
PwnedError, PwnedError,
SecurityJobError, SecurityJobError,
) )
from ..jobs.decorator import Job, JobCondition, JobExecutionLimit from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.codenotary import cas_validate from ..utils.codenotary import cas_validate
from ..utils.common import FileConfiguration from ..utils.common import FileConfiguration
@ -107,7 +108,7 @@ class Security(FileConfiguration, CoreSysAttributes):
name="security_manager_integrity_check", name="security_manager_integrity_check",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=SecurityJobError, on_condition=SecurityJobError,
limit=JobExecutionLimit.ONCE, concurrency=JobConcurrency.REJECT,
) )
async def integrity_check(self) -> IntegrityResult: async def integrity_check(self) -> IntegrityResult:
"""Run a full system integrity check of the platform. """Run a full system integrity check of the platform.


@ -4,7 +4,7 @@ import asyncio
from collections.abc import Awaitable from collections.abc import Awaitable
import logging import logging
from ..const import ATTR_REPOSITORIES, URL_HASSIO_ADDONS from ..const import ATTR_REPOSITORIES, REPOSITORY_CORE, URL_HASSIO_ADDONS
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ( from ..exceptions import (
StoreError, StoreError,
@ -18,14 +18,10 @@ from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.common import FileConfiguration from ..utils.common import FileConfiguration
from .addon import AddonStore from .addon import AddonStore
from .const import FILE_HASSIO_STORE, StoreType from .const import FILE_HASSIO_STORE, BuiltinRepository
from .data import StoreData from .data import StoreData
from .repository import Repository from .repository import Repository
from .validate import ( from .validate import DEFAULT_REPOSITORIES, SCHEMA_STORE_FILE
BUILTIN_REPOSITORIES,
SCHEMA_STORE_FILE,
ensure_builtin_repositories,
)
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
@ -56,7 +52,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
return [ return [
repository.source repository.source
for repository in self.all for repository in self.all
if repository.type == StoreType.GIT if repository.slug
not in {BuiltinRepository.LOCAL.value, BuiltinRepository.CORE.value}
] ]
def get(self, slug: str) -> Repository: def get(self, slug: str) -> Repository:
@ -65,20 +62,15 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
raise StoreNotFound() raise StoreNotFound()
return self.repositories[slug] return self.repositories[slug]
def get_from_url(self, url: str) -> Repository:
"""Return Repository with slug."""
for repository in self.all:
if repository.source != url:
continue
return repository
raise StoreNotFound()
async def load(self) -> None: async def load(self) -> None:
"""Start up add-on management.""" """Start up add-on store management."""
# Init custom repositories and load add-ons # Make sure the built-in repositories are all present
await self.update_repositories( # This is especially important when adding new built-in repositories
self._data[ATTR_REPOSITORIES], add_with_errors=True # to make sure existing installations have them.
all_repositories: set[str] = (
set(self._data.get(ATTR_REPOSITORIES, [])) | DEFAULT_REPOSITORIES
) )
await self.update_repositories(all_repositories, issue_on_error=True)
@Job( @Job(
name="store_manager_reload", name="store_manager_reload",
@ -126,16 +118,16 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
) )
async def add_repository(self, url: str, *, persist: bool = True) -> None: async def add_repository(self, url: str, *, persist: bool = True) -> None:
"""Add a repository.""" """Add a repository."""
await self._add_repository(url, persist=persist, add_with_errors=False) await self._add_repository(url, persist=persist, issue_on_error=False)
async def _add_repository( async def _add_repository(
self, url: str, *, persist: bool = True, add_with_errors: bool = False self, url: str, *, persist: bool = True, issue_on_error: bool = False
) -> None: ) -> None:
"""Add a repository.""" """Add a repository."""
if url == URL_HASSIO_ADDONS: if url == URL_HASSIO_ADDONS:
url = StoreType.CORE url = REPOSITORY_CORE
repository = Repository(self.coresys, url) repository = Repository.create(self.coresys, url)
if repository.slug in self.repositories: if repository.slug in self.repositories:
raise StoreError(f"Can't add {url}, already in the store", _LOGGER.error) raise StoreError(f"Can't add {url}, already in the store", _LOGGER.error)
@ -145,7 +137,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
await repository.load() await repository.load()
except StoreGitCloneError as err: except StoreGitCloneError as err:
_LOGGER.error("Can't retrieve data from %s due to %s", url, err) _LOGGER.error("Can't retrieve data from %s due to %s", url, err)
if add_with_errors: if issue_on_error:
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.FATAL_ERROR, IssueType.FATAL_ERROR,
ContextType.STORE, ContextType.STORE,
@ -158,7 +150,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
except StoreGitError as err: except StoreGitError as err:
_LOGGER.error("Can't load data from repository %s due to %s", url, err) _LOGGER.error("Can't load data from repository %s due to %s", url, err)
if add_with_errors: if issue_on_error:
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.FATAL_ERROR, IssueType.FATAL_ERROR,
ContextType.STORE, ContextType.STORE,
@ -171,7 +163,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
except StoreJobError as err: except StoreJobError as err:
_LOGGER.error("Can't add repository %s due to %s", url, err) _LOGGER.error("Can't add repository %s due to %s", url, err)
if add_with_errors: if issue_on_error:
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.FATAL_ERROR, IssueType.FATAL_ERROR,
ContextType.STORE, ContextType.STORE,
@ -183,8 +175,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
raise err raise err
else: else:
if not await self.sys_run_in_executor(repository.validate): if not await repository.validate():
if add_with_errors: if issue_on_error:
_LOGGER.error("%s is not a valid add-on repository", url) _LOGGER.error("%s is not a valid add-on repository", url)
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.CORRUPT_REPOSITORY, IssueType.CORRUPT_REPOSITORY,
@ -213,7 +205,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
async def remove_repository(self, repository: Repository, *, persist: bool = True): async def remove_repository(self, repository: Repository, *, persist: bool = True):
"""Remove a repository.""" """Remove a repository."""
if repository.source in BUILTIN_REPOSITORIES: if repository.is_builtin:
raise StoreInvalidAddonRepo( raise StoreInvalidAddonRepo(
"Can't remove built-in repositories!", logger=_LOGGER.error "Can't remove built-in repositories!", logger=_LOGGER.error
) )
@ -234,40 +226,50 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
@Job(name="store_manager_update_repositories") @Job(name="store_manager_update_repositories")
async def update_repositories( async def update_repositories(
self, self,
list_repositories: list[str], list_repositories: set[str],
*, *,
add_with_errors: bool = False, issue_on_error: bool = False,
replace: bool = True, replace: bool = True,
): ):
"""Add a new custom repository.""" """Update repositories by adding new ones and removing stale ones."""
new_rep = set( current_repositories = {repository.source for repository in self.all}
ensure_builtin_repositories(list_repositories)
if replace # Determine repositories to add
else list_repositories + self.repository_urls repositories_to_add = list_repositories - current_repositories
)
old_rep = {repository.source for repository in self.all}
# Add new repositories # Add new repositories
add_errors = await asyncio.gather( add_errors = await asyncio.gather(
*[ *[
self._add_repository(url, persist=False, add_with_errors=True) # Use _add_repository to avoid JobCondition.SUPERVISOR_UPDATED
if add_with_errors # to prevent proper loading of repositories on startup.
self._add_repository(url, persist=False, issue_on_error=True)
if issue_on_error
else self.add_repository(url, persist=False) else self.add_repository(url, persist=False)
for url in new_rep - old_rep for url in repositories_to_add
], ],
return_exceptions=True, return_exceptions=True,
) )
# Delete stale repositories remove_errors: list[BaseException | None] = []
if replace:
# Determine repositories to remove
repositories_to_remove: list[Repository] = [
repository
for repository in self.all
if repository.source not in list_repositories
and not repository.is_builtin
]
# Remove repositories
remove_errors = await asyncio.gather( remove_errors = await asyncio.gather(
*[ *[
self.remove_repository(self.get_from_url(url), persist=False) self.remove_repository(repository, persist=False)
for url in old_rep - new_rep - BUILTIN_REPOSITORIES for repository in repositories_to_remove
], ],
return_exceptions=True, return_exceptions=True,
) )
# Always update data, even there are errors, some changes may have succeeded # Always update data, even if there are errors, some changes may have succeeded
await self.data.update() await self.data.update()
await self._read_addons() await self._read_addons()
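The repository sync above reduces to set arithmetic: add what is configured but missing, remove what is present but no longer configured and not built-in. A standalone sketch with hypothetical URLs standing in for repository sources:

configured = {"https://github.com/hassio-addons/repository", "https://example.com/custom"}
current = {"https://github.com/hassio-addons/repository", "https://example.com/stale"}
builtin = {"https://github.com/hassio-addons/repository"}

to_add = configured - current
to_remove = {repo for repo in current if repo not in configured and repo not in builtin}

print(to_add)     # {'https://example.com/custom'}
print(to_remove)  # {'https://example.com/stale'}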


@ -3,14 +3,35 @@
from enum import StrEnum from enum import StrEnum
from pathlib import Path from pathlib import Path
from ..const import SUPERVISOR_DATA from ..const import (
REPOSITORY_CORE,
REPOSITORY_LOCAL,
SUPERVISOR_DATA,
URL_HASSIO_ADDONS,
)
FILE_HASSIO_STORE = Path(SUPERVISOR_DATA, "store.json") FILE_HASSIO_STORE = Path(SUPERVISOR_DATA, "store.json")
"""Repository type definitions for the store."""
class StoreType(StrEnum): class BuiltinRepository(StrEnum):
"""Store Types.""" """All built-in repositories that come pre-configured."""
CORE = "core" # Local repository (non-git, special handling)
LOCAL = "local" LOCAL = REPOSITORY_LOCAL
GIT = "git"
# Git-based built-in repositories
CORE = REPOSITORY_CORE
COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
ESPHOME = "https://github.com/esphome/home-assistant-addon"
MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
@property
def git_url(self) -> str:
"""Return the git URL for this repository."""
if self == BuiltinRepository.LOCAL:
raise RuntimeError("Local repository does not have a git URL")
if self == BuiltinRepository.CORE:
return URL_HASSIO_ADDONS
else:
return self.value # For URL-based repos, value is the URL
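A usage sketch of the new BuiltinRepository enum, assuming it is run inside the Supervisor source tree where the supervisor package is importable; the LOCAL member deliberately has no git URL, so it is skipped:

from supervisor.store.const import BuiltinRepository

for repo in BuiltinRepository:
    if repo is BuiltinRepository.LOCAL:
        continue  # local add-ons are not a git repository
    print(repo.value, "->", repo.git_url)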


@ -25,7 +25,6 @@ from ..exceptions import ConfigurationFileError
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.common import find_one_filetype, read_json_or_yaml_file from ..utils.common import find_one_filetype, read_json_or_yaml_file
from ..utils.json import read_json_file from ..utils.json import read_json_file
from .const import StoreType
from .utils import extract_hash_from_path from .utils import extract_hash_from_path
from .validate import SCHEMA_REPOSITORY_CONFIG from .validate import SCHEMA_REPOSITORY_CONFIG
@ -169,7 +168,7 @@ class StoreData(CoreSysAttributes):
self.sys_resolution.add_unhealthy_reason( self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE UnhealthyReason.OSERROR_BAD_MESSAGE
) )
elif path.stem != StoreType.LOCAL: elif repository != REPOSITORY_LOCAL:
suggestion = [SuggestionType.EXECUTE_RESET] suggestion = [SuggestionType.EXECUTE_RESET]
self.sys_resolution.create_issue( self.sys_resolution.create_issue(
IssueType.CORRUPT_REPOSITORY, IssueType.CORRUPT_REPOSITORY,


@@ -1,19 +1,20 @@
 """Init file for Supervisor add-on Git."""

 import asyncio
+import errno
 import functools as ft
 import logging
 from pathlib import Path
+from tempfile import TemporaryDirectory

 import git

-from ..const import ATTR_BRANCH, ATTR_URL, URL_HASSIO_ADDONS
+from ..const import ATTR_BRANCH, ATTR_URL
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import StoreGitCloneError, StoreGitError, StoreJobError
 from ..jobs.decorator import Job, JobCondition
-from ..resolution.const import ContextType, IssueType, SuggestionType
+from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from ..utils import remove_folder
-from .utils import get_hash_from_repository
 from .validate import RE_REPOSITORY

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -22,8 +23,6 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class GitRepo(CoreSysAttributes):
     """Manage Add-on Git repository."""

-    builtin: bool
-
     def __init__(self, coresys: CoreSys, path: Path, url: str):
         """Initialize Git base wrapper."""
         self.coresys: CoreSys = coresys
@@ -87,6 +86,47 @@ class GitRepo(CoreSysAttributes):
     async def clone(self) -> None:
         """Clone git add-on repository."""
         async with self.lock:
+            await self._clone()
+
+    @Job(
+        name="git_repo_reset",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
+        on_condition=StoreJobError,
+    )
+    async def reset(self) -> None:
+        """Reset repository to fix issue with local copy."""
+        # Clone into temporary folder
+        temp_dir = await self.sys_run_in_executor(
+            TemporaryDirectory, dir=self.sys_config.path_tmp
+        )
+        temp_path = Path(temp_dir.name)
+        try:
+            await self._clone(temp_path)
+
+            # Remove corrupted repo and move temp clone to its place
+            def move_clone():
+                remove_folder(folder=self.path)
+                temp_path.rename(self.path)
+
+            async with self.lock:
+                try:
+                    await self.sys_run_in_executor(move_clone)
+                except OSError as err:
+                    if err.errno == errno.EBADMSG:
+                        self.sys_resolution.add_unhealthy_reason(
+                            UnhealthyReason.OSERROR_BAD_MESSAGE
+                        )
+                    raise StoreGitCloneError(
+                        f"Can't move clone due to: {err!s}", _LOGGER.error
+                    ) from err
+        finally:
+            # Clean up temporary directory in case of error
+            # If the folder was moved this will do nothing
+            await self.sys_run_in_executor(temp_dir.cleanup)
+
+    async def _clone(self, path: Path | None = None) -> None:
+        """Clone git add-on repository to location."""
+        path = path or self.path
         git_args = {
             attribute: value
             for attribute, value in (
@@ -99,14 +139,12 @@
         }

         try:
-            _LOGGER.info(
-                "Cloning add-on %s repository from %s", self.path, self.url
-            )
+            _LOGGER.info("Cloning add-on %s repository from %s", path, self.url)
             self.repo = await self.sys_run_in_executor(
                 ft.partial(
                     git.Repo.clone_from,
                     self.url,
-                    str(self.path),
+                    str(path),
                     **git_args,  # type: ignore
                 )
             )
@@ -197,12 +235,17 @@
             )
             raise StoreGitError() from err

-    async def _remove(self):
+    async def remove(self) -> None:
         """Remove a repository."""
         if self.lock.locked():
-            _LOGGER.warning("There is already a task in progress")
+            _LOGGER.warning(
+                "Cannot remove add-on repository %s, there is already a task in progress",
+                self.url,
+            )
             return

+        _LOGGER.info("Removing custom add-on repository %s", self.url)
+
         def _remove_git_dir(path: Path) -> None:
             if not path.is_dir():
                 return
@@ -210,30 +253,3 @@
         async with self.lock:
             await self.sys_run_in_executor(_remove_git_dir, self.path)
-
-
-class GitRepoHassIO(GitRepo):
-    """Supervisor add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys):
-        """Initialize Git Supervisor add-on repository."""
-        super().__init__(coresys, coresys.config.path_addons_core, URL_HASSIO_ADDONS)
-
-
-class GitRepoCustom(GitRepo):
-    """Custom add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys, url):
-        """Initialize custom Git Supervisor addo-n repository."""
-        path = Path(coresys.config.path_addons_git, get_hash_from_repository(url))
-        super().__init__(coresys, path, url)
-
-    async def remove(self):
-        """Remove a custom repository."""
-        _LOGGER.info("Removing custom add-on repository %s", self.url)
-        await self._remove()

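The new reset() method follows a clone-to-temp-then-swap pattern. A simplified, standalone sketch of that pattern (not the Supervisor code itself, which additionally runs blocking work in executors, holds the repository lock, and reports EBADMSG as an unhealthy reason; function name and arguments here are illustrative):

    import shutil
    from pathlib import Path
    from tempfile import TemporaryDirectory

    import git  # GitPython, the same library the Supervisor wrapper uses


    def reset_repo(repo_path: Path, url: str, branch: str | None = None) -> None:
        """Re-clone a corrupted checkout next to it, then swap it into place."""
        with TemporaryDirectory(dir=repo_path.parent) as tmp:
            tmp_clone = Path(tmp) / "clone"
            kwargs = {"branch": branch} if branch else {}
            git.Repo.clone_from(url, str(tmp_clone), **kwargs)
            shutil.rmtree(repo_path, ignore_errors=True)  # drop the broken copy
            tmp_clone.rename(repo_path)  # same filesystem, so this is a cheap move

Cloning first means the old checkout is only removed once a replacement exists, so a failed clone leaves the original (even if corrupted) in place.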
View File

@@ -1,19 +1,28 @@
 """Represent a Supervisor repository."""

+from __future__ import annotations
+
+from abc import ABC, abstractmethod
 import logging
 from pathlib import Path
-from typing import cast

 import voluptuous as vol

 from supervisor.utils import get_latest_mtime

-from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_URL, FILE_SUFFIX_CONFIGURATION
+from ..const import (
+    ATTR_MAINTAINER,
+    ATTR_NAME,
+    ATTR_URL,
+    FILE_SUFFIX_CONFIGURATION,
+    REPOSITORY_CORE,
+    REPOSITORY_LOCAL,
+)
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import ConfigurationFileError, StoreError
 from ..utils.common import read_json_or_yaml_file
-from .const import StoreType
-from .git import GitRepo, GitRepoCustom, GitRepoHassIO
+from .const import BuiltinRepository
+from .git import GitRepo
 from .utils import get_hash_from_repository
 from .validate import SCHEMA_REPOSITORY_CONFIG
@@ -21,27 +30,48 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 UNKNOWN = "unknown"


-class Repository(CoreSysAttributes):
+class Repository(CoreSysAttributes, ABC):
     """Add-on store repository in Supervisor."""

-    def __init__(self, coresys: CoreSys, repository: str):
+    def __init__(self, coresys: CoreSys, repository: str, local_path: Path, slug: str):
         """Initialize add-on store repository object."""
+        self._slug: str = slug
+        self._local_path: Path = local_path
         self.coresys: CoreSys = coresys
-        self.git: GitRepo | None = None
         self.source: str = repository

-        if repository == StoreType.LOCAL:
-            self._slug = repository
-            self._type = StoreType.LOCAL
-            self._latest_mtime: float | None = None
-        elif repository == StoreType.CORE:
-            self.git = GitRepoHassIO(coresys)
-            self._slug = repository
-            self._type = StoreType.CORE
-        else:
-            self.git = GitRepoCustom(coresys, repository)
-            self._slug = get_hash_from_repository(repository)
-            self._type = StoreType.GIT
+    @staticmethod
+    def create(coresys: CoreSys, repository: str) -> Repository:
+        """Create a repository instance."""
+        if repository in BuiltinRepository:
+            return Repository._create_builtin(coresys, BuiltinRepository(repository))
+        else:
+            return Repository._create_custom(coresys, repository)
+
+    @staticmethod
+    def _create_builtin(coresys: CoreSys, builtin: BuiltinRepository) -> Repository:
+        """Create builtin repository."""
+        if builtin == BuiltinRepository.LOCAL:
+            slug = REPOSITORY_LOCAL
+            local_path = coresys.config.path_addons_local
+            return RepositoryLocal(coresys, local_path, slug)
+        elif builtin == BuiltinRepository.CORE:
+            slug = REPOSITORY_CORE
+            local_path = coresys.config.path_addons_core
+        else:
+            # For other builtin repositories (URL-based)
+            slug = get_hash_from_repository(builtin.value)
+            local_path = coresys.config.path_addons_git / slug
+        return RepositoryGitBuiltin(
+            coresys, builtin.value, local_path, slug, builtin.git_url
+        )
+
+    @staticmethod
+    def _create_custom(coresys: CoreSys, repository: str) -> RepositoryCustom:
+        """Create custom repository."""
+        slug = get_hash_from_repository(repository)
+        local_path = coresys.config.path_addons_git / slug
+        return RepositoryCustom(coresys, repository, local_path, slug)

     def __repr__(self) -> str:
         """Return internal representation."""
@@ -53,9 +83,9 @@ class Repository(CoreSysAttributes):
         return self._slug

     @property
-    def type(self) -> StoreType:
-        """Return type of the store."""
-        return self._type
+    def local_path(self) -> Path:
+        """Return local path to repository."""
+        return self._local_path

     @property
     def data(self) -> dict:
@@ -77,17 +107,78 @@
         """Return url of repository."""
         return self.data.get(ATTR_MAINTAINER, UNKNOWN)

-    def validate(self) -> bool:
-        """Check if store is valid.
-
-        Must be run in executor.
-        """
-        if not self.git or self.type == StoreType.CORE:
-            return True
-
-        # If exists?
-        for filetype in FILE_SUFFIX_CONFIGURATION:
-            repository_file = Path(self.git.path / f"repository{filetype}")
-            if repository_file.exists():
-                break
+    @property
+    @abstractmethod
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+
+    @abstractmethod
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+    @abstractmethod
+    async def load(self) -> None:
+        """Load addon repository."""
+
+    @abstractmethod
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+
+    @abstractmethod
+    async def remove(self) -> None:
+        """Remove add-on repository."""
+
+    @abstractmethod
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+
+
+class RepositoryBuiltin(Repository, ABC):
+    """A built-in add-on repository."""
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return True
+
+    async def validate(self) -> bool:
+        """Assume built-in repositories are always valid."""
+        return True
+
+    async def remove(self) -> None:
+        """Raise. Not supported for built-in repositories."""
+        raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
+
+
+class RepositoryGit(Repository, ABC):
+    """A git based add-on repository."""
+
+    _git: GitRepo
+
+    async def load(self) -> None:
+        """Load addon repository."""
+        await self._git.load()
+
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+        if not await self.validate():
+            return False
+        return await self._git.pull()
+
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+        def validate_file() -> bool:
+            # If exists?
+            for filetype in FILE_SUFFIX_CONFIGURATION:
+                repository_file = Path(self._git.path / f"repository{filetype}")
+                if repository_file.exists():
+                    break
@@ -103,29 +194,36 @@
             return True

+        return await self.sys_run_in_executor(validate_file)
+
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+        await self._git.reset()
+        await self.load()
+
+
+class RepositoryLocal(RepositoryBuiltin):
+    """A local add-on repository."""
+
+    def __init__(self, coresys: CoreSys, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, BuiltinRepository.LOCAL.value, local_path, slug)
+        self._latest_mtime: float | None = None
+
     async def load(self) -> None:
         """Load addon repository."""
-        if not self.git:
-            self._latest_mtime, _ = await self.sys_run_in_executor(
-                get_latest_mtime, self.sys_config.path_addons_local
-            )
-            return
-
-        await self.git.load()
+        self._latest_mtime, _ = await self.sys_run_in_executor(
+            get_latest_mtime, self.local_path
+        )

     async def update(self) -> bool:
         """Update add-on repository.

         Returns True if the repository was updated.
         """
-        if not await self.sys_run_in_executor(self.validate):
-            return False
-
-        if self.git:
-            return await self.git.pull()
-
         # Check local modifications
         latest_mtime, modified_path = await self.sys_run_in_executor(
-            get_latest_mtime, self.sys_config.path_addons_local
+            get_latest_mtime, self.local_path
         )
         if self._latest_mtime != latest_mtime:
             _LOGGER.debug(
@@ -138,9 +236,37 @@
             return False

+    async def reset(self) -> None:
+        """Raise. Not supported for local repository."""
+        raise StoreError(
+            "Can't reset local repository as it is not git based!", _LOGGER.error
+        )
+
+
+class RepositoryGitBuiltin(RepositoryBuiltin, RepositoryGit):
+    """A built-in add-on repository based on git."""
+
+    def __init__(
+        self, coresys: CoreSys, repository: str, local_path: Path, slug: str, url: str
+    ) -> None:
+        """Initialize object."""
+        super().__init__(coresys, repository, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+
+class RepositoryCustom(RepositoryGit):
+    """A custom add-on repository."""
+
+    def __init__(self, coresys: CoreSys, url: str, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, url, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return False
+
     async def remove(self) -> None:
         """Remove add-on repository."""
-        if not self.git or self.type == StoreType.CORE:
-            raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
-
-        await cast(GitRepoCustom, self.git).remove()
+        await self._git.remove()

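With the class now abstract, callers are expected to go through the Repository.create() factory instead of instantiating Repository directly. A small sketch under stated assumptions (coresys is an already initialized CoreSys, the URL is a placeholder, and REPOSITORY_CORE keeps the value "core"):

    from supervisor.coresys import CoreSys
    from supervisor.store.repository import Repository


    def demo_repository_factory(coresys: CoreSys) -> None:
        """Sketch: how callers obtain Repository instances via the new factory."""
        # Any source not in BuiltinRepository becomes a RepositoryCustom
        custom = Repository.create(coresys, "https://github.com/owner/example-addons")
        assert custom.is_builtin is False

        # Built-in sources resolve to the specialized built-in classes
        core = Repository.create(coresys, "core")  # assumes REPOSITORY_CORE == "core"
        assert core.is_builtin is True
        assert core.local_path == coresys.config.path_addons_core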
View File

@@ -4,18 +4,7 @@ import voluptuous as vol

 from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_REPOSITORIES, ATTR_URL
 from ..validate import RE_REPOSITORY
-from .const import StoreType
-
-URL_COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
-URL_ESPHOME = "https://github.com/esphome/home-assistant-addon"
-URL_MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
-
-BUILTIN_REPOSITORIES = {
-    StoreType.CORE,
-    StoreType.LOCAL,
-    URL_COMMUNITY_ADDONS,
-    URL_ESPHOME,
-    URL_MUSIC_ASSISTANT,
-}
+from .const import BuiltinRepository

 # pylint: disable=no-value-for-parameter
 SCHEMA_REPOSITORY_CONFIG = vol.Schema(
@@ -28,18 +17,9 @@ SCHEMA_REPOSITORY_CONFIG = vol.Schema(
 )


-def ensure_builtin_repositories(addon_repositories: list[str]) -> list[str]:
-    """Ensure builtin repositories are in list.
-
-    Note: This should not be used in validation as the resulting list is not
-    stable. This can have side effects when comparing data later on.
-    """
-    return list(set(addon_repositories) | BUILTIN_REPOSITORIES)
-
-
 def validate_repository(repository: str) -> str:
     """Validate a valid repository."""
-    if repository in [StoreType.CORE, StoreType.LOCAL]:
+    if repository in BuiltinRepository:
         return repository

     data = RE_REPOSITORY.match(repository)
@@ -55,10 +35,12 @@ def validate_repository(repository: str) -> str:

 repositories = vol.All([validate_repository], vol.Unique())

+DEFAULT_REPOSITORIES = {repo.value for repo in BuiltinRepository}
+
 SCHEMA_STORE_FILE = vol.Schema(
     {
         vol.Optional(
-            ATTR_REPOSITORIES, default=list(BUILTIN_REPOSITORIES)
+            ATTR_REPOSITORIES, default=lambda: list(DEFAULT_REPOSITORIES)
         ): repositories,
     },
     extra=vol.REMOVE_EXTRA,

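One detail worth noting in the schema change: the default moved from a pre-built list to a lambda. Voluptuous calls a callable default at validation time, so every validated document gets a fresh list instead of all of them sharing one mutable object. A self-contained illustration of that behaviour (not Supervisor code; names here are made up for the example):

    import voluptuous as vol

    DEFAULTS = {"core", "local"}

    schema = vol.Schema(
        {vol.Optional("repositories", default=lambda: list(DEFAULTS)): list}
    )

    first = schema({})
    first["repositories"].append("https://example.com/repo")  # mutate the first result
    second = schema({})
    assert "https://example.com/repo" not in second["repositories"]  # no shared state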
View File

@@ -32,7 +32,7 @@ from .exceptions import (
     SupervisorJobError,
     SupervisorUpdateError,
 )
-from .jobs.const import JobCondition, JobExecutionLimit
+from .jobs.const import JobCondition, JobThrottle
 from .jobs.decorator import Job
 from .resolution.const import ContextType, IssueType, UnhealthyReason
 from .utils.codenotary import calc_checksum
@@ -46,7 +46,7 @@ def _check_connectivity_throttle_period(coresys: CoreSys, *_) -> timedelta:
     if coresys.supervisor.connectivity:
         return timedelta(minutes=10)

-    return timedelta(seconds=30)
+    return timedelta(seconds=5)


 class Supervisor(CoreSysAttributes):
@@ -204,6 +204,12 @@ class Supervisor(CoreSysAttributes):
                 f"Version {version!s} is already installed", _LOGGER.warning
             )

+        image = self.sys_updater.image_supervisor or self.instance.image
+        if not image:
+            raise SupervisorUpdateError(
+                "Cannot determine image to use for supervisor update!", _LOGGER.error
+            )
+
         # First update own AppArmor
         try:
             await self.update_apparmor()
@@ -216,12 +222,8 @@
         # Update container
         _LOGGER.info("Update Supervisor to version %s", version)
         try:
-            await self.instance.install(
-                version, image=self.sys_updater.image_supervisor
-            )
-            await self.instance.update_start_tag(
-                self.sys_updater.image_supervisor, version
-            )
+            await self.instance.install(version, image=image)
+            await self.instance.update_start_tag(image, version)
         except DockerError as err:
             self.sys_resolution.create_issue(
                 IssueType.UPDATE_FAILED, ContextType.SUPERVISOR
@@ -232,7 +234,7 @@
             ) from err

         self.sys_config.version = version
-        self.sys_config.image = self.sys_updater.image_supervisor
+        self.sys_config.image = image
         await self.sys_config.save_data()

         self.sys_create_task(self.sys_core.stop())
@@ -286,17 +288,19 @@
     @Job(
         name="supervisor_check_connectivity",
-        limit=JobExecutionLimit.THROTTLE,
         throttle_period=_check_connectivity_throttle_period,
+        throttle=JobThrottle.THROTTLE,
     )
-    async def check_connectivity(self):
-        """Check the connection."""
+    async def check_connectivity(self) -> None:
+        """Check the Internet connectivity from Supervisor's point of view."""
         timeout = aiohttp.ClientTimeout(total=10)
         try:
             await self.sys_websession.head(
                 "https://checkonline.home-assistant.io/online.txt", timeout=timeout
             )
-        except (ClientError, TimeoutError):
+        except (ClientError, TimeoutError) as err:
+            _LOGGER.debug("Supervisor Connectivity check failed: %s", err)
             self.connectivity = False
         else:
+            _LOGGER.debug("Supervisor Connectivity check succeeded")
             self.connectivity = True

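The connectivity check keeps its dynamic throttle period: _check_connectivity_throttle_period returns 10 minutes while connectivity is good and now only 5 seconds while it is down, so recovery is noticed quickly. A simplified, synchronous sketch of the general idea of a throttle whose period comes from a callable (this is not the Supervisor Job decorator, which is async and receives coresys; everything below is illustrative):

    import time
    from collections.abc import Callable
    from datetime import timedelta


    def throttle(period_fn: Callable[[], timedelta]):
        """Skip calls arriving before the dynamically computed period has elapsed."""

        def decorator(func):
            last_run = 0.0

            def wrapper(*args, **kwargs):
                nonlocal last_run
                if time.monotonic() - last_run < period_fn().total_seconds():
                    return None  # throttled, do nothing
                last_run = time.monotonic()
                return func(*args, **kwargs)

            return wrapper

        return decorator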
View File

@@ -8,6 +8,8 @@ import logging
 import aiohttp
 from awesomeversion import AwesomeVersion

+from supervisor.jobs.const import JobConcurrency, JobThrottle
+
 from .bus import EventListener
 from .const import (
     ATTR_AUDIO,
@@ -34,7 +36,7 @@ from .exceptions import (
     UpdaterError,
     UpdaterJobError,
 )
-from .jobs.decorator import Job, JobCondition, JobExecutionLimit
+from .jobs.decorator import Job, JobCondition
 from .utils.codenotary import calc_checksum
 from .utils.common import FileConfiguration
 from .validate import SCHEMA_UPDATER_CONFIG
@@ -198,8 +200,9 @@ class Updater(FileConfiguration, CoreSysAttributes):
         name="updater_fetch_data",
         conditions=[JobCondition.INTERNET_SYSTEM],
         on_condition=UpdaterJobError,
-        limit=JobExecutionLimit.THROTTLE_WAIT,
         throttle_period=timedelta(seconds=30),
+        concurrency=JobConcurrency.QUEUE,
+        throttle=JobThrottle.THROTTLE,
     )
     async def fetch_data(self):
         """Fetch current versions from Github.

View File

@@ -12,6 +12,7 @@ from sentry_sdk.integrations.dedupe import DedupeIntegration
 from sentry_sdk.integrations.excepthook import ExcepthookIntegration
 from sentry_sdk.integrations.logging import LoggingIntegration
 from sentry_sdk.integrations.threading import ThreadingIntegration
+from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

 from ..const import SUPERVISOR_VERSION
 from ..coresys import CoreSys
@@ -26,6 +27,7 @@ def init_sentry(coresys: CoreSys) -> None:
     """Initialize sentry client."""
     if not sentry_sdk.is_initialized():
         _LOGGER.info("Initializing Supervisor Sentry")
+        denylist = DEFAULT_DENYLIST + ["psk", "ssid"]
         # Don't use AsyncioIntegration(). We commonly handle task exceptions
         # outside of tasks. This would cause exception we gracefully handle to
         # be captured by sentry.
@@ -34,6 +36,7 @@ def init_sentry(coresys: CoreSys) -> None:
             before_send=partial(filter_data, coresys),
             auto_enabling_integrations=False,
             default_integrations=False,
+            event_scrubber=EventScrubber(denylist=denylist),
             integrations=[
                 AioHttpIntegration(
                     failed_request_status_codes=frozenset(range(500, 600))

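The Sentry change extends the SDK's default scrubber denylist so Wi-Fi credentials ("psk", "ssid") are redacted from reported events. A minimal sketch of the same mechanism in isolation (the DSN below is a placeholder, not Supervisor configuration):

    import sentry_sdk
    from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

    # Extend the default denylist so Wi-Fi credentials never leave the device
    denylist = DEFAULT_DENYLIST + ["psk", "ssid"]

    sentry_sdk.init(
        dsn="https://public@example.ingest.sentry.io/1",  # placeholder DSN
        event_scrubber=EventScrubber(denylist=denylist),
    )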
View File

@@ -182,7 +182,7 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
                 }
             }
         ),
-        vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean(),
+        vol.Optional(ATTR_ENABLE_IPV6, default=None): vol.Maybe(vol.Boolean()),
     }
 )

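With vol.Maybe and a None default, the IPv6 option becomes tri-state: not configured yet (None), explicitly enabled, or explicitly disabled. A small self-contained illustration of that voluptuous pattern (key name here is only for the example):

    import voluptuous as vol

    schema = vol.Schema(
        {vol.Optional("enable_ipv6", default=None): vol.Maybe(vol.Boolean())}
    )

    assert schema({}) == {"enable_ipv6": None}              # not configured yet
    assert schema({"enable_ipv6": True}) == {"enable_ipv6": True}
    assert schema({"enable_ipv6": None}) == {"enable_ipv6": None}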
View File

@@ -18,6 +18,7 @@ from supervisor.const import AddonBoot, AddonState, BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
 from supervisor.hardware.helper import HwHelper
@@ -27,7 +28,7 @@ from supervisor.utils.dt import utcnow

 from .test_manager import BOOT_FAIL_ISSUE, BOOT_FAIL_SUGGESTIONS

-from tests.common import get_fixture_path
+from tests.common import get_fixture_path, is_in_list
 from tests.const import TEST_ADDON_SLUG
@@ -208,7 +209,7 @@ async def test_watchdog_on_stop(coresys: CoreSys, install_addon_ssh: Addon) -> N
 async def test_listener_attached_on_install(
-    coresys: CoreSys, mock_amd64_arch_supported: None, repository
+    coresys: CoreSys, mock_amd64_arch_supported: None, test_repository
 ):
     """Test events listener attached on addon install."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
@@ -241,7 +242,7 @@ async def test_listener_attached_on_install(
 )
 async def test_watchdog_during_attach(
     coresys: CoreSys,
-    repository: Repository,
+    test_repository: Repository,
     boot_timedelta: timedelta,
     restart_count: int,
 ):
@@ -709,7 +710,7 @@ async def test_local_example_install(
     coresys: CoreSys,
     container: MagicMock,
     tmp_supervisor_data: Path,
-    repository,
+    test_repository,
     mock_aarch64_arch_supported: None,
 ):
     """Test install of an addon."""
@@ -819,7 +820,7 @@ async def test_paths_cache(coresys: CoreSys, install_addon_ssh: Addon):
     with (
         patch("supervisor.addons.addon.Path.exists", return_value=True),
-        patch("supervisor.store.repository.Repository.update", return_value=True),
+        patch("supervisor.store.repository.RepositoryLocal.update", return_value=True),
     ):
         await coresys.store.reload(coresys.store.get("local"))
@@ -840,10 +841,25 @@ async def test_addon_loads_wrong_image(
     install_addon_ssh.persist["image"] = "local/aarch64-addon-ssh"
     assert install_addon_ssh.image == "local/aarch64-addon-ssh"

-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ) as mock_run_command,
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()

-    container.remove.assert_called_once_with(force=True)
+    container.remove.assert_called_with(force=True, v=True)
+    # one for removing the addon, one for removing the addon builder
+    assert coresys.docker.images.remove.call_count == 2
     assert coresys.docker.images.remove.call_args_list[0].kwargs == {
         "image": "local/aarch64-addon-ssh:latest",
         "force": True,
@@ -852,12 +868,18 @@
         "image": "local/aarch64-addon-ssh:9.2.1",
         "force": True,
     }
-    coresys.docker.images.build.assert_called_once()
-    assert (
-        coresys.docker.images.build.call_args.kwargs["tag"]
-        == "local/amd64-addon-ssh:9.2.1"
-    )
-    assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
+    mock_run_command.assert_called_once()
+    assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
+    assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
+    command = mock_run_command.call_args.kwargs["command"]
+    assert is_in_list(
+        ["--platform", "linux/amd64"],
+        command,
+    )
+    assert is_in_list(
+        ["--tag", "local/amd64-addon-ssh:9.2.1"],
+        command,
+    )
     assert install_addon_ssh.image == "local/amd64-addon-ssh"
     coresys.addons.data.save_data.assert_called_once()
@@ -871,15 +893,33 @@ async def test_addon_loads_missing_image(
     """Test addon corrects a missing image on load."""
     coresys.docker.images.get.side_effect = ImageNotFound("missing")

-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ) as mock_run_command,
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()

-    coresys.docker.images.build.assert_called_once()
-    assert (
-        coresys.docker.images.build.call_args.kwargs["tag"]
-        == "local/amd64-addon-ssh:9.2.1"
-    )
-    assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
+    mock_run_command.assert_called_once()
+    assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
+    assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
+    command = mock_run_command.call_args.kwargs["command"]
+    assert is_in_list(
+        ["--platform", "linux/amd64"],
+        command,
+    )
+    assert is_in_list(
+        ["--tag", "local/amd64-addon-ssh:9.2.1"],
+        command,
+    )
     assert install_addon_ssh.image == "local/amd64-addon-ssh"
@@ -900,7 +940,14 @@ async def test_addon_load_succeeds_with_docker_errors(
     # Image build failure
     coresys.docker.images.build.side_effect = DockerException()
     caplog.clear()
-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()
     assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text

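The updated tests import an is_in_list helper from tests.common whose implementation is not part of this diff. From its usage it checks that one sequence of CLI arguments appears contiguously inside the full build command. A plausible, hypothetical re-implementation for illustration only (the real helper in tests/common.py may differ):

    def is_in_list(needle: list, haystack: list) -> bool:
        """Return True if `needle` occurs as a contiguous slice of `haystack`."""
        n = len(needle)
        return any(haystack[i : i + n] == needle for i in range(len(haystack) - n + 1))


    # Example mirroring how the tests use it
    assert is_in_list(
        ["--platform", "linux/amd64"],
        ["build", "--platform", "linux/amd64", "--tag", "local/amd64-addon-ssh:9.2.1"],
    )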
View File

@@ -8,10 +8,13 @@ from supervisor.addons.addon import Addon
 from supervisor.addons.build import AddonBuild
 from supervisor.coresys import CoreSys

+from tests.common import is_in_list
+

 async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test platform set in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@@ -19,17 +22,23 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
         )

-    assert args["platform"] == "linux/amd64"
+    assert is_in_list(["--platform", "linux/amd64"], args["command"])


 async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test dockerfile path in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@@ -37,12 +46,17 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
         )

-    assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile")
+    assert is_in_list(["--file", "Dockerfile"], args["command"])
     assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
         "fixtures/addons/local/ssh/Dockerfile"
     )
@@ -50,8 +64,9 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)

 async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test dockerfile arch evaluation in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["aarch64"])
@@ -59,12 +74,17 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
         )

-    assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile.aarch64")
+    assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
     assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
         "fixtures/addons/local/ssh/Dockerfile.aarch64"
     )

View File

@@ -29,7 +29,7 @@ from supervisor.plugins.dns import PluginDns
 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
 from supervisor.resolution.data import Issue, Suggestion
 from supervisor.store.addon import AddonStore
-from supervisor.store.repository import Repository
+from supervisor.store.repository import RepositoryLocal
 from supervisor.utils import check_exception_chain
 from supervisor.utils.common import write_json_file
@@ -67,7 +67,7 @@ async def fixture_remove_wait_boot(coresys: CoreSys) -> AsyncGenerator[None]:
 @pytest.fixture(name="install_addon_example_image")
 async def fixture_install_addon_example_image(
-    coresys: CoreSys, repository
+    coresys: CoreSys, test_repository
 ) -> Generator[Addon]:
     """Install local_example add-on with image."""
     store = coresys.addons.store["local_example_image"]
@@ -442,7 +442,7 @@ async def test_store_data_changes_during_update(
     update_task = coresys.create_task(simulate_update())
     await asyncio.sleep(0)

-    with patch.object(Repository, "update", return_value=True):
+    with patch.object(RepositoryLocal, "update", return_value=True):
         await coresys.store.reload()

     assert "image" not in coresys.store.data.addons["local_ssh"]

View File

@@ -14,6 +14,7 @@ from supervisor.const import AddonState
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import HassioError
 from supervisor.store.repository import Repository
@@ -53,7 +54,7 @@ async def test_addons_info(
 # DEPRECATED - Remove with legacy routing logic on 1/2023
 async def test_addons_info_not_installed(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test getting addon info for not installed addon."""
     resp = await api_client.get(f"/addons/{TEST_ADDON_SLUG}/info")
@@ -239,6 +240,19 @@ async def test_api_addon_rebuild_healthcheck(
         patch.object(Addon, "need_build", new=PropertyMock(return_value=True)),
         patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
         patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
+        patch.object(
+            DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
+        ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         resp = await api_client.post("/addons/local_ssh/rebuild")
@@ -247,6 +261,98 @@
     assert resp.status == 200


+async def test_api_addon_rebuild_force(
+    api_client: TestClient,
+    coresys: CoreSys,
+    install_addon_ssh: Addon,
+    container: MagicMock,
+    tmp_supervisor_data,
+    path_extern,
+):
+    """Test rebuilding an image-based addon with force parameter."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    container.status = "running"
+    install_addon_ssh.path_data.mkdir()
+    container.attrs["Config"] = {"Healthcheck": "exists"}
+    await install_addon_ssh.load()
+    await asyncio.sleep(0)
+    assert install_addon_ssh.state == AddonState.STARTUP
+
+    state_changes: list[AddonState] = []
+    _container_events_task: asyncio.Task | None = None
+
+    async def container_events():
+        nonlocal state_changes
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.STOPPED)
+        )
+        state_changes.append(install_addon_ssh.state)
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.RUNNING)
+        )
+        state_changes.append(install_addon_ssh.state)
+        await asyncio.sleep(0)
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.HEALTHY)
+        )
+
+    async def container_events_task(*args, **kwargs):
+        nonlocal _container_events_task
+        _container_events_task = asyncio.create_task(container_events())
+
+    # Test 1: Without force, image-based addon should fail
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild")
+        assert resp.status == 400
+        result = await resp.json()
+        assert "Can't rebuild a image based add-on" in result["message"]
+
+    # Reset state for next test
+    state_changes.clear()
+
+    # Test 2: With force=True, image-based addon should succeed
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+        patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
+        patch.object(
+            DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
+        ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild", json={"force": True})
+
+    assert state_changes == [AddonState.STOPPED, AddonState.STARTUP]
+    assert install_addon_ssh.state == AddonState.STARTED
+    assert resp.status == 200
+
+    await _container_events_task
+
+
 async def test_api_addon_uninstall(
     api_client: TestClient,
     coresys: CoreSys,
@@ -427,7 +533,7 @@ async def test_addon_not_found(
         ("get", "/addons/local_ssh/logs/boots/1/follow", False),
     ],
 )
-@pytest.mark.usefixtures("repository")
+@pytest.mark.usefixtures("test_repository")
 async def test_addon_not_installed(
     api_client: TestClient, method: str, url: str, json_expected: bool
 ):

View File

@@ -3,6 +3,7 @@
 from datetime import UTC, datetime, timedelta
 from unittest.mock import AsyncMock, MagicMock, patch

+from aiohttp.hdrs import WWW_AUTHENTICATE
 from aiohttp.test_utils import TestClient
 import pytest
@@ -137,25 +138,24 @@ async def test_auth_json_success(

 @pytest.mark.parametrize(
-    ("user", "password", "message", "api_client"),
+    ("user", "password", "api_client"),
     [
-        (None, "password", "None as username is not supported!", TEST_ADDON_SLUG),
-        ("user", None, "None as password is not supported!", TEST_ADDON_SLUG),
+        (None, "password", TEST_ADDON_SLUG),
+        ("user", None, TEST_ADDON_SLUG),
     ],
     indirect=["api_client"],
 )
 async def test_auth_json_failure_none(
     api_client: TestClient,
+    mock_check_login: AsyncMock,
     install_addon_ssh: Addon,
     user: str | None,
     password: str | None,
-    message: str,
 ):
     """Test failed JSON auth with none user or password."""
+    mock_check_login.return_value = True
     resp = await api_client.post("/auth", json={"username": user, "password": password})
-    assert resp.status == 400
-    body = await resp.json()
-    assert body["message"] == message
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -167,8 +167,8 @@ async def test_auth_json_invalid_credentials(
     resp = await api_client.post(
         "/auth", json={"username": "test", "password": "wrong"}
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -177,7 +177,7 @@ async def test_auth_json_empty_body(api_client: TestClient, install_addon_ssh: A
     resp = await api_client.post(
         "/auth", data="", headers={"Content-Type": "application/json"}
     )
-    assert resp.status == 400
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -214,8 +214,8 @@ async def test_auth_urlencoded_failure(
         data="username=test&password=fail",
         headers={"Content-Type": "application/x-www-form-urlencoded"},
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -226,7 +226,7 @@ async def test_auth_unsupported_content_type(
     resp = await api_client.post(
         "/auth", data="something", headers={"Content-Type": "text/plain"}
     )
-    # This probably should be 400 here for better consistency
+    assert "Basic realm" in resp.headers[WWW_AUTHENTICATE]
     assert resp.status == 401

View File

@@ -19,7 +19,7 @@ async def test_api_docker_info(api_client: TestClient):

 async def test_api_network_enable_ipv6(coresys: CoreSys, api_client: TestClient):
     """Test setting docker network for enabled IPv6."""
-    assert coresys.docker.config.enable_ipv6 is False
+    assert coresys.docker.config.enable_ipv6 is None

     resp = await api_client.post("/docker/options", json={"enable_ipv6": True})
     assert resp.status == 200

View File

@@ -23,7 +23,11 @@ DEFAULT_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
 @pytest.fixture(name="coresys_disk_info")
 async def fixture_coresys_disk_info(coresys: CoreSys) -> AsyncGenerator[CoreSys]:
     """Mock basic disk information for host APIs."""
-    coresys.hardware.disk.get_disk_life_time = lambda _: 0
+
+    async def mock_disk_lifetime(_):
+        return 0
+
+    coresys.hardware.disk.get_disk_life_time = mock_disk_lifetime
     coresys.hardware.disk.get_disk_free_space = lambda _: 5000
     coresys.hardware.disk.get_disk_total_space = lambda _: 50000
     coresys.hardware.disk.get_disk_used_space = lambda _: 45000

Some files were not shown because too many files have changed in this diff.