Compare commits

..

28 Commits

Author SHA1 Message Date
Stefan Agner
9bee58a8b1
Migrate to JobConcurrency and JobThrottle parameters (#6065) 2025-08-05 13:24:44 +02:00
Mike Degatano
8a1e6b0895
Add unsupported reason os_version and evaluation (#6041)
* Add unsupported reason os_version and evaluation

* Order enum and add tests

* Apply suggestions from code review

* Apply suggestions from code review

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-08-05 13:23:42 +02:00
Stefan Agner
f150d1b287
Return optimistic life time estimate for eMMC storage (#6063)
This avoids displaying 10% life time used for brand-new eMMC
storage. The values are estimates anyway, and there is a
separate value which represents life time completely used (100%).
2025-08-05 10:43:57 +02:00
dependabot[bot]
628a18c6b8
Bump coverage from 7.10.1 to 7.10.2 (#6062)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.1 to 7.10.2.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.1...7.10.2)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:38 +02:00
dependabot[bot]
74e43411e5
Bump dbus-fast from 2.44.2 to 2.44.3 (#6061)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.2 to 2.44.3.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.2...v2.44.3)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:25 +02:00
dependabot[bot]
e6b0d4144c
Bump awesomeversion from 25.5.0 to 25.8.0 (#6060)
Bumps [awesomeversion](https://github.com/ludeeus/awesomeversion) from 25.5.0 to 25.8.0.
- [Release notes](https://github.com/ludeeus/awesomeversion/releases)
- [Commits](https://github.com/ludeeus/awesomeversion/compare/25.5.0...25.8.0)

---
updated-dependencies:
- dependency-name: awesomeversion
  dependency-version: 25.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:41:11 +02:00
Mike Degatano
033896480d
Fix backup equal and add hash to objects with eq (#6059)
* Fix backup equal and add hash to objects with eq

* Add test for failed consolidate
2025-08-04 14:19:33 +02:00
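A note on why this fix pairs `__hash__` with `__eq__`: Python sets a class's `__hash__` to `None` whenever it overrides `__eq__` without defining `__hash__`, which silently breaks sets and dict keys. A minimal standalone illustration (not the actual Supervisor `Backup` class):

```python
# Minimal illustration of why a class defining __eq__ must also define
# __hash__ to stay usable in sets and as dict keys. Hashing on the same
# key used for equality keeps the invariant: a == b implies hash(a) == hash(b).

class Backup:
    def __init__(self, slug: str) -> None:
        self.slug = slug

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Backup) and self.slug == other.slug

    def __hash__(self) -> int:
        return hash(self.slug)

# Duplicate slugs collapse in a set, which is what backup consolidation relies on.
unique = {Backup("abc123"), Backup("abc123"), Backup("def456")}
```

Without the `__hash__` definition, `{Backup("abc123")}` would raise `TypeError: unhashable type`.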
dependabot[bot]
478e00c0fe
Bump home-assistant/wheels from 2025.03.0 to 2025.07.0 (#6057)
Bumps [home-assistant/wheels](https://github.com/home-assistant/wheels) from 2025.03.0 to 2025.07.0.
- [Release notes](https://github.com/home-assistant/wheels/releases)
- [Commits](https://github.com/home-assistant/wheels/compare/2025.03.0...2025.07.0)

---
updated-dependencies:
- dependency-name: home-assistant/wheels
  dependency-version: 2025.07.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:17:42 +02:00
dependabot[bot]
6f2ba7d68c
Bump mypy from 1.17.0 to 1.17.1 (#6058)
Bumps [mypy](https://github.com/python/mypy) from 1.17.0 to 1.17.1.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.17.0...v1.17.1)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-04 14:17:13 +02:00
Mike Degatano
22afa60f55
Get lifetime info for NVMe devices (#6056)
* Get lifetime info for NVMe devices

* Fix lint and test issues

* Update tests/dbus_service_mocks/udisks2_manager.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-04 13:53:56 +02:00
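Unlike eMMC's bucketed estimates, NVMe exposes wear directly through the SMART/Health log's "Percentage Used" field, which may exceed 100 and saturates at 255. A minimal sketch of clamping it for display (an illustration; the Supervisor reads this via UDisks2, not directly):

```python
def nvme_life_time(percentage_used: int) -> int:
    """Clamp the NVMe SMART 'Percentage Used' field to a 0-100 value.

    Per the NVMe specification the field may exceed 100 and saturates at
    255; for display purposes anything above 100 simply means the rated
    endurance has been used up.
    """
    return min(max(percentage_used, 0), 100)
```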
Stefan Agner
9f2fda5dc7
Update to Alpine 3.22 (#6054)
Update the Supervisor base image to Alpine 3.22 for all architectures.
2025-08-04 09:50:27 +02:00
Stefan Agner
27b092aed0
Block OS updates when the system is unhealthy (#6053)
* Block OS updates when the system is unhealthy

In #6024 we mark a system as unhealthy when multiple OS installations
are found. The idea was to block OS updates in this case. However, it
turns out that the OS update job was not checking the system health
and thus allowed updates even when the system was marked as unhealthy.

This commit adds the `JobCondition.HEALTHY` condition to the OS update
job, ensuring that OS updates are only performed when the system is
healthy.

Users can still force an OS update by using
`ha jobs options --ignore-conditions healthy`.

* Add test for update of unhealthy system

---------

Co-authored-by: Jan Čermák <sairon@sairon.cz>
2025-07-31 11:23:57 +02:00
dependabot[bot]
3af13cb7e2
Bump sentry-sdk from 2.34.0 to 2.34.1 (#6052)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.34.0 to 2.34.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.34.0...2.34.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.34.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-31 10:43:49 +02:00
Stefan Agner
6871ea4b81
Split execution limit in concurrency and throttle parameters (#6013)
* Split execution limit in concurrency and throttle parameters

Currently the execution limit combines two orthogonal features: limiting
concurrency and throttling execution. This change separates the two
features, allowing for more flexible configuration of job execution.

Ultimately I want to get rid of the old limit parameter. But for ease
of review and migration, I'd like to do this in two steps: First
introduce the new parameters, and map the old limit parameters to the
new parameters. Then, in a second step, remove the old limit parameter
and migrate all users to the new concurrency and throttle parameters
as needed.

* Introduce common lock release method

* Fix THROTTLE_WAIT behavior

The concurrency QUEUE does not really QUEUE throttle limits.

* Add documentation for new concurrency/throttle Job options

* Handle group options for concurrency and throttle separately

* Fix GROUP_THROTTLE_WAIT concurrency setting

We need to use the QUEUE concurrency setting instead of GROUP_QUEUE
for the GROUP_THROTTLE_WAIT execution limit. Otherwise the
test_jobs_decorator.py::test_execution_limit_group_throttle_wait
test deadlocks.

This deadlocks because GROUP_QUEUE concurrency doesn't really work:
we can only release a group lock if the job is actually running.

Or put differently, throttling isn't supported with GROUP_*
concurrency options.

* Prevent using any throttling with group concurrency

The group concurrency modes (reject and queue) are not compatible with
any throttling, since we currently can't unlock the group lock when
a job doesn't get started (which is the case when throttling is
applied).

* Fix commit in group rate limit

* Explain the deadlock issue with group locks in code

* Handle locking correctly on throttle limit exceptions

* Introduce pytest for new job decorator combinations
2025-07-30 22:12:14 +02:00
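The concurrency/throttle split described above can be sketched with a toy decorator (an illustration only, not the Supervisor's actual `Job` implementation): one knob gates how many calls may run at once, the other how often a call may start.

```python
# Toy sketch of separating concurrency from throttling -- the two orthogonal
# knobs that the old "limit" parameter conflated. Not the real Supervisor code.
import asyncio
import time

def job(*, reject_concurrent: bool = False, throttle_period: float = 0.0):
    def decorator(func):
        lock = asyncio.Lock()
        last_call = 0.0

        async def wrapper(*args, **kwargs):
            nonlocal last_call
            # Concurrency: reject a second caller instead of queueing it.
            if reject_concurrent and lock.locked():
                return None
            async with lock:
                now = time.monotonic()
                # Throttle: drop calls that start too soon after the last one.
                if throttle_period and now - last_call < throttle_period:
                    return None
                last_call = now
                return await func(*args, **kwargs)

        return wrapper
    return decorator

@job(reject_concurrent=True, throttle_period=0.1)
async def update() -> str:
    return "updated"
```

With the old combined limit, choosing "reject concurrent" forced a particular throttle behavior and vice versa; keeping them as separate parameters lets each job pick any combination.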
dependabot[bot]
cf77ab2290
Bump aiohttp from 3.12.14 to 3.12.15 (#6049)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:34:14 +02:00
dependabot[bot]
ceeffa3284
Bump ruff from 0.12.5 to 0.12.7 (#6051)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.12.5 to 0.12.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.12.5...0.12.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:33:07 +02:00
dependabot[bot]
31f2f70cd9
Bump sentry-sdk from 2.33.2 to 2.34.0 (#6050)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.2 to 2.34.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.2...2.34.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 14:32:11 +02:00
Stefan Agner
deac85bddb
Scrub WiFi fields from Sentry events (#6048)
Make sure WiFi fields are scrubbed from Sentry events to prevent
accidental exposure of sensitive information.
2025-07-29 17:42:43 +02:00
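The sentry-sdk supports a `before_send` hook for exactly this kind of scrubbing. A hedged sketch follows; the key names below (`ssid`, `psk`) are assumptions for illustration, not the Supervisor's actual filter:

```python
# Sketch of event scrubbing with the sentry-sdk `before_send` hook.
# The sensitive key names are illustrative assumptions.
from typing import Any

SENSITIVE_KEYS = {"ssid", "psk", "password"}

def scrub(obj: Any) -> Any:
    """Recursively replace values of sensitive keys in an event payload."""
    if isinstance(obj, dict):
        return {
            k: "[scrubbed]" if k.lower() in SENSITIVE_KEYS else scrub(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [scrub(v) for v in obj]
    return obj

def before_send(event: dict, hint: dict) -> dict:
    return scrub(event)

# Wiring it up would look like:
# sentry_sdk.init(dsn=..., before_send=before_send)
```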
Stefan Agner
7dcf5ba631
Enable IPv6 for containers on new installations (#6029)
* Enable IPv6 by default for new installations

Enable IPv6 by default for new Supervisor installations. Let's also
make the `enable_ipv6` attribute nullable, so we can distinguish
between "not set" and "set to false".

* Add pytest

* Add log message that system restart is required for IPv6 changes

* Fix API pytest

* Create resolution center issue when reboot is required

* Order log after actual setter call
2025-07-29 15:59:03 +02:00
dependabot[bot]
a004830131
Bump orjson from 3.11.0 to 3.11.1 (#6045)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.0 to 3.11.1.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.0...3.11.1)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:42 +02:00
dependabot[bot]
a8cc6c416d
Bump coverage from 7.10.0 to 7.10.1 (#6044)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.0 to 7.10.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.0...7.10.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:19 +02:00
dependabot[bot]
74b26642b0
Bump ruff from 0.12.4 to 0.12.5 (#6042)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:20:27 +02:00
dependabot[bot]
5e26ab5f4a
Bump gitpython from 3.1.44 to 3.1.45 (#6039)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:14:24 +02:00
dependabot[bot]
a841cb8282
Bump coverage from 7.9.2 to 7.10.0 (#6043) 2025-07-27 10:31:48 +02:00
dependabot[bot]
3b1b03c8a7
Bump dbus-fast from 2.44.1 to 2.44.2 (#6038)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.1 to 2.44.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.1...v2.44.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 16:06:19 -04:00
dependabot[bot]
680428f304
Bump sentry-sdk from 2.33.0 to 2.33.2 (#6037)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.0 to 2.33.2.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.0...2.33.2)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:44:35 -04:00
dependabot[bot]
f34128c37e
Bump ruff from 0.12.3 to 0.12.4 (#6031)
---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:43:56 -04:00
dependabot[bot]
2ed0682b34
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2 (#6032) 2025-07-18 10:00:58 +02:00
66 changed files with 1492 additions and 421 deletions

View File

@@ -106,7 +106,7 @@ jobs:
       - name: Build wheels
         if: needs.init.outputs.requirements == 'true'
-        uses: home-assistant/wheels@2025.03.0
+        uses: home-assistant/wheels@2025.07.0
         with:
           abi: cp313
           tag: musllinux_1_2
@@ -131,7 +131,7 @@ jobs:
       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.9.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"

View File

@@ -346,7 +346,7 @@ jobs:
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.9.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
       - name: Restore Python virtual environment

View File

@@ -1,10 +1,10 @@
 image: ghcr.io/home-assistant/{arch}-hassio-supervisor
 build_from:
-  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.21
-  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.21
-  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.21
-  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.21
-  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.21
+  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.22
+  armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.22
+  armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.22
+  amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.22
+  i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.22
 codenotary:
   signer: notary@home-assistant.io
   base_image: notary@home-assistant.io

View File

@@ -1,8 +1,8 @@
 aiodns==3.5.0
-aiohttp==3.12.14
+aiohttp==3.12.15
 atomicwrites-homeassistant==1.4.1
 attrs==25.3.0
-awesomeversion==25.5.0
+awesomeversion==25.8.0
 blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.2
@@ -14,17 +14,17 @@ deepmerge==2.0
 dirhash==0.5.0
 docker==7.1.0
 faust-cchardet==2.1.19
-gitpython==3.1.44
+gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.11.0
+orjson==3.11.1
 pulsectl==24.12.0
 pyudev==0.24.3
 PyYAML==6.0.2
 requests==2.32.4
 securetar==2025.2.1
-sentry-sdk==2.33.0
+sentry-sdk==2.34.1
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.1
+dbus-fast==2.44.3
 zlib-fast==0.2.1

View File

@@ -1,6 +1,6 @@
 astroid==3.3.11
-coverage==7.9.2
-mypy==1.17.0
+coverage==7.10.2
+mypy==1.17.1
 pre-commit==4.2.0
 pylint==3.3.7
 pytest-aiohttp==1.1.0
@@ -8,7 +8,7 @@ pytest-asyncio==0.25.2
 pytest-cov==6.2.1
 pytest-timeout==2.4.0
 pytest==8.4.1
-ruff==0.12.3
+ruff==0.12.7
 time-machine==2.16.0
 types-docker==7.1.0.20250705
 types-pyyaml==6.0.12.20250516

View File

@@ -77,7 +77,7 @@ from ..exceptions import (
 )
 from ..hardware.data import Device
 from ..homeassistant.const import WSEvent
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency, JobThrottle
 from ..jobs.decorator import Job
 from ..resolution.const import ContextType, IssueType, UnhealthyReason
 from ..resolution.data import Issue
@@ -733,8 +733,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_unload",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def unload(self) -> None:
         """Unload add-on and remove data."""
@@ -766,8 +766,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_install",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def install(self) -> None:
         """Install and setup this addon."""
@@ -807,8 +807,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_uninstall",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def uninstall(
         self, *, remove_config: bool, remove_image: bool = True
@@ -873,8 +873,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_update",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def update(self) -> asyncio.Task | None:
         """Update this addon to latest version.
@@ -923,8 +923,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_rebuild",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def rebuild(self) -> asyncio.Task | None:
         """Rebuild this addons container and image.
@@ -1068,8 +1068,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_start",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def start(self) -> asyncio.Task:
         """Set options and start add-on.
@@ -1117,8 +1117,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_stop",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def stop(self) -> None:
         """Stop add-on."""
@@ -1131,8 +1131,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_restart",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def restart(self) -> asyncio.Task:
         """Restart add-on.
@@ -1166,8 +1166,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_write_stdin",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def write_stdin(self, data) -> None:
         """Write data to add-on stdin."""
@@ -1200,8 +1200,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_begin_backup",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def begin_backup(self) -> bool:
         """Execute pre commands or stop addon if necessary.
@@ -1222,8 +1222,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_end_backup",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def end_backup(self) -> asyncio.Task | None:
         """Execute post commands or restart addon if necessary.
@@ -1260,8 +1260,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_backup",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
         """Backup state of an add-on.
@@ -1368,8 +1368,8 @@ class Addon(AddonModel):
     @Job(
         name="addon_restore",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=AddonsJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
         """Restore state of an add-on.
@@ -1521,10 +1521,10 @@ class Addon(AddonModel):
     @Job(
         name="addon_restart_after_problem",
-        limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
         on_condition=AddonsJobError,
+        throttle=JobThrottle.GROUP_RATE_LIMIT,
     )
     async def _restart_after_problem(self, state: ContainerState):
         """Restart unhealthy or failed addon."""

View File

@@ -6,6 +6,8 @@ from typing import Any
 from aiohttp import web
 import voluptuous as vol
 
+from supervisor.resolution.const import ContextType, IssueType, SuggestionType
+
 from ..const import (
     ATTR_ENABLE_IPV6,
     ATTR_HOSTNAME,
@@ -32,7 +34,7 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
 )
 
 # pylint: disable=no-value-for-parameter
-SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})
+SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})
 
 
 class APIDocker(CoreSysAttributes):
@@ -59,8 +61,17 @@ class APIDocker(CoreSysAttributes):
         """Set docker options."""
         body = await api_validate(SCHEMA_OPTIONS, request)
 
-        if ATTR_ENABLE_IPV6 in body:
+        if (
+            ATTR_ENABLE_IPV6 in body
+            and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
+        ):
             self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
+            _LOGGER.info("Host system reboot required to apply new IPv6 configuration")
+            self.sys_resolution.create_issue(
+                IssueType.REBOOT_REQUIRED,
+                ContextType.SYSTEM,
+                suggestions=[SuggestionType.EXECUTE_REBOOT],
+            )
 
         await self.sys_docker.config.save_data()

View File

@@ -262,41 +262,35 @@ class Backup(JobGroup):
     def __eq__(self, other: Any) -> bool:
         """Return true if backups have same metadata."""
-        if not isinstance(other, Backup):
-            return False
-
-        # Compare all fields except ones about protection. Current encryption status does not affect equality
-        keys = self._data.keys() | other._data.keys()
-        for k in keys - IGNORED_COMPARISON_FIELDS:
-            if (
-                k not in self._data
-                or k not in other._data
-                or self._data[k] != other._data[k]
-            ):
-                _LOGGER.info(
-                    "Backup %s and %s not equal because %s field has different value: %s and %s",
-                    self.slug,
-                    other.slug,
-                    k,
-                    self._data.get(k),
-                    other._data.get(k),
-                )
-                return False
-        return True
+        return isinstance(other, Backup) and self.slug == other.slug
+
+    def __hash__(self) -> int:
+        """Return hash of backup."""
+        return hash(self.slug)
 
     def consolidate(self, backup: Self) -> None:
         """Consolidate two backups with same slug in different locations."""
-        if self.slug != backup.slug:
+        if self != backup:
             raise ValueError(
                 f"Backup {self.slug} and {backup.slug} are not the same backup"
             )
-        if self != backup:
-            raise BackupInvalidError(
-                f"Backup in {backup.location} and {self.location} both have slug {self.slug} but are not the same!"
-            )
+
+        # Compare all fields except ones about protection. Current encryption status does not affect equality
+        other_data = backup._data  # pylint: disable=protected-access
+        keys = self._data.keys() | other_data.keys()
+        for k in keys - IGNORED_COMPARISON_FIELDS:
+            if (
+                k not in self._data
+                or k not in other_data
+                or self._data[k] != other_data[k]
+            ):
+                raise BackupInvalidError(
+                    f"Cannot consolidate backups in {backup.location} and {self.location} with slug {self.slug} "
+                    f"because field {k} has different values: {self._data.get(k)} and {other_data.get(k)}!",
+                    _LOGGER.error,
+                )
 
         # In case of conflict we always ignore the ones from the first one. But log them to let the user know
         if conflict := {
             loc: val.path
             for loc, val in self.all_locations.items()
View File

@ -27,7 +27,7 @@ from ..exceptions import (
BackupJobError, BackupJobError,
BackupMountDownError, BackupMountDownError,
) )
from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobCondition, JobExecutionLimit from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobConcurrency, JobCondition
from ..jobs.decorator import Job from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup from ..jobs.job_group import JobGroup
from ..mounts.mount import Mount from ..mounts.mount import Mount
@ -583,9 +583,9 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_full_backup", name="backup_manager_full_backup",
conditions=[JobCondition.RUNNING], conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
async def do_backup_full( async def do_backup_full(
self, self,
@ -630,9 +630,9 @@ class BackupManager(FileConfiguration, JobGroup):
@Job( @Job(
name="backup_manager_partial_backup", name="backup_manager_partial_backup",
conditions=[JobCondition.RUNNING], conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError, on_condition=BackupJobError,
cleanup=False, cleanup=False,
concurrency=JobConcurrency.GROUP_REJECT,
) )
     async def do_backup_partial(
         self,
@@ -810,9 +810,9 @@ class BackupManager(FileConfiguration, JobGroup):
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
         ],
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=BackupJobError,
         cleanup=False,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def do_restore_full(
         self,
@@ -869,9 +869,9 @@ class BackupManager(FileConfiguration, JobGroup):
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
         ],
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=BackupJobError,
         cleanup=False,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def do_restore_partial(
         self,
@@ -930,8 +930,8 @@ class BackupManager(FileConfiguration, JobGroup):
     @Job(
         name="backup_manager_freeze_all",
         conditions=[JobCondition.RUNNING],
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=BackupJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def freeze_all(self, timeout: float = DEFAULT_FREEZE_TIMEOUT) -> None:
         """Freeze system to prepare for an external backup such as an image snapshot."""
@@ -999,9 +999,9 @@ class BackupManager(FileConfiguration, JobGroup):
     @Job(
         name="backup_manager_signal_thaw",
         conditions=[JobCondition.FROZEN],
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=BackupJobError,
         internal=True,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def thaw_all(self) -> None:
         """Signal thaw task to begin unfreezing the system."""
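The hunks above replace `limit=JobExecutionLimit.GROUP_ONCE` with `concurrency=JobConcurrency.GROUP_REJECT`: a second caller hitting a running backup/restore job is rejected rather than queued. A minimal sketch of that reject behavior, using only an `asyncio.Lock` (the names `group_reject`, `JobRejectedError`, and `do_restore` are illustrative, not the real Supervisor decorator):

```python
import asyncio


class JobRejectedError(Exception):
    """Raised when a second caller hits a reject-concurrency job."""


def group_reject(func):
    """Reject overlapping calls instead of queueing them (cf. GROUP_REJECT)."""
    lock = asyncio.Lock()

    async def wrapper(*args, **kwargs):
        if lock.locked():
            raise JobRejectedError(f"{func.__name__} already running")
        async with lock:
            return await func(*args, **kwargs)

    return wrapper


@group_reject
async def do_restore():
    await asyncio.sleep(0.05)  # stand-in for the real restore work
    return "done"


async def main():
    first = asyncio.create_task(do_restore())
    await asyncio.sleep(0)  # let the first call acquire the lock
    try:
        await do_restore()
        overlapped = True
    except JobRejectedError:
        overlapped = False
    return await first, overlapped


result, overlapped = asyncio.run(main())
print(result, overlapped)  # -> done False
```

The real decorator scopes the lock per job group (hence `GROUP_`), so concurrent restores of *different* groups can still proceed.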

View File

@@ -32,6 +32,7 @@ DBUS_IFACE_HOSTNAME = "org.freedesktop.hostname1"
 DBUS_IFACE_IP4CONFIG = "org.freedesktop.NetworkManager.IP4Config"
 DBUS_IFACE_IP6CONFIG = "org.freedesktop.NetworkManager.IP6Config"
 DBUS_IFACE_NM = "org.freedesktop.NetworkManager"
+DBUS_IFACE_NVME_CONTROLLER = "org.freedesktop.UDisks2.NVMe.Controller"
 DBUS_IFACE_PARTITION = "org.freedesktop.UDisks2.Partition"
 DBUS_IFACE_PARTITION_TABLE = "org.freedesktop.UDisks2.PartitionTable"
 DBUS_IFACE_RAUC_INSTALLER = "de.pengutronix.rauc.Installer"
@@ -87,6 +88,7 @@ DBUS_ATTR_CONNECTIVITY = "Connectivity"
 DBUS_ATTR_CURRENT_DEVICE = "CurrentDevice"
 DBUS_ATTR_CURRENT_DNS_SERVER = "CurrentDNSServer"
 DBUS_ATTR_CURRENT_DNS_SERVER_EX = "CurrentDNSServerEx"
+DBUS_ATTR_CONTROLLER_ID = "ControllerID"
 DBUS_ATTR_DEFAULT = "Default"
 DBUS_ATTR_DEPLOYMENT = "Deployment"
 DBUS_ATTR_DESCRIPTION = "Description"
@@ -111,6 +113,7 @@ DBUS_ATTR_DRIVER = "Driver"
 DBUS_ATTR_EJECTABLE = "Ejectable"
 DBUS_ATTR_FALLBACK_DNS = "FallbackDNS"
 DBUS_ATTR_FALLBACK_DNS_EX = "FallbackDNSEx"
+DBUS_ATTR_FGUID = "FGUID"
 DBUS_ATTR_FINISH_TIMESTAMP = "FinishTimestamp"
 DBUS_ATTR_FIRMWARE_TIMESTAMP_MONOTONIC = "FirmwareTimestampMonotonic"
 DBUS_ATTR_FREQUENCY = "Frequency"
@@ -147,6 +150,7 @@ DBUS_ATTR_NAMESERVERS = "Nameservers"
 DBUS_ATTR_NTP = "NTP"
 DBUS_ATTR_NTPSYNCHRONIZED = "NTPSynchronized"
 DBUS_ATTR_NUMBER = "Number"
+DBUS_ATTR_NVME_REVISION = "NVMeRevision"
 DBUS_ATTR_OFFSET = "Offset"
 DBUS_ATTR_OPERATING_SYSTEM_PRETTY_NAME = "OperatingSystemPrettyName"
 DBUS_ATTR_OPERATION = "Operation"
@@ -161,15 +165,24 @@ DBUS_ATTR_REMOVABLE = "Removable"
 DBUS_ATTR_RESOLV_CONF_MODE = "ResolvConfMode"
 DBUS_ATTR_REVISION = "Revision"
 DBUS_ATTR_RCMANAGER = "RcManager"
+DBUS_ATTR_SANITIZE_PERCENT_REMAINING = "SanitizePercentRemaining"
+DBUS_ATTR_SANITIZE_STATUS = "SanitizeStatus"
 DBUS_ATTR_SEAT = "Seat"
 DBUS_ATTR_SERIAL = "Serial"
 DBUS_ATTR_SIZE = "Size"
+DBUS_ATTR_SMART_CRITICAL_WARNING = "SmartCriticalWarning"
+DBUS_ATTR_SMART_POWER_ON_HOURS = "SmartPowerOnHours"
+DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING = "SmartSelftestPercentRemaining"
+DBUS_ATTR_SMART_SELFTEST_STATUS = "SmartSelftestStatus"
+DBUS_ATTR_SMART_TEMPERATURE = "SmartTemperature"
+DBUS_ATTR_SMART_UPDATED = "SmartUpdated"
 DBUS_ATTR_SSID = "Ssid"
 DBUS_ATTR_STATE = "State"
 DBUS_ATTR_STATE_FLAGS = "StateFlags"
 DBUS_ATTR_STATIC_HOSTNAME = "StaticHostname"
 DBUS_ATTR_STATIC_OPERATING_SYSTEM_CPE_NAME = "OperatingSystemCPEName"
 DBUS_ATTR_STRENGTH = "Strength"
+DBUS_ATTR_SUBSYSTEM_NQN = "SubsystemNQN"
 DBUS_ATTR_SUPPORTED_FILESYSTEMS = "SupportedFilesystems"
 DBUS_ATTR_SYMLINKS = "Symlinks"
 DBUS_ATTR_SWAP_SIZE = "SwapSize"
@@ -180,6 +193,7 @@ DBUS_ATTR_TIMEUSEC = "TimeUSec"
 DBUS_ATTR_TIMEZONE = "Timezone"
 DBUS_ATTR_TRANSACTION_STATISTICS = "TransactionStatistics"
 DBUS_ATTR_TYPE = "Type"
+DBUS_ATTR_UNALLOCATED_CAPACITY = "UnallocatedCapacity"
 DBUS_ATTR_USER_LED = "UserLED"
 DBUS_ATTR_USERSPACE_TIMESTAMP_MONOTONIC = "UserspaceTimestampMonotonic"
 DBUS_ATTR_UUID_UPPERCASE = "UUID"

View File

@@ -2,6 +2,7 @@

 import asyncio
 import logging
+from pathlib import Path
 from typing import Any

 from awesomeversion import AwesomeVersion
@@ -132,7 +133,10 @@ class UDisks2Manager(DBusInterfaceProxy):
             for drive in drives
         }

-        # Update existing drives
+        # For existing drives, need to check their type and call update
+        await asyncio.gather(
+            *[self._drives[path].check_type() for path in unchanged_drives]
+        )
         await asyncio.gather(
             *[self._drives[path].update() for path in unchanged_drives]
         )
@@ -160,20 +164,33 @@ class UDisks2Manager(DBusInterfaceProxy):
         return list(self._drives.values())

     @dbus_connected
-    def get_drive(self, drive_path: str) -> UDisks2Drive:
+    def get_drive(self, object_path: str) -> UDisks2Drive:
         """Get additional info on drive from object path."""
-        if drive_path not in self._drives:
-            raise DBusObjectError(f"Drive {drive_path} not found")
-        return self._drives[drive_path]
+        if object_path not in self._drives:
+            raise DBusObjectError(f"Drive {object_path} not found")
+        return self._drives[object_path]

     @dbus_connected
-    def get_block_device(self, device_path: str) -> UDisks2Block:
+    def get_block_device(self, object_path: str) -> UDisks2Block:
         """Get additional info on block device from object path."""
-        if device_path not in self._block_devices:
-            raise DBusObjectError(f"Block device {device_path} not found")
-        return self._block_devices[device_path]
+        if object_path not in self._block_devices:
+            raise DBusObjectError(f"Block device {object_path} not found")
+        return self._block_devices[object_path]
+
+    @dbus_connected
+    def get_block_device_by_path(self, device_path: Path) -> UDisks2Block:
+        """Get additional info on block device from device path.
+
+        Uses cache only. Use `resolve_device` to force a call for fresh data.
+        """
+        for device in self._block_devices.values():
+            if device.device == device_path:
+                return device
+        raise DBusObjectError(
+            f"Block device not found with device path {device_path.as_posix()}"
+        )

     @dbus_connected
     async def resolve_device(self, devspec: DeviceSpecification) -> list[UDisks2Block]:
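The new `get_block_device_by_path` helper resolves a device node like `/dev/sda` against the manager's existing cache, keyed by D-Bus object path, with a plain linear scan. A self-contained sketch of that lookup (the `Block` and `BlockCache` names are illustrative stand-ins for `UDisks2Block` and the manager, not the real classes):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Block:
    """Minimal stand-in for UDisks2Block: object path plus device node."""

    object_path: str
    device: Path


class BlockCache:
    def __init__(self, blocks: list[Block]) -> None:
        # Keyed by D-Bus object path, mirroring UDisks2Manager._block_devices
        self._block_devices = {b.object_path: b for b in blocks}

    def get_block_device_by_path(self, device_path: Path) -> Block:
        """Linear scan of the cache by device node, as in the new helper."""
        for device in self._block_devices.values():
            if device.device == device_path:
                return device
        raise KeyError(
            f"Block device not found with device path {device_path.as_posix()}"
        )


cache = BlockCache(
    [Block("/org/freedesktop/UDisks2/block_devices/sda", Path("/dev/sda"))]
)
found = cache.get_block_device_by_path(Path("/dev/sda"))
print(found.object_path)
```

As the docstring in the diff notes, this reads only cached state; `resolve_device` is the path that actually round-trips to UDisks2 for fresh data.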

View File

@@ -1,6 +1,7 @@
 """Interface to UDisks2 Drive over D-Bus."""

 from datetime import UTC, datetime
+from typing import Any

 from dbus_fast.aio import MessageBus

@@ -18,11 +19,13 @@ from ..const import (
     DBUS_ATTR_VENDOR,
     DBUS_ATTR_WWN,
     DBUS_IFACE_DRIVE,
+    DBUS_IFACE_NVME_CONTROLLER,
     DBUS_NAME_UDISKS2,
 )
 from ..interface import DBusInterfaceProxy, dbus_property
 from ..utils import dbus_connected
 from .const import UDISKS2_DEFAULT_OPTIONS
+from .nvme_controller import UDisks2NVMeController


 class UDisks2Drive(DBusInterfaceProxy):
@@ -35,11 +38,18 @@ class UDisks2Drive(DBusInterfaceProxy):
     bus_name: str = DBUS_NAME_UDISKS2
     properties_interface: str = DBUS_IFACE_DRIVE

+    _nvme_controller: UDisks2NVMeController | None = None
+
     def __init__(self, object_path: str) -> None:
         """Initialize object."""
         self._object_path = object_path
         super().__init__()

+    async def connect(self, bus: MessageBus) -> None:
+        """Connect to bus."""
+        await super().connect(bus)
+        await self._reload_interfaces()
+
     @staticmethod
     async def new(object_path: str, bus: MessageBus) -> "UDisks2Drive":
         """Create and connect object."""
@@ -52,6 +62,11 @@ class UDisks2Drive(DBusInterfaceProxy):
         """Object path for dbus object."""
         return self._object_path

+    @property
+    def nvme_controller(self) -> UDisks2NVMeController | None:
+        """NVMe controller interface if drive is one."""
+        return self._nvme_controller
+
     @property
     @dbus_property
     def vendor(self) -> str:
@@ -130,3 +145,40 @@ class UDisks2Drive(DBusInterfaceProxy):
     async def eject(self) -> None:
         """Eject media from drive."""
         await self.connected_dbus.Drive.call("eject", UDISKS2_DEFAULT_OPTIONS)
+
+    @dbus_connected
+    async def update(self, changed: dict[str, Any] | None = None) -> None:
+        """Update properties via D-Bus."""
+        await super().update(changed)
+        if not changed and self.nvme_controller:
+            await self.nvme_controller.update()
+
+    @dbus_connected
+    async def check_type(self) -> None:
+        """Check if type of drive has changed and adjust interfaces if so."""
+        introspection = await self.connected_dbus.introspect()
+        interfaces = {intr.name for intr in introspection.interfaces}
+
+        # If interfaces changed, update the proxy from introspection and reload interfaces
+        if interfaces != set(self.connected_dbus.proxies.keys()):
+            await self.connected_dbus.init_proxy(introspection=introspection)
+            await self._reload_interfaces()
+
+    @dbus_connected
+    async def _reload_interfaces(self) -> None:
+        """Reload interfaces from introspection as necessary."""
+        # Check if drive is an nvme controller
+        if (
+            not self.nvme_controller
+            and DBUS_IFACE_NVME_CONTROLLER in self.connected_dbus.proxies
+        ):
+            self._nvme_controller = UDisks2NVMeController(self.object_path)
+            await self._nvme_controller.initialize(self.connected_dbus)
+        elif (
+            self.nvme_controller
+            and DBUS_IFACE_NVME_CONTROLLER not in self.connected_dbus.proxies
+        ):
+            self.nvme_controller.stop_sync_property_changes()
+            self._nvme_controller = None

View File

@@ -0,0 +1,200 @@
+"""Interface to UDisks2 NVMe Controller over D-Bus."""
+
+from dataclasses import dataclass
+from datetime import UTC, datetime
+from typing import Any, cast
+
+from dbus_fast.aio import MessageBus
+
+from ..const import (
+    DBUS_ATTR_CONTROLLER_ID,
+    DBUS_ATTR_FGUID,
+    DBUS_ATTR_NVME_REVISION,
+    DBUS_ATTR_SANITIZE_PERCENT_REMAINING,
+    DBUS_ATTR_SANITIZE_STATUS,
+    DBUS_ATTR_SMART_CRITICAL_WARNING,
+    DBUS_ATTR_SMART_POWER_ON_HOURS,
+    DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING,
+    DBUS_ATTR_SMART_SELFTEST_STATUS,
+    DBUS_ATTR_SMART_TEMPERATURE,
+    DBUS_ATTR_SMART_UPDATED,
+    DBUS_ATTR_STATE,
+    DBUS_ATTR_SUBSYSTEM_NQN,
+    DBUS_ATTR_UNALLOCATED_CAPACITY,
+    DBUS_IFACE_NVME_CONTROLLER,
+    DBUS_NAME_UDISKS2,
+)
+from ..interface import DBusInterfaceProxy, dbus_property
+from ..utils import dbus_connected
+from .const import UDISKS2_DEFAULT_OPTIONS
+
+
+@dataclass(frozen=True, slots=True)
+class SmartStatus:
+    """Smart status information for NVMe devices.
+
+    https://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.NVMe.Controller.html#gdbus-method-org-freedesktop-UDisks2-NVMe-Controller.SmartGetAttributes
+    """
+
+    available_spare: int
+    spare_threshold: int
+    percent_used: int
+    total_data_read: int
+    total_data_written: int
+    controller_busy_minutes: int
+    power_cycles: int
+    unsafe_shutdowns: int
+    media_errors: int
+    number_error_log_entries: int
+    temperature_sensors: list[int]
+    warning_composite_temperature: int
+    critical_composite_temperature: int
+    warning_temperature_minutes: int
+    critical_temperature_minutes: int
+
+    @classmethod
+    def from_smart_get_attributes_resp(cls, resp: dict[str, Any]):
+        """Convert SmartGetAttributes response dictionary to instance."""
+        return cls(
+            available_spare=resp["avail_spare"],
+            spare_threshold=resp["spare_thresh"],
+            percent_used=resp["percent_used"],
+            total_data_read=resp["total_data_read"],
+            total_data_written=resp["total_data_written"],
+            controller_busy_minutes=resp["ctrl_busy_time"],
+            power_cycles=resp["power_cycles"],
+            unsafe_shutdowns=resp["unsafe_shutdowns"],
+            media_errors=resp["media_errors"],
+            number_error_log_entries=resp["num_err_log_entries"],
+            temperature_sensors=resp["temp_sensors"],
+            warning_composite_temperature=resp["wctemp"],
+            critical_composite_temperature=resp["cctemp"],
+            warning_temperature_minutes=resp["warning_temp_time"],
+            critical_temperature_minutes=resp["critical_temp_time"],
+        )
+
+
+class UDisks2NVMeController(DBusInterfaceProxy):
+    """Handle D-Bus interface for NVMe Controller object.
+
+    https://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.NVMe.Controller.html
+    """
+
+    name: str = DBUS_IFACE_NVME_CONTROLLER
+    bus_name: str = DBUS_NAME_UDISKS2
+    properties_interface: str = DBUS_IFACE_NVME_CONTROLLER
+
+    def __init__(self, object_path: str) -> None:
+        """Initialize object."""
+        self._object_path = object_path
+        super().__init__()
+
+    @staticmethod
+    async def new(object_path: str, bus: MessageBus) -> "UDisks2NVMeController":
+        """Create and connect object."""
+        obj = UDisks2NVMeController(object_path)
+        await obj.connect(bus)
+        return obj
+
+    @property
+    def object_path(self) -> str:
+        """Object path for dbus object."""
+        return self._object_path
+
+    @property
+    @dbus_property
+    def state(self) -> str:
+        """Return NVMe controller state."""
+        return self.properties[DBUS_ATTR_STATE]
+
+    @property
+    @dbus_property
+    def controller_id(self) -> int:
+        """Return controller ID."""
+        return self.properties[DBUS_ATTR_CONTROLLER_ID]
+
+    @property
+    @dbus_property
+    def subsystem_nqn(self) -> str:
+        """Return NVM Subsystem NVMe Qualified Name."""
+        return cast(bytes, self.properties[DBUS_ATTR_SUBSYSTEM_NQN]).decode("utf-8")
+
+    @property
+    @dbus_property
+    def fguid(self) -> str:
+        """Return FRU GUID."""
+        return self.properties[DBUS_ATTR_FGUID]
+
+    @property
+    @dbus_property
+    def nvme_revision(self) -> str:
+        """Return NVMe version information."""
+        return self.properties[DBUS_ATTR_NVME_REVISION]
+
+    @property
+    @dbus_property
+    def unallocated_capacity(self) -> int:
+        """Return unallocated capacity."""
+        return self.properties[DBUS_ATTR_UNALLOCATED_CAPACITY]
+
+    @property
+    @dbus_property
+    def smart_updated(self) -> datetime | None:
+        """Return last time smart information was updated (or None if it hasn't been).
+
+        If this is None other smart properties are not meaningful.
+        """
+        if not (ts := self.properties[DBUS_ATTR_SMART_UPDATED]):
+            return None
+        return datetime.fromtimestamp(ts, UTC)
+
+    @property
+    @dbus_property
+    def smart_critical_warning(self) -> list[str]:
+        """Return critical warnings issued for current state of controller."""
+        return self.properties[DBUS_ATTR_SMART_CRITICAL_WARNING]
+
+    @property
+    @dbus_property
+    def smart_power_on_hours(self) -> int:
+        """Return hours the disk has been powered on."""
+        return self.properties[DBUS_ATTR_SMART_POWER_ON_HOURS]
+
+    @property
+    @dbus_property
+    def smart_temperature(self) -> int:
+        """Return current composite temperature of controller in Kelvin."""
+        return self.properties[DBUS_ATTR_SMART_TEMPERATURE]
+
+    @property
+    @dbus_property
+    def smart_selftest_status(self) -> str:
+        """Return status of last self-test."""
+        return self.properties[DBUS_ATTR_SMART_SELFTEST_STATUS]
+
+    @property
+    @dbus_property
+    def smart_selftest_percent_remaining(self) -> int:
+        """Return percent remaining of self-test."""
+        return self.properties[DBUS_ATTR_SMART_SELFTEST_PERCENT_REMAINING]
+
+    @property
+    @dbus_property
+    def sanitize_status(self) -> str:
+        """Return status of last sanitize operation."""
+        return self.properties[DBUS_ATTR_SANITIZE_STATUS]
+
+    @property
+    @dbus_property
+    def sanitize_percent_remaining(self) -> int:
+        """Return percent remaining of sanitize operation."""
+        return self.properties[DBUS_ATTR_SANITIZE_PERCENT_REMAINING]
+
+    @dbus_connected
+    async def smart_get_attributes(self) -> SmartStatus:
+        """Return smart/health information of controller."""
+        return SmartStatus.from_smart_get_attributes_resp(
+            await self.connected_dbus.NVMe.Controller.call(
+                "smart_get_attributes", UDISKS2_DEFAULT_OPTIONS
+            )
+        )

View File

@@ -39,7 +39,7 @@ from ..exceptions import (
 )
 from ..hardware.const import PolicyGroup
 from ..hardware.data import Device
-from ..jobs.const import JobCondition, JobExecutionLimit
+from ..jobs.const import JobConcurrency, JobCondition
 from ..jobs.decorator import Job
 from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
 from ..utils.sentry import async_capture_exception
@@ -553,8 +553,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""
@@ -619,8 +619,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_update",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def update(
         self,
@@ -647,8 +647,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_install",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def install(
         self,
@@ -735,8 +735,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_import_image",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def import_image(self, tar_file: Path) -> None:
         """Import a tar file as image."""
@@ -750,7 +750,7 @@ class DockerAddon(DockerInterface):
         with suppress(DockerError):
             await self.cleanup()

-    @Job(name="docker_addon_cleanup", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(name="docker_addon_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
     async def cleanup(
         self,
         old_image: str | None = None,
@@ -774,8 +774,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_write_stdin",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def write_stdin(self, data: bytes) -> None:
         """Write to add-on stdin."""
@@ -808,8 +808,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_stop",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def stop(self, remove_container: bool = True) -> None:
         """Stop/remove Docker container."""
@@ -848,8 +848,8 @@ class DockerAddon(DockerInterface):
     @Job(
         name="docker_addon_hardware_events",
         conditions=[JobCondition.OS_AGENT],
-        limit=JobExecutionLimit.SINGLE_WAIT,
         internal=True,
+        concurrency=JobConcurrency.QUEUE,
     )
     async def _hardware_events(self, device: Device) -> None:
         """Process Hardware events for adjust device access."""

View File

@@ -9,7 +9,7 @@ from ..const import DOCKER_CPU_RUNTIME_ALLOCATION
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
@@ -89,8 +89,8 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
     @Job(
         name="docker_audio_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""

View File

@@ -4,7 +4,7 @@ import logging
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import ENV_TIME, ENV_TOKEN
 from .interface import DockerInterface
@@ -29,8 +29,8 @@ class DockerCli(DockerInterface, CoreSysAttributes):
     @Job(
         name="docker_cli_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""

View File

@@ -6,7 +6,7 @@ from docker.types import Mount
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import ENV_TIME, MOUNT_DBUS, MountType
 from .interface import DockerInterface
@@ -31,8 +31,8 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
     @Job(
         name="docker_dns_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""

View File

@@ -12,7 +12,7 @@ from ..const import LABEL_MACHINE
 from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
 from ..homeassistant.const import LANDINGPAGE
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
@@ -161,8 +161,8 @@ class DockerHomeAssistant(DockerInterface):
     @Job(
         name="docker_home_assistant_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self, *, restore_job_id: str | None = None) -> None:
         """Run Docker image."""
@@ -200,8 +200,8 @@ class DockerHomeAssistant(DockerInterface):
     @Job(
         name="docker_home_assistant_execute_command",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def execute_command(self, command: str) -> CommandReturn:
         """Create a temporary container and run command."""

View File

@@ -39,7 +39,7 @@ from ..exceptions import (
     DockerRequestError,
     DockerTrustError,
 )
-from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobExecutionLimit
+from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
 from ..jobs.decorator import Job
 from ..jobs.job_group import JobGroup
 from ..resolution.const import ContextType, IssueType, SuggestionType
@@ -219,8 +219,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_install",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def install(
         self,
@@ -338,7 +338,7 @@ class DockerInterface(JobGroup, ABC):
         return _container_state_from_model(docker_container)

-    @Job(name="docker_interface_attach", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(name="docker_interface_attach", concurrency=JobConcurrency.GROUP_QUEUE)
     async def attach(
         self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
     ) -> None:
@@ -376,8 +376,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""
@@ -406,8 +406,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_stop",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def stop(self, remove_container: bool = True) -> None:
         """Stop/remove Docker container."""
@@ -421,8 +421,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_start",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     def start(self) -> Awaitable[None]:
         """Start Docker container."""
@@ -430,8 +430,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_remove",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def remove(self, *, remove_image: bool = True) -> None:
         """Remove Docker images."""
@@ -448,8 +448,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_check_image",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def check_image(
         self,
@@ -497,8 +497,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_update",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def update(
         self, version: AwesomeVersion, image: str | None = None, latest: bool = False
@@ -526,7 +526,7 @@ class DockerInterface(JobGroup, ABC):
         return b""

-    @Job(name="docker_interface_cleanup", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(name="docker_interface_cleanup", concurrency=JobConcurrency.GROUP_QUEUE)
     async def cleanup(
         self,
         old_image: str | None = None,
@@ -543,8 +543,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_restart",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     def restart(self) -> Awaitable[None]:
         """Restart docker container."""
@@ -554,8 +554,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_execute_command",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def execute_command(self, command: str) -> CommandReturn:
         """Create a temporary container and run command."""
@@ -619,8 +619,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_run_inside",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     def run_inside(self, command: str) -> Awaitable[CommandReturn]:
         """Execute a command inside Docker container."""
@@ -635,8 +635,8 @@ class DockerInterface(JobGroup, ABC):
     @Job(
         name="docker_interface_check_trust",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def check_trust(self) -> None:
         """Check trust of exists Docker image."""

View File

@@ -95,12 +95,12 @@ class DockerConfig(FileConfiguration):
         super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)

     @property
-    def enable_ipv6(self) -> bool:
+    def enable_ipv6(self) -> bool | None:
         """Return IPv6 configuration for docker network."""
-        return self._data.get(ATTR_ENABLE_IPV6, False)
+        return self._data.get(ATTR_ENABLE_IPV6, None)

     @enable_ipv6.setter
-    def enable_ipv6(self, value: bool) -> None:
+    def enable_ipv6(self, value: bool | None) -> None:
         """Set IPv6 configuration for docker network."""
         self._data[ATTR_ENABLE_IPV6] = value

View File

@@ -4,7 +4,7 @@ import logging

 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import ENV_TIME, Capabilities
 from .interface import DockerInterface
@@ -34,8 +34,8 @@ class DockerMulticast(DockerInterface, CoreSysAttributes):
     @Job(
         name="docker_multicast_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""

View File

@@ -47,6 +47,8 @@ DOCKER_NETWORK_PARAMS = {
     "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
 }

+DOCKER_ENABLE_IPV6_DEFAULT = True
+

 class DockerNetwork:
     """Internal Supervisor Network.
@@ -59,7 +61,7 @@ class DockerNetwork:
         self.docker: docker.DockerClient = docker_client
         self._network: docker.models.networks.Network

-    async def post_init(self, enable_ipv6: bool = False) -> Self:
+    async def post_init(self, enable_ipv6: bool | None = None) -> Self:
         """Post init actions that must be done in event loop."""
         self._network = await asyncio.get_running_loop().run_in_executor(
             None, self._get_network, enable_ipv6
@@ -111,16 +113,24 @@ class DockerNetwork:
         """Return observer of the network."""
         return DOCKER_IPV4_NETWORK_MASK[6]

-    def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network:
+    def _get_network(
+        self, enable_ipv6: bool | None = None
+    ) -> docker.models.networks.Network:
         """Get supervisor network."""
         try:
             if network := self.docker.networks.get(DOCKER_NETWORK):
-                if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6:
+                current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
+                # If the network exists and we don't have an explicit setting,
+                # simply stick with what we have.
+                if enable_ipv6 is None or current_ipv6 == enable_ipv6:
                     return network
+
+                # We have an explicit setting which differs from the current state.
                 _LOGGER.info(
                     "Migrating Supervisor network to %s",
                     "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
                 )
                 if (containers := network.containers) and (
                     containers_all := all(
                         container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
@@ -134,6 +144,7 @@ class DockerNetwork:
                         requests.RequestException,
                     ):
                         network.disconnect(container, force=True)
+
                 if not containers or containers_all:
                     try:
                         network.remove()
@@ -151,7 +162,9 @@ class DockerNetwork:
         _LOGGER.info("Can't find Supervisor network, creating a new network")

         network_params = DOCKER_NETWORK_PARAMS.copy()
-        network_params[ATTR_ENABLE_IPV6] = enable_ipv6
+        network_params[ATTR_ENABLE_IPV6] = (
+            DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
+        )

         try:
             self._network = self.docker.networks.create(**network_params)  # type: ignore

View File

@@ -5,7 +5,7 @@ import logging

 from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import ENV_TIME, ENV_TOKEN, MOUNT_DOCKER, RestartPolicy
 from .interface import DockerInterface
@@ -30,8 +30,8 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
     @Job(
         name="docker_observer_run",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=DockerJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def run(self) -> None:
         """Run Docker image."""

View File

@@ -10,7 +10,7 @@ import docker
 import requests

 from ..exceptions import DockerError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import PropagationMode
 from .interface import DockerInterface
@@ -45,7 +45,7 @@ class DockerSupervisor(DockerInterface):
             if mount.get("Destination") == "/data"
         )

-    @Job(name="docker_supervisor_attach", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(name="docker_supervisor_attach", concurrency=JobConcurrency.GROUP_QUEUE)
     async def attach(
         self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
     ) -> None:
@@ -77,7 +77,7 @@ class DockerSupervisor(DockerInterface):
             ipv4=self.sys_docker.network.supervisor,
         )

-    @Job(name="docker_supervisor_retag", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(name="docker_supervisor_retag", concurrency=JobConcurrency.GROUP_QUEUE)
     def retag(self) -> Awaitable[None]:
         """Retag latest image to version."""
         return self.sys_run_in_executor(self._retag)
@@ -108,7 +108,10 @@ class DockerSupervisor(DockerInterface):
                 f"Can't retag Supervisor version: {err}", _LOGGER.error
             ) from err

-    @Job(name="docker_supervisor_update_start_tag", limit=JobExecutionLimit.GROUP_WAIT)
+    @Job(
+        name="docker_supervisor_update_start_tag",
+        concurrency=JobConcurrency.GROUP_QUEUE,
+    )
     def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]:
         """Update start tag to new version."""
         return self.sys_run_in_executor(self._update_start_tag, image, version)

View File

@@ -5,7 +5,7 @@ from pathlib import Path
 import shutil

 from ..coresys import CoreSys, CoreSysAttributes
-from ..exceptions import HardwareNotFound
+from ..exceptions import DBusError, DBusObjectError, HardwareNotFound
 from .const import UdevSubsystem
 from .data import Device

@@ -14,6 +14,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 _MOUNTINFO: Path = Path("/proc/self/mountinfo")
 _BLOCK_DEVICE_CLASS = "/sys/class/block/{}"
 _BLOCK_DEVICE_EMMC_LIFE_TIME = "/sys/block/{}/device/life_time"
+_DEVICE_PATH = "/dev/{}"


 class HwDisk(CoreSysAttributes):
@@ -92,8 +93,67 @@ class HwDisk(CoreSysAttributes):
             optionsep += 1
         return mountinfoarr[optionsep + 2]

+    def _get_mount_source_device_name(self, path: str | Path) -> str | None:
+        """Get mount source device name.
+
+        Must be run in executor.
+        """
+        mount_source = self._get_mount_source(str(path))
+        if not mount_source or mount_source == "overlay":
+            return None
+
+        mount_source_path = Path(mount_source)
+        if not mount_source_path.is_block_device():
+            return None
+
+        # This looks a bit funky but it is more or less what lsblk is doing to get
+        # the parent dev reliably
+
+        # Get class device...
+        mount_source_device_part = Path(
+            _BLOCK_DEVICE_CLASS.format(mount_source_path.name)
+        )
+
+        # ... resolve symlink and get parent device from that path.
+        return mount_source_device_part.resolve().parts[-2]
+
+    async def _try_get_nvme_lifetime(self, device_name: str) -> float | None:
+        """Get NVMe device lifetime."""
+        device_path = Path(_DEVICE_PATH.format(device_name))
+        try:
+            block_device = self.sys_dbus.udisks2.get_block_device_by_path(device_path)
+            drive = self.sys_dbus.udisks2.get_drive(block_device.drive)
+        except DBusObjectError:
+            _LOGGER.warning(
+                "Unable to find UDisks2 drive for device at %s", device_path.as_posix()
+            )
+            return None
+
+        # Exit if this isn't an NVMe device
+        if not drive.nvme_controller:
+            return None
+
+        try:
+            smart_log = await drive.nvme_controller.smart_get_attributes()
+        except DBusError as err:
+            _LOGGER.warning(
+                "Unable to get smart log for drive %s due to %s", drive.id, err
+            )
+            return None
+
+        # UDisks2 documentation specifies that value can exceed 100
+        if smart_log.percent_used >= 100:
+            _LOGGER.warning(
+                "NVMe controller reports that its estimated life-time has been exceeded!"
+            )
+            return 100.0
+
+        return smart_log.percent_used
+
     def _try_get_emmc_life_time(self, device_name: str) -> float | None:
-        # Get eMMC life_time
+        """Get eMMC life_time.
+
+        Must be run in executor.
+        """
         life_time_path = Path(_BLOCK_DEVICE_EMMC_LIFE_TIME.format(device_name))
         if not life_time_path.exists():
@@ -118,32 +178,23 @@ class HwDisk(CoreSysAttributes):
             )
             return 100.0

-        # Return the pessimistic estimate (0x02 -> 10%-20%, return 20%)
-        return life_time_value * 10.0
+        # Return the optimistic estimate (0x02 -> 10%-20%, return 10%)
+        return (life_time_value - 1) * 10.0

-    def get_disk_life_time(self, path: str | Path) -> float | None:
-        """Return life time estimate of the underlying SSD drive.
-
-        Must be run in executor.
-        """
-        mount_source = self._get_mount_source(str(path))
-        if not mount_source or mount_source == "overlay":
-            return None
-
-        mount_source_path = Path(mount_source)
-        if not mount_source_path.is_block_device():
-            return None
-
-        # This looks a bit funky but it is more or less what lsblk is doing to get
-        # the parent dev reliably
-
-        # Get class device...
-        mount_source_device_part = Path(
-            _BLOCK_DEVICE_CLASS.format(mount_source_path.name)
-        )
-
-        # ... resolve symlink and get parent device from that path.
-        mount_source_device_name = mount_source_device_part.resolve().parts[-2]
-
-        # Currently only eMMC block devices supported
-        return self._try_get_emmc_life_time(mount_source_device_name)
+    async def get_disk_life_time(self, path: str | Path) -> float | None:
+        """Return life time estimate of the underlying SSD drive."""
+        mount_source_device_name = await self.sys_run_in_executor(
+            self._get_mount_source_device_name, path
+        )
+        if mount_source_device_name is None:
+            return None
+
+        # First check if its an NVMe device and get lifetime information that way
+        nvme_lifetime = await self._try_get_nvme_lifetime(mount_source_device_name)
+        if nvme_lifetime is not None:
+            return nvme_lifetime
+
+        # Else try to get lifetime information for eMMC devices. Other types of devices will return None
+        return await self.sys_run_in_executor(
+            self._try_get_emmc_life_time, mount_source_device_name
+        )
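The two estimate changes in this file are easy to check in isolation. Below is a minimal sketch (standalone functions with hypothetical names, not the Supervisor API): the eMMC `life_time` register reports ranges (0x01 means 0%-10% used, 0x02 means 10%-20%, and so on), and the new code returns the optimistic lower bound of the range, while the NVMe SMART `percent_used` value, which per the UDisks2 documentation may exceed 100, is clamped:

```python
def emmc_life_time_estimate(life_time_value: int) -> float:
    """Map an eMMC life_time register value to an optimistic percentage.

    0x01 -> 0.0 (0%-10% used), 0x02 -> 10.0 (10%-20% used), etc.
    The old code returned the pessimistic upper bound (0x02 -> 20.0).
    """
    return (life_time_value - 1) * 10.0


def nvme_life_time_estimate(percent_used: int) -> float:
    """Clamp NVMe SMART percent_used, which may legally exceed 100."""
    return 100.0 if percent_used >= 100 else float(percent_used)
```

The optimistic mapping avoids reporting 10% wear for a brand-new eMMC device, matching the commit message in this compare view.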

View File

@@ -15,7 +15,7 @@ from multidict import MultiMapping

 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from ..utils import check_port, version_is_new_enough
 from .const import LANDINGPAGE
@@ -46,8 +46,8 @@ class HomeAssistantAPI(CoreSysAttributes):
     @Job(
         name="home_assistant_api_ensure_access_token",
-        limit=JobExecutionLimit.SINGLE_WAIT,
         internal=True,
+        concurrency=JobConcurrency.QUEUE,
     )
     async def ensure_access_token(self) -> None:
         """Ensure there is an access token."""

View File

@@ -28,7 +28,7 @@ from ..exceptions import (
     HomeAssistantUpdateError,
     JobException,
 )
-from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobExecutionLimit
+from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
 from ..jobs.decorator import Job, JobCondition
 from ..jobs.job_group import JobGroup
 from ..resolution.const import ContextType, IssueType
@@ -123,8 +123,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_install_landing_page",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def install_landingpage(self) -> None:
         """Install a landing page."""
@@ -171,8 +171,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_install",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def install(self) -> None:
         """Install a landing page."""
@@ -222,8 +222,8 @@ class HomeAssistantCore(JobGroup):
             JobCondition.PLUGINS_UPDATED,
             JobCondition.SUPERVISOR_UPDATED,
         ],
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def update(
         self,
@@ -324,8 +324,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_start",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def start(self) -> None:
         """Run Home Assistant docker."""
@@ -359,8 +359,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_stop",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def stop(self, *, remove_container: bool = False) -> None:
         """Stop Home Assistant Docker."""
@@ -371,8 +371,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_restart",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def restart(self, *, safe_mode: bool = False) -> None:
         """Restart Home Assistant Docker."""
@@ -392,8 +392,8 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_rebuild",
-        limit=JobExecutionLimit.GROUP_ONCE,
         on_condition=HomeAssistantJobError,
+        concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def rebuild(self, *, safe_mode: bool = False) -> None:
         """Rebuild Home Assistant Docker container."""
@@ -546,9 +546,9 @@ class HomeAssistantCore(JobGroup):
     @Job(
         name="home_assistant_core_restart_after_problem",
-        limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
+        throttle=JobThrottle.RATE_LIMIT,
     )
     async def _restart_after_problem(self, state: ContainerState):
         """Restart unhealthy or failed Home Assistant."""

View File

@@ -46,7 +46,8 @@ from ..exceptions import (
 )
 from ..hardware.const import PolicyGroup
 from ..hardware.data import Device
-from ..jobs.decorator import Job, JobExecutionLimit
+from ..jobs.const import JobConcurrency, JobThrottle
+from ..jobs.decorator import Job
 from ..resolution.const import UnhealthyReason
 from ..utils import remove_folder, remove_folder_with_excludes
 from ..utils.common import FileConfiguration
@@ -551,9 +552,10 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
     @Job(
         name="home_assistant_get_users",
-        limit=JobExecutionLimit.THROTTLE_WAIT,
         throttle_period=timedelta(minutes=5),
         internal=True,
+        concurrency=JobConcurrency.QUEUE,
+        throttle=JobThrottle.THROTTLE,
     )
     async def get_users(self) -> list[IngressSessionDataUser]:
         """Get list of all configured users."""

View File

@@ -6,7 +6,7 @@ from pathlib import Path

 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import YamlFileError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency, JobThrottle
 from ..jobs.decorator import Job
 from ..utils.yaml import read_yaml_file

@@ -43,9 +43,10 @@ class HomeAssistantSecrets(CoreSysAttributes):
     @Job(
         name="home_assistant_secrets_read",
-        limit=JobExecutionLimit.THROTTLE_WAIT,
         throttle_period=timedelta(seconds=60),
         internal=True,
+        concurrency=JobConcurrency.QUEUE,
+        throttle=JobThrottle.THROTTLE,
     )
     async def _read_secrets(self):
         """Read secrets.yaml into memory."""

View File

@@ -135,9 +135,8 @@ class InfoCenter(CoreSysAttributes):
     async def disk_life_time(self) -> float | None:
         """Return the estimated life-time usage (in %) of the SSD storing the data directory."""
-        return await self.sys_run_in_executor(
-            self.sys_hardware.disk.get_disk_life_time,
-            self.coresys.config.path_supervisor,
+        return await self.sys_hardware.disk.get_disk_life_time(
+            self.coresys.config.path_supervisor
         )

     async def get_dmesg(self) -> bytes:
View File

@@ -9,7 +9,7 @@ from pulsectl import Pulse, PulseError, PulseIndexError, PulseOperationFailed

 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import PulseAudioError
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JobConcurrency, JobThrottle
 from ..jobs.decorator import Job

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -236,8 +236,9 @@ class SoundControl(CoreSysAttributes):
     @Job(
         name="sound_control_update",
-        limit=JobExecutionLimit.THROTTLE_WAIT,
         throttle_period=timedelta(seconds=2),
+        concurrency=JobConcurrency.QUEUE,
+        throttle=JobThrottle.THROTTLE,
     )
     async def update(self, reload_pulse: bool = False):
         """Update properties over dbus."""

View File

@@ -34,16 +34,53 @@ class JobCondition(StrEnum):
     SUPERVISOR_UPDATED = "supervisor_updated"


-class JobExecutionLimit(StrEnum):
-    """Job Execution limits."""
-
-    ONCE = "once"
-    SINGLE_WAIT = "single_wait"
-    THROTTLE = "throttle"
-    THROTTLE_WAIT = "throttle_wait"
-    THROTTLE_RATE_LIMIT = "throttle_rate_limit"
-    GROUP_ONCE = "group_once"
-    GROUP_WAIT = "group_wait"
-    GROUP_THROTTLE = "group_throttle"
-    GROUP_THROTTLE_WAIT = "group_throttle_wait"
-    GROUP_THROTTLE_RATE_LIMIT = "group_throttle_rate_limit"
+class JobConcurrency(StrEnum):
+    """Job concurrency control.
+
+    Controls how many instances of a job can run simultaneously.
+
+    Individual Concurrency (applies to each method separately):
+    - REJECT: Fail immediately if another instance is already running
+    - QUEUE: Wait for the current instance to finish, then run
+
+    Group Concurrency (applies across all methods on a JobGroup):
+    - GROUP_REJECT: Fail if ANY job is running on the JobGroup
+    - GROUP_QUEUE: Wait for ANY running job on the JobGroup to finish
+
+    JobGroup Behavior:
+    - All methods on the same JobGroup instance share a single lock
+    - Methods can call other methods on the same group without deadlock
+    - Uses the JobGroup.group_name for coordination
+    - Requires the class to inherit from JobGroup
+    """
+
+    REJECT = "reject"  # Fail if already running (was ONCE)
+    QUEUE = "queue"  # Wait if already running (was SINGLE_WAIT)
+    GROUP_REJECT = "group_reject"  # Was GROUP_ONCE
+    GROUP_QUEUE = "group_queue"  # Was GROUP_WAIT
+
+
+class JobThrottle(StrEnum):
+    """Job throttling control.
+
+    Controls how frequently jobs can be executed.
+
+    Individual Throttling (each method has its own throttle state):
+    - THROTTLE: Skip execution if called within throttle_period
+    - RATE_LIMIT: Allow up to throttle_max_calls within throttle_period, then fail
+
+    Group Throttling (all methods on a JobGroup share throttle state):
+    - GROUP_THROTTLE: Skip if ANY method was called within throttle_period
+    - GROUP_RATE_LIMIT: Allow up to throttle_max_calls total across ALL methods
+
+    JobGroup Behavior:
+    - All methods on the same JobGroup instance share throttle counters/timers
+    - Uses the JobGroup.group_name as the key for tracking state
+    - If one method is throttled, other methods may also be throttled
+    - Requires the class to inherit from JobGroup
+    """
+
+    THROTTLE = "throttle"  # Skip if called too frequently
+    RATE_LIMIT = "rate_limit"  # Rate limiting with max calls per period
+    GROUP_THROTTLE = "group_throttle"  # Group version of THROTTLE
+    GROUP_RATE_LIMIT = "group_rate_limit"  # Group version of RATE_LIMIT
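The REJECT/QUEUE semantics documented in the new enums can be illustrated with a toy `asyncio` decorator. This is a simplified sketch, not the Supervisor implementation: `guarded`, `ConcurrencyError`, and `demo` are hypothetical names, and real jobs coordinate through `JobGroup` locks rather than a bare `asyncio.Lock`:

```python
import asyncio


class ConcurrencyError(RuntimeError):
    """Raised when a REJECT-style job is already running."""


def guarded(lock: asyncio.Lock, *, queue: bool):
    """Toy concurrency policy: queue=True ~ QUEUE, queue=False ~ REJECT."""

    def decorator(func):
        async def wrapper(*args, **kwargs):
            if not queue and lock.locked():
                # REJECT: fail immediately instead of waiting.
                raise ConcurrencyError(f"{func.__name__} already running")
            async with lock:  # QUEUE: wait for the running instance.
                return await func(*args, **kwargs)

        return wrapper

    return decorator


async def demo() -> tuple[int, bool]:
    lock = asyncio.Lock()
    runs = 0

    @guarded(lock, queue=True)
    async def queued():
        nonlocal runs
        await asyncio.sleep(0.01)
        runs += 1

    @guarded(lock, queue=False)
    async def rejecting():
        await asyncio.sleep(0.01)

    # QUEUE: both concurrent calls run, one after the other.
    await asyncio.gather(queued(), queued())

    # REJECT: the second concurrent call fails fast.
    rejected = False
    task = asyncio.ensure_future(rejecting())
    await asyncio.sleep(0)  # let the first call grab the lock
    try:
        await rejecting()
    except ConcurrencyError:
        rejected = True
    await task
    return runs, rejected
```

The group variants behave the same way, except that the lock is shared by every `@Job`-decorated method on the `JobGroup` instance instead of belonging to a single method.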

View File

@@ -20,7 +20,7 @@ from ..host.const import HostFeature
 from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
 from ..utils.sentry import async_capture_exception
 from . import SupervisorJob
-from .const import JobCondition, JobExecutionLimit
+from .const import JobConcurrency, JobCondition, JobThrottle
 from .job_group import JobGroup

 _LOGGER: logging.Logger = logging.getLogger(__package__)
@@ -36,13 +36,14 @@ class Job(CoreSysAttributes):
         conditions: list[JobCondition] | None = None,
         cleanup: bool = True,
         on_condition: type[JobException] | None = None,
-        limit: JobExecutionLimit | None = None,
+        concurrency: JobConcurrency | None = None,
+        throttle: JobThrottle | None = None,
         throttle_period: timedelta
        | Callable[[CoreSys, datetime, list[datetime] | None], timedelta]
        | None = None,
         throttle_max_calls: int | None = None,
         internal: bool = False,
-    ):
+    ):  # pylint: disable=too-many-positional-arguments
         """Initialize the Job decorator.

         Args:
@@ -50,13 +51,14 @@ class Job(CoreSysAttributes):
             conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
             cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
             on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
-            limit (JobExecutionLimit | None): Execution limit policy for the job (e.g., throttle, once, group-based).
-            throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for rate-limited jobs).
+            concurrency (JobConcurrency | None): Concurrency control policy (e.g., reject, queue, group-based).
+            throttle (JobThrottle | None): Throttling policy (e.g., throttle, rate_limit, group-based).
+            throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for throttled jobs).
             throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
             internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.

         Raises:
-            RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected limit.
+            RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected throttle policy.

         """
         if name in _JOB_NAMES:
@@ -67,7 +69,6 @@ class Job(CoreSysAttributes):
         self.conditions = conditions
         self.cleanup = cleanup
         self.on_condition = on_condition
-        self.limit = limit
         self._throttle_period = throttle_period
         self._throttle_max_calls = throttle_max_calls
         self._lock: asyncio.Semaphore | None = None
@@ -75,34 +76,49 @@ class Job(CoreSysAttributes):
         self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
         self._internal = internal

+        self.concurrency = concurrency
+        self.throttle = throttle
+
         # Validate Options
+        self._validate_parameters()
+
+    def _validate_parameters(self) -> None:
+        """Validate job parameters."""
+        # Validate throttle parameters
         if (
-            self.limit
+            self.throttle
             in (
-                JobExecutionLimit.THROTTLE,
-                JobExecutionLimit.THROTTLE_WAIT,
-                JobExecutionLimit.THROTTLE_RATE_LIMIT,
-                JobExecutionLimit.GROUP_THROTTLE,
-                JobExecutionLimit.GROUP_THROTTLE_WAIT,
-                JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+                JobThrottle.THROTTLE,
+                JobThrottle.GROUP_THROTTLE,
+                JobThrottle.RATE_LIMIT,
+                JobThrottle.GROUP_RATE_LIMIT,
             )
             and self._throttle_period is None
         ):
             raise RuntimeError(
-                f"Job {name} is using execution limit {limit} without a throttle period!"
+                f"Job {self.name} is using throttle {self.throttle} without a throttle period!"
             )

-        if self.limit in (
-            JobExecutionLimit.THROTTLE_RATE_LIMIT,
-            JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+        if self.throttle in (
+            JobThrottle.RATE_LIMIT,
+            JobThrottle.GROUP_RATE_LIMIT,
         ):
             if self._throttle_max_calls is None:
                 raise RuntimeError(
-                    f"Job {name} is using execution limit {limit} without throttle max calls!"
+                    f"Job {self.name} is using throttle {self.throttle} without throttle max calls!"
                 )
             self._rate_limited_calls = {}

+        if self.throttle is not None and self.concurrency in (
+            JobConcurrency.GROUP_REJECT,
+            JobConcurrency.GROUP_QUEUE,
+        ):
+            # We cannot release group locks when Job is not running (e.g. throttled)
+            # which makes these combinations impossible to use currently.
+            raise RuntimeError(
+                f"Job {self.name} is using throttling ({self.throttle}) with group concurrency ({self.concurrency}), which is not allowed!"
+            )
+
     @property
     def throttle_max_calls(self) -> int:
         """Return max calls for throttle."""
@@ -131,7 +147,7 @@ class Job(CoreSysAttributes):
         """Return rate limited calls if used."""
         if self._rate_limited_calls is None:
             raise RuntimeError(
-                f"Rate limited calls not available for limit type {self.limit}"
+                "Rate limited calls not available for this throttle type"
             )
         return self._rate_limited_calls.get(group_name, [])

@@ -142,7 +158,7 @@ class Job(CoreSysAttributes):
         """Add a rate limited call to list if used."""
         if self._rate_limited_calls is None:
             raise RuntimeError(
-                f"Rate limited calls not available for limit type {self.limit}"
+                "Rate limited calls not available for this throttle type"
             )

         if group_name in self._rate_limited_calls:
@@ -156,7 +172,7 @@ class Job(CoreSysAttributes):
         """Set rate limited calls if used."""
         if self._rate_limited_calls is None:
             raise RuntimeError(
-                f"Rate limited calls not available for limit type {self.limit}"
+                "Rate limited calls not available for this throttle type"
             )
         self._rate_limited_calls[group_name] = value
@@ -193,15 +209,23 @@ class Job(CoreSysAttributes):
            if obj.acquire and obj.release:  # type: ignore
                job_group = cast(JobGroup, obj)

        # Check for group-based parameters
        if not job_group:
            if self.concurrency in (
                JobConcurrency.GROUP_REJECT,
                JobConcurrency.GROUP_QUEUE,
            ):
                raise RuntimeError(
                    f"Job {self.name} uses group concurrency ({self.concurrency}) but is not on a JobGroup! "
                    f"The class must inherit from JobGroup to use GROUP_REJECT or GROUP_QUEUE."
                ) from None
            if self.throttle in (
                JobThrottle.GROUP_THROTTLE,
                JobThrottle.GROUP_RATE_LIMIT,
            ):
                raise RuntimeError(
                    f"Job {self.name} uses group throttling ({self.throttle}) but is not on a JobGroup! "
                    f"The class must inherit from JobGroup to use GROUP_THROTTLE or GROUP_RATE_LIMIT."
                ) from None

        return job_group
@@ -255,71 +279,15 @@ class Job(CoreSysAttributes):
            except JobConditionException as err:
                return self._handle_job_condition_exception(err)

            # Handle execution limits
            await self._handle_concurrency_control(job_group, job)
            try:
                if not await self._handle_throttling(group_name):
                    self._release_concurrency_control(job_group)
                    return  # Job was throttled, exit early
            except Exception:
                self._release_concurrency_control(job_group)
                raise

            # Execute Job
            with job.start():
@@ -345,12 +313,7 @@ class Job(CoreSysAttributes):
                    await async_capture_exception(err)
                    raise JobException() from err
                finally:
                    self._release_concurrency_control(job_group)

            # Jobs that weren't started are always cleaned up. Also clean up done jobs if required
            finally:
@@ -492,31 +455,75 @@ class Job(CoreSysAttributes):
                f"'{method_name}' blocked from execution, mounting not supported on system"
            )

    def _release_concurrency_control(self, job_group: JobGroup | None) -> None:
        """Release concurrency control locks."""
        if self.concurrency == JobConcurrency.REJECT:
            if self.lock.locked():
                self.lock.release()
        elif self.concurrency == JobConcurrency.QUEUE:
            if self.lock.locked():
                self.lock.release()
        elif self.concurrency in (
            JobConcurrency.GROUP_REJECT,
            JobConcurrency.GROUP_QUEUE,
        ):
            if job_group and job_group.has_lock:
                job_group.release()

    async def _handle_concurrency_control(
        self, job_group: JobGroup | None, job: SupervisorJob
    ) -> None:
        """Handle concurrency control limits."""
        if self.concurrency == JobConcurrency.REJECT:
            if self.lock.locked():
                on_condition = (
                    JobException if self.on_condition is None else self.on_condition
                )
                raise on_condition("Another job is running")
            await self.lock.acquire()
        elif self.concurrency == JobConcurrency.QUEUE:
            await self.lock.acquire()
        elif self.concurrency == JobConcurrency.GROUP_REJECT:
            try:
                await cast(JobGroup, job_group).acquire(job, wait=False)
            except JobGroupExecutionLimitExceeded as err:
                if self.on_condition:
                    raise self.on_condition(str(err)) from err
                raise err
        elif self.concurrency == JobConcurrency.GROUP_QUEUE:
            try:
                await cast(JobGroup, job_group).acquire(job, wait=True)
            except JobGroupExecutionLimitExceeded as err:
                if self.on_condition:
                    raise self.on_condition(str(err)) from err
                raise err

    async def _handle_throttling(self, group_name: str | None) -> bool:
        """Handle throttling limits. Returns True if job should continue, False if throttled."""
        if self.throttle in (JobThrottle.THROTTLE, JobThrottle.GROUP_THROTTLE):
            time_since_last_call = datetime.now() - self.last_call(group_name)
            throttle_period = self.throttle_period(group_name)
            if time_since_last_call < throttle_period:
                # Always return False when throttled (skip execution)
                return False
        elif self.throttle in (JobThrottle.RATE_LIMIT, JobThrottle.GROUP_RATE_LIMIT):
            # Only reprocess array when necessary (at limit)
            if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
                self.set_rate_limited_calls(
                    [
                        call
                        for call in self.rate_limited_calls(group_name)
                        if call > datetime.now() - self.throttle_period(group_name)
                    ],
                    group_name,
                )
            if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
                on_condition = (
                    JobException if self.on_condition is None else self.on_condition
                )
                raise on_condition(
                    f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
                )
        return True
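The decorator changes above split the single `JobExecutionLimit` enum into two orthogonal parameters. A sketch of how the old values map onto the new pair, inferred from the call-site changes in this diff (the standalone enums below are illustrative stand-ins for `supervisor.jobs.const`, and the `SINGLE_WAIT` row is an assumption based on the old lock-acquire behavior):

```python
from enum import Enum


class JobConcurrency(Enum):
    """Concurrency axis (names from the diff)."""

    REJECT = "reject"
    QUEUE = "queue"
    GROUP_REJECT = "group_reject"
    GROUP_QUEUE = "group_queue"


class JobThrottle(Enum):
    """Throttle axis (names from the diff)."""

    THROTTLE = "throttle"
    GROUP_THROTTLE = "group_throttle"
    RATE_LIMIT = "rate_limit"
    GROUP_RATE_LIMIT = "group_rate_limit"


# Old JobExecutionLimit value -> (concurrency, throttle) replacement
LIMIT_MIGRATION = {
    "ONCE": (JobConcurrency.REJECT, None),
    "SINGLE_WAIT": (JobConcurrency.QUEUE, None),
    "GROUP_ONCE": (JobConcurrency.GROUP_REJECT, None),
    "GROUP_WAIT": (JobConcurrency.GROUP_QUEUE, None),
    "THROTTLE": (None, JobThrottle.THROTTLE),
    "THROTTLE_WAIT": (JobConcurrency.QUEUE, JobThrottle.THROTTLE),
    "THROTTLE_RATE_LIMIT": (None, JobThrottle.RATE_LIMIT),
    "GROUP_THROTTLE": (None, JobThrottle.GROUP_THROTTLE),
    "GROUP_THROTTLE_RATE_LIMIT": (None, JobThrottle.GROUP_RATE_LIMIT),
}
```

Note that the constructor check added above forbids combining a throttle with `GROUP_REJECT`/`GROUP_QUEUE`, since a throttled (non-running) job cannot release a group lock.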

View File

@@ -15,7 +15,8 @@ from ..exceptions import (
    ObserverError,
)
from ..homeassistant.const import LANDINGPAGE, WSType
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..plugins.const import PLUGIN_UPDATE_CONDITIONS
from ..utils.dt import utcnow
from ..utils.sentry import async_capture_exception
@@ -160,7 +161,7 @@ class Tasks(CoreSysAttributes):
            JobCondition.INTERNET_HOST,
            JobCondition.RUNNING,
        ],
        concurrency=JobConcurrency.REJECT,
    )
    async def _update_supervisor(self):
        """Check and run update of Supervisor."""

View File

@@ -164,10 +164,14 @@ class Mount(CoreSysAttributes, ABC):
        """Return true if successfully mounted and available."""
        return self.state == UnitActiveState.ACTIVE

    def __eq__(self, other: object) -> bool:
        """Return true if mounts are the same."""
        return isinstance(other, Mount) and self.name == other.name

    def __hash__(self) -> int:
        """Return hash of mount."""
        return hash(self.name)

    async def load(self) -> None:
        """Initialize object."""
        # If there's no mount unit, mount it to make one
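The added `__hash__` matters because Python sets `__hash__` to `None` on any class that defines `__eq__` without it. A minimal stand-in (a hypothetical shim, not the Supervisor `Mount` class) shows why mounts could otherwise no longer be stored in sets or used as dict keys:

```python
class Mount:
    """Minimal stand-in for the Mount class in the diff."""

    def __init__(self, name: str) -> None:
        self.name = name

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Mount) and self.name == other.name

    def __hash__(self) -> int:
        # Without this, defining __eq__ makes the class unhashable,
        # so Mount instances could not go into sets or dict keys.
        return hash(self.name)


# Two mounts with the same name deduplicate in a set.
mounts = {Mount("media"), Mount("media"), Mount("share")}
```

Hashing by `name` keeps `__hash__` consistent with `__eq__`, which both compare only the name.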

View File

@@ -22,7 +22,7 @@ from ..exceptions import (
    HassOSJobError,
    HostError,
)
from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job
from ..resolution.checks.base import CheckBase
from ..resolution.checks.disabled_data_disk import CheckDisabledDataDisk
@@ -205,8 +205,8 @@ class DataDisk(CoreSysAttributes):
    @Job(
        name="data_disk_migrate",
        conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY],
        on_condition=HassOSJobError,
        concurrency=JobConcurrency.REJECT,
    )
    async def migrate_disk(self, new_disk: str) -> None:
        """Move data partition to a new disk."""
@@ -305,8 +305,8 @@ class DataDisk(CoreSysAttributes):
    @Job(
        name="data_disk_wipe",
        conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY],
        on_condition=HassOSJobError,
        concurrency=JobConcurrency.REJECT,
    )
    async def wipe_disk(self) -> None:
        """Wipe the current data disk."""

View File

@@ -21,7 +21,7 @@ from ..exceptions import (
    HassOSSlotUpdateError,
    HassOSUpdateError,
)
from ..jobs.const import JobConcurrency, JobCondition
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils.sentry import async_capture_exception
@@ -272,12 +272,13 @@ class OSManager(CoreSysAttributes):
        name="os_manager_update",
        conditions=[
            JobCondition.HAOS,
            JobCondition.HEALTHY,
            JobCondition.INTERNET_SYSTEM,
            JobCondition.RUNNING,
            JobCondition.SUPERVISOR_UPDATED,
        ],
        on_condition=HassOSJobError,
        concurrency=JobConcurrency.REJECT,
    )
    async def update(self, version: AwesomeVersion | None = None) -> None:
        """Update HassOS system."""

View File

@@ -24,7 +24,7 @@ from ..exceptions import (
    DockerError,
    PluginError,
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils.json import write_json_file
@@ -205,10 +205,10 @@ class PluginAudio(PluginBase):
    @Job(
        name="plugin_audio_restart_after_problem",
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
        on_condition=AudioJobError,
        throttle=JobThrottle.RATE_LIMIT,
    )
    async def _restart_after_problem(self, state: ContainerState):
        """Restart unhealthy or failed plugin."""

View File

@@ -15,7 +15,7 @@ from ..docker.cli import DockerCli
from ..docker.const import ContainerState
from ..docker.stats import DockerStats
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception
from .base import PluginBase
@@ -118,10 +118,10 @@ class PluginCli(PluginBase):
    @Job(
        name="plugin_cli_restart_after_problem",
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
        on_condition=CliJobError,
        throttle=JobThrottle.RATE_LIMIT,
    )
    async def _restart_after_problem(self, state: ContainerState):
        """Restart unhealthy or failed plugin."""

View File

@@ -31,7 +31,7 @@ from ..exceptions import (
    DockerError,
    PluginError,
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.json import write_json_file
@@ -351,10 +351,10 @@ class PluginDns(PluginBase):
    @Job(
        name="plugin_dns_restart_after_problem",
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
        on_condition=CoreDNSJobError,
        throttle=JobThrottle.RATE_LIMIT,
    )
    async def _restart_after_problem(self, state: ContainerState):
        """Restart unhealthy or failed plugin."""

View File

@@ -18,7 +18,7 @@ from ..exceptions import (
    MulticastUpdateError,
    PluginError,
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception
from .base import PluginBase
@@ -114,10 +114,10 @@ class PluginMulticast(PluginBase):
    @Job(
        name="plugin_multicast_restart_after_problem",
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
        on_condition=MulticastJobError,
        throttle=JobThrottle.RATE_LIMIT,
    )
    async def _restart_after_problem(self, state: ContainerState):
        """Restart unhealthy or failed plugin."""

View File

@@ -21,7 +21,7 @@ from ..exceptions import (
    ObserverUpdateError,
    PluginError,
)
from ..jobs.const import JobThrottle
from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception
from .base import PluginBase
@@ -130,10 +130,10 @@ class PluginObserver(PluginBase):
    @Job(
        name="plugin_observer_restart_after_problem",
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
        on_condition=ObserverJobError,
        throttle=JobThrottle.RATE_LIMIT,
    )
    async def _restart_after_problem(self, state: ContainerState):
        """Restart unhealthy or failed plugin."""

View File

@@ -6,7 +6,7 @@ import logging
from ...const import AddonState, CoreState
from ...coresys import CoreSys
from ...exceptions import PwnedConnectivityError, PwnedError, PwnedSecret
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ..const import ContextType, IssueType, SuggestionType
from .base import CheckBase
@@ -25,8 +25,8 @@ class CheckAddonPwned(CheckBase):
    @Job(
        name="check_addon_pwned_run",
        conditions=[JobCondition.INTERNET_SYSTEM],
        throttle_period=timedelta(hours=24),
        throttle=JobThrottle.THROTTLE,
    )
    async def run_check(self) -> None:
        """Run check if not affected by issue."""

View File

@@ -9,7 +9,7 @@ from aiodns.error import DNSError
from ...const import CoreState
from ...coresys import CoreSys
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ...utils.sentry import async_capture_exception
from ..const import DNS_CHECK_HOST, ContextType, IssueType
@@ -36,8 +36,8 @@ class CheckDNSServer(CheckBase):
    @Job(
        name="check_dns_server_run",
        conditions=[JobCondition.INTERNET_SYSTEM],
        throttle_period=timedelta(hours=24),
        throttle=JobThrottle.THROTTLE,
    )
    async def run_check(self) -> None:
        """Run check if not affected by issue."""

View File

@@ -7,7 +7,7 @@ from aiodns.error import DNSError
from ...const import CoreState
from ...coresys import CoreSys
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ...utils.sentry import async_capture_exception
from ..const import DNS_ERROR_NO_DATA, ContextType, IssueType
@@ -26,8 +26,8 @@ class CheckDNSServerIPv6(CheckBase):
    @Job(
        name="check_dns_server_ipv6_run",
        conditions=[JobCondition.INTERNET_SYSTEM],
        throttle_period=timedelta(hours=24),
        throttle=JobThrottle.THROTTLE,
    )
    async def run_check(self) -> None:
        """Run check if not affected by issue."""

View File

@@ -49,6 +49,7 @@ class UnsupportedReason(StrEnum):
    NETWORK_MANAGER = "network_manager"
    OS = "os"
    OS_AGENT = "os_agent"
    OS_VERSION = "os_version"
    PRIVILEGED = "privileged"
    RESTART_POLICY = "restart_policy"
    SOFTWARE = "software"

View File

@@ -0,0 +1,51 @@
"""Evaluation class for OS version."""

from awesomeversion import AwesomeVersion, AwesomeVersionException

from ...const import CoreState
from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase


def setup(coresys: CoreSys) -> EvaluateBase:
    """Initialize evaluation-setup function."""
    return EvaluateOSVersion(coresys)


class EvaluateOSVersion(EvaluateBase):
    """Evaluate the OS version."""

    @property
    def reason(self) -> UnsupportedReason:
        """Return a UnsupportedReason enum."""
        return UnsupportedReason.OS_VERSION

    @property
    def on_failure(self) -> str:
        """Return a string that is printed when self.evaluate is True."""
        return f"OS version '{self.sys_os.version}' is more than 4 versions behind the latest '{self.sys_os.latest_version}'!"

    @property
    def states(self) -> list[CoreState]:
        """Return a list of valid states when this evaluation can run."""
        # Technically there's no reason to run this after STARTUP as update requires
        # a reboot. But if network is down we won't have latest version info then.
        return [CoreState.RUNNING, CoreState.STARTUP]

    async def evaluate(self) -> bool:
        """Run evaluation."""
        if (
            not self.sys_os.available
            or not (current := self.sys_os.version)
            or not (latest := self.sys_os.latest_version)
            or not latest.major
        ):
            return False

        # If current is more than 4 major versions behind latest, mark as unsupported
        last_supported_version = AwesomeVersion(f"{int(latest.major) - 4}.0")
        try:
            return current < last_supported_version
        except AwesomeVersionException:
            return True
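The cutoff in `evaluate()` reduces to simple major-version arithmetic: the last supported major is `latest.major - 4`. A standalone sketch of the same rule (an illustrative helper, not part of the diff):

```python
def os_version_supported(current_major: int, latest_major: int) -> bool:
    """Return True if current is within 4 major versions of latest.

    Mirrors the cutoff in EvaluateOSVersion: the last supported major
    version is latest - 4, so with latest 16 the oldest supported
    major is 12, and 11 or older is flagged as unsupported.
    """
    return current_major >= latest_major - 4
```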

View File

@@ -5,7 +5,7 @@ import logging
from ...coresys import CoreSys
from ...exceptions import ResolutionFixupError, ResolutionFixupJobError
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ...security.const import ContentTrustResult
from ..const import ContextType, IssueType, SuggestionType
@@ -26,8 +26,8 @@ class FixupSystemExecuteIntegrity(FixupBase):
        name="fixup_system_execute_integrity_process",
        conditions=[JobCondition.INTERNET_SYSTEM],
        on_condition=ResolutionFixupJobError,
        throttle_period=timedelta(hours=8),
        throttle=JobThrottle.THROTTLE,
    )
    async def process_fixup(self, reference: str | None = None) -> None:
        """Initialize the fixup class."""

View File

@@ -17,7 +17,8 @@ from ..exceptions import (
    PwnedError,
    SecurityJobError,
)
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.codenotary import cas_validate
from ..utils.common import FileConfiguration
@@ -107,7 +108,7 @@ class Security(FileConfiguration, CoreSysAttributes):
        name="security_manager_integrity_check",
        conditions=[JobCondition.INTERNET_SYSTEM],
        on_condition=SecurityJobError,
        concurrency=JobConcurrency.REJECT,
    )
    async def integrity_check(self) -> IntegrityResult:
        """Run a full system integrity check of the platform.

View File

@@ -32,7 +32,7 @@ from .exceptions import (
    SupervisorJobError,
    SupervisorUpdateError,
)
from .jobs.const import JobCondition, JobThrottle
from .jobs.decorator import Job
from .resolution.const import ContextType, IssueType, UnhealthyReason
from .utils.codenotary import calc_checksum
@@ -288,8 +288,8 @@ class Supervisor(CoreSysAttributes):
    @Job(
        name="supervisor_check_connectivity",
        throttle_period=_check_connectivity_throttle_period,
        throttle=JobThrottle.THROTTLE,
    )
    async def check_connectivity(self) -> None:
        """Check the Internet connectivity from Supervisor's point of view."""

View File

@@ -8,6 +8,8 @@ import logging
import aiohttp
from awesomeversion import AwesomeVersion

from supervisor.jobs.const import JobConcurrency, JobThrottle

from .bus import EventListener
from .const import (
    ATTR_AUDIO,
@@ -34,7 +36,7 @@ from .exceptions import (
    UpdaterError,
    UpdaterJobError,
)
from .jobs.decorator import Job, JobCondition
from .utils.codenotary import calc_checksum
from .utils.common import FileConfiguration
from .validate import SCHEMA_UPDATER_CONFIG
@@ -198,8 +200,9 @@ class Updater(FileConfiguration, CoreSysAttributes):
        name="updater_fetch_data",
        conditions=[JobCondition.INTERNET_SYSTEM],
        on_condition=UpdaterJobError,
        throttle_period=timedelta(seconds=30),
        concurrency=JobConcurrency.QUEUE,
        throttle=JobThrottle.THROTTLE,
    )
    async def fetch_data(self):
        """Fetch current versions from Github.
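`updater_fetch_data` expresses the old `THROTTLE_WAIT` behavior as `QUEUE` concurrency plus `THROTTLE`: callers wait in line for the lock, and a call that lands inside the throttle period is skipped rather than run. A minimal asyncio sketch of that ordering (illustrative only, not Supervisor's implementation):

```python
import asyncio
from datetime import datetime, timedelta


class ThrottledQueueJob:
    """QUEUE concurrency + THROTTLE: wait for the lock, then maybe skip."""

    def __init__(self, period: timedelta) -> None:
        self._lock = asyncio.Lock()
        self._period = period
        self._last_call = datetime.min
        self.runs = 0

    async def __call__(self) -> bool:
        async with self._lock:  # QUEUE: callers line up instead of failing
            if datetime.now() - self._last_call < self._period:
                return False  # THROTTLE: inside the period, skip silently
            self._last_call = datetime.now()
            self.runs += 1
            return True


job = ThrottledQueueJob(timedelta(seconds=30))


async def main() -> list[bool]:
    # Three concurrent callers: the first runs, the rest are throttled.
    return list(await asyncio.gather(job(), job(), job()))


results = asyncio.run(main())
```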

View File

@@ -12,6 +12,7 @@ from sentry_sdk.integrations.dedupe import DedupeIntegration
from sentry_sdk.integrations.excepthook import ExcepthookIntegration
from sentry_sdk.integrations.logging import LoggingIntegration
from sentry_sdk.integrations.threading import ThreadingIntegration
from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

from ..const import SUPERVISOR_VERSION
from ..coresys import CoreSys
@@ -26,6 +27,7 @@ def init_sentry(coresys: CoreSys) -> None:
    """Initialize sentry client."""
    if not sentry_sdk.is_initialized():
        _LOGGER.info("Initializing Supervisor Sentry")
        denylist = DEFAULT_DENYLIST + ["psk", "ssid"]
        # Don't use AsyncioIntegration(). We commonly handle task exceptions
        # outside of tasks. This would cause exception we gracefully handle to
        # be captured by sentry.
@@ -34,6 +36,7 @@ def init_sentry(coresys: CoreSys) -> None:
            before_send=partial(filter_data, coresys),
            auto_enabling_integrations=False,
            default_integrations=False,
            event_scrubber=EventScrubber(denylist=denylist),
            integrations=[
                AioHttpIntegration(
                    failed_request_status_codes=frozenset(range(500, 600))

View File

@@ -182,7 +182,7 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
                }
            }
        ),
        vol.Optional(ATTR_ENABLE_IPV6, default=None): vol.Maybe(vol.Boolean()),
    }
)

View File

@@ -19,7 +19,7 @@ async def test_api_docker_info(api_client: TestClient):
async def test_api_network_enable_ipv6(coresys: CoreSys, api_client: TestClient):
    """Test setting docker network for enabled IPv6."""
    assert coresys.docker.config.enable_ipv6 is None

    resp = await api_client.post("/docker/options", json={"enable_ipv6": True})
    assert resp.status == 200

View File

@@ -23,7 +23,11 @@ DEFAULT_RANGE_FOLLOW = "entries=:-99:18446744073709551615"
@pytest.fixture(name="coresys_disk_info")
async def fixture_coresys_disk_info(coresys: CoreSys) -> AsyncGenerator[CoreSys]:
    """Mock basic disk information for host APIs."""

    async def mock_disk_lifetime(_):
        return 0

    coresys.hardware.disk.get_disk_life_time = mock_disk_lifetime
    coresys.hardware.disk.get_disk_free_space = lambda _: 5000
    coresys.hardware.disk.get_disk_total_space = lambda _: 50000
    coresys.hardware.disk.get_disk_used_space = lambda _: 45000


@@ -185,6 +185,33 @@ async def test_consolidate(
 }
+
+
+@pytest.mark.usefixtures("tmp_supervisor_data")
+async def test_consolidate_failure(coresys: CoreSys, tmp_path: Path):
+    """Test consolidate with two backups that are not the same."""
+    (mount_dir := coresys.config.path_mounts / "backup_test").mkdir()
+    tar1 = Path(copy(get_fixture_path("test_consolidate_unc.tar"), tmp_path))
+    backup1 = Backup(coresys, tar1, "test", None)
+    await backup1.load()
+
+    tar2 = Path(copy(get_fixture_path("backup_example.tar"), mount_dir))
+    backup2 = Backup(coresys, tar2, "test", "backup_test")
+    await backup2.load()
+
+    with pytest.raises(
+        ValueError,
+        match=f"Backup {backup1.slug} and {backup2.slug} are not the same backup",
+    ):
+        backup1.consolidate(backup2)
+
+    # Force slugs to be the same to run the fields check
+    backup1._data["slug"] = backup2.slug  # pylint: disable=protected-access
+    with pytest.raises(
+        BackupInvalidError,
+        match=f"Cannot consolidate backups in {backup2.location} and {backup1.location} with slug {backup1.slug}",
+    ):
+        backup1.consolidate(backup2)
+
+
 @pytest.mark.parametrize(
     (
         "tarfile_side_effect",

@@ -0,0 +1,72 @@
"""Test UDisks2 NVMe Controller."""

from datetime import UTC, datetime

from dbus_fast.aio import MessageBus
import pytest

from supervisor.dbus.udisks2.nvme_controller import UDisks2NVMeController

from tests.common import mock_dbus_services
from tests.dbus_service_mocks.udisks2_nvme_controller import (
    NVMeController as NVMeControllerService,
)


@pytest.fixture(name="nvme_controller_service")
async def fixture_nvme_controller_service(
    dbus_session_bus: MessageBus,
) -> NVMeControllerService:
    """Mock NVMe Controller service."""
    yield (
        await mock_dbus_services(
            {
                "udisks2_nvme_controller": "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC"
            },
            dbus_session_bus,
        )
    )["udisks2_nvme_controller"]


async def test_nvme_controller_info(
    nvme_controller_service: NVMeControllerService, dbus_session_bus: MessageBus
):
    """Test NVMe Controller info."""
    controller = UDisks2NVMeController(
        "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC"
    )

    assert controller.state is None
    assert controller.unallocated_capacity is None
    assert controller.smart_updated is None
    assert controller.smart_temperature is None

    await controller.connect(dbus_session_bus)

    assert controller.state == "live"
    assert controller.unallocated_capacity == 0
    assert controller.smart_updated == datetime.fromtimestamp(1753906112, UTC)
    assert controller.smart_temperature == 311

    nvme_controller_service.emit_properties_changed({"SmartTemperature": 300})
    await nvme_controller_service.ping()
    await nvme_controller_service.ping()
    assert controller.smart_temperature == 300


@pytest.mark.usefixtures("nvme_controller_service")
async def test_nvme_controller_smart_get_attributes(dbus_session_bus: MessageBus):
    """Test NVMe Controller smart get attributes."""
    controller = UDisks2NVMeController(
        "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC"
    )
    await controller.connect(dbus_session_bus)

    smart_log = await controller.smart_get_attributes()
    assert smart_log.available_spare == 100
    assert smart_log.percent_used == 1
    assert smart_log.total_data_read == 22890461184000
    assert smart_log.total_data_written == 27723431936000
    assert smart_log.controller_busy_minutes == 2682
    assert smart_log.temperature_sensors == [310, 305, 0, 0, 0, 0, 0, 0]
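The SMART temperatures asserted above (311, 310, 305) look odd until you recall that NVMe reports temperatures in Kelvin; a quick conversion sketch (helper name is illustrative, not part of the Supervisor API):

```python
def nvme_kelvin_to_celsius(kelvin: int) -> float:
    """Convert an NVMe SMART temperature (reported in Kelvin) to Celsius."""
    return round(kelvin - 273.15, 2)


# The fixture's 311 K composite temperature is a normal ~38 °C
print(nvme_kelvin_to_celsius(311))
```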


@@ -410,6 +410,33 @@ FIXTURES: dict[str, BlockFixture] = {
         HintSymbolicIconName="",
         UserspaceMountOptions=[],
     ),
+    "/org/freedesktop/UDisks2/block_devices/nvme0n1": BlockFixture(
+        Device=b"/dev/nvme0n1",
+        PreferredDevice=b"/dev/nvme0n1",
+        Symlinks=[],
+        DeviceNumber=66304,
+        Id="by-id-nvme-Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC",
+        Size=33554432,
+        ReadOnly=False,
+        Drive="/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC",
+        MDRaid="/",
+        MDRaidMember="/",
+        IdUsage="",
+        IdType="",
+        IdVersion="",
+        IdLabel="",
+        IdUUID="",
+        Configuration=[],
+        CryptoBackingDevice="/",
+        HintPartitionable=True,
+        HintSystem=True,
+        HintIgnore=False,
+        HintAuto=False,
+        HintName="",
+        HintIconName="",
+        HintSymbolicIconName="",
+        UserspaceMountOptions=[],
+    ),
 }


@@ -177,6 +177,37 @@ FIXTURES: dict[str, DriveFixture] = {
         CanPowerOff=True,
         SiblingId="",
     ),
+    "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC": DriveFixture(
+        Vendor="",
+        Model="Samsung SSD 970 EVO Plus 2TB",
+        Revision="2B2QEXM7",
+        Serial="S40123456789ABC",
+        WWN="",
+        Id="Samsung-SSD-970-EVO-Plus-2TB-S40123456789ABC",
+        Configuration={},
+        Media="",
+        MediaCompatibility=[],
+        MediaRemovable=False,
+        MediaAvailable=True,
+        MediaChangeDetected=True,
+        Size=0,
+        TimeDetected=0,
+        TimeMediaDetected=0,
+        Optical=False,
+        OpticalBlank=False,
+        OpticalNumTracks=0,
+        OpticalNumAudioTracks=0,
+        OpticalNumDataTracks=0,
+        OpticalNumSessions=0,
+        RotationRate=0,
+        ConnectionBus="usb",
+        Seat="seat0",
+        Removable=True,
+        Ejectable=False,
+        SortKey="",
+        CanPowerOff=True,
+        SiblingId="",
+    ),
 }


@@ -20,7 +20,11 @@ class UDisks2Manager(DBusServiceMock):
     interface = "org.freedesktop.UDisks2.Manager"
     object_path = "/org/freedesktop/UDisks2/Manager"
-    block_devices = [
+
+    def __init__(self):
+        """Initialize object."""
+        super().__init__()
+        self.block_devices = [
         "/org/freedesktop/UDisks2/block_devices/loop0",
         "/org/freedesktop/UDisks2/block_devices/mmcblk1",
         "/org/freedesktop/UDisks2/block_devices/mmcblk1p1",
@@ -32,7 +36,7 @@ class UDisks2Manager(DBusServiceMock):
         "/org/freedesktop/UDisks2/block_devices/sdb1",
         "/org/freedesktop/UDisks2/block_devices/zram1",
     ]
-    resolved_devices: list[list[str]] | list[str] = [
+    self.resolved_devices: list[list[str]] | list[str] = [
         "/org/freedesktop/UDisks2/block_devices/sda1"
     ]


@@ -0,0 +1,138 @@
"""Mock of UDisks2 Drive service."""

from dbus_fast import Variant
from dbus_fast.service import PropertyAccess, dbus_property

from .base import DBusServiceMock, dbus_method

BUS_NAME = "org.freedesktop.UDisks2"
DEFAULT_OBJECT_PATH = (
    "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC"
)


def setup(object_path: str | None = None) -> DBusServiceMock:
    """Create dbus mock object."""
    return NVMeController(object_path if object_path else DEFAULT_OBJECT_PATH)


class NVMeController(DBusServiceMock):
    """NVMe Controller mock.

    gdbus introspect --system --dest org.freedesktop.UDisks2 --object-path /org/freedesktop/UDisks2/drives/id
    """

    interface = "org.freedesktop.UDisks2.NVMe.Controller"

    def __init__(self, object_path: str):
        """Initialize object."""
        super().__init__()
        self.object_path = object_path
        self.smart_get_attributes_response = {
            "avail_spare": Variant("y", 0x64),
            "spare_thresh": Variant("y", 0x0A),
            "percent_used": Variant("y", 0x01),
            "total_data_read": Variant("t", 22890461184000),
            "total_data_written": Variant("t", 27723431936000),
            "ctrl_busy_time": Variant("t", 2682),
            "power_cycles": Variant("t", 652),
            "unsafe_shutdowns": Variant("t", 107),
            "media_errors": Variant("t", 0),
            "num_err_log_entries": Variant("t", 1069),
            "temp_sensors": Variant("aq", [310, 305, 0, 0, 0, 0, 0, 0]),
            "wctemp": Variant("q", 358),
            "cctemp": Variant("q", 358),
            "warning_temp_time": Variant("i", 0),
            "critical_temp_time": Variant("i", 0),
        }

    @dbus_property(access=PropertyAccess.READ)
    def State(self) -> "s":
        """Get State."""
        return "live"

    @dbus_property(access=PropertyAccess.READ)
    def ControllerID(self) -> "q":
        """Get ControllerID."""
        return 4

    @dbus_property(access=PropertyAccess.READ)
    def SubsystemNQN(self) -> "ay":
        """Get SubsystemNQN."""
        return b"nqn.2014.08.org.nvmexpress:144d144dS4J4NM0RB05961P Samsung SSD 970 EVO Plus 2TB"

    @dbus_property(access=PropertyAccess.READ)
    def FGUID(self) -> "s":
        """Get FGUID."""
        return ""

    @dbus_property(access=PropertyAccess.READ)
    def NVMeRevision(self) -> "s":
        """Get NVMeRevision."""
        return "1.3"

    @dbus_property(access=PropertyAccess.READ)
    def UnallocatedCapacity(self) -> "t":
        """Get UnallocatedCapacity."""
        return 0

    @dbus_property(access=PropertyAccess.READ)
    def SmartUpdated(self) -> "t":
        """Get SmartUpdated."""
        return 1753906112

    @dbus_property(access=PropertyAccess.READ)
    def SmartCriticalWarning(self) -> "as":
        """Get SmartCriticalWarning."""
        return []

    @dbus_property(access=PropertyAccess.READ)
    def SmartPowerOnHours(self) -> "t":
        """Get SmartPowerOnHours."""
        return 3208

    @dbus_property(access=PropertyAccess.READ)
    def SmartTemperature(self) -> "q":
        """Get SmartTemperature."""
        return 311

    @dbus_property(access=PropertyAccess.READ)
    def SmartSelftestStatus(self) -> "s":
        """Get SmartSelftestStatus."""
        return "success"

    @dbus_property(access=PropertyAccess.READ)
    def SmartSelftestPercentRemaining(self) -> "i":
        """Get SmartSelftestPercentRemaining."""
        return -1

    @dbus_property(access=PropertyAccess.READ)
    def SanitizeStatus(self) -> "s":
        """Get SanitizeStatus."""
        return ""

    @dbus_property(access=PropertyAccess.READ)
    def SanitizePercentRemaining(self) -> "i":
        """Get SanitizePercentRemaining."""
        return -1

    @dbus_method()
    def SmartUpdate(self, options: "a{sv}") -> None:
        """Do SmartUpdate."""

    @dbus_method()
    def SmartGetAttributes(self, options: "a{sv}") -> "a{sv}":
        """Do SmartGetAttributes."""
        return self.smart_get_attributes_response

    @dbus_method()
    def SmartSelftestStart(self, type_: "s", options: "a{sv}") -> None:
        """Do SmartSelftestStart."""

    @dbus_method()
    def SmartSelftestAbort(self, options: "a{sv}") -> None:
        """Do SmartSelftestAbort."""

    @dbus_method()
    def SanitizeStart(self, action: "s", options: "a{sv}") -> None:
        """Do SanitizeStart."""

@@ -111,3 +111,39 @@ async def test_network_recreation(
     network_params[ATTR_ENABLE_IPV6] = new_enable_ipv6
     mock_create.assert_called_with(**network_params)
+
+
+async def test_network_default_ipv6_for_new_installations():
+    """Test that IPv6 is enabled by default when no user setting is provided (None)."""
+    with (
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker",
+            new_callable=PropertyMock,
+            return_value=MagicMock(),
+            create=True,
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks",
+            new_callable=PropertyMock,
+            return_value=MagicMock(),
+            create=True,
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks.get",
+            side_effect=docker.errors.NotFound("Network not found"),
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks.create",
+            return_value=MockNetwork(False, None, True),
+        ) as mock_create,
+    ):
+        # Pass None as enable_ipv6 to simulate no user setting
+        network = (await DockerNetwork(MagicMock()).post_init(None)).network
+
+        assert network is not None
+        assert network.attrs.get(DOCKER_ENABLEIPV6) is True
+
+        # Verify that create was called with IPv6 enabled by default
+        expected_params = DOCKER_NETWORK_PARAMS.copy()
+        expected_params[ATTR_ENABLE_IPV6] = True
+        mock_create.assert_called_with(**expected_params)


@@ -4,9 +4,71 @@
 from pathlib import Path
 from unittest.mock import patch

+from dbus_fast.aio import MessageBus
+import pytest
+
 from supervisor.coresys import CoreSys
 from supervisor.hardware.data import Device

+from tests.common import mock_dbus_services
+from tests.dbus_service_mocks.base import DBusServiceMock
+from tests.dbus_service_mocks.udisks2_manager import (
+    UDisks2Manager as UDisks2ManagerService,
+)
+from tests.dbus_service_mocks.udisks2_nvme_controller import (
+    NVMeController as NVMeControllerService,
+)
+
+MOCK_MOUNTINFO = """790 750 259:8 /supervisor /data rw,relatime master:118 - ext4 /dev/nvme0n1p8 rw,commit=30
+810 750 0:24 /systemd-journal-gatewayd.sock /run/systemd-journal-gatewayd.sock rw,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+811 750 0:24 /supervisor /run/os rw,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+813 750 0:24 /udev /run/udev ro,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+814 750 0:24 /machine-id /etc/machine-id ro - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+815 750 0:24 /docker.sock /run/docker.sock rw,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+816 750 0:24 /dbus /run/dbus ro,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+820 750 0:24 /containerd/containerd.sock /run/containerd/containerd.sock rw,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+821 750 0:24 /systemd/journal/socket /run/systemd/journal/socket rw,nosuid,nodev - tmpfs tmpfs rw,size=405464k,nr_inodes=819200,mode=755
+"""
+
+
+@pytest.fixture(name="nvme_data_disk")
+async def fixture_nvme_data_disk(
+    udisks2_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
+    coresys: CoreSys,
+    dbus_session_bus: MessageBus,
+) -> NVMeControllerService:
+    """Mock using an NVMe data disk."""
+    nvme_service = (
+        await mock_dbus_services(
+            {
+                "udisks2_block": "/org/freedesktop/UDisks2/block_devices/nvme0n1",
+                "udisks2_drive": "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC",
+                "udisks2_nvme_controller": "/org/freedesktop/UDisks2/drives/Samsung_SSD_970_EVO_Plus_2TB_S40123456789ABC",
+            },
+            dbus_session_bus,
+        )
+    )["udisks2_nvme_controller"]
+    udisks2_manager: UDisks2ManagerService = udisks2_services["udisks2_manager"]
+    udisks2_manager.block_devices.append(
+        "/org/freedesktop/UDisks2/block_devices/nvme0n1"
+    )
+    await coresys.dbus.udisks2.update()
+
+    with (
+        patch(
+            "supervisor.hardware.disk.Path.read_text",
+            return_value=MOCK_MOUNTINFO,
+        ),
+        patch("supervisor.hardware.disk.Path.is_block_device", return_value=True),
+        patch(
+            "supervisor.hardware.disk.Path.resolve",
+            return_value=Path(
+                "/sys/devices/platform/soc/ffe07000.nvme/nvme_host/nvme0/nvme0:0000/block/nvme0n1/nvme0n1p8"
+            ),
+        ),
+    ):
+        yield nvme_service
+
+
 def test_system_partition_disk(coresys: CoreSys):
     """Test if it is a system disk/partition."""
@@ -98,4 +160,20 @@ def test_try_get_emmc_life_time(coresys, tmp_path):
         str(tmp_path / "fake-{}-lifetime"),
     ):
         value = coresys.hardware.disk._try_get_emmc_life_time("mmcblk0")
-        assert value == 20.0
+        assert value == 10.0
+
+
+async def test_try_get_nvme_life_time(
+    coresys: CoreSys, nvme_data_disk: NVMeControllerService
+):
+    """Test getting lifetime info from an NVMe."""
+    lifetime = await coresys.hardware.disk.get_disk_life_time(
+        coresys.config.path_supervisor
+    )
+    assert lifetime == 1
+
+    nvme_data_disk.smart_get_attributes_response["percent_used"].value = 50
+    lifetime = await coresys.hardware.disk.get_disk_life_time(
+        coresys.config.path_supervisor
+    )
+    assert lifetime == 50
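The `20.0` to `10.0` change above reflects the "optimistic life time estimate" commit: eMMC devices report wear in coarse 10% buckets (`0x01` = 0-10% used, `0x02` = 10-20%, ...), and the estimate now returns the lower bound of the bucket so a brand-new device does not show 10% wear. A plausible sketch of that bucket mapping (the actual Supervisor implementation may differ):

```python
def emmc_life_time_percent(life_time_register: int) -> float:
    """Map an eMMC DEVICE_LIFE_TIME_EST register bucket to a percentage.

    0x01 means 0-10% used, 0x02 means 10-20%, and so on. Returning the
    lower bound is the optimistic estimate: a new device (0x01) reports 0%.
    """
    return max(life_time_register - 1, 0) * 10.0
```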


@@ -20,7 +20,7 @@ from supervisor.exceptions import (
 from supervisor.host.const import HostFeature
 from supervisor.host.manager import HostManager
 from supervisor.jobs import JobSchedulerOptions, SupervisorJob
-from supervisor.jobs.const import JobExecutionLimit
+from supervisor.jobs.const import JobConcurrency, JobThrottle
 from supervisor.jobs.decorator import Job, JobCondition
 from supervisor.jobs.job_group import JobGroup
 from supervisor.os.manager import OSManager
@@ -280,8 +280,8 @@ async def test_exception_conditions(coresys: CoreSys):
         await test.execute()


-async def test_execution_limit_single_wait(coresys: CoreSys):
-    """Test the single wait job execution limit."""
+async def test_concurrency_queue(coresys: CoreSys):
+    """Test the queue job concurrency."""

     class TestClass:
         """Test class."""
@@ -292,8 +292,8 @@ async def test_execution_limit_single_wait(coresys: CoreSys):
             self.run = asyncio.Lock()

         @Job(
-            name="test_execution_limit_single_wait_execute",
-            limit=JobExecutionLimit.SINGLE_WAIT,
+            name="test_concurrency_queue_execute",
+            concurrency=JobConcurrency.QUEUE,
         )
         async def execute(self, sleep: float):
             """Execute the class method."""
@@ -306,8 +306,8 @@ async def test_execution_limit_single_wait(coresys: CoreSys):
     await asyncio.gather(*[test.execute(0.1), test.execute(0.1), test.execute(0.1)])


-async def test_execution_limit_throttle_wait(coresys: CoreSys):
-    """Test the throttle wait job execution limit."""
+async def test_concurrency_queue_with_throttle(coresys: CoreSys):
+    """Test the queue concurrency with throttle."""

     class TestClass:
         """Test class."""
@@ -319,8 +319,9 @@ async def test_execution_limit_throttle_wait(coresys: CoreSys):
             self.call = 0

         @Job(
-            name="test_execution_limit_throttle_wait_execute",
-            limit=JobExecutionLimit.THROTTLE_WAIT,
+            name="test_concurrency_queue_with_throttle_execute",
+            concurrency=JobConcurrency.QUEUE,
+            throttle=JobThrottle.THROTTLE,
             throttle_period=timedelta(hours=1),
         )
         async def execute(self, sleep: float):
@@ -340,10 +341,8 @@ async def test_execution_limit_throttle_wait(coresys: CoreSys):
 @pytest.mark.parametrize("error", [None, PluginJobError])
-async def test_execution_limit_throttle_rate_limit(
-    coresys: CoreSys, error: JobException | None
-):
-    """Test the throttle wait job execution limit."""
+async def test_throttle_rate_limit(coresys: CoreSys, error: JobException | None):
+    """Test the throttle rate limit."""

     class TestClass:
         """Test class."""
@@ -355,8 +354,8 @@ async def test_execution_limit_throttle_rate_limit(
             self.call = 0

         @Job(
-            name=f"test_execution_limit_throttle_rate_limit_execute_{uuid4().hex}",
-            limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
+            name=f"test_throttle_rate_limit_execute_{uuid4().hex}",
+            throttle=JobThrottle.RATE_LIMIT,
             throttle_period=timedelta(hours=1),
             throttle_max_calls=2,
             on_condition=error,
@@ -381,8 +380,8 @@ async def test_execution_limit_throttle_rate_limit(
     assert test.call == 3


-async def test_execution_limit_throttle(coresys: CoreSys):
-    """Test the ignore conditions decorator."""
+async def test_throttle_basic(coresys: CoreSys):
+    """Test the basic throttle functionality."""

     class TestClass:
         """Test class."""
@@ -394,8 +393,8 @@ async def test_execution_limit_throttle(coresys: CoreSys):
             self.call = 0

         @Job(
-            name="test_execution_limit_throttle_execute",
-            limit=JobExecutionLimit.THROTTLE,
+            name="test_throttle_basic_execute",
+            throttle=JobThrottle.THROTTLE,
             throttle_period=timedelta(hours=1),
         )
         async def execute(self, sleep: float):
@@ -414,8 +413,8 @@ async def test_execution_limit_throttle(coresys: CoreSys):
     assert test.call == 1


-async def test_execution_limit_once(coresys: CoreSys):
-    """Test the ignore conditions decorator."""
+async def test_concurrency_reject(coresys: CoreSys):
+    """Test the reject concurrency."""

     class TestClass:
         """Test class."""
@@ -426,8 +425,8 @@ async def test_execution_limit_once(coresys: CoreSys):
             self.run = asyncio.Lock()

         @Job(
-            name="test_execution_limit_once_execute",
-            limit=JobExecutionLimit.ONCE,
+            name="test_concurrency_reject_execute",
+            concurrency=JobConcurrency.REJECT,
             on_condition=JobException,
         )
         async def execute(self, sleep: float):
@@ -603,8 +602,8 @@ async def test_host_network(coresys: CoreSys):
     assert await test.execute()


-async def test_job_group_once(coresys: CoreSys):
-    """Test job group once execution limitation."""
+async def test_job_group_reject(coresys: CoreSys):
+    """Test job group reject concurrency limitation."""

     class TestClass(JobGroup):
         """Test class."""
@@ -615,8 +614,8 @@ async def test_job_group_once(coresys: CoreSys):
             self.event = asyncio.Event()

         @Job(
-            name="test_job_group_once_inner_execute",
-            limit=JobExecutionLimit.GROUP_ONCE,
+            name="test_job_group_reject_inner_execute",
+            concurrency=JobConcurrency.GROUP_REJECT,
             on_condition=JobException,
         )
         async def inner_execute(self) -> bool:
@@ -625,8 +624,8 @@ async def test_job_group_once(coresys: CoreSys):
             return True

         @Job(
-            name="test_job_group_once_execute",
-            limit=JobExecutionLimit.GROUP_ONCE,
+            name="test_job_group_reject_execute",
+            concurrency=JobConcurrency.GROUP_REJECT,
             on_condition=JobException,
         )
         async def execute(self) -> bool:
@@ -634,8 +633,8 @@ async def test_job_group_once(coresys: CoreSys):
             return await self.inner_execute()

         @Job(
-            name="test_job_group_once_separate_execute",
-            limit=JobExecutionLimit.GROUP_ONCE,
+            name="test_job_group_reject_separate_execute",
+            concurrency=JobConcurrency.GROUP_REJECT,
             on_condition=JobException,
         )
         async def separate_execute(self) -> bool:
@@ -643,8 +642,8 @@ async def test_job_group_once(coresys: CoreSys):
             return True

         @Job(
-            name="test_job_group_once_unrelated",
-            limit=JobExecutionLimit.ONCE,
+            name="test_job_group_reject_unrelated",
+            concurrency=JobConcurrency.REJECT,
             on_condition=JobException,
         )
         async def unrelated_method(self) -> bool:
@@ -672,8 +671,8 @@ async def test_job_group_once(coresys: CoreSys):
     assert await run_task


-async def test_job_group_wait(coresys: CoreSys):
-    """Test job group wait execution limitation."""
+async def test_job_group_queue(coresys: CoreSys):
+    """Test job group queue concurrency limitation."""

     class TestClass(JobGroup):
         """Test class."""
@@ -686,8 +685,8 @@ async def test_job_group_wait(coresys: CoreSys):
             self.event = asyncio.Event()

         @Job(
-            name="test_job_group_wait_inner_execute",
-            limit=JobExecutionLimit.GROUP_WAIT,
+            name="test_job_group_queue_inner_execute",
+            concurrency=JobConcurrency.GROUP_QUEUE,
             on_condition=JobException,
         )
         async def inner_execute(self) -> None:
@@ -696,8 +695,8 @@ async def test_job_group_wait(coresys: CoreSys):
             await self.event.wait()

         @Job(
-            name="test_job_group_wait_execute",
-            limit=JobExecutionLimit.GROUP_WAIT,
+            name="test_job_group_queue_execute",
+            concurrency=JobConcurrency.GROUP_QUEUE,
             on_condition=JobException,
         )
         async def execute(self) -> None:
@@ -705,8 +704,8 @@ async def test_job_group_wait(coresys: CoreSys):
             await self.inner_execute()

         @Job(
-            name="test_job_group_wait_separate_execute",
-            limit=JobExecutionLimit.GROUP_WAIT,
+            name="test_job_group_queue_separate_execute",
+            concurrency=JobConcurrency.GROUP_QUEUE,
             on_condition=JobException,
         )
         async def separate_execute(self) -> None:
@@ -746,7 +745,7 @@ async def test_job_cleanup(coresys: CoreSys):
             self.event = asyncio.Event()
             self.job: SupervisorJob | None = None

-        @Job(name="test_job_cleanup_execute", limit=JobExecutionLimit.ONCE)
+        @Job(name="test_job_cleanup_execute", concurrency=JobConcurrency.REJECT)
         async def execute(self):
             """Execute the class method."""
             self.job = coresys.jobs.current
@@ -781,7 +780,7 @@ async def test_job_skip_cleanup(coresys: CoreSys):
         @Job(
             name="test_job_skip_cleanup_execute",
-            limit=JobExecutionLimit.ONCE,
+            concurrency=JobConcurrency.REJECT,
             cleanup=False,
         )
         async def execute(self):
@@ -804,8 +803,8 @@ async def test_job_skip_cleanup(coresys: CoreSys):
     assert test.job.done


-async def test_execution_limit_group_throttle(coresys: CoreSys):
-    """Test the group throttle execution limit."""
+async def test_group_throttle(coresys: CoreSys):
+    """Test the group throttle."""

     class TestClass(JobGroup):
         """Test class."""
@@ -817,8 +816,8 @@ async def test_execution_limit_group_throttle(coresys: CoreSys):
             self.call = 0

         @Job(
-            name="test_execution_limit_group_throttle_execute",
-            limit=JobExecutionLimit.GROUP_THROTTLE,
+            name="test_group_throttle_execute",
+            throttle=JobThrottle.GROUP_THROTTLE,
             throttle_period=timedelta(milliseconds=95),
         )
         async def execute(self, sleep: float):
@@ -851,8 +850,8 @@ async def test_execution_limit_group_throttle(coresys: CoreSys):
     assert test2.call == 2


-async def test_execution_limit_group_throttle_wait(coresys: CoreSys):
-    """Test the group throttle wait job execution limit."""
+async def test_group_throttle_with_queue(coresys: CoreSys):
+    """Test the group throttle with queue concurrency."""

     class TestClass(JobGroup):
         """Test class."""
@@ -864,8 +863,9 @@ async def test_execution_limit_group_throttle_wait(coresys: CoreSys):
             self.call = 0

         @Job(
-            name="test_execution_limit_group_throttle_wait_execute",
-            limit=JobExecutionLimit.GROUP_THROTTLE_WAIT,
+            name="test_group_throttle_with_queue_execute",
+            concurrency=JobConcurrency.QUEUE,
+            throttle=JobThrottle.GROUP_THROTTLE,
             throttle_period=timedelta(milliseconds=95),
         )
         async def execute(self, sleep: float):
@@ -901,10 +901,8 @@ async def test_execution_limit_group_throttle_wait(coresys: CoreSys):
 @pytest.mark.parametrize("error", [None, PluginJobError])
-async def test_execution_limit_group_throttle_rate_limit(
-    coresys: CoreSys, error: JobException | None
-):
-    """Test the group throttle rate limit job execution limit."""
+async def test_group_throttle_rate_limit(coresys: CoreSys, error: JobException | None):
+    """Test the group throttle rate limit."""

     class TestClass(JobGroup):
         """Test class."""
@@ -916,8 +914,8 @@ async def test_execution_limit_group_throttle_rate_limit(
             self.call = 0

         @Job(
-            name=f"test_execution_limit_group_throttle_rate_limit_execute_{uuid4().hex}",
-            limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+            name=f"test_group_throttle_rate_limit_execute_{uuid4().hex}",
+            throttle=JobThrottle.GROUP_RATE_LIMIT,
             throttle_period=timedelta(hours=1),
             throttle_max_calls=2,
             on_condition=error,
@@ -1013,7 +1011,7 @@ async def test_job_starting_separate_task(coresys: CoreSys):
         @Job(
             name="test_job_starting_separate_task_job_task",
-            limit=JobExecutionLimit.GROUP_ONCE,
+            concurrency=JobConcurrency.GROUP_REJECT,
         )
         async def job_task(self):
             """Create a separate long running job task."""
@@ -1035,7 +1033,7 @@ async def test_job_starting_separate_task(coresys: CoreSys):
         @Job(
             name="test_job_starting_separate_task_job_await",
-            limit=JobExecutionLimit.GROUP_ONCE,
+            concurrency=JobConcurrency.GROUP_REJECT,
         )
         async def job_await(self):
             """Await a simple job in same group to confirm lock released."""
@@ -1080,7 +1078,7 @@ async def test_job_always_removed_on_check_failure(coresys: CoreSys):
         @Job(
             name="test_job_always_removed_on_check_failure_limit",
-            limit=JobExecutionLimit.ONCE,
+            concurrency=JobConcurrency.REJECT,
             cleanup=False,
         )
         async def limit_check(self):
@@ -1212,3 +1210,93 @@ async def test_job_scheduled_at(coresys: CoreSys):
     assert job.name == "test_job_scheduled_at_job_task"
     assert job.stage == "work"
     assert job.parent_id is None
+
+
+async def test_concurency_reject_and_throttle(coresys: CoreSys):
+    """Test the concurrency reject and throttle job execution limit."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name="test_concurency_reject_and_throttle_execute",
+            concurrency=JobConcurrency.REJECT,
+            throttle=JobThrottle.THROTTLE,
+            throttle_period=timedelta(hours=1),
+        )
+        async def execute(self, sleep: float):
+            """Execute the class method."""
+            assert not self.run.locked()
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test = TestClass(coresys)
+    results = await asyncio.gather(
+        *[test.execute(0.1), test.execute(0.1), test.execute(0.1)],
+        return_exceptions=True,
+    )
+    assert results[0] is None
+    assert isinstance(results[1], JobException)
+    assert isinstance(results[2], JobException)
+    assert test.call == 1
+
+    await asyncio.gather(*[test.execute(0.1)])
+    assert test.call == 1
+
+
+@pytest.mark.parametrize("error", [None, PluginJobError])
+async def test_concurency_reject_and_rate_limit(
+    coresys: CoreSys, error: JobException | None
+):
+    """Test the concurrency reject and rate limit job execution limit."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name=f"test_concurency_reject_and_rate_limit_execute_{uuid4().hex}",
+            concurrency=JobConcurrency.REJECT,
+            throttle=JobThrottle.RATE_LIMIT,
+            throttle_period=timedelta(hours=1),
+            throttle_max_calls=1,
+            on_condition=error,
+        )
+        async def execute(self, sleep: float = 0):
+            """Execute the class method."""
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test = TestClass(coresys)
+    results = await asyncio.gather(
+        *[test.execute(0.1), test.execute(), test.execute()], return_exceptions=True
+    )
+    assert results[0] is None
+    assert isinstance(results[1], JobException)
+    assert isinstance(results[2], JobException)
+    assert test.call == 1
+
+    with pytest.raises(JobException if error is None else error):
+        await test.execute()
+    assert test.call == 1
+
+    with time_machine.travel(utcnow() + timedelta(hours=1)):
+        await test.execute()
+    assert test.call == 2


@@ -9,6 +9,7 @@ import pytest
 from supervisor.const import CoreState
 from supervisor.coresys import CoreSys
 from supervisor.exceptions import HassOSJobError
+from supervisor.resolution.const import UnhealthyReason

 from tests.common import MockResponse
 from tests.dbus_service_mocks.base import DBusServiceMock
@@ -85,6 +86,21 @@ async def test_update_fails_if_out_of_date(
        await coresys.os.update()


async def test_update_fails_if_unhealthy(
    coresys: CoreSys,
) -> None:
    """Test update of OS fails if Supervisor is unhealthy."""
    await coresys.core.set_state(CoreState.RUNNING)
    coresys.resolution.add_unhealthy_reason(UnhealthyReason.DUPLICATE_OS_INSTALLATION)

    with (
        patch.object(
            type(coresys.os), "available", new=PropertyMock(return_value=True)
        ),
        pytest.raises(HassOSJobError),
    ):
        await coresys.os.update()


async def test_board_name_supervised(coresys: CoreSys) -> None:
    """Test board name is supervised when not on haos."""
    with patch("supervisor.os.manager.CPE.get_product", return_value=["not-hassos"]):


@@ -0,0 +1,83 @@
"""Test OS Version evaluation."""

from unittest.mock import PropertyMock, patch

from awesomeversion import AwesomeVersion
import pytest

from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.os.manager import OSManager
from supervisor.resolution.evaluations.os_version import EvaluateOSVersion


@pytest.mark.parametrize(
    "current,latest,expected",
    [
        ("10.0", "15.0", True),  # 5 major behind, should be unsupported
        ("10.0", "14.0", False),  # 4 major behind, should be supported
        ("10.2", "11.0", False),  # 1 major behind, supported
        ("10.4", "10.5", False),  # same major, supported
        ("10.5", "10.5", False),  # up to date, supported
        ("10.5", "10.6", False),  # same major, supported
        ("10.0", "13.3", False),  # 3 major behind, supported
        ("10.2.dev20240321", "15.0", True),  # 5 major behind, dev version, unsupported
        ("10.2.dev20240321", "13.0", False),  # 3 major behind, dev version, supported
        ("10.2.rc2", "15.0", True),  # 5 major behind, rc version, unsupported
        ("10.2.rc2", "13.0", False),  # 3 major behind, rc version, supported
        (None, "15.0", False),  # No current version info, check skipped
        ("2.0", None, False),  # No latest version info, check skipped
        (
            "9ccda431973acf17e4221850b08f3280b723df8d",
            "15.0",
            True,
        ),  # Dev setup running on a commit hash, unsupported
    ],
)
@pytest.mark.usefixtures("os_available")
async def test_os_version_evaluation(
    coresys: CoreSys, current: str | None, latest: str | None, expected: bool
):
    """Test evaluation logic on versions."""
    evaluation = EvaluateOSVersion(coresys)
    await coresys.core.set_state(CoreState.RUNNING)

    with (
        patch.object(
            OSManager,
            "version",
            new=PropertyMock(return_value=current and AwesomeVersion(current)),
        ),
        patch.object(
            OSManager,
            "latest_version",
            new=PropertyMock(return_value=latest and AwesomeVersion(latest)),
        ),
    ):
        assert evaluation.reason not in coresys.resolution.unsupported
        await evaluation()
        assert (evaluation.reason in coresys.resolution.unsupported) is expected


async def test_did_run(coresys: CoreSys):
    """Test that the evaluation ran as expected."""
    evaluation = EvaluateOSVersion(coresys)
    should_run = evaluation.states
    should_not_run = [state for state in CoreState if state not in should_run]
    assert len(should_run) != 0
    assert len(should_not_run) != 0

    with patch(
        "supervisor.resolution.evaluations.os_version.EvaluateOSVersion.evaluate",
        return_value=None,
    ) as evaluate:
        for state in should_run:
            await coresys.core.set_state(state)
            await evaluation()
            evaluate.assert_called_once()
            evaluate.reset_mock()

        for state in should_not_run:
            await coresys.core.set_state(state)
            await evaluation()
            evaluate.assert_not_called()
            evaluate.reset_mock()