Compare commits

...

26 Commits

Author SHA1 Message Date
dependabot[bot]
38758d05a8 Bump actions/checkout from 3.5.3 to 3.6.0 (#4508)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-25 09:54:14 +02:00
Mike Degatano
a79fa14ee7 Don't notify listeners on CoreState.CLOSE (#4506) 2023-08-25 07:22:49 +02:00
Pascal Vizeli
1eb95b4d33 Remove old add-on state refresh (#4504) 2023-08-24 11:04:31 -04:00
dependabot[bot]
d04e47f5b3 Bump dbus-fast from 1.92.0 to 1.93.0 (#4501)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.92.0 to 1.93.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.92.0...v1.93.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-23 17:12:49 -04:00
Mike Degatano
dad5118f21 Use dataclasses asdict for dataclasses (#4502) 2023-08-22 17:50:48 -04:00
Florian Bachmann
acc0e5c989 Allows the supervisor to send a session's user to addon with header X-Remote-User (#4152)
* Working draft for x-remote-user

* Renames prop to remote_user

* Allows to set in addon description whether it requests the username

* Fixes addon-options schema

* Sends user ID instead of username to addons

* Adds tests

* Removes configurability of remote-user forwarding

* Update const.py

* Also adds username header

* Fetches full user info object from homeassistant

* Cleaner validation and dataclasses

* Fixes linting

* Fixes linting

* Tries to fix test

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Resolves PR comments

* Linting

* Fixes tests

* Update const.py

* Removes header keys if not required

* Moves ignoring user ID headers if no session_data is given

* simplify

* fix lint with new job

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
Co-authored-by: Pascal Vizeli <pascal.vizeli@syshack.ch>
2023-08-22 10:11:13 +02:00
dependabot[bot]
204fcdf479 Bump dbus-fast from 1.91.2 to 1.92.0 (#4500)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.91.2 to 1.92.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.91.2...v1.92.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-21 11:03:02 +02:00
Mike Degatano
93ba8a3574 Add job names and references everywhere (#4495)
* Add job names and references everywhere

* Remove group names check and switch to const

* Ensure unique job names in decorator tests
2023-08-21 09:15:37 +02:00
dependabot[bot]
f2f9e3b514 Bump dbus-fast from 1.86.0 to 1.91.2 (#4485)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.86.0 to 1.91.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.86.0...v1.91.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:32:27 -04:00
Mike Degatano
61288559b3 Always stop the addon before restoring it (#4492)
* Always stop the addon before restoring it

* patch ingress refresh to avoid timeout
2023-08-15 13:08:45 +02:00
dependabot[bot]
bd2c99a455 Bump awesomeversion from 23.5.0 to 23.8.0 (#4494)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:38:43 +02:00
dependabot[bot]
1937348b24 Bump time-machine from 2.11.0 to 2.12.0 (#4493)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:55:26 +02:00
dependabot[bot]
b7b2fae325 Bump coverage from 7.2.7 to 7.3.0 (#4491)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-14 09:49:04 +02:00
dependabot[bot]
11115923b2 Bump async-timeout from 4.0.2 to 4.0.3 (#4488)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-11 12:13:28 +02:00
dependabot[bot]
295133d2e9 Bump home-assistant/builder from 2023.06.1 to 2023.08.0 (#4489)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-11 10:23:47 +02:00
Mike Degatano
3018b851c8 Missing an await in addon data (#4487) 2023-08-10 16:31:43 -04:00
Mike Degatano
222c3fd485 Address addon storage race condition (#4481)
* Address addon storage race condition

* Add some error test cases
2023-08-10 15:24:43 -04:00
Stefan Agner
9650fd2ba1 Extend container image name validator (#4480)
* Extend container image name validator

The current validator allows certain invalid names (e.g. upper
case), but disallows valid cases (such as ttl.sh/myimage).
Improve the container image validator to support more valid
options and at the same time disallow some of the invalid
options.

Note that this is not a complete/perfect validation still. A much
more sophisticated regex would be necessary to be 100% accurate.
Also we format the string and replace {machine}/{arch} using Python
format strings. In that regard the image format in Supervisor deviates
from the Docker/OCI container image name format.

* Use an actual invalid image name in config validation
2023-08-10 12:58:33 -04:00
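
Note: as a rough illustration of the validation trade-off described in the commit message above, a simplified, hypothetical validator (not the exact regex from this commit) that accepts lowercase registry/repository names such as ttl.sh/myimage plus Supervisor-style {machine}/{arch} placeholders, while rejecting upper-case names, could look like:

    import re

    # Hypothetical, simplified pattern -- the real validator is more involved.
    RE_IMAGE = re.compile(
        r"^(?:[a-z0-9][a-z0-9.\-]*(?::\d+)?/)?"  # optional registry[:port]/
        r"[a-z0-9{][a-z0-9._\-{}]*(?:/[a-z0-9{][a-z0-9._\-{}]*)*$"
    )

    def valid_image(value: str) -> bool:
        """Return True if value looks like a valid (templated) image name."""
        return bool(RE_IMAGE.match(value))

    assert valid_image("ttl.sh/myimage")  # lowercase name with registry: allowed
    assert valid_image("ghcr.io/home-assistant/{arch}-addon-example")  # template field
    assert not valid_image("ttl.sh/MyImage")  # upper case: rejected

As the commit message notes, a 100% accurate check would need a far more sophisticated regex; this sketch only captures the broad strokes.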
Stefan Agner
c88fd9a7d9 Add Home Assistant Green (#4486) 2023-08-10 17:31:37 +02:00
Mike Degatano
1611beccd1 Add job group execution limit option (#4457)
* Add job group execution limit option

* Fix pylint issues

* Assign variable before usage

* Cleanup jobs when done

* Remove isinstance check for performance

* Explicitly raise from None

* Add some more documentation info
2023-08-08 16:49:17 -04:00
Mike Degatano
71077fb0f7 Fallback on interface name if path is missing (#4479) 2023-08-07 20:53:25 -04:00
dependabot[bot]
9647fba98f Bump cryptography from 41.0.2 to 41.0.3 (#4468)
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.2...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-04 17:50:46 -04:00
Mike Degatano
86f004e45a Use udev path instead of mac or name for nm match (#4476) 2023-08-04 17:39:35 -04:00
Mike Degatano
a98334ede8 Cancel startup wait task on addon uninstallation (#4475)
* Cancel startup wait task on addon uninstallation

* Await startup task instead

* Suppress cancelled error
2023-08-04 16:28:44 -04:00
dependabot[bot]
e19c2d6805 Bump aiohttp from 3.8.4 to 3.8.5 (#4467)
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.4 to 3.8.5.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.8.4...v3.8.5)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-03 11:01:58 -04:00
dependabot[bot]
847736dab8 Bump sentry-sdk from 1.29.0 to 1.29.2 (#4470)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 09:24:42 +02:00
89 changed files with 2007 additions and 651 deletions

View File

@@ -53,7 +53,7 @@ jobs:
       requirements: ${{ steps.requirements.outputs.changed }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
         with:
           fetch-depth: 0
@@ -92,7 +92,7 @@ jobs:
         arch: ${{ fromJson(needs.init.outputs.architectures) }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
         with:
           fetch-depth: 0
@@ -160,7 +160,7 @@ jobs:
        run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
      - name: Build supervisor
-       uses: home-assistant/builder@2023.06.1
+       uses: home-assistant/builder@2023.08.0
        with:
          args: |
            $BUILD_ARGS \
@@ -178,7 +178,7 @@ jobs:
     steps:
       - name: Checkout the repository
         if: needs.init.outputs.publish == 'true'
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Initialize git
         if: needs.init.outputs.publish == 'true'
@@ -203,11 +203,11 @@ jobs:
     timeout-minutes: 60
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
      - name: Build the Supervisor
        if: needs.init.outputs.publish != 'true'
-       uses: home-assistant/builder@2023.06.1
+       uses: home-assistant/builder@2023.08.0
        with:
          args: |
            --test \

View File

@@ -25,7 +25,7 @@ jobs:
     name: Prepare Python dependencies
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python
         id: python
         uses: actions/setup-python@v4.7.0
@@ -66,7 +66,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -95,7 +95,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Register hadolint problem matcher
         run: |
           echo "::add-matcher::.github/workflows/matchers/hadolint.json"
@@ -110,7 +110,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -154,7 +154,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -186,7 +186,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -227,7 +227,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -271,7 +271,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -303,7 +303,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -344,7 +344,7 @@ jobs:
     name: Run tests Python ${{ needs.prepare.outputs.python-version }}
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python
@@ -402,7 +402,7 @@ jobs:
     needs: ["pytest", "prepare"]
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
         uses: actions/setup-python@v4.7.0
         id: python

View File

@@ -11,7 +11,7 @@ jobs:
     name: Release Drafter
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
        with:
          fetch-depth: 0

View File

@@ -10,7 +10,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v3.6.0
       - name: Sentry Release
         uses: getsentry/action-release@v1.4.1
         env:

View File

@@ -1,14 +1,14 @@
 aiodns==3.0.0
-aiohttp==3.8.4
+aiohttp==3.8.5
-async_timeout==4.0.2
+async_timeout==4.0.3
 atomicwrites-homeassistant==1.4.1
 attrs==23.1.0
-awesomeversion==23.5.0
+awesomeversion==23.8.0
 brotli==1.0.9
 ciso8601==2.3.0
 colorlog==6.7.0
 cpe==1.2.1
-cryptography==41.0.2
+cryptography==41.0.3
 debugpy==1.6.7
 deepmerge==1.1.0
 dirhash==0.2.1
@@ -20,7 +20,7 @@ pulsectl==23.5.2
 pyudev==0.24.1
 ruamel.yaml==0.17.21
 securetar==2023.3.0
-sentry-sdk==1.29.0
+sentry-sdk==1.29.2
 voluptuous==0.13.1
-dbus-fast==1.86.0
+dbus-fast==1.93.0
 typing_extensions==4.7.1

View File

@@ -1,5 +1,5 @@
 black==23.7.0
-coverage==7.2.7
+coverage==7.3.0
 flake8-docstrings==1.7.0
 flake8==6.1.0
 pre-commit==3.3.3
@@ -11,6 +11,6 @@ pytest-cov==4.1.0
 pytest-timeout==2.1.0
 pytest==7.4.0
 pyupgrade==3.10.1
-time-machine==2.11.0
+time-machine==2.12.0
 typing_extensions==4.7.1
 urllib3==2.0.4

View File

@@ -152,11 +152,15 @@ class AddonManager(CoreSysAttributes):
             capture_exception(err)

     @Job(
+        name="addon_manager_install",
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
     )
     async def install(self, slug: str) -> None:
         """Install an add-on."""
+        if job := self.sys_jobs.get_job():
+            job.reference = slug
+
         if slug in self.local:
             raise AddonsError(f"Add-on {slug} is already installed", _LOGGER.warning)
         store = self.store.get(slug)
@@ -247,6 +251,7 @@ class AddonManager(CoreSysAttributes):
         _LOGGER.info("Add-on '%s' successfully removed", slug)

     @Job(
+        name="addon_manager_update",
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
     )
@@ -258,6 +263,9 @@ class AddonManager(CoreSysAttributes):
         Returns a coroutine that completes when addon has state 'started' (see addon.start)
         if addon is started after update. Else nothing is returned.
         """
+        if job := self.sys_jobs.get_job():
+            job.reference = slug
+
         if slug not in self.local:
             raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
         addon = self.local[slug]
@@ -307,6 +315,7 @@ class AddonManager(CoreSysAttributes):
     )

     @Job(
+        name="addon_manager_rebuild",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.INTERNET_HOST,
@@ -320,6 +329,9 @@ class AddonManager(CoreSysAttributes):
         Returns a coroutine that completes when addon has state 'started' (see addon.start)
         if addon is started after rebuild. Else nothing is returned.
         """
+        if job := self.sys_jobs.get_job():
+            job.reference = slug
+
         if slug not in self.local:
             raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
         addon = self.local[slug]
@@ -359,6 +371,7 @@ class AddonManager(CoreSysAttributes):
     )

     @Job(
+        name="addon_manager_restore",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.INTERNET_HOST,
@@ -374,6 +387,9 @@ class AddonManager(CoreSysAttributes):
         Returns a coroutine that completes when addon has state 'started' (see addon.start)
         if addon is started after restore. Else nothing is returned.
         """
+        if job := self.sys_jobs.get_job():
+            job.reference = slug
+
         if slug not in self.local:
             _LOGGER.debug("Add-on %s is not local available for restore", slug)
             addon = Addon(self.coresys, slug)
@@ -396,7 +412,10 @@ class AddonManager(CoreSysAttributes):
         return wait_for_start

-    @Job(conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST])
+    @Job(
+        name="addon_manager_repair",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST],
+    )
     async def repair(self) -> None:
         """Repair local add-ons."""
         needs_repair: list[Addon] = []
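
Note: the pattern this diff applies throughout the manager pairs a named @Job with a job reference taken from the operation's argument, so API consumers can tell which add-on a running job instance belongs to. Assembled from the hunks above, an updated method reads:

    @Job(
        name="addon_manager_install",
        conditions=ADDON_UPDATE_CONDITIONS,
        on_condition=AddonsJobError,
    )
    async def install(self, slug: str) -> None:
        """Install an add-on."""
        # Attach the add-on slug to the running job as its reference.
        if job := self.sys_jobs.get_job():
            job.reference = slug
        ...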

View File

@@ -129,54 +129,7 @@ class Addon(AddonModel):
         )
         self._listeners: list[EventListener] = []
         self._startup_event = asyncio.Event()
+        self._startup_task: asyncio.Task | None = None
-
-        @Job(
-            name=f"addon_{slug}_restart_after_problem",
-            limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
-            throttle_period=WATCHDOG_THROTTLE_PERIOD,
-            throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
-            on_condition=AddonsJobError,
-        )
-        async def restart_after_problem(addon: Addon, state: ContainerState):
-            """Restart unhealthy or failed addon."""
-            attempts = 0
-            while await addon.instance.current_state() == state:
-                if not addon.in_progress:
-                    _LOGGER.warning(
-                        "Watchdog found addon %s is %s, restarting...",
-                        addon.name,
-                        state.value,
-                    )
-                    try:
-                        if state == ContainerState.FAILED:
-                            # Ensure failed container is removed before attempting reanimation
-                            if attempts == 0:
-                                with suppress(DockerError):
-                                    await addon.instance.stop(remove_container=True)
-
-                            await (await addon.start())
-                        else:
-                            await (await addon.restart())
-                    except AddonsError as err:
-                        attempts = attempts + 1
-                        _LOGGER.error(
-                            "Watchdog restart of addon %s failed!", addon.name
-                        )
-                        capture_exception(err)
-                    else:
-                        break
-
-                if attempts >= WATCHDOG_MAX_ATTEMPTS:
-                    _LOGGER.critical(
-                        "Watchdog cannot restart addon %s, failed all %s attempts",
-                        addon.name,
-                        attempts,
-                    )
-                    break
-
-                await asyncio.sleep(WATCHDOG_RETRY_SECONDS)
-
-        self._restart_after_problem = restart_after_problem

     def __repr__(self) -> str:
         """Return internal representation."""
@@ -608,6 +561,12 @@ class Addon(AddonModel):
     async def unload(self) -> None:
         """Unload add-on and remove data."""
+        if self._startup_task:
+            # If we were waiting on startup, cancel that and let the task finish before proceeding
+            self._startup_task.cancel(f"Removing add-on {self.name} from system")
+            with suppress(asyncio.CancelledError):
+                await self._startup_task
+
         for listener in self._listeners:
             self.sys_bus.remove_listener(listener)
@@ -699,13 +658,18 @@ class Addon(AddonModel):
     async def _wait_for_startup(self) -> None:
         """Wait for startup event to be set with timeout."""
         try:
-            await asyncio.wait_for(self._startup_event.wait(), STARTUP_TIMEOUT)
+            self._startup_task = self.sys_create_task(self._startup_event.wait())
+            await asyncio.wait_for(self._startup_task, STARTUP_TIMEOUT)
         except asyncio.TimeoutError:
             _LOGGER.warning(
                 "Timeout while waiting for addon %s to start, took more then %s seconds",
                 self.name,
                 STARTUP_TIMEOUT,
             )
+        except asyncio.CancelledError as err:
+            _LOGGER.info("Wait for addon startup task cancelled due to: %s", err)
+        finally:
+            self._startup_task = None

     async def start(self) -> Awaitable[None]:
         """Set options and start add-on.
@@ -779,10 +743,7 @@ class Addon(AddonModel):
             raise AddonsError() from err

     async def write_stdin(self, data) -> None:
-        """Write data to add-on stdin.
-
-        Return a coroutine.
-        """
+        """Write data to add-on stdin."""
         if not self.with_stdin:
             raise AddonsNotSupportedError(
                 f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
@@ -877,7 +838,10 @@ class Addon(AddonModel):
                 await self._backup_command(self.backup_pre)
             elif is_running and self.backup_mode == AddonBackupMode.COLD:
                 _LOGGER.info("Shutdown add-on %s for cold backup", self.slug)
-                await self.instance.stop()
+                try:
+                    await self.instance.stop()
+                except DockerError as err:
+                    raise AddonsError() from err

             try:
                 _LOGGER.info("Building backup for add-on %s", self.slug)
@@ -950,6 +914,11 @@ class Addon(AddonModel):
                 self.slug, data[ATTR_USER], data[ATTR_SYSTEM], restore_image
             )

+            # Stop it first if its running
+            if await self.instance.is_running():
+                with suppress(DockerError):
+                    await self.instance.stop()
+
             # Check version / restore image
             version = data[ATTR_VERSION]
             if not await self.instance.exists():
@@ -967,9 +936,6 @@ class Addon(AddonModel):
                     _LOGGER.info("Restore/Update of image for addon %s", self.slug)
                     with suppress(DockerError):
                         await self.instance.update(version, restore_image)
-            else:
-                with suppress(DockerError):
-                    await self.instance.stop()

             # Restore data
             def _restore_data():
@@ -1019,6 +985,50 @@ class Addon(AddonModel):
         """
         return self.instance.check_trust()

+    @Job(
+        name="addon_restart_after_problem",
+        limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+        throttle_period=WATCHDOG_THROTTLE_PERIOD,
+        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
+        on_condition=AddonsJobError,
+    )
+    async def _restart_after_problem(self, state: ContainerState):
+        """Restart unhealthy or failed addon."""
+        attempts = 0
+        while await self.instance.current_state() == state:
+            if not self.in_progress:
+                _LOGGER.warning(
+                    "Watchdog found addon %s is %s, restarting...",
+                    self.name,
+                    state.value,
+                )
+                try:
+                    if state == ContainerState.FAILED:
+                        # Ensure failed container is removed before attempting reanimation
+                        if attempts == 0:
+                            with suppress(DockerError):
+                                await self.instance.stop(remove_container=True)
+
+                        await (await self.start())
+                    else:
+                        await (await self.restart())
+                except AddonsError as err:
+                    attempts = attempts + 1
+                    _LOGGER.error("Watchdog restart of addon %s failed!", self.name)
+                    capture_exception(err)
+                else:
+                    break
+
+            if attempts >= WATCHDOG_MAX_ATTEMPTS:
+                _LOGGER.critical(
+                    "Watchdog cannot restart addon %s, failed all %s attempts",
+                    self.name,
+                    attempts,
+                )
+                break
+
+            await asyncio.sleep(WATCHDOG_RETRY_SECONDS)
+
     async def container_state_changed(self, event: DockerContainerStateEvent) -> None:
         """Set addon state from container state."""
         if event.name != self.instance.name:
@@ -1053,4 +1063,4 @@ class Addon(AddonModel):
             ContainerState.STOPPED,
             ContainerState.UNHEALTHY,
         ]:
-            await self._restart_after_problem(self, event.state)
+            await self._restart_after_problem(event.state)
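
Note: the startup-task handling added above is a standard asyncio pattern: keep a handle to the waiting task so another coroutine can cancel it and then await its completion. A self-contained sketch of the same pattern, independent of Supervisor:

    import asyncio
    from contextlib import suppress

    async def main() -> None:
        started = asyncio.Event()
        # Keep a handle so the waiter can be cancelled from elsewhere.
        startup_task = asyncio.create_task(started.wait())

        async def unload() -> None:
            # Cancel the waiter and let it finish before proceeding.
            startup_task.cancel("add-on removed while waiting for startup")
            with suppress(asyncio.CancelledError):
                await startup_task

        try:
            await asyncio.wait_for(startup_task, timeout=0.1)
        except asyncio.TimeoutError:
            print("timed out waiting for startup")
        except asyncio.CancelledError as err:
            print(f"wait cancelled: {err}")
        await unload()

    asyncio.run(main())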

View File

@@ -1,5 +1,6 @@
 """Init file for Supervisor add-ons."""
 from abc import ABC, abstractmethod
+from collections import defaultdict
 from collections.abc import Awaitable, Callable
 from contextlib import suppress
 import logging
@@ -79,9 +80,11 @@ from ..const import (
     AddonStage,
     AddonStartup,
 )
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
 from ..docker.const import Capabilities
 from ..exceptions import AddonsNotSupportedError
+from ..jobs.const import JOB_GROUP_ADDON
+from ..jobs.job_group import JobGroup
 from .const import ATTR_BACKUP, ATTR_CODENOTARY, AddonBackupMode
 from .options import AddonOptions, UiOptions
 from .validate import RE_SERVICE, RE_VOLUME
@@ -91,12 +94,14 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)

 Data = dict[str, Any]

-class AddonModel(CoreSysAttributes, ABC):
+class AddonModel(JobGroup, ABC):
     """Add-on Data layout."""

     def __init__(self, coresys: CoreSys, slug: str):
         """Initialize data holder."""
-        self.coresys: CoreSys = coresys
+        super().__init__(
+            coresys, JOB_GROUP_ADDON.format_map(defaultdict(str, slug=slug)), slug
+        )
         self.slug: str = slug

     @property

View File

@@ -143,6 +143,8 @@ RE_MACHINE = re.compile(
     r"|raspberrypi3"
     r"|raspberrypi4-64"
     r"|raspberrypi4"
+    r"|yellow"
+    r"|green"
     r"|tinker"
     r")$"
 )

View File

@@ -1,11 +1,11 @@
 """Init file for Supervisor Audio RESTful API."""
 import asyncio
 from collections.abc import Awaitable
+from dataclasses import asdict
 import logging
 from typing import Any

 from aiohttp import web
-import attr
 import voluptuous as vol

 from ..const import (
@@ -76,15 +76,11 @@ class APIAudio(CoreSysAttributes):
             ATTR_UPDATE_AVAILABLE: self.sys_plugins.audio.need_update,
             ATTR_HOST: str(self.sys_docker.network.audio),
             ATTR_AUDIO: {
-                ATTR_CARD: [attr.asdict(card) for card in self.sys_host.sound.cards],
-                ATTR_INPUT: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.inputs
-                ],
-                ATTR_OUTPUT: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.outputs
-                ],
+                ATTR_CARD: [asdict(card) for card in self.sys_host.sound.cards],
+                ATTR_INPUT: [asdict(stream) for stream in self.sys_host.sound.inputs],
+                ATTR_OUTPUT: [asdict(stream) for stream in self.sys_host.sound.outputs],
                 ATTR_APPLICATION: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.applications
+                    asdict(stream) for stream in self.sys_host.sound.applications
                 ],
             },
         }
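
Note: dataclasses.asdict from the standard library replaces the third-party attr.asdict here; both recursively convert an instance into a plain dict ready for JSON serialization. A minimal standalone example (with a hypothetical SoundCard stand-in, not the Supervisor type):

    from dataclasses import asdict, dataclass

    @dataclass
    class SoundCard:
        # Hypothetical stand-in for the host sound objects serialized above.
        name: str
        index: int

    card = SoundCard(name="Built-in Audio", index=0)
    assert asdict(card) == {"name": "Built-in Audio", "index": 0}

This only works because the host sound objects are dataclasses, which is what the commit title refers to.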

View File

@@ -21,11 +21,18 @@ from ..const import (
     ATTR_ICON,
     ATTR_PANELS,
     ATTR_SESSION,
+    ATTR_SESSION_DATA_USER_ID,
     ATTR_TITLE,
+    HEADER_REMOTE_USER_DISPLAY_NAME,
+    HEADER_REMOTE_USER_ID,
+    HEADER_REMOTE_USER_NAME,
     HEADER_TOKEN,
     HEADER_TOKEN_OLD,
+    IngressSessionData,
+    IngressSessionDataUser,
 )
 from ..coresys import CoreSysAttributes
+from ..exceptions import HomeAssistantAPIError
 from .const import COOKIE_INGRESS
 from .utils import api_process, api_validate, require_home_assistant
@@ -33,10 +40,23 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)

 VALIDATE_SESSION_DATA = vol.Schema({ATTR_SESSION: str})

+"""Expected optional payload of create session request"""
+SCHEMA_INGRESS_CREATE_SESSION_DATA = vol.Schema(
+    {
+        vol.Optional(ATTR_SESSION_DATA_USER_ID): str,
+    }
+)

 class APIIngress(CoreSysAttributes):
     """Ingress view to handle add-on webui routing."""

+    _list_of_users: list[IngressSessionDataUser]
+
+    def __init__(self) -> None:
+        """Initialize APIIngress."""
+        self._list_of_users = []
+
     def _extract_addon(self, request: web.Request) -> Addon:
         """Return addon, throw an exception it it doesn't exist."""
         token = request.match_info.get("token")
@@ -71,7 +91,19 @@ class APIIngress(CoreSysAttributes):
     @require_home_assistant
     async def create_session(self, request: web.Request) -> dict[str, Any]:
         """Create a new session."""
-        session = self.sys_ingress.create_session()
+        schema_ingress_config_session_data = await api_validate(
+            SCHEMA_INGRESS_CREATE_SESSION_DATA, request
+        )
+        data: IngressSessionData | None = None
+
+        if ATTR_SESSION_DATA_USER_ID in schema_ingress_config_session_data:
+            user = await self._find_user_by_id(
+                schema_ingress_config_session_data[ATTR_SESSION_DATA_USER_ID]
+            )
+            if user:
+                data = IngressSessionData(user)
+
+        session = self.sys_ingress.create_session(data)
         return {ATTR_SESSION: session}

     @api_process
@@ -99,13 +131,14 @@ class APIIngress(CoreSysAttributes):
         # Process requests
         addon = self._extract_addon(request)
         path = request.match_info.get("path")
+        session_data = self.sys_ingress.get_session_data(session)
         try:
             # Websocket
             if _is_websocket(request):
-                return await self._handle_websocket(request, addon, path)
+                return await self._handle_websocket(request, addon, path, session_data)

             # Request
-            return await self._handle_request(request, addon, path)
+            return await self._handle_request(request, addon, path, session_data)

         except aiohttp.ClientError as err:
             _LOGGER.error("Ingress error: %s", err)
@@ -113,7 +146,11 @@ class APIIngress(CoreSysAttributes):
         raise HTTPBadGateway()

     async def _handle_websocket(
-        self, request: web.Request, addon: Addon, path: str
+        self,
+        request: web.Request,
+        addon: Addon,
+        path: str,
+        session_data: IngressSessionData | None,
     ) -> web.WebSocketResponse:
         """Ingress route for websocket."""
         if hdrs.SEC_WEBSOCKET_PROTOCOL in request.headers:
@@ -131,7 +168,7 @@ class APIIngress(CoreSysAttributes):

         # Preparing
         url = self._create_url(addon, path)
-        source_header = _init_header(request, addon)
+        source_header = _init_header(request, addon, session_data)

         # Support GET query
         if request.query_string:
@@ -157,11 +194,15 @@ class APIIngress(CoreSysAttributes):
         return ws_server

     async def _handle_request(
-        self, request: web.Request, addon: Addon, path: str
+        self,
+        request: web.Request,
+        addon: Addon,
+        path: str,
+        session_data: IngressSessionData | None,
     ) -> web.Response | web.StreamResponse:
         """Ingress route for request."""
         url = self._create_url(addon, path)
-        source_header = _init_header(request, addon)
+        source_header = _init_header(request, addon, session_data)

         # Passing the raw stream breaks requests for some webservers
         # since we just need it for POST requests really, for all other methods
@@ -217,11 +258,33 @@ class APIIngress(CoreSysAttributes):

         return response

+    async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
+        """Find user object by the user's ID."""
+        try:
+            list_of_users = await self.sys_homeassistant.get_users()
+        except (HomeAssistantAPIError, TypeError) as err:
+            _LOGGER.error(
+                "%s error occurred while requesting list of users: %s", type(err), err
+            )
+            return None
+
+        if list_of_users is not None:
+            self._list_of_users = list_of_users
+
+        return next((user for user in self._list_of_users if user.id == user_id), None)

-def _init_header(request: web.Request, addon: str) -> CIMultiDict | dict[str, str]:
+
+def _init_header(
+    request: web.Request, addon: Addon, session_data: IngressSessionData | None
+) -> CIMultiDict | dict[str, str]:
     """Create initial header."""
     headers = {}

+    if session_data is not None:
+        headers[HEADER_REMOTE_USER_ID] = session_data.user.id
+        headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
+        headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
+
     # filter flags
     for name, value in request.headers.items():
         if name in (
@@ -234,6 +297,9 @@ def _init_header(request: web.Request, addon: str) -> CIMultiDict | dict[str, str]:
             hdrs.SEC_WEBSOCKET_KEY,
             istr(HEADER_TOKEN),
             istr(HEADER_TOKEN_OLD),
+            istr(HEADER_REMOTE_USER_ID),
+            istr(HEADER_REMOTE_USER_NAME),
+            istr(HEADER_REMOTE_USER_DISPLAY_NAME),
         ):
             continue
         headers[name] = value
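
Note: from the add-on's perspective, the three new headers simply arrive on each proxied ingress request (and only when the session was created with user data). A minimal hypothetical add-on handler reading them with aiohttp:

    from aiohttp import web

    async def handle(request: web.Request) -> web.Response:
        # Absent unless the ingress session carries user data.
        user_id = request.headers.get("X-Remote-User-Id")
        username = request.headers.get("X-Remote-User-Name")
        display_name = request.headers.get("X-Remote-User-Display-Name")
        return web.Response(text=f"Hello {display_name or 'anonymous'}!")

    app = web.Application()
    app.add_routes([web.get("/", handle)])
    # web.run_app(app, port=8099)  # port is an example value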

View File

@@ -277,6 +277,7 @@ class APINetwork(CoreSysAttributes):
             )

         vlan_interface = Interface(
+            "",
             "",
             "",
             True,

View File

@@ -110,6 +110,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         backup.store_repositories()
         backup.store_dockerconfig()

+        # Add backup ID to job
+        if job := self.sys_jobs.get_job():
+            job.reference = backup.slug
+
         return backup

     def load(self):
@@ -224,7 +228,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         finally:
             self.sys_core.state = CoreState.RUNNING

-    @Job(conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING])
+    @Job(
+        name="backup_manager_full_backup",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING],
+    )
     async def do_backup_full(
         self,
         name="",
@@ -250,7 +257,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         _LOGGER.info("Creating full backup with slug %s completed", backup.slug)
         return backup

-    @Job(conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING])
+    @Job(
+        name="backup_manager_partial_backup",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING],
+    )
     async def do_backup_partial(
         self,
         name: str = "",
@@ -371,16 +381,21 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             await self.sys_homeassistant.core.restart()

     @Job(
+        name="backup_manager_full_restore",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.HEALTHY,
             JobCondition.INTERNET_HOST,
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
-        ]
+        ],
     )
     async def do_restore_full(self, backup: Backup, password=None):
         """Restore a backup."""
+        # Add backup ID to job
+        if job := self.sys_jobs.get_job():
+            job.reference = backup.slug
+
         if self.lock.locked():
             _LOGGER.error("A backup/restore process is already running")
             return False
@@ -418,13 +433,14 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         _LOGGER.info("Full-Restore %s done", backup.slug)

     @Job(
+        name="backup_manager_partial_restore",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.HEALTHY,
             JobCondition.INTERNET_HOST,
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
-        ]
+        ],
     )
     async def do_restore_partial(
         self,
@@ -435,6 +451,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         password: str | None = None,
     ):
         """Restore a backup."""
+        # Add backup ID to job
+        if job := self.sys_jobs.get_job():
+            job.reference = backup.slug
+
         if self.lock.locked():
             _LOGGER.error("A backup/restore process is already running")
             return False

View File

@@ -1,4 +1,5 @@
 """Constants file for Supervisor."""
+from dataclasses import dataclass
 from enum import Enum
 from ipaddress import ip_network
 from pathlib import Path
@@ -69,6 +70,9 @@ JSON_RESULT = "result"
 RESULT_ERROR = "error"
 RESULT_OK = "ok"

+HEADER_REMOTE_USER_ID = "X-Remote-User-Id"
+HEADER_REMOTE_USER_NAME = "X-Remote-User-Name"
+HEADER_REMOTE_USER_DISPLAY_NAME = "X-Remote-User-Display-Name"
 HEADER_TOKEN_OLD = "X-Hassio-Key"
 HEADER_TOKEN = "X-Supervisor-Token"
@@ -271,6 +275,9 @@ ATTR_SERVERS = "servers"
 ATTR_SERVICE = "service"
 ATTR_SERVICES = "services"
 ATTR_SESSION = "session"
+ATTR_SESSION_DATA = "session_data"
+ATTR_SESSION_DATA_USER = "user"
+ATTR_SESSION_DATA_USER_ID = "user_id"
 ATTR_SIGNAL = "signal"
 ATTR_SIZE = "size"
 ATTR_SLUG = "slug"
@@ -464,6 +471,22 @@ class CpuArch(str, Enum):
     AMD64 = "amd64"

+
+@dataclass
+class IngressSessionDataUser:
+    """Format of an IngressSessionDataUser object."""
+
+    id: str
+    display_name: str
+    username: str
+
+
+@dataclass
+class IngressSessionData:
+    """Format of an IngressSessionData object."""
+
+    user: IngressSessionDataUser
+

 STARTING_STATES = [
     CoreState.INITIALIZE,
     CoreState.STARTUP,

View File

@@ -70,13 +70,16 @@ class Core(CoreSysAttributes):
             )
         finally:
             self._state = new_state
-            self.sys_bus.fire_event(BusEvent.SUPERVISOR_STATE_CHANGE, new_state)
-
-            # These will be received by HA after startup has completed which won't make sense
-            if new_state not in STARTING_STATES:
-                self.sys_homeassistant.websocket.supervisor_update_event(
-                    "info", {"state": new_state}
-                )
+
+            # Don't attempt to notify anyone on CLOSE as we're about to stop the event loop
+            if new_state != CoreState.CLOSE:
+                self.sys_bus.fire_event(BusEvent.SUPERVISOR_STATE_CHANGE, new_state)
+
+                # These will be received by HA after startup has completed which won't make sense
+                if new_state not in STARTING_STATES:
+                    self.sys_homeassistant.websocket.supervisor_update_event(
+                        "info", {"state": new_state}
+                    )

     async def connect(self):
         """Connect Supervisor container."""

View File

@@ -6,6 +6,7 @@
     "raspberrypi4": ["armv7", "armhf"],
     "raspberrypi4-64": ["aarch64", "armv7", "armhf"],
     "yellow": ["aarch64", "armv7", "armhf"],
+    "green": ["aarch64", "armv7", "armhf"],
     "tinker": ["armv7", "armhf"],
     "odroid-c2": ["aarch64", "armv7", "armhf"],
     "odroid-c4": ["aarch64", "armv7", "armhf"],

View File

@@ -144,6 +144,7 @@ DBUS_ATTR_OPERATION = "Operation"
 DBUS_ATTR_OPTIONS = "Options"
 DBUS_ATTR_PARSER_VERSION = "ParserVersion"
 DBUS_ATTR_PARTITIONS = "Partitions"
+DBUS_ATTR_PATH = "Path"
 DBUS_ATTR_POWER_LED = "PowerLED"
 DBUS_ATTR_PRIMARY_CONNECTION = "PrimaryConnection"
 DBUS_ATTR_READ_ONLY = "ReadOnly"

View File

@@ -66,7 +66,7 @@ class IpProperties:

 @dataclass(slots=True)
-class DeviceProperties:
-    """Device properties object for Network Manager."""
+class MatchProperties:
+    """Match properties object for Network Manager."""

-    match_device: str | None
+    path: list[str] | None = None

View File

@@ -11,6 +11,7 @@ from ..const import (
     DBUS_ATTR_DRIVER,
     DBUS_ATTR_HWADDRESS,
     DBUS_ATTR_MANAGED,
+    DBUS_ATTR_PATH,
     DBUS_IFACE_DEVICE,
     DBUS_NAME_NM,
     DBUS_OBJECT_BASE,
@@ -74,6 +75,12 @@ class NetworkInterface(DBusInterfaceProxy):
         """Return hardware address (i.e. mac address) of device."""
         return self.properties[DBUS_ATTR_HWADDRESS]

+    @property
+    @dbus_property
+    def path(self) -> str:
+        """Return The path of the device as exposed by the udev property ID_PATH."""
+        return self.properties[DBUS_ATTR_PATH]
+
     @property
     def connection(self) -> NetworkConnection | None:
         """Return the connection used for this interface."""

View File

@@ -11,9 +11,9 @@ from ...interface import DBusInterface
 from ...utils import dbus_connected
 from ..configuration import (
     ConnectionProperties,
-    DeviceProperties,
     EthernetProperties,
     IpProperties,
+    MatchProperties,
     VlanProperties,
     WirelessProperties,
     WirelessSecurityProperties,
@@ -26,7 +26,8 @@ CONF_ATTR_802_WIRELESS_SECURITY = "802-11-wireless-security"
 CONF_ATTR_VLAN = "vlan"
 CONF_ATTR_IPV4 = "ipv4"
 CONF_ATTR_IPV6 = "ipv6"
-CONF_ATTR_DEVICE = "device"
+CONF_ATTR_MATCH = "match"
+CONF_ATTR_PATH = "path"

 ATTR_ID = "id"
 ATTR_UUID = "uuid"
@@ -37,7 +38,7 @@ ATTR_POWERSAVE = "powersave"
 ATTR_AUTH_ALG = "auth-alg"
 ATTR_KEY_MGMT = "key-mgmt"
 ATTR_INTERFACE_NAME = "interface-name"
-ATTR_MATCH_DEVICE = "match-device"
+ATTR_PATH = "path"

 IPV4_6_IGNORE_FIELDS = [
     "addresses",
@@ -88,7 +89,7 @@ class NetworkSetting(DBusInterface):
         self._vlan: VlanProperties | None = None
         self._ipv4: IpProperties | None = None
         self._ipv6: IpProperties | None = None
-        self._device: DeviceProperties | None = None
+        self._match: MatchProperties | None = None

     @property
     def connection(self) -> ConnectionProperties | None:
@@ -126,9 +127,9 @@ class NetworkSetting(DBusInterface):
         return self._ipv6

     @property
-    def device(self) -> DeviceProperties | None:
-        """Return device properties if any."""
-        return self._device
+    def match(self) -> MatchProperties | None:
+        """Return match properties if any."""
+        return self._match

     @dbus_connected
     async def get_settings(self) -> dict[str, Any]:
@@ -166,7 +167,7 @@ class NetworkSetting(DBusInterface):
             CONF_ATTR_IPV6,
             ignore_current_value=IPV4_6_IGNORE_FIELDS,
         )
-        _merge_settings_attribute(new_settings, settings, CONF_ATTR_DEVICE)
+        _merge_settings_attribute(new_settings, settings, CONF_ATTR_MATCH)

         await self.dbus.Settings.Connection.call_update(new_settings)
@@ -233,7 +234,5 @@ class NetworkSetting(DBusInterface):
                 data[CONF_ATTR_IPV6].get(ATTR_METHOD),
             )

-        if CONF_ATTR_DEVICE in data:
-            self._device = DeviceProperties(
-                data[CONF_ATTR_DEVICE].get(ATTR_MATCH_DEVICE)
-            )
+        if CONF_ATTR_MATCH in data:
+            self._match = MatchProperties(data[CONF_ATTR_MATCH].get(ATTR_PATH))

View File

@@ -9,14 +9,14 @@ from dbus_fast import Variant

 from . import (
     ATTR_ASSIGNED_MAC,
-    ATTR_MATCH_DEVICE,
     CONF_ATTR_802_ETHERNET,
     CONF_ATTR_802_WIRELESS,
     CONF_ATTR_802_WIRELESS_SECURITY,
     CONF_ATTR_CONNECTION,
-    CONF_ATTR_DEVICE,
     CONF_ATTR_IPV4,
     CONF_ATTR_IPV6,
+    CONF_ATTR_MATCH,
+    CONF_ATTR_PATH,
     CONF_ATTR_VLAN,
 )
 from ....host.const import InterfaceMethod, InterfaceType
@@ -59,11 +59,10 @@ def get_connection_from_interface(
     }

     if interface.type != InterfaceType.VLAN:
-        conn[CONF_ATTR_DEVICE] = {
-            ATTR_MATCH_DEVICE: Variant(
-                "s", f"mac:{interface.mac},interface-name:{interface.name}"
-            )
-        }
+        if interface.path:
+            conn[CONF_ATTR_MATCH] = {CONF_ATTR_PATH: Variant("as", [interface.path])}
+        else:
+            conn[CONF_ATTR_CONNECTION]["interface-name"] = Variant("s", interface.name)

     ipv4 = {}
     if not interface.ipv4 or interface.ipv4.method == InterfaceMethod.AUTO:
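
Note: the effect of this hunk is that a connection profile is bound to hardware via the udev path in NetworkManager's "match" setting rather than a mac/interface-name match string. A sketch of the two resulting settings payloads (dbus_fast.Variant used as in the hunk above; the path string is an example value):

    from dbus_fast import Variant

    # Udev path known: bind the profile via the "match" setting.
    conn_with_path = {
        "match": {"path": Variant("as", ["platform-fd500000.pcie-pci-0000:01:00.0"])}
    }

    # Fallback: pin the profile to the interface name instead.
    conn_without_path = {
        "connection": {"interface-name": Variant("s", "eth0")}
    }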

View File

@@ -36,14 +36,15 @@ from ..exceptions import (
CoreDNSError, CoreDNSError,
DBusError, DBusError,
DockerError, DockerError,
DockerJobError,
DockerNotFound, DockerNotFound,
HardwareNotFound, HardwareNotFound,
) )
from ..hardware.const import PolicyGroup from ..hardware.const import PolicyGroup
from ..hardware.data import Device from ..hardware.data import Device
from ..jobs.decorator import Job, JobCondition, JobExecutionLimit from ..jobs.const import JobCondition, JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, SuggestionType from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils import process_lock
from ..utils.sentry import capture_exception from ..utils.sentry import capture_exception
from .const import ( from .const import (
ENV_TIME, ENV_TIME,
@@ -73,8 +74,8 @@ class DockerAddon(DockerInterface):
def __init__(self, coresys: CoreSys, addon: Addon): def __init__(self, coresys: CoreSys, addon: Addon):
"""Initialize Docker Home Assistant wrapper.""" """Initialize Docker Home Assistant wrapper."""
super().__init__(coresys)
self.addon: Addon = addon self.addon: Addon = addon
super().__init__(coresys)
self._hw_listener: EventListener | None = None self._hw_listener: EventListener | None = None
@@ -493,7 +494,12 @@ class DockerAddon(DockerInterface):
return mounts return mounts
async def _run(self) -> None: @Job(
name="docker_addon_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
if await self.is_running(): if await self.is_running():
return return
@@ -503,7 +509,7 @@ class DockerAddon(DockerInterface):
_LOGGER.warning("%s running with disabled protected mode!", self.addon.name) _LOGGER.warning("%s running with disabled protected mode!", self.addon.name)
# Cleanup # Cleanup
await self._stop() await self.stop()
# Don't set a hostname if no separate UTS namespace is used # Don't set a hostname if no separate UTS namespace is used
hostname = None if self.uts_mode else self.addon.hostname hostname = None if self.uts_mode else self.addon.hostname
@@ -563,7 +569,12 @@ class DockerAddon(DockerInterface):
BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events
) )
async def _update( @Job(
name="docker_addon_update",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False self, version: AwesomeVersion, image: str | None = None, latest: bool = False
) -> None: ) -> None:
"""Update a docker image.""" """Update a docker image."""
@@ -574,15 +585,20 @@ class DockerAddon(DockerInterface):
) )
# Update docker image # Update docker image
await self._install( await self.install(
version, image=image, latest=latest, need_build=self.addon.latest_need_build version, image=image, latest=latest, need_build=self.addon.latest_need_build
) )
# Stop container & cleanup # Stop container & cleanup
with suppress(DockerError): with suppress(DockerError):
await self._stop() await self.stop()
async def _install( @Job(
name="docker_addon_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def install(
self, self,
version: AwesomeVersion, version: AwesomeVersion,
image: str | None = None, image: str | None = None,
@@ -595,7 +611,7 @@ class DockerAddon(DockerInterface):
if need_build is None and self.addon.need_build or need_build: if need_build is None and self.addon.need_build or need_build:
await self._build(version) await self._build(version)
else: else:
await super()._install(version, image, latest, arch) await super().install(version, image, latest, arch)
async def _build(self, version: AwesomeVersion) -> None: async def _build(self, version: AwesomeVersion) -> None:
"""Build a Docker container.""" """Build a Docker container."""
@@ -632,14 +648,22 @@ class DockerAddon(DockerInterface):
_LOGGER.info("Build %s:%s done", self.image, version) _LOGGER.info("Build %s:%s done", self.image, version)
@process_lock @Job(
name="docker_addon_export_image",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
def export_image(self, tar_file: Path) -> Awaitable[None]: def export_image(self, tar_file: Path) -> Awaitable[None]:
"""Export current images into a tar file.""" """Export current images into a tar file."""
return self.sys_run_in_executor( return self.sys_run_in_executor(
self.sys_docker.export_image, self.image, self.version, tar_file self.sys_docker.export_image, self.image, self.version, tar_file
) )
@process_lock @Job(
name="docker_addon_import_image",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def import_image(self, tar_file: Path) -> None: async def import_image(self, tar_file: Path) -> None:
"""Import a tar file as image.""" """Import a tar file as image."""
docker_image = await self.sys_run_in_executor( docker_image = await self.sys_run_in_executor(
@@ -650,9 +674,13 @@ class DockerAddon(DockerInterface):
_LOGGER.info("Importing image %s and version %s", tar_file, self.version) _LOGGER.info("Importing image %s and version %s", tar_file, self.version)
with suppress(DockerError): with suppress(DockerError):
await self._cleanup() await self.cleanup()
@process_lock @Job(
name="docker_addon_write_stdin",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def write_stdin(self, data: bytes) -> None: async def write_stdin(self, data: bytes) -> None:
"""Write to add-on stdin.""" """Write to add-on stdin."""
if not await self.is_running(): if not await self.is_running():
@@ -682,7 +710,12 @@ class DockerAddon(DockerInterface):
_LOGGER.error("Can't write to %s stdin: %s", self.name, err) _LOGGER.error("Can't write to %s stdin: %s", self.name, err)
raise DockerError() from err raise DockerError() from err
async def _stop(self, remove_container: bool = True) -> None: @Job(
name="docker_addon_stop",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def stop(self, remove_container: bool = True) -> None:
"""Stop/remove Docker container.""" """Stop/remove Docker container."""
# DNS # DNS
if self.ip_address != NO_ADDDRESS: if self.ip_address != NO_ADDDRESS:
@@ -697,7 +730,7 @@ class DockerAddon(DockerInterface):
            self.sys_bus.remove_listener(self._hw_listener)
            self._hw_listener = None

-        await super()._stop(remove_container)
+        await super().stop(remove_container)

    async def _validate_trust(
        self, image_id: str, image: str, version: AwesomeVersion
@@ -709,7 +742,11 @@ class DockerAddon(DockerInterface):
        checksum = image_id.partition(":")[2]
        return await self.sys_security.verify_content(self.addon.codenotary, checksum)

-    @Job(conditions=[JobCondition.OS_AGENT], limit=JobExecutionLimit.SINGLE_WAIT)
+    @Job(
+        name="docker_addon_hardware_events",
+        conditions=[JobCondition.OS_AGENT],
+        limit=JobExecutionLimit.SINGLE_WAIT,
+    )
    async def _hardware_events(self, device: Device) -> None:
        """Process Hardware events for adjust device access."""
        if not any(

View File

@@ -6,7 +6,10 @@ from docker.types import Mount
 from ..const import DOCKER_CPU_RUNTIME_ALLOCATION, MACHINE_ID
 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
     MOUNT_DBUS,
@@ -82,13 +85,18 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
            return None
        return DOCKER_CPU_RUNTIME_ALLOCATION

-    async def _run(self) -> None:
+    @Job(
+        name="docker_audio_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
        """Run Docker image."""
        if await self.is_running():
            return

        # Cleanup
-        await self._stop()
+        await self.stop()

        # Create & Run container
        docker_container = await self.sys_run_in_executor(

View File

@@ -2,6 +2,9 @@
 import logging

 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, ENV_TOKEN
 from .interface import DockerInterface
@@ -23,13 +26,18 @@ class DockerCli(DockerInterface, CoreSysAttributes):
"""Return name of Docker container.""" """Return name of Docker container."""
return CLI_DOCKER_NAME return CLI_DOCKER_NAME
async def _run(self) -> None: @Job(
name="docker_cli_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
if await self.is_running(): if await self.is_running():
return return
# Cleanup # Cleanup
await self._stop() await self.stop()
# Create & Run container # Create & Run container
docker_container = await self.sys_run_in_executor( docker_container = await self.sys_run_in_executor(

View File

@@ -4,6 +4,9 @@ import logging
 from docker.types import Mount

 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, MOUNT_DBUS, MountType
 from .interface import DockerInterface
@@ -25,13 +28,18 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
"""Return name of Docker container.""" """Return name of Docker container."""
return DNS_DOCKER_NAME return DNS_DOCKER_NAME
async def _run(self) -> None: @Job(
name="docker_dns_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
if await self.is_running(): if await self.is_running():
return return
# Cleanup # Cleanup
await self._stop() await self.stop()
# Create & Run container # Create & Run container
docker_container = await self.sys_run_in_executor( docker_container = await self.sys_run_in_executor(

View File

@@ -7,9 +7,11 @@ from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
 from docker.types import Mount

 from ..const import LABEL_MACHINE, MACHINE_ID
+from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
 from ..homeassistant.const import LANDINGPAGE
-from ..utils import process_lock
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
     ENV_TOKEN,
@@ -131,13 +133,18 @@ class DockerHomeAssistant(DockerInterface):
        return mounts

-    async def _run(self) -> None:
+    @Job(
+        name="docker_home_assistant_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
        """Run Docker image."""
        if await self.is_running():
            return

        # Cleanup
-        await self._stop()
+        await self.stop()

        # Create & Run container
        docker_container = await self.sys_run_in_executor(
@@ -173,7 +180,11 @@ class DockerHomeAssistant(DockerInterface):
"Starting Home Assistant %s with version %s", self.image, self.version "Starting Home Assistant %s with version %s", self.image, self.version
) )
@process_lock @Job(
name="docker_home_assistant_execute_command",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def execute_command(self, command: str) -> CommandReturn: async def execute_command(self, command: str) -> CommandReturn:
"""Create a temporary container and run command.""" """Create a temporary container and run command."""
return await self.sys_run_in_executor( return await self.sys_run_in_executor(

View File

@@ -2,12 +2,14 @@
 from __future__ import annotations

 import asyncio
+from collections import defaultdict
 from collections.abc import Awaitable
 from contextlib import suppress
 import logging
 import re
 from time import time
 from typing import Any
+from uuid import uuid4

 from awesomeversion import AwesomeVersion
 from awesomeversion.strategy import AwesomeVersionStrategy
@@ -24,18 +26,21 @@ from ..const import (
     BusEvent,
     CpuArch,
 )
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
 from ..exceptions import (
     CodeNotaryError,
     CodeNotaryUntrusted,
     DockerAPIError,
     DockerError,
+    DockerJobError,
     DockerNotFound,
     DockerRequestError,
     DockerTrustError,
 )
+from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobExecutionLimit
+from ..jobs.decorator import Job
+from ..jobs.job_group import JobGroup
 from ..resolution.const import ContextType, IssueType, SuggestionType
-from ..utils import process_lock
 from ..utils.sentry import capture_exception
 from .const import ContainerState, RestartPolicy
 from .manager import CommandReturn
@@ -73,11 +78,18 @@ def _container_state_from_model(docker_container: Container) -> ContainerState:
    return ContainerState.STOPPED


-class DockerInterface(CoreSysAttributes):
+class DockerInterface(JobGroup):
    """Docker Supervisor interface."""

    def __init__(self, coresys: CoreSys):
        """Initialize Docker base wrapper."""
+        super().__init__(
+            coresys,
+            JOB_GROUP_DOCKER_INTERFACE.format_map(
+                defaultdict(str, name=self.name or uuid4().hex)
+            ),
+            self.name,
+        )
        self.coresys: CoreSys = coresys
        self._meta: dict[str, Any] | None = None
        self.lock: asyncio.Lock = asyncio.Lock()
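A note on the `format_map` call above: `defaultdict(str, name=...)` supplies the `name` placeholder and silently substitutes an empty string for any placeholder the template does not provide, so building the group name can never raise `KeyError`. A minimal standalone sketch of just that mechanism (the template mirrors `JOB_GROUP_DOCKER_INTERFACE = "container_{name}"` from the jobs/const diff further down):

from collections import defaultdict
from uuid import uuid4

JOB_GROUP_DOCKER_INTERFACE = "container_{name}"  # value taken from the const diff below

def group_name_for(name: str | None) -> str:
    # Containers without a name yet fall back to a random hex reference,
    # matching the uuid4().hex fallback in the constructor above.
    return JOB_GROUP_DOCKER_INTERFACE.format_map(
        defaultdict(str, name=name or uuid4().hex)
    )

print(group_name_for("addon_example_app"))  # container_addon_example_app
print(group_name_for(None))                 # container_<32 random hex chars>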
@@ -204,25 +216,19 @@ class DockerInterface(CoreSysAttributes):
        await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)

-    @process_lock
-    def install(
-        self,
-        version: AwesomeVersion,
-        image: str | None = None,
-        latest: bool = False,
-        arch: CpuArch | None = None,
-    ) -> Awaitable[None]:
-        """Pull docker image."""
-        return self._install(version, image, latest, arch)
-
-    async def _install(
+    @Job(
+        name="docker_interface_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def install(
        self,
        version: AwesomeVersion,
        image: str | None = None,
        latest: bool = False,
        arch: CpuArch | None = None,
    ) -> None:
-        """Pull Docker image."""
+        """Pull docker image."""
        image = image or self.image
        arch = arch or self.sys_arch.supervisor
@@ -328,17 +334,15 @@ class DockerInterface(CoreSysAttributes):
        return _container_state_from_model(docker_container)

-    @process_lock
-    def attach(
-        self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
-    ) -> Awaitable[None]:
-        """Attach to running Docker container."""
-        return self._attach(version, skip_state_event_if_down)
-
-    async def _attach(
-        self, version: AwesomeVersion, skip_state_event_if_down: bool = False
+    @Job(
+        name="docker_interface_attach",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def attach(
+        self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
    ) -> None:
-        """Attach to running docker container."""
+        """Attach to running Docker container."""
        with suppress(docker.errors.DockerException, requests.RequestException):
            docker_container = await self.sys_run_in_executor(
                self.sys_docker.containers.get, self.name
@@ -370,21 +374,21 @@ class DockerInterface(CoreSysAttributes):
            raise DockerError()

        _LOGGER.info("Attaching to %s with version %s", self.image, self.version)

-    @process_lock
-    def run(self) -> Awaitable[None]:
-        """Run Docker image."""
-        return self._run()
-
-    async def _run(self) -> None:
+    @Job(
+        name="docker_interface_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
        """Run Docker image."""
        raise NotImplementedError()

-    @process_lock
-    def stop(self, remove_container: bool = True) -> Awaitable[None]:
-        """Stop/remove Docker container."""
-        return self._stop(remove_container)
-
-    async def _stop(self, remove_container: bool = True) -> None:
+    @Job(
+        name="docker_interface_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def stop(self, remove_container: bool = True) -> None:
        """Stop/remove Docker container."""
        with suppress(DockerNotFound):
            await self.sys_run_in_executor(
@@ -394,34 +398,40 @@ class DockerInterface(CoreSysAttributes):
                remove_container,
            )

-    @process_lock
+    @Job(
+        name="docker_interface_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    def start(self) -> Awaitable[None]:
        """Start Docker container."""
        return self.sys_run_in_executor(self.sys_docker.start_container, self.name)

-    @process_lock
+    @Job(
+        name="docker_interface_remove",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    async def remove(self) -> None:
        """Remove Docker images."""
        # Cleanup container
        with suppress(DockerError):
-            await self._stop()
+            await self.stop()

        await self.sys_run_in_executor(
            self.sys_docker.remove_image, self.image, self.version
        )
        self._meta = None

-    @process_lock
-    def update(
-        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
-    ) -> Awaitable[None]:
-        """Update a Docker image."""
-        return self._update(version, image, latest)
-
-    async def _update(
+    @Job(
+        name="docker_interface_update",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def update(
        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
    ) -> None:
-        """Update a docker image."""
+        """Update a Docker image."""
        image = image or self.image

        _LOGGER.info(
@@ -429,11 +439,11 @@ class DockerInterface(CoreSysAttributes):
        )

        # Update docker image
-        await self._install(version, image=image, latest=latest)
+        await self.install(version, image=image, latest=latest)

        # Stop container & cleanup
        with suppress(DockerError):
-            await self._stop()
+            await self.stop()

    async def logs(self) -> bytes:
        """Return Docker logs of container."""
@@ -444,12 +454,12 @@ class DockerInterface(CoreSysAttributes):
return b"" return b""
@process_lock @Job(
name="docker_interface_cleanup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
def cleanup(self, old_image: str | None = None) -> Awaitable[None]: def cleanup(self, old_image: str | None = None) -> Awaitable[None]:
"""Check if old version exists and cleanup."""
return self._cleanup(old_image)
def _cleanup(self, old_image: str | None = None) -> Awaitable[None]:
"""Check if old version exists and cleanup.""" """Check if old version exists and cleanup."""
return self.sys_run_in_executor( return self.sys_run_in_executor(
self.sys_docker.cleanup_old_images, self.sys_docker.cleanup_old_images,
@@ -458,14 +468,22 @@ class DockerInterface(CoreSysAttributes):
            {old_image} if old_image else None,
        )

-    @process_lock
+    @Job(
+        name="docker_interface_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    def restart(self) -> Awaitable[None]:
        """Restart docker container."""
        return self.sys_run_in_executor(
            self.sys_docker.restart_container, self.name, self.timeout
        )

-    @process_lock
+    @Job(
+        name="docker_interface_execute_command",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    async def execute_command(self, command: str) -> CommandReturn:
        """Create a temporary container and run command."""
        raise NotImplementedError()
@@ -526,7 +544,11 @@ class DockerInterface(CoreSysAttributes):
        available_version.sort(reverse=True)
        return available_version[0]

-    @process_lock
+    @Job(
+        name="docker_interface_run_inside",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    def run_inside(self, command: str) -> Awaitable[CommandReturn]:
        """Execute a command inside Docker container."""
        return self.sys_run_in_executor(
@@ -540,7 +562,11 @@ class DockerInterface(CoreSysAttributes):
        checksum = image_id.partition(":")[2]
        return await self.sys_security.verify_own_content(checksum)

-    @process_lock
+    @Job(
+        name="docker_interface_check_trust",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
    async def check_trust(self) -> None:
        """Check trust of exists Docker image."""
        try:
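The pattern running through this file is the point of the whole refactor: the old `@process_lock` wrappers paired a public sync shim with a private `_method`, and internal callers had to use the private variant to avoid deadlocking on the lock. With `JobExecutionLimit.GROUP_ONCE` the lock lives in the `JobGroup` and is tracked per asyncio task, so a public method such as `update()` can now call `install()` and `stop()` directly. A toy sketch of a per-task re-entrant lock illustrating why the nested call does not deadlock (illustrative only, not Supervisor's actual `JobGroup`):

import asyncio


class ToyGroup:
    """Toy re-entrant-per-task lock, standing in for the real JobGroup."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._owner: asyncio.Task | None = None

    async def acquire(self) -> bool:
        if self._owner is asyncio.current_task():
            return False  # nested call in the owning task: pass straight through
        await self._lock.acquire()
        self._owner = asyncio.current_task()
        return True

    def release(self) -> None:
        self._owner = None
        self._lock.release()


class ToyDocker(ToyGroup):
    async def install(self) -> None:
        acquired = await self.acquire()
        try:
            print("pull image")
        finally:
            if acquired:
                self.release()

    async def update(self) -> None:
        acquired = await self.acquire()
        try:
            await self.install()  # would deadlock with a plain per-method lock
            print("stop old container")
        finally:
            if acquired:
                self.release()


asyncio.run(ToyDocker().update())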

View File

@@ -2,6 +2,9 @@
 import logging

 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, Capabilities
 from .interface import DockerInterface
@@ -28,13 +31,18 @@ class DockerMulticast(DockerInterface, CoreSysAttributes):
"""Generate needed capabilities.""" """Generate needed capabilities."""
return [Capabilities.NET_ADMIN.value] return [Capabilities.NET_ADMIN.value]
async def _run(self) -> None: @Job(
name="docker_multicast_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
if await self.is_running(): if await self.is_running():
return return
# Cleanup # Cleanup
await self._stop() await self.stop()
# Create & Run container # Create & Run container
docker_container = await self.sys_run_in_executor( docker_container = await self.sys_run_in_executor(

View File

@@ -3,6 +3,9 @@ import logging
 from ..const import DOCKER_NETWORK_MASK
 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, ENV_TOKEN, MOUNT_DOCKER, RestartPolicy
 from .interface import DockerInterface
@@ -25,13 +28,18 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
"""Return name of Docker container.""" """Return name of Docker container."""
return OBSERVER_DOCKER_NAME return OBSERVER_DOCKER_NAME
async def _run(self) -> None: @Job(
name="docker_observer_run",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def run(self) -> None:
"""Run Docker image.""" """Run Docker image."""
if await self.is_running(): if await self.is_running():
return return
# Cleanup # Cleanup
await self._stop() await self.stop()
# Create & Run container # Create & Run container
docker_container = await self.sys_run_in_executor( docker_container = await self.sys_run_in_executor(

View File

@@ -9,7 +9,9 @@ import docker
 import requests

 from ..coresys import CoreSysAttributes
-from ..exceptions import DockerError
+from ..exceptions import DockerError, DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import PropagationMode
 from .interface import DockerInterface
@@ -43,8 +45,13 @@ class DockerSupervisor(DockerInterface, CoreSysAttributes):
if mount.get("Destination") == "/data" if mount.get("Destination") == "/data"
) )
async def _attach( @Job(
self, version: AwesomeVersion, skip_state_event_if_down: bool = False name="docker_supervisor_attach",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def attach(
self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
) -> None: ) -> None:
"""Attach to running docker container.""" """Attach to running docker container."""
try: try:

View File

@@ -36,6 +36,18 @@ class JobConditionException(JobException):
"""Exception happening for job conditions.""" """Exception happening for job conditions."""
class JobStartException(JobException):
"""Exception occurred starting a job on in current asyncio task."""
class JobNotFound(JobException):
"""Exception for job not found."""
class JobGroupExecutionLimitExceeded(JobException):
"""Exception when job group execution limit exceeded."""
# HomeAssistant # HomeAssistant
@@ -478,6 +490,10 @@ class DockerNotFound(DockerError):
"""Docker object don't Exists.""" """Docker object don't Exists."""
class DockerJobError(DockerError, JobException):
"""Error executing docker job."""
# Hardware # Hardware

View File

@@ -32,7 +32,10 @@ class HomeAssistantAPI(CoreSysAttributes):
        self.access_token: str | None = None
        self._access_token_expires: datetime | None = None

-    @Job(limit=JobExecutionLimit.SINGLE_WAIT)
+    @Job(
+        name="home_assistant_api_ensure_access_token",
+        limit=JobExecutionLimit.SINGLE_WAIT,
+    )
    async def ensure_access_token(self) -> None:
        """Ensure there is an access token."""
        if (

View File

@@ -11,7 +11,7 @@ import attr
 from awesomeversion import AwesomeVersion

 from ..const import ATTR_HOMEASSISTANT, BusEvent
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
 from ..docker.const import ContainerState
 from ..docker.homeassistant import DockerHomeAssistant
 from ..docker.monitor import DockerContainerStateEvent
@@ -22,11 +22,13 @@ from ..exceptions import (
     HomeAssistantError,
     HomeAssistantJobError,
     HomeAssistantUpdateError,
+    JobException,
 )
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobExecutionLimit
 from ..jobs.decorator import Job, JobCondition
+from ..jobs.job_group import JobGroup
 from ..resolution.const import ContextType, IssueType
-from ..utils import convert_to_ascii, process_lock
+from ..utils import convert_to_ascii
 from ..utils.sentry import capture_exception
 from .const import (
     LANDINGPAGE,
@@ -49,12 +51,12 @@ class ConfigResult:
    log = attr.ib()


-class HomeAssistantCore(CoreSysAttributes):
+class HomeAssistantCore(JobGroup):
    """Home Assistant core object for handle it."""

    def __init__(self, coresys: CoreSys):
        """Initialize Home Assistant object."""
-        self.coresys: CoreSys = coresys
+        super().__init__(coresys, JOB_GROUP_HOME_ASSISTANT_CORE)
        self.instance: DockerHomeAssistant = DockerHomeAssistant(coresys)
        self.lock: asyncio.Lock = asyncio.Lock()
        self._error_state: bool = False
@@ -95,9 +97,13 @@ class HomeAssistantCore(CoreSysAttributes):
_LOGGER.info("Starting HomeAssistant landingpage") _LOGGER.info("Starting HomeAssistant landingpage")
if not await self.instance.is_running(): if not await self.instance.is_running():
with suppress(HomeAssistantError): with suppress(HomeAssistantError):
await self._start() await self.start()
@process_lock @Job(
name="home_assistant_core_install_landing_page",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=HomeAssistantJobError,
)
async def install_landingpage(self) -> None: async def install_landingpage(self) -> None:
"""Install a landing page.""" """Install a landing page."""
# Try to use a preinstalled landingpage # Try to use a preinstalled landingpage
@@ -127,7 +133,7 @@ class HomeAssistantCore(CoreSysAttributes):
                    LANDINGPAGE, image=self.sys_updater.image_homeassistant
                )
                break
-            except DockerError:
+            except (DockerError, JobException):
                pass
            except Exception as err:  # pylint: disable=broad-except
                capture_exception(err)
@@ -139,7 +145,11 @@ class HomeAssistantCore(CoreSysAttributes):
        self.sys_homeassistant.image = self.sys_updater.image_homeassistant
        self.sys_homeassistant.save_data()

-    @process_lock
+    @Job(
+        name="home_assistant_core_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def install(self) -> None:
        """Install a landing page."""
        _LOGGER.info("Home Assistant setup")
@@ -155,7 +165,7 @@ class HomeAssistantCore(CoreSysAttributes):
                    image=self.sys_updater.image_homeassistant,
                )
                break
-            except DockerError:
+            except (DockerError, JobException):
                pass
            except Exception as err:  # pylint: disable=broad-except
                capture_exception(err)
@@ -171,7 +181,7 @@ class HomeAssistantCore(CoreSysAttributes):
        # finishing
        try:
            _LOGGER.info("Starting Home Assistant")
-            await self._start()
+            await self.start()
        except HomeAssistantError:
            _LOGGER.error("Can't start Home Assistant!")
@@ -179,8 +189,8 @@ class HomeAssistantCore(CoreSysAttributes):
        with suppress(DockerError):
            await self.instance.cleanup()

-    @process_lock
    @Job(
+        name="home_assistant_core_update",
        conditions=[
            JobCondition.FREE_SPACE,
            JobCondition.HEALTHY,
@@ -188,6 +198,7 @@ class HomeAssistantCore(CoreSysAttributes):
            JobCondition.PLUGINS_UPDATED,
            JobCondition.SUPERVISOR_UPDATED,
        ],
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=HomeAssistantJobError,
    )
    async def update(
@@ -231,7 +242,7 @@ class HomeAssistantCore(CoreSysAttributes):
        self.sys_homeassistant.image = self.sys_updater.image_homeassistant

        if running:
-            await self._start()
+            await self.start()
        _LOGGER.info("Successfully started Home Assistant %s", to_version)

        # Successfull - last step
@@ -281,23 +292,11 @@ class HomeAssistantCore(CoreSysAttributes):
        self.sys_resolution.create_issue(IssueType.UPDATE_FAILED, ContextType.CORE)
        raise HomeAssistantUpdateError()

-    async def _start(self) -> None:
-        """Start Home Assistant Docker & wait."""
-        # Create new API token
-        self.sys_homeassistant.supervisor_token = secrets.token_hex(56)
-        self.sys_homeassistant.save_data()
-
-        # Write audio settings
-        self.sys_homeassistant.write_pulse()
-
-        try:
-            await self.instance.run()
-        except DockerError as err:
-            raise HomeAssistantError() from err
-
-        await self._block_till_run(self.sys_homeassistant.version)
-
-    @process_lock
+    @Job(
+        name="home_assistant_core_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def start(self) -> None:
        """Run Home Assistant docker."""
        if await self.instance.is_running():
@@ -314,9 +313,25 @@ class HomeAssistantCore(CoreSysAttributes):
            await self._block_till_run(self.sys_homeassistant.version)
        # No Instance/Container found, extended start
        else:
-            await self._start()
+            # Create new API token
+            self.sys_homeassistant.supervisor_token = secrets.token_hex(56)
+            self.sys_homeassistant.save_data()

-    @process_lock
+            # Write audio settings
+            self.sys_homeassistant.write_pulse()
+
+            try:
+                await self.instance.run()
+            except DockerError as err:
+                raise HomeAssistantError() from err
+
+            await self._block_till_run(self.sys_homeassistant.version)
+
+    @Job(
+        name="home_assistant_core_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def stop(self) -> None:
        """Stop Home Assistant Docker."""
        try:
@@ -324,7 +339,11 @@ class HomeAssistantCore(CoreSysAttributes):
        except DockerError as err:
            raise HomeAssistantError() from err

-    @process_lock
+    @Job(
+        name="home_assistant_core_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def restart(self) -> None:
        """Restart Home Assistant Docker."""
        try:
@@ -334,12 +353,16 @@ class HomeAssistantCore(CoreSysAttributes):
        await self._block_till_run(self.sys_homeassistant.version)

-    @process_lock
+    @Job(
+        name="home_assistant_core_rebuild",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def rebuild(self) -> None:
        """Rebuild Home Assistant Docker container."""
        with suppress(DockerError):
            await self.instance.stop()
-        await self._start()
+        await self.start()

    def logs(self) -> Awaitable[bytes]:
        """Get HomeAssistant docker logs.
@@ -356,10 +379,7 @@ class HomeAssistantCore(CoreSysAttributes):
        return self.instance.check_trust()

    async def stats(self) -> DockerStats:
-        """Return stats of Home Assistant.
-
-        Return a coroutine.
-        """
+        """Return stats of Home Assistant."""
        try:
            return await self.instance.stats()
        except DockerError as err:
@@ -386,9 +406,12 @@ class HomeAssistantCore(CoreSysAttributes):
    async def check_config(self) -> ConfigResult:
        """Run Home Assistant config check."""
-        result = await self.instance.execute_command(
-            "python3 -m homeassistant -c /config --script check_config"
-        )
+        try:
+            result = await self.instance.execute_command(
+                "python3 -m homeassistant -c /config --script check_config"
+            )
+        except DockerError as err:
+            raise HomeAssistantError() from err

        # If not valid
        if result.exit_code is None:
@@ -431,10 +454,11 @@ class HomeAssistantCore(CoreSysAttributes):
            raise HomeAssistantCrashError()

    @Job(
+        name="home_assistant_core_repair",
        conditions=[
            JobCondition.FREE_SPACE,
            JobCondition.INTERNET_HOST,
-        ]
+        ],
    )
    async def repair(self):
        """Repair local Home Assistant data."""
@@ -456,6 +480,7 @@ class HomeAssistantCore(CoreSysAttributes):
            await self._restart_after_problem(event.state)

    @Job(
+        name="home_assistant_core_restart_after_problem",
        limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -1,5 +1,6 @@
"""Home Assistant control object.""" """Home Assistant control object."""
import asyncio import asyncio
from datetime import timedelta
from ipaddress import IPv4Address from ipaddress import IPv4Address
import logging import logging
from pathlib import Path, PurePath from pathlib import Path, PurePath
@@ -28,6 +29,7 @@ from ..const import (
     ATTR_WATCHDOG,
     FILE_HASSIO_HOMEASSISTANT,
     BusEvent,
+    IngressSessionDataUser,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
@@ -38,7 +40,7 @@ from ..exceptions import (
 )
 from ..hardware.const import PolicyGroup
 from ..hardware.data import Device
-from ..jobs.decorator import Job
+from ..jobs.decorator import Job, JobExecutionLimit
 from ..utils import remove_folder
 from ..utils.common import FileConfiguration
 from ..utils.json import read_json_file, write_json_file
@@ -304,7 +306,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
        self.sys_homeassistant.websocket.send_message({ATTR_TYPE: "usb/scan"})

-    @Job()
+    @Job(name="home_assistant_module_backup")
    async def backup(self, tar_file: tarfile.TarFile) -> None:
        """Backup Home Assistant Core config/ directory."""
@@ -432,3 +434,21 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
            ATTR_WATCHDOG,
        ):
            self._data[attr] = data[attr]
+
+    @Job(
+        name="home_assistant_get_users",
+        limit=JobExecutionLimit.THROTTLE_WAIT,
+        throttle_period=timedelta(minutes=5),
+    )
+    async def get_users(self) -> list[IngressSessionDataUser]:
+        """Get list of all configured users."""
+        list_of_users = await self.sys_homeassistant.websocket.async_send_command(
+            {ATTR_TYPE: "config/auth/list"}
+        )
+        return [
+            IngressSessionDataUser(
+                id=data["id"], username=data["username"], display_name=data["name"]
+            )
+            for data in list_of_users
+        ]
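Worth noting for callers of `get_users`: under `THROTTLE_WAIT`, a second call that lands inside the 5-minute window waits for the lock and then returns without executing the body, which means it yields `None` instead of a user list. A self-contained toy reproducing that semantics (simplified; the real decorator also handles conditions, groups, and rate limits):

import asyncio
from datetime import datetime, timedelta


def throttle_wait(period: timedelta):
    """Toy version of JobExecutionLimit.THROTTLE_WAIT: skip the body inside the window."""

    def decorator(func):
        last_call = datetime.min
        lock = asyncio.Lock()

        async def wrapper(*args, **kwargs):
            nonlocal last_call
            async with lock:
                if datetime.now() - last_call < period:
                    return None  # throttled: body is skipped entirely
                last_call = datetime.now()
                return await func(*args, **kwargs)

        return wrapper

    return decorator


@throttle_wait(timedelta(minutes=5))
async def get_users() -> list[str]:
    return ["alice", "bob"]


async def main() -> None:
    print(await get_users())  # ['alice', 'bob']
    print(await get_users())  # None: second call fell inside the 5-minute window


asyncio.run(main())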

View File

@@ -40,7 +40,11 @@ class HomeAssistantSecrets(CoreSysAttributes):
"""Reload secrets.""" """Reload secrets."""
await self._read_secrets() await self._read_secrets()
@Job(limit=JobExecutionLimit.THROTTLE_WAIT, throttle_period=timedelta(seconds=60)) @Job(
name="home_assistant_secrets_read",
limit=JobExecutionLimit.THROTTLE_WAIT,
throttle_period=timedelta(seconds=60),
)
async def _read_secrets(self): async def _read_secrets(self):
"""Read secrets.yaml into memory.""" """Read secrets.yaml into memory."""
if not self.path_secrets.exists(): if not self.path_secrets.exists():

View File

@@ -61,6 +61,7 @@ class Interface:
     name: str
     mac: str
+    path: str
     enabled: bool
     connected: bool
     primary: bool
@@ -75,12 +76,8 @@ class Interface:
        if not inet.settings:
            return False

-        if inet.settings.device and inet.settings.device.match_device:
-            matchers = inet.settings.device.match_device.split(",", 1)
-            return (
-                f"mac:{self.mac}" in matchers
-                or f"interface-name:{self.name}" in matchers
-            )
+        if inet.settings.match and inet.settings.match.path:
+            return inet.settings.match.path == [self.path]

        return inet.settings.connection.interface_name == self.name
@@ -108,6 +105,7 @@ class Interface:
        return Interface(
            inet.name,
            inet.hw_address,
+            inet.path,
            inet.settings is not None,
            Interface._map_nm_connected(inet.connection),
            inet.primary,

View File

@@ -107,7 +107,7 @@ class NetworkManager(CoreSysAttributes):
        return Interface.from_dbus_interface(self.sys_dbus.network.get(inet_name))

-    @Job(conditions=[JobCondition.HOST_NETWORK])
+    @Job(name="network_manager_load", conditions=[JobCondition.HOST_NETWORK])
    async def load(self):
        """Load network information and reapply defaults over dbus."""
        # Apply current settings on each interface so OS can update any out of date defaults

View File

@@ -232,7 +232,11 @@ class SoundControl(CoreSysAttributes):
        await self.sys_run_in_executor(_activate_profile)
        await self.update()

-    @Job(limit=JobExecutionLimit.THROTTLE_WAIT, throttle_period=timedelta(seconds=10))
+    @Job(
+        name="sound_control_update",
+        limit=JobExecutionLimit.THROTTLE_WAIT,
+        throttle_period=timedelta(seconds=10),
+    )
    async def update(self):
        """Update properties over dbus."""
        _LOGGER.info("Updating PulseAudio information")

View File

@@ -5,7 +5,13 @@ import random
 import secrets

 from .addons.addon import Addon
-from .const import ATTR_PORTS, ATTR_SESSION, FILE_HASSIO_INGRESS
+from .const import (
+    ATTR_PORTS,
+    ATTR_SESSION,
+    ATTR_SESSION_DATA,
+    FILE_HASSIO_INGRESS,
+    IngressSessionData,
+)
 from .coresys import CoreSys, CoreSysAttributes
 from .utils import check_port
 from .utils.common import FileConfiguration
@@ -30,11 +36,20 @@ class Ingress(FileConfiguration, CoreSysAttributes):
            return None
        return self.sys_addons.get(self.tokens[token], local_only=True)

+    def get_session_data(self, session_id: str) -> IngressSessionData | None:
+        """Return complementary data of current session or None."""
+        return self.sessions_data.get(session_id)
+
    @property
    def sessions(self) -> dict[str, float]:
        """Return sessions."""
        return self._data[ATTR_SESSION]

+    @property
+    def sessions_data(self) -> dict[str, IngressSessionData]:
+        """Return sessions_data."""
+        return self._data[ATTR_SESSION_DATA]
+
    @property
    def ports(self) -> dict[str, int]:
        """Return list of dynamic ports."""
@@ -71,6 +86,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        now = utcnow()

        sessions = {}
+        sessions_data: dict[str, IngressSessionData] = {}
        for session, valid in self.sessions.items():
            # check if timestamp valid, to avoid crash on malformed timestamp
            try:
@@ -84,10 +100,13 @@ class Ingress(FileConfiguration, CoreSysAttributes):
            # Is valid
            sessions[session] = valid
+            sessions_data[session] = self.get_session_data(session)

        # Write back
        self.sessions.clear()
        self.sessions.update(sessions)
+        self.sessions_data.clear()
+        self.sessions_data.update(sessions_data)

    def _update_token_list(self) -> None:
        """Regenerate token <-> Add-on map."""
@@ -97,12 +116,15 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        for addon in self.addons:
            self.tokens[addon.ingress_token] = addon.slug

-    def create_session(self) -> str:
+    def create_session(self, data: IngressSessionData | None = None) -> str:
        """Create new session."""
        session = secrets.token_hex(64)
        valid = utcnow() + timedelta(minutes=15)

        self.sessions[session] = valid.timestamp()
+        if data is not None:
+            self.sessions_data[session] = data

        return session

    def validate_session(self, session: str) -> bool:
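For context, a hypothetical call site for the new parameter, with toy stand-ins for the dataclasses. The real `IngressSessionData`/`IngressSessionDataUser` are imported from supervisor's const module, and only the user fields are visible in this diff, so the exact shape is an assumption:

from dataclasses import dataclass


@dataclass
class IngressSessionDataUser:  # toy stand-in; fields match the get_users hunk above
    id: str
    username: str
    display_name: str


@dataclass
class IngressSessionData:  # assumed shape: session data wraps the resolved user
    user: IngressSessionDataUser


user = IngressSessionDataUser(id="abc123", username="jdoe", display_name="Jane Doe")
data = IngressSessionData(user=user)
# A session created via sys_ingress.create_session(data) would then return this
# object from sys_ingress.get_session_data(session_id) until the session expires.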

View File

@@ -1,53 +1,70 @@
"""Supervisor job manager.""" """Supervisor job manager."""
from collections.abc import Callable
from contextlib import contextmanager
from contextvars import ContextVar, Token
import logging import logging
from uuid import UUID, uuid4
from attrs import define, field
from attrs.setters import frozen
from attrs.validators import ge, le
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import JobNotFound, JobStartException
from ..utils.common import FileConfiguration from ..utils.common import FileConfiguration
from .const import ATTR_IGNORE_CONDITIONS, FILE_CONFIG_JOBS, JobCondition from .const import ATTR_IGNORE_CONDITIONS, FILE_CONFIG_JOBS, JobCondition
from .validate import SCHEMA_JOBS_CONFIG from .validate import SCHEMA_JOBS_CONFIG
_LOGGER: logging.Logger = logging.getLogger(__package__) # Context vars only act as a global within the same asyncio task
# When a new asyncio task is started the current context is copied over.
# Modifications to it in one task are not visible to others though.
# This allows us to track what job is currently in progress in each task.
_CURRENT_JOB: ContextVar[UUID] = ContextVar("current_job")
_LOGGER: logging.Logger = logging.getLogger(__name__)
class SupervisorJob(CoreSysAttributes): @define
"""Supervisor running job class.""" class SupervisorJob:
"""Representation of a job running in supervisor."""
def __init__(self, coresys: CoreSys, name: str): name: str = field(on_setattr=frozen)
"""Initialize the JobManager class.""" reference: str | None = None
self.coresys: CoreSys = coresys progress: int = field(default=0, validator=[ge(0), le(100)])
self.name: str = name stage: str | None = None
self._progress: int = 0 uuid: UUID = field(init=False, factory=lambda: uuid4().hex, on_setattr=frozen)
self._stage: str | None = None parent_id: UUID = field(
init=False, factory=lambda: _CURRENT_JOB.get(None), on_setattr=frozen
)
done: bool = field(init=False, default=False)
@property @contextmanager
def progress(self) -> int: def start(self, *, on_done: Callable[["SupervisorJob"], None] | None = None):
"""Return the current progress.""" """Start the job in the current task.
return self._progress
@property This can only be called if the parent ID matches the job running in the current task.
def stage(self) -> str | None: This is to ensure that each asyncio task can only be doing one job at a time as that
"""Return the current stage.""" determines what resources it can and cannot access.
return self._stage """
if self.done:
raise JobStartException("Job is already complete")
if _CURRENT_JOB.get(None) != self.parent_id:
raise JobStartException("Job has a different parent from current job")
def update(self, progress: int | None = None, stage: str | None = None) -> None: token: Token[UUID] | None = None
"""Update the job object.""" try:
if progress is not None: token = _CURRENT_JOB.set(self.uuid)
if progress >= round(100): yield self
self.sys_jobs.remove_job(self) finally:
return self.done = True
self._progress = round(progress) if token:
if stage is not None: _CURRENT_JOB.reset(token)
self._stage = stage if on_done:
_LOGGER.debug( on_done(self)
"Job updated; name: %s, progress: %s, stage: %s",
self.name,
self.progress,
self.stage,
)
class JobManager(FileConfiguration, CoreSysAttributes): class JobManager(FileConfiguration, CoreSysAttributes):
"""Job class.""" """Job Manager class."""
def __init__(self, coresys: CoreSys): def __init__(self, coresys: CoreSys):
"""Initialize the JobManager class.""" """Initialize the JobManager class."""
@@ -58,7 +75,7 @@ class JobManager(FileConfiguration, CoreSysAttributes):
    @property
    def jobs(self) -> list[SupervisorJob]:
        """Return a list of current jobs."""
-        return self._jobs
+        return list(self._jobs.values())

    @property
    def ignore_conditions(self) -> list[JobCondition]:
@@ -70,18 +87,30 @@ class JobManager(FileConfiguration, CoreSysAttributes):
"""Set a list of ignored condition.""" """Set a list of ignored condition."""
self._data[ATTR_IGNORE_CONDITIONS] = value self._data[ATTR_IGNORE_CONDITIONS] = value
def get_job(self, name: str) -> SupervisorJob: def new_job(
"""Return a job, create one if it does not exist.""" self, name: str, reference: str | None = None, initial_stage: str | None = None
if name not in self._jobs: ) -> SupervisorJob:
self._jobs[name] = SupervisorJob(self.coresys, name) """Create a new job."""
job = SupervisorJob(name, reference=reference, stage=initial_stage)
self._jobs[job.uuid] = job
return job
return self._jobs[name] def get_job(self, uuid: UUID | None = None) -> SupervisorJob | None:
"""Return a job by uuid if it exists. Returns the current job of the asyncio task if uuid omitted."""
if uuid:
return self._jobs.get(uuid)
if uuid := _CURRENT_JOB.get(None):
return self._jobs.get(uuid)
return None
def remove_job(self, job: SupervisorJob) -> None: def remove_job(self, job: SupervisorJob) -> None:
"""Remove a job.""" """Remove a job by UUID."""
if job.name in self._jobs: if job.uuid not in self._jobs:
del self._jobs[job.name] raise JobNotFound(f"Could not find job {job.name}", _LOGGER.error)
def clear(self) -> None: if not job.done:
"""Clear all jobs.""" _LOGGER.warning("Removing incomplete job %s from job manager", job.name)
self._jobs.clear()
del self._jobs[job.uuid]
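The `ContextVar` is what makes the parent/child bookkeeping work: each asyncio task gets its own copy of the context, so a job created while another is running in the same task records that job as its parent, while jobs in fresh tasks start with no parent. A minimal, standalone demonstration of the same mechanism:

import asyncio
from contextvars import ContextVar
from uuid import uuid4

_CURRENT_JOB: ContextVar[str] = ContextVar("current_job")


class MiniJob:
    def __init__(self, name: str) -> None:
        self.name = name
        self.uuid = uuid4().hex
        # Whatever job is current in this task when we are created becomes our parent
        self.parent_id = _CURRENT_JOB.get(None)


async def update_flow() -> None:
    outer = MiniJob("update")
    token = _CURRENT_JOB.set(outer.uuid)   # what SupervisorJob.start() does
    inner = MiniJob("install")             # created while 'update' is current
    print(inner.parent_id == outer.uuid)   # True
    _CURRENT_JOB.reset(token)


async def fresh_task() -> None:
    print(MiniJob("standalone").parent_id)  # None: no job current in this task


asyncio.run(update_flow())
asyncio.run(fresh_task())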

View File

@@ -8,6 +8,10 @@ FILE_CONFIG_JOBS = Path(SUPERVISOR_DATA, "jobs.json")
ATTR_IGNORE_CONDITIONS = "ignore_conditions" ATTR_IGNORE_CONDITIONS = "ignore_conditions"
JOB_GROUP_ADDON = "addon_{slug}"
JOB_GROUP_DOCKER_INTERFACE = "container_{name}"
JOB_GROUP_HOME_ASSISTANT_CORE = "home_assistant_core"
class JobCondition(str, Enum): class JobCondition(str, Enum):
"""Job condition enum.""" """Job condition enum."""
@@ -34,3 +38,8 @@ class JobExecutionLimit(str, Enum):
THROTTLE = "throttle" THROTTLE = "throttle"
THROTTLE_WAIT = "throttle_wait" THROTTLE_WAIT = "throttle_wait"
THROTTLE_RATE_LIMIT = "throttle_rate_limit" THROTTLE_RATE_LIMIT = "throttle_rate_limit"
GROUP_ONCE = "group_once"
GROUP_WAIT = "group_wait"
GROUP_THROTTLE = "group_throttle"
GROUP_THROTTLE_WAIT = "group_throttle_wait"
GROUP_THROTTLE_RATE_LIMIT = "group_throttle_rate_limit"

View File

@@ -8,13 +8,20 @@ from typing import Any
 from ..const import CoreState
 from ..coresys import CoreSys, CoreSysAttributes
-from ..exceptions import HassioError, JobConditionException, JobException
+from ..exceptions import (
+    HassioError,
+    JobConditionException,
+    JobException,
+    JobGroupExecutionLimitExceeded,
+)
 from ..host.const import HostFeature
 from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
 from ..utils.sentry import capture_exception
 from .const import JobCondition, JobExecutionLimit
+from .job_group import JobGroup

 _LOGGER: logging.Logger = logging.getLogger(__package__)
+_JOB_NAMES: set[str] = set()


 class Job(CoreSysAttributes):
@@ -22,7 +29,7 @@ class Job(CoreSysAttributes):
    def __init__(
        self,
-        name: str | None = None,
+        name: str,
        conditions: list[JobCondition] | None = None,
        cleanup: bool = True,
        on_condition: JobException | None = None,
@@ -33,6 +40,10 @@ class Job(CoreSysAttributes):
        throttle_max_calls: int | None = None,
    ):
        """Initialize the Job class."""
+        if name in _JOB_NAMES:
+            raise RuntimeError(f"A job already exists with name {name}!")
+
+        _JOB_NAMES.add(name)
        self.name = name
        self.conditions = conditions
        self.cleanup = cleanup
@@ -42,8 +53,8 @@ class Job(CoreSysAttributes):
        self.throttle_max_calls = throttle_max_calls
        self._lock: asyncio.Semaphore | None = None
        self._method = None
-        self._last_call = datetime.min
-        self._rate_limited_calls: list[datetime] | None = None
+        self._last_call: dict[str | None, datetime] = {}
+        self._rate_limited_calls: dict[str, list[datetime]] | None = None

        # Validate Options
        if (
@@ -52,19 +63,70 @@ class Job(CoreSysAttributes):
                JobExecutionLimit.THROTTLE,
                JobExecutionLimit.THROTTLE_WAIT,
                JobExecutionLimit.THROTTLE_RATE_LIMIT,
+                JobExecutionLimit.GROUP_THROTTLE,
+                JobExecutionLimit.GROUP_THROTTLE_WAIT,
+                JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
            )
            and self._throttle_period is None
        ):
-            raise RuntimeError("Using Job without a Throttle period!")
+            raise RuntimeError(
+                f"Job {name} is using execution limit {limit.value} without a throttle period!"
+            )

-        if self.limit == JobExecutionLimit.THROTTLE_RATE_LIMIT:
+        if self.limit in (
+            JobExecutionLimit.THROTTLE_RATE_LIMIT,
+            JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+        ):
            if self.throttle_max_calls is None:
-                raise RuntimeError("Using rate limit without throttle max calls!")
+                raise RuntimeError(
+                    f"Job {name} is using execution limit {limit.value} without throttle max calls!"
+                )

-            self._rate_limited_calls = []
+            self._rate_limited_calls = {}

-    @property
-    def throttle_period(self) -> timedelta | None:
+    def last_call(self, group_name: str | None = None) -> datetime:
+        """Return last call datetime."""
+        return self._last_call.get(group_name, datetime.min)
+
+    def set_last_call(self, value: datetime, group_name: str | None = None) -> None:
+        """Set last call datetime."""
+        self._last_call[group_name] = value
+
+    def rate_limited_calls(
+        self, group_name: str | None = None
+    ) -> list[datetime] | None:
+        """Return rate limited calls if used."""
+        if self._rate_limited_calls is None:
+            return None
+        return self._rate_limited_calls.get(group_name, [])
+
+    def add_rate_limited_call(
+        self, value: datetime, group_name: str | None = None
+    ) -> None:
+        """Add a rate limited call to list if used."""
+        if self._rate_limited_calls is None:
+            raise RuntimeError(
+                f"Rate limited calls not available for limit type {self.limit}"
+            )
+        if group_name in self._rate_limited_calls:
+            self._rate_limited_calls[group_name].append(value)
+        else:
+            self._rate_limited_calls[group_name] = [value]
+
+    def set_rate_limited_calls(
+        self, value: list[datetime], group_name: str | None = None
+    ) -> None:
+        """Set rate limited calls if used."""
+        if self._rate_limited_calls is None:
+            raise RuntimeError(
+                f"Rate limited calls not available for limit type {self.limit}"
+            )
+        self._rate_limited_calls[group_name] = value
+
+    def throttle_period(self, group_name: str | None = None) -> timedelta | None:
        """Return throttle period."""
        if self._throttle_period is None:
            return None
@@ -73,36 +135,59 @@ class Job(CoreSysAttributes):
            return self._throttle_period

        return self._throttle_period(
-            self.coresys, self._last_call, self._rate_limited_calls
+            self.coresys,
+            self.last_call(group_name),
+            self.rate_limited_calls(group_name),
        )
-    def _post_init(self, args: tuple[Any]) -> None:
+    def _post_init(self, obj: JobGroup | CoreSysAttributes) -> JobGroup | None:
        """Runtime init."""
-        if self.name is None:
-            self.name = str(self._method.__qualname__).lower().replace(".", "_")
-
        # Coresys
        try:
-            self.coresys = args[0].coresys
+            self.coresys = obj.coresys
        except AttributeError:
            pass
        if not self.coresys:
            raise RuntimeError(f"Job on {self.name} need to be an coresys object!")

-        # Others
+        # Setup lock for limits
        if self._lock is None:
            self._lock = asyncio.Semaphore()

+        # Job groups
+        if self.limit in (
+            JobExecutionLimit.GROUP_ONCE,
+            JobExecutionLimit.GROUP_WAIT,
+            JobExecutionLimit.GROUP_THROTTLE,
+            JobExecutionLimit.GROUP_THROTTLE_WAIT,
+            JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+        ):
+            try:
+                _ = obj.acquire and obj.release
+            except AttributeError:
+                raise RuntimeError(
+                    f"Job on {self.name} need to be a JobGroup to use group based limits!"
+                ) from None
+
+            return obj
+
+        return None
    def __call__(self, method):
        """Call the wrapper logic."""
        self._method = method

        @wraps(method)
-        async def wrapper(*args, **kwargs) -> Any:
-            """Wrap the method."""
-            self._post_init(args)
-
-            job = self.sys_jobs.get_job(self.name)
+        async def wrapper(obj: JobGroup | CoreSysAttributes, *args, **kwargs) -> Any:
+            """Wrap the method.
+
+            This method must be on an instance of CoreSysAttributes. If a JOB_GROUP limit
+            is used, then it must be on an instance of JobGroup.
+            """
+            job_group = self._post_init(obj)
+            group_name: str | None = job_group.group_name if job_group else None
+            job = self.sys_jobs.new_job(
+                self.name, job_group.job_reference if job_group else None
+            )

            # Handle condition
            if self.conditions:
@@ -118,50 +203,78 @@ class Job(CoreSysAttributes):
            # Handle exection limits
            if self.limit in (JobExecutionLimit.SINGLE_WAIT, JobExecutionLimit.ONCE):
                await self._acquire_exection_limit()
-            elif self.limit == JobExecutionLimit.THROTTLE:
-                time_since_last_call = datetime.now() - self._last_call
-                if time_since_last_call < self.throttle_period:
+            elif self.limit in (
+                JobExecutionLimit.GROUP_ONCE,
+                JobExecutionLimit.GROUP_WAIT,
+            ):
+                try:
+                    await obj.acquire(job, self.limit == JobExecutionLimit.GROUP_WAIT)
+                except JobGroupExecutionLimitExceeded as err:
+                    if self.on_condition:
+                        raise self.on_condition(str(err)) from err
+                    raise err
+            elif self.limit in (
+                JobExecutionLimit.THROTTLE,
+                JobExecutionLimit.GROUP_THROTTLE,
+            ):
+                time_since_last_call = datetime.now() - self.last_call(group_name)
+                if time_since_last_call < self.throttle_period(group_name):
                    return
-            elif self.limit == JobExecutionLimit.THROTTLE_WAIT:
+            elif self.limit in (
+                JobExecutionLimit.THROTTLE_WAIT,
+                JobExecutionLimit.GROUP_THROTTLE_WAIT,
+            ):
                await self._acquire_exection_limit()
-                time_since_last_call = datetime.now() - self._last_call
-                if time_since_last_call < self.throttle_period:
+                time_since_last_call = datetime.now() - self.last_call(group_name)
+                if time_since_last_call < self.throttle_period(group_name):
                    self._release_exception_limits()
                    return
-            elif self.limit == JobExecutionLimit.THROTTLE_RATE_LIMIT:
+            elif self.limit in (
+                JobExecutionLimit.THROTTLE_RATE_LIMIT,
+                JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+            ):
                # Only reprocess array when necessary (at limit)
-                if len(self._rate_limited_calls) >= self.throttle_max_calls:
-                    self._rate_limited_calls = [
-                        call
-                        for call in self._rate_limited_calls
-                        if call > datetime.now() - self.throttle_period
-                    ]
+                if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
+                    self.set_rate_limited_calls(
+                        [
+                            call
+                            for call in self.rate_limited_calls(group_name)
+                            if call > datetime.now() - self.throttle_period(group_name)
+                        ],
+                        group_name,
+                    )

-                if len(self._rate_limited_calls) >= self.throttle_max_calls:
+                if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
                    on_condition = (
                        JobException if self.on_condition is None else self.on_condition
                    )
                    raise on_condition(
-                        f"Rate limit exceeded, more then {self.throttle_max_calls} calls in {self.throttle_period}",
+                        f"Rate limit exceeded, more then {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
                    )

            # Execute Job
-            try:
-                self._last_call = datetime.now()
-                if self._rate_limited_calls is not None:
-                    self._rate_limited_calls.append(self._last_call)
-
-                return await self._method(*args, **kwargs)
-            except HassioError as err:
-                raise err
-            except Exception as err:
-                _LOGGER.exception("Unhandled exception: %s", err)
-                capture_exception(err)
-                raise JobException() from err
-            finally:
-                if self.cleanup:
-                    self.sys_jobs.remove_job(job)
-                self._release_exception_limits()
+            with job.start(on_done=self.sys_jobs.remove_job if self.cleanup else None):
+                try:
+                    self.set_last_call(datetime.now(), group_name)
+                    if self.rate_limited_calls(group_name) is not None:
+                        self.add_rate_limited_call(
+                            self.last_call(group_name), group_name
+                        )
+
+                    return await self._method(obj, *args, **kwargs)
+                except HassioError as err:
+                    raise err
+                except Exception as err:
+                    _LOGGER.exception("Unhandled exception: %s", err)
+                    capture_exception(err)
+                    raise JobException() from err
+                finally:
+                    self._release_exception_limits()
+                    if self.limit in (
+                        JobExecutionLimit.GROUP_ONCE,
+                        JobExecutionLimit.GROUP_WAIT,
+                    ):
+                        obj.release()

        return wrapper
@@ -283,6 +396,7 @@ class Job(CoreSysAttributes):
             JobExecutionLimit.SINGLE_WAIT,
             JobExecutionLimit.ONCE,
             JobExecutionLimit.THROTTLE_WAIT,
+            JobExecutionLimit.GROUP_THROTTLE_WAIT,
         ):
             return
@@ -300,6 +414,7 @@ class Job(CoreSysAttributes):
             JobExecutionLimit.SINGLE_WAIT,
             JobExecutionLimit.ONCE,
             JobExecutionLimit.THROTTLE_WAIT,
+            JobExecutionLimit.GROUP_THROTTLE_WAIT,
         ):
             return
         self._lock.release()
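The THROTTLE family of limits above boils down to: record the time of the last call, and silently drop any call that arrives inside the throttle period. A minimal standalone sketch of that idea (illustrative names only, not the Supervisor API):

import asyncio
from datetime import datetime, timedelta
from functools import wraps


def throttle(period: timedelta):
    """Drop calls to the wrapped coroutine made within `period` of the last run."""

    def decorator(method):
        last_call = datetime.min

        @wraps(method)
        async def wrapper(*args, **kwargs):
            nonlocal last_call
            if datetime.now() - last_call < period:
                return None  # throttled: mirrors the bare `return` above
            last_call = datetime.now()
            return await method(*args, **kwargs)

        return wrapper

    return decorator


@throttle(timedelta(seconds=30))
async def check_connectivity() -> None:
    print("checking connectivity")


async def main() -> None:
    await check_connectivity()  # runs
    await check_connectivity()  # dropped: within the 30 second window


asyncio.run(main())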

View File

@@ -0,0 +1,81 @@
"""Job group object."""
from asyncio import Lock
from . import SupervisorJob
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import JobException, JobGroupExecutionLimitExceeded
class JobGroup(CoreSysAttributes):
"""Object with methods that require a common lock.
This is used in classes like our DockerInterface class. Where each method
requires a lock as it involves some extensive I/O with Docker. But some
methods may need to call others as a part of processing to complete a
higher-level task and should not need to relinquish the lock in between.
"""
def __init__(
self, coresys: CoreSys, group_name: str, job_reference: str | None = None
) -> None:
"""Initialize object."""
self.coresys: CoreSys = coresys
self._group_name: str = group_name
self._lock: Lock = Lock()
self._active_job: SupervisorJob | None = None
self._parent_jobs: list[SupervisorJob] = []
self._job_reference: str | None = job_reference
@property
def active_job(self) -> SupervisorJob | None:
"""Get active job ID."""
return self._active_job
@property
def group_name(self) -> str:
"""Return group name."""
return self._group_name
@property
def has_lock(self) -> bool:
"""Return true if current task has the lock on this job group."""
return (
self.active_job
and (task_job := self.sys_jobs.get_job())
and self.active_job == task_job
)
@property
def job_reference(self) -> str | None:
"""Return value to use as reference for all jobs created for this job group."""
return self._job_reference
async def acquire(self, job: SupervisorJob, wait: bool = False) -> None:
"""Acquire the lock for the group for the specified job."""
# If there's another job running and we're not waiting, raise
if self.active_job and not self.has_lock and not wait:
raise JobGroupExecutionLimitExceeded(
f"Another job is running for job group {self.group_name}"
)
# Else if we don't have the lock, acquire it
if not self.has_lock:
await self._lock.acquire()
# Store the job ID we acquired the lock for
if self.active_job:
self._parent_jobs.append(self.active_job)
self._active_job = job
def release(self) -> None:
"""Release the lock for the group or return it to parent."""
if not self.has_lock:
raise JobException("Cannot release as caller does not own lock")
if self._parent_jobs:
self._active_job = self._parent_jobs.pop()
else:
self._active_job = None
self._lock.release()
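The essential trick in JobGroup is a lock that the holding task may re-acquire through nested method calls, releasing it for real only when the outermost call finishes. A self-contained sketch of that behaviour (hypothetical names, not Supervisor code; the real class tracks parent jobs instead of a depth counter):

import asyncio


class TinyJobGroup:
    """Per-object lock that the owning task can re-enter."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._owner: asyncio.Task | None = None
        self._depth = 0

    async def acquire(self) -> None:
        # Only block if another task holds the lock
        if self._owner is not asyncio.current_task():
            await self._lock.acquire()
            self._owner = asyncio.current_task()
        self._depth += 1

    def release(self) -> None:
        # Release for real only when the outermost caller is done
        self._depth -= 1
        if self._depth == 0:
            self._owner = None
            self._lock.release()


async def update(group: TinyJobGroup) -> None:
    await group.acquire()
    try:
        print("update: re-entered the group lock")
    finally:
        group.release()


async def install(group: TinyJobGroup) -> None:
    await group.acquire()
    try:
        print("install: holds the group lock")
        await update(group)  # nested sibling call does not deadlock
    finally:
        group.release()


asyncio.run(install(TinyJobGroup()))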

View File

@@ -28,15 +28,13 @@ RUN_RELOAD_BACKUPS = 72000
 RUN_RELOAD_HOST = 7600
 RUN_RELOAD_UPDATER = 7200
 RUN_RELOAD_INGRESS = 930
+RUN_RELOAD_MOUNTS = 900

 RUN_WATCHDOG_HOMEASSISTANT_API = 120
 RUN_WATCHDOG_ADDON_APPLICATON = 120
 RUN_WATCHDOG_OBSERVER_APPLICATION = 180
-RUN_REFRESH_ADDON = 15
-RUN_REFRESH_MOUNTS = 900

 PLUGIN_AUTO_UPDATE_CONDITIONS = PLUGIN_UPDATE_CONDITIONS + [JobCondition.RUNNING]
@@ -65,7 +63,7 @@ class Tasks(CoreSysAttributes):
         self.sys_scheduler.register_task(self.sys_backups.reload, RUN_RELOAD_BACKUPS)
         self.sys_scheduler.register_task(self.sys_host.reload, RUN_RELOAD_HOST)
         self.sys_scheduler.register_task(self.sys_ingress.reload, RUN_RELOAD_INGRESS)
-        self.sys_scheduler.register_task(self.sys_mounts.reload, RUN_REFRESH_MOUNTS)
+        self.sys_scheduler.register_task(self.sys_mounts.reload, RUN_RELOAD_MOUNTS)

         # Watchdog
         self.sys_scheduler.register_task(
@@ -78,12 +76,12 @@ class Tasks(CoreSysAttributes):
             self._watchdog_addon_application, RUN_WATCHDOG_ADDON_APPLICATON
         )

-        # Refresh
-        self.sys_scheduler.register_task(self._refresh_addon, RUN_REFRESH_ADDON)
-
         _LOGGER.info("All core tasks are scheduled")
-    @Job(conditions=ADDON_UPDATE_CONDITIONS + [JobCondition.RUNNING])
+    @Job(
+        name="tasks_update_addons",
+        conditions=ADDON_UPDATE_CONDITIONS + [JobCondition.RUNNING],
+    )
     async def _update_addons(self):
         """Check if an update is available for an Add-on and update it."""
         start_tasks: list[Awaitable[None]] = []
@@ -112,13 +110,14 @@ class Tasks(CoreSysAttributes):
         await asyncio.gather(*start_tasks)

     @Job(
+        name="tasks_update_supervisor",
         conditions=[
             JobCondition.AUTO_UPDATE,
             JobCondition.FREE_SPACE,
             JobCondition.HEALTHY,
             JobCondition.INTERNET_HOST,
             JobCondition.RUNNING,
-        ]
+        ],
     )
     async def _update_supervisor(self):
         """Check and run update of Supervisor Supervisor."""
@@ -172,7 +171,7 @@ class Tasks(CoreSysAttributes):
         finally:
             self._cache[HASS_WATCHDOG_API] = 0

-    @Job(conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
+    @Job(name="tasks_update_cli", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
     async def _update_cli(self):
         """Check and run update of cli."""
         if not self.sys_plugins.cli.need_update:
@@ -183,7 +182,7 @@ class Tasks(CoreSysAttributes):
         )
         await self.sys_plugins.cli.update()

-    @Job(conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
+    @Job(name="tasks_update_dns", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
     async def _update_dns(self):
         """Check and run update of CoreDNS plugin."""
         if not self.sys_plugins.dns.need_update:
@@ -195,7 +194,7 @@ class Tasks(CoreSysAttributes):
         )
         await self.sys_plugins.dns.update()

-    @Job(conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
+    @Job(name="tasks_update_audio", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
     async def _update_audio(self):
         """Check and run update of PulseAudio plugin."""
         if not self.sys_plugins.audio.need_update:
@@ -207,7 +206,7 @@ class Tasks(CoreSysAttributes):
         )
         await self.sys_plugins.audio.update()

-    @Job(conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
+    @Job(name="tasks_update_observer", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
     async def _update_observer(self):
         """Check and run update of Observer plugin."""
         if not self.sys_plugins.observer.need_update:
@@ -219,7 +218,7 @@ class Tasks(CoreSysAttributes):
         )
         await self.sys_plugins.observer.update()

-    @Job(conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
+    @Job(name="tasks_update_multicast", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
     async def _update_multicast(self):
         """Check and run update of multicast."""
         if not self.sys_plugins.multicast.need_update:
@@ -278,21 +277,7 @@ class Tasks(CoreSysAttributes):
         finally:
             self._cache[addon.slug] = 0

-    async def _refresh_addon(self) -> None:
-        """Refresh addon state."""
-        for addon in self.sys_addons.installed:
-            # if watchdog need looking for
-            if addon.watchdog or addon.state != AddonState.STARTED:
-                continue
-
-            # if Addon have running actions
-            if addon.in_progress or await addon.is_running():
-                continue
-
-            # Adjust state
-            addon.state = AddonState.STOPPED
-
-    @Job(conditions=[JobCondition.SUPERVISOR_UPDATED])
+    @Job(name="tasks_reload_store", conditions=[JobCondition.SUPERVISOR_UPDATED])
     async def _reload_store(self) -> None:
         """Reload store and check for addon updates."""
         await self.sys_store.reload()

View File

@@ -139,7 +139,7 @@ class MountManager(FileConfiguration, CoreSysAttributes):
             ]
         )

-    @Job(conditions=[JobCondition.MOUNT_AVAILABLE])
+    @Job(name="mount_manager_reload", conditions=[JobCondition.MOUNT_AVAILABLE])
     async def reload(self) -> None:
         """Update mounts info via dbus and reload failed mounts."""
         if not self.mounts:
@@ -180,9 +180,17 @@ class MountManager(FileConfiguration, CoreSysAttributes):
             ],
         )

-    @Job(conditions=[JobCondition.MOUNT_AVAILABLE], on_condition=MountJobError)
+    @Job(
+        name="mount_manager_create_mount",
+        conditions=[JobCondition.MOUNT_AVAILABLE],
+        on_condition=MountJobError,
+    )
     async def create_mount(self, mount: Mount) -> None:
         """Add/update a mount."""
+        # Add mount name to job
+        if job := self.sys_jobs.get_job():
+            job.reference = mount.name
+
         if mount.name in self._mounts:
             _LOGGER.debug("Mount '%s' exists, unmounting then mounting from new config")
             await self.remove_mount(mount.name, retain_entry=True)
@@ -200,9 +208,17 @@ class MountManager(FileConfiguration, CoreSysAttributes):
         elif mount.usage == MountUsage.SHARE:
             await self._bind_share(mount)

-    @Job(conditions=[JobCondition.MOUNT_AVAILABLE], on_condition=MountJobError)
+    @Job(
+        name="mount_manager_remove_mount",
+        conditions=[JobCondition.MOUNT_AVAILABLE],
+        on_condition=MountJobError,
+    )
     async def remove_mount(self, name: str, *, retain_entry: bool = False) -> None:
         """Remove a mount."""
+        # Add mount name to job
+        if job := self.sys_jobs.get_job():
+            job.reference = name
+
         if name not in self._mounts:
             raise MountNotFound(
                 f"Cannot remove '{name}', no mount exists with that name"
@@ -223,9 +239,17 @@ class MountManager(FileConfiguration, CoreSysAttributes):
         return mount

-    @Job(conditions=[JobCondition.MOUNT_AVAILABLE], on_condition=MountJobError)
+    @Job(
+        name="mount_manager_reload_mount",
+        conditions=[JobCondition.MOUNT_AVAILABLE],
+        on_condition=MountJobError,
+    )
     async def reload_mount(self, name: str) -> None:
         """Reload a mount to retry mounting with same config."""
+        # Add mount name to job
+        if job := self.sys_jobs.get_job():
+            job.reference = name
+
         if name not in self._mounts:
             raise MountNotFound(
                 f"Cannot reload '{name}', no mount exists with that name"

View File

@@ -165,7 +165,7 @@ class DataDisk(CoreSysAttributes):
             if block.drive == drive.object_path
         ]

-    @Job(conditions=[JobCondition.OS_AGENT])
+    @Job(name="data_disk_load", conditions=[JobCondition.OS_AGENT])
     async def load(self) -> None:
         """Load DataDisk feature."""
         # Update datadisk details on OS-Agent
@@ -173,6 +173,7 @@ class DataDisk(CoreSysAttributes):
         await self.sys_dbus.agent.datadisk.reload_device()

     @Job(
+        name="data_disk_migrate",
         conditions=[JobCondition.HAOS, JobCondition.OS_AGENT, JobCondition.HEALTHY],
         limit=JobExecutionLimit.ONCE,
         on_condition=HassOSJobError,

View File

@@ -156,6 +156,7 @@ class OSManager(CoreSysAttributes):
         )

     @Job(
+        name="os_manager_config_sync",
         conditions=[JobCondition.HAOS],
         on_condition=HassOSJobError,
     )
@@ -170,6 +171,7 @@ class OSManager(CoreSysAttributes):
         await self.sys_host.services.restart("hassos-config.service")

     @Job(
+        name="os_manager_update",
         conditions=[
             JobCondition.HAOS,
             JobCondition.INTERNET_SYSTEM,
@@ -225,7 +227,7 @@ class OSManager(CoreSysAttributes):
         )
         raise HassOSUpdateError()

-    @Job(conditions=[JobCondition.HAOS])
+    @Job(name="os_manager_mark_healthy", conditions=[JobCondition.HAOS])
     async def mark_healthy(self) -> None:
         """Set booted partition as good for rauc."""
         try:

View File

@@ -118,6 +118,7 @@ class PluginAudio(PluginBase):
         self.save_data()

     @Job(
+        name="plugin_audio_update",
         conditions=PLUGIN_UPDATE_CONDITIONS,
         on_condition=AudioJobError,
     )
@@ -218,6 +219,7 @@ class PluginAudio(PluginBase):
             ) from err

     @Job(
+        name="plugin_audio_restart_after_problem",
         limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -75,6 +75,7 @@ class PluginCli(PluginBase):
         self.save_data()

     @Job(
+        name="plugin_cli_update",
         conditions=PLUGIN_UPDATE_CONDITIONS,
         on_condition=CliJobError,
     )
@@ -151,6 +152,7 @@ class PluginCli(PluginBase):
             capture_exception(err)

     @Job(
+        name="plugin_cli_restart_after_problem",
         limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -187,6 +187,7 @@ class PluginDns(PluginBase):
         await self.write_hosts()

     @Job(
+        name="plugin_dns_update",
         conditions=PLUGIN_UPDATE_CONDITIONS,
         on_condition=CoreDNSJobError,
     )
@@ -269,6 +270,7 @@ class PluginDns(PluginBase):
         return await super().watchdog_container(event)

     @Job(
+        name="plugin_dns_restart_after_problem",
         limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -71,6 +71,7 @@ class PluginMulticast(PluginBase):
         self.save_data()

     @Job(
+        name="plugin_multicast_update",
         conditions=PLUGIN_UPDATE_CONDITIONS,
         on_condition=MulticastJobError,
     )
@@ -146,6 +147,7 @@ class PluginMulticast(PluginBase):
             capture_exception(err)

     @Job(
+        name="plugin_multicast_restart_after_problem",
         limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -79,6 +79,7 @@ class PluginObserver(PluginBase):
         self.save_data()

     @Job(
+        name="plugin_observer_update",
         conditions=PLUGIN_UPDATE_CONDITIONS,
         on_condition=ObserverJobError,
     )
@@ -156,6 +157,7 @@ class PluginObserver(PluginBase):
             capture_exception(err)

     @Job(
+        name="plugin_observer_restart_after_problem",
         limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
         throttle_period=WATCHDOG_THROTTLE_PERIOD,
         throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,

View File

@@ -22,6 +22,7 @@ class CheckAddonPwned(CheckBase):
"""CheckAddonPwned class for check.""" """CheckAddonPwned class for check."""
@Job( @Job(
name="check_addon_pwned_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE, limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
@@ -62,7 +63,7 @@ class CheckAddonPwned(CheckBase):
         except PwnedError:
             pass

-    @Job(conditions=[JobCondition.INTERNET_SYSTEM])
+    @Job(name="check_addon_pwned_approve", conditions=[JobCondition.INTERNET_SYSTEM])
     async def approve_check(self, reference: str | None = None) -> bool:
         """Approve check if it is affected by issue."""
         addon = self.sys_addons.get(reference)

View File

@@ -23,6 +23,7 @@ class CheckDNSServer(CheckBase):
"""CheckDNSServer class for check.""" """CheckDNSServer class for check."""
@Job( @Job(
name="check_dns_server_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE, limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
@@ -42,7 +43,7 @@ class CheckDNSServer(CheckBase):
                 )
                 capture_exception(results[i])

-    @Job(conditions=[JobCondition.INTERNET_SYSTEM])
+    @Job(name="check_dns_server_approve", conditions=[JobCondition.INTERNET_SYSTEM])
     async def approve_check(self, reference: str | None = None) -> bool:
         """Approve check if it is affected by issue."""
         if reference not in self.dns_servers:

View File

@@ -23,6 +23,7 @@ class CheckDNSServerIPv6(CheckBase):
"""CheckDNSServerIPv6 class for check.""" """CheckDNSServerIPv6 class for check."""
@Job( @Job(
name="check_dns_server_ipv6_run",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
limit=JobExecutionLimit.THROTTLE, limit=JobExecutionLimit.THROTTLE,
throttle_period=timedelta(hours=24), throttle_period=timedelta(hours=24),
@@ -47,7 +48,9 @@ class CheckDNSServerIPv6(CheckBase):
                 )
                 capture_exception(results[i])

-    @Job(conditions=[JobCondition.INTERNET_SYSTEM])
+    @Job(
+        name="check_dns_server_ipv6_approve", conditions=[JobCondition.INTERNET_SYSTEM]
+    )
     async def approve_check(self, reference: str | None = None) -> bool:
         """Approve check if it is affected by issue."""
         if reference not in self.dns_servers:

View File

@@ -36,7 +36,10 @@ class ResolutionFixup(CoreSysAttributes):
"""Return a list of all fixups.""" """Return a list of all fixups."""
return list(self._fixups.values()) return list(self._fixups.values())
@Job(conditions=[JobCondition.HEALTHY, JobCondition.RUNNING]) @Job(
name="resolution_fixup_run_autofix",
conditions=[JobCondition.HEALTHY, JobCondition.RUNNING],
)
async def run_autofix(self) -> None: async def run_autofix(self) -> None:
"""Run all startup fixes.""" """Run all startup fixes."""
_LOGGER.info("Starting system autofix at state %s", self.sys_core.state) _LOGGER.info("Starting system autofix at state %s", self.sys_core.state)

View File

@@ -25,6 +25,7 @@ class FixupStoreExecuteReload(FixupBase):
"""Storage class for fixup.""" """Storage class for fixup."""
@Job( @Job(
name="fixup_store_execute_reload_process",
conditions=[JobCondition.INTERNET_SYSTEM, JobCondition.FREE_SPACE], conditions=[JobCondition.INTERNET_SYSTEM, JobCondition.FREE_SPACE],
on_condition=ResolutionFixupJobError, on_condition=ResolutionFixupJobError,
) )

View File

@@ -26,6 +26,7 @@ class FixupStoreExecuteReset(FixupBase):
"""Storage class for fixup.""" """Storage class for fixup."""
@Job( @Job(
name="fixup_store_execute_reset_process",
conditions=[JobCondition.INTERNET_SYSTEM, JobCondition.FREE_SPACE], conditions=[JobCondition.INTERNET_SYSTEM, JobCondition.FREE_SPACE],
on_condition=ResolutionFixupJobError, on_condition=ResolutionFixupJobError,
) )

View File

@@ -22,6 +22,7 @@ class FixupSystemExecuteIntegrity(FixupBase):
"""Storage class for fixup.""" """Storage class for fixup."""
@Job( @Job(
name="fixup_system_execute_integrity_process",
conditions=[JobCondition.INTERNET_SYSTEM], conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=ResolutionFixupJobError, on_condition=ResolutionFixupJobError,
limit=JobExecutionLimit.THROTTLE, limit=JobExecutionLimit.THROTTLE,

View File

@@ -103,6 +103,7 @@ class Security(FileConfiguration, CoreSysAttributes):
             return

     @Job(
+        name="security_manager_integrity_check",
         conditions=[JobCondition.INTERNET_SYSTEM],
         on_condition=SecurityJobError,
         limit=JobExecutionLimit.ONCE,

View File

@@ -80,7 +80,11 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             self._data[ATTR_REPOSITORIES], add_with_errors=True
         )

-    @Job(conditions=[JobCondition.SUPERVISOR_UPDATED], on_condition=StoreJobError)
+    @Job(
+        name="store_manager_reload",
+        conditions=[JobCondition.SUPERVISOR_UPDATED],
+        on_condition=StoreJobError,
+    )
     async def reload(self) -> None:
         """Update add-ons from repository and reload list."""
         tasks = [self.sys_create_task(repository.update()) for repository in self.all]
@@ -92,6 +96,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
         self._read_addons()

     @Job(
+        name="store_manager_add_repository",
         conditions=[JobCondition.INTERNET_SYSTEM, JobCondition.SUPERVISOR_UPDATED],
         on_condition=StoreJobError,
     )

View File

@@ -1,5 +1,5 @@
"""Init file for Supervisor add-on data.""" """Init file for Supervisor add-on data."""
from collections.abc import Awaitable from dataclasses import dataclass
import logging import logging
from pathlib import Path from pathlib import Path
from typing import Any from typing import Any
@@ -29,6 +29,72 @@ from .validate import SCHEMA_REPOSITORY_CONFIG
 _LOGGER: logging.Logger = logging.getLogger(__name__)


+@dataclass(slots=True)
+class ProcessedRepository:
+    """Representation of a repository processed from its git folder."""
+
+    slug: str
+    path: Path
+    config: dict[str, Any]
+
+
+def _read_addon_translations(addon_path: Path) -> dict:
+    """Read translations from add-ons folder.
+
+    Should be run in the executor.
+    """
+    translations_dir = addon_path / "translations"
+    translations = {}
+
+    if not translations_dir.exists():
+        return translations
+
+    translation_files = [
+        translation
+        for translation in translations_dir.glob("*")
+        if translation.suffix in FILE_SUFFIX_CONFIGURATION
+    ]
+
+    for translation in translation_files:
+        try:
+            translations[translation.stem] = SCHEMA_ADDON_TRANSLATIONS(
+                read_json_or_yaml_file(translation)
+            )
+        except (ConfigurationFileError, vol.Invalid) as err:
+            _LOGGER.warning("Can't read translations from %s - %s", translation, err)
+            continue
+
+    return translations
+
+
+def _read_git_repository(path: Path) -> ProcessedRepository | None:
+    """Process a custom repository folder."""
+    slug = extract_hash_from_path(path)
+
+    # exists repository json
+    try:
+        repository_file = find_one_filetype(
+            path, "repository", FILE_SUFFIX_CONFIGURATION
+        )
+    except ConfigurationFileError:
+        _LOGGER.warning("No repository information exists at %s", path)
+        return None
+
+    try:
+        return ProcessedRepository(
+            slug,
+            path,
+            SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file)),
+        )
+    except ConfigurationFileError:
+        _LOGGER.warning("Can't read repository information from %s", repository_file)
+        return None
+    except vol.Invalid:
+        _LOGGER.warning("Repository parse error %s", repository_file)
+        return None
 class StoreData(CoreSysAttributes):
     """Hold data for Add-ons inside Supervisor."""
@@ -38,63 +104,43 @@ class StoreData(CoreSysAttributes):
         self.repositories: dict[str, Any] = {}
         self.addons: dict[str, Any] = {}

-    async def update(self) -> Awaitable[None]:
+    async def update(self) -> None:
         """Read data from add-on repository."""
-        return await self.sys_run_in_executor(self._update)
-
-    def _update(self) -> None:
         self.repositories.clear()
         self.addons.clear()

         # read core repository
-        self._read_addons_folder(self.sys_config.path_addons_core, REPOSITORY_CORE)
+        await self._read_addons_folder(
+            self.sys_config.path_addons_core, REPOSITORY_CORE
+        )

         # read local repository
-        self._read_addons_folder(self.sys_config.path_addons_local, REPOSITORY_LOCAL)
+        await self._read_addons_folder(
+            self.sys_config.path_addons_local, REPOSITORY_LOCAL
+        )

         # add built-in repositories information
-        self._set_builtin_repositories()
+        await self._set_builtin_repositories()

         # read custom git repositories
-        for repository_element in self.sys_config.path_addons_git.iterdir():
-            if repository_element.is_dir():
-                self._read_git_repository(repository_element)
-
-    def _read_git_repository(self, path: Path) -> None:
-        """Process a custom repository folder."""
-        slug = extract_hash_from_path(path)
-
-        # exists repository json
-        try:
-            repository_file = find_one_filetype(
-                path, "repository", FILE_SUFFIX_CONFIGURATION
-            )
-        except ConfigurationFileError:
-            _LOGGER.warning("No repository information exists at %s", path)
-            return
-
-        try:
-            repository_info = SCHEMA_REPOSITORY_CONFIG(
-                read_json_or_yaml_file(repository_file)
-            )
-        except ConfigurationFileError:
-            _LOGGER.warning(
-                "Can't read repository information from %s", repository_file
-            )
-            return
-        except vol.Invalid:
-            _LOGGER.warning("Repository parse error %s", repository_file)
-            return
-
-        # process data
-        self.repositories[slug] = repository_info
-        self._read_addons_folder(path, slug)
-
-    def _find_addons(self, path: Path, repository: dict) -> list[Path] | None:
+        def _read_git_repositories() -> list[ProcessedRepository]:
+            return [
+                repo
+                for repository_element in self.sys_config.path_addons_git.iterdir()
+                if repository_element.is_dir()
+                and (repo := _read_git_repository(repository_element))
+            ]
+
+        for repo in await self.sys_run_in_executor(_read_git_repositories):
+            self.repositories[repo.slug] = repo.config
+            await self._read_addons_folder(repo.path, repo.slug)
+
+    async def _find_addons(self, path: Path, repository: dict) -> list[Path] | None:
         """Find add-ons in the path."""
-        try:
+
+        def _get_addons_list() -> list[Path]:
             # Generate a list without artefact, safe for corruptions
-            addon_list = [
+            return [
                 addon
                 for addon in path.glob("**/config.*")
                 if not [
@@ -104,6 +150,9 @@ class StoreData(CoreSysAttributes):
                 ]
                 and addon.suffix in FILE_SUFFIX_CONFIGURATION
             ]
+
+        try:
+            addon_list = await self.sys_run_in_executor(_get_addons_list)
         except OSError as err:
             suggestion = None
             if path.stem != StoreType.LOCAL:
@@ -120,77 +169,59 @@ class StoreData(CoreSysAttributes):
             return None
         return addon_list

-    def _read_addons_folder(self, path: Path, repository: dict) -> None:
+    async def _read_addons_folder(self, path: Path, repository: str) -> None:
         """Read data from add-ons folder."""
-        if not (addon_list := self._find_addons(path, repository)):
+        if not (addon_list := await self._find_addons(path, repository)):
             return

-        for addon in addon_list:
-            try:
-                addon_config = read_json_or_yaml_file(addon)
-            except ConfigurationFileError:
-                _LOGGER.warning("Can't read %s from repository %s", addon, repository)
-                continue
+        def _process_addons_config() -> dict[str, dict[str, Any]]:
+            addons_config: dict[str, dict[str, Any]] = {}
+            for addon in addon_list:
+                try:
+                    addon_config = read_json_or_yaml_file(addon)
+                except ConfigurationFileError:
+                    _LOGGER.warning(
+                        "Can't read %s from repository %s", addon, repository
+                    )
+                    continue

-            # validate
-            try:
-                addon_config = SCHEMA_ADDON_CONFIG(addon_config)
-            except vol.Invalid as ex:
-                _LOGGER.warning(
-                    "Can't read %s: %s", addon, humanize_error(addon_config, ex)
-                )
-                continue
+                # validate
+                try:
+                    addon_config = SCHEMA_ADDON_CONFIG(addon_config)
+                except vol.Invalid as ex:
+                    _LOGGER.warning(
+                        "Can't read %s: %s", addon, humanize_error(addon_config, ex)
+                    )
+                    continue

-            # Generate slug
-            addon_slug = f"{repository}_{addon_config[ATTR_SLUG]}"
+                # Generate slug
+                addon_slug = f"{repository}_{addon_config[ATTR_SLUG]}"

-            # store
-            addon_config[ATTR_REPOSITORY] = repository
-            addon_config[ATTR_LOCATON] = str(addon.parent)
-            addon_config[ATTR_TRANSLATIONS] = self._read_addon_translations(
-                addon.parent
-            )
-            self.addons[addon_slug] = addon_config
+                # store
+                addon_config[ATTR_REPOSITORY] = repository
+                addon_config[ATTR_LOCATON] = str(addon.parent)
+                addon_config[ATTR_TRANSLATIONS] = _read_addon_translations(addon.parent)
+                addons_config[addon_slug] = addon_config
+
+            return addons_config
+
+        self.addons.update(await self.sys_run_in_executor(_process_addons_config))

-    def _set_builtin_repositories(self):
+    async def _set_builtin_repositories(self):
         """Add local built-in repository into dataset."""
-        try:
-            builtin_file = Path(__file__).parent.joinpath("built-in.json")
-            builtin_data = read_json_file(builtin_file)
-        except ConfigurationFileError:
-            _LOGGER.warning("Can't read built-in json")
-            return
-
-        # core repository
-        self.repositories[REPOSITORY_CORE] = builtin_data[REPOSITORY_CORE]
-
-        # local repository
-        self.repositories[REPOSITORY_LOCAL] = builtin_data[REPOSITORY_LOCAL]
-
-    def _read_addon_translations(self, addon_path: Path) -> dict:
-        """Read translations from add-ons folder."""
-        translations_dir = addon_path / "translations"
-        translations = {}
-
-        if not translations_dir.exists():
-            return translations
-
-        translation_files = [
-            translation
-            for translation in translations_dir.glob("*")
-            if translation.suffix in FILE_SUFFIX_CONFIGURATION
-        ]
-
-        for translation in translation_files:
-            try:
-                translations[translation.stem] = SCHEMA_ADDON_TRANSLATIONS(
-                    read_json_or_yaml_file(translation)
-                )
-            except (ConfigurationFileError, vol.Invalid) as err:
-                _LOGGER.warning(
-                    "Can't read translations from %s - %s", translation, err
-                )
-                continue
-
-        return translations
+
+        def _get_builtins() -> dict[str, dict[str, str]] | None:
+            try:
+                builtin_file = Path(__file__).parent.joinpath("built-in.json")
+                return read_json_file(builtin_file)
+            except ConfigurationFileError:
+                _LOGGER.warning("Can't read built-in json")
+                return None
+
+        builtin_data = await self.sys_run_in_executor(_get_builtins)
+        if builtin_data:
+            # core repository
+            self.repositories[REPOSITORY_CORE] = builtin_data[REPOSITORY_CORE]
+
+            # local repository
+            self.repositories[REPOSITORY_LOCAL] = builtin_data[REPOSITORY_LOCAL]
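The shape of this refactor repeats throughout the file: blocking filesystem work is wrapped in a local function and handed to a thread executor, so the event loop never blocks on disk I/O. A compact standalone sketch of that pattern (illustrative names, standard library only):

import asyncio
import json
from pathlib import Path


async def read_configs(folder: Path) -> dict[str, dict]:
    """Read all config.json files under folder without blocking the loop."""

    def _read_all() -> dict[str, dict]:
        # Runs in a worker thread: glob and file reads are blocking calls
        configs: dict[str, dict] = {}
        for config in folder.glob("**/config.json"):
            try:
                configs[config.parent.name] = json.loads(config.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # skip unreadable entries, as the code above does
        return configs

    # Counterpart of self.sys_run_in_executor(...) in the diff
    return await asyncio.get_running_loop().run_in_executor(None, _read_all)


print(asyncio.run(read_configs(Path("."))))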

View File

@@ -77,6 +77,7 @@ class GitRepo(CoreSysAttributes):
             raise StoreGitError() from err

     @Job(
+        name="git_repo_clone",
         conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
         on_condition=StoreJobError,
     )
@@ -112,6 +113,7 @@ class GitRepo(CoreSysAttributes):
             raise StoreGitCloneError() from err

     @Job(
+        name="git_repo_pull",
         conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
         on_condition=StoreJobError,
     )

View File

@@ -211,7 +211,11 @@ class Supervisor(CoreSysAttributes):
             self.sys_create_task(self.sys_core.stop())

-    @Job(conditions=[JobCondition.RUNNING], on_condition=SupervisorJobError)
+    @Job(
+        name="supervisor_restart",
+        conditions=[JobCondition.RUNNING],
+        on_condition=SupervisorJobError,
+    )
     async def restart(self) -> None:
         """Restart Supervisor soft."""
         self.sys_core.exit_code = 100
@@ -255,6 +259,7 @@ class Supervisor(CoreSysAttributes):
             _LOGGER.error("Repair of Supervisor failed")

     @Job(
+        name="supervisor_check_connectivity",
         limit=JobExecutionLimit.THROTTLE,
         throttle_period=_check_connectivity_throttle_period,
     )

View File

@@ -181,6 +181,7 @@ class Updater(FileConfiguration, CoreSysAttributes):
         self._data[ATTR_AUTO_UPDATE] = value

     @Job(
+        name="updater_fetch_data",
         conditions=[JobCondition.INTERNET_SYSTEM],
         on_condition=UpdaterJobError,
         limit=JobExecutionLimit.THROTTLE_WAIT,

View File

@@ -30,6 +30,8 @@ from .const import (
     ATTR_PWNED,
     ATTR_REGISTRIES,
     ATTR_SESSION,
+    ATTR_SESSION_DATA,
+    ATTR_SESSION_DATA_USER,
     ATTR_SUPERVISOR,
     ATTR_TIMEZONE,
     ATTR_USERNAME,
@@ -49,7 +51,9 @@ RE_REGISTRY = re.compile(r"^([a-z0-9]+(-[a-z0-9]+)*\.)+[a-z]{2,}$")
 # pylint: disable=invalid-name
 network_port = vol.All(vol.Coerce(int), vol.Range(min=1, max=65535))
 wait_boot = vol.All(vol.Coerce(int), vol.Range(min=1, max=60))
-docker_image = vol.Match(r"^([a-zA-Z\-\.:\d{}]+/)*?([\-\w{}]+)/([\-\w{}]+)$")
+docker_image = vol.Match(
+    r"^([a-z0-9][a-z0-9.\-]*(:[0-9]+)?/)*?([a-z0-9{][a-z0-9.\-_{}]*/)*?([a-z0-9{][a-z0-9.\-_{}]*)$"
+)
 uuid_match = vol.Match(r"^[0-9a-f]{32}$")
 sha256 = vol.Match(r"^[0-9a-f]{64}$")
 token = vol.Match(r"^[0-9a-f]{32,256}$")
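The tightened `docker_image` pattern can be sanity-checked directly with the standard `re` module; the expectations below follow from the character classes in the new expression (a quick sketch, not part of the diff):

import re

DOCKER_IMAGE = re.compile(
    r"^([a-z0-9][a-z0-9.\-]*(:[0-9]+)?/)*?([a-z0-9{][a-z0-9.\-_{}]*/)*?([a-z0-9{][a-z0-9.\-_{}]*)$"
)

assert DOCKER_IMAGE.match("local/amd64-addon-ssh")
assert DOCKER_IMAGE.match("ghcr.io/home-assistant/{arch}-addon-example")
assert DOCKER_IMAGE.match("registry.example.com:5000/vendor/image")
assert not DOCKER_IMAGE.match("-invalid-something")  # may no longer start with "-"
assert not DOCKER_IMAGE.match("UPPERCASE/image")  # uppercase was allowed by the old pattern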
@@ -176,18 +180,33 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
 SCHEMA_AUTH_CONFIG = vol.Schema({sha256: sha256})

+SCHEMA_SESSION_DATA = vol.Schema(
+    {
+        token: vol.Schema(
+            {
+                vol.Required(ATTR_SESSION_DATA_USER): vol.Schema(
+                    {
+                        vol.Required("id"): str,
+                        vol.Required("username"): str,
+                        vol.Required("displayname"): str,
+                    }
+                )
+            }
+        )
+    }
+)

 SCHEMA_INGRESS_CONFIG = vol.Schema(
     {
         vol.Required(ATTR_SESSION, default=dict): vol.Schema(
             {token: vol.Coerce(float)}
         ),
+        vol.Required(ATTR_SESSION_DATA, default=dict): SCHEMA_SESSION_DATA,
         vol.Required(ATTR_PORTS, default=dict): vol.Schema({str: network_port}),
     },
     extra=vol.REMOVE_EXTRA,
 )

 # pylint: disable=no-value-for-parameter
 SCHEMA_SECURITY_CONFIG = vol.Schema(
     {
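For orientation, a hedged example of a payload the new SCHEMA_SESSION_DATA would accept, assuming ATTR_SESSION_DATA_USER resolves to the key "user" (the token below is made up; any 32 to 256 character lowercase-hex string passes the `token` matcher):

session_data = {
    "ab" * 16: {  # 32 hex characters, assumed valid per the token matcher
        "user": {  # assumed key for ATTR_SESSION_DATA_USER
            "id": "some-id",
            "username": "sn",
            "displayname": "Some Name",
        }
    }
}
# SCHEMA_SESSION_DATA(session_data) returns the payload unchanged if it validates.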

View File

@@ -18,6 +18,7 @@ from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
+from supervisor.ingress import Ingress
 from supervisor.store.repository import Repository
 from supervisor.utils.dt import utcnow
@@ -523,18 +524,44 @@ async def test_restore(
     path_extern,
 ) -> None:
     """Test restoring an addon."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     install_addon_ssh.path_data.mkdir()
     await install_addon_ssh.load()

     tarfile = SecureTarFile(get_fixture_path(f"backup_local_ssh_{status}.tar.gz"), "r")
     with patch.object(DockerAddon, "is_running", return_value=False), patch.object(
         CpuArch, "supported", new=PropertyMock(return_value=["aarch64"])
-    ):
+    ), patch.object(Ingress, "update_hass_panel") as update_hass_panel:
         start_task = await coresys.addons.restore(TEST_ADDON_SLUG, tarfile)
+        update_hass_panel.assert_called_once()

     assert bool(start_task) is (status == "running")


+async def test_restore_while_running(
+    coresys: CoreSys,
+    install_addon_ssh: Addon,
+    container: MagicMock,
+    tmp_supervisor_data,
+    path_extern,
+):
+    """Test restore of a running addon."""
+    container.status = "running"
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    install_addon_ssh.path_data.mkdir()
+    await install_addon_ssh.load()
+
+    tarfile = SecureTarFile(get_fixture_path("backup_local_ssh_stopped.tar.gz"), "r")
+    with patch.object(DockerAddon, "is_running", return_value=True), patch.object(
+        CpuArch, "supported", new=PropertyMock(return_value=["aarch64"])
+    ), patch.object(Ingress, "update_hass_panel"):
+        start_task = await coresys.addons.restore(TEST_ADDON_SLUG, tarfile)
+
+    assert bool(start_task) is False
+    container.stop.assert_called_once()


 async def test_start_when_running(
     coresys: CoreSys,
     install_addon_ssh: Addon,

View File

@@ -110,7 +110,7 @@ def test_invalid_repository():
"""Validate basic config with invalid repositories.""" """Validate basic config with invalid repositories."""
config = load_json_fixture("basic-addon-config.json") config = load_json_fixture("basic-addon-config.json")
config["image"] = "something" config["image"] = "-invalid-something"
with pytest.raises(vol.Invalid): with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config) vd.SCHEMA_ADDON_CONFIG(config)

View File

@@ -1,6 +1,7 @@
"""Test addon manager.""" """Test addon manager."""
import asyncio import asyncio
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
@@ -8,6 +9,7 @@ import pytest
 from supervisor.addons.addon import Addon
 from supervisor.arch import CpuArch
+from supervisor.config import CoreConfig
 from supervisor.const import AddonBoot, AddonStartup, AddonState, BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
@@ -22,6 +24,7 @@ from supervisor.exceptions import (
 )
 from supervisor.plugins.dns import PluginDns
 from supervisor.utils import check_exception_chain
+from supervisor.utils.common import write_json_file

 from tests.common import load_json_fixture
 from tests.const import TEST_ADDON_SLUG
@@ -57,7 +60,7 @@ async def test_image_added_removed_on_update(
     assert install_addon_ssh.image == "local/amd64-addon-ssh"
     assert coresys.addons.store.get(TEST_ADDON_SLUG).image == "test/amd64-my-ssh-addon"

-    with patch.object(DockerInterface, "_install") as install, patch.object(
+    with patch.object(DockerInterface, "install") as install, patch.object(
         DockerAddon, "_build"
     ) as build:
         await install_addon_ssh.update()
@@ -77,7 +80,7 @@ async def test_image_added_removed_on_update(
     assert install_addon_ssh.image == "test/amd64-my-ssh-addon"
     assert coresys.addons.store.get(TEST_ADDON_SLUG).image == "local/amd64-addon-ssh"

-    with patch.object(DockerInterface, "_install") as install, patch.object(
+    with patch.object(DockerInterface, "install") as install, patch.object(
         DockerAddon, "_build"
     ) as build:
         await install_addon_ssh.update()
@@ -249,7 +252,7 @@ async def test_update(
     assert install_addon_ssh.need_update is True

-    with patch.object(DockerInterface, "_install"), patch.object(
+    with patch.object(DockerInterface, "install"), patch.object(
         DockerAddon, "is_running", return_value=False
     ):
         start_task = await coresys.addons.update(TEST_ADDON_SLUG)
@@ -277,3 +280,87 @@ async def test_rebuild(
         start_task = await coresys.addons.rebuild(TEST_ADDON_SLUG)

     assert bool(start_task) is (status == "running")
async def test_start_wait_cancel_on_uninstall(
    coresys: CoreSys,
    install_addon_ssh: Addon,
    container: MagicMock,
    caplog: pytest.LogCaptureFixture,
    tmp_supervisor_data,
    path_extern,
) -> None:
    """Test the addon wait task is cancelled when addon is uninstalled."""
    install_addon_ssh.path_data.mkdir()
    container.attrs["Config"] = {"Healthcheck": "exists"}
    await install_addon_ssh.load()
    await asyncio.sleep(0)
    assert install_addon_ssh.state == AddonState.STOPPED

    start_task = asyncio.create_task(await install_addon_ssh.start())
    assert start_task

    coresys.bus.fire_event(
        BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
        DockerContainerStateEvent(
            name=f"addon_{TEST_ADDON_SLUG}",
            state=ContainerState.RUNNING,
            id="abc123",
            time=1,
        ),
    )
    await asyncio.sleep(0.01)
    assert not start_task.done()
    assert install_addon_ssh.state == AddonState.STARTUP

    caplog.clear()
    await coresys.addons.uninstall(TEST_ADDON_SLUG)
    await asyncio.sleep(0.01)
    assert start_task.done()
    assert "Wait for addon startup task cancelled" in caplog.text


async def test_repository_file_missing(
    coresys: CoreSys, tmp_supervisor_data: Path, caplog: pytest.LogCaptureFixture
):
    """Test repository file is missing."""
    with patch.object(
        CoreConfig,
        "path_addons_git",
        new=PropertyMock(return_value=tmp_supervisor_data / "addons" / "git"),
    ):
        repo_dir = coresys.config.path_addons_git / "test"
        repo_dir.mkdir(parents=True)

        await coresys.store.data.update()

    assert f"No repository information exists at {repo_dir.as_posix()}" in caplog.text


async def test_repository_file_error(
    coresys: CoreSys, tmp_supervisor_data: Path, caplog: pytest.LogCaptureFixture
):
    """Test repository file is missing."""
    with patch.object(
        CoreConfig,
        "path_addons_git",
        new=PropertyMock(return_value=tmp_supervisor_data / "addons" / "git"),
    ):
        repo_dir = coresys.config.path_addons_git / "test"
        repo_dir.mkdir(parents=True)
        repo_file = repo_dir / "repository.json"

        with repo_file.open("w") as file:
            file.write("not json")

        await coresys.store.data.update()
        assert (
            f"Can't read repository information from {repo_file.as_posix()}"
            in caplog.text
        )

        write_json_file(repo_file, {"invalid": "bad"})
        await coresys.store.data.update()
        assert f"Repository parse error {repo_dir.as_posix()}" in caplog.text

View File

@@ -171,6 +171,7 @@ async def test_api_addon_rebuild_healthcheck(
     path_extern,
 ):
     """Test rebuilding an addon waits for healthy."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     container.status = "running"
     install_addon_ssh.path_data.mkdir()
     container.attrs["Config"] = {"Healthcheck": "exists"}

View File

@@ -1,4 +1,5 @@
"""Test ingress API.""" """Test ingress API."""
# pylint: disable=protected-access
from unittest.mock import patch from unittest.mock import patch
import pytest import pytest
@@ -37,3 +38,50 @@ async def test_validate_session(api_client, coresys):
     assert await resp.json() == {"result": "ok", "data": {}}
     assert coresys.ingress.sessions[session] > valid_time
@pytest.mark.asyncio
async def test_validate_session_with_user_id(api_client, coresys):
    """Test validating ingress session with user ID passed."""
    with patch("aiohttp.web_request.BaseRequest.__getitem__", return_value=None):
        resp = await api_client.post(
            "/ingress/validate_session",
            json={"session": "non-existing"},
        )
        assert resp.status == 401

    with patch(
        "aiohttp.web_request.BaseRequest.__getitem__",
        return_value=coresys.homeassistant,
    ):
        client = coresys.homeassistant.websocket._client
        client.async_send_command.return_value = [
            {"id": "some-id", "name": "Some Name", "username": "sn"}
        ]

        resp = await api_client.post("/ingress/session", json={"user_id": "some-id"})
        result = await resp.json()

        client.async_send_command.assert_called_with({"type": "config/auth/list"})

        assert "session" in result["data"]
        session = result["data"]["session"]
        assert session in coresys.ingress.sessions

        valid_time = coresys.ingress.sessions[session]

        resp = await api_client.post(
            "/ingress/validate_session",
            json={"session": session},
        )
        assert resp.status == 200
        assert await resp.json() == {"result": "ok", "data": {}}
        assert coresys.ingress.sessions[session] > valid_time

        assert session in coresys.ingress.sessions_data
        assert coresys.ingress.get_session_data(session).user.id == "some-id"
        assert coresys.ingress.get_session_data(session).user.username == "sn"
        assert (
            coresys.ingress.get_session_data(session).user.display_name == "Some Name"
        )

View File

@@ -125,6 +125,7 @@ async def test_api_store_update_healthcheck(
     path_extern,
 ):
     """Test updating an addon with healthcheck waits for health status."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
     container.status = "running"
     container.attrs["Config"] = {"Healthcheck": "exists"}
     install_addon_ssh.path_data.mkdir()
@@ -176,7 +177,7 @@ async def test_api_store_update_healthcheck(
     asyncio.create_task(container_events())

     with patch.object(DockerAddon, "run", new=container_events_task), patch.object(
-        DockerInterface, "_install"
+        DockerInterface, "install"
     ), patch.object(DockerAddon, "is_running", return_value=False), patch.object(
         CpuArch, "supported", new=PropertyMock(return_value=["amd64"])
     ):

View File

@@ -70,6 +70,13 @@ async def path_extern() -> None:
     yield


+@pytest.fixture
+async def supervisor_name() -> None:
+    """Set env for supervisor name."""
+    os.environ["SUPERVISOR_NAME"] = "hassio_supervisor"
+    yield
+
+
 @pytest.fixture
 async def docker() -> DockerAPI:
     """Mock DockerAPI."""
@@ -286,7 +293,13 @@ async def fixture_all_dbus_services(
 @pytest.fixture
 async def coresys(
-    event_loop, docker, dbus_session_bus, all_dbus_services, aiohttp_client, run_dir
+    event_loop,
+    docker,
+    dbus_session_bus,
+    all_dbus_services,
+    aiohttp_client,
+    run_dir,
+    supervisor_name,
 ) -> CoreSys:
     """Create a CoreSys Mock."""
     with patch("supervisor.bootstrap.initialize_system"), patch(
@@ -409,7 +422,9 @@ def sys_supervisor():
 @pytest.fixture
 async def api_client(
-    aiohttp_client, coresys: CoreSys, request: pytest.FixtureRequest
+    aiohttp_client,
+    coresys: CoreSys,
+    request: pytest.FixtureRequest,
 ) -> TestClient:
     """Fixture for RestAPI client."""
@@ -428,9 +443,7 @@ async def api_client(
     api = RestAPI(coresys)
     api.webapp = web.Application(middlewares=[_security_middleware])
     api.start = AsyncMock()
-    with patch("supervisor.docker.supervisor.os") as os:
-        os.environ = {"SUPERVISOR_NAME": "hassio_supervisor"}
-        await api.load()
+    await api.load()
     yield await aiohttp_client(api.webapp)
@@ -593,16 +606,12 @@ async def journald_logs(coresys: CoreSys) -> MagicMock:
 @pytest.fixture
-async def docker_logs(docker: DockerAPI) -> MagicMock:
+async def docker_logs(docker: DockerAPI, supervisor_name) -> MagicMock:
     """Mock log output for a container from docker."""
     container_mock = MagicMock()
     container_mock.logs.return_value = load_binary_fixture("logs_docker_container.txt")
     docker.containers.get.return_value = container_mock
-
-    with patch("supervisor.docker.supervisor.os") as os:
-        os.environ = {"SUPERVISOR_NAME": "hassio_supervisor"}
-        yield container_mock.logs
+    yield container_mock.logs


 @pytest.fixture
@@ -640,7 +649,6 @@ async def os_available(request: pytest.FixtureRequest) -> None:
 @pytest.fixture
 async def mount_propagation(docker: DockerAPI, coresys: CoreSys) -> None:
     """Mock supervisor connected to container with propagation set."""
-    os.environ["SUPERVISOR_NAME"] = "hassio_supervisor"
     docker.containers.get.return_value = supervisor = MagicMock()
     supervisor.attrs = {
         "Mounts": [

View File

@@ -1,6 +1,9 @@
"""Test settings generation from interface.""" """Test settings generation from interface."""
from unittest.mock import PropertyMock, patch
from supervisor.dbus.network import NetworkManager from supervisor.dbus.network import NetworkManager
from supervisor.dbus.network.interface import NetworkInterface
from supervisor.dbus.network.setting.generate import get_connection_from_interface from supervisor.dbus.network.setting.generate import get_connection_from_interface
from supervisor.host.network import Interface from supervisor.host.network import Interface
@@ -17,13 +20,24 @@ async def test_get_connection_from_interface(network_manager: NetworkManager):
assert "interface-name" not in connection_payload["connection"] assert "interface-name" not in connection_payload["connection"]
assert connection_payload["connection"]["type"].value == "802-3-ethernet" assert connection_payload["connection"]["type"].value == "802-3-ethernet"
assert ( assert connection_payload["match"]["path"].value == ["platform-ff3f0000.ethernet"]
connection_payload["device"]["match-device"].value
== "mac:AA:BB:CC:DD:EE:FF,interface-name:eth0"
)
assert connection_payload["ipv4"]["method"].value == "auto" assert connection_payload["ipv4"]["method"].value == "auto"
assert "address-data" not in connection_payload["ipv4"] assert "address-data" not in connection_payload["ipv4"]
assert connection_payload["ipv6"]["method"].value == "auto" assert connection_payload["ipv6"]["method"].value == "auto"
assert "address-data" not in connection_payload["ipv6"] assert "address-data" not in connection_payload["ipv6"]
async def test_get_connection_no_path(network_manager: NetworkManager):
"""Test network interface without a path."""
dbus_interface = network_manager.get(TEST_INTERFACE)
with patch.object(NetworkInterface, "path", new=PropertyMock(return_value=None)):
interface = Interface.from_dbus_interface(dbus_interface)
connection_payload = get_connection_from_interface(interface)
assert "connection" in connection_payload
assert "match" not in connection_payload
assert connection_payload["connection"]["interface-name"].value == "eth0"
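The changed assertions show the generator now matches a device by its persistent platform path instead of a MAC-plus-name string, and falls back to interface-name only when no path is known. A hedged sketch of that selection logic (build_device_match is an invented helper name; only the payload shapes come from the tests above):

from dbus_fast import Variant


def build_device_match(interface) -> dict[str, dict[str, Variant]]:
    """Prefer a stable device path over MAC/interface-name matching (sketch)."""
    if interface.path:
        # "as" is the D-Bus array-of-strings signature, e.g. ["platform-ff3f0000.ethernet"]
        return {"match": {"path": Variant("as", [interface.path])}}
    # No device path known: pin the profile to the interface name instead
    return {"connection": {"interface-name": Variant("s", interface.name)}}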


@@ -58,9 +58,7 @@ async def test_update(
     )
     assert settings["connection"]["autoconnect"] == Variant("b", True)
-    assert settings["device"] == {
-        "match-device": Variant("s", "mac:AA:BB:CC:DD:EE:FF,interface-name:eth0")
-    }
+    assert settings["match"] == {"path": Variant("as", ["platform-ff3f0000.ethernet"])}
     assert "ipv4" in settings
     assert settings["ipv4"]["method"] == Variant("s", "auto")


@@ -77,9 +77,7 @@ SETINGS_FIXTURES: dict[str, dict[str, dict[str, Variant]]] = {
         "proxy": {},
         "802-3-ethernet": SETTINGS_FIXTURE["802-3-ethernet"],
         "802-11-wireless": SETTINGS_FIXTURE["802-11-wireless"],
-        "device": {
-            "match-device": Variant("s", "mac:AA:BB:CC:DD:EE:FF,interface-name:eth0"),
-        },
+        "match": {"path": Variant("as", ["platform-ff3f0000.ethernet"])},
     },
 }


@@ -194,7 +194,7 @@ async def test_addon_run_docker_error(
         coresys, addonsdata_system, "basic-addon-config.json"
     )
-    with patch.object(DockerAddon, "_stop"), patch.object(
+    with patch.object(DockerAddon, "stop"), patch.object(
         AddonOptions, "validate", new=PropertyMock(return_value=lambda _: None)
     ), pytest.raises(DockerNotFound):
         await docker_addon.run()
@@ -218,7 +218,7 @@ async def test_addon_run_add_host_error(
         coresys, addonsdata_system, "basic-addon-config.json"
     )
-    with patch.object(DockerAddon, "_stop"), patch.object(
+    with patch.object(DockerAddon, "stop"), patch.object(
         AddonOptions, "validate", new=PropertyMock(return_value=lambda _: None)
     ), patch.object(PluginDns, "add_host", side_effect=(err := CoreDNSError())):
         await docker_addon.run()


@@ -92,7 +92,7 @@ async def test_install_docker_error(
 ):
     """Test install fails due to docker error."""
     coresys.security.force = True
-    with patch.object(HomeAssistantCore, "_start"), patch.object(
+    with patch.object(HomeAssistantCore, "start"), patch.object(
         DockerHomeAssistant, "cleanup"
     ), patch.object(
         Updater, "image_homeassistant", new=PropertyMock(return_value="homeassistant")
@@ -119,7 +119,7 @@ async def test_install_other_error(
     """Test install fails due to other error."""
     coresys.docker.images.pull.side_effect = [(err := OSError()), MagicMock()]
-    with patch.object(HomeAssistantCore, "_start"), patch.object(
+    with patch.object(HomeAssistantCore, "start"), patch.object(
         DockerHomeAssistant, "cleanup"
     ), patch.object(
         Updater, "image_homeassistant", new=PropertyMock(return_value="homeassistant")


@@ -13,7 +13,6 @@ from supervisor.exceptions import HostNotSupportedError
 from supervisor.homeassistant.const import WSEvent, WSType
 from supervisor.host.const import WifiMode
-from tests.common import mock_dbus_services
 from tests.dbus_service_mocks.base import DBusServiceMock
 from tests.dbus_service_mocks.network_active_connection import (
     ActiveConnection as ActiveConnectionService,
@@ -253,47 +252,3 @@ async def test_host_connectivity_disabled(
         }
     )
     assert "connectivity_check" not in coresys.resolution.unsupported
-
-
-@pytest.mark.parametrize(
-    "interface_obj_path",
-    [
-        "/org/freedesktop/NetworkManager/Devices/4",
-        "/org/freedesktop/NetworkManager/Devices/5",
-    ],
-)
-async def test_load_with_mac_or_name_change(
-    coresys: CoreSys,
-    network_manager_service: NetworkManagerService,
-    interface_obj_path: str,
-):
-    """Test load fixes match-device settings if mac address or interface name has changed."""
-    await mock_dbus_services(
-        {
-            "network_active_connection": "/org/freedesktop/NetworkManager/ActiveConnection/2",
-            "network_connection_settings": "/org/freedesktop/NetworkManager/Settings/2",
-            "network_device": interface_obj_path,
-        },
-        coresys.dbus.bus,
-    )
-    await coresys.dbus.network.update({"Devices": [interface_obj_path]})
-    network_manager_service.ActivateConnection.calls.clear()
-
-    assert len(coresys.dbus.network.interfaces) == 1
-    interface = next(iter(coresys.dbus.network.interfaces))
-    assert interface.object_path == interface_obj_path
-
-    expected_match_device = (
-        f"mac:{interface.hw_address},interface-name:{interface.name}"
-    )
-    assert interface.settings.device.match_device != expected_match_device
-
-    await coresys.host.network.load()
-    assert network_manager_service.ActivateConnection.calls == [
-        (
-            "/org/freedesktop/NetworkManager/Settings/2",
-            interface_obj_path,
-            "/",
-        )
-    ]
-    assert interface.settings.device.match_device == expected_match_device


@@ -3,6 +3,7 @@
 import asyncio
 from datetime import timedelta
 from unittest.mock import AsyncMock, Mock, PropertyMock, patch
+from uuid import uuid4
 from aiohttp.client_exceptions import ClientError
 import pytest
@@ -18,8 +19,10 @@ from supervisor.exceptions import (
 )
 from supervisor.host.const import HostFeature
 from supervisor.host.manager import HostManager
+from supervisor.jobs import SupervisorJob
 from supervisor.jobs.const import JobExecutionLimit
 from supervisor.jobs.decorator import Job, JobCondition
+from supervisor.jobs.job_group import JobGroup
 from supervisor.plugins.audio import PluginAudio
 from supervisor.resolution.const import UnhealthyReason
 from supervisor.utils.dt import utcnow
@@ -35,7 +38,7 @@ async def test_healthy(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.HEALTHY])
+        @Job(name="test_healthy_execute", conditions=[JobCondition.HEALTHY])
         async def execute(self):
             """Execute the class method."""
             return True
@@ -77,12 +80,18 @@ async def test_internet(
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.INTERNET_HOST])
+        @Job(
+            name=f"test_internet_execute_host_{uuid4().hex}",
+            conditions=[JobCondition.INTERNET_HOST],
+        )
         async def execute_host(self):
             """Execute the class method."""
             return True
-        @Job(conditions=[JobCondition.INTERNET_SYSTEM])
+        @Job(
+            name=f"test_internet_execute_system_{uuid4().hex}",
+            conditions=[JobCondition.INTERNET_SYSTEM],
+        )
         async def execute_system(self):
             """Execute the class method."""
             return True
@@ -118,7 +127,7 @@ async def test_free_space(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.FREE_SPACE])
+        @Job(name="test_free_space_execute", conditions=[JobCondition.FREE_SPACE])
         async def execute(self):
             """Execute the class method."""
             return True
@@ -144,7 +153,7 @@ async def test_haos(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.HAOS])
+        @Job(name="test_haos_execute", conditions=[JobCondition.HAOS])
         async def execute(self):
             """Execute the class method."""
             return True
@@ -170,7 +179,7 @@ async def test_exception(coresys: CoreSys, capture_exception: Mock):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.HEALTHY])
+        @Job(name="test_exception_execute", conditions=[JobCondition.HEALTHY])
         async def execute(self):
             """Execute the class method."""
             raise HassioError()
@@ -194,7 +203,9 @@ async def test_exception_not_handle(coresys: CoreSys, capture_exception: Mock):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.HEALTHY])
+        @Job(
+            name="test_exception_not_handle_execute", conditions=[JobCondition.HEALTHY]
+        )
         async def execute(self):
             """Execute the class method."""
             raise err
@@ -217,7 +228,7 @@ async def test_running(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.RUNNING])
+        @Job(name="test_running_execute", conditions=[JobCondition.RUNNING])
         async def execute(self):
             """Execute the class method."""
             return True
@@ -244,7 +255,11 @@ async def test_exception_conditions(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.RUNNING], on_condition=HassioError)
+        @Job(
+            name="test_exception_conditions_execute",
+            conditions=[JobCondition.RUNNING],
+            on_condition=HassioError,
+        )
         async def execute(self):
             """Execute the class method."""
             return True
@@ -272,7 +287,10 @@ async def test_execution_limit_single_wait(
             self.coresys = coresys
             self.run = asyncio.Lock()
-        @Job(limit=JobExecutionLimit.SINGLE_WAIT)
+        @Job(
+            name="test_execution_limit_single_wait_execute",
+            limit=JobExecutionLimit.SINGLE_WAIT,
+        )
         async def execute(self, sleep: float):
             """Execute the class method."""
             assert not self.run.locked()
@@ -298,7 +316,11 @@ async def test_execution_limit_throttle_wait(
             self.run = asyncio.Lock()
             self.call = 0
-        @Job(limit=JobExecutionLimit.THROTTLE_WAIT, throttle_period=timedelta(hours=1))
+        @Job(
+            name="test_execution_limit_throttle_wait_execute",
+            limit=JobExecutionLimit.THROTTLE_WAIT,
+            throttle_period=timedelta(hours=1),
+        )
         async def execute(self, sleep: float):
             """Execute the class method."""
             assert not self.run.locked()
@@ -331,6 +353,7 @@ async def test_execution_limit_throttle_rate_limit(
             self.call = 0
         @Job(
+            name=f"test_execution_limit_throttle_rate_limit_execute_{uuid4().hex}",
             limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
             throttle_period=timedelta(hours=1),
             throttle_max_calls=2,
@@ -368,7 +391,11 @@ async def test_execution_limit_throttle(coresys: CoreSys, loop: asyncio.BaseEven
             self.run = asyncio.Lock()
             self.call = 0
-        @Job(limit=JobExecutionLimit.THROTTLE, throttle_period=timedelta(hours=1))
+        @Job(
+            name="test_execution_limit_throttle_execute",
+            limit=JobExecutionLimit.THROTTLE,
+            throttle_period=timedelta(hours=1),
+        )
         async def execute(self, sleep: float):
             """Execute the class method."""
             assert not self.run.locked()
@@ -396,7 +423,11 @@ async def test_execution_limit_once(coresys: CoreSys, loop: asyncio.BaseEventLoo
             self.coresys = coresys
             self.run = asyncio.Lock()
-        @Job(limit=JobExecutionLimit.ONCE, on_condition=JobException)
+        @Job(
+            name="test_execution_limit_once_execute",
+            limit=JobExecutionLimit.ONCE,
+            on_condition=JobException,
+        )
         async def execute(self, sleep: float):
             """Execute the class method."""
             assert not self.run.locked()
@@ -423,7 +454,10 @@ async def test_supervisor_updated(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.SUPERVISOR_UPDATED])
+        @Job(
+            name="test_supervisor_updated_execute",
+            conditions=[JobCondition.SUPERVISOR_UPDATED],
+        )
         async def execute(self) -> bool:
             """Execute the class method."""
             return True
@@ -451,7 +485,10 @@ async def test_plugins_updated(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.PLUGINS_UPDATED])
+        @Job(
+            name="test_plugins_updated_execute",
+            conditions=[JobCondition.PLUGINS_UPDATED],
+        )
         async def execute(self) -> bool:
             """Execute the class method."""
             return True
@@ -484,7 +521,7 @@ async def test_auto_update(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.AUTO_UPDATE])
+        @Job(name="test_auto_update_execute", conditions=[JobCondition.AUTO_UPDATE])
         async def execute(self) -> bool:
             """Execute the class method."""
             return True
@@ -510,7 +547,7 @@ async def test_os_agent(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.OS_AGENT])
+        @Job(name="test_os_agent_execute", conditions=[JobCondition.OS_AGENT])
         async def execute(self) -> bool:
             """Execute the class method."""
             return True
@@ -539,7 +576,7 @@ async def test_host_network(coresys: CoreSys):
             """Initialize the test class."""
             self.coresys = coresys
-        @Job(conditions=[JobCondition.HOST_NETWORK])
+        @Job(name="test_host_network_execute", conditions=[JobCondition.HOST_NETWORK])
         async def execute(self) -> bool:
             """Execute the class method."""
             return True
@@ -552,3 +589,354 @@
     coresys.jobs.ignore_conditions = [JobCondition.HOST_NETWORK]
     assert await test.execute()
+
+
+async def test_job_group_once(coresys: CoreSys, loop: asyncio.BaseEventLoop):
+    """Test job group once execution limitation."""
+
+    class TestClass(JobGroup):
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            super().__init__(coresys, "TestClass")
+            self.event = asyncio.Event()
+
+        @Job(
+            name="test_job_group_once_inner_execute",
+            limit=JobExecutionLimit.GROUP_ONCE,
+            on_condition=JobException,
+        )
+        async def inner_execute(self) -> bool:
+            """Inner class method called by execute, group level lock allows this."""
+            await self.event.wait()
+            return True
+
+        @Job(
+            name="test_job_group_once_execute",
+            limit=JobExecutionLimit.GROUP_ONCE,
+            on_condition=JobException,
+        )
+        async def execute(self) -> bool:
+            """Execute the class method."""
+            return await self.inner_execute()
+
+        @Job(
+            name="test_job_group_once_separate_execute",
+            limit=JobExecutionLimit.GROUP_ONCE,
+            on_condition=JobException,
+        )
+        async def separate_execute(self) -> bool:
+            """Alternate execute method that shares group lock."""
+            return True
+
+        @Job(
+            name="test_job_group_once_unrelated",
+            limit=JobExecutionLimit.ONCE,
+            on_condition=JobException,
+        )
+        async def unrelated_method(self) -> bool:
+            """Unrelated method, separate job with separate lock."""
+            return True
+
+    test = TestClass(coresys)
+    run_task = loop.create_task(test.execute())
+    await asyncio.sleep(0)
+
+    # All methods with group limits should be locked
+    with pytest.raises(JobException):
+        await test.execute()
+
+    with pytest.raises(JobException):
+        await test.inner_execute()
+
+    with pytest.raises(JobException):
+        await test.separate_execute()
+
+    # The once method is still callable
+    assert await test.unrelated_method()
+
+    test.event.set()
+    assert await run_task
+
+
+async def test_job_group_wait(coresys: CoreSys, loop: asyncio.BaseEventLoop):
+    """Test job group wait execution limitation."""
+
+    class TestClass(JobGroup):
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            super().__init__(coresys, "TestClass")
+            self.execute_count = 0
+            self.other_count = 0
+            self.event = asyncio.Event()
+
+        @Job(
+            name="test_job_group_wait_inner_execute",
+            limit=JobExecutionLimit.GROUP_WAIT,
+            on_condition=JobException,
+        )
+        async def inner_execute(self) -> None:
+            """Inner class method called by execute, group level lock allows this."""
+            self.execute_count += 1
+            await self.event.wait()
+
+        @Job(
+            name="test_job_group_wait_execute",
+            limit=JobExecutionLimit.GROUP_WAIT,
+            on_condition=JobException,
+        )
+        async def execute(self) -> None:
+            """Execute the class method."""
+            await self.inner_execute()
+
+        @Job(
+            name="test_job_group_wait_separate_execute",
+            limit=JobExecutionLimit.GROUP_WAIT,
+            on_condition=JobException,
+        )
+        async def separate_execute(self) -> None:
+            """Alternate execute method that shares group lock."""
+            self.other_count += 1
+
+    test = TestClass(coresys)
+    run_task = loop.create_task(test.execute())
+    await asyncio.sleep(0)
+    repeat_task = loop.create_task(test.execute())
+    other_task = loop.create_task(test.separate_execute())
+    await asyncio.sleep(0)
+    assert test.execute_count == 1
+    assert test.other_count == 0
+
+    test.event.set()
+    await run_task
+    await repeat_task
+    await other_task
+    assert test.execute_count == 2
+    assert test.other_count == 1
+
+
+async def test_job_cleanup(coresys: CoreSys, loop: asyncio.BaseEventLoop):
+    """Test job is cleaned up."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.event = asyncio.Event()
+            self.job: SupervisorJob | None = None
+
+        @Job(name="test_job_cleanup_execute", limit=JobExecutionLimit.ONCE)
+        async def execute(self):
+            """Execute the class method."""
+            self.job = coresys.jobs.get_job()
+            await self.event.wait()
+            return True
+
+    test = TestClass(coresys)
+    run_task = loop.create_task(test.execute())
+    await asyncio.sleep(0)
+
+    assert coresys.jobs.jobs == [test.job]
+    assert not test.job.done
+
+    test.event.set()
+    assert await run_task
+    assert coresys.jobs.jobs == []
+    assert test.job.done
+
+
+async def test_job_skip_cleanup(coresys: CoreSys, loop: asyncio.BaseEventLoop):
+    """Test job is left in job manager when cleanup is false."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.event = asyncio.Event()
+            self.job: SupervisorJob | None = None
+
+        @Job(
+            name="test_job_skip_cleanup_execute",
+            limit=JobExecutionLimit.ONCE,
+            cleanup=False,
+        )
+        async def execute(self):
+            """Execute the class method."""
+            self.job = coresys.jobs.get_job()
+            await self.event.wait()
+            return True
+
+    test = TestClass(coresys)
+    run_task = loop.create_task(test.execute())
+    await asyncio.sleep(0)
+
+    assert coresys.jobs.jobs == [test.job]
+    assert not test.job.done
+
+    test.event.set()
+    assert await run_task
+    assert coresys.jobs.jobs == [test.job]
+    assert test.job.done
+
+
+async def test_execution_limit_group_throttle(
+    coresys: CoreSys, loop: asyncio.BaseEventLoop
+):
+    """Test the group throttle execution limit."""
+
+    class TestClass(JobGroup):
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys, reference: str):
+            """Initialize the test class."""
+            super().__init__(coresys, f"test_class_{reference}", reference)
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name="test_execution_limit_group_throttle_execute",
+            limit=JobExecutionLimit.GROUP_THROTTLE,
+            throttle_period=timedelta(milliseconds=95),
+        )
+        async def execute(self, sleep: float):
+            """Execute the class method."""
+            assert not self.run.locked()
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test1 = TestClass(coresys, "test1")
+    test2 = TestClass(coresys, "test2")
+
+    # One call of each should work. The subsequent calls will be silently throttled due to period
+    await asyncio.gather(
+        test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)
+    )
+    assert test1.call == 1
+    assert test2.call == 1
+
+    # First call to each will work again since period cleared. Second throttled once more as they don't wait
+    with time_machine.travel(utcnow() + timedelta(milliseconds=100)):
+        await asyncio.gather(
+            test1.execute(0.1),
+            test1.execute(0.1),
+            test2.execute(0.1),
+            test2.execute(0.1),
+        )
+    assert test1.call == 2
+    assert test2.call == 2
+
+
+async def test_execution_limit_group_throttle_wait(
+    coresys: CoreSys, loop: asyncio.BaseEventLoop
+):
+    """Test the group throttle wait job execution limit."""
+
+    class TestClass(JobGroup):
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys, reference: str):
+            """Initialize the test class."""
+            super().__init__(coresys, f"test_class_{reference}", reference)
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name="test_execution_limit_group_throttle_wait_execute",
+            limit=JobExecutionLimit.GROUP_THROTTLE_WAIT,
+            throttle_period=timedelta(milliseconds=95),
+        )
+        async def execute(self, sleep: float):
+            """Execute the class method."""
+            assert not self.run.locked()
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test1 = TestClass(coresys, "test1")
+    test2 = TestClass(coresys, "test2")
+
+    # One call of each should work. The subsequent calls will be silently throttled after waiting due to period
+    await asyncio.gather(
+        *[test1.execute(0), test1.execute(0), test2.execute(0), test2.execute(0)]
+    )
+    assert test1.call == 1
+    assert test2.call == 1
+
+    # All calls should work as we cleared the period. And tasks take longer than the period and are queued
+    with time_machine.travel(utcnow() + timedelta(milliseconds=100)):
+        await asyncio.gather(
+            *[
+                test1.execute(0.1),
+                test1.execute(0.1),
+                test2.execute(0.1),
+                test2.execute(0.1),
+            ]
+        )
+    assert test1.call == 3
+    assert test2.call == 3
@pytest.mark.parametrize("error", [None, PluginJobError])
async def test_execution_limit_group_throttle_rate_limit(
coresys: CoreSys, loop: asyncio.BaseEventLoop, error: JobException | None
):
"""Test the group throttle rate limit job execution limit."""
class TestClass(JobGroup):
"""Test class."""
def __init__(self, coresys: CoreSys, reference: str):
"""Initialize the test class."""
super().__init__(coresys, f"test_class_{reference}", reference)
self.run = asyncio.Lock()
self.call = 0
@Job(
name=f"test_execution_limit_group_throttle_rate_limit_execute_{uuid4().hex}",
limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
throttle_period=timedelta(hours=1),
throttle_max_calls=2,
on_condition=error,
)
async def execute(self):
"""Execute the class method."""
self.call += 1
test1 = TestClass(coresys, "test1")
test2 = TestClass(coresys, "test2")
await asyncio.gather(
*[test1.execute(), test1.execute(), test2.execute(), test2.execute()]
)
assert test1.call == 2
assert test2.call == 2
with pytest.raises(JobException if error is None else error):
await test1.execute()
with pytest.raises(JobException if error is None else error):
await test2.execute()
assert test1.call == 2
assert test2.call == 2
with time_machine.travel(utcnow() + timedelta(hours=1)):
await test1.execute()
await test2.execute()
assert test1.call == 3
assert test2.call == 3
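All of the GROUP_* limits exercised above hinge on one lock owned by the JobGroup rather than by each decorated method, which is why inner_execute can be called re-entrantly from execute while outside callers are rejected or queued. A hedged sketch of declaring such a group on a real manager class (BackupManager and all of its names are invented for illustration):

class BackupManager(JobGroup):
    """Example job group: every GROUP_* limited method shares one lock."""

    def __init__(self, coresys: CoreSys):
        super().__init__(coresys, "backup_manager")

    @Job(
        name="backup_manager_do_backup",
        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=JobException,
    )
    async def do_backup(self) -> None:
        # Holding the group lock, so calling upload() re-entrantly is fine;
        # a concurrent external caller gets JobException instead.
        await self.upload()

    @Job(
        name="backup_manager_upload",
        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=JobException,
    )
    async def upload(self) -> None:
        """Shares the group lock with do_backup."""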


@@ -1,39 +1,76 @@
 """Test the condition decorators."""
+import pytest
+
 # pylint: disable=protected-access,import-error
 from supervisor.coresys import CoreSys
+from supervisor.exceptions import JobStartException
 
 TEST_JOB = "test"
 
 
 async def test_add_job(coresys: CoreSys):
     """Test adding jobs."""
-    job = coresys.jobs.get_job(TEST_JOB)
+    job = coresys.jobs.new_job(TEST_JOB)
 
-    assert job.name in coresys.jobs.jobs
+    assert job in coresys.jobs.jobs
 
 
-async def test_remove_job_directly(coresys: CoreSys):
+async def test_remove_job_directly(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
     """Test removing jobs from manager."""
-    job = coresys.jobs.get_job(TEST_JOB)
-
-    assert job.name in coresys.jobs.jobs
+    job = coresys.jobs.new_job(TEST_JOB)
+    assert job in coresys.jobs.jobs
+
     coresys.jobs.remove_job(job)
-    assert job.name not in coresys.jobs.jobs
+    assert job not in coresys.jobs.jobs
+    assert f"Removing incomplete job {job.name}" in caplog.text
 
 
-async def test_remove_job_with_progress(coresys: CoreSys):
-    """Test removing jobs by setting progress to 100."""
-    job = coresys.jobs.get_job(TEST_JOB)
-
-    assert job.name in coresys.jobs.jobs
-    job.update(progress=100)
-    assert job.name not in coresys.jobs.jobs
+async def test_job_done(coresys: CoreSys):
+    """Test done set correctly with jobs."""
+    job = coresys.jobs.new_job(TEST_JOB)
+    assert not job.done
+    assert coresys.jobs.get_job() != job
+
+    with job.start():
+        assert coresys.jobs.get_job() == job
+        assert not job.done
+
+    assert coresys.jobs.get_job() != job
+    assert job.done
+
+    with pytest.raises(JobStartException):
+        with job.start():
+            pass
+
+
+async def test_job_start_bad_parent(coresys: CoreSys):
+    """Test job cannot be started outside of parent."""
+    job = coresys.jobs.new_job(TEST_JOB)
+    job2 = coresys.jobs.new_job(f"{TEST_JOB}_2")
+
+    with job.start():
+        with pytest.raises(JobStartException):
+            with job2.start():
+                pass
+
+    with job2.start():
+        assert coresys.jobs.get_job() == job2
 
 
 async def test_update_job(coresys: CoreSys):
     """Test updating jobs."""
-    job = coresys.jobs.get_job(TEST_JOB)
+    job = coresys.jobs.new_job(TEST_JOB)
 
-    job.update(progress=50, stage="stage")
+    job.progress = 50
     assert job.progress == 50
+
+    job.stage = "stage"
     assert job.stage == "stage"
+
+    with pytest.raises(ValueError):
+        job.progress = 110
+
+    with pytest.raises(ValueError):
+        job.progress = -10
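Read together, these tests outline the reworked manual-job API: new_job creates a job, start() makes it the current job for the enclosing context exactly once, and progress is validated on assignment. A short usage sketch built only from the calls shown above (the job name is illustrative):

job = coresys.jobs.new_job("example_manual_job")
with job.start():            # job becomes coresys.jobs.get_job() in here
    job.progress = 50        # setter rejects values outside 0-100 with ValueError
    job.stage = "copy_files"
# job.done is now True; calling job.start() again raises JobStartException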


@@ -6,8 +6,16 @@ import pytest
 from supervisor.const import LogLevel
 from supervisor.coresys import CoreSys
+from supervisor.docker.audio import DockerAudio
-from tests.plugins.test_dns import fixture_docker_interface  # noqa: F401
+
+
+@pytest.fixture(name="docker_interface")
+async def fixture_docker_interface() -> tuple[AsyncMock, AsyncMock]:
+    """Mock docker interface methods."""
+    with patch.object(DockerAudio, "run") as run, patch.object(
+        DockerAudio, "restart"
+    ) as restart:
+        yield (run, restart)
+
 
 @pytest.fixture(name="write_json")


@@ -9,7 +9,7 @@ import pytest
 from supervisor.const import BusEvent, LogLevel
 from supervisor.coresys import CoreSys
 from supervisor.docker.const import ContainerState
-from supervisor.docker.interface import DockerInterface
+from supervisor.docker.dns import DockerDNS
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.plugins.dns import HostEntry
 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
@@ -19,8 +19,8 @@ from supervisor.resolution.data import Issue, Suggestion
 @pytest.fixture(name="docker_interface")
 async def fixture_docker_interface() -> tuple[AsyncMock, AsyncMock]:
     """Mock docker interface methods."""
-    with patch.object(DockerInterface, "run") as run, patch.object(
-        DockerInterface, "restart"
-    ) as restart:
+    with patch.object(DockerDNS, "run") as run, patch.object(
+        DockerDNS, "restart"
+    ) as restart:
         yield (run, restart)
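Both this fixture and the audio variant above now patch the concrete container class (DockerDNS, DockerAudio) instead of the shared DockerInterface base, so stubbing one plugin's container no longer affects every other container type. A hedged illustration of the difference (the plugin start() call is assumed from how these fixtures are used, not shown on this page):

# Patching the base class stubs run() for every container type:
with patch.object(DockerInterface, "run"):
    ...

# Patching the concrete class stubs only the DNS plugin's container:
with patch.object(DockerDNS, "run") as run:
    await coresys.plugins.dns.start()
    run.assert_called_once()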


@@ -5,7 +5,7 @@ from unittest.mock import patch
 from supervisor.coresys import CoreSys
-def test_read_addon_files(coresys: CoreSys):
+async def test_read_addon_files(coresys: CoreSys):
     """Test that we are reading add-on files correctly."""
     with patch(
         "pathlib.Path.glob",
@@ -19,7 +19,7 @@ def test_read_addon_files(coresys: CoreSys):
             Path(".circleci/config.yml"),
         ],
     ):
-        addon_list = coresys.store.data._find_addons(Path("test"), {})
+        addon_list = await coresys.store.data._find_addons(Path("test"), {})
     assert len(addon_list) == 1
     assert str(addon_list[0]) == "addon/config.yml"


@@ -1,16 +1,20 @@
 """Test loading add-translation."""
 # pylint: disable=import-error,protected-access
 import os
+from pathlib import Path
+
+import pytest
 from supervisor.coresys import CoreSys
+from supervisor.store.data import _read_addon_translations
 from supervisor.utils.common import write_json_or_yaml_file
-def test_loading_traslations(coresys: CoreSys, tmp_path):
+def test_loading_traslations(coresys: CoreSys, tmp_path: Path):
     """Test loading add-translation."""
     os.makedirs(tmp_path / "translations")
     # no translations
-    assert coresys.store.data._read_addon_translations(tmp_path) == {}
+    assert _read_addon_translations(tmp_path) == {}
     for file in ("en.json", "es.json"):
         write_json_or_yaml_file(
@@ -27,7 +31,7 @@ def test_loading_traslations(coresys: CoreSys, tmp_path):
         },
     )
-    translations = coresys.store.data._read_addon_translations(tmp_path)
+    translations = _read_addon_translations(tmp_path)
     assert translations["en"]["configuration"]["test"]["name"] == "test"
     assert translations["es"]["configuration"]["test"]["name"] == "test"
@@ -37,3 +41,22 @@ def test_loading_traslations(coresys: CoreSys, tmp_path):
     assert "test" not in translations["en"]["configuration"]["test"]
     assert translations["no"]["network"]["80/tcp"] == "Webserver port"
+
+
+def test_translation_file_failure(
+    coresys: CoreSys, tmp_path: Path, caplog: pytest.LogCaptureFixture
+):
+    """Test translations load if one fails."""
+    os.makedirs(tmp_path / "translations")
+    write_json_or_yaml_file(
+        tmp_path / "translations" / "en.json",
+        {"configuration": {"test": {"name": "test", "test": "test"}}},
+    )
+    fail_path = tmp_path / "translations" / "de.json"
+    with fail_path.open("w") as de_file:
+        de_file.write("not json")
+
+    translations = _read_addon_translations(tmp_path)
+    assert translations["en"]["configuration"]["test"]["name"] == "test"
+    assert f"Can't read translations from {fail_path.as_posix()}" in caplog.text


@@ -103,6 +103,26 @@ async def test_raspberrypi4_64_arch(coresys, sys_machine, sys_supervisor):
     assert coresys.arch.supported == ["aarch64", "armv7", "armhf"]
+async def test_yellow_arch(coresys, sys_machine, sys_supervisor):
+    """Test arch for yellow."""
+    sys_machine.return_value = "yellow"
+    sys_supervisor.arch = "aarch64"
+    await coresys.arch.load()
+
+    assert coresys.arch.default == "aarch64"
+    assert coresys.arch.supported == ["aarch64", "armv7", "armhf"]
+
+
+async def test_green_arch(coresys, sys_machine, sys_supervisor):
+    """Test arch for green."""
+    sys_machine.return_value = "green"
+    sys_supervisor.arch = "aarch64"
+    await coresys.arch.load()
+
+    assert coresys.arch.default == "aarch64"
+    assert coresys.arch.supported == ["aarch64", "armv7", "armhf"]
+
+
 async def test_tinker_arch(coresys, sys_machine, sys_supervisor):
     """Test arch for tinker."""
     sys_machine.return_value = "tinker"


@@ -1,6 +1,7 @@
 """Test ingress."""
 from datetime import timedelta
+from supervisor.const import ATTR_SESSION_DATA_USER_ID
 from supervisor.utils.dt import utc_from_timestamp
@@ -20,6 +21,21 @@ def test_session_handling(coresys):
     assert not coresys.ingress.validate_session(session)
     assert not coresys.ingress.validate_session("invalid session")
+    session_data = coresys.ingress.get_session_data(session)
+    assert session_data is None
+
+
+def test_session_handling_with_session_data(coresys):
+    """Create and test session."""
+    session = coresys.ingress.create_session(
+        dict([(ATTR_SESSION_DATA_USER_ID, "some-id")])
+    )
+    assert session
+
+    session_data = coresys.ingress.get_session_data(session)
+    assert session_data[ATTR_SESSION_DATA_USER_ID] == "some-id"
 async def test_save_on_unload(coresys):
     """Test called save on unload."""


@@ -16,6 +16,24 @@ DNS_GOOD_V6 = [
     "DNS://2606:4700:4700::1001",  # cloudflare
 ]
 DNS_BAD = ["hello world", "https://foo.bar", "", "dns://example.com"]
+
+IMAGE_NAME_GOOD = [
+    "ghcr.io/home-assistant/{machine}-homeassistant",
+    "ghcr.io/home-assistant/{arch}-homeassistant",
+    "homeassistant/{arch}-homeassistant",
+    "doocker.io/homeassistant/{arch}-homeassistant",
+    "ghcr.io/home-assistant/amd64-homeassistant",
+    "homeassistant/amd64-homeassistant",
+    "ttl.sh/homeassistant",
+    "myreg.local:8080/homeassistant",
+]
+IMAGE_NAME_BAD = [
+    "ghcr.io/home-assistant/homeassistant:123",
+    ".ghcr.io/home-assistant/homeassistant",
+    "HOMEASSISTANT/homeassistant",
+    "homeassistant/HOMEASSISTANT",
+    "homeassistant/_homeassistant",
+    "homeassistant/-homeassistant",
+]
 async def test_dns_url_v4_good():
@@ -72,6 +90,19 @@ def test_dns_server_list_bad_combined():
     assert validate.dns_server_list(combined)
+
+
+def test_image_name_good():
+    """Test container image names validator with known-good image names."""
+    for image_name in IMAGE_NAME_GOOD:
+        assert validate.docker_image(image_name)
+
+
+def test_image_name_bad():
+    """Test container image names validator with known-bad image names."""
+    for image_name in IMAGE_NAME_BAD:
+        with pytest.raises(vol.error.Invalid):
+            assert validate.docker_image(image_name)
 def test_version_complex():
     """Test version simple with good version."""
     for version in (