Compare commits


19 Commits

Author SHA1 Message Date
Stefan Agner
3e307c5c8b Fix pylint 2025-10-08 14:56:56 +02:00
Stefan Agner
1cd499b4a5 Fix WebSocket transport None race condition in proxy
Add a transport validity check before WebSocket upgrade to prevent
AssertionError when clients disconnect during handshake.

The issue occurs when a client connection is lost between the API state
check and server.prepare() call, causing request.transport to become None
and triggering "assert transport is not None" in aiohttp's _pre_start().

The fix detects the closed connection early and raises HTTPBadRequest
with a clear reason instead of crashing with an AssertionError.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-08 14:48:08 +02:00
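The guard described in this commit can be sketched without aiohttp, using a stub request object; `ConnectionClosedError` stands in for aiohttp's `web.HTTPBadRequest(reason="Connection closed")`, and the names here are illustrative, not the exact Supervisor code:

```python
class ConnectionClosedError(Exception):
    """Stand-in for web.HTTPBadRequest(reason="Connection closed")."""


class FakeRequest:
    def __init__(self, transport):
        # aiohttp sets request.transport to None once the socket is gone
        self.transport = transport


def guard_transport(request) -> None:
    """Reject the WebSocket upgrade early if the client already disconnected."""
    if request.transport is None:
        raise ConnectionClosedError("Connection closed")


guard_transport(FakeRequest(transport=object()))  # healthy connection: no error

rejected = False
try:
    guard_transport(FakeRequest(transport=None))  # client disconnected mid-handshake
except ConnectionClosedError:
    rejected = True
print(rejected)  # True
```

Without the check, the `None` transport only surfaces later as an `AssertionError` deep inside aiohttp's `_pre_start()`; checking up front turns it into a clean 400 response.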
Stefan Agner
53a8044aff Add support for ulimit in addon config (#6206)
* Add support for ulimit in addon config

Similar to docker-compose, this adds support for setting ulimits
for addons via the addon config. This is useful e.g. for InfluxDB
which on its own does not support setting higher open file descriptor
limits, but recommends increasing limits on the host.

* Make soft and hard limit mandatory if ulimit is a dict
2025-10-08 12:43:12 +02:00
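Based on the schema this commit adds (simple `name: limit` entries, or explicit maps where both `soft` and `hard` are mandatory), an add-on `config.yaml` fragment might look like the following — the specific limit names and values are illustrative:

```yaml
# Illustrative add-on config fragment; limit names/values are examples
ulimits:
  nofile: 65535        # simple format: soft and hard set to the same value
  memlock:             # detailed format: both keys are mandatory
    soft: 8192
    hard: 16384
```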
Jan Čermák
c71553f37d Add AGENTS.md symlink (#6237)
Add AGENTS.md alongside CLAUDE.md for agents that support it. While
CLAUDE.md is still required and specific to Claude Code, AGENTS.md
covers various other agents that implement this proposed standard.

Core already adopted the same approach recently.
2025-10-08 10:44:49 +02:00
dependabot[bot]
c1eb97d8ab Bump ruff from 0.13.3 to 0.14.0 (#6238)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.3 to 0.14.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.3...0.14.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-08 09:47:19 +02:00
Mike Degatano
190b734332 Add progress reporting to addon, HA and Supervisor updates (#6195)
* Add progress reporting to addon, HA and Supervisor updates

* Fix assert in test

* Add progress to addon, core, supervisor updates/installs

* Fix double install bug in addons install

* Remove initial_install and re-arrange order of load
2025-10-07 16:54:11 +02:00
dependabot[bot]
559b6982a3 Bump aiohttp from 3.12.15 to 3.13.0 (#6234)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:47 +02:00
dependabot[bot]
301362e9e5 Bump attrs from 25.3.0 to 25.4.0 (#6235)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.3.0 to 25.4.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 25.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:03 +02:00
dependabot[bot]
fc928d294c Bump sentry-sdk from 2.39.0 to 2.40.0 (#6233)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.39.0 to 2.40.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.39.0...2.40.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 09:47:03 +02:00
dependabot[bot]
f42aeb4937 Bump dbus-fast from 2.44.3 to 2.44.5 (#6232)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.3 to 2.44.5.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.3...v2.44.5)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:37:22 +02:00
dependabot[bot]
fd21886de9 Bump pylint from 3.3.8 to 3.3.9 (#6230)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 3.3.8 to 3.3.9.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v3.3.8...v3.3.9)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 3.3.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:36:03 +02:00
dependabot[bot]
e4bb415e30 Bump actions/stale from 10.0.0 to 10.1.0 (#6229)
Bumps [actions/stale](https://github.com/actions/stale) from 10.0.0 to 10.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](3a9db7e6a4...5f858e3efb)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:35:49 +02:00
dependabot[bot]
622dda5382 Bump ruff from 0.13.2 to 0.13.3 (#6228)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.2 to 0.13.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.2...0.13.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-03 14:42:13 +02:00
Stefan Agner
78a2e15ebb Replace non-UTF-8 characters in log messages (#6227) 2025-10-02 10:35:50 +02:00
Stefan Agner
f3e1e0f423 Fix CID file handling to prevent directory creation (#6225)
* Fix CID file handling to prevent directory creation

It seems that under certain conditions Docker creates a directory
instead of a file for the CID file. This change ensures that
the CID file is always created as a file, and any existing directory
is removed before creating the file.

* Fix tests

* Fix pytest
2025-10-02 09:24:19 +02:00
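The fix above can be sketched with `pathlib` alone: remove any leftover directory or file, then pre-create the CID file so Docker bind-mounts a file rather than creating a directory. This is a simplified illustration, not the Supervisor code itself:

```python
from pathlib import Path
import tempfile


def prepare_cidfile(cidfile_path: Path) -> None:
    # A leftover may be a file, or a directory if Docker auto-started the
    # container (restart policy) before the CID file was written
    if cidfile_path.is_dir():
        cidfile_path.rmdir()
    elif cidfile_path.is_file():
        cidfile_path.unlink(missing_ok=True)
    # Pre-create an empty file so the bind-mount target already exists
    cidfile_path.touch()


with tempfile.TemporaryDirectory() as tmp:
    cid = Path(tmp) / "addon.cid"
    cid.mkdir()  # simulate Docker having created a directory in its place
    prepare_cidfile(cid)
    became_file = cid.is_file()
print(became_file)  # True
```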
Stefan Agner
5779b567f1 Optimize API connection handling by removing redundant port checks (#6212)
* Simplify ensure_access_token

Make the caller of ensure_access_token responsible for connection error
handling. This is especially useful for API connection checks, as it
avoids an extra call to the API (if we fail to connect when refreshing
the token there is no point in calling the API to check if it is up).
Document the change in the docstring.

Also avoid the overhead of creating a Job object. We can simply use an
asyncio.Lock() to ensure only one coroutine is refreshing the token at
a time. This also avoids Job interference in Exception handling.

* Remove check_port from API checks

Remove check_port usage from Home Assistant API connection checks.
Simply rely on errors raised from actual connection attempts. During a
Supervisor startup when Home Assistant Core is running (e.g. after a
Supervisor update) we make about 10 successful API checks. The old code
path did a port check and then a connection check, causing two socket
creations. The new code without the separate port check saves 10 socket
creations per startup (the aiohttp connections are reused, so the
remaining checks cause only one socket creation).

* Log API exceptions on call site

Since make_request is no longer logging API exceptions on its own, we
need to log them where we call make_request. This approach gives the
user more context about what Supervisor was trying to do when the error
happened.

* Avoid unnecessary nesting

* Improve error when ingress panel update fails

* Add comment about fast path
2025-10-02 08:54:50 +02:00
dependabot[bot]
3c5f4920a0 Bump cryptography from 46.0.1 to 46.0.2 (#6224)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.1 to 46.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.1...46.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 46.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-01 09:41:38 +02:00
Mike Degatano
64f94a159c Add progress syncing from child jobs (#6207)
* Add progress syncing from child jobs

* Fix pylint issue

* Set initial progress from parent and end at 100
2025-09-30 14:52:16 -04:00
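The child-job syncing introduced here weights each layer's progress by its share of the total bytes (as in the `docker_interface_install` diff below: `progress += job.progress * (job.extra["total"] / total)`). A minimal standalone sketch, with illustrative field names:

```python
from dataclasses import dataclass


@dataclass
class LayerJob:
    progress: float  # 0..100 for this layer
    total: int       # bytes to download/extract for this layer


def parent_progress(layers: list[LayerJob]) -> float:
    """Aggregate layer progress, weighted by each layer's share of total bytes."""
    total = sum(layer.total for layer in layers)
    return sum(layer.progress * (layer.total / total) for layer in layers)


# A finished 750-byte layer and an unstarted 250-byte layer: 75% overall
layers = [LayerJob(progress=100.0, total=750), LayerJob(progress=0.0, total=250)]
print(parent_progress(layers))  # 75.0
```

This keeps a large finished layer from being drowned out by many tiny pending ones, since weighting is by size rather than layer count.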
dependabot[bot]
ab3b147876 Bump docker/login-action from 3.5.0 to 3.6.0 (#6219)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.5.0 to 3.6.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](184bdaa072...5e57cd1181)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-30 09:56:29 +02:00
38 changed files with 1047 additions and 480 deletions

View File

@@ -150,7 +150,7 @@ jobs:
       - name: Login to GitHub Container Registry
         if: needs.init.outputs.publish == 'true'
-        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}

View File

@@ -9,7 +9,7 @@ jobs:
   stale:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
+      - uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 30

AGENTS.md Symbolic link
View File

@@ -0,0 +1 @@
+.github/copilot-instructions.md

View File

@@ -1,14 +1,14 @@
 aiodns==3.5.0
-aiohttp==3.12.15
+aiohttp==3.13.0
 atomicwrites-homeassistant==1.4.1
-attrs==25.3.0
+attrs==25.4.0
 awesomeversion==25.8.0
 blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.3
 colorlog==6.9.0
 cpe==1.3.1
-cryptography==46.0.1
+cryptography==46.0.2
 debugpy==1.8.17
 deepmerge==2.0
 dirhash==0.5.0
@@ -23,8 +23,8 @@ pyudev==0.24.3
 PyYAML==6.0.3
 requests==2.32.5
 securetar==2025.2.1
-sentry-sdk==2.39.0
+sentry-sdk==2.40.0
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.3
+dbus-fast==2.44.5
 zlib-fast==0.2.1

View File

@@ -2,13 +2,13 @@ astroid==3.3.11
 coverage==7.10.7
 mypy==1.18.2
 pre-commit==4.3.0
-pylint==3.3.8
+pylint==3.3.9
 pytest-aiohttp==1.1.0
 pytest-asyncio==0.25.2
 pytest-cov==7.0.0
 pytest-timeout==2.4.0
 pytest==8.4.2
-ruff==0.13.2
+ruff==0.14.0
 time-machine==2.19.0
 types-docker==7.1.0.20250916
 types-pyyaml==6.0.12.20250915

View File

@@ -72,7 +72,6 @@ from ..exceptions import (
     AddonsJobError,
     ConfigurationFileError,
     DockerError,
-    HomeAssistantAPIError,
     HostAppArmorError,
 )
 from ..hardware.data import Device
@@ -227,6 +226,7 @@ class Addon(AddonModel):
         )
         await self._check_ingress_port()
         default_image = self._image(self.data)
+
         try:
             await self.instance.attach(version=self.version)
@@ -775,7 +775,6 @@ class Addon(AddonModel):
             raise AddonsError("Missing from store, cannot install!")

         await self.sys_addons.data.install(self.addon_store)
-        await self.load()

         def setup_data():
             if not self.path_data.is_dir():
@@ -798,6 +797,9 @@ class Addon(AddonModel):
                 await self.sys_addons.data.uninstall(self)
                 raise AddonsError() from err

+        # Finish initialization and set up listeners
+        await self.load()
+
         # Add to addon manager
         self.sys_addons.local[self.slug] = self
@@ -842,7 +844,6 @@ class Addon(AddonModel):
         # Cleanup Ingress panel from sidebar
         if self.ingress_panel:
             self.ingress_panel = False
-            with suppress(HomeAssistantAPIError):
-                await self.sys_ingress.update_hass_panel(self)
+            await self.sys_ingress.update_hass_panel(self)

         # Cleanup Ingress dynamic port assignment

View File

@@ -9,8 +9,6 @@ from typing import Self, Union

 from attr import evolve

-from supervisor.jobs.const import JobConcurrency
-
 from ..const import AddonBoot, AddonStartup, AddonState
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
@@ -20,8 +18,9 @@ from ..exceptions import (
     CoreDNSError,
     DockerError,
     HassioError,
-    HomeAssistantAPIError,
 )
+from ..jobs import ChildJobSyncFilter
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job, JobCondition
 from ..resolution.const import ContextType, IssueType, SuggestionType
 from ..store.addon import AddonStore
@@ -183,6 +182,9 @@ class AddonManager(CoreSysAttributes):
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
         concurrency=JobConcurrency.QUEUE,
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
     )
     async def install(
         self, slug: str, *, validation_complete: asyncio.Event | None = None
@@ -230,6 +232,13 @@ class AddonManager(CoreSysAttributes):
         name="addon_manager_update",
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
+        # We assume for now the docker image pull is 100% of this task for progress
+        # allocation. But from a user perspective that isn't true. Other steps
+        # that take time which is not accounted for in progress include:
+        # partial backup, image cleanup, apparmor update, and addon restart
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
     )
     async def update(
         self,
@@ -272,7 +281,10 @@ class AddonManager(CoreSysAttributes):
             addons=[addon.slug],
         )

-        return await addon.update()
+        task = await addon.update()
+        _LOGGER.info("Add-on '%s' successfully updated", slug)
+        return task

     @Job(
         name="addon_manager_rebuild",
@@ -351,7 +363,6 @@ class AddonManager(CoreSysAttributes):
         # Update ingress
         if had_ingress != addon.ingress_panel:
             await self.sys_ingress.reload()
-            with suppress(HomeAssistantAPIError):
-                await self.sys_ingress.update_hass_panel(addon)
+            await self.sys_ingress.update_hass_panel(addon)

         return wait_for_start

View File

@@ -72,6 +72,7 @@ from ..const import (
     ATTR_TYPE,
     ATTR_UART,
     ATTR_UDEV,
+    ATTR_ULIMITS,
     ATTR_URL,
     ATTR_USB,
     ATTR_VERSION,
@@ -462,6 +463,11 @@ class AddonModel(JobGroup, ABC):
         """Return True if the add-on have his own udev."""
         return self.data[ATTR_UDEV]

+    @property
+    def ulimits(self) -> dict[str, Any]:
+        """Return ulimits configuration."""
+        return self.data[ATTR_ULIMITS]
+
     @property
     def with_kernel_modules(self) -> bool:
         """Return True if the add-on access to kernel modules."""

View File

@@ -88,6 +88,7 @@ from ..const import (
     ATTR_TYPE,
     ATTR_UART,
     ATTR_UDEV,
+    ATTR_ULIMITS,
     ATTR_URL,
     ATTR_USB,
     ATTR_USER,
@@ -423,6 +424,20 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
             False,
         ),
         vol.Optional(ATTR_IMAGE): docker_image,
+        vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
+            {str: vol.Coerce(int)},  # Simple format: {name: limit}
+            {
+                str: vol.Any(
+                    vol.Coerce(int),  # Simple format for individual entries
+                    vol.Schema(
+                        {  # Detailed format for individual entries
+                            vol.Required("soft"): vol.Coerce(int),
+                            vol.Required("hard"): vol.Coerce(int),
+                        }
+                    ),
+                )
+            },
+        ),
         vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
             vol.Coerce(int), vol.Range(min=10, max=300)
         ),

View File

@@ -77,10 +77,10 @@ class APIProxy(CoreSysAttributes):
                 yield resp
                 return

-        except HomeAssistantAuthError:
-            _LOGGER.error("Authenticate error on API for request %s", path)
-        except HomeAssistantAPIError:
-            _LOGGER.error("Error on API for request %s", path)
+        except HomeAssistantAuthError as err:
+            _LOGGER.error("Authenticate error on API for request %s: %s", path, err)
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Error on API for request %s: %s", path, err)
         except aiohttp.ClientError as err:
             _LOGGER.error("Client error on API %s request %s", path, err)
         except TimeoutError:
@@ -222,6 +222,11 @@ class APIProxy(CoreSysAttributes):
             raise HTTPBadGateway()
         _LOGGER.info("Home Assistant WebSocket API request initialize")

+        # Check if transport is still valid before WebSocket upgrade
+        if request.transport is None:
+            _LOGGER.warning("WebSocket connection lost before upgrade")
+            raise web.HTTPBadRequest(reason="Connection closed")
+
         # init server
         server = web.WebSocketResponse(heartbeat=30)
         await server.prepare(request)

View File

@@ -132,8 +132,8 @@ class Auth(FileConfiguration, CoreSysAttributes):
                 _LOGGER.warning("Unauthorized login for '%s'", username)
                 await self._dismatch_cache(username, password)
                 return False
-        except HomeAssistantAPIError:
-            _LOGGER.error("Can't request auth on Home Assistant!")
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Can't request auth on Home Assistant: %s", err)
         finally:
             self._running.pop(username, None)
@@ -152,8 +152,8 @@ class Auth(FileConfiguration, CoreSysAttributes):
                     return

             _LOGGER.warning("The user '%s' is not registered", username)
-        except HomeAssistantAPIError:
-            _LOGGER.error("Can't request password reset on Home Assistant!")
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Can't request password reset on Home Assistant: %s", err)

         raise AuthPasswordResetError()

View File

@@ -348,6 +348,7 @@ ATTR_TRANSLATIONS = "translations"
 ATTR_TYPE = "type"
 ATTR_UART = "uart"
 ATTR_UDEV = "udev"
+ATTR_ULIMITS = "ulimits"
 ATTR_UNHEALTHY = "unhealthy"
 ATTR_UNSAVED = "unsaved"
 ATTR_UNSUPPORTED = "unsupported"

View File

@@ -2,7 +2,6 @@

 from __future__ import annotations

-from contextlib import suppress
 import logging
 from typing import TYPE_CHECKING, Any
 from uuid import uuid4
@@ -119,7 +118,7 @@ class Discovery(CoreSysAttributes, FileConfiguration):
         data = attr.asdict(message)
         data.pop(ATTR_CONFIG)

-        with suppress(HomeAssistantAPIError):
+        try:
             async with self.sys_homeassistant.api.make_request(
                 command,
                 f"api/hassio_push/discovery/{message.uuid}",
@@ -128,5 +127,5 @@ class Discovery(CoreSysAttributes, FileConfiguration):
             ):
                 _LOGGER.info("Discovery %s message send", message.uuid)
                 return
-
-        _LOGGER.warning("Discovery %s message fail", message.uuid)
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Discovery %s message failed: %s", message.uuid, err)

View File

@@ -318,7 +318,18 @@ class DockerAddon(DockerInterface):
             mem = 128 * 1024 * 1024
             limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))

-        # Return None if no capabilities is present
+        # Add configurable ulimits from add-on config
+        for name, config in self.addon.ulimits.items():
+            if isinstance(config, int):
+                # Simple format: both soft and hard limits are the same
+                limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
+            elif isinstance(config, dict):
+                # Detailed format: both soft and hard limits are mandatory
+                soft = config["soft"]
+                hard = config["hard"]
+                limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))
+
+        # Return None if no ulimits are present
         if limits:
             return limits
         return None

View File

@@ -220,10 +220,12 @@ class DockerInterface(JobGroup, ABC):
         await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)

-    def _process_pull_image_log(self, job_id: str, reference: PullLogEntry) -> None:
+    def _process_pull_image_log(
+        self, install_job_id: str, reference: PullLogEntry
+    ) -> None:
         """Process events fired from a docker while pulling an image, filtered to a given job id."""
         if (
-            reference.job_id != job_id
+            reference.job_id != install_job_id
             or not reference.id
             or not reference.status
             or not (stage := PullImageLayerStage.from_status(reference.status))
@@ -237,21 +239,22 @@ class DockerInterface(JobGroup, ABC):
                 name="Pulling container image layer",
                 initial_stage=stage.status,
                 reference=reference.id,
-                parent_id=job_id,
+                parent_id=install_job_id,
+                internal=True,
             )
             job.done = False
             return

         # Find our sub job to update details of
         for j in self.sys_jobs.jobs:
-            if j.parent_id == job_id and j.reference == reference.id:
+            if j.parent_id == install_job_id and j.reference == reference.id:
                 job = j
                 break

         # This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
         if not job:
             raise DockerLogOutOfOrder(
-                f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {job_id} but could not find a matching job, skipping",
+                f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
                 _LOGGER.debug,
             )
@@ -325,10 +328,56 @@ class DockerInterface(JobGroup, ABC):
             else job.extra,
         )

+        # Once we have received a progress update for every child job, start to set status of the main one
+        install_job = self.sys_jobs.get_job(install_job_id)
+        layer_jobs = [
+            job
+            for job in self.sys_jobs.jobs
+            if job.parent_id == install_job.uuid
+            and job.name == "Pulling container image layer"
+        ]
+
+        # First set the total bytes to be downloaded/extracted on the main job
+        if not install_job.extra:
+            total = 0
+            for job in layer_jobs:
+                if not job.extra:
+                    return
+                total += job.extra["total"]
+            install_job.extra = {"total": total}
+        else:
+            total = install_job.extra["total"]
+
+        # Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
+        progress = 0.0
+        stage = PullImageLayerStage.PULL_COMPLETE
+        for job in layer_jobs:
+            if not job.extra:
+                return
+            progress += job.progress * (job.extra["total"] / total)
+            job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
+
+            if job_stage < PullImageLayerStage.EXTRACTING:
+                stage = PullImageLayerStage.DOWNLOADING
+            elif (
+                stage == PullImageLayerStage.PULL_COMPLETE
+                and job_stage < PullImageLayerStage.PULL_COMPLETE
+            ):
+                stage = PullImageLayerStage.EXTRACTING
+
+        # Ensure progress is 100 at this point to prevent float drift
+        if stage == PullImageLayerStage.PULL_COMPLETE:
+            progress = 100
+
+        # To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
+        if stage != install_job.stage or progress >= install_job.progress + 1:
+            install_job.update(stage=stage.status, progress=progress)
+
     @Job(
         name="docker_interface_install",
         on_condition=DockerJobError,
         concurrency=JobConcurrency.GROUP_REJECT,
+        internal=True,
     )
     async def install(
         self,
@@ -351,11 +400,11 @@ class DockerInterface(JobGroup, ABC):
         # Try login if we have defined credentials
         await self._docker_login(image)

-        job_id = self.sys_jobs.current.uuid
+        curr_job_id = self.sys_jobs.current.uuid

         async def process_pull_image_log(reference: PullLogEntry) -> None:
             try:
-                self._process_pull_image_log(job_id, reference)
+                self._process_pull_image_log(curr_job_id, reference)
             except DockerLogOutOfOrder as err:
                 # Send all these to sentry. Missing a few progress updates
                 # shouldn't matter to users but matters to us
@@ -629,7 +678,10 @@ class DockerInterface(JobGroup, ABC):
         concurrency=JobConcurrency.GROUP_REJECT,
     )
     async def update(
-        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
+        self,
+        version: AwesomeVersion,
+        image: str | None = None,
+        latest: bool = False,
     ) -> None:
         """Update a Docker image."""
         image = image or self.image

View File

@@ -326,11 +326,19 @@ class DockerAPI(CoreSysAttributes):
             if name:
                 cidfile_path = self.coresys.config.path_cid_files / f"{name}.cid"

-                # Remove the file if it exists e.g. as a leftover from unclean shutdown
-                if cidfile_path.is_file():
-                    with suppress(OSError):
+                # Remove the file/directory if it exists e.g. as a leftover from unclean shutdown
+                # Note: Can be a directory if Docker auto-started container with restart policy
+                # before Supervisor could write the CID file
+                with suppress(OSError):
+                    if cidfile_path.is_dir():
+                        cidfile_path.rmdir()
+                    elif cidfile_path.is_file():
                         cidfile_path.unlink(missing_ok=True)

+                # Create empty CID file before adding it to volumes to prevent Docker
+                # from creating it as a directory if container auto-starts
+                cidfile_path.touch()
+
                 extern_cidfile_path = (
                     self.coresys.config.path_extern_cid_files / f"{name}.cid"
                 )

View File

@@ -2,7 +2,7 @@
import asyncio import asyncio
from collections.abc import AsyncIterator from collections.abc import AsyncIterator
from contextlib import asynccontextmanager, suppress from contextlib import asynccontextmanager
from dataclasses import dataclass from dataclasses import dataclass
from datetime import UTC, datetime, timedelta from datetime import UTC, datetime, timedelta
import logging import logging
@@ -15,9 +15,7 @@ from multidict import MultiMapping
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError
from ..jobs.const import JobConcurrency from ..utils import version_is_new_enough
from ..jobs.decorator import Job
from ..utils import check_port, version_is_new_enough
from .const import LANDINGPAGE from .const import LANDINGPAGE
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -43,14 +41,28 @@ class HomeAssistantAPI(CoreSysAttributes):
         # We don't persist access tokens. Instead we fetch new ones when needed
         self.access_token: str | None = None
         self._access_token_expires: datetime | None = None
+        self._token_lock: asyncio.Lock = asyncio.Lock()

-    @Job(
-        name="home_assistant_api_ensure_access_token",
-        internal=True,
-        concurrency=JobConcurrency.QUEUE,
-    )
     async def ensure_access_token(self) -> None:
-        """Ensure there is an access token."""
+        """Ensure there is a valid access token.
+
+        Raises:
+            HomeAssistantAuthError: When we cannot get a valid token
+            aiohttp.ClientError: On network or connection errors
+            TimeoutError: On request timeouts
+        """
+        # Fast path check without lock (avoid unnecessary locking
+        # for the majority of calls).
+        if (
+            self.access_token
+            and self._access_token_expires
+            and self._access_token_expires > datetime.now(tz=UTC)
+        ):
+            return
+
+        async with self._token_lock:
+            # Double-check after acquiring lock (avoid race condition)
             if (
                 self.access_token
                 and self._access_token_expires
@@ -58,7 +70,6 @@ class HomeAssistantAPI(CoreSysAttributes):
             ):
                 return

-        with suppress(asyncio.TimeoutError, aiohttp.ClientError):
             async with self.sys_websession.post(
                 f"{self.sys_homeassistant.api_url}/auth/token",
                 timeout=aiohttp.ClientTimeout(total=30),
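The `ensure_access_token` change above swaps `@Job`-based queuing for an explicit lock with a double-checked fast path: check validity without the lock, then re-check after acquiring it so concurrent callers trigger only one refresh. A standalone sketch of that pattern under illustrative names (not Supervisor's API):

```python
import asyncio
from datetime import datetime, timedelta, timezone


class TokenCache:
    """Double-checked token refresh: fast path first, re-check under lock."""

    def __init__(self) -> None:
        self.token: str | None = None
        self.expires: datetime | None = None
        self._lock = asyncio.Lock()
        self.refresh_count = 0  # counts real refreshes, for demonstration

    def _valid(self) -> bool:
        return (
            self.token is not None
            and self.expires is not None
            and self.expires > datetime.now(tz=timezone.utc)
        )

    async def ensure(self) -> str:
        # Fast path: most calls return here without ever taking the lock
        if self._valid():
            return self.token
        async with self._lock:
            # Double-check: another task may have refreshed while we waited
            if self._valid():
                return self.token
            self.refresh_count += 1
            self.token = f"token-{self.refresh_count}"
            self.expires = datetime.now(tz=timezone.utc) + timedelta(minutes=5)
            return self.token


async def demo() -> int:
    cache = TokenCache()
    # Ten concurrent callers; the double-check collapses them to one refresh
    await asyncio.gather(*(cache.ensure() for _ in range(10)))
    return cache.refresh_count
```

The fast path is why the lock is cheap: once a token is cached and valid, `ensure` never awaits the lock at all.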
@@ -92,7 +103,36 @@ class HomeAssistantAPI(CoreSysAttributes):
         params: MultiMapping[str] | None = None,
         headers: dict[str, str] | None = None,
     ) -> AsyncIterator[aiohttp.ClientResponse]:
-        """Async context manager to make a request with right auth."""
+        """Async context manager to make authenticated requests to Home Assistant API.
+
+        This context manager handles authentication token management automatically,
+        including token refresh on 401 responses. It yields the HTTP response
+        for the caller to handle.
+
+        Error Handling:
+        - HTTP error status codes (4xx, 5xx) are preserved in the response
+        - Authentication is handled transparently with one retry on 401
+        - Network/connection failures raise HomeAssistantAPIError
+        - No logging is performed - callers should handle logging as needed
+
+        Args:
+            method: HTTP method (get, post, etc.)
+            path: API path relative to Home Assistant base URL
+            json: JSON data to send in request body
+            content_type: Override content-type header
+            data: Raw data to send in request body
+            timeout: Request timeout in seconds
+            params: URL query parameters
+            headers: Additional HTTP headers
+
+        Yields:
+            aiohttp.ClientResponse: The HTTP response object
+
+        Raises:
+            HomeAssistantAPIError: When request cannot be completed due to
+                network errors, timeouts, or connection failures
+        """
         url = f"{self.sys_homeassistant.api_url}/{path}"
         headers = headers or {}
@@ -101,10 +141,9 @@ class HomeAssistantAPI(CoreSysAttributes):
             headers[hdrs.CONTENT_TYPE] = content_type

         for _ in (1, 2):
-            try:
-                await self.ensure_access_token()
-                headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}"
+            await self.ensure_access_token()
+            headers[hdrs.AUTHORIZATION] = f"Bearer {self.access_token}"

+            try:
                 async with getattr(self.sys_websession, method)(
                     url,
                     data=data,
@@ -120,23 +159,19 @@ class HomeAssistantAPI(CoreSysAttributes):
                         continue
                     yield resp
                     return
-            except TimeoutError:
-                _LOGGER.error("Timeout on call %s.", url)
-                break
+            except TimeoutError as err:
+                _LOGGER.debug("Timeout on call %s.", url)
+                raise HomeAssistantAPIError(str(err)) from err
             except aiohttp.ClientError as err:
-                _LOGGER.error("Error on call %s: %s", url, err)
-                break
-
-        raise HomeAssistantAPIError()
+                _LOGGER.debug("Error on call %s: %s", url, err)
+                raise HomeAssistantAPIError(str(err)) from err

     async def _get_json(self, path: str) -> dict[str, Any]:
         """Return Home Assistant get API."""
         async with self.make_request("get", path) as resp:
             if resp.status in (200, 201):
                 return await resp.json()
-            else:
-                _LOGGER.debug("Home Assistant API return: %d", resp.status)
-        raise HomeAssistantAPIError()
+            raise HomeAssistantAPIError(
+                f"Home Assistant Core API return {resp.status}"
+            )

     async def get_config(self) -> dict[str, Any]:
         """Return Home Assistant config."""
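The `for _ in (1, 2)` loop in `make_request` makes at most two attempts, fetching a fresh token before each and retrying once on a 401. That control flow, reduced to plain functions with illustrative names:

```python
def request_with_retry(fetch, get_token, max_attempts=2):
    """At most `max_attempts` tries; on a 401, get a fresh token and retry.

    `fetch(token)` returns a (status, body) tuple and `get_token()` returns a
    bearer token. Any non-401 response is returned to the caller unchanged,
    so HTTP errors stay visible instead of being swallowed here.
    """
    for attempt in range(1, max_attempts + 1):
        token = get_token()
        status, body = fetch(token)
        if status == 401 and attempt < max_attempts:
            continue  # token likely expired or revoked; loop fetches a new one
        return status, body


# A stale token is rejected once, then the retry with a fresh token succeeds
tokens = iter(["stale", "fresh"])
result = request_with_retry(
    lambda tok: (200, "ok") if tok == "fresh" else (401, ""),
    lambda: next(tokens),
)  # (200, 'ok')
```

A second 401 is returned as-is rather than looping forever, matching the bounded two-attempt loop in the diff.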
@@ -155,15 +190,8 @@ class HomeAssistantAPI(CoreSysAttributes):
         ):
             return None

-        # Check if port is up
-        if not await check_port(
-            self.sys_homeassistant.ip_address,
-            self.sys_homeassistant.api_port,
-        ):
-            return None
-
         # Check if API is up
-        with suppress(HomeAssistantAPIError):
+        try:
             # get_core_state is available since 2023.8.0 and preferred
             # since it is significantly faster than get_config because
             # it does not require serializing the entire config
@@ -181,6 +209,8 @@ class HomeAssistantAPI(CoreSysAttributes):
             migrating = recorder_state.get("migration_in_progress", False)
             live_migration = recorder_state.get("migration_is_live", False)
             return APIState(state, migrating and not live_migration)
+        except HomeAssistantAPIError as err:
+            _LOGGER.debug("Can't connect to Home Assistant API: %s", err)

         return None

View File

@@ -28,6 +28,7 @@ from ..exceptions import (
     HomeAssistantUpdateError,
     JobException,
 )
+from ..jobs import ChildJobSyncFilter
 from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
 from ..jobs.decorator import Job, JobCondition
 from ..jobs.job_group import JobGroup
@@ -224,6 +225,13 @@ class HomeAssistantCore(JobGroup):
         ],
         on_condition=HomeAssistantJobError,
         concurrency=JobConcurrency.GROUP_REJECT,
+        # We assume for now the docker image pull is 100% of this task. But from
+        # a user perspective that isn't true. Other steps that take time which
+        # is not accounted for in progress include: partial backup, image
+        # cleanup, and Home Assistant restart
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
     )
     async def update(
         self,

View File

@@ -3,6 +3,7 @@
 from __future__ import annotations

 import asyncio
+from contextlib import suppress
 import logging
 from typing import Any, TypeVar, cast
@@ -202,6 +203,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
         if self._client is not None and self._client.connected:
             return self._client

+        with suppress(asyncio.TimeoutError, aiohttp.ClientError):
             await self.sys_homeassistant.api.ensure_access_token()
         client = await WSClient.connect_with_auth(
             self.sys_websession,

View File

@@ -15,6 +15,7 @@ from .const import (
     IngressSessionDataDict,
 )
 from .coresys import CoreSys, CoreSysAttributes
+from .exceptions import HomeAssistantAPIError
 from .utils import check_port
 from .utils.common import FileConfiguration
 from .utils.dt import utc_from_timestamp, utcnow
@@ -191,6 +192,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
         # Update UI
         method = "post" if addon.ingress_panel else "delete"
+        try:
             async with self.sys_homeassistant.api.make_request(
                 method, f"api/hassio_push/panel/{addon.slug}"
             ) as resp:
@@ -198,5 +200,9 @@ class Ingress(FileConfiguration, CoreSysAttributes):
                     _LOGGER.info("Update Ingress as panel for %s", addon.slug)
                 else:
                     _LOGGER.warning(
-                        "Fails Ingress panel for %s with %i", addon.slug, resp.status
+                        "Failed to update the Ingress panel for %s with %i",
+                        addon.slug,
+                        resp.status,
                     )
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Panel update request failed for %s: %s", addon.slug, err)

View File

@@ -1,5 +1,7 @@
 """Supervisor job manager."""

+from __future__ import annotations
+
 import asyncio
 from collections.abc import Callable, Coroutine, Generator
 from contextlib import contextmanager, suppress
@@ -10,6 +12,7 @@ import logging
 from typing import Any, Self
 from uuid import uuid4

+from attr.validators import gt, lt
 from attrs import Attribute, define, field
 from attrs.setters import convert as attr_convert, frozen, validate as attr_validate
 from attrs.validators import ge, le
@@ -47,13 +50,13 @@ def _remove_current_job(context: Context) -> Context:
     return context


-def _invalid_if_done(instance: "SupervisorJob", *_) -> None:
+def _invalid_if_done(instance: SupervisorJob, *_) -> None:
     """Validate that job is not done."""
     if instance.done:
         raise ValueError("Cannot update a job that is done")


-def _on_change(instance: "SupervisorJob", attribute: Attribute, value: Any) -> Any:
+def _on_change(instance: SupervisorJob, attribute: Attribute, value: Any) -> Any:
     """Forward a change to a field on to the listener if defined."""
     value = attr_convert(instance, attribute, value)
     value = attr_validate(instance, attribute, value)
@@ -62,12 +65,34 @@ def _on_change(instance: SupervisorJob, attribute: Attribute, value: Any) -> Any
     return value


-def _invalid_if_started(instance: "SupervisorJob", *_) -> None:
+def _invalid_if_started(instance: SupervisorJob, *_) -> None:
     """Validate that job has not been started."""
     if instance.done is not None:
         raise ValueError("Field cannot be updated once job has started")


+@define(frozen=True)
+class ChildJobSyncFilter:
+    """Filter to identify a child job to sync progress from."""
+
+    name: str
+    reference: str | None | type[DEFAULT] = DEFAULT
+    progress_allocation: float = field(default=1.0, validator=[gt(0.0), le(1.0)])
+
+    def matches(self, job: SupervisorJob) -> bool:
+        """Return true if job matches filter."""
+        return job.name == self.name and self.reference in (DEFAULT, job.reference)
+
+
+@define(frozen=True)
+class ParentJobSync:
+    """Parent job sync details."""
+
+    uuid: str
+    starting_progress: float = field(validator=[ge(0.0), lt(100.0)])
+    progress_allocation: float = field(validator=[gt(0.0), le(1.0)])
+
+
 @define
 class SupervisorJobError:
     """Representation of an error occurring during a supervisor job."""
@@ -103,13 +128,15 @@ class SupervisorJob:
     )
     parent_id: str | None = field(factory=_CURRENT_JOB.get, on_setattr=frozen)
     done: bool | None = field(init=False, default=None, on_setattr=_on_change)
-    on_change: Callable[["SupervisorJob", Attribute, Any], None] | None = None
+    on_change: Callable[[SupervisorJob, Attribute, Any], None] | None = None
     internal: bool = field(default=False)
     errors: list[SupervisorJobError] = field(
         init=False, factory=list, on_setattr=_on_change
     )
     release_event: asyncio.Event | None = None
     extra: dict[str, Any] | None = None
+    child_job_syncs: list[ChildJobSyncFilter] | None = None
+    parent_job_syncs: list[ParentJobSync] = field(init=False, factory=list)

     def as_dict(self) -> dict[str, Any]:
         """Return dictionary representation."""
@@ -152,7 +179,13 @@ class SupervisorJob:
         try:
             token = _CURRENT_JOB.set(self.uuid)
             yield self
+        # Cannot have an else without an except so we do nothing and re-raise
+        except:  # noqa: TRY203
+            raise
+        else:
+            self.update(progress=100, done=True)
         finally:
+            if not self.done:
                 self.done = True
             if token:
                 _CURRENT_JOB.reset(token)
@@ -174,11 +207,13 @@ class SupervisorJob:
             self.stage = stage
         if extra != DEFAULT:
             self.extra = extra

+        # Done has special event. use that to trigger on change if included
+        # If not then just use any other field to trigger
+        self.on_change = on_change
         if done is not None:
             self.done = done
-
-        self.on_change = on_change
+        else:
+            # Just triggers the normal on change
             self.reference = self.reference
@@ -225,16 +260,37 @@ class JobManager(FileConfiguration, CoreSysAttributes):
         """Return true if there is an active job for the current asyncio task."""
         return _CURRENT_JOB.get() is not None

-    def _notify_on_job_change(
+    def _on_job_change(
         self, job: SupervisorJob, attribute: Attribute, value: Any
     ) -> None:
-        """Notify Home Assistant of a change to a job and bus on job start/end."""
+        """Take on change actions such as notify home assistant and sync progress."""
+        # Job object will be before the change. Combine the change with current data
         if attribute.name == "errors":
             value = [err.as_dict() for err in value]
+        job_data = job.as_dict() | {attribute.name: value}

-        self.sys_homeassistant.websocket.supervisor_event(
-            WSEvent.JOB, job.as_dict() | {attribute.name: value}
-        )
+        # Notify Home Assistant of change if its not internal
+        if not job.internal:
+            self.sys_homeassistant.websocket.supervisor_event(WSEvent.JOB, job_data)
+
+        # If we have any parent job syncs, sync progress to them
+        for sync in job.parent_job_syncs:
+            try:
+                parent_job = self.get_job(sync.uuid)
+            except JobNotFound:
+                # Shouldn't happen but failure to find a parent for progress
+                # reporting shouldn't raise and break the active job
+                continue
+
+            progress = min(
+                100,
+                sync.starting_progress
+                + (sync.progress_allocation * job_data["progress"]),
+            )
+            # Using max would always trigger on change even if progress was unchanged
+            # pylint: disable-next=R1731
+            if parent_job.progress < progress:  # noqa: PLR1730
+                parent_job.progress = progress

         if attribute.name == "done":
             if value is False:
@@ -249,16 +305,41 @@ class JobManager(FileConfiguration, CoreSysAttributes):
         initial_stage: str | None = None,
         internal: bool = False,
         parent_id: str | None = DEFAULT,  # type: ignore
+        child_job_syncs: list[ChildJobSyncFilter] | None = None,
     ) -> SupervisorJob:
         """Create a new job."""
         job = SupervisorJob(
             name,
             reference=reference,
             stage=initial_stage,
-            on_change=None if internal else self._notify_on_job_change,
+            on_change=self._on_job_change,
             internal=internal,
+            child_job_syncs=child_job_syncs,
             **({} if parent_id == DEFAULT else {"parent_id": parent_id}),  # type: ignore
         )
+
+        # Shouldn't happen but inability to find a parent for progress reporting
+        # shouldn't raise and break the active job
+        with suppress(JobNotFound):
+            curr_parent = job
+            while curr_parent.parent_id:
+                curr_parent = self.get_job(curr_parent.parent_id)
+                if not curr_parent.child_job_syncs:
+                    continue
+
+                # Break after first match at each parent as it doesn't make sense
+                # to match twice. But it could match multiple parents
+                for sync in curr_parent.child_job_syncs:
+                    if sync.matches(job):
+                        job.parent_job_syncs.append(
+                            ParentJobSync(
+                                curr_parent.uuid,
+                                starting_progress=curr_parent.progress,
+                                progress_allocation=sync.progress_allocation,
+                            )
+                        )
+                        break
+
         self._jobs[job.uuid] = job
         return job
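`_on_job_change` maps a synced child's 0-100 progress into the parent's scale using the `starting_progress` and `progress_allocation` recorded in `ParentJobSync`. The arithmetic in isolation (illustrative helper, not Supervisor code):

```python
def parent_progress(starting: float, allocation: float, child: float) -> float:
    """Parent progress derived from a synced child job.

    `starting` is the parent's progress when the child was created,
    `allocation` the 0..1 share of the parent's work this child represents,
    and `child` the child's own 0-100 progress. The result is capped at
    100, mirroring the min() in _on_job_change.
    """
    return min(100.0, starting + allocation * child)


# A child at 50% under a full (1.0) allocation puts the parent at 50%
print(parent_progress(0.0, 1.0, 50.0))  # 50.0
```

Note the manager only ever moves the parent forward (`if parent_job.progress < progress`), so a child whose reported progress dips never drags the parent's bar backwards.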

View File

@@ -24,7 +24,7 @@ from ..resolution.const import (
     UnsupportedReason,
 )
 from ..utils.sentry import async_capture_exception
-from . import SupervisorJob
+from . import ChildJobSyncFilter, SupervisorJob
 from .const import JobConcurrency, JobCondition, JobThrottle
 from .job_group import JobGroup
@@ -48,6 +48,7 @@ class Job(CoreSysAttributes):
         | None = None,
         throttle_max_calls: int | None = None,
         internal: bool = False,
+        child_job_syncs: list[ChildJobSyncFilter] | None = None,
     ):  # pylint: disable=too-many-positional-arguments
         """Initialize the Job decorator.
@@ -61,6 +62,7 @@ class Job(CoreSysAttributes):
             throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for throttled jobs).
             throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
             internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
+            child_job_syncs (list[ChildJobSyncFilter] | None): Use if the job's progress should be kept in sync with the progress of one or more of its child jobs.

         Raises:
             RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected throttle policy.
@@ -80,6 +82,7 @@ class Job(CoreSysAttributes):
         self._last_call: dict[str | None, datetime] = {}
         self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
         self._internal = internal
+        self._child_job_syncs = child_job_syncs

         self.concurrency = concurrency
         self.throttle = throttle
@@ -258,6 +261,7 @@ class Job(CoreSysAttributes):
                     job = _job__use_existing
                     job.name = self.name
                     job.internal = self._internal
+                    job.child_job_syncs = self._child_job_syncs
                     if job_group:
                         job.reference = job_group.job_reference
                 else:
@@ -265,6 +269,7 @@ class Job(CoreSysAttributes):
                         self.name,
                         job_group.job_reference if job_group else None,
                         internal=self._internal,
+                        child_job_syncs=self._child_job_syncs,
                     )

             try:

View File

@@ -52,5 +52,5 @@ class ResolutionNotify(CoreSysAttributes):
                 _LOGGER.debug("Successfully created persistent_notification")
             else:
                 _LOGGER.error("Can't create persistant notification")
-        except HomeAssistantAPIError:
-            _LOGGER.error("Can't create persistant notification")
+        except HomeAssistantAPIError as err:
+            _LOGGER.error("Can't create persistant notification: %s", err)

View File

@@ -13,6 +13,8 @@ import aiohttp
 from aiohttp.client_exceptions import ClientError
 from awesomeversion import AwesomeVersion, AwesomeVersionException

+from supervisor.jobs import ChildJobSyncFilter
+
 from .const import (
     ATTR_SUPERVISOR_INTERNET,
     SUPERVISOR_VERSION,
@@ -195,6 +197,15 @@ class Supervisor(CoreSysAttributes):
             if temp_dir:
                 await self.sys_run_in_executor(temp_dir.cleanup)

+    @Job(
+        name="supervisor_update",
+        # We assume for now the docker image pull is 100% of this task. But from
+        # a user perspective that isn't true. Other steps that take time which
+        # is not accounted for in progress include: app armor update and restart
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
+    )
     async def update(self, version: AwesomeVersion | None = None) -> None:
         """Update Supervisor version."""
         version = version or self.latest_version or self.version
@@ -221,6 +232,7 @@ class Supervisor(CoreSysAttributes):
         # Update container
         _LOGGER.info("Update Supervisor to version %s", version)
         try:
             await self.instance.install(version, image=image)
             await self.instance.update_start_tag(image, version)
View File

@@ -117,7 +117,7 @@ async def journal_logs_reader(
                     continue

                 # strip \n for simple fields before decoding
-                entries[field_name] = data[:-1].decode("utf-8")
+                entries[field_name] = data[:-1].decode("utf-8", errors="replace")


 def _parse_boot_json(boot_json_bytes: bytes) -> tuple[int, str]:
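The one-line change above matters because journal field values are raw bytes with no UTF-8 guarantee: strict decoding raises and would abort the whole log stream, while `errors="replace"` substitutes U+FFFD and keeps the entry. A self-contained illustration:

```python
# A latin-1 byte (0xE9, "é") is invalid as UTF-8 at the end of this field
raw = b"MESSAGE=caf\xe9\n"

# Strict decoding (the old behavior) raises on the bad byte
try:
    raw[:-1].decode("utf-8")
    strict_failed = False
except UnicodeDecodeError:
    strict_failed = True

# errors="replace" (the new behavior) keeps the entry, marking the bad byte
lenient = raw[:-1].decode("utf-8", errors="replace")
print(strict_failed, lenient)  # True MESSAGE=caf�
```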

View File

@@ -419,3 +419,71 @@ def test_valid_schema():
     config["schema"] = {"field": "invalid"}
     with pytest.raises(vol.Invalid):
         assert vd.SCHEMA_ADDON_CONFIG(config)
+
+
+def test_ulimits_simple_format():
+    """Test ulimits simple format validation."""
+    config = load_json_fixture("basic-addon-config.json")
+    config["ulimits"] = {"nofile": 65535, "nproc": 32768, "memlock": 134217728}
+
+    valid_config = vd.SCHEMA_ADDON_CONFIG(config)
+
+    assert valid_config["ulimits"]["nofile"] == 65535
+    assert valid_config["ulimits"]["nproc"] == 32768
+    assert valid_config["ulimits"]["memlock"] == 134217728
+
+
+def test_ulimits_detailed_format():
+    """Test ulimits detailed format validation."""
+    config = load_json_fixture("basic-addon-config.json")
+    config["ulimits"] = {
+        "nofile": {"soft": 20000, "hard": 40000},
+        "nproc": 32768,  # Mixed format should work
+        "memlock": {"soft": 67108864, "hard": 134217728},
+    }
+
+    valid_config = vd.SCHEMA_ADDON_CONFIG(config)
+
+    assert valid_config["ulimits"]["nofile"]["soft"] == 20000
+    assert valid_config["ulimits"]["nofile"]["hard"] == 40000
+    assert valid_config["ulimits"]["nproc"] == 32768
+    assert valid_config["ulimits"]["memlock"]["soft"] == 67108864
+    assert valid_config["ulimits"]["memlock"]["hard"] == 134217728
+
+
+def test_ulimits_empty_dict():
+    """Test ulimits with empty dict (default)."""
+    config = load_json_fixture("basic-addon-config.json")
+
+    valid_config = vd.SCHEMA_ADDON_CONFIG(config)
+
+    assert valid_config["ulimits"] == {}
+
+
+def test_ulimits_invalid_values():
+    """Test ulimits with invalid values."""
+    config = load_json_fixture("basic-addon-config.json")
+
+    # Invalid string values
+    config["ulimits"] = {"nofile": "invalid"}
+    with pytest.raises(vol.Invalid):
+        vd.SCHEMA_ADDON_CONFIG(config)
+
+    # Invalid detailed format
+    config["ulimits"] = {"nofile": {"invalid_key": 1000}}
+    with pytest.raises(vol.Invalid):
+        vd.SCHEMA_ADDON_CONFIG(config)
+
+    # Missing hard value in detailed format
+    config["ulimits"] = {"nofile": {"soft": 1000}}
+    with pytest.raises(vol.Invalid):
+        vd.SCHEMA_ADDON_CONFIG(config)
+
+    # Missing soft value in detailed format
+    config["ulimits"] = {"nofile": {"hard": 1000}}
+    with pytest.raises(vol.Invalid):
+        vd.SCHEMA_ADDON_CONFIG(config)
+
+    # Empty dict in detailed format
+    config["ulimits"] = {"nofile": {}}
+    with pytest.raises(vol.Invalid):
+        vd.SCHEMA_ADDON_CONFIG(config)
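The tests above pin down the accepted shapes: a bare integer, or a dict that must carry *both* `soft` and `hard`. A dependency-free sketch of that rule (the real schema is voluptuous-based; this helper is illustrative only):

```python
def validate_ulimits(ulimits: dict) -> dict:
    """Accept {name: int} or {name: {"soft": int, "hard": int}}.

    Mirrors the rule the tests exercise: a bare integer sets both limits,
    while the dict form must provide both soft and hard values.
    """
    for name, value in ulimits.items():
        # Bare integer form (bool is an int subclass; reject it explicitly)
        if isinstance(value, int) and not isinstance(value, bool):
            continue
        # Detailed form: exactly the keys "soft" and "hard", both ints
        if (
            isinstance(value, dict)
            and set(value) == {"soft", "hard"}
            and all(isinstance(v, int) for v in value.values())
        ):
            continue
        raise ValueError(f"invalid ulimit for {name!r}: {value!r}")
    return ulimits
```

With this shape, `{"nofile": {"soft": 1000}}`, `{"nofile": {}}`, and `{"nofile": "invalid"}` are all rejected, matching the `vol.Invalid` cases in the tests.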

View File

@@ -2,16 +2,19 @@

 import asyncio
 from pathlib import Path
-from unittest.mock import MagicMock, PropertyMock, patch
+from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch

 from aiohttp.test_utils import TestClient
 from awesomeversion import AwesomeVersion
 import pytest

 from supervisor.backups.manager import BackupManager
+from supervisor.const import CoreState
 from supervisor.coresys import CoreSys
+from supervisor.docker.homeassistant import DockerHomeAssistant
 from supervisor.docker.interface import DockerInterface
-from supervisor.homeassistant.api import APIState
+from supervisor.homeassistant.api import APIState, HomeAssistantAPI
+from supervisor.homeassistant.const import WSEvent
 from supervisor.homeassistant.core import HomeAssistantCore
 from supervisor.homeassistant.module import HomeAssistant
@@ -271,3 +274,96 @@ async def test_background_home_assistant_update_fails_fast(
     assert resp.status == 400
     body = await resp.json()
     assert body["message"] == "Version 2025.8.3 is already installed"
+
+
+@pytest.mark.usefixtures("tmp_supervisor_data")
+async def test_api_progress_updates_home_assistant_update(
+    api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
+):
+    """Test progress updates sent to Home Assistant for updates."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.core.set_state(CoreState.RUNNING)
+    coresys.docker.docker.api.pull.return_value = load_json_fixture(
+        "docker_pull_image_log.json"
+    )
+    coresys.homeassistant.version = AwesomeVersion("2025.8.0")
+
+    with (
+        patch.object(
+            DockerHomeAssistant,
+            "version",
+            new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
+        ),
+        patch.object(
+            HomeAssistantAPI, "get_config", return_value={"components": ["frontend"]}
+        ),
+    ):
+        resp = await api_client.post("/core/update", json={"version": "2025.8.3"})
+
+    assert resp.status == 200
+
+    events = [
+        {
+            "stage": evt.args[0]["data"]["data"]["stage"],
+            "progress": evt.args[0]["data"]["data"]["progress"],
+            "done": evt.args[0]["data"]["data"]["done"],
+        }
+        for evt in ha_ws_client.async_send_command.call_args_list
+        if "data" in evt.args[0]
+        and evt.args[0]["data"]["event"] == WSEvent.JOB
+        and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
+    ]
+    assert events[:5] == [
+        {
+            "stage": None,
+            "progress": 0,
+            "done": None,
+        },
+        {
+            "stage": None,
+            "progress": 0,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 0.1,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 1.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 2.8,
+            "done": False,
+        },
+    ]
+    assert events[-5:] == [
+        {
+            "stage": None,
+            "progress": 97.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 98.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 99.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": True,
+        },
+    ]

View File

@@ -152,7 +152,7 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
                 "name": "test_jobs_tree_alt",
                 "reference": None,
                 "uuid": ANY,
-                "progress": 0,
+                "progress": 100,
                 "stage": "end",
                 "done": True,
                 "child_jobs": [],
@@ -282,7 +282,7 @@ async def test_jobs_sorted(api_client: TestClient, coresys: CoreSys):
             "name": "test_jobs_sorted_2",
             "reference": None,
             "uuid": ANY,
-            "progress": 0,
+            "progress": 100,
             "stage": None,
             "done": True,
             "errors": [],
@@ -294,7 +294,7 @@ async def test_jobs_sorted(api_client: TestClient, coresys: CoreSys):
             "name": "test_jobs_sorted_1",
             "reference": None,
             "uuid": ANY,
-            "progress": 0,
+            "progress": 100,
             "stage": None,
             "done": True,
             "errors": [],
@@ -305,7 +305,7 @@ async def test_jobs_sorted(api_client: TestClient, coresys: CoreSys):
                 "name": "test_jobs_sorted_inner_1",
                 "reference": None,
                 "uuid": ANY,
-                "progress": 0,
+                "progress": 100,
                 "stage": None,
                 "done": True,
                 "errors": [],
@@ -317,7 +317,7 @@ async def test_jobs_sorted(api_client: TestClient, coresys: CoreSys):
                 "name": "test_jobs_sorted_inner_2",
                 "reference": None,
                 "uuid": ANY,
-                "progress": 0,
+                "progress": 100,
                 "stage": None,
                 "done": True,
                 "errors": [],
View File

@@ -9,7 +9,7 @@ import logging
 from typing import Any, cast
 from unittest.mock import AsyncMock, patch

-from aiohttp import ClientWebSocketResponse, WSCloseCode
+from aiohttp import ClientWebSocketResponse, WSCloseCode, web
 from aiohttp.http_websocket import WSMessage, WSMsgType
 from aiohttp.test_utils import TestClient
 import pytest
@@ -223,6 +223,32 @@ async def test_proxy_auth_abort_log(
     )


+async def test_websocket_transport_none(
+    coresys,
+    caplog: pytest.LogCaptureFixture,
+):
+    """Test WebSocket connection with transport None is handled gracefully."""
+    # Get the API proxy instance from coresys
+    api_proxy = APIProxy.__new__(APIProxy)
+    api_proxy.coresys = coresys
+
+    # Create a mock request with transport set to None to simulate connection loss
+    mock_request = AsyncMock(spec=web.Request)
+    mock_request.transport = None
+
+    caplog.clear()
+    with caplog.at_level(logging.WARNING):
+        # This should raise HTTPBadRequest, not AssertionError
+        with pytest.raises(web.HTTPBadRequest) as exc_info:
+            await api_proxy.websocket(mock_request)
+
+    # Verify the error reason
+    assert exc_info.value.reason == "Connection closed"
+
+    # Verify the warning was logged
+    assert "WebSocket connection lost before upgrade" in caplog.text
@pytest.mark.parametrize("path", ["", "mock_path"]) @pytest.mark.parametrize("path", ["", "mock_path"])
async def test_api_proxy_get_request( async def test_api_proxy_get_request(
api_client: TestClient, api_client: TestClient,
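The behavior this test pins down can be sketched as a pre-upgrade guard that runs before the WebSocket handshake. The sketch below is a simplified, framework-agnostic stand-in (the exception class and function name are illustrative, not Supervisor's actual API, which raises aiohttp's `HTTPBadRequest`):

```python
from types import SimpleNamespace


class ConnectionClosedError(Exception):
    """Stand-in for aiohttp's HTTPBadRequest in this simplified sketch."""


def ensure_transport(request) -> None:
    """Reject the WebSocket upgrade early if the client already disconnected.

    Without this check, a transport that goes None between the API state
    check and server.prepare() trips aiohttp's internal assertion.
    """
    if request.transport is None:
        raise ConnectionClosedError("Connection closed")


# A request whose client dropped mid-handshake is rejected cleanly
dead_request = SimpleNamespace(transport=None)
```

The point of the guard is to convert a race that previously surfaced as an `AssertionError` deep inside aiohttp into a clear, intentional client error.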


@@ -13,12 +13,13 @@ from supervisor.addons.addon import Addon
 from supervisor.arch import CpuArch
 from supervisor.backups.manager import BackupManager
 from supervisor.config import CoreConfig
-from supervisor.const import AddonState
+from supervisor.const import AddonState, CoreState
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
 from supervisor.docker.interface import DockerInterface
 from supervisor.docker.monitor import DockerContainerStateEvent
+from supervisor.homeassistant.const import WSEvent
 from supervisor.homeassistant.module import HomeAssistant
 from supervisor.store.addon import AddonStore
 from supervisor.store.repository import Repository
@@ -709,3 +710,101 @@ async def test_api_store_addons_addon_availability_installed_addon(
     assert (
         "requires Home Assistant version 2023.1.1 or greater" in result["message"]
     )
+
+
+@pytest.mark.parametrize(
+    ("action", "job_name", "addon_slug"),
+    [
+        ("install", "addon_manager_install", "local_ssh"),
+        ("update", "addon_manager_update", "local_example"),
+    ],
+)
+@pytest.mark.usefixtures("tmp_supervisor_data")
+async def test_api_progress_updates_addon_install_update(
+    api_client: TestClient,
+    coresys: CoreSys,
+    ha_ws_client: AsyncMock,
+    install_addon_example: Addon,
+    action: str,
+    job_name: str,
+    addon_slug: str,
+):
+    """Test progress updates sent to Home Assistant for installs/updates."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.core.set_state(CoreState.RUNNING)
+    coresys.docker.docker.api.pull.return_value = load_json_fixture(
+        "docker_pull_image_log.json"
+    )
+    coresys.arch._supported_arch = ["amd64"]  # pylint: disable=protected-access
+    install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
+
+    with (
+        patch.object(Addon, "load"),
+        patch.object(Addon, "need_build", new=PropertyMock(return_value=False)),
+        patch.object(Addon, "latest_need_build", new=PropertyMock(return_value=False)),
+    ):
+        resp = await api_client.post(f"/store/addons/{addon_slug}/{action}")
+    assert resp.status == 200
+
+    events = [
+        {
+            "stage": evt.args[0]["data"]["data"]["stage"],
+            "progress": evt.args[0]["data"]["data"]["progress"],
+            "done": evt.args[0]["data"]["data"]["done"],
+        }
+        for evt in ha_ws_client.async_send_command.call_args_list
+        if "data" in evt.args[0]
+        and evt.args[0]["data"]["event"] == WSEvent.JOB
+        and evt.args[0]["data"]["data"]["name"] == job_name
+        and evt.args[0]["data"]["data"]["reference"] == addon_slug
+    ]
+    assert events[:4] == [
+        {
+            "stage": None,
+            "progress": 0,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 0.1,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 1.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 2.8,
+            "done": False,
+        },
+    ]
+    assert events[-5:] == [
+        {
+            "stage": None,
+            "progress": 97.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 98.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 99.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": True,
+        },
+    ]


@@ -2,17 +2,24 @@
 # pylint: disable=protected-access
 import time
-from unittest.mock import AsyncMock, MagicMock, patch
+from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch

 from aiohttp.test_utils import TestClient
+from awesomeversion import AwesomeVersion
 from blockbuster import BlockingError
 import pytest

+from supervisor.const import CoreState
+from supervisor.core import Core
 from supervisor.coresys import CoreSys
 from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
+from supervisor.homeassistant.const import WSEvent
 from supervisor.store.repository import Repository
+from supervisor.supervisor import Supervisor
+from supervisor.updater import Updater

 from tests.api import common_test_api_advanced_logs
+from tests.common import load_json_fixture
 from tests.dbus_service_mocks.base import DBusServiceMock
 from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -316,3 +323,97 @@ async def test_api_supervisor_options_blocking_io(
     # This should not raise blocking error anymore
     time.sleep(0)
+
+
+@pytest.mark.usefixtures("tmp_supervisor_data")
+async def test_api_progress_updates_supervisor_update(
+    api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
+):
+    """Test progress updates sent to Home Assistant for updates."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.core.set_state(CoreState.RUNNING)
+    coresys.docker.docker.api.pull.return_value = load_json_fixture(
+        "docker_pull_image_log.json"
+    )
+
+    with (
+        patch.object(
+            Supervisor,
+            "version",
+            new=PropertyMock(return_value=AwesomeVersion("2025.08.0")),
+        ),
+        patch.object(
+            Updater,
+            "version_supervisor",
+            new=PropertyMock(return_value=AwesomeVersion("2025.08.3")),
+        ),
+        patch.object(
+            Updater, "image_supervisor", new=PropertyMock(return_value="supervisor")
+        ),
+        patch.object(Supervisor, "update_apparmor"),
+        patch.object(Core, "stop"),
+    ):
+        resp = await api_client.post("/supervisor/update")
+    assert resp.status == 200
+
+    events = [
+        {
+            "stage": evt.args[0]["data"]["data"]["stage"],
+            "progress": evt.args[0]["data"]["data"]["progress"],
+            "done": evt.args[0]["data"]["data"]["done"],
+        }
+        for evt in ha_ws_client.async_send_command.call_args_list
+        if "data" in evt.args[0]
+        and evt.args[0]["data"]["event"] == WSEvent.JOB
+        and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
+    ]
+    assert events[:4] == [
+        {
+            "stage": None,
+            "progress": 0,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 0.1,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 1.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 2.8,
+            "done": False,
+        },
+    ]
+    assert events[-5:] == [
+        {
+            "stage": None,
+            "progress": 97.2,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 98.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 99.4,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": False,
+        },
+        {
+            "stage": None,
+            "progress": 100,
+            "done": True,
+        },
+    ]


@@ -1110,6 +1110,7 @@ def _make_backup_message_for_assert(
     reference: str,
     stage: str | None,
     done: bool = False,
+    progress: float = 0.0,
 ):
     """Make a backup message to use for assert test."""
     return {
@@ -1120,7 +1121,7 @@ def _make_backup_message_for_assert(
             "name": f"backup_manager_{action}",
             "reference": reference,
             "uuid": ANY,
-            "progress": 0,
+            "progress": progress,
             "stage": stage,
             "done": done,
             "parent_id": None,
@@ -1132,13 +1133,12 @@ def _make_backup_message_for_assert(
     }


+@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
 async def test_backup_progress(
     coresys: CoreSys,
     install_addon_ssh: Addon,
     container: MagicMock,
     ha_ws_client: AsyncMock,
-    tmp_supervisor_data,
-    path_extern,
 ):
     """Test progress is tracked during backups."""
     container.status = "running"
@@ -1182,7 +1182,10 @@ async def test_backup_progress(
             reference=full_backup.slug, stage="await_addon_restarts"
         ),
         _make_backup_message_for_assert(
-            reference=full_backup.slug, stage="await_addon_restarts", done=True
+            reference=full_backup.slug,
+            stage="await_addon_restarts",
+            done=True,
+            progress=100,
         ),
     ]
@@ -1227,18 +1230,17 @@ async def test_backup_progress(
             reference=partial_backup.slug,
             stage="finishing_file",
             done=True,
+            progress=100,
         ),
     ]


+@pytest.mark.usefixtures("supervisor_internet", "tmp_supervisor_data", "path_extern")
 async def test_restore_progress(
     coresys: CoreSys,
-    supervisor_internet,
     install_addon_ssh: Addon,
     container: MagicMock,
     ha_ws_client: AsyncMock,
-    tmp_supervisor_data,
-    path_extern,
 ):
     """Test progress is tracked during backups."""
     container.status = "running"
@@ -1320,6 +1322,7 @@ async def test_restore_progress(
             reference=full_backup.slug,
             stage="await_home_assistant_restart",
             done=True,
+            progress=100,
         ),
     ]
@@ -1358,6 +1361,7 @@ async def test_restore_progress(
             reference=folders_backup.slug,
             stage="folders",
             done=True,
+            progress=100,
         ),
     ]
@@ -1404,17 +1408,17 @@ async def test_restore_progress(
             reference=addon_backup.slug,
             stage="addons",
             done=True,
+            progress=100,
         ),
     ]


+@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
 async def test_freeze_thaw(
     coresys: CoreSys,
     install_addon_ssh: Addon,
     container: MagicMock,
     ha_ws_client: AsyncMock,
-    tmp_supervisor_data,
-    path_extern,
 ):
     """Test manual freeze and thaw for external snapshots."""
     container.status = "running"
@@ -1460,7 +1464,11 @@ async def test_freeze_thaw(
             action="thaw_all", reference=None, stage=None
         ),
         _make_backup_message_for_assert(
-            action="freeze_all", reference=None, stage="addons", done=True
+            action="freeze_all",
+            reference=None,
+            stage="addons",
+            done=True,
+            progress=100,
         ),
     ]
@@ -1488,7 +1496,11 @@ async def test_freeze_thaw(
             action="thaw_all", reference=None, stage="addons"
         ),
         _make_backup_message_for_assert(
-            action="thaw_all", reference=None, stage="addons", done=True
+            action="thaw_all",
+            reference=None,
+            stage="addons",
+            done=True,
+            progress=100,
         ),
     ]


@@ -318,7 +318,10 @@ def test_not_journald_addon(
 async def test_addon_run_docker_error(
-    coresys: CoreSys, addonsdata_system: dict[str, Data], path_extern
+    coresys: CoreSys,
+    addonsdata_system: dict[str, Data],
+    path_extern,
+    tmp_supervisor_data: Path,
 ):
     """Test docker error when addon is run."""
     await coresys.dbus.timedate.connect(coresys.dbus.bus)
@@ -500,3 +503,93 @@ async def test_addon_new_device_no_haos(
     await install_addon_ssh.stop()
     assert coresys.resolution.issues == []
     assert coresys.resolution.suggestions == []
+
+
+async def test_ulimits_integration(
+    coresys: CoreSys,
+    install_addon_ssh: Addon,
+):
+    """Test ulimits integration with Docker addon."""
+    docker_addon = DockerAddon(coresys, install_addon_ssh)
+
+    # Test default case (no ulimits, no realtime)
+    assert docker_addon.ulimits is None
+
+    # Test with realtime enabled (should have built-in ulimits)
+    install_addon_ssh.data["realtime"] = True
+    ulimits = docker_addon.ulimits
+    assert ulimits is not None
+    assert len(ulimits) == 2
+
+    # Check for rtprio limit
+    rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
+    assert rtprio_limit is not None
+    assert rtprio_limit.soft == 90
+    assert rtprio_limit.hard == 99
+
+    # Check for memlock limit
+    memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
+    assert memlock_limit is not None
+    assert memlock_limit.soft == 128 * 1024 * 1024
+    assert memlock_limit.hard == 128 * 1024 * 1024
+
+    # Test with configurable ulimits (simple format)
+    install_addon_ssh.data["realtime"] = False
+    install_addon_ssh.data["ulimits"] = {"nofile": 65535, "nproc": 32768}
+    ulimits = docker_addon.ulimits
+    assert ulimits is not None
+    assert len(ulimits) == 2
+
+    nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
+    assert nofile_limit is not None
+    assert nofile_limit.soft == 65535
+    assert nofile_limit.hard == 65535
+
+    nproc_limit = next((u for u in ulimits if u.name == "nproc"), None)
+    assert nproc_limit is not None
+    assert nproc_limit.soft == 32768
+    assert nproc_limit.hard == 32768
+
+    # Test with configurable ulimits (detailed format)
+    install_addon_ssh.data["ulimits"] = {
+        "nofile": {"soft": 20000, "hard": 40000},
+        "memlock": {"soft": 67108864, "hard": 134217728},
+    }
+    ulimits = docker_addon.ulimits
+    assert ulimits is not None
+    assert len(ulimits) == 2
+
+    nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
+    assert nofile_limit is not None
+    assert nofile_limit.soft == 20000
+    assert nofile_limit.hard == 40000
+
+    memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
+    assert memlock_limit is not None
+    assert memlock_limit.soft == 67108864
+    assert memlock_limit.hard == 134217728
+
+    # Test mixed format and realtime (realtime + custom ulimits)
+    install_addon_ssh.data["realtime"] = True
+    install_addon_ssh.data["ulimits"] = {
+        "nofile": 65535,
+        "core": {"soft": 0, "hard": 0},  # Disable core dumps
+    }
+    ulimits = docker_addon.ulimits
+    assert ulimits is not None
+    assert (
+        len(ulimits) == 4
+    )  # rtprio, memlock (from realtime) + nofile, core (from config)
+
+    # Check realtime limits still present
+    rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
+    assert rtprio_limit is not None
+
+    # Check custom limits added
+    nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
+    assert nofile_limit is not None
+    assert nofile_limit.soft == 65535
+    assert nofile_limit.hard == 65535
+
+    core_limit = next((u for u in ulimits if u.name == "core"), None)
+    assert core_limit is not None
+    assert core_limit.soft == 0
+    assert core_limit.hard == 0
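The config format exercised above accepts either an integer shorthand (soft and hard set to the same value) or an explicit `{soft, hard}` mapping, mirroring docker-compose. A minimal sketch of how such a config could be normalized into `(name, soft, hard)` triples (the helper name is hypothetical, not the actual `DockerAddon.ulimits` code):

```python
def normalize_ulimits(config: dict) -> list[tuple[str, int, int]]:
    """Normalize addon ulimit config entries to (name, soft, hard) triples.

    An int value is shorthand for soft == hard; a dict form must carry
    both "soft" and "hard" keys (the schema makes them mandatory).
    """
    limits = []
    for name, value in config.items():
        if isinstance(value, int):
            # Shorthand: one number sets both limits
            limits.append((name, value, value))
        else:
            # Detailed form: explicit soft and hard values
            limits.append((name, value["soft"], value["hard"]))
    return limits
```

In the real integration these triples would become `docker.types.Ulimit` objects and be merged with any built-in realtime limits, as the test's mixed-format case shows.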


@@ -1,6 +1,7 @@
 """Test Docker interface."""

 import asyncio
+from pathlib import Path
 from typing import Any
 from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, call, patch
@@ -25,7 +26,6 @@ from supervisor.exceptions import (
     DockerNotFound,
     DockerRequestError,
 )
-from supervisor.homeassistant.const import WSEvent
 from supervisor.jobs import JobSchedulerOptions, SupervisorJob

 from tests.common import load_json_fixture
@@ -281,6 +281,7 @@ async def test_run_missing_image(
     container: MagicMock,
     capture_exception: Mock,
     path_extern,
+    tmp_supervisor_data: Path,
 ):
     """Test run captures the exception when image is missing."""
     coresys.docker.containers.create.side_effect = [NotFound("missing"), MagicMock()]
@@ -415,196 +416,17 @@ async def test_install_fires_progress_events(
     ]


-async def test_install_sends_progress_to_home_assistant(
-    coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
-):
-    """Test progress events are sent as job updates to Home Assistant."""
-    coresys.core.set_state(CoreState.RUNNING)
-    coresys.docker.docker.api.pull.return_value = load_json_fixture(
-        "docker_pull_image_log.json"
-    )
-
-    with (
-        patch.object(
-            type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
-        ),
-    ):
-        # Schedule job so we can listen for the end. Then we can assert against the WS mock
-        event = asyncio.Event()
-        job, install_task = coresys.jobs.schedule_job(
-            test_docker_interface.install,
-            JobSchedulerOptions(),
-            AwesomeVersion("1.2.3"),
-            "test",
-        )
-
-        async def listen_for_job_end(reference: SupervisorJob):
-            if reference.uuid != job.uuid:
-                return
-            event.set()
-
-        coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
-        await install_task
-        await event.wait()
-
-    events = [
-        evt.args[0]["data"]["data"]
-        for evt in ha_ws_client.async_send_command.call_args_list
-        if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
-    ]
-    assert events[0]["name"] == "docker_interface_install"
-    assert events[0]["uuid"] == job.uuid
-    assert events[0]["done"] is None
-    assert events[1]["name"] == "docker_interface_install"
-    assert events[1]["uuid"] == job.uuid
-    assert events[1]["done"] is False
-    assert events[-1]["name"] == "docker_interface_install"
-    assert events[-1]["uuid"] == job.uuid
-    assert events[-1]["done"] is True
-
-    def make_sub_log(layer_id: str):
-        return [
-            {
-                "stage": evt["stage"],
-                "progress": evt["progress"],
-                "done": evt["done"],
-                "extra": evt["extra"],
-            }
-            for evt in events
-            if evt["name"] == "Pulling container image layer"
-            and evt["reference"] == layer_id
-            and evt["parent_id"] == job.uuid
-        ]
-
-    layer_1_log = make_sub_log("1e214cd6d7d0")
-    layer_2_log = make_sub_log("1a38e1d5e18d")
-    assert len(layer_1_log) == 20
-    assert len(layer_2_log) == 19
-    assert len(events) == 42
-    assert layer_1_log == [
-        {"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
-        {
-            "stage": "Downloading",
-            "progress": 0.1,
-            "done": False,
-            "extra": {"current": 539462, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 0.6,
-            "done": False,
-            "extra": {"current": 4864838, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 0.9,
-            "done": False,
-            "extra": {"current": 7552896, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 1.2,
-            "done": False,
-            "extra": {"current": 10252544, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 2.9,
-            "done": False,
-            "extra": {"current": 25369792, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 11.9,
-            "done": False,
-            "extra": {"current": 103619904, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 26.1,
-            "done": False,
-            "extra": {"current": 227726144, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 49.6,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Verifying Checksum",
-            "progress": 50,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Download complete",
-            "progress": 50,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 50.1,
-            "done": False,
-            "extra": {"current": 557056, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 60.3,
-            "done": False,
-            "extra": {"current": 89686016, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 70.0,
-            "done": False,
-            "extra": {"current": 174358528, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 80.0,
-            "done": False,
-            "extra": {"current": 261816320, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 88.4,
-            "done": False,
-            "extra": {"current": 334790656, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 94.0,
-            "done": False,
-            "extra": {"current": 383811584, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 99.9,
-            "done": False,
-            "extra": {"current": 435617792, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 100.0,
-            "done": False,
-            "extra": {"current": 436480882, "total": 436480882},
-        },
-        {
-            "stage": "Pull complete",
-            "progress": 100.0,
-            "done": True,
-            "extra": {"current": 436480882, "total": 436480882},
-        },
-    ]
 async def test_install_progress_rounding_does_not_cause_misses(
-    coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
+    coresys: CoreSys,
+    test_docker_interface: DockerInterface,
+    ha_ws_client: AsyncMock,
+    capture_exception: Mock,
 ):
     """Test extremely close progress events do not create rounding issues."""
     coresys.core.set_state(CoreState.RUNNING)
+    # Current numbers chosen to create a rounding issue with original code
+    # Where a progress update came in with a value between the actual previous
+    # value and what it was rounded to. It should not raise an out of order exception
     coresys.docker.docker.api.pull.return_value = [
         {
             "status": "Pulling from home-assistant/odroid-n2-homeassistant",
@@ -669,65 +491,7 @@ async def test_install_progress_rounding_does_not_cause_misses(
     await install_task
     await event.wait()

-    events = [
-        evt.args[0]["data"]["data"]
-        for evt in ha_ws_client.async_send_command.call_args_list
-        if "data" in evt.args[0]
-        and evt.args[0]["data"]["event"] == WSEvent.JOB
-        and evt.args[0]["data"]["data"]["reference"] == "1e214cd6d7d0"
-        and evt.args[0]["data"]["data"]["stage"] in {"Downloading", "Extracting"}
-    ]
-    assert events == [
-        {
-            "name": "Pulling container image layer",
-            "stage": "Downloading",
-            "progress": 49.6,
-            "done": False,
-            "extra": {"current": 432700000, "total": 436480882},
-            "reference": "1e214cd6d7d0",
-            "parent_id": job.uuid,
-            "errors": [],
-            "uuid": ANY,
-            "created": ANY,
-        },
-        {
-            "name": "Pulling container image layer",
-            "stage": "Downloading",
-            "progress": 49.6,
-            "done": False,
-            "extra": {"current": 432800000, "total": 436480882},
-            "reference": "1e214cd6d7d0",
-            "parent_id": job.uuid,
-            "errors": [],
-            "uuid": ANY,
-            "created": ANY,
-        },
-        {
-            "name": "Pulling container image layer",
-            "stage": "Extracting",
-            "progress": 99.6,
-            "done": False,
-            "extra": {"current": 432700000, "total": 436480882},
-            "reference": "1e214cd6d7d0",
-            "parent_id": job.uuid,
-            "errors": [],
-            "uuid": ANY,
-            "created": ANY,
-        },
-        {
-            "name": "Pulling container image layer",
-            "stage": "Extracting",
-            "progress": 99.6,
-            "done": False,
-            "extra": {"current": 432800000, "total": 436480882},
-            "reference": "1e214cd6d7d0",
-            "parent_id": job.uuid,
-            "errors": [],
-            "uuid": ANY,
-            "created": ANY,
-        },
-    ]
+    capture_exception.assert_not_called()


 @pytest.mark.parametrize(
@@ -777,10 +541,15 @@ async def test_install_raises_on_pull_error(
 async def test_install_progress_handles_download_restart(
-    coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
+    coresys: CoreSys,
+    test_docker_interface: DockerInterface,
+    ha_ws_client: AsyncMock,
+    capture_exception: Mock,
 ):
     """Test install handles docker progress events that include a download restart."""
     coresys.core.set_state(CoreState.RUNNING)
+    # Fixture emulates a download restart as docker logs it
+    # A log out of order exception should not be raised
     coresys.docker.docker.api.pull.return_value = load_json_fixture(
         "docker_pull_image_log_restart.json"
     )
@@ -808,106 +577,4 @@ async def test_install_progress_handles_download_restart(
     await install_task
     await event.wait()

-    events = [
-        evt.args[0]["data"]["data"]
-        for evt in ha_ws_client.async_send_command.call_args_list
-        if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
-    ]
-
-    def make_sub_log(layer_id: str):
-        return [
-            {
-                "stage": evt["stage"],
-                "progress": evt["progress"],
-                "done": evt["done"],
-                "extra": evt["extra"],
-            }
-            for evt in events
-            if evt["name"] == "Pulling container image layer"
-            and evt["reference"] == layer_id
-            and evt["parent_id"] == job.uuid
-        ]
-
-    layer_1_log = make_sub_log("1e214cd6d7d0")
-    assert len(layer_1_log) == 14
-    assert layer_1_log == [
-        {"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
-        {
-            "stage": "Downloading",
-            "progress": 11.9,
-            "done": False,
-            "extra": {"current": 103619904, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 26.1,
-            "done": False,
-            "extra": {"current": 227726144, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 49.6,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Retrying download",
-            "progress": 0,
-            "done": False,
-            "extra": None,
-        },
-        {
-            "stage": "Retrying download",
-            "progress": 0,
-            "done": False,
-            "extra": None,
-        },
-        {
-            "stage": "Downloading",
-            "progress": 11.9,
-            "done": False,
-            "extra": {"current": 103619904, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 26.1,
-            "done": False,
-            "extra": {"current": 227726144, "total": 436480882},
-        },
-        {
-            "stage": "Downloading",
-            "progress": 49.6,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Verifying Checksum",
-            "progress": 50,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Download complete",
-            "progress": 50,
-            "done": False,
-            "extra": {"current": 433170048, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 80.0,
-            "done": False,
-            "extra": {"current": 261816320, "total": 436480882},
-        },
-        {
-            "stage": "Extracting",
-            "progress": 100.0,
-            "done": False,
-            "extra": {"current": 436480882, "total": 436480882},
-        },
-        {
-            "stage": "Pull complete",
-            "progress": 100.0,
-            "done": True,
-            "extra": {"current": 436480882, "total": 436480882},
-        },
-    ]
+    capture_exception.assert_not_called()
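The per-layer percentages asserted in these pull tests are consistent with mapping each phase's `current/total` ratio onto half of the 0-100 range: downloading covers 0-50%, extracting covers 50-100%. A rough sketch under that assumption (illustrative only, not necessarily Supervisor's exact implementation):

```python
def layer_progress(stage: str, current: int, total: int) -> float:
    """Approximate per-layer pull progress, rounded to one decimal.

    Download and extract each account for half the layer's progress,
    so "Download complete" lands at 50 and "Pull complete" at 100.
    """
    half = 50 * current / total
    if stage == "Downloading":
        return round(half, 1)
    # Extracting resumes from the 50% mark
    return round(50 + half, 1)
```

For example, downloading 433170048 of 436480882 bytes gives 49.6, and extracting 261816320 of the same total gives 80.0, matching the fixture-driven assertions.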


@@ -293,6 +293,8 @@ async def test_cidfile_cleanup_handles_oserror(
     # Mock the containers.get method and cidfile cleanup to raise OSError
     with (
         patch.object(docker.containers, "get", return_value=mock_container),
+        patch("pathlib.Path.is_dir", return_value=False),
+        patch("pathlib.Path.is_file", return_value=True),
         patch(
             "pathlib.Path.unlink", side_effect=OSError("File not found")
         ) as mock_unlink,
@@ -306,3 +308,46 @@ async def test_cidfile_cleanup_handles_oserror(
     # Verify cidfile cleanup was attempted
     mock_unlink.assert_called_once_with(missing_ok=True)
+
+
+async def test_run_container_with_leftover_cidfile_directory(
+    coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
+):
+    """Test container creation removes leftover cidfile directory before creating new one.
+
+    This can happen when Docker auto-starts a container with restart policy
+    before Supervisor could write the CID file, causing Docker to create
+    the bind mount source as a directory.
+    """
+    # Mock container
+    mock_container = MagicMock()
+    mock_container.id = "test_container_id_new"
+
+    container_name = "test_container"
+    cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
+
+    # Create a leftover directory (simulating Docker's behavior)
+    cidfile_path.mkdir()
+    assert cidfile_path.is_dir()
+
+    # Mock container creation
+    with patch.object(
+        docker.containers, "create", return_value=mock_container
+    ) as create_mock:
+        # Execute run with a container name
+        loop = asyncio.get_event_loop()
+        result = await loop.run_in_executor(
+            None,
+            lambda kwrgs: docker.run(**kwrgs),
+            {"image": "test_image", "tag": "latest", "name": container_name},
+        )
+
+    # Verify container was created
+    create_mock.assert_called_once()
+
+    # Verify new cidfile was written as a file (not directory)
+    assert cidfile_path.exists()
+    assert cidfile_path.is_file()
+    assert cidfile_path.read_text() == mock_container.id
+
+    assert result == mock_container


@@ -19,7 +19,7 @@ from supervisor.exceptions import (
 )
 from supervisor.host.const import HostFeature
 from supervisor.host.manager import HostManager
-from supervisor.jobs import JobSchedulerOptions, SupervisorJob
+from supervisor.jobs import ChildJobSyncFilter, JobSchedulerOptions, SupervisorJob
 from supervisor.jobs.const import JobConcurrency, JobThrottle
 from supervisor.jobs.decorator import Job, JobCondition
 from supervisor.jobs.job_group import JobGroup
@@ -1003,7 +1003,7 @@ async def test_internal_jobs_no_notify(coresys: CoreSys, ha_ws_client: AsyncMock
             "name": "test_internal_jobs_no_notify_default",
             "reference": None,
             "uuid": ANY,
-            "progress": 0,
+            "progress": 100,
             "stage": None,
             "done": True,
             "parent_id": None,
@@ -1415,3 +1415,87 @@ async def test_core_supported(coresys: CoreSys, caplog: pytest.LogCaptureFixture
     coresys.jobs.ignore_conditions = [JobCondition.HOME_ASSISTANT_CORE_SUPPORTED]
     assert await test.execute()
+
+
+async def test_progress_syncing(coresys: CoreSys):
+    """Test progress syncing from child jobs to parent."""
+    group_child_event = asyncio.Event()
+    child_event = asyncio.Event()
+    execute_event = asyncio.Event()
+    main_event = asyncio.Event()
+
+    class TestClassGroup(JobGroup):
+        """Test class group."""
+
+        def __init__(self, coresys: CoreSys) -> None:
+            super().__init__(coresys, "test_class_group", "test")
+
+        @Job(name="test_progress_syncing_group_child", internal=True)
+        async def test_progress_syncing_group_child(self):
+            """Test progress syncing group child."""
+            coresys.jobs.current.progress = 50
+            main_event.set()
+            await group_child_event.wait()
+            coresys.jobs.current.progress = 100
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.test_group = TestClassGroup(coresys)
+
+        @Job(
+            name="test_progress_syncing_execute",
+            child_job_syncs=[
+                ChildJobSyncFilter(
+                    "test_progress_syncing_child_execute", progress_allocation=0.5
+                ),
+                ChildJobSyncFilter(
+                    "test_progress_syncing_group_child",
+                    reference="test",
+                    progress_allocation=0.5,
+                ),
+            ],
+        )
+        async def test_progress_syncing_execute(self):
+            """Test progress syncing execute."""
+            await self.test_progress_syncing_child_execute()
+            await self.test_group.test_progress_syncing_group_child()
+            main_event.set()
+            await execute_event.wait()
+
+        @Job(name="test_progress_syncing_child_execute", internal=True)
+        async def test_progress_syncing_child_execute(self):
+            """Test progress syncing child execute."""
+            coresys.jobs.current.progress = 50
+            main_event.set()
+            await child_event.wait()
+            coresys.jobs.current.progress = 100
+
+    test = TestClass(coresys)
+    job, task = coresys.jobs.schedule_job(
+        test.test_progress_syncing_execute, JobSchedulerOptions()
+    )
+
+    # First child should've set parent job to 25% progress
+    await main_event.wait()
+    assert job.progress == 25
+
+    # Now we run to middle of second job which should put us at 75%
+    main_event.clear()
+    child_event.set()
+    await main_event.wait()
+    assert job.progress == 75
+
+    # Finally let it run to the end and see progress is 100%
+    main_event.clear()
+    group_child_event.set()
+    await main_event.wait()
+    assert job.progress == 100
+
+    # Release and check it is done
+    execute_event.set()
+    await task
+    assert job.done
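The 25/75/100 checkpoints above follow directly from the allocation arithmetic: while a child runs, the parent shows the progress already banked from earlier children plus that child's allocated share. A minimal sketch of that relationship (illustrative names, not the Supervisor jobs API):

```python
def synced_parent_progress(base: float, allocation: float, child_progress: float) -> float:
    """Parent progress while one child job is running.

    base:           progress already accumulated from completed children (0-100)
    allocation:     fraction of the parent's range given to this child (0-1)
    child_progress: the running child's own progress (0-100)
    """
    return base + allocation * child_progress
```

With two children at 0.5 allocation each: the first child at 50% puts the parent at 25; once it finishes (base 50), the second child at 50% yields 75, and at 100% the parent reaches 100.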


@@ -219,7 +219,7 @@ async def test_notify_on_change(coresys: CoreSys, ha_ws_client: AsyncMock):
                 "name": TEST_JOB,
                 "reference": "test",
                 "uuid": ANY,
-                "progress": 50,
+                "progress": 100,
                 "stage": "test",
                 "done": True,
                 "parent_id": None,


@@ -275,3 +275,25 @@ async def test_parsing_boots_none():
         boots.append((index, boot_id))

     assert boots == []
+
+
+async def test_parsing_non_utf8_message():
+    """Test that non-UTF-8 bytes in message are replaced with replacement character."""
+    journal_logs, stream = _journal_logs_mock()
+    # Include invalid UTF-8 sequence (0xff is not valid UTF-8)
+    stream.feed_data(b"MESSAGE=Hello, \xff world!\n\n")
+    _, line = await anext(journal_logs_reader(journal_logs))
+    assert line == "Hello, \ufffd world!"
+
+
+async def test_parsing_non_utf8_in_binary_message():
+    """Test that non-UTF-8 bytes in binary format message are replaced."""
+    journal_logs, stream = _journal_logs_mock()
+    # Binary format with invalid UTF-8 sequence
+    stream.feed_data(
+        b"ID=1\n"
+        b"MESSAGE\n\x0f\x00\x00\x00\x00\x00\x00\x00Hello, \xff world!\n"
+        b"AFTER=after\n\n"
+    )
+    _, line = await anext(journal_logs_reader(journal_logs))
+    assert line == "Hello, \ufffd world!"
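Both assertions hinge on Python's `errors="replace"` decoding mode, which substitutes U+FFFD for each invalid byte instead of raising `UnicodeDecodeError`. A self-contained illustration of that mechanism:

```python
# 0xff can never appear in valid UTF-8, so strict decoding would raise
# UnicodeDecodeError; errors="replace" substitutes U+FFFD instead
raw = b"MESSAGE=Hello, \xff world!"
_, _, value = raw.partition(b"=")
text = value.decode("utf-8", errors="replace")
# text == "Hello, \ufffd world!"
```

This keeps the log reader resilient to binary or corrupted journal fields while preserving all valid text around the bad bytes.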