Compare commits


37 Commits

Author SHA1 Message Date
Mike Degatano
94d1f520ba Bind libraries to different files and refactor images.pull 2025-10-27 16:40:14 +00:00
Mike Degatano
33c09a7a96 Add missing coverage and fix bug in repair 2025-10-27 16:40:13 +00:00
Mike Degatano
c1dcaea9b5 Migrate images from dockerpy to aiodocker 2025-10-27 16:40:12 +00:00
dependabot[bot]
9c0174f1fd Bump actions/upload-artifact from 4.6.2 to 5.0.0 (#6269)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.2 to 5.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...330a01c490)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:43 +01:00
dependabot[bot]
dc3d8b9266 Bump actions/download-artifact from 5.0.0 to 6.0.0 (#6270)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](634f93cb29...018cc2cf5b)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:13 +01:00
dependabot[bot]
06d96db55b Bump orjson from 3.11.3 to 3.11.4 (#6271)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.3 to 3.11.4.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.3...3.11.4)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:27:39 +01:00
Mike Degatano
131cc3b6d1 Prevent float drift error on progress update (#6267) 2025-10-24 09:37:19 -04:00
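The float-drift fix above can be illustrated with a toy sketch (not Supervisor code): repeated float addition rarely lands exactly on the mathematical target, so accumulated progress values need clamping to [0, 100].

```python
# Illustrative only: repeated float addition accumulates binary rounding
# error, so a progress value "guaranteed" to reach 100 may drift past or
# fall short of it. Clamping enforces the valid range.
def accumulate(step: float, times: int) -> float:
    progress = 0.0
    for _ in range(times):
        progress += step  # each addition can introduce rounding error
    return progress

raw = accumulate(0.1, 1000)  # mathematically exactly 100
clamped = max(0.0, min(raw, 100.0))
print(raw, clamped)
```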
Michel Nederlof
b92f5976a3 If D-Bus to UDisks2 is not connected, return None (#6265)
When fetching host info while the UDisks2 D-Bus connection was down, an exception was raised, which broke managing add-ons via Home Assistant (and other GUI functions that need the host info status).
2025-10-24 10:22:02 +02:00
dependabot[bot]
370c961c9e Bump ruff from 0.14.1 to 0.14.2 (#6268)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.1 to 0.14.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.1...0.14.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 09:46:04 +02:00
dependabot[bot]
b903e1196f Bump sigstore/cosign-installer from 3.10.0 to 4.0.0 (#6256)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.10.0 to 4.0.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d7543c93d8...faadad0cce)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-23 08:37:59 +02:00
dependabot[bot]
9f8e8ab15a Bump home-assistant/wheels from 2025.09.1 to 2025.10.0 (#6260)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:17:38 +02:00
dependabot[bot]
56bffc839b Bump aiohttp from 3.13.0 to 3.13.1 (#6261)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:14:01 +02:00
dependabot[bot]
952a553c3b Bump pylint from 4.0.1 to 4.0.2 (#6263) 2025-10-22 10:20:35 +02:00
dependabot[bot]
717f1c85f5 Bump sentry-sdk from 2.42.0 to 2.42.1 (#6264)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.42.0 to 2.42.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.42.0...2.42.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.42.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 09:25:14 +02:00
dependabot[bot]
ffd498a515 Bump sentry-sdk from 2.41.0 to 2.42.0 (#6253)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 23:30:45 +02:00
dependabot[bot]
35f0645cb9 Bump cryptography from 46.0.2 to 46.0.3 (#6255)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 22:04:32 +02:00
dependabot[bot]
15c6547382 Bump coverage from 7.10.7 to 7.11.0 (#6254)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 23:42:02 +02:00
dependabot[bot]
adefa242e5 Bump colorlog from 6.9.0 to 6.10.1 (#6258)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:14:38 +02:00
dependabot[bot]
583a8a82fb Bump ruff from 0.14.0 to 0.14.1 (#6257)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:12:47 +02:00
dependabot[bot]
322df15e73 Bump pylint from 4.0.0 to 4.0.1 (#6251)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-15 19:58:36 +02:00
dependabot[bot]
51490c8e41 Bump pylint from 3.3.9 to 4.0.0 and astroid from 3.3.11 to 4.0.1 (#6248)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-10-14 10:45:17 +02:00
dependabot[bot]
3c21a8b8ef Bump sentry-sdk from 2.40.0 to 2.41.0 (#6246)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.40.0 to 2.41.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.40.0...2.41.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-10 08:38:17 +02:00
Jan Čermák
ddb8588d77 Treat containerd snapshotter/overlayfs driver as supported (#6242)
* Treat containerd snapshotter/overlayfs driver as supported

With home-assistant/operating-system#4252 the storage driver would
change to "overlayfs". We don't want the system to be marked as
unsupported. It should be safe to treat it as supported even now, so add
it to the list of allowed values.

* Flip the logic

(note for self: don't forget to check for unstaged changes before push)

* Set valid storage for invalid logging test case
2025-10-09 18:47:06 +02:00
dependabot[bot]
81e46b20b8 Bump pyudev from 0.24.3 to 0.24.4 (#6244)
Bumps [pyudev](https://github.com/pyudev/pyudev) from 0.24.3 to 0.24.4.
- [Changelog](https://github.com/pyudev/pyudev/blob/master/CHANGES.rst)
- [Commits](https://github.com/pyudev/pyudev/compare/v0.24.3...v0.24.4)

---
updated-dependencies:
- dependency-name: pyudev
  dependency-version: 0.24.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:48:34 +02:00
dependabot[bot]
5041a1ed5c Bump types-docker from 7.1.0.20250916 to 7.1.0.20251009 (#6243)
Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20250916 to 7.1.0.20251009.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251009
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:38:12 +02:00
Stefan Agner
337731a55a Let only bugs go stale (#6239)
Let's apply the stale label only to issues which are bugs. Tasks are longer-lived and should not be marked as stale automatically.
2025-10-08 15:56:35 +02:00
Stefan Agner
53a8044aff Add support for ulimit in addon config (#6206)
* Add support for ulimit in addon config

Similar to docker-compose, this adds support for setting ulimits
for add-ons via the add-on config. This is useful e.g. for InfluxDB,
which does not itself support raising open file descriptor limits
but recommends increasing them on the host.

* Make soft and hard limit mandatory if ulimit is a dict
2025-10-08 12:43:12 +02:00
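The two accepted ulimit shapes described in this commit (a bare integer, or a dict where "soft" and "hard" are both mandatory) can be sketched with a plain-Python helper (hypothetical, not the Supervisor's voluptuous schema):

```python
# normalize_ulimits is a hypothetical helper mirroring the two config
# formats this change accepts: int, or {"soft": ..., "hard": ...}.
def normalize_ulimits(ulimits: dict) -> dict[str, tuple[int, int]]:
    limits: dict[str, tuple[int, int]] = {}
    for name, config in ulimits.items():
        if isinstance(config, int):
            # Simple format: the same value is used for soft and hard.
            limits[name] = (config, config)
        elif isinstance(config, dict):
            # Detailed format: both keys must be present, as the schema enforces.
            limits[name] = (int(config["soft"]), int(config["hard"]))
        else:
            raise ValueError(f"Invalid ulimit config for {name}")
    return limits


print(normalize_ulimits({"nofile": 65535, "memlock": {"soft": 1024, "hard": 2048}}))
# → {'nofile': (65535, 65535), 'memlock': (1024, 2048)}
```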
Jan Čermák
c71553f37d Add AGENTS.md symlink (#6237)
Add AGENTS.md alongside CLAUDE.md for agents that support it. While
CLAUDE.md is still required and specific to Claude Code, AGENTS.md
covers the various other agents that implement this proposed standard.

Core already adopted the same approach recently.
2025-10-08 10:44:49 +02:00
dependabot[bot]
c1eb97d8ab Bump ruff from 0.13.3 to 0.14.0 (#6238)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.3 to 0.14.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.3...0.14.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-08 09:47:19 +02:00
Mike Degatano
190b734332 Add progress reporting to addon, HA and Supervisor updates (#6195)
* Add progress reporting to addon, HA and Supervisor updates

* Fix assert in test

* Add progress to addon, core, supervisor updates/installs

* Fix double install bug in addons install

* Remove initial_install and re-arrange order of load
2025-10-07 16:54:11 +02:00
dependabot[bot]
559b6982a3 Bump aiohttp from 3.12.15 to 3.13.0 (#6234)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:47 +02:00
dependabot[bot]
301362e9e5 Bump attrs from 25.3.0 to 25.4.0 (#6235)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.3.0 to 25.4.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 25.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:03 +02:00
dependabot[bot]
fc928d294c Bump sentry-sdk from 2.39.0 to 2.40.0 (#6233)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.39.0 to 2.40.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.39.0...2.40.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 09:47:03 +02:00
dependabot[bot]
f42aeb4937 Bump dbus-fast from 2.44.3 to 2.44.5 (#6232)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.3 to 2.44.5.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.3...v2.44.5)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:37:22 +02:00
dependabot[bot]
fd21886de9 Bump pylint from 3.3.8 to 3.3.9 (#6230)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 3.3.8 to 3.3.9.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v3.3.8...v3.3.9)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 3.3.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:36:03 +02:00
dependabot[bot]
e4bb415e30 Bump actions/stale from 10.0.0 to 10.1.0 (#6229)
Bumps [actions/stale](https://github.com/actions/stale) from 10.0.0 to 10.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](3a9db7e6a4...5f858e3efb)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:35:49 +02:00
dependabot[bot]
622dda5382 Bump ruff from 0.13.2 to 0.13.3 (#6228)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.2 to 0.13.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.2...0.13.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-03 14:42:13 +02:00
39 changed files with 1396 additions and 810 deletions


@@ -107,7 +107,7 @@ jobs:
# home-assistant/wheels doesn't support sha pinning
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
-uses: home-assistant/wheels@2025.09.1
+uses: home-assistant/wheels@2025.10.0
with:
abi: cp313
tag: musllinux_1_2
@@ -132,7 +132,7 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
-uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"


@@ -346,7 +346,7 @@ jobs:
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
-uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
with:
cosign-release: "v2.5.3"
- name: Restore Python virtual environment
@@ -386,7 +386,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
-uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: coverage
path: .coverage
@@ -417,7 +417,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
-uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: coverage
path: coverage/


@@ -9,13 +9,14 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
-- uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
+- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30
days-before-close: 7
stale-issue-label: "stale"
exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
+only-issue-types: "bug"
stale-issue-message: >
There hasn't been any activity on this issue recently. Due to the
high number of incoming GitHub notifications, we have to clean some

AGENTS.md Symbolic link

@@ -0,0 +1 @@
+.github/copilot-instructions.md


@@ -1,14 +1,15 @@
 aiodns==3.5.0
-aiohttp==3.12.15
+aiodocker==0.24.0
+aiohttp==3.13.1
 atomicwrites-homeassistant==1.4.1
-attrs==25.3.0
+attrs==25.4.0
 awesomeversion==25.8.0
 blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.3
-colorlog==6.9.0
+colorlog==6.10.1
 cpe==1.3.1
-cryptography==46.0.2
+cryptography==46.0.3
 debugpy==1.8.17
 deepmerge==2.0
 dirhash==0.5.0
@@ -17,14 +18,14 @@ faust-cchardet==2.1.19
 gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.11.3
+orjson==3.11.4
 pulsectl==24.12.0
-pyudev==0.24.3
+pyudev==0.24.4
 PyYAML==6.0.3
 requests==2.32.5
 securetar==2025.2.1
-sentry-sdk==2.39.0
+sentry-sdk==2.42.1
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.3
+dbus-fast==2.44.5
 zlib-fast==0.2.1


@@ -1,16 +1,16 @@
-astroid==3.3.11
-coverage==7.10.7
+astroid==4.0.1
+coverage==7.11.0
 mypy==1.18.2
 pre-commit==4.3.0
-pylint==3.3.8
+pylint==4.0.2
 pytest-aiohttp==1.1.0
 pytest-asyncio==0.25.2
 pytest-cov==7.0.0
 pytest-timeout==2.4.0
 pytest==8.4.2
-ruff==0.13.2
+ruff==0.14.2
 time-machine==2.19.0
-types-docker==7.1.0.20250916
+types-docker==7.1.0.20251009
 types-pyyaml==6.0.12.20250915
 types-requests==2.32.4.20250913
 urllib3==2.5.0


@@ -226,6 +226,7 @@ class Addon(AddonModel):
)
await self._check_ingress_port()
+default_image = self._image(self.data)
try:
await self.instance.attach(version=self.version)
@@ -774,7 +775,6 @@ class Addon(AddonModel):
raise AddonsError("Missing from store, cannot install!")
await self.sys_addons.data.install(self.addon_store)
-await self.load()
def setup_data():
if not self.path_data.is_dir():
@@ -797,6 +797,9 @@ class Addon(AddonModel):
await self.sys_addons.data.uninstall(self)
raise AddonsError() from err
+# Finish initialization and set up listeners
+await self.load()
# Add to addon manager
self.sys_addons.local[self.slug] = self


@@ -9,8 +9,6 @@ from typing import Self, Union
from attr import evolve
-from supervisor.jobs.const import JobConcurrency
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
@@ -21,6 +19,8 @@ from ..exceptions import (
DockerError,
HassioError,
)
+from ..jobs import ChildJobSyncFilter
+from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
@@ -182,6 +182,9 @@ class AddonManager(CoreSysAttributes):
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
concurrency=JobConcurrency.QUEUE,
+child_job_syncs=[
+ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+],
)
async def install(
self, slug: str, *, validation_complete: asyncio.Event | None = None
@@ -229,6 +232,13 @@ class AddonManager(CoreSysAttributes):
name="addon_manager_update",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
+# We assume for now the docker image pull is 100% of this task for progress
+# allocation. But from a user perspective that isn't true. Other steps
+# that take time which is not accounted for in progress include:
+# partial backup, image cleanup, apparmor update, and addon restart
+child_job_syncs=[
+ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+],
)
async def update(
self,
@@ -271,7 +281,10 @@ class AddonManager(CoreSysAttributes):
addons=[addon.slug],
)
-return await addon.update()
+task = await addon.update()
+_LOGGER.info("Add-on '%s' successfully updated", slug)
+return task
@Job(
name="addon_manager_rebuild",


@@ -72,6 +72,7 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
+ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_VERSION,
@@ -462,6 +463,11 @@ class AddonModel(JobGroup, ABC):
"""Return True if the add-on have his own udev."""
return self.data[ATTR_UDEV]
+@property
+def ulimits(self) -> dict[str, Any]:
+"""Return ulimits configuration."""
+return self.data[ATTR_ULIMITS]
@property
def with_kernel_modules(self) -> bool:
"""Return True if the add-on access to kernel modules."""


@@ -88,6 +88,7 @@ from ..const import (
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
+ATTR_ULIMITS,
ATTR_URL,
ATTR_USB,
ATTR_USER,
@@ -423,6 +424,20 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
False,
),
vol.Optional(ATTR_IMAGE): docker_image,
+vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
+{str: vol.Coerce(int)}, # Simple format: {name: limit}
+{
+str: vol.Any(
+vol.Coerce(int), # Simple format for individual entries
+vol.Schema(
+{ # Detailed format for individual entries
+vol.Required("soft"): vol.Coerce(int),
+vol.Required("hard"): vol.Coerce(int),
+}
+),
+)
+},
+),
vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
vol.Coerce(int), vol.Range(min=10, max=300)
),


@@ -2,6 +2,7 @@
from __future__ import annotations
+from asyncio import Task
from collections.abc import Callable, Coroutine
import logging
from typing import Any
@@ -38,11 +39,13 @@ class Bus(CoreSysAttributes):
self._listeners.setdefault(event, []).append(listener)
return listener
-def fire_event(self, event: BusEvent, reference: Any) -> None:
+def fire_event(self, event: BusEvent, reference: Any) -> list[Task]:
 """Fire an event to the bus."""
 _LOGGER.debug("Fire event '%s' with '%s'", event, reference)
+tasks: list[Task] = []
 for listener in self._listeners.get(event, []):
-self.sys_create_task(listener.callback(reference))
+tasks.append(self.sys_create_task(listener.callback(reference)))
+return tasks
def remove_listener(self, listener: EventListener) -> None:
"""Unregister an listener."""


@@ -348,6 +348,7 @@ ATTR_TRANSLATIONS = "translations"
ATTR_TYPE = "type"
ATTR_UART = "uart"
ATTR_UDEV = "udev"
+ATTR_ULIMITS = "ulimits"
ATTR_UNHEALTHY = "unhealthy"
ATTR_UNSAVED = "unsaved"
ATTR_UNSUPPORTED = "unsupported"


@@ -9,6 +9,7 @@ import os
from pathlib import Path
from typing import TYPE_CHECKING, cast
+import aiodocker
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
@@ -318,7 +319,18 @@ class DockerAddon(DockerInterface):
mem = 128 * 1024 * 1024
limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))
-# Return None if no capabilities is present
+# Add configurable ulimits from add-on config
+for name, config in self.addon.ulimits.items():
+if isinstance(config, int):
+# Simple format: both soft and hard limits are the same
+limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
+elif isinstance(config, dict):
+# Detailed format: both soft and hard limits are mandatory
+soft = config["soft"]
+hard = config["hard"]
+limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))
+# Return None if no ulimits are present
if limits:
return limits
return None
@@ -706,19 +718,21 @@ class DockerAddon(DockerInterface):
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
-addon_image = self.sys_docker.images.get(addon_image_tag)
-return addon_image, logs
+return addon_image_tag, logs
try:
-docker_image, log = await self.sys_run_in_executor(build_image)
+addon_image_tag, log = await self.sys_run_in_executor(build_image)
_LOGGER.debug("Build %s:%s done: %s", self.image, version, log)
# Update meta data
-self._meta = docker_image.attrs
+self._meta = await self.sys_docker.images.inspect(addon_image_tag)
-except (docker.errors.DockerException, requests.RequestException) as err:
+except (
+docker.errors.DockerException,
+requests.RequestException,
+aiodocker.DockerError,
+) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
raise DockerError() from err
@@ -740,11 +754,8 @@ class DockerAddon(DockerInterface):
)
async def import_image(self, tar_file: Path) -> None:
"""Import a tar file as image."""
-docker_image = await self.sys_run_in_executor(
-self.sys_docker.import_image, tar_file
-)
-if docker_image:
-self._meta = docker_image.attrs
+if docker_image := await self.sys_docker.import_image(tar_file):
+self._meta = docker_image
_LOGGER.info("Importing image %s and version %s", tar_file, self.version)
with suppress(DockerError):
@@ -758,17 +769,21 @@ class DockerAddon(DockerInterface):
version: AwesomeVersion | None = None,
) -> None:
"""Check if old version exists and cleanup other versions of image not in use."""
-await self.sys_run_in_executor(
-self.sys_docker.cleanup_old_images,
-(image := image or self.image),
-version or self.version,
+if not (use_image := image or self.image):
+raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
+if not (use_version := version or self.version):
+raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
+await self.sys_docker.cleanup_old_images(
+use_image,
+use_version,
{old_image} if old_image else None,
keep_images={
f"{addon.image}:{addon.version}"
for addon in self.sys_addons.installed
if addon.slug != self.addon.slug
and addon.image
-and addon.image in {old_image, image}
+and addon.image in {old_image, use_image}
},
)


@@ -1,6 +1,5 @@
"""Init file for Supervisor Docker object."""
-from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import re
@@ -236,13 +235,12 @@ class DockerHomeAssistant(DockerInterface):
environment={ENV_TIME: self.sys_timezone},
)
-def is_initialize(self) -> Awaitable[bool]:
+async def is_initialize(self) -> bool:
 """Return True if Docker container exists."""
-return self.sys_run_in_executor(
-self.sys_docker.container_is_initialized,
-self.name,
-self.image,
-self.sys_homeassistant.version,
+if not self.sys_homeassistant.version:
+return False
+return await self.sys_docker.container_is_initialized(
+self.name, self.image, self.sys_homeassistant.version
)
async def _validate_trust(self, image_id: str) -> None:


@@ -6,17 +6,18 @@ from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Awaitable
from contextlib import suppress
+from http import HTTPStatus
import logging
import re
from time import time
from typing import Any, cast
from uuid import uuid4
+import aiodocker
from awesomeversion import AwesomeVersion
from awesomeversion.strategy import AwesomeVersionStrategy
import docker
from docker.models.containers import Container
-from docker.models.images import Image
import requests
from ..bus import EventListener
@@ -35,6 +36,7 @@ from ..exceptions import (
CodeNotaryUntrusted,
DockerAPIError,
DockerError,
+DockerHubRateLimitExceeded,
DockerJobError,
DockerLogOutOfOrder,
DockerNotFound,
@@ -218,12 +220,14 @@ class DockerInterface(JobGroup, ABC):
if not credentials:
return
-await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)
+await self.sys_run_in_executor(self.sys_docker.dockerpy.login, **credentials)
-def _process_pull_image_log(self, job_id: str, reference: PullLogEntry) -> None:
+def _process_pull_image_log(
+self, install_job_id: str, reference: PullLogEntry
+) -> None:
"""Process events fired from a docker while pulling an image, filtered to a given job id."""
if (
-reference.job_id != job_id
+reference.job_id != install_job_id
or not reference.id
or not reference.status
or not (stage := PullImageLayerStage.from_status(reference.status))
@@ -237,21 +241,22 @@ class DockerInterface(JobGroup, ABC):
name="Pulling container image layer",
initial_stage=stage.status,
reference=reference.id,
-parent_id=job_id,
+parent_id=install_job_id,
internal=True,
)
job.done = False
return
# Find our sub job to update details of
for j in self.sys_jobs.jobs:
-if j.parent_id == job_id and j.reference == reference.id:
+if j.parent_id == install_job_id and j.reference == reference.id:
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
if not job:
raise DockerLogOutOfOrder(
-f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {job_id} but could not find a matching job, skipping",
+f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
@@ -303,6 +308,8 @@ class DockerInterface(JobGroup, ABC):
# Our filters have all passed. Time to update the job
# Only downloading and extracting have progress details. Use that to set extra
# We'll leave it around on later stages as the total bytes may be useful after that stage
+# Enforce range to prevent float drift error
+progress = max(0, min(progress, 100))
if (
stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
and reference.progress_detail
@@ -325,10 +332,56 @@ class DockerInterface(JobGroup, ABC):
else job.extra,
)
+# Once we have received a progress update for every child job, start to set status of the main one
+install_job = self.sys_jobs.get_job(install_job_id)
+layer_jobs = [
+job
+for job in self.sys_jobs.jobs
+if job.parent_id == install_job.uuid
+and job.name == "Pulling container image layer"
+]
+# First set the total bytes to be downloaded/extracted on the main job
+if not install_job.extra:
+total = 0
+for job in layer_jobs:
+if not job.extra:
+return
+total += job.extra["total"]
+install_job.extra = {"total": total}
+else:
+total = install_job.extra["total"]
+# Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
+progress = 0.0
+stage = PullImageLayerStage.PULL_COMPLETE
+for job in layer_jobs:
+if not job.extra:
+return
+progress += job.progress * (job.extra["total"] / total)
+job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
+if job_stage < PullImageLayerStage.EXTRACTING:
+stage = PullImageLayerStage.DOWNLOADING
+elif (
+stage == PullImageLayerStage.PULL_COMPLETE
+and job_stage < PullImageLayerStage.PULL_COMPLETE
+):
+stage = PullImageLayerStage.EXTRACTING
+# Ensure progress is 100 at this point to prevent float drift
+if stage == PullImageLayerStage.PULL_COMPLETE:
+progress = 100
+# To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
+if stage != install_job.stage or progress >= install_job.progress + 1:
+install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
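The aggregation logic in this hunk boils down to a size-weighted average of per-layer progress, clamped to [0, 100]. A condensed, standalone sketch (simplified data model, not the Supervisor's job classes):

```python
# Overall pull progress = sum of each layer's progress weighted by its
# share of the total bytes, clamped against float drift. LayerJob is a
# hypothetical stand-in for the per-layer sub-jobs.
from dataclasses import dataclass


@dataclass
class LayerJob:
    progress: float  # 0-100 for this layer
    total: int       # layer size in bytes


def overall_progress(layers: list[LayerJob]) -> float:
    total = sum(layer.total for layer in layers)
    if not total:
        return 0.0
    progress = sum(
        layer.progress * (layer.total / total) for layer in layers
    )
    return max(0.0, min(progress, 100.0))  # enforce range against drift


layers = [LayerJob(100.0, 768), LayerJob(50.0, 256)]
print(overall_progress(layers))  # 0.75 * 100 + 0.25 * 50 = 87.5
```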
@Job(
name="docker_interface_install",
on_condition=DockerJobError,
concurrency=JobConcurrency.GROUP_REJECT,
internal=True,
)
async def install(
self,
@@ -351,11 +404,11 @@ class DockerInterface(JobGroup, ABC):
# Try login if we have defined credentials
await self._docker_login(image)
job_id = self.sys_jobs.current.uuid
curr_job_id = self.sys_jobs.current.uuid
async def process_pull_image_log(reference: PullLogEntry) -> None:
try:
self._process_pull_image_log(job_id, reference)
self._process_pull_image_log(curr_job_id, reference)
except DockerLogOutOfOrder as err:
# Send all these to sentry. Missing a few progress updates
# shouldn't matter to users but matters to us
@@ -366,8 +419,7 @@ class DockerInterface(JobGroup, ABC):
)
# Pull new image
docker_image = await self.sys_run_in_executor(
self.sys_docker.pull_image,
docker_image = await self.sys_docker.pull_image(
self.sys_jobs.current.uuid,
image,
str(version),
@@ -376,13 +428,11 @@ class DockerInterface(JobGroup, ABC):
# Validate content
try:
await self._validate_trust(cast(str, docker_image.id))
await self._validate_trust(cast(str, docker_image["Id"]))
except CodeNotaryError:
with suppress(docker.errors.DockerException):
await self.sys_run_in_executor(
self.sys_docker.images.remove,
image=f"{image}:{version!s}",
force=True,
with suppress(aiodocker.DockerError, requests.RequestException):
await self.sys_docker.images.delete(
f"{image}:{version!s}", force=True
)
raise
@@ -391,22 +441,37 @@ class DockerInterface(JobGroup, ABC):
_LOGGER.info(
"Tagging image %s with version %s as latest", image, version
)
await self.sys_run_in_executor(docker_image.tag, image, tag="latest")
await self.sys_docker.images.tag(
docker_image["Id"], image, tag="latest"
)
except docker.errors.APIError as err:
if err.status_code == 429:
if err.status_code == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
_LOGGER.info(
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see https://www.home-assistant.io/more-info/dockerhub-rate-limit"
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (docker.errors.DockerException, requests.RequestException) as err:
except aiodocker.DockerError as err:
if err.status == HTTPStatus.TOO_MANY_REQUESTS:
self.sys_resolution.create_issue(
IssueType.DOCKER_RATELIMIT,
ContextType.SYSTEM,
suggestions=[SuggestionType.REGISTRY_LOGIN],
)
raise DockerHubRateLimitExceeded(_LOGGER.error) from err
await async_capture_exception(err)
raise DockerError(
f"Can't install {image}:{version!s}: {err}", _LOGGER.error
) from err
except (
docker.errors.DockerException,
requests.RequestException,
) as err:
await async_capture_exception(err)
raise DockerError(
f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
@@ -425,14 +490,12 @@ class DockerInterface(JobGroup, ABC):
if listener:
self.sys_bus.remove_listener(listener)
self._meta = docker_image.attrs
self._meta = docker_image
async def exists(self) -> bool:
"""Return True if Docker image exists in local repository."""
with suppress(docker.errors.DockerException, requests.RequestException):
await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
with suppress(aiodocker.DockerError, requests.RequestException):
await self.sys_docker.images.inspect(f"{self.image}:{self.version!s}")
return True
return False
@@ -491,11 +554,11 @@ class DockerInterface(JobGroup, ABC):
),
)
with suppress(docker.errors.DockerException, requests.RequestException):
with suppress(aiodocker.DockerError, requests.RequestException):
if not self._meta and self.image:
self._meta = self.sys_docker.images.get(
self._meta = await self.sys_docker.images.inspect(
f"{self.image}:{version!s}"
).attrs
)
# Successful?
if not self._meta:
@@ -563,14 +626,17 @@ class DockerInterface(JobGroup, ABC):
)
async def remove(self, *, remove_image: bool = True) -> None:
"""Remove Docker images."""
if not self.image or not self.version:
raise DockerError(
"Cannot determine image and/or version from metadata!", _LOGGER.error
)
# Cleanup container
with suppress(DockerError):
await self.stop()
if remove_image:
await self.sys_run_in_executor(
self.sys_docker.remove_image, self.image, self.version
)
await self.sys_docker.remove_image(self.image, self.version)
self._meta = None
@@ -592,18 +658,16 @@ class DockerInterface(JobGroup, ABC):
image_name = f"{expected_image}:{version!s}"
if self.image == expected_image:
try:
image: Image = await self.sys_run_in_executor(
self.sys_docker.images.get, image_name
)
except (docker.errors.DockerException, requests.RequestException) as err:
image = await self.sys_docker.images.inspect(image_name)
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Could not get {image_name} for check due to: {err!s}",
_LOGGER.error,
) from err
image_arch = f"{image.attrs['Os']}/{image.attrs['Architecture']}"
if "Variant" in image.attrs:
image_arch = f"{image_arch}/{image.attrs['Variant']}"
image_arch = f"{image['Os']}/{image['Architecture']}"
if "Variant" in image:
image_arch = f"{image_arch}/{image['Variant']}"
# If we have an image and it's the right arch, all set
# It seems that newer Docker versions return a variant for arm64 images.
@@ -629,7 +693,10 @@ class DockerInterface(JobGroup, ABC):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False
self,
version: AwesomeVersion,
image: str | None = None,
latest: bool = False,
) -> None:
"""Update a Docker image."""
image = image or self.image
@@ -662,11 +729,13 @@ class DockerInterface(JobGroup, ABC):
version: AwesomeVersion | None = None,
) -> None:
"""Check if old version exists and cleanup."""
await self.sys_run_in_executor(
self.sys_docker.cleanup_old_images,
image or self.image,
version or self.version,
{old_image} if old_image else None,
if not (use_image := image or self.image):
raise DockerError("Cannot determine image from metadata!", _LOGGER.error)
if not (use_version := version or self.version):
raise DockerError("Cannot determine version from metadata!", _LOGGER.error)
await self.sys_docker.cleanup_old_images(
use_image, use_version, {old_image} if old_image else None
)
@Job(
@@ -718,10 +787,10 @@ class DockerInterface(JobGroup, ABC):
"""Return latest version of local image."""
available_version: list[AwesomeVersion] = []
try:
for image in await self.sys_run_in_executor(
self.sys_docker.images.list, self.image
for image in await self.sys_docker.images.list(
filters=f'{{"reference": ["{self.image}"]}}'
):
for tag in image.tags:
for tag in image["RepoTags"]:
version = AwesomeVersion(tag.partition(":")[2])
if version.strategy == AwesomeVersionStrategy.UNKNOWN:
continue
@@ -730,7 +799,7 @@ class DockerInterface(JobGroup, ABC):
if not available_version:
raise ValueError()
except (docker.errors.DockerException, ValueError) as err:
except (aiodocker.DockerError, ValueError) as err:
raise DockerNotFound(
f"No version found for {self.image}", _LOGGER.info
) from err
@@ -769,10 +838,10 @@ class DockerInterface(JobGroup, ABC):
async def check_trust(self) -> None:
"""Check trust of existing Docker image."""
try:
image = await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
image = await self.sys_docker.images.inspect(
f"{self.image}:{self.version!s}"
)
except (docker.errors.DockerException, requests.RequestException):
except (aiodocker.DockerError, requests.RequestException):
return
await self._validate_trust(cast(str, image.id))
await self._validate_trust(cast(str, image["Id"]))


@@ -6,20 +6,24 @@ import asyncio
from contextlib import suppress
from dataclasses import dataclass
from functools import partial
from http import HTTPStatus
from ipaddress import IPv4Address
import json
import logging
import os
from pathlib import Path
import re
from typing import Any, Final, Self, cast
import aiodocker
from aiodocker.images import DockerImages
from aiohttp import ClientSession, ClientTimeout, UnixConnector
import attr
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors
from docker.api.client import APIClient
from docker.client import DockerClient
from docker.errors import DockerException, ImageNotFound, NotFound
from docker.models.containers import Container, ContainerCollection
from docker.models.images import Image, ImageCollection
from docker.models.networks import Network
from docker.types.daemon import CancellableStream
import requests
@@ -53,6 +57,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0")
DOCKER_NETWORK_HOST: Final = "host"
RE_IMPORT_IMAGE_STREAM = re.compile(r"(^Loaded image ID: |^Loaded image: )(.+)$")
@attr.s(frozen=True)
@@ -204,7 +209,15 @@ class DockerAPI(CoreSysAttributes):
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys = coresys
self._docker: DockerClient | None = None
# We keep both until we can fully refactor to aiodocker
self._dockerpy: DockerClient | None = None
self.docker: aiodocker.Docker = aiodocker.Docker(
url="unix://localhost", # dummy hostname for URL composition
connector=(connector := UnixConnector(SOCKET_DOCKER.as_posix())),
session=ClientSession(connector=connector, timeout=ClientTimeout(900)),
api_version="auto",
)
self._network: DockerNetwork | None = None
self._info: DockerInfo | None = None
self.config: DockerConfig = DockerConfig()
@@ -212,28 +225,30 @@ class DockerAPI(CoreSysAttributes):
async def post_init(self) -> Self:
"""Post init actions that must be done in event loop."""
self._docker = await asyncio.get_running_loop().run_in_executor(
# Use /var/run/docker.sock for this one so aiodocker and dockerpy don't
# share the same handle. Temporary fix while refactoring this client out
self._dockerpy = await asyncio.get_running_loop().run_in_executor(
None,
partial(
DockerClient,
base_url=f"unix:/{str(SOCKET_DOCKER)}",
base_url=f"unix://var{SOCKET_DOCKER.as_posix()}",
version="auto",
timeout=900,
),
)
self._info = DockerInfo.new(self.docker.info())
self._info = DockerInfo.new(self.dockerpy.info())
await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init(
self._network = await DockerNetwork(self.dockerpy).post_init(
self.config.enable_ipv6, self.config.mtu
)
return self
@property
def docker(self) -> DockerClient:
def dockerpy(self) -> DockerClient:
"""Get docker API client."""
if not self._docker:
if not self._dockerpy:
raise RuntimeError("Docker API Client not initialized!")
return self._docker
return self._dockerpy
@property
def network(self) -> DockerNetwork:
@@ -243,19 +258,19 @@ class DockerAPI(CoreSysAttributes):
return self._network
@property
def images(self) -> ImageCollection:
def images(self) -> DockerImages:
"""Return API images."""
return self.docker.images
@property
def containers(self) -> ContainerCollection:
"""Return API containers."""
return self.docker.containers
return self.dockerpy.containers
@property
def api(self) -> APIClient:
"""Return API client."""
return self.docker.api
return self.dockerpy.api
@property
def info(self) -> DockerInfo:
@@ -267,7 +282,7 @@ class DockerAPI(CoreSysAttributes):
@property
def events(self) -> CancellableStream:
"""Return docker event stream."""
return self.docker.events(decode=True)
return self.dockerpy.events(decode=True)
@property
def monitor(self) -> DockerMonitor:
@@ -383,7 +398,7 @@ class DockerAPI(CoreSysAttributes):
with suppress(DockerError):
self.network.detach_default_bridge(container)
else:
host_network: Network = self.docker.networks.get(DOCKER_NETWORK_HOST)
host_network: Network = self.dockerpy.networks.get(DOCKER_NETWORK_HOST)
# Check if container is registered on host
# https://github.com/moby/moby/issues/23302
@@ -410,35 +425,32 @@ class DockerAPI(CoreSysAttributes):
return container
def pull_image(
async def pull_image(
self,
job_id: str,
repository: str,
tag: str = "latest",
platform: str | None = None,
) -> Image:
) -> dict[str, Any]:
"""Pull the specified image and return it.
This mimics the high level API of images.pull but provides better error handling by raising
based on a docker error on pull. Whereas the high level API ignores all errors on pull and
raises only if the get fails afterwards. Additionally it fires progress reports for the pull
on the bus so listeners can use that to update status for users.
Must be run in executor.
"""
pull_log = self.docker.api.pull(
repository, tag=tag, platform=platform, stream=True, decode=True
)
for e in pull_log:
async for e in self.images.pull(
repository, tag=tag, platform=platform, stream=True
):
entry = PullLogEntry.from_pull_log_dict(job_id, e)
if entry.error:
raise entry.exception
self.sys_loop.call_soon_threadsafe(
self.sys_bus.fire_event, BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry
await asyncio.gather(
*self.sys_bus.fire_event(BusEvent.DOCKER_IMAGE_PULL_UPDATE, entry)
)
sep = "@" if tag.startswith("sha256:") else ":"
return self.images.get(f"{repository}{sep}{tag}")
return await self.images.inspect(f"{repository}{sep}{tag}")
def run_command(
self,
@@ -459,7 +471,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Running command '%s' on %s", command, image_with_tag)
container = None
try:
container = self.docker.containers.run(
container = self.dockerpy.containers.run(
image_with_tag,
command=command,
detach=True,
@@ -487,35 +499,35 @@ class DockerAPI(CoreSysAttributes):
"""Repair local docker overlayfs2 issues."""
_LOGGER.info("Prune stale containers")
try:
output = self.docker.api.prune_containers()
output = self.dockerpy.api.prune_containers()
_LOGGER.debug("Containers prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = self.docker.api.prune_images(filters={"dangling": False})
output = self.dockerpy.api.prune_images(filters={"dangling": False})
_LOGGER.debug("Images prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = self.docker.api.prune_builds()
output = self.dockerpy.api.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = self.docker.api.prune_builds()
output = self.dockerpy.api.prune_volumes()
_LOGGER.debug("Volumes prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale networks")
try:
output = self.docker.api.prune_networks()
output = self.dockerpy.api.prune_networks()
_LOGGER.debug("Networks prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for networks prune: %s", err)
@@ -537,11 +549,11 @@ class DockerAPI(CoreSysAttributes):
Fix: https://github.com/moby/moby/issues/23302
"""
network: Network = self.docker.networks.get(network_name)
network: Network = self.dockerpy.networks.get(network_name)
for cid, data in network.attrs.get("Containers", {}).items():
try:
self.docker.containers.get(cid)
self.dockerpy.containers.get(cid)
continue
except docker_errors.NotFound:
_LOGGER.debug(
@@ -556,22 +568,26 @@ class DockerAPI(CoreSysAttributes):
with suppress(docker_errors.DockerException, requests.RequestException):
network.disconnect(data.get("Name", cid), force=True)
def container_is_initialized(
async def container_is_initialized(
self, name: str, image: str, version: AwesomeVersion
) -> bool:
"""Return True if docker container exists in good state and is built from expected image."""
try:
docker_container = self.containers.get(name)
docker_image = self.images.get(f"{image}:{version}")
except NotFound:
docker_container = await self.sys_run_in_executor(self.containers.get, name)
docker_image = await self.images.inspect(f"{image}:{version}")
except docker_errors.NotFound:
return False
except (DockerException, requests.RequestException) as err:
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
return False
raise DockerError() from err
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# Check the image is correct and state is good
return (
docker_container.image is not None
and docker_container.image.id == docker_image.id
and docker_container.image.id == docker_image["Id"]
and docker_container.status in ("exited", "running", "created")
)
@@ -581,18 +597,18 @@ class DockerAPI(CoreSysAttributes):
"""Stop/remove Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
if docker_container.status == "running":
_LOGGER.info("Stopping %s application", name)
with suppress(DockerException, requests.RequestException):
with suppress(docker_errors.DockerException, requests.RequestException):
docker_container.stop(timeout=timeout)
if remove_container:
with suppress(DockerException, requests.RequestException):
with suppress(docker_errors.DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True, v=True)
@@ -604,11 +620,11 @@ class DockerAPI(CoreSysAttributes):
"""Start Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound(
f"{name} not found for starting up", _LOGGER.error
) from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get {name} for starting up", _LOGGER.error
) from err
@@ -616,36 +632,36 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Starting %s", name)
try:
docker_container.start()
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't start {name}: {err}", _LOGGER.error) from err
def restart_container(self, name: str, timeout: int) -> None:
"""Restart docker container."""
try:
container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
_LOGGER.info("Restarting %s", name)
try:
container.restart(timeout=timeout)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(f"Can't restart {name}: {err}", _LOGGER.warning) from err
def container_logs(self, name: str, tail: int = 100) -> bytes:
"""Return Docker logs of container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
try:
return docker_container.logs(tail=tail, stdout=True, stderr=True)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't grep logs from {name}: {err}", _LOGGER.warning
) from err
@@ -654,9 +670,9 @@ class DockerAPI(CoreSysAttributes):
"""Read and return stats from container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# container is not running
@@ -665,7 +681,7 @@ class DockerAPI(CoreSysAttributes):
try:
return docker_container.stats(stream=False)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't read stats from {name}: {err}", _LOGGER.error
) from err
@@ -674,61 +690,84 @@ class DockerAPI(CoreSysAttributes):
"""Execute a command inside Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
except docker_errors.NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# Execute
try:
code, output = docker_container.exec_run(command)
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
return CommandReturn(code, output)
def remove_image(
async def remove_image(
self, image: str, version: AwesomeVersion, latest: bool = True
) -> None:
"""Remove a Docker image by version and latest."""
try:
if latest:
_LOGGER.info("Removing image %s with latest", image)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:latest", force=True)
try:
await self.images.delete(f"{image}:latest", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
_LOGGER.info("Removing image %s with %s", image, version)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:{version!s}", force=True)
try:
await self.images.delete(f"{image}:{version!s}", force=True)
except aiodocker.DockerError as err:
if err.status != HTTPStatus.NOT_FOUND:
raise
except (DockerException, requests.RequestException) as err:
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Can't remove image {image}: {err}", _LOGGER.warning
) from err
def import_image(self, tar_file: Path) -> Image | None:
async def import_image(self, tar_file: Path) -> dict[str, Any] | None:
"""Import a tar file as image."""
try:
with tar_file.open("rb") as read_tar:
docker_image_list: list[Image] = self.images.load(read_tar) # type: ignore
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
return docker_image_list[0]
except (DockerException, OSError) as err:
resp: list[dict[str, Any]] = self.images.import_image(read_tar)
except (aiodocker.DockerError, OSError) as err:
raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error
) from err
docker_image_list: list[str] = []
for chunk in resp:
if "errorDetail" in chunk:
raise DockerError(
f"Can't import image from tar: {chunk['errorDetail']['message']}",
_LOGGER.error,
)
if "stream" in chunk:
if match := RE_IMPORT_IMAGE_STREAM.search(chunk["stream"]):
docker_image_list.append(match.group(2))
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
try:
return await self.images.inspect(docker_image_list[0])
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Could not inspect imported image due to: {err!s}", _LOGGER.error
) from err
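The stream parsing in `import_image` hinges on `RE_IMPORT_IMAGE_STREAM` matching the `Loaded image:` lines that `docker load` emits. A standalone sketch of that extraction (the image reference below is made up):

```python
import re

# Same pattern as RE_IMPORT_IMAGE_STREAM above
RE_IMPORT_IMAGE_STREAM = re.compile(r"(^Loaded image ID: |^Loaded image: )(.+)$")

chunks = [
    {"status": "Loading layer"},
    {"stream": "Loaded image: ghcr.io/example/addon:1.2.3\n"},
]
names = [
    match.group(2)
    for chunk in chunks
    if "stream" in chunk
    and (match := RE_IMPORT_IMAGE_STREAM.search(chunk["stream"]))
]
```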
def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None:
"""Export current images into a tar file."""
try:
docker_image = self.api.get_image(f"{image}:{version}")
except (DockerException, requests.RequestException) as err:
except (docker_errors.DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't fetch image {image}: {err}", _LOGGER.error
) from err
@@ -745,7 +784,7 @@ class DockerAPI(CoreSysAttributes):
_LOGGER.info("Export image %s done", image)
def cleanup_old_images(
async def cleanup_old_images(
self,
current_image: str,
current_version: AwesomeVersion,
@@ -756,46 +795,57 @@ class DockerAPI(CoreSysAttributes):
"""Clean up old versions of an image."""
image = f"{current_image}:{current_version!s}"
try:
keep = {cast(str, self.images.get(image).id)}
except ImageNotFound:
raise DockerNotFound(
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
except (DockerException, requests.RequestException) as err:
try:
image_attr = await self.images.inspect(image)
except aiodocker.DockerError as err:
if err.status == HTTPStatus.NOT_FOUND:
raise DockerNotFound(
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
raise
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Can't get {current_image} for cleanup", _LOGGER.warning
) from err
keep = {cast(str, image_attr["Id"])}
if keep_images:
keep_images -= {image}
try:
for image in keep_images:
# If it's not found, no need to preserve it from getting removed
with suppress(ImageNotFound):
keep.add(cast(str, self.images.get(image).id))
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Failed to get one or more images from {keep} during cleanup",
_LOGGER.warning,
) from err
results = await asyncio.gather(
*[self.images.inspect(image) for image in keep_images],
return_exceptions=True,
)
for result in results:
# If it's not found, no need to preserve it from getting removed
if (
isinstance(result, aiodocker.DockerError)
and result.status == HTTPStatus.NOT_FOUND
):
continue
if isinstance(result, BaseException):
raise DockerError(
f"Failed to get one or more images from {keep} during cleanup",
_LOGGER.warning,
) from result
keep.add(cast(str, result["Id"]))
# Cleanup old and current
image_names = list(
old_images | {current_image} if old_images else {current_image}
)
try:
# This API accepts a list of image names. Tested and confirmed working on docker==7.1.0
# Its typing does say only `str` though. Bit concerning, could an update break this?
images_list = self.images.list(name=image_names) # type: ignore
except (DockerException, requests.RequestException) as err:
images_list = await self.images.list(
filters=json.dumps({"reference": image_names})
)
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(
f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
) from err
for docker_image in images_list:
if docker_image.id in keep:
if docker_image["Id"] in keep:
continue
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleanup images: %s", docker_image.tags)
self.images.remove(docker_image.id, force=True)
with suppress(aiodocker.DockerError, requests.RequestException):
_LOGGER.info("Cleanup images: %s", docker_image["RepoTags"])
await self.images.delete(docker_image["Id"], force=True)
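The `gather(..., return_exceptions=True)` pattern in `cleanup_old_images` can be modeled without a Docker daemon. `FakeDockerError` below stands in for `aiodocker.DockerError`, and the registry dict simulates `images.inspect`; names and IDs are illustrative:

```python
import asyncio
from http import HTTPStatus

class FakeDockerError(Exception):
    """Stand-in for aiodocker.DockerError in this sketch."""

    def __init__(self, status: int) -> None:
        super().__init__(status)
        self.status = status

async def inspect(name: str, registry: dict[str, str]) -> dict[str, str]:
    # Simulates images.inspect: 404 for unknown references
    if name not in registry:
        raise FakeDockerError(HTTPStatus.NOT_FOUND)
    return {"Id": registry[name]}

async def collect_keep_ids(names: list[str], registry: dict[str, str]) -> set[str]:
    results = await asyncio.gather(
        *(inspect(name, registry) for name in names), return_exceptions=True
    )
    keep: set[str] = set()
    for result in results:
        # A missing image needs no protection from cleanup
        if isinstance(result, FakeDockerError) and result.status == HTTPStatus.NOT_FOUND:
            continue
        if isinstance(result, BaseException):
            raise result
        keep.add(result["Id"])
    return keep

keep = asyncio.run(collect_keep_ids(["kept", "missing"], {"kept": "sha256:abc"}))
```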


@@ -1,10 +1,12 @@
"""Init file for Supervisor Docker object."""
import asyncio
from collections.abc import Awaitable
from ipaddress import IPv4Address
import logging
import os
import aiodocker
from awesomeversion.awesomeversion import AwesomeVersion
import docker
import requests
@@ -112,19 +114,18 @@ class DockerSupervisor(DockerInterface):
name="docker_supervisor_update_start_tag",
concurrency=JobConcurrency.GROUP_QUEUE,
)
def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]:
async def update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version."""
return self.sys_run_in_executor(self._update_start_tag, image, version)
def _update_start_tag(self, image: str, version: AwesomeVersion) -> None:
"""Update start tag to new version.
Needs to run inside an executor.
"""
try:
docker_container = self.sys_docker.containers.get(self.name)
docker_image = self.sys_docker.images.get(f"{image}:{version!s}")
except (docker.errors.DockerException, requests.RequestException) as err:
docker_container = await self.sys_run_in_executor(
self.sys_docker.containers.get, self.name
)
docker_image = await self.sys_docker.images.inspect(f"{image}:{version!s}")
except (
aiodocker.DockerError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(
f"Can't get image or container to fix start tag: {err}", _LOGGER.error
) from err
@@ -144,8 +145,14 @@ class DockerSupervisor(DockerInterface):
# If version tag
if start_tag != "latest":
continue
docker_image.tag(start_image, start_tag)
docker_image.tag(start_image, version.string)
await asyncio.gather(
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=start_tag
),
self.sys_docker.images.tag(
docker_image["Id"], start_image, tag=version.string
),
)
except (docker.errors.DockerException, requests.RequestException) as err:
except (aiodocker.DockerError, requests.RequestException) as err:
raise DockerError(f"Can't fix start tag: {err}", _LOGGER.error) from err


@@ -648,9 +648,32 @@ class DockerLogOutOfOrder(DockerError):
class DockerNoSpaceOnDevice(DockerError):
"""Raise if a docker pull fails due to available space."""
error_key = "docker_no_space_on_device"
message_template = "No space left on disk"
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__("No space left on disk", logger=logger)
super().__init__(None, logger=logger)
class DockerHubRateLimitExceeded(DockerError):
"""Raise for docker hub rate limit exceeded error."""
error_key = "dockerhub_rate_limit_exceeded"
message_template = (
"Your IP address has made too many requests to Docker Hub which activated a rate limit. "
"For more details see {dockerhub_rate_limit_url}"
)
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Raise & log."""
super().__init__(
None,
logger=logger,
extra_fields={
"dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
},
)
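How `message_template` and `extra_fields` might combine can be sketched with plain `str.format`; the real `DockerError` base class presumably performs an equivalent substitution, so this is an assumption about its behavior, not its actual code:

```python
# Sketch: render a templated error message from extra_fields
message_template = (
    "Your IP address has made too many requests to Docker Hub which activated a rate limit. "
    "For more details see {dockerhub_rate_limit_url}"
)
extra_fields = {
    "dockerhub_rate_limit_url": "https://www.home-assistant.io/more-info/dockerhub-rate-limit"
}
message = message_template.format(**extra_fields)
```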
class DockerJobError(DockerError, JobException):


@@ -9,7 +9,12 @@ from typing import Any
from supervisor.resolution.const import UnhealthyReason
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, DBusObjectError, HardwareNotFound
from ..exceptions import (
DBusError,
DBusNotConnectedError,
DBusObjectError,
HardwareNotFound,
)
from .const import UdevSubsystem
from .data import Device
@@ -207,6 +212,8 @@ class HwDisk(CoreSysAttributes):
try:
block_device = self.sys_dbus.udisks2.get_block_device_by_path(device_path)
drive = self.sys_dbus.udisks2.get_drive(block_device.drive)
except DBusNotConnectedError:
return None
except DBusObjectError:
_LOGGER.warning(
"Unable to find UDisks2 drive for device at %s", device_path.as_posix()


@@ -28,6 +28,7 @@ from ..exceptions import (
HomeAssistantUpdateError,
JobException,
)
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup
@@ -224,6 +225,13 @@ class HomeAssistantCore(JobGroup):
],
on_condition=HomeAssistantJobError,
concurrency=JobConcurrency.GROUP_REJECT,
# We assume for now the docker image pull is 100% of this task. But from
# a user perspective that isn't true. Other steps that take time which
# is not accounted for in progress include: partial backup, image
# cleanup, and Home Assistant restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(
self,


@@ -282,8 +282,10 @@ class JobManager(FileConfiguration, CoreSysAttributes):
# reporting shouldn't raise and break the active job
continue
progress = sync.starting_progress + (
sync.progress_allocation * job_data["progress"]
progress = min(
100,
sync.starting_progress
+ (sync.progress_allocation * job_data["progress"]),
)
# Using max would always trigger on change even if progress was unchanged
# pylint: disable-next=R1731
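The clamped sync formula above reduces to a one-liner. A minimal sketch (function name is invented for illustration):

```python
def synced_progress(starting: float, allocation: float, child_progress: float) -> float:
    # Cap at 100 so an overshooting child cannot push the parent past completion
    return min(100.0, starting + allocation * child_progress)

capped = synced_progress(30.0, 0.8, 100.0)
uncapped = synced_progress(10.0, 0.5, 40.0)
```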


@@ -8,7 +8,7 @@ from ..const import UnsupportedReason
from .base import EvaluateBase
EXPECTED_LOGGING = "journald"
EXPECTED_STORAGE = "overlay2"
EXPECTED_STORAGE = ("overlay2", "overlayfs")
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -41,14 +41,18 @@ class EvaluateDockerConfiguration(EvaluateBase):
storage_driver = self.sys_docker.info.storage
logging_driver = self.sys_docker.info.logging
if storage_driver != EXPECTED_STORAGE:
is_unsupported = False
if storage_driver not in EXPECTED_STORAGE:
is_unsupported = True
_LOGGER.warning(
"Docker storage driver %s is not supported!", storage_driver
)
if logging_driver != EXPECTED_LOGGING:
is_unsupported = True
_LOGGER.warning(
"Docker logging driver %s is not supported!", logging_driver
)
return storage_driver != EXPECTED_STORAGE or logging_driver != EXPECTED_LOGGING
return is_unsupported
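The change above turns `EXPECTED_STORAGE` from a single string into a tuple and switches the comparison from `!=` to `not in`, so both overlay2 and overlayfs pass the evaluation. A minimal sketch of the resulting check, using only the two module constants shown in the hunk:

```python
EXPECTED_STORAGE = ("overlay2", "overlayfs")
EXPECTED_LOGGING = "journald"


def is_unsupported(storage_driver: str, logging_driver: str) -> bool:
    """Return True when either Docker driver falls outside the supported set."""
    return storage_driver not in EXPECTED_STORAGE or logging_driver != EXPECTED_LOGGING
```

This is why the hunk also replaces the final `storage_driver != EXPECTED_STORAGE` expression: comparing a string against a tuple with `!=` would always be true.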


@@ -13,6 +13,8 @@ import aiohttp
from aiohttp.client_exceptions import ClientError
from awesomeversion import AwesomeVersion, AwesomeVersionException
from supervisor.jobs import ChildJobSyncFilter
from .const import (
ATTR_SUPERVISOR_INTERNET,
SUPERVISOR_VERSION,
@@ -195,6 +197,15 @@ class Supervisor(CoreSysAttributes):
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
@Job(
name="supervisor_update",
# We assume for now the docker image pull is 100% of this task. But from
# a user perspective that isn't true. Other steps that take time but are
# not accounted for in progress include: AppArmor update and restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update Supervisor version."""
version = version or self.latest_version or self.version
@@ -221,6 +232,7 @@ class Supervisor(CoreSysAttributes):
# Update container
_LOGGER.info("Update Supervisor to version %s", version)
try:
await self.instance.install(version, image=image)
await self.instance.update_start_tag(image, version)


@@ -3,22 +3,25 @@
import asyncio
from datetime import timedelta
import errno
from http import HTTPStatus
from pathlib import Path
from unittest.mock import MagicMock, PropertyMock, patch
from unittest.mock import MagicMock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import DockerException, ImageNotFound, NotFound
from docker.errors import APIError, DockerException, NotFound
import pytest
from securetar import SecureTarFile
from supervisor.addons.addon import Addon
from supervisor.addons.const import AddonBackupMode
from supervisor.addons.model import AddonModel
from supervisor.config import CoreConfig
from supervisor.const import AddonBoot, AddonState, BusEvent
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.manager import CommandReturn
from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
from supervisor.hardware.helper import HwHelper
@@ -861,16 +864,14 @@ async def test_addon_loads_wrong_image(
container.remove.assert_called_with(force=True, v=True)
# one for removing the addon, one for removing the addon builder
assert coresys.docker.images.remove.call_count == 2
assert coresys.docker.images.delete.call_count == 2
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "local/aarch64-addon-ssh:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "local/aarch64-addon-ssh:9.2.1",
"force": True,
}
assert coresys.docker.images.delete.call_args_list[0] == call(
"local/aarch64-addon-ssh:latest", force=True
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"local/aarch64-addon-ssh:9.2.1", force=True
)
mock_run_command.assert_called_once()
assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
@@ -894,7 +895,9 @@ async def test_addon_loads_missing_image(
mock_amd64_arch_supported,
):
"""Test addon corrects a missing image on load."""
coresys.docker.images.get.side_effect = ImageNotFound("missing")
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
HTTPStatus.NOT_FOUND, {"message": "missing"}
)
with (
patch("pathlib.Path.is_file", return_value=True),
@@ -926,41 +929,51 @@ async def test_addon_loads_missing_image(
assert install_addon_ssh.image == "local/amd64-addon-ssh"
@pytest.mark.parametrize(
"pull_image_exc",
[APIError("error"), aiodocker.DockerError(400, {"message": "error"})],
)
@pytest.mark.usefixtures("container", "mock_amd64_arch_supported")
async def test_addon_load_succeeds_with_docker_errors(
coresys: CoreSys,
install_addon_ssh: Addon,
container: MagicMock,
caplog: pytest.LogCaptureFixture,
mock_amd64_arch_supported,
pull_image_exc: Exception,
):
"""Docker errors while building/pulling an image during load should not raise and fail setup."""
# Build env invalid failure
coresys.docker.images.get.side_effect = ImageNotFound("missing")
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
HTTPStatus.NOT_FOUND, {"message": "missing"}
)
caplog.clear()
await install_addon_ssh.load()
assert "Invalid build environment" in caplog.text
# Image build failure
coresys.docker.images.build.side_effect = DockerException()
caplog.clear()
with (
patch("pathlib.Path.is_file", return_value=True),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
CoreConfig, "local_to_extern_path", return_value="/addon/path/on/host"
),
patch.object(
DockerAPI,
"run_command",
return_value=MagicMock(exit_code=1, output=b"error"),
),
):
await install_addon_ssh.load()
assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text
assert (
"Can't build local/amd64-addon-ssh:9.2.1: Docker build failed for local/amd64-addon-ssh:9.2.1 (exit code 1). Build output:\nerror"
in caplog.text
)
# Image pull failure
install_addon_ssh.data["image"] = "test/amd64-addon-ssh"
coresys.docker.images.build.reset_mock(side_effect=True)
coresys.docker.pull_image.side_effect = DockerException()
caplog.clear()
await install_addon_ssh.load()
assert "Unknown error with test/amd64-addon-ssh:9.2.1" in caplog.text
with patch.object(DockerAPI, "pull_image", side_effect=pull_image_exc):
await install_addon_ssh.load()
assert "Can't install test/amd64-addon-ssh:9.2.1:" in caplog.text
async def test_addon_manual_only_boot(coresys: CoreSys, install_addon_example: Addon):


@@ -419,3 +419,71 @@ def test_valid_schema():
config["schema"] = {"field": "invalid"}
with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config)
def test_ulimits_simple_format():
"""Test ulimits simple format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {"nofile": 65535, "nproc": 32768, "memlock": 134217728}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"] == 65535
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"] == 134217728
def test_ulimits_detailed_format():
"""Test ulimits detailed format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"nproc": 32768, # Mixed format should work
"memlock": {"soft": 67108864, "hard": 134217728},
}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"]["soft"] == 20000
assert valid_config["ulimits"]["nofile"]["hard"] == 40000
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"]["soft"] == 67108864
assert valid_config["ulimits"]["memlock"]["hard"] == 134217728
def test_ulimits_empty_dict():
"""Test ulimits with empty dict (default)."""
config = load_json_fixture("basic-addon-config.json")
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"] == {}
def test_ulimits_invalid_values():
"""Test ulimits with invalid values."""
config = load_json_fixture("basic-addon-config.json")
# Invalid string values
config["ulimits"] = {"nofile": "invalid"}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Invalid detailed format
config["ulimits"] = {"nofile": {"invalid_key": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing hard value in detailed format
config["ulimits"] = {"nofile": {"soft": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing soft value in detailed format
config["ulimits"] = {"nofile": {"hard": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Empty dict in detailed format
config["ulimits"] = {"nofile": {}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)


@@ -4,7 +4,7 @@ import asyncio
from collections.abc import AsyncGenerator, Generator
from copy import deepcopy
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, call, patch
from awesomeversion import AwesomeVersion
import pytest
@@ -514,19 +514,13 @@ async def test_shared_image_kept_on_uninstall(
latest = f"{install_addon_example.image}:latest"
await coresys.addons.uninstall("local_example2")
coresys.docker.images.remove.assert_not_called()
coresys.docker.images.delete.assert_not_called()
assert not coresys.addons.get("local_example2", local_only=True)
await coresys.addons.uninstall("local_example")
assert coresys.docker.images.remove.call_count == 2
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": latest,
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": image,
"force": True,
}
assert coresys.docker.images.delete.call_count == 2
assert coresys.docker.images.delete.call_args_list[0] == call(latest, force=True)
assert coresys.docker.images.delete.call_args_list[1] == call(image, force=True)
assert not coresys.addons.get("local_example", local_only=True)
@@ -554,19 +548,17 @@ async def test_shared_image_kept_on_update(
assert example_2.version == "1.2.0"
assert install_addon_example_image.version == "1.2.0"
image_new = MagicMock()
image_new.id = "image_new"
image_old = MagicMock()
image_old.id = "image_old"
docker.images.get.side_effect = [image_new, image_old]
image_new = {"Id": "image_new", "RepoTags": ["image_new:latest"]}
image_old = {"Id": "image_old", "RepoTags": ["image_old:latest"]}
docker.images.inspect.side_effect = [image_new, image_old]
docker.images.list.return_value = [image_new, image_old]
with patch.object(DockerAPI, "pull_image", return_value=image_new):
await coresys.addons.update("local_example2")
docker.images.remove.assert_not_called()
docker.images.delete.assert_not_called()
assert example_2.version == "1.3.0"
docker.images.get.side_effect = [image_new]
docker.images.inspect.side_effect = [image_new]
await coresys.addons.update("local_example_image")
docker.images.remove.assert_called_once_with("image_old", force=True)
docker.images.delete.assert_called_once_with("image_old", force=True)
assert install_addon_example_image.version == "1.3.0"


@@ -2,21 +2,24 @@
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, PropertyMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.backups.manager import BackupManager
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.homeassistant import DockerHomeAssistant
from supervisor.docker.interface import DockerInterface
from supervisor.homeassistant.api import APIState
from supervisor.homeassistant.api import APIState, HomeAssistantAPI
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from tests.api import common_test_api_advanced_logs
from tests.common import load_json_fixture
from tests.common import AsyncIterator, load_json_fixture
@pytest.mark.parametrize("legacy_route", [True, False])
@@ -271,3 +274,96 @@ async def test_background_home_assistant_update_fails_fast(
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Version 2025.8.3 is already installed"
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_home_assistant_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
coresys.homeassistant.version = AwesomeVersion("2025.8.0")
with (
patch.object(
DockerHomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
),
patch.object(
HomeAssistantAPI, "get_config", return_value={"components": ["frontend"]}
),
):
resp = await api_client.post("/core/update", json={"version": "2025.8.3"})
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
]
assert events[:5] == [
{
"stage": None,
"progress": 0,
"done": None,
},
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]


@@ -13,17 +13,18 @@ from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.backups.manager import BackupManager
from supervisor.config import CoreConfig
from supervisor.const import AddonState
from supervisor.const import AddonState, CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.interface import DockerInterface
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.module import HomeAssistant
from supervisor.store.addon import AddonStore
from supervisor.store.repository import Repository
from tests.common import load_json_fixture
from tests.common import AsyncIterator, load_json_fixture
from tests.const import TEST_ADDON_SLUG
REPO_URL = "https://github.com/awesome-developer/awesome-repo"
@@ -709,3 +710,102 @@ async def test_api_store_addons_addon_availability_installed_addon(
assert (
"requires Home Assistant version 2023.1.1 or greater" in result["message"]
)
@pytest.mark.parametrize(
("action", "job_name", "addon_slug"),
[
("install", "addon_manager_install", "local_ssh"),
("update", "addon_manager_update", "local_example"),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_addon_install_update(
api_client: TestClient,
coresys: CoreSys,
ha_ws_client: AsyncMock,
install_addon_example: Addon,
action: str,
job_name: str,
addon_slug: str,
):
"""Test progress updates sent to Home Assistant for installs/updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
coresys.arch._supported_arch = ["amd64"] # pylint: disable=protected-access
install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
with (
patch.object(Addon, "load"),
patch.object(Addon, "need_build", new=PropertyMock(return_value=False)),
patch.object(Addon, "latest_need_build", new=PropertyMock(return_value=False)),
):
resp = await api_client.post(f"/store/addons/{addon_slug}/{action}")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == job_name
and evt.args[0]["data"]["data"]["reference"] == addon_slug
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]


@@ -2,17 +2,24 @@
# pylint: disable=protected-access
import time
from unittest.mock import AsyncMock, MagicMock, patch
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockingError
import pytest
from supervisor.const import CoreState
from supervisor.core import Core
from supervisor.coresys import CoreSys
from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
from supervisor.homeassistant.const import WSEvent
from supervisor.store.repository import Repository
from supervisor.supervisor import Supervisor
from supervisor.updater import Updater
from tests.api import common_test_api_advanced_logs
from tests.common import AsyncIterator, load_json_fixture
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -316,3 +323,97 @@ async def test_api_supervisor_options_blocking_io(
# This should not raise blocking error anymore
time.sleep(0)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_supervisor_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
logs = load_json_fixture("docker_pull_image_log.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with (
patch.object(
Supervisor,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.08.0")),
),
patch.object(
Updater,
"version_supervisor",
new=PropertyMock(return_value=AwesomeVersion("2025.08.3")),
),
patch.object(
Updater, "image_supervisor", new=PropertyMock(return_value="supervisor")
),
patch.object(Supervisor, "update_apparmor"),
patch.object(Core, "stop"),
):
resp = await api_client.post("/supervisor/update")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]


@@ -1,13 +1,14 @@
"""Common test functions."""
import asyncio
from collections.abc import Sequence
from datetime import datetime
from functools import partial
from importlib import import_module
from inspect import getclosurevars
import json
from pathlib import Path
from typing import Any
from typing import Any, Self
from dbus_fast.aio.message_bus import MessageBus
@@ -145,3 +146,22 @@ class MockResponse:
async def __aexit__(self, exc_type, exc, tb):
"""Exit the context manager."""
class AsyncIterator:
"""Make list/fixture into async iterator for test mocks."""
def __init__(self, seq: Sequence[Any]) -> None:
"""Initialize with sequence."""
self.iter = iter(seq)
def __aiter__(self) -> Self:
"""Implement aiter."""
return self
async def __anext__(self) -> Any:
"""Return next in sequence."""
try:
return next(self.iter)
except StopIteration:
raise StopAsyncIteration() from None
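The helper above exists so tests can feed fixture lists to mocks that are consumed with `async for` (e.g. `docker.images.pull.return_value`). A self-contained copy with a usage sketch, so the snippet runs on its own:

```python
import asyncio


class AsyncIterator:
    """Wrap a plain sequence so it can be consumed with `async for`."""

    def __init__(self, seq):
        self.iter = iter(seq)

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self.iter)
        except StopIteration:
            raise StopAsyncIteration() from None


async def collect(aiter):
    """Drain an async iterator into a list."""
    return [item async for item in aiter]


result = asyncio.run(collect(AsyncIterator([1, 2, 3])))  # [1, 2, 3]
```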


@@ -9,6 +9,7 @@ import subprocess
from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
from uuid import uuid4
from aiodocker.docker import DockerImages
from aiohttp import ClientSession, web
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
@@ -55,6 +56,7 @@ from supervisor.store.repository import Repository
from supervisor.utils.dt import utcnow
from .common import (
AsyncIterator,
MockResponse,
load_binary_fixture,
load_fixture,
@@ -112,40 +114,46 @@ async def supervisor_name() -> None:
@pytest.fixture
async def docker() -> DockerAPI:
"""Mock DockerAPI."""
images = [MagicMock(tags=["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"])]
image = MagicMock()
image.attrs = {"Os": "linux", "Architecture": "amd64"}
image_inspect = {
"Os": "linux",
"Architecture": "amd64",
"Id": "test123",
"RepoTags": ["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"],
}
with (
patch("supervisor.docker.manager.DockerClient", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.images", return_value=MagicMock()),
patch(
"supervisor.docker.manager.DockerAPI.containers", return_value=MagicMock()
),
patch(
"supervisor.docker.manager.DockerAPI.api",
return_value=(api_mock := MagicMock()),
),
patch("supervisor.docker.manager.DockerAPI.images.get", return_value=image),
patch("supervisor.docker.manager.DockerAPI.images.list", return_value=images),
patch(
"supervisor.docker.manager.DockerAPI.info",
return_value=MagicMock(),
),
patch("supervisor.docker.manager.DockerAPI.api", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.info", return_value=MagicMock()),
patch("supervisor.docker.manager.DockerAPI.unload"),
patch("supervisor.docker.manager.aiodocker.Docker", return_value=MagicMock()),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(
return_value=(docker_images := MagicMock(spec=DockerImages))
),
),
):
docker_obj = await DockerAPI(MagicMock()).post_init()
docker_obj.config._data = {"registries": {}}
with patch("supervisor.docker.monitor.DockerMonitor.load"):
await docker_obj.load()
docker_images.inspect.return_value = image_inspect
docker_images.list.return_value = [image_inspect]
docker_images.import_image.return_value = [
{"stream": "Loaded image: test:latest\n"}
]
docker_images.pull.return_value = AsyncIterator([{}])
docker_obj.info.logging = "journald"
docker_obj.info.storage = "overlay2"
docker_obj.info.version = AwesomeVersion("1.0.0")
# Need an iterable for logs
api_mock.pull.return_value = []
yield docker_obj
@@ -838,11 +846,9 @@ async def container(docker: DockerAPI) -> MagicMock:
"""Mock attrs and status for container on attach."""
docker.containers.get.return_value = addon = MagicMock()
docker.containers.create.return_value = addon
docker.images.build.return_value = (addon, "")
addon.status = "stopped"
addon.attrs = {"State": {"ExitCode": 0}}
with patch.object(DockerAPI, "pull_image", return_value=addon):
yield addon
yield addon
@pytest.fixture


@@ -503,3 +503,93 @@ async def test_addon_new_device_no_haos(
await install_addon_ssh.stop()
assert coresys.resolution.issues == []
assert coresys.resolution.suggestions == []
async def test_ulimits_integration(
coresys: CoreSys,
install_addon_ssh: Addon,
):
"""Test ulimits integration with Docker addon."""
docker_addon = DockerAddon(coresys, install_addon_ssh)
# Test default case (no ulimits, no realtime)
assert docker_addon.ulimits is None
# Test with realtime enabled (should have built-in ulimits)
install_addon_ssh.data["realtime"] = True
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
# Check for rtprio limit
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
assert rtprio_limit.soft == 90
assert rtprio_limit.hard == 99
# Check for memlock limit
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 128 * 1024 * 1024
assert memlock_limit.hard == 128 * 1024 * 1024
# Test with configurable ulimits (simple format)
install_addon_ssh.data["realtime"] = False
install_addon_ssh.data["ulimits"] = {"nofile": 65535, "nproc": 32768}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
nproc_limit = next((u for u in ulimits if u.name == "nproc"), None)
assert nproc_limit is not None
assert nproc_limit.soft == 32768
assert nproc_limit.hard == 32768
# Test with configurable ulimits (detailed format)
install_addon_ssh.data["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"memlock": {"soft": 67108864, "hard": 134217728},
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 20000
assert nofile_limit.hard == 40000
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 67108864
assert memlock_limit.hard == 134217728
# Test mixed format and realtime (realtime + custom ulimits)
install_addon_ssh.data["realtime"] = True
install_addon_ssh.data["ulimits"] = {
"nofile": 65535,
"core": {"soft": 0, "hard": 0}, # Disable core dumps
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert (
len(ulimits) == 4
) # rtprio, memlock (from realtime) + nofile, core (from config)
# Check realtime limits still present
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
# Check custom limits added
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
core_limit = next((u for u in ulimits if u.name == "core"), None)
assert core_limit is not None
assert core_limit.soft == 0
assert core_limit.hard == 0


@@ -5,10 +5,10 @@ from pathlib import Path
from typing import Any
from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import DockerException, NotFound
from docker.models.containers import Container
from docker.models.images import Image
import pytest
from requests import RequestException
@@ -26,10 +26,9 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.homeassistant.const import WSEvent
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from tests.common import load_json_fixture
from tests.common import AsyncIterator, load_json_fixture
@pytest.fixture(autouse=True)
@@ -58,35 +57,30 @@ async def test_docker_image_platform(
platform: str,
):
"""Test platform set correctly from arch."""
with patch.object(
coresys.docker.images, "get", return_value=Mock(id="test:1.2.3")
) as get:
await test_docker_interface.install(
AwesomeVersion("1.2.3"), "test", arch=cpu_arch
)
coresys.docker.docker.api.pull.assert_called_once_with(
"test", tag="1.2.3", platform=platform, stream=True, decode=True
)
get.assert_called_once_with("test:1.2.3")
coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test", arch=cpu_arch)
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform=platform, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
async def test_docker_image_default_platform(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test platform set using supervisor arch when omitted."""
coresys.docker.images.inspect.return_value = {"Id": "test:1.2.3"}
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
patch.object(
coresys.docker.images, "get", return_value=Mock(id="test:1.2.3")
) as get,
):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
coresys.docker.docker.api.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True, decode=True
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True
)
get.assert_called_once_with("test:1.2.3")
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
@pytest.mark.parametrize(
@@ -217,57 +211,40 @@ async def test_attach_existing_container(
async def test_attach_container_failure(coresys: CoreSys):
"""Test attach fails to find container but finds image."""
container_collection = MagicMock()
container_collection.get.side_effect = DockerException()
image_collection = MagicMock()
image_config = {"Image": "sha256:abc123"}
image_collection.get.return_value = Image({"Config": image_config})
with (
patch(
"supervisor.docker.manager.DockerAPI.containers",
new=PropertyMock(return_value=container_collection),
),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(return_value=image_collection),
),
patch.object(type(coresys.bus), "fire_event") as fire_event,
):
coresys.docker.containers.get.side_effect = DockerException()
coresys.docker.images.inspect.return_value.setdefault("Config", {})["Image"] = (
"sha256:abc123"
)
with patch.object(type(coresys.bus), "fire_event") as fire_event:
await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))
assert not [
event
for event in fire_event.call_args_list
if event.args[0] == BusEvent.DOCKER_CONTAINER_STATE_CHANGE
]
assert coresys.homeassistant.core.instance.meta_config == image_config
assert (
coresys.homeassistant.core.instance.meta_config["Image"] == "sha256:abc123"
)
async def test_attach_total_failure(coresys: CoreSys):
"""Test attach fails to find container or image."""
container_collection = MagicMock()
container_collection.get.side_effect = DockerException()
image_collection = MagicMock()
image_collection.get.side_effect = DockerException()
with (
patch(
"supervisor.docker.manager.DockerAPI.containers",
new=PropertyMock(return_value=container_collection),
),
patch(
"supervisor.docker.manager.DockerAPI.images",
new=PropertyMock(return_value=image_collection),
),
pytest.raises(DockerError),
):
coresys.docker.containers.get.side_effect = DockerException
coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
400, {"message": ""}
)
with pytest.raises(DockerError):
await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))
@pytest.mark.parametrize("err", [DockerException(), RequestException()])
@pytest.mark.parametrize(
"err", [aiodocker.DockerError(400, {"message": ""}), RequestException()]
)
async def test_image_pull_fail(
coresys: CoreSys, capture_exception: Mock, err: Exception
):
"""Test failure to pull image."""
coresys.docker.images.get.side_effect = err
coresys.docker.images.inspect.side_effect = err
with pytest.raises(DockerError):
await coresys.homeassistant.core.instance.install(
AwesomeVersion("2022.7.3"), arch=CpuArch.AMD64
@@ -299,8 +276,9 @@ async def test_install_fires_progress_events(
coresys: CoreSys, test_docker_interface: DockerInterface
):
"""Test progress events are fired during an install for listeners."""
# Taken from a sample pull, filtered to one log entry per unique status for the test
coresys.docker.docker.api.pull.return_value = [
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
@@ -322,7 +300,11 @@ async def test_install_fires_progress_events(
"id": "1578b14a573c",
},
{"status": "Pull complete", "progressDetail": {}, "id": "1578b14a573c"},
{"status": "Verifying Checksum", "progressDetail": {}, "id": "6a1e931d8f88"},
{
"status": "Verifying Checksum",
"progressDetail": {},
"id": "6a1e931d8f88",
},
{
"status": "Digest: sha256:490080d7da0f385928022927990e04f604615f7b8c622ef3e58253d0f089881d"
},
@@ -330,6 +312,7 @@ async def test_install_fires_progress_events(
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.2"
},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
events: list[PullLogEntry] = []
@@ -344,10 +327,10 @@ async def test_install_fires_progress_events(
),
):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
coresys.docker.docker.api.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True, decode=True
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", stream=True
)
coresys.docker.images.get.assert_called_once_with("test:1.2.3")
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
await asyncio.sleep(1)
assert events == [
@@ -417,197 +400,19 @@ async def test_install_fires_progress_events(
]
async def test_install_sends_progress_to_home_assistant(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
):
"""Test progress events are sent as job updates to Home Assistant."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
):
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
]
assert events[0]["name"] == "docker_interface_install"
assert events[0]["uuid"] == job.uuid
assert events[0]["done"] is None
assert events[1]["name"] == "docker_interface_install"
assert events[1]["uuid"] == job.uuid
assert events[1]["done"] is False
assert events[-1]["name"] == "docker_interface_install"
assert events[-1]["uuid"] == job.uuid
assert events[-1]["done"] is True
def make_sub_log(layer_id: str):
return [
{
"stage": evt["stage"],
"progress": evt["progress"],
"done": evt["done"],
"extra": evt["extra"],
}
for evt in events
if evt["name"] == "Pulling container image layer"
and evt["reference"] == layer_id
and evt["parent_id"] == job.uuid
]
layer_1_log = make_sub_log("1e214cd6d7d0")
layer_2_log = make_sub_log("1a38e1d5e18d")
assert len(layer_1_log) == 20
assert len(layer_2_log) == 19
assert len(events) == 42
assert layer_1_log == [
{"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
"extra": {"current": 539462, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.6,
"done": False,
"extra": {"current": 4864838, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.9,
"done": False,
"extra": {"current": 7552896, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
"extra": {"current": 10252544, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 2.9,
"done": False,
"extra": {"current": 25369792, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Verifying Checksum",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Download complete",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 50.1,
"done": False,
"extra": {"current": 557056, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 60.3,
"done": False,
"extra": {"current": 89686016, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 70.0,
"done": False,
"extra": {"current": 174358528, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 80.0,
"done": False,
"extra": {"current": 261816320, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 88.4,
"done": False,
"extra": {"current": 334790656, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 94.0,
"done": False,
"extra": {"current": 383811584, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 99.9,
"done": False,
"extra": {"current": 435617792, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 100.0,
"done": False,
"extra": {"current": 436480882, "total": 436480882},
},
{
"stage": "Pull complete",
"progress": 100.0,
"done": True,
"extra": {"current": 436480882, "total": 436480882},
},
]
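The expected progress values asserted above follow a simple convention: a layer's download phase maps to 0-50% and its extraction phase to 50-100%, each rounded to one decimal place. A minimal sketch of that mapping (a hypothetical helper for illustration, not Supervisor's actual implementation):

```python
def layer_progress(stage: str, current: int, total: int) -> float:
    """Map a pull layer's stage and byte counts to an overall percent.

    Assumes downloading covers 0-50% and extracting 50-100% of a
    layer's progress, matching the fixture values asserted above.
    """
    if total <= 0:
        return 0.0
    ratio = current / total
    if stage == "Downloading":
        return round(ratio * 50, 1)
    if stage == "Extracting":
        return round(50 + ratio * 50, 1)
    raise ValueError(f"Unhandled stage: {stage}")
```

Under this mapping, the fixture's `Extracting` entry with 261816320 of 436480882 bytes comes out at 80.0, matching the asserted event above.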
async def test_install_progress_rounding_does_not_cause_misses(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test extremely close progress events do not create rounding issues."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = [
# Numbers chosen to recreate a rounding issue in the original code,
# where a progress update arrived with a value between the actual previous
# value and the value it was rounded to. It should not raise an out-of-order exception
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.1",
@@ -647,6 +452,7 @@ async def test_install_progress_rounding_does_not_cause_misses(
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/odroid-n2-homeassistant:2025.7.1"
},
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with (
patch.object(
@@ -671,65 +477,7 @@ async def test_install_progress_rounding_does_not_cause_misses(
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["reference"] == "1e214cd6d7d0"
and evt.args[0]["data"]["data"]["stage"] in {"Downloading", "Extracting"}
]
assert events == [
{
"name": "Pulling container image layer",
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 432700000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 432800000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Extracting",
"progress": 99.6,
"done": False,
"extra": {"current": 432700000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
{
"name": "Pulling container image layer",
"stage": "Extracting",
"progress": 99.6,
"done": False,
"extra": {"current": 432800000, "total": 436480882},
"reference": "1e214cd6d7d0",
"parent_id": job.uuid,
"errors": [],
"uuid": ANY,
"created": ANY,
},
]
capture_exception.assert_not_called()
@pytest.mark.parametrize(
@@ -760,7 +508,8 @@ async def test_install_raises_on_pull_error(
exc_msg: str,
):
"""Test exceptions raised from errors in pull log."""
coresys.docker.docker.api.pull.return_value = [
logs = [
{
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
@@ -773,19 +522,25 @@ async def test_install_raises_on_pull_error(
},
error_log,
]
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with pytest.raises(exc_type, match=exc_msg):
await test_docker_interface.install(AwesomeVersion("1.2.3"), "test")
async def test_install_progress_handles_download_restart(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
coresys: CoreSys,
test_docker_interface: DockerInterface,
ha_ws_client: AsyncMock,
capture_exception: Mock,
):
"""Test install handles docker progress events that include a download restart."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log_restart.json"
)
# Fixture emulates a download restart as docker logs it.
# An out-of-order log exception should not be raised
logs = load_json_fixture("docker_pull_image_log_restart.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
with (
patch.object(
@@ -810,106 +565,4 @@ async def test_install_progress_handles_download_restart(
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
]
def make_sub_log(layer_id: str):
return [
{
"stage": evt["stage"],
"progress": evt["progress"],
"done": evt["done"],
"extra": evt["extra"],
}
for evt in events
if evt["name"] == "Pulling container image layer"
and evt["reference"] == layer_id
and evt["parent_id"] == job.uuid
]
layer_1_log = make_sub_log("1e214cd6d7d0")
assert len(layer_1_log) == 14
assert layer_1_log == [
{"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Retrying download",
"progress": 0,
"done": False,
"extra": None,
},
{
"stage": "Retrying download",
"progress": 0,
"done": False,
"extra": None,
},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Verifying Checksum",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Download complete",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 80.0,
"done": False,
"extra": {"current": 261816320, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 100.0,
"done": False,
"extra": {"current": 436480882, "total": 436480882},
},
{
"stage": "Pull complete",
"progress": 100.0,
"done": True,
"extra": {"current": 436480882, "total": 436480882},
},
]
capture_exception.assert_not_called()
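These tests wrap fixture logs in `AsyncIterator` (imported from `tests.common` further down) so a plain list can stand in for aiodocker's async streaming pull. A minimal version of such a helper might look like the following; the real helper in `tests.common` may differ:

```python
class AsyncIterator:
    """Wrap an iterable so it can stand in for an async stream in tests."""

    def __init__(self, items):
        self._iter = iter(items)

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self._iter)
        except StopIteration:
            raise StopAsyncIteration from None
```

Code consuming `images.pull(..., stream=True)` with `async for` then iterates the canned list exactly as it would a live response.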


@@ -1,9 +1,10 @@
"""Test Docker manager."""
import asyncio
from pathlib import Path
from unittest.mock import MagicMock, patch
from docker.errors import DockerException
from docker.errors import APIError, DockerException, NotFound
import pytest
from requests import RequestException
@@ -20,7 +21,7 @@ async def test_run_command_success(docker: DockerAPI):
mock_container.logs.return_value = b"command output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command
result = docker.run_command(
@@ -33,7 +34,7 @@ async def test_run_command_success(docker: DockerAPI):
assert result.output == b"command output"
# Verify docker.containers.run was called correctly
docker.docker.containers.run.assert_called_once_with(
docker.dockerpy.containers.run.assert_called_once_with(
"alpine:3.18",
command="echo hello",
detach=True,
@@ -55,7 +56,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
mock_container.logs.return_value = b"error output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command with minimal parameters
result = docker.run_command(image="ubuntu")
@@ -66,7 +67,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
assert result.output == b"error output"
# Verify docker.containers.run was called with defaults
docker.docker.containers.run.assert_called_once_with(
docker.dockerpy.containers.run.assert_called_once_with(
"ubuntu:latest", # default tag
command=None, # default command
detach=True,
@@ -81,7 +82,7 @@ async def test_run_command_with_defaults(docker: DockerAPI):
async def test_run_command_docker_exception(docker: DockerAPI):
"""Test command execution when Docker raises an exception."""
# Mock docker containers.run to raise DockerException
docker.docker.containers.run.side_effect = DockerException("Docker error")
docker.dockerpy.containers.run.side_effect = DockerException("Docker error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Docker error"):
@@ -91,7 +92,7 @@ async def test_run_command_docker_exception(docker: DockerAPI):
async def test_run_command_request_exception(docker: DockerAPI):
"""Test command execution when requests raises an exception."""
# Mock docker containers.run to raise RequestException
docker.docker.containers.run.side_effect = RequestException("Connection error")
docker.dockerpy.containers.run.side_effect = RequestException("Connection error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Connection error"):
@@ -104,7 +105,7 @@ async def test_run_command_cleanup_on_exception(docker: DockerAPI):
mock_container = MagicMock()
# Mock docker.containers.run to return container, but container.wait to raise exception
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
mock_container.wait.side_effect = DockerException("Wait failed")
# Execute the command and expect DockerError
@@ -123,7 +124,7 @@ async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
mock_container.logs.return_value = b"output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Execute the command with custom stdout/stderr
result = docker.run_command(
@@ -150,7 +151,7 @@ async def test_run_container_with_cidfile(
cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
extern_cidfile_path = coresys.config.path_extern_cid_files / f"{container_name}.cid"
docker.docker.containers.run.return_value = mock_container
docker.dockerpy.containers.run.return_value = mock_container
# Mock container creation
with patch.object(
@@ -351,3 +352,101 @@ async def test_run_container_with_leftover_cidfile_directory(
assert cidfile_path.read_text() == mock_container.id
assert result == mock_container
async def test_repair(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair API."""
coresys.docker.dockerpy.networks.get.side_effect = [
hassio := MagicMock(
attrs={
"Containers": {
"good": {"Name": "good"},
"corrupt": {"Name": "corrupt"},
"fail": {"Name": "fail"},
}
}
),
host := MagicMock(attrs={"Containers": {}}),
]
coresys.docker.dockerpy.containers.get.side_effect = [
MagicMock(),
NotFound("corrupt"),
DockerException("fail"),
]
await coresys.run_in_executor(coresys.docker.repair)
coresys.docker.dockerpy.api.prune_containers.assert_called_once()
coresys.docker.dockerpy.api.prune_images.assert_called_once_with(
filters={"dangling": False}
)
coresys.docker.dockerpy.api.prune_builds.assert_called_once()
coresys.docker.dockerpy.api.prune_volumes.assert_called_once()
coresys.docker.dockerpy.api.prune_networks.assert_called_once()
hassio.disconnect.assert_called_once_with("corrupt", force=True)
host.disconnect.assert_not_called()
assert "Docker fatal error on container fail on hassio" in caplog.text
async def test_repair_failures(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair proceeds best it can through failures."""
coresys.docker.dockerpy.api.prune_containers.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_images.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_builds.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_volumes.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_networks.side_effect = APIError("fail")
coresys.docker.dockerpy.networks.get.side_effect = NotFound("missing")
await coresys.run_in_executor(coresys.docker.repair)
assert "Error for containers prune: fail" in caplog.text
assert "Error for images prune: fail" in caplog.text
assert "Error for builds prune: fail" in caplog.text
assert "Error for volumes prune: fail" in caplog.text
assert "Error for networks prune: fail" in caplog.text
assert "Error for networks hassio prune: missing" in caplog.text
assert "Error for networks host prune: missing" in caplog.text
@pytest.mark.parametrize("log_starter", ["Loaded image ID", "Loaded image"])
async def test_import_image(coresys: CoreSys, tmp_path: Path, log_starter: str):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": f"{log_starter}: imported"}
]
coresys.docker.images.inspect.return_value = {"Id": "imported"}
image = await coresys.docker.import_image(test_tar)
assert image["Id"] == "imported"
coresys.docker.images.inspect.assert_called_once_with("imported")
async def test_import_image_error(coresys: CoreSys, tmp_path: Path):
"""Test failure importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"errorDetail": {"message": "fail"}}
]
with pytest.raises(DockerError, match="Can't import image from tar: fail"):
await coresys.docker.import_image(test_tar)
coresys.docker.images.inspect.assert_not_called()
async def test_import_multiple_images_in_tar(
coresys: CoreSys, tmp_path: Path, caplog: pytest.LogCaptureFixture
):
"""Test importing an image into docker."""
(test_tar := tmp_path / "test.tar").touch()
coresys.docker.images.import_image.return_value = [
{"stream": "Loaded image: imported-1"},
{"stream": "Loaded image: imported-2"},
]
assert await coresys.docker.import_image(test_tar) is None
assert "Unexpected image count 2 while importing image from tar" in caplog.text
coresys.docker.images.inspect.assert_not_called()
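The import tests above accept either a `Loaded image:` or a `Loaded image ID:` stream line and reject tars that load more than one image. A parsing sketch under those assumptions (a hypothetical helper, not the actual Supervisor code):

```python
import re

_LOADED_RE = re.compile(r"Loaded image(?: ID)?: (\S+)")

def parse_loaded_image(stream):
    """Return the single loaded image reference, or None.

    `stream` is an iterable of decoded docker-load log dicts; returns
    None when zero or multiple images were loaded, which mirrors the
    "Unexpected image count" case asserted above.
    """
    refs = []
    for entry in stream:
        if match := _LOADED_RE.match(entry.get("stream", "")):
            refs.append(match.group(1))
    return refs[0] if len(refs) == 1 else None
```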


@@ -376,3 +376,14 @@ async def test_try_get_nvme_life_time_missing_percent_used(
coresys.config.path_supervisor
)
assert lifetime is None
async def test_try_get_nvme_life_time_dbus_not_connected(coresys: CoreSys):
"""Test getting lifetime info from an NVMe when DBUS is not connected."""
# Force a disconnected state by setting the udisks2 dbus connection to None.
coresys.dbus.udisks2.dbus = None
lifetime = await coresys.hardware.disk.get_disk_life_time(
coresys.config.path_supervisor
)
assert lifetime is None


@@ -1,11 +1,14 @@
"""Test Home Assistant core."""
from datetime import datetime, timedelta
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
from http import HTTPStatus
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch
import aiodocker
from awesomeversion import AwesomeVersion
from docker.errors import APIError, DockerException, ImageNotFound, NotFound
from docker.errors import APIError, DockerException, NotFound
import pytest
from requests import RequestException
from time_machine import travel
from supervisor.const import CpuArch
@@ -23,8 +26,12 @@ from supervisor.exceptions import (
from supervisor.homeassistant.api import APIState
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from supervisor.resolution.const import ContextType, IssueType
from supervisor.resolution.data import Issue
from supervisor.updater import Updater
from tests.common import AsyncIterator
async def test_update_fails_if_out_of_date(coresys: CoreSys):
"""Test update of Home Assistant fails when supervisor or plugin is out of date."""
@@ -52,11 +59,23 @@ async def test_update_fails_if_out_of_date(coresys: CoreSys):
await coresys.homeassistant.core.update()
async def test_install_landingpage_docker_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
],
)
async def test_install_landingpage_docker_ratelimit_error(
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install landing page fails due to docker error."""
"""Test install landing page fails due to docker ratelimit error."""
coresys.security.force = True
coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]
with (
patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
patch.object(
@@ -69,19 +88,35 @@ async def test_install_landingpage_docker_error(
),
patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
):
coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
await coresys.homeassistant.core.install_landingpage()
sleep.assert_awaited_once_with(30)
assert "Failed to install landingpage, retrying after 30sec" in caplog.text
capture_exception.assert_not_called()
assert (
Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
in coresys.resolution.issues
)
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
APIError("fail"),
DockerException(),
RequestException(),
OSError(),
],
)
async def test_install_landingpage_other_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install landing page fails due to other error."""
coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
coresys.docker.images.inspect.side_effect = [err, MagicMock()]
with (
patch.object(DockerHomeAssistant, "attach", side_effect=DockerError),
@@ -102,11 +137,23 @@ async def test_install_landingpage_other_error(
capture_exception.assert_called_once_with(err)
async def test_install_docker_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.TOO_MANY_REQUESTS, {"message": "ratelimit"}),
APIError("ratelimit", MagicMock(status_code=HTTPStatus.TOO_MANY_REQUESTS)),
],
)
async def test_install_docker_ratelimit_error(
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install fails due to docker error."""
"""Test install fails due to docker ratelimit error."""
coresys.security.force = True
coresys.docker.images.pull.side_effect = [err, AsyncIterator([{}])]
with (
patch.object(HomeAssistantCore, "start"),
patch.object(DockerHomeAssistant, "cleanup"),
@@ -123,19 +170,35 @@ async def test_install_docker_error(
),
patch("supervisor.homeassistant.core.asyncio.sleep") as sleep,
):
coresys.docker.images.get.side_effect = [APIError("fail"), MagicMock()]
await coresys.homeassistant.core.install()
sleep.assert_awaited_once_with(30)
assert "Error on Home Assistant installation. Retrying in 30sec" in caplog.text
capture_exception.assert_not_called()
assert (
Issue(IssueType.DOCKER_RATELIMIT, ContextType.SYSTEM)
in coresys.resolution.issues
)
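The two error shapes parametrized above carry the HTTP status differently: aiodocker's `DockerError` exposes a `status` attribute, while dockerpy's `APIError` wraps a response object with a `status_code`. A classification sketch under that assumption (a hypothetical helper for illustration):

```python
from http import HTTPStatus

def is_registry_ratelimit(err: Exception) -> bool:
    """Best-effort check whether a pull failure is a 429 rate limit.

    Reads `status` (aiodocker.DockerError style) or falls back to
    `response.status_code` (dockerpy APIError style).
    """
    status = getattr(err, "status", None)
    if status is None:
        response = getattr(err, "response", None)
        status = getattr(response, "status_code", None)
    return status == HTTPStatus.TOO_MANY_REQUESTS
```

Either shape then maps to the same `DOCKER_RATELIMIT` handling the tests assert.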
@pytest.mark.parametrize(
"err",
[
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}),
APIError("fail"),
DockerException(),
RequestException(),
OSError(),
],
)
async def test_install_other_error(
coresys: CoreSys, capture_exception: Mock, caplog: pytest.LogCaptureFixture
coresys: CoreSys,
capture_exception: Mock,
caplog: pytest.LogCaptureFixture,
err: Exception,
):
"""Test install fails due to other error."""
coresys.docker.images.get.side_effect = [(err := OSError()), MagicMock()]
coresys.docker.images.inspect.side_effect = [err, MagicMock()]
with (
patch.object(HomeAssistantCore, "start"),
@@ -161,21 +224,29 @@ async def test_install_other_error(
@pytest.mark.parametrize(
"container_exists,image_exists", [(False, True), (True, False), (True, True)]
("container_exc", "image_exc", "remove_calls"),
[
(NotFound("missing"), None, []),
(
None,
aiodocker.DockerError(404, {"message": "missing"}),
[call(force=True, v=True)],
),
(None, None, [call(force=True, v=True)]),
],
)
@pytest.mark.usefixtures("path_extern")
async def test_start(
coresys: CoreSys, container_exists: bool, image_exists: bool, path_extern
coresys: CoreSys,
container_exc: DockerException | None,
image_exc: aiodocker.DockerError | None,
remove_calls: list[call],
):
"""Test starting Home Assistant."""
if image_exists:
coresys.docker.images.get.return_value.id = "123"
else:
coresys.docker.images.get.side_effect = ImageNotFound("missing")
if container_exists:
coresys.docker.containers.get.return_value.image.id = "123"
else:
coresys.docker.containers.get.side_effect = NotFound("missing")
coresys.docker.images.inspect.return_value = {"Id": "123"}
coresys.docker.images.inspect.side_effect = image_exc
coresys.docker.containers.get.return_value.id = "123"
coresys.docker.containers.get.side_effect = container_exc
with (
patch.object(
@@ -198,18 +269,14 @@ async def test_start(
assert run.call_args.kwargs["hostname"] == "homeassistant"
coresys.docker.containers.get.return_value.stop.assert_not_called()
if container_exists:
coresys.docker.containers.get.return_value.remove.assert_called_once_with(
force=True,
v=True,
)
else:
coresys.docker.containers.get.return_value.remove.assert_not_called()
assert (
coresys.docker.containers.get.return_value.remove.call_args_list == remove_calls
)
async def test_start_existing_container(coresys: CoreSys, path_extern):
"""Test starting Home Assistant when container exists and is viable."""
coresys.docker.images.get.return_value.id = "123"
coresys.docker.images.inspect.return_value = {"Id": "123"}
coresys.docker.containers.get.return_value.image.id = "123"
coresys.docker.containers.get.return_value.status = "exited"
@@ -394,24 +461,32 @@ async def test_core_loads_wrong_image_for_machine(
"""Test core is loaded with wrong image for machine."""
coresys.homeassistant.set_image("ghcr.io/home-assistant/odroid-n2-homeassistant")
coresys.homeassistant.version = AwesomeVersion("2024.4.0")
container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
await coresys.homeassistant.core.load()
with patch.object(
DockerAPI,
"pull_image",
return_value={
"Id": "abc123",
"Config": {"Labels": {"io.hass.version": "2024.4.0"}},
},
) as pull_image:
container.attrs |= pull_image.return_value
await coresys.homeassistant.core.load()
pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
)
container.remove.assert_called_once_with(force=True, v=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
"force": True,
}
coresys.docker.pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
assert coresys.docker.images.delete.call_args_list[0] == call(
"ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
force=True,
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"ghcr.io/home-assistant/odroid-n2-homeassistant:2024.4.0",
force=True,
)
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"
@@ -428,8 +503,8 @@ async def test_core_load_allows_image_override(coresys: CoreSys, container: Magi
await coresys.homeassistant.core.load()
container.remove.assert_not_called()
coresys.docker.images.remove.assert_not_called()
coresys.docker.images.get.assert_not_called()
coresys.docker.images.delete.assert_not_called()
coresys.docker.images.inspect.assert_not_called()
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/odroid-n2-homeassistant"
)
@@ -440,27 +515,36 @@ async def test_core_loads_wrong_image_for_architecture(
):
"""Test core is loaded with wrong image for architecture."""
coresys.homeassistant.version = AwesomeVersion("2024.4.0")
container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
coresys.docker.images.get("ghcr.io/home-assistant/qemux86-64-homeassistant").attrs[
"Architecture"
] = "arm64"
coresys.docker.images.inspect.return_value = img_data = (
coresys.docker.images.inspect.return_value
| {
"Architecture": "arm64",
"Config": {"Labels": {"io.hass.version": "2024.4.0"}},
}
)
container.attrs |= img_data
await coresys.homeassistant.core.load()
with patch.object(
DockerAPI,
"pull_image",
return_value=img_data | {"Architecture": "amd64"},
) as pull_image:
await coresys.homeassistant.core.load()
pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
)
container.remove.assert_called_once_with(force=True, v=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
"force": True,
}
assert coresys.docker.images.remove.call_args_list[1].kwargs == {
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
"force": True,
}
coresys.docker.pull_image.assert_called_once_with(
ANY,
"ghcr.io/home-assistant/qemux86-64-homeassistant",
"2024.4.0",
platform="linux/amd64",
assert coresys.docker.images.delete.call_args_list[0] == call(
"ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
force=True,
)
assert coresys.docker.images.delete.call_args_list[1] == call(
"ghcr.io/home-assistant/qemux86-64-homeassistant:2024.4.0",
force=True,
)
assert (
coresys.homeassistant.image == "ghcr.io/home-assistant/qemux86-64-homeassistant"


@@ -2,7 +2,7 @@
 import asyncio
 from pathlib import Path
-from unittest.mock import ANY, MagicMock, Mock, PropertyMock, patch
+from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch
 
 from awesomeversion import AwesomeVersion
 import pytest
@@ -11,6 +11,7 @@ from supervisor.const import BusEvent, CpuArch
 from supervisor.coresys import CoreSys
 from supervisor.docker.const import ContainerState
 from supervisor.docker.interface import DockerInterface
+from supervisor.docker.manager import DockerAPI
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import (
     AudioError,
@@ -362,21 +363,26 @@ async def test_load_with_incorrect_image(
     plugin.version = AwesomeVersion("2024.4.0")
     container.status = "running"
     container.attrs["Config"] = {"Labels": {"io.hass.version": "2024.4.0"}}
+    coresys.docker.images.inspect.return_value = img_data = (
+        coresys.docker.images.inspect.return_value
+        | {"Config": {"Labels": {"io.hass.version": "2024.4.0"}}}
+    )
+    container.attrs |= img_data
-    await plugin.load()
+    with patch.object(DockerAPI, "pull_image", return_value=img_data) as pull_image:
+        await plugin.load()
+    pull_image.assert_called_once_with(
+        ANY, correct_image, "2024.4.0", platform="linux/amd64"
+    )
     container.remove.assert_called_once_with(force=True, v=True)
-    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
-        "image": f"{old_image}:latest",
-        "force": True,
-    }
-    assert coresys.docker.images.remove.call_args_list[1].kwargs == {
-        "image": f"{old_image}:2024.4.0",
-        "force": True,
-    }
-    coresys.docker.pull_image.assert_called_once_with(
-        ANY, correct_image, "2024.4.0", platform="linux/amd64"
-    )
+    assert coresys.docker.images.delete.call_args_list[0] == call(
+        f"{old_image}:latest",
+        force=True,
+    )
+    assert coresys.docker.images.delete.call_args_list[1] == call(
+        f"{old_image}:2024.4.0",
+        force=True,
+    )
     assert plugin.image == correct_image
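The `patch.object(DockerAPI, "pull_image", ...)` pattern above replaces the method at class level, so every instance created during `plugin.load()` sees the mock and no real pull happens. A self-contained sketch of the pattern (this `DockerAPI` is a dummy class, not Supervisor's):

```python
from unittest.mock import patch

class DockerAPI:
    """Dummy stand-in; a real implementation would hit the network."""

    def pull_image(self, image, version, platform=None):
        raise RuntimeError("no network access in tests")

api = DockerAPI()
with patch.object(
    DockerAPI, "pull_image", return_value={"Id": "sha256:new"}
) as pull_image:
    # The class attribute is now a MagicMock, so no real pull happens.
    result = api.pull_image("ghcr.io/example/image", "2024.4.0", platform="linux/amd64")

# The mock records the call even after the context manager exits.
pull_image.assert_called_once_with(
    "ghcr.io/example/image", "2024.4.0", platform="linux/amd64"
)
assert result == {"Id": "sha256:new"}
```

Patching at the class rather than an instance is what lets the test intercept calls from objects constructed inside the code under test.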


@@ -25,13 +25,18 @@ async def test_evaluation(coresys: CoreSys):
     assert docker_configuration.reason in coresys.resolution.unsupported
     coresys.resolution.unsupported.clear()
 
-    coresys.docker.info.storage = EXPECTED_STORAGE
+    coresys.docker.info.storage = EXPECTED_STORAGE[0]
     coresys.docker.info.logging = "unsupported"
     await docker_configuration()
     assert docker_configuration.reason in coresys.resolution.unsupported
     coresys.resolution.unsupported.clear()
 
-    coresys.docker.info.storage = EXPECTED_STORAGE
+    coresys.docker.info.storage = "overlay2"
     coresys.docker.info.logging = EXPECTED_LOGGING
     await docker_configuration()
     assert docker_configuration.reason not in coresys.resolution.unsupported
+
+    coresys.docker.info.storage = "overlayfs"
+    coresys.docker.info.logging = EXPECTED_LOGGING
+    await docker_configuration()
+    assert docker_configuration.reason not in coresys.resolution.unsupported
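The added block asserts that `overlayfs` is now accepted alongside `overlay2` (hence `EXPECTED_STORAGE` becoming a list). Assuming the evaluation reduces to membership tests on the storage and logging drivers, the logic is roughly as follows (all names here are illustrative, not Supervisor's actual constants):

```python
# Hypothetical constants mirroring what the test appears to exercise:
# the real values live in Supervisor's docker_configuration evaluation.
SUPPORTED_STORAGE = ("overlay2", "overlayfs")
SUPPORTED_LOGGING = ("journald",)  # assumption for illustration

def docker_config_unsupported(storage: str, logging: str) -> bool:
    """Flag the host unsupported if either driver is off-list."""
    return storage not in SUPPORTED_STORAGE or logging not in SUPPORTED_LOGGING

assert docker_config_unsupported("overlay2", "unsupported") is True
assert docker_config_unsupported("overlay2", "journald") is False
assert docker_config_unsupported("overlayfs", "journald") is False
```

This matches the test's three scenarios: bad logging flags the system, and either supported storage driver with good logging passes.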


@@ -1,8 +1,9 @@
 """Test fixup addon execute repair."""
 
-from unittest.mock import MagicMock, patch
+from http import HTTPStatus
+from unittest.mock import patch
 
-from docker.errors import NotFound
+import aiodocker
 import pytest
 
 from supervisor.addons.addon import Addon
@@ -17,7 +18,9 @@ from supervisor.resolution.fixups.addon_execute_repair import FixupAddonExecuteR
 
 async def test_fixup(docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon):
     """Test fixup rebuilds addon's container."""
-    docker.images.get.side_effect = NotFound("missing")
+    docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     install_addon_ssh.data["image"] = "test_image"
 
     addon_execute_repair = FixupAddonExecuteRepair(coresys)
@@ -41,7 +44,9 @@ async def test_fixup_max_auto_attempts(
     docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon
 ):
     """Test fixup stops being auto-applied after 5 failures."""
-    docker.images.get.side_effect = NotFound("missing")
+    docker.images.inspect.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     install_addon_ssh.data["image"] = "test_image"
 
     addon_execute_repair = FixupAddonExecuteRepair(coresys)
@@ -82,8 +87,6 @@ async def test_fixup_image_exists(
     docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon
 ):
     """Test fixup dismisses if image exists."""
-    docker.images.get.return_value = MagicMock()
-
     addon_execute_repair = FixupAddonExecuteRepair(coresys)
     assert addon_execute_repair.auto is True
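aiodocker signals a missing image by raising `aiodocker.DockerError` carrying the HTTP status, rather than dockerpy's typed `NotFound`, so the caller must check the status itself. A sketch of the "is the image present?" check the fixup presumably performs, using a stand-in error class so the snippet runs without aiodocker installed:

```python
from http import HTTPStatus
from unittest.mock import MagicMock

class FakeDockerError(Exception):
    """Stand-in for aiodocker.DockerError(status, data)."""

    def __init__(self, status, data):
        super().__init__(data.get("message"))
        self.status = status

def image_exists(images, name: str) -> bool:
    """Treat a 404 from inspect as 'image missing'; re-raise anything else."""
    try:
        images.inspect(name)
    except FakeDockerError as err:
        if err.status == HTTPStatus.NOT_FOUND:
            return False
        raise
    return True

# Mirror the test setup above: inspect raises a 404-style error.
images = MagicMock()
images.inspect.side_effect = FakeDockerError(
    HTTPStatus.NOT_FOUND, {"message": "missing"}
)
assert image_exists(images, "test_image") is False
```

The status check matters: a 500 from the daemon should propagate, not be mistaken for a missing image, which is why the sketch re-raises anything that is not `NOT_FOUND`.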