Compare commits

...

57 Commits

Author SHA1 Message Date
Stefan Agner
e20154670b Remove unrelated pytest 2025-10-29 14:12:55 +01:00
Stefan Agner
1fa2f33d4d Remove unnecessary comment 2025-10-29 14:08:49 +01:00
Stefan Agner
9dfb02f244 Remove content_trust unsupported reason 2025-10-28 18:18:59 +01:00
Stefan Agner
3412efad66 Remove content_trust from tests 2025-10-28 18:18:02 +01:00
Stefan Agner
a0890aca65 Drop codenotary from schema
Since we have "remove extra" in voluptuous, we can remove the
codenotary field from the addon schema.
2025-10-28 17:42:18 +01:00
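For context, voluptuous's `extra=vol.REMOVE_EXTRA` mode silently drops keys that are not declared in the schema, which is why a removed field like `codenotary` no longer needs an explicit schema entry. A plain-Python sketch of that behavior (not the Supervisor's actual schema code):

```python
# Plain-Python sketch of voluptuous's extra=vol.REMOVE_EXTRA behavior:
# keys outside the declared schema are dropped instead of raising an error.
def remove_extra(schema_keys: set[str], config: dict) -> dict:
    """Return config with keys outside the schema silently removed."""
    return {k: v for k, v in config.items() if k in schema_keys}

# A config still carrying the dropped 'codenotary' field simply loses it.
cleaned = remove_extra(
    {"name", "version"},
    {"name": "demo", "version": "1.0", "codenotary": "signer@example.com"},
)
```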
Stefan Agner
d308709706 Drop unused tests 2025-10-28 16:22:39 +01:00
Stefan Agner
5d80bf5c8f Fix security tests 2025-10-28 16:04:17 +01:00
Stefan Agner
e53f476234 Remove content_trust reference in bootstrap.py 2025-10-28 16:03:51 +01:00
Stefan Agner
d01ca410fb Remove source_mods/content_trust evaluations 2025-10-28 15:58:23 +01:00
Stefan Agner
5ef1c6822f Drop code sign test 2025-10-28 15:57:58 +01:00
Stefan Agner
0bb0b51021 Drop content trust
A cosign based signature verification will likely be named differently
to avoid confusion with existing implementations. For now, remove the
content trust option entirely.
2025-10-28 15:48:00 +01:00
Stefan Agner
caabf13230 Introduce APIGone exception for removed APIs
Introduce a new exception class APIGone to indicate that certain API
features have been removed and are no longer available. Update the
security integrity check endpoint to raise this new exception instead
of a generic APIError, providing clearer communication to clients that
the feature has been intentionally removed.
2025-10-28 15:48:00 +01:00
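As a rough illustration (not the Supervisor's actual class hierarchy), an exception like APIGone typically carries HTTP 410 Gone semantics, distinguishing an intentionally removed endpoint from a generic 400-style APIError:

```python
# Hypothetical sketch: an exception type signalling HTTP 410 Gone for
# intentionally removed API features, distinct from a generic APIError.
class APIError(RuntimeError):
    """Generic API error (HTTP 400)."""

    status = 400


class APIGone(APIError):
    """API feature was intentionally removed (HTTP 410)."""

    status = 410


err = APIGone("Integrity check feature has been removed.")
```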
Stefan Agner
28d9789430 Remove unused constants 2025-10-28 15:48:00 +01:00
Stefan Agner
e3428b46a3 Drop Codenotary specific environment and secret reference 2025-10-28 15:48:00 +01:00
Stefan Agner
0e6ca190e6 Remove Codenotary specific IssueType/SuggestionType 2025-10-28 15:48:00 +01:00
Stefan Agner
676cc29c00 Drop unnecessary comment 2025-10-28 15:48:00 +01:00
Stefan Agner
c80a69b000 Remove CodeNotary related exceptions and handling
Remove CodeNotary related exceptions and handling from the Docker
interface.
2025-10-28 15:48:00 +01:00
Stefan Agner
bf3ff95f66 Fix pytest 2025-10-28 15:48:00 +01:00
Stefan Agner
0030b03737 Fix pytest 2025-10-28 15:48:00 +01:00
Stefan Agner
2e808a7555 Drop unused tests 2025-10-28 15:48:00 +01:00
Stefan Agner
789053f2f0 Drop CodeNotary integrity fixups 2025-10-28 15:48:00 +01:00
Stefan Agner
4f78b19f5c Remove CodeNotary specific integrity checking
The current code is specific to how CodeNotary was doing integrity
checking. A future integrity checking mechanism likely will work
differently (e.g. through EROFS based containers). Remove the current
code to make way for a future implementation.
2025-10-28 15:48:00 +01:00
Stefan Agner
116319e1e3 Formally deprecate CodeNotary build config 2025-10-28 15:48:00 +01:00
dependabot[bot]
9c0174f1fd Bump actions/upload-artifact from 4.6.2 to 5.0.0 (#6269)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.2 to 5.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...330a01c490)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:43 +01:00
dependabot[bot]
dc3d8b9266 Bump actions/download-artifact from 5.0.0 to 6.0.0 (#6270)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](634f93cb29...018cc2cf5b)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:28:13 +01:00
dependabot[bot]
06d96db55b Bump orjson from 3.11.3 to 3.11.4 (#6271)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.3 to 3.11.4.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.3...3.11.4)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27 12:27:39 +01:00
Mike Degatano
131cc3b6d1 Prevent float drift error on progress update (#6267) 2025-10-24 09:37:19 -04:00
Michel Nederlof
b92f5976a3 If D-Bus to UDisks2 is not connected, return None (#6265)
When fetching host info without the UDisks2 D-Bus connection, an exception was raised, which broke add-on management via Home Assistant (and other GUI functions that need the host info status).
2025-10-24 10:22:02 +02:00
dependabot[bot]
370c961c9e Bump ruff from 0.14.1 to 0.14.2 (#6268)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.1 to 0.14.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.1...0.14.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-24 09:46:04 +02:00
dependabot[bot]
b903e1196f Bump sigstore/cosign-installer from 3.10.0 to 4.0.0 (#6256)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.10.0 to 4.0.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](d7543c93d8...faadad0cce)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-23 08:37:59 +02:00
dependabot[bot]
9f8e8ab15a Bump home-assistant/wheels from 2025.09.1 to 2025.10.0 (#6260)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:17:38 +02:00
dependabot[bot]
56bffc839b Bump aiohttp from 3.13.0 to 3.13.1 (#6261)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 23:14:01 +02:00
dependabot[bot]
952a553c3b Bump pylint from 4.0.1 to 4.0.2 (#6263) 2025-10-22 10:20:35 +02:00
dependabot[bot]
717f1c85f5 Bump sentry-sdk from 2.42.0 to 2.42.1 (#6264)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.42.0 to 2.42.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.42.0...2.42.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.42.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-22 09:25:14 +02:00
dependabot[bot]
ffd498a515 Bump sentry-sdk from 2.41.0 to 2.42.0 (#6253)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 23:30:45 +02:00
dependabot[bot]
35f0645cb9 Bump cryptography from 46.0.2 to 46.0.3 (#6255)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 22:04:32 +02:00
dependabot[bot]
15c6547382 Bump coverage from 7.10.7 to 7.11.0 (#6254)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 23:42:02 +02:00
dependabot[bot]
adefa242e5 Bump colorlog from 6.9.0 to 6.10.1 (#6258)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:14:38 +02:00
dependabot[bot]
583a8a82fb Bump ruff from 0.14.0 to 0.14.1 (#6257)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-17 09:12:47 +02:00
dependabot[bot]
322df15e73 Bump pylint from 4.0.0 to 4.0.1 (#6251)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-15 19:58:36 +02:00
dependabot[bot]
51490c8e41 Bump pylint from 3.3.9 to 4.0.0 and astroid from 3.3.11 to 4.0.1 (#6248)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-10-14 10:45:17 +02:00
dependabot[bot]
3c21a8b8ef Bump sentry-sdk from 2.40.0 to 2.41.0 (#6246)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.40.0 to 2.41.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.40.0...2.41.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-10 08:38:17 +02:00
Jan Čermák
ddb8588d77 Treat containerd snapshotter/overlayfs driver as supported (#6242)
* Treat containerd snapshotter/overlayfs driver as supported

With home-assistant/operating-system#4252 the storage driver would
change to "overlayfs". We don't want the system to be marked as
unsupported. It should be safe to treat it as supported even now, so add
it to the list of allowed values.

* Flip the logic

(note for self: don't forget to check for unstaged changes before push)

* Set valid storage for invalid logging test case
2025-10-09 18:47:06 +02:00
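The change amounts to widening an allow-list check. A hypothetical sketch (driver names from the commit message; the set and function names are invented for illustration):

```python
# Hypothetical sketch: both the classic overlay2 driver and the containerd
# snapshotter's "overlayfs" driver count as supported storage drivers.
SUPPORTED_STORAGE_DRIVERS = {"overlay2", "overlayfs"}


def storage_driver_supported(driver: str) -> bool:
    """Return True if the Docker storage driver is on the allow-list."""
    return driver in SUPPORTED_STORAGE_DRIVERS
```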
dependabot[bot]
81e46b20b8 Bump pyudev from 0.24.3 to 0.24.4 (#6244)
Bumps [pyudev](https://github.com/pyudev/pyudev) from 0.24.3 to 0.24.4.
- [Changelog](https://github.com/pyudev/pyudev/blob/master/CHANGES.rst)
- [Commits](https://github.com/pyudev/pyudev/compare/v0.24.3...v0.24.4)

---
updated-dependencies:
- dependency-name: pyudev
  dependency-version: 0.24.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:48:34 +02:00
dependabot[bot]
5041a1ed5c Bump types-docker from 7.1.0.20250916 to 7.1.0.20251009 (#6243)
Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20250916 to 7.1.0.20251009.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251009
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-09 09:38:12 +02:00
Stefan Agner
337731a55a Let only bugs go stale (#6239)
Let's apply the stale label only to issues that are bugs. Tasks are
more long-lived and should not be marked as stale automatically.
2025-10-08 15:56:35 +02:00
Stefan Agner
53a8044aff Add support for ulimit in addon config (#6206)
* Add support for ulimit in addon config

Similar to docker-compose, this adds support for setting ulimits
for addons via the addon config. This is useful e.g. for InfluxDB
which on its own does not support setting higher open file descriptor
limits, but recommends increasing limits on the host.

* Make soft and hard limit mandatory if ulimit is a dict
2025-10-08 12:43:12 +02:00
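The schema (shown in the validate.py diff below in this compare) accepts either a bare integer per ulimit name or a mapping with mandatory soft and hard limits. A plain-Python sketch of normalizing both formats, assuming docker-compose-style semantics where a bare integer sets both limits:

```python
# Sketch of the two ulimit formats the addon schema accepts:
# a bare integer, or a {"soft": ..., "hard": ...} mapping.
def normalize_ulimit(value) -> tuple[int, int]:
    """Return (soft, hard) for either accepted ulimit format."""
    if isinstance(value, int):
        # Assumption: a bare integer applies to both soft and hard limits,
        # as in docker-compose's short ulimit syntax.
        return value, value
    # Detailed format: both soft and hard limits are mandatory.
    return int(value["soft"]), int(value["hard"])


config = {"nofile": 65535, "core": {"soft": 0, "hard": 0}}
limits = {name: normalize_ulimit(v) for name, v in config.items()}
```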
Jan Čermák
c71553f37d Add AGENTS.md symlink (#6237)
Add AGENTS.md alongside CLAUDE.md for agents that support it. While
CLAUDE.md is still required and specific to Claude Code, AGENTS.md
covers various other agents that implement this proposed standard.

Core already adopted the same approach recently.
2025-10-08 10:44:49 +02:00
dependabot[bot]
c1eb97d8ab Bump ruff from 0.13.3 to 0.14.0 (#6238)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.3 to 0.14.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.3...0.14.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-08 09:47:19 +02:00
Mike Degatano
190b734332 Add progress reporting to addon, HA and Supervisor updates (#6195)
* Add progress reporting to addon, HA and Supervisor updates

* Fix assert in test

* Add progress to addon, core, supervisor updates/installs

* Fix double install bug in addons install

* Remove initial_install and re-arrange order of load
2025-10-07 16:54:11 +02:00
dependabot[bot]
559b6982a3 Bump aiohttp from 3.12.15 to 3.13.0 (#6234)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:47 +02:00
dependabot[bot]
301362e9e5 Bump attrs from 25.3.0 to 25.4.0 (#6235)
Bumps [attrs](https://github.com/sponsors/hynek) from 25.3.0 to 25.4.0.
- [Commits](https://github.com/sponsors/hynek/commits)

---
updated-dependencies:
- dependency-name: attrs
  dependency-version: 25.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 12:29:03 +02:00
dependabot[bot]
fc928d294c Bump sentry-sdk from 2.39.0 to 2.40.0 (#6233)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.39.0 to 2.40.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.39.0...2.40.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-07 09:47:03 +02:00
dependabot[bot]
f42aeb4937 Bump dbus-fast from 2.44.3 to 2.44.5 (#6232)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.3 to 2.44.5.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.3...v2.44.5)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:37:22 +02:00
dependabot[bot]
fd21886de9 Bump pylint from 3.3.8 to 3.3.9 (#6230)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 3.3.8 to 3.3.9.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v3.3.8...v3.3.9)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 3.3.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:36:03 +02:00
dependabot[bot]
e4bb415e30 Bump actions/stale from 10.0.0 to 10.1.0 (#6229)
Bumps [actions/stale](https://github.com/actions/stale) from 10.0.0 to 10.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](3a9db7e6a4...5f858e3efb)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 11:35:49 +02:00
dependabot[bot]
622dda5382 Bump ruff from 0.13.2 to 0.13.3 (#6228)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.13.2 to 0.13.3.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.13.2...0.13.3)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.13.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-03 14:42:13 +02:00
56 changed files with 705 additions and 1690 deletions

View File

@@ -107,7 +107,7 @@ jobs:
       # home-assistant/wheels doesn't support sha pinning
       - name: Build wheels
        if: needs.init.outputs.requirements == 'true'
-        uses: home-assistant/wheels@2025.09.1
+        uses: home-assistant/wheels@2025.10.0
        with:
          abi: cp313
          tag: musllinux_1_2
@@ -132,7 +132,7 @@ jobs:
      - name: Install Cosign
        if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
        with:
          cosign-release: "v2.5.3"
@@ -170,8 +170,6 @@ jobs:
            --target /data \
            --cosign \
            --generic ${{ needs.init.outputs.version }}
-        env:
-          CAS_API_KEY: ${{ secrets.CAS_TOKEN }}
  version:
    name: Update version
@@ -293,33 +291,6 @@ jobs:
            exit 1
          fi
-      - name: Check the Supervisor code sign
-        if: needs.init.outputs.publish == 'true'
-        run: |
-          echo "Enable Content-Trust"
-          test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
-          if [ "$test" != "ok" ]; then
-            exit 1
-          fi
-          echo "Run supervisor health check"
-          test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
-          if [ "$test" != "ok" ]; then
-            exit 1
-          fi
-          echo "Check supervisor unhealthy"
-          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
-          if [ "$test" != "" ]; then
-            exit 1
-          fi
-          echo "Check supervisor supported"
-          test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
-          if [[ "$test" =~ source_mods ]]; then
-            exit 1
-          fi
      - name: Create full backup
        id: backup
        run: |

View File

@@ -346,7 +346,7 @@ jobs:
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
      - name: Install Cosign
-        uses: sigstore/cosign-installer@d7543c93d881b35a8faa02e8e3605f69b7a1ce62 # v3.10.0
+        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
        with:
          cosign-release: "v2.5.3"
      - name: Restore Python virtual environment
@@ -386,7 +386,7 @@ jobs:
            -o console_output_style=count \
            tests
      - name: Upload coverage artifact
-        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: coverage
          path: .coverage
@@ -417,7 +417,7 @@ jobs:
          echo "Failed to restore Python virtual environment from cache"
          exit 1
      - name: Download all coverage artifacts
-        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
        with:
          name: coverage
          path: coverage/

View File

@@ -9,13 +9,14 @@ jobs:
stale: stale:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0 - uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with: with:
repo-token: ${{ secrets.GITHUB_TOKEN }} repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30 days-before-stale: 30
days-before-close: 7 days-before-close: 7
stale-issue-label: "stale" stale-issue-label: "stale"
exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security" exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
only-issue-types: "bug"
stale-issue-message: > stale-issue-message: >
There hasn't been any activity on this issue recently. Due to the There hasn't been any activity on this issue recently. Due to the
high number of incoming GitHub notifications, we have to clean some high number of incoming GitHub notifications, we have to clean some

AGENTS.md (symbolic link)
View File

@@ -0,0 +1 @@
+.github/copilot-instructions.md

View File

@@ -1,14 +1,14 @@
 aiodns==3.5.0
-aiohttp==3.12.15
+aiohttp==3.13.1
 atomicwrites-homeassistant==1.4.1
-attrs==25.3.0
+attrs==25.4.0
 awesomeversion==25.8.0
 blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.3
-colorlog==6.9.0
+colorlog==6.10.1
 cpe==1.3.1
-cryptography==46.0.2
+cryptography==46.0.3
 debugpy==1.8.17
 deepmerge==2.0
 dirhash==0.5.0
@@ -17,14 +17,14 @@ faust-cchardet==2.1.19
 gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.11.3
+orjson==3.11.4
 pulsectl==24.12.0
-pyudev==0.24.3
+pyudev==0.24.4
 PyYAML==6.0.3
 requests==2.32.5
 securetar==2025.2.1
-sentry-sdk==2.39.0
+sentry-sdk==2.42.1
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.3
+dbus-fast==2.44.5
 zlib-fast==0.2.1

View File

@@ -1,16 +1,16 @@
-astroid==3.3.11
+astroid==4.0.1
-coverage==7.10.7
+coverage==7.11.0
 mypy==1.18.2
 pre-commit==4.3.0
-pylint==3.3.8
+pylint==4.0.2
 pytest-aiohttp==1.1.0
 pytest-asyncio==0.25.2
 pytest-cov==7.0.0
 pytest-timeout==2.4.0
 pytest==8.4.2
-ruff==0.13.2
+ruff==0.14.2
 time-machine==2.19.0
-types-docker==7.1.0.20250916
+types-docker==7.1.0.20251009
 types-pyyaml==6.0.12.20250915
 types-requests==2.32.4.20250913
 urllib3==2.5.0

View File

@@ -226,6 +226,7 @@ class Addon(AddonModel):
         )
         await self._check_ingress_port()
         default_image = self._image(self.data)
         try:
             await self.instance.attach(version=self.version)
@@ -774,7 +775,6 @@ class Addon(AddonModel):
             raise AddonsError("Missing from store, cannot install!")
         await self.sys_addons.data.install(self.addon_store)
-        await self.load()

         def setup_data():
             if not self.path_data.is_dir():
@@ -797,6 +797,9 @@ class Addon(AddonModel):
             await self.sys_addons.data.uninstall(self)
             raise AddonsError() from err

+        # Finish initialization and set up listeners
+        await self.load()

         # Add to addon manager
         self.sys_addons.local[self.slug] = self
@@ -1510,13 +1513,6 @@ class Addon(AddonModel):
         _LOGGER.info("Finished restore for add-on %s", self.slug)
         return wait_for_start

-    def check_trust(self) -> Awaitable[None]:
-        """Calculate Addon docker content trust.
-
-        Return Coroutine.
-        """
-        return self.instance.check_trust()

     @Job(
         name="addon_restart_after_problem",
         throttle_period=WATCHDOG_THROTTLE_PERIOD,

View File

@@ -9,8 +9,6 @@ from typing import Self, Union
 from attr import evolve

-from supervisor.jobs.const import JobConcurrency

 from ..const import AddonBoot, AddonStartup, AddonState
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
@@ -21,6 +19,8 @@ from ..exceptions import (
     DockerError,
     HassioError,
 )
+from ..jobs import ChildJobSyncFilter
+from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job, JobCondition
 from ..resolution.const import ContextType, IssueType, SuggestionType
 from ..store.addon import AddonStore
@@ -182,6 +182,9 @@ class AddonManager(CoreSysAttributes):
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
         concurrency=JobConcurrency.QUEUE,
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
     )
     async def install(
         self, slug: str, *, validation_complete: asyncio.Event | None = None
@@ -229,6 +232,13 @@ class AddonManager(CoreSysAttributes):
         name="addon_manager_update",
         conditions=ADDON_UPDATE_CONDITIONS,
         on_condition=AddonsJobError,
+        # We assume for now the docker image pull is 100% of this task for progress
+        # allocation. But from a user perspective that isn't true. Other steps
+        # that take time which is not accounted for in progress include:
+        # partial backup, image cleanup, apparmor update, and addon restart
+        child_job_syncs=[
+            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
+        ],
     )
     async def update(
         self,
@@ -271,7 +281,10 @@ class AddonManager(CoreSysAttributes):
             addons=[addon.slug],
         )
-        return await addon.update()
+        task = await addon.update()
+        _LOGGER.info("Add-on '%s' successfully updated", slug)
+        return task

     @Job(
         name="addon_manager_rebuild",

View File

@@ -72,6 +72,7 @@ from ..const import (
ATTR_TYPE, ATTR_TYPE,
ATTR_UART, ATTR_UART,
ATTR_UDEV, ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL, ATTR_URL,
ATTR_USB, ATTR_USB,
ATTR_VERSION, ATTR_VERSION,
@@ -102,7 +103,6 @@ from .configuration import FolderMapping
from .const import ( from .const import (
ATTR_BACKUP, ATTR_BACKUP,
ATTR_BREAKING_VERSIONS, ATTR_BREAKING_VERSIONS,
ATTR_CODENOTARY,
ATTR_PATH, ATTR_PATH,
ATTR_READ_ONLY, ATTR_READ_ONLY,
AddonBackupMode, AddonBackupMode,
@@ -462,6 +462,11 @@ class AddonModel(JobGroup, ABC):
"""Return True if the add-on have his own udev.""" """Return True if the add-on have his own udev."""
return self.data[ATTR_UDEV] return self.data[ATTR_UDEV]
@property
def ulimits(self) -> dict[str, Any]:
"""Return ulimits configuration."""
return self.data[ATTR_ULIMITS]
@property @property
def with_kernel_modules(self) -> bool: def with_kernel_modules(self) -> bool:
"""Return True if the add-on access to kernel modules.""" """Return True if the add-on access to kernel modules."""
@@ -626,13 +631,8 @@ class AddonModel(JobGroup, ABC):
@property @property
def signed(self) -> bool: def signed(self) -> bool:
"""Return True if the image is signed.""" """Currently no signing support."""
return ATTR_CODENOTARY in self.data return False
@property
def codenotary(self) -> str | None:
"""Return Signer email address for CAS."""
return self.data.get(ATTR_CODENOTARY)
@property @property
def breaking_versions(self) -> list[AwesomeVersion]: def breaking_versions(self) -> list[AwesomeVersion]:

View File

@@ -88,6 +88,7 @@ from ..const import (
ATTR_TYPE, ATTR_TYPE,
ATTR_UART, ATTR_UART,
ATTR_UDEV, ATTR_UDEV,
ATTR_ULIMITS,
ATTR_URL, ATTR_URL,
ATTR_USB, ATTR_USB,
ATTR_USER, ATTR_USER,
@@ -206,6 +207,12 @@ def _warn_addon_config(config: dict[str, Any]):
name, name,
) )
if ATTR_CODENOTARY in config:
_LOGGER.warning(
"Add-on '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
name,
)
return config return config
@@ -416,13 +423,26 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce( vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
AddonBackupMode AddonBackupMode
), ),
vol.Optional(ATTR_CODENOTARY): vol.Email(),
vol.Optional(ATTR_OPTIONS, default={}): dict, vol.Optional(ATTR_OPTIONS, default={}): dict,
vol.Optional(ATTR_SCHEMA, default={}): vol.Any( vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
vol.Schema({str: SCHEMA_ELEMENT}), vol.Schema({str: SCHEMA_ELEMENT}),
False, False,
), ),
vol.Optional(ATTR_IMAGE): docker_image, vol.Optional(ATTR_IMAGE): docker_image,
vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
{str: vol.Coerce(int)}, # Simple format: {name: limit}
{
str: vol.Any(
vol.Coerce(int), # Simple format for individual entries
vol.Schema(
{ # Detailed format for individual entries
vol.Required("soft"): vol.Coerce(int),
vol.Required("hard"): vol.Coerce(int),
}
),
)
},
),
vol.Optional(ATTR_TIMEOUT, default=10): vol.All( vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
vol.Coerce(int), vol.Range(min=10, max=300) vol.Coerce(int), vol.Range(min=10, max=300)
), ),

View File

@@ -1,24 +1,20 @@
"""Init file for Supervisor Security RESTful API."""

import asyncio
import logging
from typing import Any

from aiohttp import web
import attr
import voluptuous as vol

from ..const import ATTR_CONTENT_TRUST, ATTR_FORCE_SECURITY, ATTR_PWNED
from supervisor.exceptions import APIGone
from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED
from ..coresys import CoreSysAttributes
from .utils import api_process, api_validate

_LOGGER: logging.Logger = logging.getLogger(__name__)

# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema(
    {
        vol.Optional(ATTR_PWNED): vol.Boolean(),
        vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
        vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
    }
)

@@ -31,7 +27,6 @@ class APISecurity(CoreSysAttributes):
    async def info(self, request: web.Request) -> dict[str, Any]:
        """Return Security information."""
        return {
            ATTR_CONTENT_TRUST: self.sys_security.content_trust,
            ATTR_PWNED: self.sys_security.pwned,
            ATTR_FORCE_SECURITY: self.sys_security.force,
        }

@@ -43,8 +38,6 @@ class APISecurity(CoreSysAttributes):
        if ATTR_PWNED in body:
            self.sys_security.pwned = body[ATTR_PWNED]
        if ATTR_CONTENT_TRUST in body:
            self.sys_security.content_trust = body[ATTR_CONTENT_TRUST]
        if ATTR_FORCE_SECURITY in body:
            self.sys_security.force = body[ATTR_FORCE_SECURITY]

@@ -54,6 +47,9 @@ class APISecurity(CoreSysAttributes):
    @api_process
    async def integrity_check(self, request: web.Request) -> dict[str, Any]:
        """Run backend integrity check."""
        result = await asyncio.shield(self.sys_security.integrity_check())
        return attr.asdict(result)
        """Run backend integrity check.

        CodeNotary integrity checking has been removed. This endpoint now returns
        an error indicating the feature is gone.
        """
        raise APIGone("Integrity check feature has been removed.")
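The behavior change can be sketched in isolation; `APIError` here is a simplified stand-in for Supervisor's real base class, not the actual implementation:

```python
class APIError(Exception):
    """Simplified stand-in for Supervisor's API base exception."""

    status = 400


class APIGone(APIError):
    """API is no longer available."""

    status = 410


def integrity_check():
    # The endpoint no longer runs a CodeNotary check; it only signals removal
    raise APIGone("Integrity check feature has been removed.")


try:
    integrity_check()
except APIError as err:
    print(err.status, err)  # → 410 Integrity check feature has been removed.
```

Because `APIGone` subclasses `APIError`, existing error handling keeps working while clients see an HTTP 410 instead of a generic 400.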

View File

@@ -16,14 +16,12 @@ from ..const import (
    ATTR_BLK_READ,
    ATTR_BLK_WRITE,
    ATTR_CHANNEL,
    ATTR_CONTENT_TRUST,
    ATTR_COUNTRY,
    ATTR_CPU_PERCENT,
    ATTR_DEBUG,
    ATTR_DEBUG_BLOCK,
    ATTR_DETECT_BLOCKING_IO,
    ATTR_DIAGNOSTICS,
    ATTR_FORCE_SECURITY,
    ATTR_HEALTHY,
    ATTR_ICON,
    ATTR_IP_ADDRESS,

@@ -69,8 +67,6 @@ SCHEMA_OPTIONS = vol.Schema(
        vol.Optional(ATTR_DEBUG): vol.Boolean(),
        vol.Optional(ATTR_DEBUG_BLOCK): vol.Boolean(),
        vol.Optional(ATTR_DIAGNOSTICS): vol.Boolean(),
        vol.Optional(ATTR_CONTENT_TRUST): vol.Boolean(),
        vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
        vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
        vol.Optional(ATTR_DETECT_BLOCKING_IO): vol.Coerce(DetectBlockingIO),
        vol.Optional(ATTR_COUNTRY): str,

View File

@@ -105,7 +105,6 @@ async def initialize_coresys() -> CoreSys:
    if coresys.dev:
        coresys.updater.channel = UpdateChannel.DEV
        coresys.security.content_trust = False

    # Convert datetime
    logging.Formatter.converter = lambda *args: coresys.now().timetuple()

View File

@@ -348,6 +348,7 @@ ATTR_TRANSLATIONS = "translations"
ATTR_TYPE = "type"
ATTR_UART = "uart"
ATTR_UDEV = "udev"
ATTR_ULIMITS = "ulimits"
ATTR_UNHEALTHY = "unhealthy"
ATTR_UNSAVED = "unsaved"
ATTR_UNSUPPORTED = "unsupported"

View File

@@ -318,7 +318,18 @@ class DockerAddon(DockerInterface):
            mem = 128 * 1024 * 1024
            limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))

        # Return None if no capabilities is present
        # Add configurable ulimits from add-on config
        for name, config in self.addon.ulimits.items():
            if isinstance(config, int):
                # Simple format: both soft and hard limits are the same
                limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
            elif isinstance(config, dict):
                # Detailed format: both soft and hard limits are mandatory
                soft = config["soft"]
                hard = config["hard"]
                limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))

        # Return None if no ulimits are present
        if limits:
            return limits
        return None

@@ -835,16 +846,6 @@ class DockerAddon(DockerInterface):
        ):
            self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)

    async def _validate_trust(self, image_id: str) -> None:
        """Validate trust of content."""
        if not self.addon.signed:
            return

        checksum = image_id.partition(":")[2]
        return await self.sys_security.verify_content(
            cast(str, self.addon.codenotary), checksum
        )

    @Job(
        name="docker_addon_hardware_events",
        conditions=[JobCondition.OS_AGENT],

View File

@@ -5,7 +5,7 @@ from ipaddress import IPv4Address
import logging
import re

from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from awesomeversion import AwesomeVersion
from docker.types import Mount

from ..const import LABEL_MACHINE

@@ -244,13 +244,3 @@ class DockerHomeAssistant(DockerInterface):
            self.image,
            self.sys_homeassistant.version,
        )

    async def _validate_trust(self, image_id: str) -> None:
        """Validate trust of content."""
        try:
            if self.version in {None, LANDINGPAGE} or self.version < _VERIFY_TRUST:
                return
        except AwesomeVersionCompareException:
            return

        await super()._validate_trust(image_id)

View File

@@ -31,15 +31,12 @@ from ..const import (
)

from ..coresys import CoreSys
from ..exceptions import (
    CodeNotaryError,
    CodeNotaryUntrusted,
    DockerAPIError,
    DockerError,
    DockerJobError,
    DockerLogOutOfOrder,
    DockerNotFound,
    DockerRequestError,
    DockerTrustError,
)
from ..jobs import SupervisorJob
from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobConcurrency
@@ -220,10 +217,12 @@ class DockerInterface(JobGroup, ABC):
        await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)

    def _process_pull_image_log(self, job_id: str, reference: PullLogEntry) -> None:
    def _process_pull_image_log(
        self, install_job_id: str, reference: PullLogEntry
    ) -> None:
        """Process events fired from a docker while pulling an image, filtered to a given job id."""
        if (
            reference.job_id != job_id
            reference.job_id != install_job_id
            or not reference.id
            or not reference.status
            or not (stage := PullImageLayerStage.from_status(reference.status))

@@ -237,21 +236,22 @@
                name="Pulling container image layer",
                initial_stage=stage.status,
                reference=reference.id,
                parent_id=job_id,
                parent_id=install_job_id,
                internal=True,
            )
            job.done = False
            return

        # Find our sub job to update details of
        for j in self.sys_jobs.jobs:
            if j.parent_id == job_id and j.reference == reference.id:
            if j.parent_id == install_job_id and j.reference == reference.id:
                job = j
                break

        # This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
        if not job:
            raise DockerLogOutOfOrder(
                f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {job_id} but could not find a matching job, skipping",
                f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
                _LOGGER.debug,
            )
@@ -303,6 +303,8 @@ class DockerInterface(JobGroup, ABC):
        # Our filters have all passed. Time to update the job
        # Only downloading and extracting have progress details. Use that to set extra
        # We'll leave it around on later stages as the total bytes may be useful after that stage
        # Enforce range to prevent float drift error
        progress = max(0, min(progress, 100))
        if (
            stage in {PullImageLayerStage.DOWNLOADING, PullImageLayerStage.EXTRACTING}
            and reference.progress_detail

@@ -325,10 +327,56 @@
            else job.extra,
        )

        # Once we have received a progress update for every child job, start to set status of the main one
        install_job = self.sys_jobs.get_job(install_job_id)
        layer_jobs = [
            job
            for job in self.sys_jobs.jobs
            if job.parent_id == install_job.uuid
            and job.name == "Pulling container image layer"
        ]

        # First set the total bytes to be downloaded/extracted on the main job
        if not install_job.extra:
            total = 0
            for job in layer_jobs:
                if not job.extra:
                    return
                total += job.extra["total"]
            install_job.extra = {"total": total}
        else:
            total = install_job.extra["total"]

        # Then determine total progress based on progress of each sub-job, factoring in size of each compared to total
        progress = 0.0
        stage = PullImageLayerStage.PULL_COMPLETE
        for job in layer_jobs:
            if not job.extra:
                return
            progress += job.progress * (job.extra["total"] / total)
            job_stage = PullImageLayerStage.from_status(cast(str, job.stage))

            if job_stage < PullImageLayerStage.EXTRACTING:
                stage = PullImageLayerStage.DOWNLOADING
            elif (
                stage == PullImageLayerStage.PULL_COMPLETE
                and job_stage < PullImageLayerStage.PULL_COMPLETE
            ):
                stage = PullImageLayerStage.EXTRACTING

        # Ensure progress is 100 at this point to prevent float drift
        if stage == PullImageLayerStage.PULL_COMPLETE:
            progress = 100

        # To reduce noise, limit updates to when result has changed by an entire percent or when stage changed
        if stage != install_job.stage or progress >= install_job.progress + 1:
            install_job.update(stage=stage.status, progress=max(0, min(progress, 100)))
    @Job(
        name="docker_interface_install",
        on_condition=DockerJobError,
        concurrency=JobConcurrency.GROUP_REJECT,
        internal=True,
    )
    async def install(
        self,

@@ -351,11 +399,11 @@
        # Try login if we have defined credentials
        await self._docker_login(image)

        job_id = self.sys_jobs.current.uuid
        curr_job_id = self.sys_jobs.current.uuid

        async def process_pull_image_log(reference: PullLogEntry) -> None:
            try:
                self._process_pull_image_log(job_id, reference)
                self._process_pull_image_log(curr_job_id, reference)
            except DockerLogOutOfOrder as err:
                # Send all these to sentry. Missing a few progress updates
                # shouldn't matter to users but matters to us
@@ -374,18 +422,6 @@ class DockerInterface(JobGroup, ABC):
                platform=MAP_ARCH[image_arch],
            )

            # Validate content
            try:
                await self._validate_trust(cast(str, docker_image.id))
            except CodeNotaryError:
                with suppress(docker.errors.DockerException):
                    await self.sys_run_in_executor(
                        self.sys_docker.images.remove,
                        image=f"{image}:{version!s}",
                        force=True,
                    )
                raise

            # Tag latest
            if latest:
                _LOGGER.info(

@@ -411,16 +447,6 @@
            raise DockerError(
                f"Unknown error with {image}:{version!s} -> {err!s}", _LOGGER.error
            ) from err
        except CodeNotaryUntrusted as err:
            raise DockerTrustError(
                f"Pulled image {image}:{version!s} failed on content-trust verification!",
                _LOGGER.critical,
            ) from err
        except CodeNotaryError as err:
            raise DockerTrustError(
                f"Error happened on Content-Trust check for {image}:{version!s}: {err!s}",
                _LOGGER.error,
            ) from err
        finally:
            if listener:
                self.sys_bus.remove_listener(listener)
@@ -629,7 +655,10 @@ class DockerInterface(JobGroup, ABC):
        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def update(
        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
        self,
        version: AwesomeVersion,
        image: str | None = None,
        latest: bool = False,
    ) -> None:
        """Update a Docker image."""
        image = image or self.image

@@ -755,24 +784,3 @@
        return self.sys_run_in_executor(
            self.sys_docker.container_run_inside, self.name, command
        )

    async def _validate_trust(self, image_id: str) -> None:
        """Validate trust of content."""
        checksum = image_id.partition(":")[2]
        return await self.sys_security.verify_own_content(checksum)

    @Job(
        name="docker_interface_check_trust",
        on_condition=DockerJobError,
        concurrency=JobConcurrency.GROUP_REJECT,
    )
    async def check_trust(self) -> None:
        """Check trust of exists Docker image."""
        try:
            image = await self.sys_run_in_executor(
                self.sys_docker.images.get, f"{self.image}:{self.version!s}"
            )
        except (docker.errors.DockerException, requests.RequestException):
            return

        await self._validate_trust(cast(str, image.id))
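The new aggregation in `_process_pull_image_log` weights each layer's progress by its share of the total bytes and clamps the result. A self-contained sketch of that math (hypothetical helper, not the Supervisor method):

```python
def aggregate_progress(layers: list[tuple[float, int]]) -> float:
    """Overall pull progress from (progress_percent, total_bytes) per layer.

    Each layer contributes in proportion to its byte share of the total;
    the result is clamped to [0, 100] to guard against float drift.
    """
    total = sum(size for _, size in layers)
    progress = sum(pct * (size / total) for pct, size in layers)
    return max(0.0, min(progress, 100.0))


# A 750-byte layer fully pulled plus a 250-byte layer half done:
print(aggregate_progress([(100.0, 750), (50.0, 250)]))  # → 87.5
```

Weighting by size keeps one huge layer from being drowned out by many tiny completed ones, which is why the real code stores `extra["total"]` per layer job before computing the parent's progress.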

View File

@@ -423,6 +423,12 @@ class APINotFound(APIError):
    status = 404


class APIGone(APIError):
    """API is no longer available."""

    status = 410


class APIAddonNotInstalled(APIError):
    """Not installed addon requested at addons API."""
@@ -577,21 +583,6 @@ class PwnedConnectivityError(PwnedError):
    """Connectivity errors while checking pwned passwords."""


# util/codenotary
class CodeNotaryError(HassioError):
    """Error general with CodeNotary."""


class CodeNotaryUntrusted(CodeNotaryError):
    """Error on untrusted content."""


class CodeNotaryBackendError(CodeNotaryError):
    """CodeNotary backend error happening."""


# util/whoami

View File

@@ -9,7 +9,12 @@ from typing import Any
from supervisor.resolution.const import UnhealthyReason

from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, DBusObjectError, HardwareNotFound
from ..exceptions import (
    DBusError,
    DBusNotConnectedError,
    DBusObjectError,
    HardwareNotFound,
)
from .const import UdevSubsystem
from .data import Device

@@ -207,6 +212,8 @@ class HwDisk(CoreSysAttributes):
        try:
            block_device = self.sys_dbus.udisks2.get_block_device_by_path(device_path)
            drive = self.sys_dbus.udisks2.get_drive(block_device.drive)
        except DBusNotConnectedError:
            return None
        except DBusObjectError:
            _LOGGER.warning(
                "Unable to find UDisks2 drive for device at %s", device_path.as_posix()

View File

@@ -28,6 +28,7 @@ from ..exceptions import (
    HomeAssistantUpdateError,
    JobException,
)
from ..jobs import ChildJobSyncFilter
from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobConcurrency, JobThrottle
from ..jobs.decorator import Job, JobCondition
from ..jobs.job_group import JobGroup

@@ -224,6 +225,13 @@ class HomeAssistantCore(JobGroup):
        ],
        on_condition=HomeAssistantJobError,
        concurrency=JobConcurrency.GROUP_REJECT,
        # We assume for now the docker image pull is 100% of this task. But from
        # a user perspective that isn't true. Other steps that take time which
        # is not accounted for in progress include: partial backup, image
        # cleanup, and Home Assistant restart
        child_job_syncs=[
            ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
        ],
    )
    async def update(
        self,

@@ -420,13 +428,6 @@ class HomeAssistantCore(JobGroup):
        """
        return self.instance.logs()

    def check_trust(self) -> Awaitable[None]:
        """Calculate HomeAssistant docker content trust.

        Return Coroutine.
        """
        return self.instance.check_trust()

    async def stats(self) -> DockerStats:
        """Return stats of Home Assistant."""
        try:

View File

@@ -282,8 +282,10 @@ class JobManager(FileConfiguration, CoreSysAttributes):
                # reporting shouldn't raise and break the active job
                continue

            progress = sync.starting_progress + (
                sync.progress_allocation * job_data["progress"]
            )
            progress = min(
                100,
                sync.starting_progress
                + (sync.progress_allocation * job_data["progress"]),
            )
            # Using max would always trigger on change even if progress was unchanged
            # pylint: disable-next=R1731

View File

@@ -76,13 +76,6 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
        """Return True if a task is in progress."""
        return self.instance.in_progress

    def check_trust(self) -> Awaitable[None]:
        """Calculate plugin docker content trust.

        Return Coroutine.
        """
        return self.instance.check_trust()

    def logs(self) -> Awaitable[bytes]:
        """Get docker plugin logs.

View File

@@ -1,59 +0,0 @@
"""Helpers to check supervisor trust."""

import logging

from ...const import CoreState
from ...coresys import CoreSys
from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
from ..const import ContextType, IssueType, UnhealthyReason
from .base import CheckBase

_LOGGER: logging.Logger = logging.getLogger(__name__)


def setup(coresys: CoreSys) -> CheckBase:
    """Check setup function."""
    return CheckSupervisorTrust(coresys)


class CheckSupervisorTrust(CheckBase):
    """CheckSystemTrust class for check."""

    async def run_check(self) -> None:
        """Run check if not affected by issue."""
        if not self.sys_security.content_trust:
            _LOGGER.warning(
                "Skipping %s, content_trust is globally disabled", self.slug
            )
            return

        try:
            await self.sys_supervisor.check_trust()
        except CodeNotaryUntrusted:
            self.sys_resolution.add_unhealthy_reason(UnhealthyReason.UNTRUSTED)
            self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
        except CodeNotaryError:
            pass

    async def approve_check(self, reference: str | None = None) -> bool:
        """Approve check if it is affected by issue."""
        try:
            await self.sys_supervisor.check_trust()
        except CodeNotaryError:
            return True
        return False

    @property
    def issue(self) -> IssueType:
        """Return a IssueType enum."""
        return IssueType.TRUST

    @property
    def context(self) -> ContextType:
        """Return a ContextType enum."""
        return ContextType.SUPERVISOR

    @property
    def states(self) -> list[CoreState]:
        """Return a list of valid states when this check can run."""
        return [CoreState.RUNNING, CoreState.STARTUP]

View File

@@ -39,7 +39,6 @@ class UnsupportedReason(StrEnum):
    APPARMOR = "apparmor"
    CGROUP_VERSION = "cgroup_version"
    CONNECTIVITY_CHECK = "connectivity_check"
    CONTENT_TRUST = "content_trust"
    DBUS = "dbus"
    DNS_SERVER = "dns_server"
    DOCKER_CONFIGURATION = "docker_configuration"

@@ -54,7 +53,6 @@
    PRIVILEGED = "privileged"
    RESTART_POLICY = "restart_policy"
    SOFTWARE = "software"
    SOURCE_MODS = "source_mods"
    SUPERVISOR_VERSION = "supervisor_version"
    SYSTEMD = "systemd"
    SYSTEMD_JOURNAL = "systemd_journal"

@@ -103,7 +101,6 @@ class IssueType(StrEnum):
    PWNED = "pwned"
    REBOOT_REQUIRED = "reboot_required"
    SECURITY = "security"
    TRUST = "trust"
    UPDATE_FAILED = "update_failed"
    UPDATE_ROLLBACK = "update_rollback"

@@ -115,7 +112,6 @@ class SuggestionType(StrEnum):
    CLEAR_FULL_BACKUP = "clear_full_backup"
    CREATE_FULL_BACKUP = "create_full_backup"
    DISABLE_BOOT = "disable_boot"
    EXECUTE_INTEGRITY = "execute_integrity"
    EXECUTE_REBOOT = "execute_reboot"
    EXECUTE_REBUILD = "execute_rebuild"
    EXECUTE_RELOAD = "execute_reload"

View File

@@ -1,34 +0,0 @@
"""Evaluation class for Content Trust."""

from ...const import CoreState
from ...coresys import CoreSys
from ..const import UnsupportedReason
from .base import EvaluateBase


def setup(coresys: CoreSys) -> EvaluateBase:
    """Initialize evaluation-setup function."""
    return EvaluateContentTrust(coresys)


class EvaluateContentTrust(EvaluateBase):
    """Evaluate system content trust level."""

    @property
    def reason(self) -> UnsupportedReason:
        """Return a UnsupportedReason enum."""
        return UnsupportedReason.CONTENT_TRUST

    @property
    def on_failure(self) -> str:
        """Return a string that is printed when self.evaluate is True."""
        return "System run with disabled trusted content security."

    @property
    def states(self) -> list[CoreState]:
        """Return a list of valid states when this evaluation can run."""
        return [CoreState.INITIALIZE, CoreState.SETUP, CoreState.RUNNING]

    async def evaluate(self) -> bool:
        """Run evaluation."""
        return not self.sys_security.content_trust

View File

@@ -8,7 +8,7 @@ from ..const import UnsupportedReason
from .base import EvaluateBase

EXPECTED_LOGGING = "journald"
EXPECTED_STORAGE = "overlay2"
EXPECTED_STORAGE = ("overlay2", "overlayfs")

_LOGGER: logging.Logger = logging.getLogger(__name__)

@@ -41,14 +41,18 @@ class EvaluateDockerConfiguration(EvaluateBase):
        storage_driver = self.sys_docker.info.storage
        logging_driver = self.sys_docker.info.logging

        if storage_driver != EXPECTED_STORAGE:
        is_unsupported = False

        if storage_driver not in EXPECTED_STORAGE:
            is_unsupported = True
            _LOGGER.warning(
                "Docker storage driver %s is not supported!", storage_driver
            )

        if logging_driver != EXPECTED_LOGGING:
            is_unsupported = True
            _LOGGER.warning(
                "Docker logging driver %s is not supported!", logging_driver
            )

        return storage_driver != EXPECTED_STORAGE or logging_driver != EXPECTED_LOGGING
        return is_unsupported
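With `EXPECTED_STORAGE` now a tuple, the fixed evaluation reduces to a membership check plus an equality check. Sketched standalone (the function name is illustrative, not the Supervisor method):

```python
EXPECTED_STORAGE = ("overlay2", "overlayfs")
EXPECTED_LOGGING = "journald"


def is_unsupported(storage_driver: str, logging_driver: str) -> bool:
    """Either an unexpected storage or logging driver marks the system unsupported."""
    return storage_driver not in EXPECTED_STORAGE or logging_driver != EXPECTED_LOGGING


print(is_unsupported("overlayfs", "journald"))  # → False
print(is_unsupported("btrfs", "journald"))      # → True
```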

View File

@@ -1,72 +0,0 @@
"""Evaluation class for Content Trust."""

import errno
import logging
from pathlib import Path

from ...const import CoreState
from ...coresys import CoreSys
from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
from ...utils.codenotary import calc_checksum_path_sourcecode
from ..const import ContextType, IssueType, UnhealthyReason, UnsupportedReason
from .base import EvaluateBase

_SUPERVISOR_SOURCE = Path("/usr/src/supervisor/supervisor")
_LOGGER: logging.Logger = logging.getLogger(__name__)


def setup(coresys: CoreSys) -> EvaluateBase:
    """Initialize evaluation-setup function."""
    return EvaluateSourceMods(coresys)


class EvaluateSourceMods(EvaluateBase):
    """Evaluate supervisor source modifications."""

    @property
    def reason(self) -> UnsupportedReason:
        """Return a UnsupportedReason enum."""
        return UnsupportedReason.SOURCE_MODS

    @property
    def on_failure(self) -> str:
        """Return a string that is printed when self.evaluate is True."""
        return "System detect unauthorized source code modifications."

    @property
    def states(self) -> list[CoreState]:
        """Return a list of valid states when this evaluation can run."""
        return [CoreState.RUNNING]

    async def evaluate(self) -> bool:
        """Run evaluation."""
        if not self.sys_security.content_trust:
            _LOGGER.warning("Disabled content-trust, skipping evaluation")
            return False

        # Calculate sume of the sourcecode
        try:
            checksum = await self.sys_run_in_executor(
                calc_checksum_path_sourcecode, _SUPERVISOR_SOURCE
            )
        except OSError as err:
            if err.errno == errno.EBADMSG:
                self.sys_resolution.add_unhealthy_reason(
                    UnhealthyReason.OSERROR_BAD_MESSAGE
                )
                self.sys_resolution.create_issue(
                    IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM
                )
            _LOGGER.error("Can't calculate checksum of source code: %s", err)
            return False

        # Validate checksum
        try:
            await self.sys_security.verify_own_content(checksum)
        except CodeNotaryUntrusted:
            return True
        except CodeNotaryError:
            pass

        return False

View File

@@ -1,67 +0,0 @@
"""Helpers to check and fix issues with free space."""

from datetime import timedelta
import logging

from ...coresys import CoreSys
from ...exceptions import ResolutionFixupError, ResolutionFixupJobError
from ...jobs.const import JobCondition, JobThrottle
from ...jobs.decorator import Job
from ...security.const import ContentTrustResult
from ..const import ContextType, IssueType, SuggestionType
from .base import FixupBase

_LOGGER: logging.Logger = logging.getLogger(__name__)


def setup(coresys: CoreSys) -> FixupBase:
    """Check setup function."""
    return FixupSystemExecuteIntegrity(coresys)


class FixupSystemExecuteIntegrity(FixupBase):
    """Storage class for fixup."""

    @Job(
        name="fixup_system_execute_integrity_process",
        conditions=[JobCondition.INTERNET_SYSTEM],
        on_condition=ResolutionFixupJobError,
        throttle_period=timedelta(hours=8),
        throttle=JobThrottle.THROTTLE,
    )
    async def process_fixup(self, reference: str | None = None) -> None:
        """Initialize the fixup class."""
        result = await self.sys_security.integrity_check()

        if ContentTrustResult.FAILED in (result.core, result.supervisor):
            raise ResolutionFixupError()

        for plugin in result.plugins:
            if plugin != ContentTrustResult.FAILED:
                continue
            raise ResolutionFixupError()

        for addon in result.addons:
            if addon != ContentTrustResult.FAILED:
                continue
            raise ResolutionFixupError()

    @property
    def suggestion(self) -> SuggestionType:
        """Return a SuggestionType enum."""
        return SuggestionType.EXECUTE_INTEGRITY

    @property
    def context(self) -> ContextType:
        """Return a ContextType enum."""
        return ContextType.SYSTEM

    @property
    def issues(self) -> list[IssueType]:
        """Return a IssueType enum list."""
        return [IssueType.TRUST]

    @property
    def auto(self) -> bool:
        """Return if a fixup can be apply as auto fix."""
        return True

View File

@@ -1,24 +0,0 @@
"""Security constants."""

from enum import StrEnum

import attr


class ContentTrustResult(StrEnum):
    """Content trust result enum."""

    PASS = "pass"
    ERROR = "error"
    FAILED = "failed"
    UNTESTED = "untested"


@attr.s
class IntegrityResult:
    """Result of a full integrity check."""

    supervisor: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
    core: ContentTrustResult = attr.ib(default=ContentTrustResult.UNTESTED)
    plugins: dict[str, ContentTrustResult] = attr.ib(default={})
    addons: dict[str, ContentTrustResult] = attr.ib(default={})

View File

@@ -4,27 +4,12 @@ from __future__ import annotations
import logging

from ..const import (
    ATTR_CONTENT_TRUST,
    ATTR_FORCE_SECURITY,
    ATTR_PWNED,
    FILE_HASSIO_SECURITY,
)
from ..const import ATTR_FORCE_SECURITY, ATTR_PWNED, FILE_HASSIO_SECURITY
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
    CodeNotaryError,
    CodeNotaryUntrusted,
    PwnedError,
    SecurityJobError,
)
from ..exceptions import PwnedError
from ..jobs.const import JobConcurrency
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.codenotary import cas_validate
from ..utils.common import FileConfiguration
from ..utils.pwned import check_pwned_password
from ..validate import SCHEMA_SECURITY_CONFIG
from .const import ContentTrustResult, IntegrityResult

_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -37,16 +22,6 @@ class Security(FileConfiguration, CoreSysAttributes):
        super().__init__(FILE_HASSIO_SECURITY, SCHEMA_SECURITY_CONFIG)
        self.coresys = coresys

    @property
    def content_trust(self) -> bool:
        """Return if content trust is enabled/disabled."""
        return self._data[ATTR_CONTENT_TRUST]

    @content_trust.setter
    def content_trust(self, value: bool) -> None:
        """Set content trust is enabled/disabled."""
        self._data[ATTR_CONTENT_TRUST] = value

    @property
    def force(self) -> bool:
        """Return if force security is enabled/disabled."""
@@ -67,30 +42,6 @@ class Security(FileConfiguration, CoreSysAttributes):
"""Set pwned is enabled/disabled.""" """Set pwned is enabled/disabled."""
self._data[ATTR_PWNED] = value self._data[ATTR_PWNED] = value
async def verify_content(self, signer: str, checksum: str) -> None:
"""Verify content on CAS."""
if not self.content_trust:
_LOGGER.warning("Disabled content-trust, skip validation")
return
try:
await cas_validate(signer, checksum)
except CodeNotaryUntrusted:
raise
except CodeNotaryError:
if self.force:
raise
self.sys_resolution.create_issue(
IssueType.TRUST,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_INTEGRITY],
)
return
async def verify_own_content(self, checksum: str) -> None:
"""Verify content from HA org."""
return await self.verify_content("notary@home-assistant.io", checksum)
async def verify_secret(self, pwned_hash: str) -> None:
"""Verify pwned state of a secret."""
if not self.pwned:
@@ -103,73 +54,3 @@ class Security(FileConfiguration, CoreSysAttributes):
if self.force:
raise
return
@Job(
name="security_manager_integrity_check",
conditions=[JobCondition.INTERNET_SYSTEM],
on_condition=SecurityJobError,
concurrency=JobConcurrency.REJECT,
)
async def integrity_check(self) -> IntegrityResult:
"""Run a full system integrity check of the platform.
We only allow to install trusted content.
This is a out of the band manual check.
"""
result: IntegrityResult = IntegrityResult()
if not self.content_trust:
_LOGGER.warning(
"Skipping integrity check, content_trust is globally disabled"
)
return result
# Supervisor
try:
await self.sys_supervisor.check_trust()
result.supervisor = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.supervisor = ContentTrustResult.ERROR
self.sys_resolution.create_issue(IssueType.TRUST, ContextType.SUPERVISOR)
except CodeNotaryError:
result.supervisor = ContentTrustResult.FAILED
# Core
try:
await self.sys_homeassistant.core.check_trust()
result.core = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.core = ContentTrustResult.ERROR
self.sys_resolution.create_issue(IssueType.TRUST, ContextType.CORE)
except CodeNotaryError:
result.core = ContentTrustResult.FAILED
# Plugins
for plugin in self.sys_plugins.all_plugins:
try:
await plugin.check_trust()
result.plugins[plugin.slug] = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.plugins[plugin.slug] = ContentTrustResult.ERROR
self.sys_resolution.create_issue(
IssueType.TRUST, ContextType.PLUGIN, reference=plugin.slug
)
except CodeNotaryError:
result.plugins[plugin.slug] = ContentTrustResult.FAILED
# Add-ons
for addon in self.sys_addons.installed:
if not addon.signed:
result.addons[addon.slug] = ContentTrustResult.UNTESTED
continue
try:
await addon.check_trust()
result.addons[addon.slug] = ContentTrustResult.PASS
except CodeNotaryUntrusted:
result.addons[addon.slug] = ContentTrustResult.ERROR
self.sys_resolution.create_issue(
IssueType.TRUST, ContextType.ADDON, reference=addon.slug
)
except CodeNotaryError:
result.addons[addon.slug] = ContentTrustResult.FAILED
return result


@@ -13,6 +13,8 @@ import aiohttp
from aiohttp.client_exceptions import ClientError
from awesomeversion import AwesomeVersion, AwesomeVersionException
from supervisor.jobs import ChildJobSyncFilter
from .const import (
ATTR_SUPERVISOR_INTERNET,
SUPERVISOR_VERSION,
@@ -23,8 +25,6 @@ from .coresys import CoreSys, CoreSysAttributes
from .docker.stats import DockerStats
from .docker.supervisor import DockerSupervisor
from .exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
DockerError,
HostAppArmorError,
SupervisorAppArmorError,
@@ -35,7 +35,6 @@ from .exceptions import (
from .jobs.const import JobCondition, JobThrottle
from .jobs.decorator import Job
from .resolution.const import ContextType, IssueType, UnhealthyReason
from .utils.codenotary import calc_checksum
from .utils.sentry import async_capture_exception
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -148,20 +147,6 @@ class Supervisor(CoreSysAttributes):
_LOGGER.error,
) from err
# Validate
try:
await self.sys_security.verify_own_content(calc_checksum(data))
except CodeNotaryUntrusted as err:
raise SupervisorAppArmorError(
"Content-Trust is broken for the AppArmor profile fetch!",
_LOGGER.critical,
) from err
except CodeNotaryError as err:
raise SupervisorAppArmorError(
f"CodeNotary error while processing AppArmor fetch: {err!s}",
_LOGGER.error,
) from err
# Load
temp_dir: TemporaryDirectory | None = None
@@ -195,6 +180,15 @@ class Supervisor(CoreSysAttributes):
if temp_dir:
await self.sys_run_in_executor(temp_dir.cleanup)
@Job(
name="supervisor_update",
# We assume for now the docker image pull is 100% of this task. But from
# a user perspective that isn't true. Other steps that take time but are
# not accounted for in the progress include: AppArmor update and restart
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update Supervisor version."""
version = version or self.latest_version or self.version
@@ -221,6 +215,7 @@ class Supervisor(CoreSysAttributes):
# Update container
_LOGGER.info("Update Supervisor to version %s", version)
try:
await self.instance.install(version, image=image)
await self.instance.update_start_tag(image, version)
@@ -261,13 +256,6 @@ class Supervisor(CoreSysAttributes):
""" """
return self.instance.logs() return self.instance.logs()
def check_trust(self) -> Awaitable[None]:
"""Calculate Supervisor docker content trust.
Return Coroutine.
"""
return self.instance.check_trust()
async def stats(self) -> DockerStats:
"""Return stats of Supervisor."""
try:
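The `supervisor_update` job above registers a `ChildJobSyncFilter` with `progress_allocation=1.0`, meaning the Docker image pull drives the whole parent progress bar. A minimal sketch of that scaling idea — the function name and signature here are illustrative, not the Supervisor's actual jobs code:

```python
# Hypothetical helper: map a child job's 0-100 progress into the slice of the
# parent's progress bar that was allocated to it.
def scale_child_progress(
    parent_base: float, allocation: float, child_progress: float
) -> float:
    """Return the parent's progress given a child at child_progress percent.

    With allocation=1.0 (as in supervisor_update), the child's progress maps
    directly onto the parent's bar, capped at 100.
    """
    return min(100.0, parent_base + allocation * child_progress)
```

This is why steps outside the pull (AppArmor update, restart) do not move the bar: the entire 0-100 range is already claimed by the install child job.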


@@ -31,14 +31,8 @@ from .const import (
UpdateChannel,
)
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
UpdaterError,
UpdaterJobError,
)
from .exceptions import UpdaterError, UpdaterJobError
from .jobs.decorator import Job, JobCondition
from .utils.codenotary import calc_checksum
from .utils.common import FileConfiguration
from .validate import SCHEMA_UPDATER_CONFIG
@@ -289,19 +283,6 @@ class Updater(FileConfiguration, CoreSysAttributes):
self.sys_bus.remove_listener(self._connectivity_listener)
self._connectivity_listener = None
# Validate
try:
await self.sys_security.verify_own_content(calc_checksum(data))
except CodeNotaryUntrusted as err:
raise UpdaterError(
"Content-Trust is broken for the version file fetch!", _LOGGER.critical
) from err
except CodeNotaryError as err:
raise UpdaterError(
f"CodeNotary error while processing version fetch: {err!s}",
_LOGGER.error,
) from err
# Parse data
try:
data = json.loads(data)


@@ -1,109 +0,0 @@
"""Small wrapper for CodeNotary."""
from __future__ import annotations
import asyncio
import hashlib
import json
import logging
from pathlib import Path
import shlex
from typing import Final
from dirhash import dirhash
from ..exceptions import CodeNotaryBackendError, CodeNotaryError, CodeNotaryUntrusted
from . import clean_env
_LOGGER: logging.Logger = logging.getLogger(__name__)
_CAS_CMD: str = (
"cas authenticate --signerID {signer} --silent --output json --hash {sum}"
)
_CACHE: set[tuple[str, str]] = set()
_ATTR_ERROR: Final = "error"
_ATTR_STATUS: Final = "status"
_FALLBACK_ERROR: Final = "Unknown CodeNotary backend issue"
def calc_checksum(data: str | bytes) -> str:
"""Generate checksum for CodeNotary."""
if isinstance(data, str):
return hashlib.sha256(data.encode()).hexdigest()
return hashlib.sha256(data).hexdigest()
def calc_checksum_path_sourcecode(folder: Path) -> str:
"""Calculate checksum for a path source code.
Need catch OSError.
"""
return dirhash(folder.as_posix(), "sha256", match=["*.py"])
# pylint: disable=unreachable
async def cas_validate(
signer: str,
checksum: str,
) -> None:
"""Validate data against CodeNotary."""
return
if (checksum, signer) in _CACHE:
return
# Generate command for request
command = shlex.split(_CAS_CMD.format(signer=signer, sum=checksum))
# Request notary authorization
_LOGGER.debug("Send cas command: %s", command)
try:
proc = await asyncio.create_subprocess_exec(
*command,
stdin=asyncio.subprocess.DEVNULL,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
env=clean_env(),
)
async with asyncio.timeout(15):
data, error = await proc.communicate()
except TimeoutError:
raise CodeNotaryBackendError(
"Timeout while processing CodeNotary", _LOGGER.warning
) from None
except OSError as err:
raise CodeNotaryError(
f"CodeNotary fatal error: {err!s}", _LOGGER.critical
) from err
# Check if Notarized
if proc.returncode != 0 and not data:
if error:
try:
error = error.decode("utf-8")
except UnicodeDecodeError as err:
raise CodeNotaryBackendError(_FALLBACK_ERROR, _LOGGER.warning) from err
if "not notarized" in error:
raise CodeNotaryUntrusted()
else:
error = _FALLBACK_ERROR
raise CodeNotaryBackendError(error, _LOGGER.warning)
# Parse data
try:
data_json = json.loads(data)
_LOGGER.debug("CodeNotary response with: %s", data_json)
except (json.JSONDecodeError, UnicodeDecodeError) as err:
raise CodeNotaryError(
f"Can't parse CodeNotary output: {data!s} - {err!s}", _LOGGER.error
) from err
if _ATTR_ERROR in data_json:
raise CodeNotaryBackendError(data_json[_ATTR_ERROR], _LOGGER.warning)
if data_json[_ATTR_STATUS] == 0:
_CACHE.add((checksum, signer))
else:
raise CodeNotaryUntrusted()


@@ -12,7 +12,6 @@ from .const import (
ATTR_AUTO_UPDATE,
ATTR_CHANNEL,
ATTR_CLI,
ATTR_CONTENT_TRUST,
ATTR_COUNTRY,
ATTR_DEBUG,
ATTR_DEBUG_BLOCK,
@@ -229,7 +228,6 @@ SCHEMA_INGRESS_CONFIG = vol.Schema(
# pylint: disable=no-value-for-parameter
SCHEMA_SECURITY_CONFIG = vol.Schema(
{
vol.Optional(ATTR_CONTENT_TRUST, default=True): vol.Boolean(),
vol.Optional(ATTR_PWNED, default=True): vol.Boolean(),
vol.Optional(ATTR_FORCE_SECURITY, default=False): vol.Boolean(),
},


@@ -419,3 +419,71 @@ def test_valid_schema():
config["schema"] = {"field": "invalid"} config["schema"] = {"field": "invalid"}
with pytest.raises(vol.Invalid): with pytest.raises(vol.Invalid):
assert vd.SCHEMA_ADDON_CONFIG(config) assert vd.SCHEMA_ADDON_CONFIG(config)
def test_ulimits_simple_format():
"""Test ulimits simple format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {"nofile": 65535, "nproc": 32768, "memlock": 134217728}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"] == 65535
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"] == 134217728
def test_ulimits_detailed_format():
"""Test ulimits detailed format validation."""
config = load_json_fixture("basic-addon-config.json")
config["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"nproc": 32768, # Mixed format should work
"memlock": {"soft": 67108864, "hard": 134217728},
}
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"]["nofile"]["soft"] == 20000
assert valid_config["ulimits"]["nofile"]["hard"] == 40000
assert valid_config["ulimits"]["nproc"] == 32768
assert valid_config["ulimits"]["memlock"]["soft"] == 67108864
assert valid_config["ulimits"]["memlock"]["hard"] == 134217728
def test_ulimits_empty_dict():
"""Test ulimits with empty dict (default)."""
config = load_json_fixture("basic-addon-config.json")
valid_config = vd.SCHEMA_ADDON_CONFIG(config)
assert valid_config["ulimits"] == {}
def test_ulimits_invalid_values():
"""Test ulimits with invalid values."""
config = load_json_fixture("basic-addon-config.json")
# Invalid string values
config["ulimits"] = {"nofile": "invalid"}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Invalid detailed format
config["ulimits"] = {"nofile": {"invalid_key": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing hard value in detailed format
config["ulimits"] = {"nofile": {"soft": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Missing soft value in detailed format
config["ulimits"] = {"nofile": {"hard": 1000}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
# Empty dict in detailed format
config["ulimits"] = {"nofile": {}}
with pytest.raises(vol.Invalid):
vd.SCHEMA_ADDON_CONFIG(config)
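The tests above pin down the two accepted ulimit shapes: a bare integer (soft and hard set together) or a dict that must contain both `soft` and `hard`. The real schema is built with voluptuous; this plain-Python sketch only illustrates the accepted shapes:

```python
# Illustrative validator mirroring the rules the tests exercise; not the
# actual voluptuous schema from the add-on validation module.
def validate_ulimit(value: object) -> object:
    """Accept an int, or a dict with exactly 'soft' and 'hard' int values."""
    if isinstance(value, int) and not isinstance(value, bool):
        return value
    if (
        isinstance(value, dict)
        and set(value) == {"soft", "hard"}
        and all(isinstance(v, int) for v in value.values())
    ):
        return value
    raise ValueError(f"invalid ulimit entry: {value!r}")
```

A dict with only `soft`, only `hard`, or no keys is rejected, matching the `pytest.raises(vol.Invalid)` cases above.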


@@ -2,16 +2,19 @@
import asyncio
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.backups.manager import BackupManager
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.homeassistant import DockerHomeAssistant
from supervisor.docker.interface import DockerInterface
from supervisor.homeassistant.api import APIState, HomeAssistantAPI
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
@@ -271,3 +274,96 @@ async def test_background_home_assistant_update_fails_fast(
assert resp.status == 400
body = await resp.json()
assert body["message"] == "Version 2025.8.3 is already installed"
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_home_assistant_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
coresys.homeassistant.version = AwesomeVersion("2025.8.0")
with (
patch.object(
DockerHomeAssistant,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.8.0")),
),
patch.object(
HomeAssistantAPI, "get_config", return_value={"components": ["frontend"]}
),
):
resp = await api_client.post("/core/update", json={"version": "2025.8.3"})
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "home_assistant_core_update"
]
assert events[:5] == [
{
"stage": None,
"progress": 0,
"done": None,
},
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]


@@ -17,16 +17,6 @@ async def test_api_security_options_force_security(api_client, coresys: CoreSys)
assert coresys.security.force
@pytest.mark.asyncio
async def test_api_security_options_content_trust(api_client, coresys: CoreSys):
"""Test security options content trust."""
assert coresys.security.content_trust
await api_client.post("/security/options", json={"content_trust": False})
assert not coresys.security.content_trust
@pytest.mark.asyncio
async def test_api_security_options_pwned(api_client, coresys: CoreSys):
"""Test security options pwned."""
@@ -41,11 +31,8 @@ async def test_api_security_options_pwned(api_client, coresys: CoreSys):
async def test_api_integrity_check(
api_client, coresys: CoreSys, supervisor_internet: AsyncMock
):
"""Test security integrity check.""" """Test security integrity check - now deprecated."""
coresys.security.content_trust = False
resp = await api_client.post("/security/integrity") resp = await api_client.post("/security/integrity")
result = await resp.json()
assert result["data"]["core"] == "untested" # CodeNotary integrity check has been removed, should return 410 Gone
assert result["data"]["supervisor"] == "untested" assert resp.status == 410
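Per the commit messages, the removed endpoint now raises a new `APIGone` exception so clients learn the feature was removed on purpose rather than failing generically. A minimal sketch of that pattern — class names follow the commits, but the attribute layout here is illustrative, not the actual Supervisor code:

```python
# Hypothetical shape of the exception hierarchy: a base API error rendered
# as HTTP 400, and a subclass rendered as 410 Gone for removed endpoints.
class APIError(Exception):
    """Base API error, mapped to HTTP 400 by the request handler."""

    status = 400


class APIGone(APIError):
    """Raised for APIs that were intentionally removed (HTTP 410 Gone)."""

    status = 410
```

Because `APIGone` subclasses `APIError`, existing generic error handling keeps working while clients that check the status code can distinguish "gone" from "bad request".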


@@ -13,12 +13,13 @@ from supervisor.addons.addon import Addon
from supervisor.arch import CpuArch
from supervisor.backups.manager import BackupManager
from supervisor.config import CoreConfig
from supervisor.const import AddonState, CoreState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.interface import DockerInterface
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.homeassistant.const import WSEvent
from supervisor.homeassistant.module import HomeAssistant
from supervisor.store.addon import AddonStore
from supervisor.store.repository import Repository
@@ -709,3 +710,101 @@ async def test_api_store_addons_addon_availability_installed_addon(
assert (
"requires Home Assistant version 2023.1.1 or greater" in result["message"]
)
@pytest.mark.parametrize(
("action", "job_name", "addon_slug"),
[
("install", "addon_manager_install", "local_ssh"),
("update", "addon_manager_update", "local_example"),
],
)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_addon_install_update(
api_client: TestClient,
coresys: CoreSys,
ha_ws_client: AsyncMock,
install_addon_example: Addon,
action: str,
job_name: str,
addon_slug: str,
):
"""Test progress updates sent to Home Assistant for installs/updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
coresys.arch._supported_arch = ["amd64"] # pylint: disable=protected-access
install_addon_example.data_store["version"] = AwesomeVersion("2.0.0")
with (
patch.object(Addon, "load"),
patch.object(Addon, "need_build", new=PropertyMock(return_value=False)),
patch.object(Addon, "latest_need_build", new=PropertyMock(return_value=False)),
):
resp = await api_client.post(f"/store/addons/{addon_slug}/{action}")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == job_name
and evt.args[0]["data"]["data"]["reference"] == addon_slug
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]

View File

@@ -2,17 +2,24 @@
# pylint: disable=protected-access
import time
from unittest.mock import AsyncMock, MagicMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockingError
import pytest
from supervisor.const import CoreState
from supervisor.core import Core
from supervisor.coresys import CoreSys
from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
from supervisor.homeassistant.const import WSEvent
from supervisor.store.repository import Repository
from supervisor.supervisor import Supervisor
from supervisor.updater import Updater
from tests.api import common_test_api_advanced_logs
from tests.common import load_json_fixture
from tests.dbus_service_mocks.base import DBusServiceMock
from tests.dbus_service_mocks.os_agent import OSAgent as OSAgentService
@@ -316,3 +323,97 @@ async def test_api_supervisor_options_blocking_io(
# This should not raise blocking error anymore
time.sleep(0)
@pytest.mark.usefixtures("tmp_supervisor_data")
async def test_api_progress_updates_supervisor_update(
api_client: TestClient, coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test progress updates sent to Home Assistant for updates."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
with (
patch.object(
Supervisor,
"version",
new=PropertyMock(return_value=AwesomeVersion("2025.08.0")),
),
patch.object(
Updater,
"version_supervisor",
new=PropertyMock(return_value=AwesomeVersion("2025.08.3")),
),
patch.object(
Updater, "image_supervisor", new=PropertyMock(return_value="supervisor")
),
patch.object(Supervisor, "update_apparmor"),
patch.object(Core, "stop"),
):
resp = await api_client.post("/supervisor/update")
assert resp.status == 200
events = [
{
"stage": evt.args[0]["data"]["data"]["stage"],
"progress": evt.args[0]["data"]["data"]["progress"],
"done": evt.args[0]["data"]["data"]["done"],
}
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0]
and evt.args[0]["data"]["event"] == WSEvent.JOB
and evt.args[0]["data"]["data"]["name"] == "supervisor_update"
]
assert events[:4] == [
{
"stage": None,
"progress": 0,
"done": False,
},
{
"stage": None,
"progress": 0.1,
"done": False,
},
{
"stage": None,
"progress": 1.2,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": False,
},
{
"stage": None,
"progress": 100,
"done": True,
},
]


@@ -503,3 +503,93 @@ async def test_addon_new_device_no_haos(
await install_addon_ssh.stop()
assert coresys.resolution.issues == []
assert coresys.resolution.suggestions == []
async def test_ulimits_integration(
coresys: CoreSys,
install_addon_ssh: Addon,
):
"""Test ulimits integration with Docker addon."""
docker_addon = DockerAddon(coresys, install_addon_ssh)
# Test default case (no ulimits, no realtime)
assert docker_addon.ulimits is None
# Test with realtime enabled (should have built-in ulimits)
install_addon_ssh.data["realtime"] = True
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
# Check for rtprio limit
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
assert rtprio_limit.soft == 90
assert rtprio_limit.hard == 99
# Check for memlock limit
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 128 * 1024 * 1024
assert memlock_limit.hard == 128 * 1024 * 1024
# Test with configurable ulimits (simple format)
install_addon_ssh.data["realtime"] = False
install_addon_ssh.data["ulimits"] = {"nofile": 65535, "nproc": 32768}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
nproc_limit = next((u for u in ulimits if u.name == "nproc"), None)
assert nproc_limit is not None
assert nproc_limit.soft == 32768
assert nproc_limit.hard == 32768
# Test with configurable ulimits (detailed format)
install_addon_ssh.data["ulimits"] = {
"nofile": {"soft": 20000, "hard": 40000},
"memlock": {"soft": 67108864, "hard": 134217728},
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert len(ulimits) == 2
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 20000
assert nofile_limit.hard == 40000
memlock_limit = next((u for u in ulimits if u.name == "memlock"), None)
assert memlock_limit is not None
assert memlock_limit.soft == 67108864
assert memlock_limit.hard == 134217728
# Test mixed format and realtime (realtime + custom ulimits)
install_addon_ssh.data["realtime"] = True
install_addon_ssh.data["ulimits"] = {
"nofile": 65535,
"core": {"soft": 0, "hard": 0}, # Disable core dumps
}
ulimits = docker_addon.ulimits
assert ulimits is not None
assert (
len(ulimits) == 4
) # rtprio, memlock (from realtime) + nofile, core (from config)
# Check realtime limits still present
rtprio_limit = next((u for u in ulimits if u.name == "rtprio"), None)
assert rtprio_limit is not None
# Check custom limits added
nofile_limit = next((u for u in ulimits if u.name == "nofile"), None)
assert nofile_limit is not None
assert nofile_limit.soft == 65535
assert nofile_limit.hard == 65535
core_limit = next((u for u in ulimits if u.name == "core"), None)
assert core_limit is not None
assert core_limit.soft == 0
assert core_limit.hard == 0
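The integration test above asserts that a bare integer expands to identical soft and hard limits, while the detailed form passes both through. The real `DockerAddon.ulimits` property builds `docker.types.Ulimit` objects; this dependency-free sketch mirrors only that expansion logic:

```python
# Illustrative stand-in for docker.types.Ulimit and the config expansion the
# test verifies; not the actual DockerAddon implementation.
from dataclasses import dataclass


@dataclass
class Ulimit:
    name: str
    soft: int
    hard: int


def build_ulimits(config: dict) -> list[Ulimit]:
    """Expand add-on ulimits config into explicit soft/hard entries."""
    limits = []
    for name, value in config.items():
        if isinstance(value, int):
            # Simple format: one number sets soft and hard together
            limits.append(Ulimit(name, value, value))
        else:
            # Detailed format: {"soft": ..., "hard": ...}
            limits.append(Ulimit(name, value["soft"], value["hard"]))
    return limits
```

With `realtime: true`, the built-in `rtprio` and `memlock` entries would simply be prepended to this list, which is why the mixed-format case above yields four limits.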


@@ -26,21 +26,11 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.homeassistant.const import WSEvent
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from tests.common import load_json_fixture
@pytest.fixture(autouse=True)
def mock_verify_content(coresys: CoreSys):
"""Mock verify_content utility during tests."""
with patch.object(
coresys.security, "verify_content", return_value=None
) as verify_content:
yield verify_content
@pytest.mark.parametrize(
"cpu_arch, platform",
[
@@ -417,196 +407,17 @@ async def test_install_fires_progress_events(
]
async def test_install_sends_progress_to_home_assistant(
coresys: CoreSys, test_docker_interface: DockerInterface, ha_ws_client: AsyncMock
):
"""Test progress events are sent as job updates to Home Assistant."""
coresys.core.set_state(CoreState.RUNNING)
coresys.docker.docker.api.pull.return_value = load_json_fixture(
"docker_pull_image_log.json"
)
with (
patch.object(
type(coresys.supervisor), "arch", PropertyMock(return_value="i386")
),
):
# Schedule job so we can listen for the end. Then we can assert against the WS mock
event = asyncio.Event()
job, install_task = coresys.jobs.schedule_job(
test_docker_interface.install,
JobSchedulerOptions(),
AwesomeVersion("1.2.3"),
"test",
)
async def listen_for_job_end(reference: SupervisorJob):
if reference.uuid != job.uuid:
return
event.set()
coresys.bus.register_event(BusEvent.SUPERVISOR_JOB_END, listen_for_job_end)
await install_task
await event.wait()
events = [
evt.args[0]["data"]["data"]
for evt in ha_ws_client.async_send_command.call_args_list
if "data" in evt.args[0] and evt.args[0]["data"]["event"] == WSEvent.JOB
]
assert events[0]["name"] == "docker_interface_install"
assert events[0]["uuid"] == job.uuid
assert events[0]["done"] is None
assert events[1]["name"] == "docker_interface_install"
assert events[1]["uuid"] == job.uuid
assert events[1]["done"] is False
assert events[-1]["name"] == "docker_interface_install"
assert events[-1]["uuid"] == job.uuid
assert events[-1]["done"] is True
def make_sub_log(layer_id: str):
return [
{
"stage": evt["stage"],
"progress": evt["progress"],
"done": evt["done"],
"extra": evt["extra"],
}
for evt in events
if evt["name"] == "Pulling container image layer"
and evt["reference"] == layer_id
and evt["parent_id"] == job.uuid
]
layer_1_log = make_sub_log("1e214cd6d7d0")
layer_2_log = make_sub_log("1a38e1d5e18d")
assert len(layer_1_log) == 20
assert len(layer_2_log) == 19
assert len(events) == 42
assert layer_1_log == [
{"stage": "Pulling fs layer", "progress": 0, "done": False, "extra": None},
{
"stage": "Downloading",
"progress": 0.1,
"done": False,
"extra": {"current": 539462, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.6,
"done": False,
"extra": {"current": 4864838, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 0.9,
"done": False,
"extra": {"current": 7552896, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 1.2,
"done": False,
"extra": {"current": 10252544, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 2.9,
"done": False,
"extra": {"current": 25369792, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 11.9,
"done": False,
"extra": {"current": 103619904, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 26.1,
"done": False,
"extra": {"current": 227726144, "total": 436480882},
},
{
"stage": "Downloading",
"progress": 49.6,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Verifying Checksum",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Download complete",
"progress": 50,
"done": False,
"extra": {"current": 433170048, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 50.1,
"done": False,
"extra": {"current": 557056, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 60.3,
"done": False,
"extra": {"current": 89686016, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 70.0,
"done": False,
"extra": {"current": 174358528, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 80.0,
"done": False,
"extra": {"current": 261816320, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 88.4,
"done": False,
"extra": {"current": 334790656, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 94.0,
"done": False,
"extra": {"current": 383811584, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 99.9,
"done": False,
"extra": {"current": 435617792, "total": 436480882},
},
{
"stage": "Extracting",
"progress": 100.0,
"done": False,
"extra": {"current": 436480882, "total": 436480882},
},
{
"stage": "Pull complete",
"progress": 100.0,
"done": True,
"extra": {"current": 436480882, "total": 436480882},
},
]
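The expected layer log above implies a simple two-phase mapping: downloading fills 0-50% of the layer job and extracting fills 50-100%. A minimal sketch of that mapping (an assumption reconstructed from the expected values above, not the Supervisor implementation):

```python
TOTAL = 436480882  # layer size used by the fixture above


def job_progress(stage: str, current: int, total: int = TOTAL) -> float:
    """Map a layer's current/total bytes onto the 0-100 job scale.

    Assumed split, matching the expected events above: "Downloading"
    covers 0-50% and "Extracting" covers the remaining 50-100%.
    """
    half = current / total * 50
    return round(half if stage == "Downloading" else 50 + half, 1)


# Reproduces entries from the expected log above, e.g.:
# job_progress("Downloading", 433170048) -> 49.6
# job_progress("Extracting", 261816320) -> 80.0
```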
async def test_install_progress_rounding_does_not_cause_misses(
    coresys: CoreSys,
    test_docker_interface: DockerInterface,
    ha_ws_client: AsyncMock,
    capture_exception: Mock,
):
    """Test extremely close progress events do not create rounding issues."""
    coresys.core.set_state(CoreState.RUNNING)
    # Numbers chosen to create a rounding issue with the original code, where a
    # progress update came in with a value between the actual previous value and
    # what it was rounded to. It should not raise an out-of-order exception.
    coresys.docker.docker.api.pull.return_value = [
        {
            "status": "Pulling from home-assistant/odroid-n2-homeassistant",
@@ -671,65 +482,7 @@ async def test_install_progress_rounding_does_not_cause_misses(
    await install_task
    await event.wait()
    capture_exception.assert_not_called()
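The comment in this test describes the hazard it guards against: if the tracker stores the rounded progress value and compares new raw values against it, a legitimate update can appear to move backwards. A standalone sketch of that hazard with illustrative byte counts (a simplification, not the Supervisor code):

```python
# Illustrative byte counts in the spirit of this test's scenario; the mapping
# of download progress onto 0-50% is an assumption, not the Supervisor code.
TOTAL = 436480882


def raw_download_progress(current: int) -> float:
    """Raw (unrounded) download progress on the 0-50% scale."""
    return current / TOTAL * 50


prev_raw = raw_download_progress(432700000)  # ~49.567
prev_rounded = round(prev_raw, 1)  # 49.6
next_raw = raw_download_progress(432800000)  # ~49.578

# Progress is monotonic on the raw values...
assert next_raw > prev_raw
# ...but comparing the next raw value against the *rounded* previous value
# makes it look like progress went backwards.
assert next_raw < prev_rounded
```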
@pytest.mark.parametrize(
@@ -779,10 +532,15 @@ async def test_install_raises_on_pull_error(
async def test_install_progress_handles_download_restart(
    coresys: CoreSys,
    test_docker_interface: DockerInterface,
    ha_ws_client: AsyncMock,
    capture_exception: Mock,
):
    """Test install handles docker progress events that include a download restart."""
    coresys.core.set_state(CoreState.RUNNING)
    # Fixture emulates a download restart as docker logs it.
    # An out-of-order log exception should not be raised.
    coresys.docker.docker.api.pull.return_value = load_json_fixture(
        "docker_pull_image_log_restart.json"
    )
@@ -810,106 +568,4 @@ async def test_install_progress_handles_download_restart(
    await install_task
    await event.wait()
    capture_exception.assert_not_called()
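The restart fixture drives a tracker that must accept a progress reset when docker reports a retry, rather than raising an out-of-order error. A minimal sketch of that behavior (assumed logic, not the actual Supervisor implementation):

```python
# Assumed logic (not the actual Supervisor implementation): a per-layer
# progress tracker that raises on regressing progress, except when docker
# reports a download restart, which legitimately resets the layer to 0.
class LayerProgress:
    def __init__(self) -> None:
        self.progress = 0.0

    def update(self, stage: str, progress: float) -> None:
        if stage == "Retrying download":
            self.progress = 0.0  # restart: reset instead of raising
            return
        if progress < self.progress:
            raise RuntimeError("progress out of order")
        self.progress = progress


layer = LayerProgress()
for stage, progress in [
    ("Downloading", 11.9),
    ("Downloading", 49.6),
    ("Retrying download", 0),
    ("Downloading", 11.9),  # accepted: the retry event reset the tracker
]:
    layer.update(stage, progress)
```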

tests/fixtures/bla/backup.json

@@ -0,0 +1 @@
{"slug":"d9c48f8b","version":2,"name":"test_consolidate","date":"2025-01-22T18:09:28.196333+00:00","type":"partial","supervisor_version":"2025.01.1.dev2104","extra":{},"homeassistant":null,"compressed":true,"addons":[],"docker":{"registries":{}},"protected":true,"crypto":"aes128","repositories":["https://github.com/hassio-addons/repository","https://github.com/music-assistant/home-assistant-addon","core","https://github.com/home-assistant/addons-development","https://github.com/esphome/home-assistant-addon","local"],"folders":["ssl"]}

tests/fixtures/bla/ssl.tar.gz

Binary file not shown.


@@ -376,3 +376,14 @@ async def test_try_get_nvme_life_time_missing_percent_used(
        coresys.config.path_supervisor
    )
    assert lifetime is None
async def test_try_get_nvme_life_time_dbus_not_connected(coresys: CoreSys):
"""Test getting lifetime info from an NVMe when DBUS is not connected."""
    # Force a disconnect by setting the udisks2 dbus connection to None.
coresys.dbus.udisks2.dbus = None
lifetime = await coresys.hardware.disk.get_disk_life_time(
coresys.config.path_supervisor
)
assert lifetime is None


@@ -181,7 +181,6 @@ async def test_reload_updater_triggers_supervisor_update(
    """Test an updater reload triggers a supervisor update if there is one."""
    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
    await coresys.core.set_state(CoreState.RUNNING)
coresys.security.content_trust = False
    with (
        patch.object(


@@ -17,7 +17,6 @@ from supervisor.exceptions import (
    AudioJobError,
    CliError,
    CliJobError,
CodeNotaryUntrusted,
    CoreDNSError,
    CoreDNSJobError,
    DockerError,
@@ -337,14 +336,12 @@ async def test_repair_failed(
        patch.object(
            DockerInterface, "arch", new=PropertyMock(return_value=CpuArch.AMD64)
        ),
        patch.object(DockerInterface, "install", side_effect=DockerError),
    ):
        await plugin.repair()

    capture_exception.assert_called_once()
    assert check_exception_chain(capture_exception.call_args[0][0], DockerError)
@pytest.mark.parametrize(


@@ -51,7 +51,6 @@ async def test_if_check_make_issue(coresys: CoreSys):
    """Test check for setup."""
    free_space = Issue(IssueType.FREE_SPACE, ContextType.SYSTEM)
    await coresys.core.set_state(CoreState.RUNNING)
coresys.security.content_trust = False
    with patch("shutil.disk_usage", return_value=(1, 1, 1)):
        await coresys.resolution.check.check_system()
@@ -63,7 +62,6 @@ async def test_if_check_cleanup_issue(coresys: CoreSys):
    """Test check for setup."""
    free_space = Issue(IssueType.FREE_SPACE, ContextType.SYSTEM)
    await coresys.core.set_state(CoreState.RUNNING)
coresys.security.content_trust = False
    with patch("shutil.disk_usage", return_value=(1, 1, 1)):
        await coresys.resolution.check.check_system()


@@ -1,96 +0,0 @@
"""Test Check Supervisor trust."""
# pylint: disable=import-error,protected-access
from unittest.mock import AsyncMock, patch
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.resolution.checks.supervisor_trust import CheckSupervisorTrust
from supervisor.resolution.const import IssueType, UnhealthyReason
async def test_base(coresys: CoreSys):
"""Test check basics."""
supervisor_trust = CheckSupervisorTrust(coresys)
assert supervisor_trust.slug == "supervisor_trust"
assert supervisor_trust.enabled
async def test_check(coresys: CoreSys):
"""Test check."""
supervisor_trust = CheckSupervisorTrust(coresys)
await coresys.core.set_state(CoreState.RUNNING)
assert len(coresys.resolution.issues) == 0
coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryError)
await supervisor_trust.run_check()
assert coresys.supervisor.check_trust.called
coresys.supervisor.check_trust = AsyncMock(return_value=None)
await supervisor_trust.run_check()
assert coresys.supervisor.check_trust.called
assert len(coresys.resolution.issues) == 0
coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
await supervisor_trust.run_check()
assert coresys.supervisor.check_trust.called
assert len(coresys.resolution.issues) == 1
assert coresys.resolution.issues[-1].type == IssueType.TRUST
assert UnhealthyReason.UNTRUSTED in coresys.resolution.unhealthy
async def test_approve(coresys: CoreSys):
"""Test check."""
supervisor_trust = CheckSupervisorTrust(coresys)
await coresys.core.set_state(CoreState.RUNNING)
coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
assert await supervisor_trust.approve_check()
coresys.supervisor.check_trust = AsyncMock(return_value=None)
assert not await supervisor_trust.approve_check()
async def test_with_global_disable(coresys: CoreSys, caplog):
"""Test when pwned is globally disabled."""
coresys.security.content_trust = False
supervisor_trust = CheckSupervisorTrust(coresys)
await coresys.core.set_state(CoreState.RUNNING)
assert len(coresys.resolution.issues) == 0
coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryUntrusted)
await supervisor_trust.run_check()
assert not coresys.security.verify_own_content.called
assert (
"Skipping supervisor_trust, content_trust is globally disabled" in caplog.text
)
async def test_did_run(coresys: CoreSys):
"""Test that the check ran as expected."""
supervisor_trust = CheckSupervisorTrust(coresys)
should_run = supervisor_trust.states
should_not_run = [state for state in CoreState if state not in should_run]
assert len(should_run) != 0
assert len(should_not_run) != 0
with patch(
"supervisor.resolution.checks.supervisor_trust.CheckSupervisorTrust.run_check",
return_value=None,
) as check:
for state in should_run:
await coresys.core.set_state(state)
await supervisor_trust()
check.assert_called_once()
check.reset_mock()
for state in should_not_run:
await coresys.core.set_state(state)
await supervisor_trust()
check.assert_not_called()
check.reset_mock()


@@ -1,46 +0,0 @@
"""Test evaluation base."""
# pylint: disable=import-error,protected-access
from unittest.mock import patch
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.resolution.evaluations.content_trust import EvaluateContentTrust
async def test_evaluation(coresys: CoreSys):
"""Test evaluation."""
job_conditions = EvaluateContentTrust(coresys)
await coresys.core.set_state(CoreState.SETUP)
await job_conditions()
assert job_conditions.reason not in coresys.resolution.unsupported
coresys.security.content_trust = False
await job_conditions()
assert job_conditions.reason in coresys.resolution.unsupported
async def test_did_run(coresys: CoreSys):
"""Test that the evaluation ran as expected."""
job_conditions = EvaluateContentTrust(coresys)
should_run = job_conditions.states
should_not_run = [state for state in CoreState if state not in should_run]
assert len(should_run) != 0
assert len(should_not_run) != 0
with patch(
"supervisor.resolution.evaluations.content_trust.EvaluateContentTrust.evaluate",
return_value=None,
) as evaluate:
for state in should_run:
await coresys.core.set_state(state)
await job_conditions()
evaluate.assert_called_once()
evaluate.reset_mock()
for state in should_not_run:
await coresys.core.set_state(state)
await job_conditions()
evaluate.assert_not_called()
evaluate.reset_mock()


@@ -25,13 +25,18 @@ async def test_evaluation(coresys: CoreSys):
    assert docker_configuration.reason in coresys.resolution.unsupported
    coresys.resolution.unsupported.clear()

    coresys.docker.info.storage = EXPECTED_STORAGE[0]
    coresys.docker.info.logging = "unsupported"
    await docker_configuration()
    assert docker_configuration.reason in coresys.resolution.unsupported
    coresys.resolution.unsupported.clear()

    coresys.docker.info.storage = "overlay2"
    coresys.docker.info.logging = EXPECTED_LOGGING
    await docker_configuration()
    assert docker_configuration.reason not in coresys.resolution.unsupported

    coresys.docker.info.storage = "overlayfs"
    coresys.docker.info.logging = EXPECTED_LOGGING
    await docker_configuration()
    assert docker_configuration.reason not in coresys.resolution.unsupported


@@ -1,89 +0,0 @@
"""Test evaluation base."""
# pylint: disable=import-error,protected-access
import errno
import os
from pathlib import Path
from unittest.mock import AsyncMock, patch
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.resolution.const import ContextType, IssueType
from supervisor.resolution.data import Issue
from supervisor.resolution.evaluations.source_mods import EvaluateSourceMods
async def test_evaluation(coresys: CoreSys):
"""Test evaluation."""
with patch(
"supervisor.resolution.evaluations.source_mods._SUPERVISOR_SOURCE",
Path(f"{os.getcwd()}/supervisor"),
):
sourcemods = EvaluateSourceMods(coresys)
await coresys.core.set_state(CoreState.RUNNING)
assert sourcemods.reason not in coresys.resolution.unsupported
coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryUntrusted)
await sourcemods()
assert sourcemods.reason in coresys.resolution.unsupported
coresys.security.verify_own_content = AsyncMock(side_effect=CodeNotaryError)
await sourcemods()
assert sourcemods.reason not in coresys.resolution.unsupported
coresys.security.verify_own_content = AsyncMock()
await sourcemods()
assert sourcemods.reason not in coresys.resolution.unsupported
async def test_did_run(coresys: CoreSys):
"""Test that the evaluation ran as expected."""
sourcemods = EvaluateSourceMods(coresys)
should_run = sourcemods.states
should_not_run = [state for state in CoreState if state not in should_run]
assert len(should_run) != 0
assert len(should_not_run) != 0
with patch(
"supervisor.resolution.evaluations.source_mods.EvaluateSourceMods.evaluate",
return_value=None,
) as evaluate:
for state in should_run:
await coresys.core.set_state(state)
await sourcemods()
evaluate.assert_called_once()
evaluate.reset_mock()
for state in should_not_run:
await coresys.core.set_state(state)
await sourcemods()
evaluate.assert_not_called()
evaluate.reset_mock()
async def test_evaluation_error(coresys: CoreSys):
"""Test error reading file during evaluation."""
sourcemods = EvaluateSourceMods(coresys)
await coresys.core.set_state(CoreState.RUNNING)
corrupt_fs = Issue(IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM)
assert sourcemods.reason not in coresys.resolution.unsupported
assert corrupt_fs not in coresys.resolution.issues
with patch(
"supervisor.utils.codenotary.dirhash",
side_effect=(err := OSError()),
):
err.errno = errno.EBUSY
await sourcemods()
assert sourcemods.reason not in coresys.resolution.unsupported
assert corrupt_fs in coresys.resolution.issues
assert coresys.core.healthy is True
coresys.resolution.dismiss_issue(corrupt_fs)
err.errno = errno.EBADMSG
await sourcemods()
assert sourcemods.reason not in coresys.resolution.unsupported
assert corrupt_fs in coresys.resolution.issues
assert coresys.core.healthy is False


@@ -1,69 +0,0 @@
"""Test evaluation base."""
# pylint: disable=import-error,protected-access
from datetime import timedelta
from unittest.mock import AsyncMock
import time_machine
from supervisor.coresys import CoreSys
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from supervisor.resolution.data import Issue, Suggestion
from supervisor.resolution.fixups.system_execute_integrity import (
FixupSystemExecuteIntegrity,
)
from supervisor.security.const import ContentTrustResult, IntegrityResult
from supervisor.utils.dt import utcnow
async def test_fixup(coresys: CoreSys, supervisor_internet: AsyncMock):
"""Test fixup."""
system_execute_integrity = FixupSystemExecuteIntegrity(coresys)
assert system_execute_integrity.auto
coresys.resolution.add_suggestion(
Suggestion(SuggestionType.EXECUTE_INTEGRITY, ContextType.SYSTEM)
)
coresys.resolution.add_issue(Issue(IssueType.TRUST, ContextType.SYSTEM))
coresys.security.integrity_check = AsyncMock(
return_value=IntegrityResult(
ContentTrustResult.PASS,
ContentTrustResult.PASS,
{"audio": ContentTrustResult.PASS},
)
)
await system_execute_integrity()
assert coresys.security.integrity_check.called
assert len(coresys.resolution.suggestions) == 0
assert len(coresys.resolution.issues) == 0
async def test_fixup_error(coresys: CoreSys, supervisor_internet: AsyncMock):
"""Test fixup."""
system_execute_integrity = FixupSystemExecuteIntegrity(coresys)
assert system_execute_integrity.auto
coresys.resolution.add_suggestion(
Suggestion(SuggestionType.EXECUTE_INTEGRITY, ContextType.SYSTEM)
)
coresys.resolution.add_issue(Issue(IssueType.TRUST, ContextType.SYSTEM))
coresys.security.integrity_check = AsyncMock(
return_value=IntegrityResult(
ContentTrustResult.FAILED,
ContentTrustResult.PASS,
{"audio": ContentTrustResult.PASS},
)
)
with time_machine.travel(utcnow() + timedelta(hours=24)):
await system_execute_integrity()
assert coresys.security.integrity_check.called
assert len(coresys.resolution.suggestions) == 1
assert len(coresys.resolution.issues) == 1


@@ -1,21 +1,15 @@
"""Test evaluations."""

from unittest.mock import Mock

from supervisor.const import CoreState
from supervisor.coresys import CoreSys


async def test_evaluate_system_error(coresys: CoreSys, capture_exception: Mock):
    """Test error while evaluating system."""
    await coresys.core.set_state(CoreState.RUNNING)

    await coresys.resolution.evaluate.evaluate_system()

    capture_exception.assert_not_called()


@@ -1,127 +0,0 @@
"""Testing handling with Security."""
from unittest.mock import AsyncMock, patch
import pytest
from supervisor.coresys import CoreSys
from supervisor.exceptions import CodeNotaryError, CodeNotaryUntrusted
from supervisor.security.const import ContentTrustResult
async def test_content_trust(coresys: CoreSys):
"""Test Content-Trust."""
with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
assert cas_validate.called
cas_validate.assert_called_once_with("test@mail.com", "ffffffffffffff")
with patch(
"supervisor.security.module.cas_validate", AsyncMock()
) as cas_validate:
await coresys.security.verify_own_content("ffffffffffffff")
assert cas_validate.called
cas_validate.assert_called_once_with(
"notary@home-assistant.io", "ffffffffffffff"
)
async def test_disabled_content_trust(coresys: CoreSys):
"""Test Content-Trust."""
coresys.security.content_trust = False
with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
assert not cas_validate.called
with patch("supervisor.security.module.cas_validate", AsyncMock()) as cas_validate:
await coresys.security.verify_own_content("ffffffffffffff")
assert not cas_validate.called
async def test_force_content_trust(coresys: CoreSys):
"""Force Content-Trust tests."""
with patch(
"supervisor.security.module.cas_validate",
AsyncMock(side_effect=CodeNotaryError),
) as cas_validate:
await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
assert cas_validate.called
cas_validate.assert_called_once_with("test@mail.com", "ffffffffffffff")
coresys.security.force = True
with (
patch(
"supervisor.security.module.cas_validate",
AsyncMock(side_effect=CodeNotaryError),
) as cas_validate,
pytest.raises(CodeNotaryError),
):
await coresys.security.verify_content("test@mail.com", "ffffffffffffff")
async def test_integrity_check_disabled(coresys: CoreSys):
"""Test integrity check with disabled content trust."""
coresys.security.content_trust = False
result = await coresys.security.integrity_check.__wrapped__(coresys.security)
assert result.core == ContentTrustResult.UNTESTED
assert result.supervisor == ContentTrustResult.UNTESTED
async def test_integrity_check(coresys: CoreSys, install_addon_ssh):
"""Test integrity check with content trust."""
coresys.homeassistant.core.check_trust = AsyncMock()
coresys.supervisor.check_trust = AsyncMock()
install_addon_ssh.check_trust = AsyncMock()
install_addon_ssh.data["codenotary"] = "test@example.com"
result = await coresys.security.integrity_check.__wrapped__(coresys.security)
assert result.core == ContentTrustResult.PASS
assert result.supervisor == ContentTrustResult.PASS
assert result.addons[install_addon_ssh.slug] == ContentTrustResult.PASS
async def test_integrity_check_error(coresys: CoreSys, install_addon_ssh):
"""Test integrity check with content trust issues."""
coresys.homeassistant.core.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
install_addon_ssh.check_trust = AsyncMock(side_effect=CodeNotaryUntrusted)
install_addon_ssh.data["codenotary"] = "test@example.com"
result = await coresys.security.integrity_check.__wrapped__(coresys.security)
assert result.core == ContentTrustResult.ERROR
assert result.supervisor == ContentTrustResult.ERROR
assert result.addons[install_addon_ssh.slug] == ContentTrustResult.ERROR
async def test_integrity_check_failed(coresys: CoreSys, install_addon_ssh):
"""Test integrity check with content trust failed."""
coresys.homeassistant.core.check_trust = AsyncMock(side_effect=CodeNotaryError)
coresys.supervisor.check_trust = AsyncMock(side_effect=CodeNotaryError)
install_addon_ssh.check_trust = AsyncMock(side_effect=CodeNotaryError)
install_addon_ssh.data["codenotary"] = "test@example.com"
result = await coresys.security.integrity_check.__wrapped__(coresys.security)
assert result.core == ContentTrustResult.FAILED
assert result.supervisor == ContentTrustResult.FAILED
assert result.addons[install_addon_ssh.slug] == ContentTrustResult.FAILED
async def test_integrity_check_addon(coresys: CoreSys, install_addon_ssh):
"""Test integrity check with content trust but no signed add-ons."""
coresys.homeassistant.core.check_trust = AsyncMock()
coresys.supervisor.check_trust = AsyncMock()
result = await coresys.security.integrity_check.__wrapped__(coresys.security)
assert result.core == ContentTrustResult.PASS
assert result.supervisor == ContentTrustResult.PASS
assert result.addons[install_addon_ssh.slug] == ContentTrustResult.UNTESTED


@@ -86,7 +86,6 @@ async def test_os_update_path(
    """Test OS upgrade path across major versions."""
    coresys.os._board = "rpi4"  # pylint: disable=protected-access
    coresys.os._version = AwesomeVersion(version)  # pylint: disable=protected-access

    await coresys.updater.fetch_data()
    assert coresys.updater.version_hassos == AwesomeVersion(expected)
@@ -105,7 +104,6 @@ async def test_delayed_fetch_for_connectivity(
        load_binary_fixture("version_stable.json")
    )
    coresys.websession.head = AsyncMock()
coresys.security.verify_own_content = AsyncMock()
    # Network connectivity change causes a series of async tasks to eventually do a version fetch
    # Rather than use some kind of sleep loop, set up a listener for the start of the fetch data job


@@ -1,128 +0,0 @@
"""Test CodeNotary."""
from __future__ import annotations
from dataclasses import dataclass
from unittest.mock import AsyncMock, Mock, patch
import pytest
from supervisor.exceptions import (
CodeNotaryBackendError,
CodeNotaryError,
CodeNotaryUntrusted,
)
from supervisor.utils.codenotary import calc_checksum, cas_validate
pytest.skip("code notary has been disabled due to issues", allow_module_level=True)
@dataclass
class SubprocessResponse:
"""Class for specifying subprocess exec response."""
returncode: int = 0
data: str = ""
error: str | None = None
exception: Exception | None = None
@pytest.fixture(name="subprocess_exec")
def fixture_subprocess_exec(request):
"""Mock subprocess exec with specific return."""
response = request.param
if response.exception:
communicate_return = AsyncMock(side_effect=response.exception)
else:
communicate_return = AsyncMock(return_value=(response.data, response.error))
exec_return = Mock(returncode=response.returncode, communicate=communicate_return)
with patch(
"supervisor.utils.codenotary.asyncio.create_subprocess_exec",
return_value=exec_return,
) as subprocess_exec:
yield subprocess_exec
def test_checksum_calc():
"""Calc Checkusm as test."""
assert calc_checksum("test") == calc_checksum(b"test")
assert (
calc_checksum("test")
== "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
)
async def test_valid_checksum():
"""Test a valid autorization."""
await cas_validate(
"notary@home-assistant.io",
"4434a33ff9c695e870bc5bbe04230ea3361ecf4c129eb06133dd1373975a43f0",
)
async def test_invalid_checksum():
"""Test a invalid autorization."""
with pytest.raises(CodeNotaryUntrusted):
await cas_validate(
"notary@home-assistant.io",
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
)
@pytest.mark.parametrize(
"subprocess_exec",
[SubprocessResponse(returncode=1, error=b"x is not notarized")],
)
async def test_not_notarized_error(subprocess_exec):
"""Test received a not notarized error response from command."""
with pytest.raises(CodeNotaryUntrusted):
await cas_validate(
"notary@home-assistant.io",
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
)
@pytest.mark.parametrize(
"subprocess_exec",
[
SubprocessResponse(returncode=1, error=b"test"),
SubprocessResponse(returncode=0, data='{"error":"asn1: structure error"}'),
SubprocessResponse(returncode=1, error="test".encode("utf-16")),
],
indirect=True,
)
async def test_cas_backend_error(subprocess_exec):
"""Test backend error executing cas command."""
with pytest.raises(CodeNotaryBackendError):
await cas_validate(
"notary@home-assistant.io",
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
)
@pytest.mark.parametrize(
"subprocess_exec",
[SubprocessResponse(returncode=0, data='{"status":1}')],
indirect=True,
)
async def test_cas_notarized_untrusted(subprocess_exec):
"""Test cas found notarized but untrusted content."""
with pytest.raises(CodeNotaryUntrusted):
await cas_validate(
"notary@home-assistant.io",
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
)
@pytest.mark.parametrize(
"subprocess_exec", [SubprocessResponse(exception=OSError())], indirect=True
)
async def test_cas_exec_os_error(subprocess_exec):
"""Test os error attempting to execute cas command."""
with pytest.raises(CodeNotaryError):
await cas_validate(
"notary@home-assistant.io",
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
)