Compare commits

170 Commits

Author SHA1 Message Date
dependabot[bot]
3b1b03c8a7
Bump dbus-fast from 2.44.1 to 2.44.2 (#6038)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.1 to 2.44.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.1...v2.44.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 16:06:19 -04:00
dependabot[bot]
680428f304
Bump sentry-sdk from 2.33.0 to 2.33.2 (#6037)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.0 to 2.33.2.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.0...2.33.2)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:44:35 -04:00
dependabot[bot]
f34128c37e
Bump ruff from 0.12.3 to 0.12.4 (#6031)
---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:43:56 -04:00
dependabot[bot]
2ed0682b34
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2 (#6032) 2025-07-18 10:00:58 +02:00
Stefan Agner
fbb0915ef8
Mark system as unhealthy if multiple OS installations are found (#6024)
* Add resolution check for duplicate OS installations

* Only create single issue/use separate unhealthy type

* Check MBR partition UUIDs as well

* Use partlabel

* Use generator to avoid code duplication

* Add list of devices, avoid unnecessary exception handling

* Run check only on HAOS

* Fix message formatting

* Fix and simplify pytests

* Fix UnhealthyReason sort order
2025-07-17 10:06:35 +02:00
Stefan Agner
780ae1e15c
Check for duplicate data disks only when the OS is available (#6025)
* Check for duplicate data disks only when the OS is available

Supervised installations do not have a specific data disk, so only
check for duplicate data disks on Home Assistant OS.

* Enable OS for multiple data disks check test
2025-07-16 10:43:15 +02:00
dependabot[bot]
c617358855
Bump orjson from 3.10.18 to 3.11.0 (#6028)
Bumps [orjson](https://github.com/ijl/orjson) from 3.10.18 to 3.11.0.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.10.18...3.11.0)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:24:34 +02:00
dependabot[bot]
b679c4f4d8
Bump sentry-sdk from 2.32.0 to 2.33.0 (#6027)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.32.0 to 2.33.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.32.0...2.33.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:20:28 +02:00
dependabot[bot]
c946c421f2
Bump debugpy from 1.8.14 to 1.8.15 (#6026)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.14 to 1.8.15.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.14...v1.8.15)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:19:44 +02:00
dependabot[bot]
aeabf7ea25
Bump blockbuster from 1.5.24 to 1.5.25 (#6020)
Bumps [blockbuster](https://github.com/cbornet/blockbuster) from 1.5.24 to 1.5.25.
- [Release notes](https://github.com/cbornet/blockbuster/releases)
- [Commits](https://github.com/cbornet/blockbuster/commits/v1.5.25)

---
updated-dependencies:
- dependency-name: blockbuster
  dependency-version: 1.5.25
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:18:57 +02:00
dependabot[bot]
365b838abf
Bump mypy from 1.16.1 to 1.17.0 (#6019)
Bumps [mypy](https://github.com/python/mypy) from 1.16.1 to 1.17.0.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.16.1...v1.17.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:08:57 +02:00
Stefan Agner
99c040520e
Drop ensure_builtin_repositories() (#6012)
* Drop ensure_builtin_repositories

With the new Repository classes we have the is_builtin property, so we
can easily make sure that built-ins are not removed. This allows us to
further cleanup the code by removing the ensure_builtin_repositories
function and the ALL_BUILTIN_REPOSITORIES constant.

* Make sure we add built-ins on load

* Reuse default set and avoid unnecessary copy

Reuse default set and avoid unnecessary copying during validation if
the default is not being used.
2025-07-14 22:19:06 +02:00
dependabot[bot]
eefe2f2e06
Bump aiohttp from 3.12.13 to 3.12.14 (#6014)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:43:55 +02:00
dependabot[bot]
a366e36b37
Bump ruff from 0.12.2 to 0.12.3 (#6016)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:19:08 +02:00
dependabot[bot]
27a2fde9e1
Bump astroid from 3.3.10 to 3.3.11 (#6017)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:18:54 +02:00
Stefan Agner
9a0f530a2f
Add Supervisor connectivity check after DNS restart (#6005)
* Add Supervisor connectivity check after DNS restart

When the DNS plug-in got restarted, check Supervisor connectivity
in case the DNS plug-in configuration change influenced Supervisor
connectivity. This is helpful when a DHCP server gets started after
Home Assistant is up. In that case the network provided DNS server
(local DNS server) becomes available after the DNS plug-in restart.

Without this change, the Supervisor connectivity will remain false
until a Job triggers a connectivity check, for example the periodic
update check by Core (which causes an updater and store reload).

* Fix pytest and add coverage for new functionality
2025-07-10 11:08:10 +02:00
Stefan Agner
baf9695cf7
Refactoring around add-on store Repository classes (#5990)
* Rename repository fixture to test_repository

Also don't remove the built-in repositories. The list was incomplete,
and tests don't seem to require that anymore.

* Get rid of StoreType

The type doesn't have much value, we have constant strings anyways.

* Introduce types.py

* Use slug to determine which repository urls to return

* Simplify BuiltinRepository enum

* Mock GitRepo load

* Improve URL handling and repository creation logic

* Refactor update_repositories

* Get rid of get_from_url

It is no longer used in production code.

* More refactoring

* Address pylint

* Introduce is_git_based property to Repository class

Return all git based URLs, including the Core repository.

* Revert "Introduce is_git_based property to Repository class"

This reverts commit dfd5ad79bf23e0e127fc45d97d6f8de0e796faa0.

* Fold type.py into const.py

Align more with how Supervisor code is typically structured.

* Update supervisor/store/__init__.py

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Apply repository remove suggestion

* Fix tests

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-10 11:07:53 +02:00
Stefan Agner
7873c457d5
Small improvement to Copilot instructions (#6011) 2025-07-10 11:05:59 +02:00
Stefan Agner
cbc48c381f
Return 401 Unauthorized when using json/url encoded auth fails (#5844)
When authentication using JSON payload or URL encoded payload fails,
use the generic HTTP response code 401 Unauthorized instead of 400
Bad Request.

This is a more appropriate response code for authentication errors
and is consistent with the behavior of other authentication methods.
2025-07-10 08:38:00 +02:00
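
A minimal sketch of the change described above, using aiohttp; `check_credentials()` is a hypothetical stand-in for Supervisor's real authentication backend:

```python
from aiohttp import web

async def check_credentials(username, password) -> bool:
    """Hypothetical stand-in for the real authentication backend."""
    return False

async def auth_handler(request: web.Request) -> web.Response:
    # Accept both JSON payloads and URL-encoded form payloads
    if request.content_type == "application/json":
        data = await request.json()
    else:
        data = await request.post()
    if not await check_credentials(data.get("username"), data.get("password")):
        # 401 Unauthorized rather than the previous 400 Bad Request
        raise web.HTTPUnauthorized()
    return web.Response()
```
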
Franck Nijhof
11e37011bd
Add Task issue form (#6007) 2025-07-09 16:58:10 +02:00
Franck Nijhof
cfda559a90
Adjust feature request links in issue reporting (#6009) 2025-07-09 16:44:35 +02:00
Mike Degatano
806bd9f52c
Apply store reload suggestion automatically on connectivity change (#6004)
* Apply store reload suggestion automatically on connectivity change

* Use sys_bus not coresys.bus

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-09 16:43:51 +02:00
Stefan Agner
953f7d01d7
Improve DNS plug-in restart (#5999)
* Improve DNS plug-in restart

Instead of simply going by PrimaryConnection changes, use the DnsManager
Configuration property. This property is ultimately used to write the
DNS plug-in configuration, so it is really the relevant information
we pass on to the plug-in.

* Check for changes and restart DNS plugin

* Check for changes in plug-in DNS

Cache last local (NetworkManager) provided DNS servers. Check against
this DNS server list when deciding when to restart the DNS plug-in.

* Check connectivity unthrottled in certain situations

* Fix pytest

* Fix pytest

* Improve test coverage for DNS plugins restart functionality

* Apply suggestions from code review

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Debounce local DNS changes and event based connectivity checks

* Remove connection check logic

* Remove unthrottled connectivity check

* Fix delayed call

* Store restart task and cancel in case a restart is running

* Improve DNS configuration change tests

* Remove stale code

* Improve DNS plug-in tests, less mocking

* Cover multiple private functions at once

Improve tests around notify_locals_changed() to cover multiple
functions at once.

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-09 11:35:03 +02:00
Felipe Santos
381e719a0e
Allow to force rebuild of add-ons (#6002) 2025-07-07 21:41:18 +02:00
Ruben van Dijk
296071067d
Fix multiple set-cookie headers with addons ingress (#5996) 2025-07-07 19:27:39 +02:00
dependabot[bot]
8336537f51
Bump types-docker from 7.1.0.20250523 to 7.1.0.20250705 (#6003)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 10:00:26 +02:00
Stefan Agner
5c90a00263
Force reload of /etc/resolv.conf on WebSession init (#6000) 2025-07-05 12:18:02 +02:00
dependabot[bot]
1f2bf77784
Bump coverage from 7.9.1 to 7.9.2 (#5992)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:54:36 +02:00
dependabot[bot]
9aa4f381b8
Bump ruff from 0.12.1 to 0.12.2 (#5993)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:47:35 +02:00
Mike Degatano
ae036ceffe
Don't backup uninstalled addons (#5988)
* Don't backup uninstalled addons

* Remove hash in backup
2025-07-04 07:05:53 +02:00
Stefan Agner
f0ea0d4a44
Add GitHub Copilot/Claude instruction (#5986)
* Add GitHub Copilot/Claude instruction

This adds an initial instruction file for GitHub Copilot and Claude
(CLAUDE.md symlinked to the same file).

* Add --ignore-missing-imports to mypy, add note to run pre-commit
2025-07-04 07:05:05 +02:00
Mike Degatano
abc44946bb
Refactor addon git repo (#5987)
* Refactor Repository into setup with inheritance

* Remove subclasses of GitRepo
2025-07-03 13:53:52 +02:00
dependabot[bot]
3e20a0937d
Bump cryptography from 45.0.4 to 45.0.5 (#5989)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.4 to 45.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.4...45.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-03 09:52:50 +02:00
Mike Degatano
6cebf52249
Store reset only deletes git cache after clone was successful (#5984)
* Store reset only deletes git cache after clone was successful

* Add test and fix fallback error handling

* Fix when lock is grabbed
2025-07-02 14:34:18 -04:00
Felipe Santos
bc57deb474
Use Docker BuildKit to build addons (#5974)
* Use Docker BuildKit to build addons

* Improve error message as suggested by CodeRabbit

* Fix container.remove() tests missing v=True

* Ignore squash rather than falling back to legacy builder

* Use version rather than tag to avoid confusion in run_command()

* Fix tests differently

* Use PropertyMock like other tests

* Restore position of fix_label fn

* Exempt addon builder image from unsupported checks

* Refactor tests

* Fix tests expecting wrong builder image

* Remove hardcoded paths

* Fix tests

* Remove get_addon_host_path() function

* Use docker buildx build rather than docker build

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-07-02 17:33:41 +02:00
Mike Degatano
38750d74a8
Refactor builtin repositories to enum (#5976) 2025-06-30 13:22:00 -04:00
Felipe Santos
d1c1a2d418
Fix docker.run_command() needing detach but not enforcing it (#5979)
* Fix `docker.run_command()` needing `detach` but not enforcing it

* Fix test
2025-06-30 16:09:19 +02:00
Felipe Santos
cf32f036c0
Fix docker_home_assistant_execute_command not honoring HA version (#5978)
* Fix `docker_home_assistant_execute_command` not honoring HA version

* Change variable name to image_with_tag

* Fix test
2025-06-30 16:08:05 +02:00
Felipe Santos
b8852872fe
Remove anonymous volumes when removing containers (#5977)
* Remove anonymous volumes when removing containers

* Add tests for docker.run_command()
2025-06-30 13:31:41 +02:00
dependabot[bot]
779f47e25d
Bump sentry-sdk from 2.31.0 to 2.32.0 (#5982)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-30 10:16:41 +02:00
dependabot[bot]
be8b36b560
Bump ruff from 0.12.0 to 0.12.1 (#5981)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 09:08:50 +02:00
dependabot[bot]
8378d434d4
Bump sentry-sdk from 2.30.0 to 2.31.0 (#5975)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.30.0 to 2.31.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.30.0...2.31.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 08:57:12 +02:00
Stefan Agner
0b79e09bc0
Add code documentation for Jobs decorator (#5965)
Add basic code documentation to the Jobs decorator.
2025-06-24 15:48:04 +02:00
Stefan Agner
d747a59696
Fix CLI/Observer access token property (#5973)
The token_validation() code in the security middleware potentially
accesses the access token property before the Supervisor starts the
CLI/Observer plugins, which leads to a KeyError when trying to access
the `access_token` property. This change ensures that no KeyError is
raised and that None is returned instead.
2025-06-24 12:10:36 +02:00
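
A minimal sketch of the pattern described above — returning None via dict.get() instead of letting a KeyError escape; the class and field names are illustrative, not Supervisor's actual ones:

```python
class PluginBase:
    """Illustrative only; the real plugin class lives in Supervisor."""

    def __init__(self) -> None:
        # Populated only once the plugin has been started by Supervisor
        self._data: dict[str, str] = {}

    @property
    def access_token(self) -> str | None:
        # dict.get() yields None before the plugin generated a token,
        # where self._data["access_token"] would raise a KeyError.
        return self._data.get("access_token")
```
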
Mike Degatano
3ee7c082ec
Add mypy to ci and precommit (#5969)
* Add mypy to ci and precommit

* Run precommit mypy in venv

* Fix issues raised in latest version of mypy
2025-06-24 11:48:03 +02:00
dependabot[bot]
3f921e50b3
Bump getsentry/action-release from 3.1.2 to 3.2.0 (#5972)
Bumps [getsentry/action-release](https://github.com/getsentry/action-release) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/getsentry/action-release/releases)
- [Changelog](https://github.com/getsentry/action-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/action-release/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: getsentry/action-release
  dependency-version: 3.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:27 +02:00
dependabot[bot]
0370320f75
Bump sigstore/cosign-installer from 3.9.0 to 3.9.1 (#5971)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.0 to 3.9.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.9.0...v3.9.1)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:19 +02:00
Stefan Agner
1e19e26ef3
Update request feature link (#5968)
Feature requests are now collected using the org-wide GitHub Community.
Update the link accordingly.

While at it, also remove the unused ISSUE_TEMPLATE.md and align the
issue creation title with the one used in Home Assistant Core's
template.
2025-06-23 13:00:55 +02:00
Stefan Agner
e1a18eeba8
Use aiodns explicit close method (#5966) 2025-06-23 10:13:43 +02:00
Stefan Agner
b030879efd
Rename detect-blocking-io API value to match other APIs (#5964)
* Rename detect-blocking-io API value to match other APIs

For the new detect-blocking-io option, use dashes instead of
underscores in `on-at-startup` for consistency with other API
endpoints.

This is a breaking change, but since the API is really new and not
really used yet, it is fairly safe to do so.

* Fix pytest
2025-06-20 12:52:12 +02:00
dependabot[bot]
dfa1602ac6
Bump getsentry/action-release from 3.1.1 to 3.1.2 (#5963)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 10:33:47 +02:00
dependabot[bot]
bbda943583
Bump urllib3 from 2.4.0 to 2.5.0 (#5962)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 10:33:33 +02:00
Mike Degatano
aea15b65b7
Fix mypy issues in store, utils and all other source files (#5957)
* Fix mypy issues in store module

* Fix mypy issues in utils module

* Fix mypy issues in all remaining source files

* Fix ingress user typeddict

* Fixes from feedback

* Fix mypy issues after installing docker-types
2025-06-18 12:40:12 -04:00
dependabot[bot]
5c04249e41
Bump pytest from 8.4.0 to 8.4.1 (#5960)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.4.0 to 8.4.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.4.0...8.4.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 8.4.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 15:43:22 +02:00
dependabot[bot]
456cec7ed1
Bump ruff from 0.11.13 to 0.12.0 (#5959)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.13 to 0.12.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.13...0.12.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 12:06:45 +02:00
dependabot[bot]
52a519e55c
Bump sigstore/cosign-installer from 3.8.2 to 3.9.0 (#5958)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 3.9.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.8.2...v3.9.0)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 10:57:20 +02:00
Stefan Agner
fcb20d0ae8
Remove bug label from issue template (#5955)
Don't label new issues with the bug label by default. We started making
use of issue types, so if anything, this should be type "Bug". However,
we prefer to leave the type unspecified until the issue has been triaged.
2025-06-17 13:10:52 +02:00
Stefan Agner
9b3f2b17bd
Remove AES cipher from backup (#5954)
AES cipher is no longer needed since Docker repository authentication
has been removed from backups in #5605.
2025-06-16 20:14:21 +02:00
Stefan Agner
3d026b9534
Expose machine ID (#5953)
Expose the unique machine ID of the local system via the Supervisor
API. This allows identifying a particular machine across reboots,
backup restores and updates. The machine ID is a stable identifier
that does not change unless the underlying hardware is changed or
the operating system is reinstalled.
2025-06-16 20:14:13 +02:00
Mike Degatano
0e8ace949a
Fix mypy issues in plugins and resolution (#5946)
* Fix mypy issues in plugins

* Fix mypy issues in resolution module

* fix misses in resolution check

* Fix signatures on evaluate methods

* nitpick fix suggestions
2025-06-16 14:12:47 -04:00
Stefan Agner
1fe6f8ad99
Bump cosign to v2.4.3 (#5945)
Follow the builder bump of 2025.02.0 and use cosign v2.4.3 for
Supervisor too.
2025-06-16 20:12:27 +02:00
dependabot[bot]
9ef2352d12
Bump sentry-sdk from 2.29.1 to 2.30.0 (#5947)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.29.1 to 2.30.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.29.1...2.30.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.30.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 11:48:34 +02:00
dependabot[bot]
2543bcae29
Bump aiohttp from 3.12.12 to 3.12.13 (#5952)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 11:48:20 +02:00
dependabot[bot]
ad9de9f73c
Bump coverage from 7.9.0 to 7.9.1 (#5949)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.9.0 to 7.9.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.9.0...7.9.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:26:18 +02:00
dependabot[bot]
a5556651ae
Bump pytest-cov from 6.2.0 to 6.2.1 (#5948)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.2.0 to 6.2.1.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.2.0...v6.2.1)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 6.2.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:26:05 +02:00
dependabot[bot]
ac28deff6d
Bump aiodns from 3.4.0 to 3.5.0 (#5951)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/aio-libs/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/v3.4.0...v3.5.0)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:06:18 +02:00
Mike Degatano
82ee4bc441
Fix mypy issues in misc, mounts and os modules (#5942)
* Fix mypy errors in misc and mounts

* Fix mypy issues in os module

* Fix typing of capture_exception

* avoid unnecessary property call

* Fixes from feedback
2025-06-12 18:06:57 -04:00
Stefan Agner
bdbd09733a
Avoid aiodns resolver memory leak (#5941)
* Avoid aiodns resolver memory leak

In certain cases, the aiodns resolver can leak memory. This can also
lead to a fatal `Python error… ffi.from_handle()`. This addresses
the issue by ensuring that the resolver is properly closed
when it is no longer needed.

* Address coderabbitai feedback

* Fix pytest

* Fix pytest
2025-06-12 11:32:53 +02:00
David Rapan
d5b5a328d7
feat: Add opt-in IPv6 for containers (#5879)
Configurable and w/ migrations between IPv4-Only and Dual-Stack

Signed-off-by: David Rapan <david@rapan.cz>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-06-12 11:32:24 +02:00
dependabot[bot]
52b24e177f
Bump coverage from 7.8.2 to 7.9.0 (#5944)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.8.2 to 7.9.0.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.8.2...7.9.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 09:27:56 +02:00
dependabot[bot]
e10c58c424
Bump pytest-cov from 6.1.1 to 6.2.0 (#5943)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.1.1 to 6.2.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.1.1...v6.2.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 09:27:44 +02:00
Mike Degatano
9682870c2c
Fix mypy issues in host and jobs (#5939)
* Fix mypy issues in host

* Fix mypy issues in job module

* Fix mypy issues introduced in previously fixed modules

* Apply suggestions from code review

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-06-11 12:04:25 -04:00
Stefan Agner
fd0b894d6a
Fix dynamic port pytest (#5940) 2025-06-11 15:10:31 +02:00
dependabot[bot]
697515b81f
Bump aiohttp from 3.12.9 to 3.12.12 (#5937)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 11:52:14 +02:00
dependabot[bot]
d912c234fa
Bump requests from 2.32.3 to 2.32.4 (#5935)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 10:53:45 +02:00
dependabot[bot]
e8445ae8f2
Bump cryptography from 45.0.3 to 45.0.4 (#5936)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 09:23:40 +02:00
dependabot[bot]
6710439ce5
Bump ruff from 0.11.12 to 0.11.13 (#5932)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 00:41:04 -05:00
dependabot[bot]
95eec03c91
Bump aiohttp from 3.12.6 to 3.12.9 (#5930) 2025-06-05 07:43:55 +01:00
dependabot[bot]
9b686a2d9a
Bump pytest from 8.3.5 to 8.4.0 (#5929)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 10:23:35 +02:00
dependabot[bot]
063d69da90
Bump aiohttp from 3.12.4 to 3.12.6 (#5928)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 12:35:37 -05:00
dependabot[bot]
baaf04981f
Bump awesomeversion from 24.6.0 to 25.5.0 (#5926)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 01:23:53 -05:00
dependabot[bot]
bdb25a7ff8
Bump ruff from 0.11.11 to 0.11.12 (#5927)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 01:23:06 -05:00
Jan Čermák
ad2d6a3156
Revert "Do not backup add-on being uninstalled (#5917)" (#5925)
This reverts commit 63fde3b4109310e95ebdcc8e3c23a04ff96ba592.

This change introduced another, more severe regression, causing all
add-ons that haven't been started since Supervisor startup to produce
errors during their backup. A more sophisticated check would have to be
implemented to address edge cases during backups for non-existing
add-ons (or their config, actually).

Fixes #5924
2025-05-29 17:32:51 +02:00
Stefan Agner
42f885595e
Avoid early DNS plug-in start (#5922)
* Avoid early DNS plug-in start

A connectivity check can potentially be triggered before the DNS
plug-in is loaded. Avoid calling restart on the DNS plug-in before it
has initially been loaded. This prevents starting before attaching.
Attaching makes sure that the DNS plug-in container is recreated
before the DNS plug-in is initially started, which is needed e.g. by
a potential hassio network configuration change (such as the migration
required to enable/disable IPv6 on the hassio network, see #5879).

* Mock DNS plug-in running
2025-05-29 11:49:19 +02:00
Stefan Agner
2a88cb9339
Improve Supervisor startup error handling (#5918)
Instead of starting a task in the background, synchronously wait for
the Supervisor start sequence to complete. This should be functionally
equivalent, as we would loop forever in the event loop just afterwards
anyway.

The advantage is that we now can catch any exceptions during the
start sequence and report any errors with critical logging to report
those to Sentry, if enabled. It also avoids "Task exception was never
retrieved" errors. Reporting errors is especially important since we
can't use the asyncio Sentry integration (see #5729 for details).

Also handle early add-on start errors just like other add-on start
errors (make sure the finally block is executed as well). And finally,
register signal handlers synchronously. There is no real benefit in
doing them asynchronously, and it avoids a potential race condition.
2025-05-29 11:42:28 +02:00
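
A rough sketch of the startup pattern described above, with `start_supervisor()` as a hypothetical stand-in for the real start sequence:

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)

async def start_supervisor() -> None:
    """Hypothetical stand-in for the Supervisor start sequence."""

def main() -> None:
    loop = asyncio.new_event_loop()
    try:
        # Wait synchronously instead of scheduling a background task, so
        # any exception from the start sequence surfaces right here ...
        loop.run_until_complete(start_supervisor())
    except Exception:  # pylint: disable=broad-except
        # ... and can be reported with critical logging (e.g. to Sentry).
        _LOGGER.critical("Fatal error during Supervisor start", exc_info=True)
    # Afterwards we would loop forever in the event loop anyway.
    loop.run_forever()
```
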
Jan Čermák
4d1a5e2dc2
Use journal-gatewayd's new /boots endpoint to list boots (#5914)
* Use journal-gatewayd's new /boots endpoint to list boots

Current method we use for getting boots has several known downsides, for
example it can miss some incomplete boots and the performance might be
worse than what we could get by using Systemd directly. Systemd was
missing a method to get list boots through the journal-gatewayd but that
should be addressed by the new /boots endpoint added in [1] which
returns application/json-seq response containing all boots as reported
in `journalctl --list-boots`.

Implement Supervisor methods to parse this format and use the endpoint
at first, falling back to the old method if it fails.

[1] https://github.com/systemd/systemd/pull/37574

* Log info instead of warning when /boots is not present

Co-authored-by: Stefan Agner <stefan@agner.ch>

* Split records only by RS instead of LF in journal_boots_reader

* Strip only RS, json.loads is fine with whitespace

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-05-29 11:41:23 +02:00
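
A minimal sketch of parsing such an application/json-seq (RFC 7464) response, splitting records only on the RS byte as the commit notes describe; the record fields shown are illustrative:

```python
import json

RS = "\x1e"  # ASCII record separator used by application/json-seq (RFC 7464)

def parse_json_seq(payload: str) -> list[dict]:
    """Split records only on RS; json.loads() is fine with the trailing
    whitespace/newline, so nothing else needs to be stripped."""
    return [json.loads(record) for record in payload.split(RS) if record.strip()]

# Illustrative input shaped like `journalctl --list-boots` entries
raw = '\x1e{"index": -1, "boot_id": "abc"}\n\x1e{"index": 0, "boot_id": "def"}\n'
print(parse_json_seq(raw))
```
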
dependabot[bot]
705e76abe3
Bump aiohttp from 3.12.2 to 3.12.4 (#5923)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-29 00:42:40 -05:00
Jan Čermák
7f54383147
Revert "Use s6-overlay read-only mode by default (#5906)" (#5921) 2025-05-27 20:00:22 +02:00
Stefan Agner
63fde3b410
Do not backup add-on being uninstalled (#5917) 2025-05-27 14:00:54 +02:00
dependabot[bot]
5285e60cd3
Bump setuptools from 80.8.0 to 80.9.0 (#5919)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.8.0 to 80.9.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.8.0...v80.9.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 12:13:06 +02:00
J. Nick Koston
2a1e32bb36
Bump aiohttp to 3.12.2 (#5915) 2025-05-27 09:03:34 +02:00
dependabot[bot]
a2251e0729
Bump cryptography from 45.0.2 to 45.0.3 (#5912)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-26 17:02:04 +02:00
dependabot[bot]
1efee641ba
Bump coverage from 7.8.1 to 7.8.2 (#5913) 2025-05-26 08:41:06 +02:00
Stefan Agner
bbb8fa0b92
Ignore missing backup file on error (#5910)
When a backup error occurs, it might be that the backup file hasn't
been created yet, e.g. when there is no space or no permission on
the target backup directory. Deleting the backup file would fail
in this case. Use missing_ok instead to ignore a missing backup file
on delete.
2025-05-23 14:29:36 +02:00
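
The pattern boils down to `Path.unlink(missing_ok=True)`; a minimal sketch with an illustrative helper name:

```python
from pathlib import Path

def cleanup_backup_file(tar_file: Path) -> None:
    """Remove a (possibly never created) backup file after an error."""
    # missing_ok=True ignores a missing file if the backup was never
    # written, e.g. due to missing space or permissions on the target.
    tar_file.unlink(missing_ok=True)
```
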
Stefan Agner
7593f857e8
Fix add-on config parse messages (#5909)
With #5897 we renamed addon to addon_config and vice versa. The log
messages were still using the previous variable names, leading to an
UnboundLocalError.

Fix the log messages to use the correct variable names.
2025-05-23 14:29:28 +02:00
Stefan Agner
87232cf1e4
Enable debug logging early (#5908)
Set logging level early in the bootstrap process so we can use debug
level messages in the early stages of the Supervisor.
2025-05-23 12:03:32 +02:00
dependabot[bot]
9e6a4d65cd
Bump ruff from 0.11.10 to 0.11.11 (#5907)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.10 to 0.11.11.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.10...0.11.11)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.11.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-23 10:35:24 +02:00
Stefan Agner
c80fbd77c8
Use s6-overlay read-only mode by default (#5906)
To avoid accidental writes to the Supervisor root filesystem, we might
use the Docker read-only mode at one point. This is not yet the default,
but using s6-overlay with the read-only flag seems not to have any
downsides, so enable this by default.

To start Supervisor with a read-only root file system, the following
arguments have to be used: `--read-only --tmpfs /run:exec`.
2025-05-22 17:30:42 +02:00
dependabot[bot]
a452969ffe
Bump coverage from 7.8.0 to 7.8.1 (#5905)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.8.0 to 7.8.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.8.0...7.8.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.8.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-22 09:45:52 +02:00
Stefan Agner
89fa5c9c7a
Avoid initializing Blockbuster on Supervisor info call (#5901)
* Avoid initializing Blockbuster on Supervisor info call

Instead of creating an instance of Blockbuster simply to check if
Blockbuster is enabled, use a global variable to store the Blockbuster
instance and only initialize it when needed. This avoids unnecessary
initialization of Blockbuster when it is not required.

* Update supervisor/utils/blockbuster.py

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

* Fix merge and rename singleton class to BlockBusterManager

* Fix pytest

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-05-21 15:06:46 +02:00
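
A minimal sketch of the lazy-initialization idea, assuming the blockbuster package's `BlockBuster` class; the manager shown here is illustrative, not Supervisor's actual implementation:

```python
from blockbuster import BlockBuster  # third-party blocking-call detector

class BlockBusterManager:
    """Illustrative singleton holder for a single BlockBuster instance."""

    _instance: BlockBuster | None = None

    @classmethod
    def is_enabled(cls) -> bool:
        # An info call can answer without constructing BlockBuster at all
        return cls._instance is not None

    @classmethod
    def activate(cls) -> None:
        # Initialize lazily, only when detection is actually requested
        if cls._instance is None:
            cls._instance = BlockBuster()
        cls._instance.activate()
```
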
Stefan Agner
73069b628e
Bump pre-commit ruff to 0.11.10 (#5904)
Bump pre-commit ruff to 0.11.10 and address current issues.
2025-05-21 15:06:32 +02:00
Stefan Agner
8251b6c61c
Process NetworkManager PrimaryConnection changes (#5903)
Process NetworkManager interface updates in case PrimaryConnection
changes. This makes sure that the /network/interface/default/info
endpoint can be used to get the IP address of the primary interface.
2025-05-21 13:50:46 +02:00
Stefan Agner
1faf529b42
Use add-on config timestamp to determine add-on update age (#5897)
* Use add-on config timestamp to determine add-on update age

Instead of using the current timestamp when loading the add-on config,
simply use the add-on config modification timestamp. This way, we can
get a timestamp even after Supervisor got restarted. It also simplifies
the code a bit.

* Fix pytest

* Patch stat() instead of modifying fixture files
2025-05-21 13:46:20 +02:00
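
A small sketch of the idea, using the config file's `st_mtime` so the age survives Supervisor restarts; the helper name is illustrative:

```python
import time
from pathlib import Path

def addon_update_age_days(config_file: Path) -> float:
    """Days since the add-on config was last modified.

    st_mtime is persisted on disk, so the value survives Supervisor
    restarts, unlike a timestamp taken when the config was loaded.
    """
    return (time.time() - config_file.stat().st_mtime) / 86400
```
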
dependabot[bot]
86c016b35d
Bump setuptools from 80.7.1 to 80.8.0 (#5902)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.7.1 to 80.8.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.7.1...v80.8.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-21 09:19:20 +02:00
Stefan Agner
4f35759fe3
Stop reading advanced logs on ConnectionError (#5900)
* Stop reading advanced logs on ConnectionError

If the client-side connection closes with a `ConnectionError`, stop
reading the advanced logs.

This is very similar to ClientConnectionResetError, which is easily
reproducible by having a log open and followed in the browser and
then restarting Home Assistant. So far I wasn't able to artificially
reproduce the ConnectionError, but there are quite a few reports on
Sentry, so it seems to happen in the real world.

* Warn on ConnectionError
2025-05-20 17:04:25 +02:00
David Rapan
3b575eedba
Add IPv6 address generation mode & privacy extensions (#5892)
* feat: Add IPv6 address generation mode & privacy extensions

Signed-off-by: David Rapan <david@rapan.cz>

* Use NetworkManager fixture for settings init tests

This fixes the test, since the extended implementation can now read
the version of NetworkManager.

* Add pytest for addr_gen_mode

---------

Signed-off-by: David Rapan <david@rapan.cz>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-05-20 17:03:08 +02:00
Stefan Agner
6e6fe5ba39
Trigger auto-update through Core WebSocket call (#5896)
* Trigger auto-update through Core WebSocket call

Instead of auto-updating add-ons on the Supervisor side, trigger an
update through Core via a WebSocket command. This makes sure that the backup
is categorized correctly and all backup features like retention are
applied.

* Add pytest

* Fix pytest

* Fix pytest

* Fix pytest

* Fix pytest

* Fix pytest cleaner

* Set timestamp of add-on far into the past
2025-05-20 15:18:37 +02:00
Stefan Agner
b5a7e521ae
Copy additional backup locations in jobs (#5890)
Instead of copying the backups in the main job, let's copy them in a
separate job per location. This allows using the same backup error
handling mechanism as for add-ons and folders.

This makes the stage introduced in #5784 somewhat redundant, but
before removing it, let's see if this approach works out.
2025-05-20 15:18:23 +02:00
Stefan Agner
bac7c21fe8
Fix container image detection for aarch64 (#5898) 2025-05-20 10:24:27 +02:00
dependabot[bot]
2eb9ec20d6
Bump sentry-sdk from 2.28.0 to 2.29.1 (#5899)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.28.0 to 2.29.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.28.0...2.29.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.29.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-20 09:14:42 +02:00
dependabot[bot]
406348c068
Bump cryptography from 44.0.3 to 45.0.2 (#5895)
Bumps [cryptography](https://github.com/pyca/cryptography) from 44.0.3 to 45.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/44.0.3...45.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.2
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 15:05:30 +02:00
dependabot[bot]
5e3f4e8ff3
Bump ruff from 0.11.9 to 0.11.10 (#5894)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-16 10:14:16 +02:00
dependabot[bot]
31a67bc642
Bump codecov/codecov-action from 5.4.2 to 5.4.3 (#5893)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-16 10:13:58 +02:00
Stefan Agner
d0d11db7b1
Harmonize folder and add-on backup error handling (#5885)
* Harmonize folder and add-on backup error handling

Align add-on and folder backup error handling in that in both cases
errors are recorded on the respective backup Jobs, but not raised to
the caller. This allows the backup to complete successfully even if
some add-ons or folders fail to back up.

Along with this, also record errors in the per-add-on and per-folder
backup jobs, as well as the add-on and folder root job.

And finally, align the exception handling to only catch expected
exceptions for add-ons too.

* Fix pytest
2025-05-15 10:14:35 +02:00
dependabot[bot]
cbf4b4e27e
Bump setuptools from 80.4.0 to 80.7.1 (#5889)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.4.0 to 80.7.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.4.0...v80.7.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.7.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-15 09:44:02 +02:00
Stefan Agner
c855eaab52
Delete Backup files on error (#5880) 2025-05-13 20:51:16 +02:00
Stefan Agner
6bac751c4c
Log DNS resolver initialization errors with critical severity (#5884)
To make sure we learn about DNS resolver initialization errors, let's
log them with critical severity. This was the original intention of
PR #5882.
2025-05-13 14:42:59 +02:00
Stefan Agner
da0ae75e8e
Fallback to threaded resolver in case AsyncResolver fails (#5882)
In case the c-ares based AsyncResolver fails to initialize, let's
fallback to the threaded resolver. This would have helped for aiodns
3.3.0 issue when clients were unable to allocate more inotify watches.

This is fixed in aiodns 3.4.0, but we should still fallback to the
threaded resolver as a precautionary measure.
2025-05-13 12:37:35 +02:00
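
A minimal sketch of such a fallback using aiohttp's resolver classes (the resolver names are aiohttp's; the surrounding function is illustrative, and `AsyncResolver` requires aiodns to be installed):

```python
from aiohttp.abc import AbstractResolver
from aiohttp.resolver import AsyncResolver, ThreadedResolver

async def make_resolver() -> AbstractResolver:
    """Prefer the c-ares based resolver; fall back to threads on failure."""
    try:
        return AsyncResolver()
    except Exception:  # e.g. c-ares failing to allocate inotify watches
        return ThreadedResolver()

# Usage sketch (inside a running event loop):
# connector = aiohttp.TCPConnector(resolver=await make_resolver())
# session = aiohttp.ClientSession(connector=connector)
```
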
dependabot[bot]
154aeaee87
Bump sentry-sdk from 2.27.0 to 2.28.0 (#5881) 2025-05-13 08:20:44 +02:00
Stefan Agner
b9bbb99f37
Fix pytests to make them run in isolation (#5878) 2025-05-12 12:37:09 +02:00
dependabot[bot]
ff849ce692
Bump astroid from 3.3.9 to 3.3.10 (#5875)
Bumps [astroid](https://github.com/pylint-dev/astroid) from 3.3.9 to 3.3.10.
- [Release notes](https://github.com/pylint-dev/astroid/releases)
- [Changelog](https://github.com/pylint-dev/astroid/blob/main/ChangeLog)
- [Commits](https://github.com/pylint-dev/astroid/compare/v3.3.9...v3.3.10)

---
updated-dependencies:
- dependency-name: astroid
  dependency-version: 3.3.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-12 10:05:17 +02:00
dependabot[bot]
24456efb6b
Bump setuptools from 80.3.1 to 80.4.0 (#5876)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.3.1 to 80.4.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.3.1...v80.4.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-12 10:05:01 +02:00
dependabot[bot]
0cd9d04e63
Bump ruff from 0.11.8 to 0.11.9 (#5877) 2025-05-12 09:19:32 +02:00
Stefan Agner
39bd20c0e7
Handle non-existing addon config dir (#5871)
* Handle non-existing addon config dir

Since users have access to the root of all add-on config directories,
they can delete the directory of an add-ons at any time. Hence we need
to handle gracefully if it doesn't exist anymore.

* Add pytest
2025-05-09 11:07:22 +02:00
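
A sketch of the kind of guard described above, with an illustrative helper: treat a user-deleted config directory as empty instead of raising:

```python
from pathlib import Path

def addon_config_files(path_config: Path) -> list[Path]:
    """List config files, treating a user-deleted directory as empty."""
    if not path_config.is_dir():
        # Users can remove any add-on config directory at any time
        return []
    return [p for p in path_config.rglob("*") if p.is_file()]
```
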
dependabot[bot]
481bbc5be8
Bump aiodns from 3.3.0 to 3.4.0 (#5870)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-09 09:50:58 +02:00
Stefan Agner
36da382af3
Handle ClientPayloadError in advanced logging (#5869)
When the systemd-journal-gatewayd service is being shut down while
Supervisor is still trying to read logs, aiohttp throws a
ClientPayloadError, presumably because we try to read until the next
linefeed, which aiohttp cannot satisfy anymore.

Simply catch the exception just like the connection reset errors
previously in #5358 and #5715.
2025-05-06 20:33:05 +02:00
Stefan Agner
85f8107b60
Recreate aiohttp ClientSession after DNS plug-in load (#5862)
* Recreate aiohttp ClientSession after DNS plug-in load

Create a temporary ClientSession early in case we need to load version
information from the internet. This doesn't use the final DNS setup
and hence might fail to load in certain situations, since we don't have
the fallback mechanisms in place yet. But if the DNS container image
is present, we'll continue the setup and load the DNS plug-in. We can
then recreate the ClientSession such that it uses the DNS plug-in.

This works around an issue with aiodns, which today doesn't reload
`resolv.conf` automatically when it changes. This led to Supervisor
using the initial `resolv.conf` as created by Docker. It meant that
we did not use the DNS plug-in (and its fallback capabilities) in
Supervisor. It also meant that changes to the DNS setup at runtime
did not propagate to the aiohttp ClientSession (as observed in #5332).

* Mock aiohttp.ClientSession for all tests

Currently in several places pytest actually uses the aiohttp
ClientSession and reaches out to the internet. This is not ideal
for unit tests and should be avoided.

This creates several new fixtures to aid this effort: The `websession`
fixture simply returns a mocked aiohttp.ClientSession, which can be
used whenever a function is tested which needs the global websession.

A separate new fixture to mock the connectivity check named
`supervisor_internet` since this is often used through the Job
decorator which require INTERNET_SYSTEM.

And the `mock_update_data` uses the already existing update json
test data from the fixture directory instead of loading the data
from the internet.

* Log ClientSession nameserver information

When recreating the aiohttp ClientSession, log information what
nameservers exactly are going to be used.

* Refuse ClientSession initialization when API is available

Previous attempts to reinitialize the ClientSession have shown
use of the ClientSession after it was closed due to API requests
being handled in parallel to the reinitialization (see #5851).
Make sure this is not possible by refusing to reinitialize the
ClientSession when the API is available.

* Fix pytests

Also make sure we don't create aiohttp ClientSession objects unnecessarily.

* Apply suggestions from code review

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-05-06 16:23:40 +02:00
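
A rough sketch of the recreation step under the constraints described above; the function name and the nameserver handover are illustrative (aiohttp's `AsyncResolver` accepts a `nameservers` argument):

```python
import aiohttp
from aiohttp.resolver import AsyncResolver

async def recreate_websession(
    old: aiohttp.ClientSession | None, nameservers: list[str]
) -> aiohttp.ClientSession:
    """Replace the global ClientSession once the DNS plug-in is loaded.

    Must not run while the API can still issue requests against the old
    session, otherwise requests may hit an already-closed session.
    """
    if old is not None:
        await old.close()
    connector = aiohttp.TCPConnector(resolver=AsyncResolver(nameservers=nameservers))
    return aiohttp.ClientSession(connector=connector)
```
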
dependabot[bot]
2e44e6494f
Bump pytest-timeout from 2.3.1 to 2.4.0 (#5868) 2025-05-06 09:00:11 +02:00
dependabot[bot]
cd1cc66c77
Bump cryptography from 44.0.2 to 44.0.3 (#5866)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 02:07:51 -05:00
dependabot[bot]
b76a1f58ea
Bump setuptools from 80.1.0 to 80.3.1 (#5867)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 01:50:41 -05:00
dependabot[bot]
3fcd254d25
Bump pylint from 3.3.6 to 3.3.7 (#5865)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 01:49:50 -05:00
dependabot[bot]
3dff2abe65
Bump aiodns from 3.2.0 to 3.3.0 (#5864)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 01:07:22 -05:00
dependabot[bot]
ba91be1367
Bump ruff from 0.11.7 to 0.11.8 (#5863) 2025-05-02 09:40:45 +02:00
dependabot[bot]
25f93cd338
Bump setuptools from 80.0.1 to 80.1.0 (#5861)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.0.1 to 80.1.0.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.0.1...v80.1.0)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-01 11:07:40 +02:00
Stefan Agner
9b0044edd6
Avoid using host system socket for systemd journald tests (#5858)
Similar to #5825, make sure we mock the systemd journal gateway socket
for tests. This makes the tests work on systems that have
systemd-journal-gatewayd installed.
2025-04-30 19:59:09 +02:00
Stefan Agner
9915c21243
Check local store repository for changes (#5845)
* Check local store repository for changes

Instead of simply assuming that the local store repository changed,
use mtime to check if there have been any changes to the local store.
This mimics the behavior of the git repository store updates.

Before this change, we ended up in the updated-repo code path, which
caused a re-read of all add-ons on every store reload, even though
nothing had changed at all. Store reloads are triggered by Home
Assistant Core every 5 minutes.

* Fix pytest failure

Now that we actually only reload metadata if the local store changed,
we have to fake the change as well to fix the store manager tests.

* Fix path cache update test for local store repository

* Take root directory into account/add pytest

* Rename utils/__init__.py tests to test_utils_init.py
2025-04-30 11:13:24 +02:00
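
A minimal sketch of an mtime-based change check over a store directory, taking the root directory itself into account as the last commit bullet notes; the helpers are illustrative:

```python
from pathlib import Path

def local_store_mtime(path: Path) -> float:
    """Newest modification time below the store root, root included."""
    mtimes = [path.stat().st_mtime]
    mtimes.extend(p.stat().st_mtime for p in path.rglob("*"))
    return max(mtimes)

def store_changed(path: Path, cached_mtime: float | None) -> bool:
    """True when the local store was modified since the cached check."""
    return cached_mtime is None or local_store_mtime(path) > cached_mtime
```
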
dependabot[bot]
657cb56fb9
Bump orjson from 3.10.16 to 3.10.18 (#5855)
Bumps [orjson](https://github.com/ijl/orjson) from 3.10.16 to 3.10.18.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.10.16...3.10.18)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.10.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-30 10:20:26 +02:00
dependabot[bot]
1b384cebc9
Bump setuptools from 80.0.0 to 80.0.1 (#5856)
Bumps [setuptools](https://github.com/pypa/setuptools) from 80.0.0 to 80.0.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.0.0...v80.0.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 80.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-30 10:20:17 +02:00
Jan Čermák
61089c3507
Bump uv to 0.6.17 (#5854) 2025-04-29 16:57:48 +02:00
Stefan Agner
bc9e3eb95b
Fix race condition when removing add-on (#5850)
When uninstalling an add-on, we schedule a task to reload the ingress
tokens. This scheduled task typically ends up running right after
clearing the add-on data with `self.sys_addons.data.uninstall(self)`
(since this task is doing I/O, the race is rather deterministic).

Let's make sure we reload the ingress tokens at the end. Also simply
execute the reload synchronously since this is a rather quick operation,
which makes sure that errors get attributed to the right add-on
uninstall operation.
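
The resulting ordering, sketched (attribute names follow the Supervisor
conventions, but this is not the actual method):

```python
class AddonUninstallSketch:
    """Illustrative ordering only."""

    async def uninstall(self) -> None:
        # Remember whether ingress cleanup is needed before data is cleared
        need_ingress_token_cleanup = self.with_ingress
        # ... stop container, remove discovery data and services ...
        self.sys_addons.local.pop(self.slug)
        await self.sys_addons.data.uninstall(self)
        if need_ingress_token_cleanup:
            # Reload synchronously so errors attribute to this uninstall
            await self.sys_ingress.reload()
```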
2025-04-29 16:14:33 +02:00
Stefan Agner
c1b45406d6
Improve backup upload location determination (#5848)
* Improve backup upload location determination

For local backup upload locations, check if the location is on the same
file system and thus allows moving the backup file after upload. This
enables custom backup mounts. Currently there is no documented,
persistent way to create such mounts with Home Assistant OS
installations, but since we might add local mounts in the future this
seems a worthwhile addition.
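
A common way to test this, sketched with stdlib calls (not the
Supervisor helper):

```python
from pathlib import Path


def same_filesystem(a: Path, b: Path) -> bool:
    """True when both paths live on the same device, so a cheap rename/move works."""
    return a.stat().st_dev == b.stat().st_dev
```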

Fixes: #5837

* Fix pytests
2025-04-29 16:14:20 +02:00
Stefan Agner
8e714072c2
Avoid reading add-ons twice unnecessarily (#5846)
So far, a store reload led to a reload of all add-ons twice, usually
causing two messages in quick succession:
```
2025-04-25 17:01:05.058 INFO (MainThread) [supervisor.store] Loading add-ons from store: 91 all - 0 new - 0 remove
2025-04-25 17:01:05.058 INFO (MainThread) [supervisor.store] Loading add-ons from store: 91 all - 0 new - 0 remove
```

This is because when repository changes are detected, `reload()` calls
`load()` which then calls `update_repositories()` which ends up calling
`_read_addons()`, while `reload()` itself calls `_read_addons()` after
`load()` as well.

One way to fix this would be to simply remove the `_read_addons()` call
in `reload()`.

However, it seems the `update_repositories()` call (via `load()`)
is not necessary at all, as we don't add new store repositories in
`reload()`, and we already made sure the built-ins are present on
startup.

So simply call `data.update()` to update the store data cache, as was
the case before #2225. There is no apparent reason documented why
`data.update()` was changed to a `load()` call. It might be to ensure
regularly that built-in repositories are still in the list of store
repositories. But this type of regular invariant check is often harmful
as it might hide bugs in other places.

Supervisor will still call `update_repositories()` in `load()` to
ensure all built-in repositories are present, just in case the local
configuration file got modified or corrupted.
2025-04-29 16:13:56 +02:00
Stefan Agner
88087046de
Remove deprecated ruff rule S320 (#5847) 2025-04-29 12:58:09 +02:00
Stefan Agner
53393afe8d
Revert "Recreate aiohttp session on connectivity check (#5332)" (#5851)
This reverts commit 1504278223a8d852d3b11de25f676fa8d6501a70.

It turns out that recreating the session can cause race conditions, e.g.
with API checks triggered by proxied requests running alongside. These
manifest in the following error:

AttributeError: 'NoneType' object has no attribute 'connect'
...
  File "supervisor/homeassistant/api.py", line 187, in check_api_state
    if state := await self.get_api_state():
  File "supervisor/homeassistant/api.py", line 171, in get_api_state
    data = await self.get_core_state()
  File "supervisor/homeassistant/api.py", line 145, in get_core_state
    return await self._get_json("api/core/state")
  File "supervisor/homeassistant/api.py", line 132, in _get_json
    async with self.make_request("get", path) as resp:
  File "contextlib.py", line 214, in __aenter__
    return await anext(self.gen)
  File "supervisor/homeassistant/api.py", line 106, in make_request
    async with getattr(self.sys_websession, method)(
  File "aiohttp/client.py", line 1425, in __aenter__
    self._resp: _RetType = await self._coro
  File "aiohttp/client.py", line 703, in _request
    conn = await self._connector.connect(

The only reason for the _connector in the aiohttp client to be None is
when close() gets called on the session. The only place this is being
done is the connectivity check.

So it seems that between fetching the session (via the `sys_websession`
property) and actually using the connector, a connectivity check has
run, which then causes the above stack trace.

Let's not mess with the lifetime of the ClientSession object and simply
revert the change. Another solution for the original problem needs to be
found.
2025-04-28 23:48:47 +02:00
dependabot[bot]
4b5bcece64
Bump setuptools from 79.0.1 to 80.0.0 (#5849)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 02:47:57 -04:00
dependabot[bot]
0e7e4f8b42
Bump ruff from 0.11.6 to 0.11.7 (#5841)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.6 to 0.11.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.6...0.11.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.11.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-25 15:45:14 +02:00
Stefan Agner
9470f44840
Improve /auth API request sanitation (#5843)
* Add basic test coverage for /auth API

* Check /auth API is called from an add-on

Currently the /auth API is only available for add-ons. Return 403
for calls not originating from an add-on.

* Handle bad json in auth API

Use the API-specific JSON load helper, which raises an APIError. This
causes the API to return a 400 error instead of a 500 error when the
JSON is invalid.
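
A sketch of what such a helper can look like (the helper name is
illustrative; `APIError` is Supervisor's API exception):

```python
from aiohttp import web

from supervisor.exceptions import APIError


async def api_json_load(request: web.Request) -> dict:
    """Parse the request body as JSON, mapping parse errors to APIError (HTTP 400)."""
    try:
        return await request.json()
    except ValueError as err:
        raise APIError(f"Invalid JSON: {err}") from err
```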

* Avoid redefining name 'mock_check_login'

* Update tests/api/test_auth.py
2025-04-25 15:17:25 +02:00
dependabot[bot]
0e55e6e67b
Bump sentry-sdk from 2.26.1 to 2.27.0 (#5842)
* Bump sentry-sdk from 2.26.1 to 2.27.0

Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.26.1 to 2.27.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.26.1...2.27.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Make use of new typing hints in filter.py

* Avoid creating unnecessary empty dict

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-04-25 11:40:17 +02:00
dependabot[bot]
6116425265
Bump actions/download-artifact from 4.2.1 to 4.3.0 (#5840)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-25 09:01:23 +02:00
Stefan Agner
de497cdc19
Add dedicated version update refresh for main components (#5833)
* Add dedicated update information reload

Currently we have the /refresh_updates endpoint which updates the main
component versions (Core, OS, Supervisor, Plug-ins) and the add-on
store at the same time. This combined update causes more update
information reloads than necessary.

To allow fine-grained update refresh control, introduce a new endpoint
/reload_updates which asks Supervisor to only update main component
versions (learned through the version JSON files).

The /store/reload endpoint already allows updating the add-on store
separately.

* Add pytest

* Update supervisor/api/__init__.py
2025-04-24 15:46:18 +02:00
dependabot[bot]
88b41e80bb
Bump actions/setup-python from 5.5.0 to 5.6.0 (#5836)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 21:04:49 -10:00
dependabot[bot]
876afdb26e
Bump setuptools from 79.0.0 to 79.0.1 (#5835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 21:04:32 -10:00
dependabot[bot]
9d062c8ed0
Bump sigstore/cosign-installer from 3.8.1 to 3.8.2 (#5832)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.1 to 3.8.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.8.1...v3.8.2)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.8.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 10:40:41 +02:00
Stefan Agner
122b73202b
Unify Supervisor event message functions (#5831)
* Unify Supervisor event message functions

Unify functions which send WebSocket messages of type
"supervisor/event". This deduplicates code and hopefully avoids further
diversification in the future.

While at it, remove the unused HomeAssistantWSNotSupported exception. It
seems the only place this exception was used got removed in #3317.

* Test message delivery during shutdown states
2025-04-23 10:40:25 +02:00
Stefan Agner
5d07dd2c42
Add country to Supervisor info (#5826)
Similar to timezone, also add country information to the Supervisor
info. This is useful for setting country-specific configurations such
as the wireless radio regulatory setting. It is also useful for add-ons
which need country information but only have hassio API access.
2025-04-22 16:18:23 +02:00
Jan Čermák
adfb433f57
Intercept host logs Range header for Systemd v256+ compatibility (#5827)
Since Systemd v256 the Range header must not end with a trailing colon.
We relied on this undocumented feature when following logs, and the
frontend or CLI may still use it in requests. To fix the requests
failing with the new Systemd version, intercept the header and fill in
num_entries with the maximum possible value, which avoids
journal-gatewayd returning the response prematurely and also works on
older Systemd versions.

journal-gatewayd would still return a response if the follow flag is
used along with num_entries, but this behavior is unchanged and would
be better fixed in the backend.

Link: https://github.com/systemd/systemd/issues/37172
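
A sketch of such an interception (the helper name and exact rewrite are
illustrative; journal-gatewayd's documented format is
`Range: entries=<cursor>[[:<num_skip>]:<num_entries>]`):

```python
import sys


def sanitize_range(range_header: str) -> str:
    """Replace a trailing colon with an explicit, maximal num_entries."""
    if range_header.endswith(":"):
        # Systemd v256+ rejects a trailing colon; request the maximum instead.
        return f"{range_header}{sys.maxsize}"
    return range_header
```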
2025-04-22 09:05:49 +02:00
dependabot[bot]
198af54d1e
Bump aiohttp from 3.11.16 to 3.11.18 (#5830)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.11.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-22 09:02:14 +02:00
dependabot[bot]
c3e63a5669
Bump setuptools from 78.1.0 to 79.0.0 (#5829) 2025-04-21 20:29:21 +02:00
dependabot[bot]
8f27958e20
Bump ruff from 0.11.5 to 0.11.6 (#5828)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-17 21:43:49 -10:00
Stefan Agner
6fad7d14e1
Avoid using host system socket for logs tests (#5825)
Make sure we mock the systemd journal gateway socket for tests. This
makes the test work on systems which have systemd-journal-gatewayd
installed.
2025-04-17 16:23:34 +02:00
dependabot[bot]
f7317134e3
Bump sentry-sdk from 2.26.0 to 2.26.1 (#5824)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-16 09:20:46 +02:00
dependabot[bot]
9d8db27701
Bump sentry-sdk from 2.25.1 to 2.26.0 (#5822)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.25.1 to 2.26.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.25.1...2.26.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-16 08:38:36 +02:00
dependabot[bot]
7da3a34304
Bump codecov/codecov-action from 5.4.0 to 5.4.2 (#5823) 2025-04-15 08:25:45 +02:00
Stefan Agner
d413e0dcb9
Drop typing-extensions from requirements (#5821)
The dependency typing-extensions was added with #3848, but not used
much. With the update to Python 3.11 in #4666 the necessary types are
now available from the Python standard library, making typing-extensions
unused. Remove the now-unnecessary typing-extensions from the
dependencies.
2025-04-11 10:40:24 +02:00
dependabot[bot]
542ab0411c
Bump typing-extensions from 4.13.1 to 4.13.2 (#5817)
Bumps [typing-extensions](https://github.com/python/typing_extensions) from 4.13.1 to 4.13.2.
- [Release notes](https://github.com/python/typing_extensions/releases)
- [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python/typing_extensions/compare/4.13.1...4.13.2)

---
updated-dependencies:
- dependency-name: typing-extensions
  dependency-version: 4.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 09:45:33 +02:00
dependabot[bot]
999789f7ce
Bump ruff from 0.11.4 to 0.11.5 (#5818)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.4 to 0.11.5.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.4...0.11.5)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.11.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 09:31:52 +02:00
dependabot[bot]
de105f8cb7
Bump debugpy from 1.8.13 to 1.8.14 (#5819)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.13 to 1.8.14.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.13...v1.8.14)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.14
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 09:31:38 +02:00
dependabot[bot]
b37b0ff744
Bump urllib3 from 2.3.0 to 2.4.0 (#5820)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.3.0...2.4.0)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 09:31:25 +02:00
dependabot[bot]
db330ab58a
Update wheel requirement from ~=0.45.0 to ~=0.46.1 (#5816)
Updates the requirements on [wheel](https://github.com/pypa/wheel) to permit the latest version.
- [Release notes](https://github.com/pypa/wheel/releases)
- [Changelog](https://github.com/pypa/wheel/blob/main/docs/news.rst)
- [Commits](https://github.com/pypa/wheel/compare/0.45.0...0.46.1)

---
updated-dependencies:
- dependency-name: wheel
  dependency-version: 0.46.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-09 09:47:43 +02:00
Mike Degatano
4a00caa2e8
Fix mypy issues in docker, hardware and homeassistant modules (#5805)
* Fix mypy issues in docker and hardware modules

* Fix mypy issues in homeassistant module

* Fix async_send_command typing

* Fixes from feedback
2025-04-08 12:52:58 -04:00
228 changed files with 5795 additions and 1982 deletions

View File

@@ -1,69 +0,0 @@
---
name: Report a bug with the Supervisor on a supported System
about: Report an issue related to the Home Assistant Supervisor.
labels: bug
---
<!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
- Provide as many details as possible. Paste logs, configuration sample and code into the backticks. Do not delete any text from this template!
- If you have a problem with an add-on, make an issue in its repository.
-->
<!--
Important: You can only file a bug report for a supported system! If you run an unsupported installation, this report will be closed without comment.
-->
### Describe the issue
<!-- Provide as many details as possible. -->
### Steps to reproduce
<!-- What do you do to encounter the issue. -->
1. ...
2. ...
3. ...
### Environment details
<!-- You can find these details in the system tab of the supervisor panel, or by using the `ha` CLI. -->
- **Operating System**: xxx
- **Supervisor version**: xxx
- **Home Assistant version**: xxx
### Supervisor logs
<details>
<summary>Supervisor logs</summary>
<!--
- Frontend -> Supervisor -> System
- Or use this command: ha supervisor logs
- Logs are more than just errors; even if you don't think it's important, it is.
-->
```
Paste supervisor logs here
```
</details>
### System Information
<details>
<summary>System Information</summary>
<!--
- Use this command: ha info
-->
```
Paste system info here
```
</details>

View File

@@ -1,6 +1,5 @@
-name: Bug Report Form
+name: Report an issue with Home Assistant Supervisor
 description: Report an issue related to the Home Assistant Supervisor.
-labels: bug
 body:
   - type: markdown
     attributes:
@@ -9,7 +8,7 @@ body:
        If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].
-       [fr]: https://community.home-assistant.io/c/feature-requests
+       [fr]: https://github.com/orgs/home-assistant/discussions
   - type: textarea
     validations:
       required: true
@@ -76,7 +75,7 @@ body:
     description: >
       The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
       Click the copy button at the bottom of the pop-up and paste it here.
       [![Open your Home Assistant instance and show health information about your system.](https://my.home-assistant.io/badges/system_health.svg)](https://my.home-assistant.io/redirect/system_health/)
   - type: textarea
     attributes:
@@ -86,7 +85,7 @@ body:
       Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
       Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
       and select 'Download diagnostics'.
       **Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
   - type: textarea
     attributes:

View File

@@ -13,7 +13,7 @@ contact_links:
     about: Our documentation has its own issue tracker. Please report issues with the website there.
   - name: Request a feature for the Supervisor
-    url: https://community.home-assistant.io/c/feature-requests
+    url: https://github.com/orgs/home-assistant/discussions
     about: Request a new feature for the Supervisor.
   - name: I have a question or need support

.github/ISSUE_TEMPLATE/task.yml (new file, 53 lines)
View File

@@ -0,0 +1,53 @@
name: Task
description: For staff only - Create a task
type: Task
body:
  - type: markdown
    attributes:
      value: |
        ## ⚠️ RESTRICTED ACCESS

        **This form is restricted to Open Home Foundation staff and authorized contributors only.**

        If you are a community member wanting to contribute, please:
        - For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
        - For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)

        ---

        ### For authorized contributors

        Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        Provide a clear and detailed description of the task that needs to be accomplished.
        Be specific about what needs to be done, why it's important, and any constraints or requirements.
      placeholder: |
        Describe the task, including:
        - What needs to be done
        - Why this task is needed
        - Expected outcome
        - Any constraints or requirements
    validations:
      required: true
  - type: textarea
    id: additional_context
    attributes:
      label: Additional context
      description: |
        Any additional information, links, research, or context that would be helpful.
        Include links to related issues, research, prototypes, roadmap opportunities etc.
      placeholder: |
        - Roadmap opportunity: [link]
        - Epic: [link]
        - Feature request: [link]
        - Technical design documents: [link]
        - Prototype/mockup: [link]
        - Dependencies: [links]
    validations:
      required: false

.github/copilot-instructions.md (new file, 288 lines)
View File

@@ -0,0 +1,288 @@
# GitHub Copilot & Claude Code Instructions
This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.
## Supervisor Capabilities & Features
### Architecture Overview
Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.
**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
connectivity
- **Storage & Backup**: Data persistence and backup management across all containers
**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System
**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.
**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development
**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.
**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System
**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.
**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.
**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
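
For illustration, that comparison can be expressed with the awesomeversion package Supervisor already depends on (a sketch, not the actual store code):

```python
from awesomeversion import AwesomeVersion


def update_available(installed: str, repository: str) -> bool:
    """Compare an installed add-on version against the repository version."""
    return AwesomeVersion(repository) > AwesomeVersion(installed)


assert update_available("1.2.0", "1.3.0")
assert not update_available("1.3.0", "1.3.0")
```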
### Backup & Recovery System
**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations
**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues
---
## Supervisor Development
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
### Code Quality Standards
- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization
**Core Structure**:
```
supervisor/
├── __init__.py # Package initialization
├── const.py # Constants and enums
├── coresys.py # Core system management
├── bootstrap.py # System initialization
├── exceptions.py # Custom exception classes
├── api/ # REST API endpoints
├── addons/ # Add-on management
├── backups/ # Backup system
├── docker/ # Docker integration
├── host/ # Host system interface
├── homeassistant/ # Home Assistant Core management
├── dbus/ # D-Bus system integration
├── hardware/ # Hardware detection and management
├── plugins/ # Plugin system
├── resolution/ # Issue detection and resolution
├── security/ # Security management
├── services/ # Service discovery and management
├── store/ # Add-on store management
└── utils/ # Utility functions
```
**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
### Supervisor Architecture Patterns
**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.
```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    def __init__(self, coresys: CoreSys):
        """Initialize manager."""
        self.coresys: CoreSys = coresys
        self._component: MyComponent = MyComponent(coresys)

    @property
    def component(self) -> MyComponent:
        """Return component handler."""
        return self._component

    # Access system components via inherited properties
    async def do_something(self):
        await self.sys_docker.containers.get("my_container")
        self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```
**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface
**Load Pattern**: Many components implement a `load()` method which effectively
initializes the component from external sources (containers, files, D-Bus services).
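A minimal sketch of the pattern (the component and its file-reading helper are hypothetical):
```python
class ExampleManager(CoreSysAttributes):
    """Hypothetical component following the load pattern."""

    async def load(self) -> None:
        """Initialize state from external sources at startup."""
        # Blocking file I/O is moved off the event loop
        self._state = await self.sys_run_in_executor(self._read_state_file)
```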
### API Development
**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
`{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`
**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.
```python
from ..api.utils import api_process, api_validate


@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
    """Create full backup."""
    body = await api_validate(SCHEMA_BACKUP_FULL, request)
    job = await self.sys_backups.do_backup_full(**body)
    return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration
- **Container Management**: Use Supervisor's Docker manager instead of direct
Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
isolation
- **Health Checks**: Implement health monitoring for all managed containers
### D-Bus Integration
- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions
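For illustration, reading a property from a standard systemd service with dbus-fast, wrapped into a Supervisor exception (a sketch only; production code goes through Supervisor's D-Bus abstraction layer):
```python
from dbus_fast import BusType, DBusError
from dbus_fast.aio import MessageBus

from supervisor.exceptions import HostError


async def read_static_hostname() -> str:
    """Read the static hostname via org.freedesktop.hostname1."""
    bus = await MessageBus(bus_type=BusType.SYSTEM).connect()
    try:
        introspection = await bus.introspect(
            "org.freedesktop.hostname1", "/org/freedesktop/hostname1"
        )
        proxy = bus.get_proxy_object(
            "org.freedesktop.hostname1", "/org/freedesktop/hostname1", introspection
        )
        hostname1 = proxy.get_interface("org.freedesktop.hostname1")
        return await hostname1.get_static_hostname()
    except DBusError as err:
        # Wrap the D-Bus failure in a Supervisor-specific exception
        raise HostError(f"Can't read hostname: {err}") from err
```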
### Async Programming
- **All I/O operations must be async**: File operations, network calls, subprocess
execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
initialization
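A short sketch of the concurrency and executor rules (plugin attribute names are illustrative):
```python
import asyncio


class ExampleStarter(CoreSysAttributes):
    """Hypothetical component demonstrating concurrent awaits and executor jobs."""

    async def start_plugins(self) -> None:
        # Start independent plugins concurrently instead of sequentially
        await asyncio.gather(
            self.sys_plugins.dns.start(),
            self.sys_plugins.audio.start(),
            self.sys_plugins.multicast.start(),
        )

    async def read_config(self) -> str:
        # Blocking file I/O belongs in the executor
        return await self.sys_run_in_executor(self._config_path.read_text)
```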
### Testing
- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
### Error Handling
- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
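A sketch of the intended pattern (the host call is hypothetical):
```python
from ..exceptions import APIError, HassioError


@api_process
async def disk_resize(self, request: web.Request) -> None:
    """Translate a low-level failure into an API error, keeping the chain."""
    try:
        await self.sys_host.resize_data_disk()  # hypothetical host operation
    except HassioError as err:
        # `from err` preserves the original traceback for debugging
        raise APIError(f"Can't resize data disk: {err}") from err
```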
### Security Considerations
- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
input validation
### Development Commands
```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/
# Linting and formatting
ruff check supervisor/
ruff format supervisor/
# Type checking
mypy --ignore-missing-imports supervisor/
# Pre-commit hooks
pre-commit run --all-files
```
Always run the pre-commit hooks at the end of code editing.
### Common Patterns to Follow
**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.

View File

@@ -125,15 +125,15 @@ jobs:
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
         if: needs.init.outputs.publish == 'true'
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.8.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
-          cosign-release: "v2.4.0"
+          cosign-release: "v2.4.3"
       - name: Install dirhash and calc hash
         if: needs.init.outputs.publish == 'true'

View File

@@ -10,6 +10,7 @@ on:
 env:
   DEFAULT_PYTHON: "3.13"
   PRE_COMMIT_CACHE: ~/.cache/pre-commit
+  MYPY_CACHE_VERSION: 1

 concurrency:
   group: "${{ github.workflow }}-${{ github.ref }}"
@@ -28,7 +29,7 @@ jobs:
         uses: actions/checkout@v4.2.2
       - name: Set up Python
         id: python
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Restore Python virtual environment
@@ -69,7 +70,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -112,7 +113,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -170,7 +171,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -214,7 +215,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -258,7 +259,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -286,6 +287,52 @@
           . venv/bin/activate
           pylint supervisor tests

+  mypy:
+    name: Check mypy
+    runs-on: ubuntu-latest
+    needs: prepare
+    steps:
+      - name: Check out code from GitHub
+        uses: actions/checkout@v4.2.2
+      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
+        uses: actions/setup-python@v5.6.0
+        id: python
+        with:
+          python-version: ${{ needs.prepare.outputs.python-version }}
+      - name: Generate partial mypy restore key
+        id: generate-mypy-key
+        run: |
+          mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
+          echo "version=$mypy_version" >> $GITHUB_OUTPUT
+          echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
+      - name: Restore Python virtual environment
+        id: cache-venv
+        uses: actions/cache@v4.2.3
+        with:
+          path: venv
+          key: >-
+            ${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
+      - name: Fail job if Python cache restore failed
+        if: steps.cache-venv.outputs.cache-hit != 'true'
+        run: |
+          echo "Failed to restore Python virtual environment from cache"
+          exit 1
+      - name: Restore mypy cache
+        uses: actions/cache@v4.2.3
+        with:
+          path: .mypy_cache
+          key: >-
+            ${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
+          restore-keys: >-
+            ${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
+      - name: Register mypy problem matcher
+        run: |
+          echo "::add-matcher::.github/workflows/matchers/mypy.json"
+      - name: Run mypy
+        run: |
+          . venv/bin/activate
+          mypy --ignore-missing-imports supervisor
+
   pytest:
     runs-on: ubuntu-latest
     needs: prepare
@@ -294,14 +341,14 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
        with:
          python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.8.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
-          cosign-release: "v2.4.0"
+          cosign-release: "v2.4.3"
       - name: Restore Python virtual environment
         id: cache-venv
         uses: actions/cache@v4.2.3
@@ -353,7 +400,7 @@
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v5.5.0
+        uses: actions/setup-python@v5.6.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
@@ -370,7 +417,7 @@
           echo "Failed to restore Python virtual environment from cache"
           exit 1
       - name: Download all coverage artifacts
-        uses: actions/download-artifact@v4.2.1
+        uses: actions/download-artifact@v4.3.0
       - name: Combine coverage results
         run: |
           . venv/bin/activate
@@ -378,4 +425,4 @@
           coverage report
           coverage xml
       - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v5.4.0
+        uses: codecov/codecov-action@v5.4.3

.github/workflows/matchers/mypy.json (new file, 16 lines)
View File

@@ -0,0 +1,16 @@
{
  "problemMatcher": [
    {
      "owner": "mypy",
      "pattern": [
        {
          "regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
          "file": 1,
          "line": 2,
          "severity": 3,
          "message": 4
        }
      ]
    }
  ]
}

View File

@@ -0,0 +1,58 @@
name: Restrict task creation

# yamllint disable-line rule:truthy
on:
  issues:
    types: [opened]

jobs:
  check-authorization:
    runs-on: ubuntu-latest
    # Only run if this is a Task issue type (from the issue form)
    if: github.event.issue.issue_type == 'Task'
    steps:
      - name: Check if user is authorized
        uses: actions/github-script@v7
        with:
          script: |
            const issueAuthor = context.payload.issue.user.login;

            // Check if user is an organization member
            try {
              await github.rest.orgs.checkMembershipForUser({
                org: 'home-assistant',
                username: issueAuthor
              });
              console.log(`✅ ${issueAuthor} is an organization member`);
              return; // Authorized
            } catch (error) {
              console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
            }

            // Close the issue with a comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
                    `Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
                    `If you would like to:\n` +
                    `- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
                    `- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
                    `If you believe you should have access to create Task issues, please contact the maintainers.`
            });

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              state: 'closed'
            });

            // Add a label to indicate this was auto-closed
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['auto-closed']
            });

View File

@@ -12,7 +12,7 @@ jobs:
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Sentry Release
-        uses: getsentry/action-release@v3.1.1
+        uses: getsentry/action-release@v3.2.0
         env:
           SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
           SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.9.1
+    rev: v0.11.10
     hooks:
       - id: ruff
         args:
@@ -13,3 +13,15 @@ repos:
       - id: check-executables-have-shebangs
         stages: [manual]
       - id: check-json
+  - repo: local
+    hooks:
+      # Run mypy through our wrapper script in order to get the possible
+      # pyenv and/or virtualenv activated; it may not have been e.g. if
+      # committing from a GUI tool that was not launched from an activated
+      # shell.
+      - id: mypy
+        name: mypy
+        entry: script/run-in-env.sh mypy --ignore-missing-imports
+        language: script
+        types_or: [python, pyi]
+        files: ^supervisor/.+\.(py|pyi)$

CLAUDE.md (new symbolic link, 1 line)
View File

@@ -0,0 +1 @@
.github/copilot-instructions.md

View File

@@ -29,7 +29,7 @@ RUN \
     \
     && curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
     && chmod a+x /usr/bin/cosign \
-    && pip3 install uv==0.6.1
+    && pip3 install uv==0.6.17

 # Install requirements
 COPY requirements.txt .

View File

@@ -12,7 +12,7 @@ cosign:
   base_identity: https://github.com/home-assistant/docker-base/.*
   identity: https://github.com/home-assistant/supervisor/.*
 args:
-  COSIGN_VERSION: 2.4.0
+  COSIGN_VERSION: 2.4.3
 labels:
   io.hass.type: supervisor
   org.opencontainers.image.title: Home Assistant Supervisor

View File

@@ -1,5 +1,5 @@
 [build-system]
-requires = ["setuptools~=78.1.0", "wheel~=0.45.0"]
+requires = ["setuptools~=80.9.0", "wheel~=0.46.1"]
 build-backend = "setuptools.build_meta"

 [project]
@@ -230,6 +230,9 @@ filterwarnings = [
     "ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
     "ignore::pytest.PytestUnraisableExceptionWarning",
 ]
+markers = [
+    "no_mock_init_websession: disable the autouse mock of init_websession for this test",
+]

 [tool.ruff]
 lint.select = [
@@ -272,7 +275,6 @@ lint.select = [
     "S317", # suspicious-xml-sax-usage
     "S318", # suspicious-xml-mini-dom-usage
     "S319", # suspicious-xml-pull-dom-usage
-    "S320", # suspicious-xmle-tree-usage
     "S601", # paramiko-call
     "S602", # subprocess-popen-with-shell-equals-true
     "S604", # call-with-shell-equals-true

View File

@@ -1,15 +1,15 @@
-aiodns==3.2.0
+aiodns==3.5.0
-aiohttp==3.11.16
+aiohttp==3.12.14
 atomicwrites-homeassistant==1.4.1
 attrs==25.3.0
-awesomeversion==24.6.0
+awesomeversion==25.5.0
-blockbuster==1.5.24
+blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.2
 colorlog==6.9.0
 cpe==1.3.1
-cryptography==44.0.2
+cryptography==45.0.5
-debugpy==1.8.13
+debugpy==1.8.15
 deepmerge==2.0
 dirhash==0.5.0
 docker==7.1.0
@@ -17,15 +17,14 @@ faust-cchardet==2.1.19
 gitpython==3.1.44
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.10.16
+orjson==3.11.0
 pulsectl==24.12.0
 pyudev==0.24.3
 PyYAML==6.0.2
-requests==2.32.3
+requests==2.32.4
 securetar==2025.2.1
-sentry-sdk==2.25.1
+sentry-sdk==2.33.2
-setuptools==78.1.0
+setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.1
+dbus-fast==2.44.2
-typing_extensions==4.13.1
 zlib-fast==0.2.1

View File

@@ -1,13 +1,16 @@
-astroid==3.3.9
+astroid==3.3.11
-coverage==7.8.0
+coverage==7.9.2
+mypy==1.17.0
 pre-commit==4.2.0
-pylint==3.3.6
+pylint==3.3.7
 pytest-aiohttp==1.1.0
 pytest-asyncio==0.25.2
-pytest-cov==6.1.1
+pytest-cov==6.2.1
-pytest-timeout==2.3.1
+pytest-timeout==2.4.0
-pytest==8.3.5
+pytest==8.4.1
-ruff==0.11.4
+ruff==0.12.4
 time-machine==2.16.0
-typing_extensions==4.13.1
-urllib3==2.3.0
+types-docker==7.1.0.20250705
+types-pyyaml==6.0.12.20250516
+types-requests==2.32.4.20250611
+urllib3==2.5.0

script/run-in-env.sh (new executable file, 30 lines)
View File

@@ -0,0 +1,30 @@
#!/usr/bin/env sh
set -eu

# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"

# Activate pyenv and virtualenv if present, then run the specified command

# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
    PYENV_VERSION=$(head -n 1 .python-version)
    export PYENV_VERSION
fi

if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
    . "${VIRTUAL_ENV}/bin/activate"
else
    # other common virtualenvs
    my_path=$(git rev-parse --show-toplevel)

    for venv in venv .venv .; do
        if [ -f "${my_path}/${venv}/bin/activate" ]; then
            . "${my_path}/${venv}/bin/activate"
            break
        fi
    done
fi

exec "$@"

View File

@@ -13,7 +13,7 @@ zlib_fast.enable()
 # pylint: disable=wrong-import-position
 from supervisor import bootstrap  # noqa: E402
-from supervisor.utils.blockbuster import activate_blockbuster  # noqa: E402
+from supervisor.utils.blockbuster import BlockBusterManager  # noqa: E402
 from supervisor.utils.logging import activate_log_queue_handler  # noqa: E402

 # pylint: enable=wrong-import-position
@@ -55,7 +55,7 @@
     coresys = loop.run_until_complete(bootstrap.initialize_coresys())
     loop.set_debug(coresys.config.debug)
     if coresys.config.detect_blocking_io:
-        activate_blockbuster()
+        BlockBusterManager.activate()
     loop.run_until_complete(coresys.core.connect())

     loop.run_until_complete(bootstrap.supervisor_debugger(coresys))
@@ -66,8 +66,15 @@
     _LOGGER.info("Setting up Supervisor")
     loop.run_until_complete(coresys.core.setup())

-    loop.call_soon_threadsafe(loop.create_task, coresys.core.start())
-    loop.call_soon_threadsafe(bootstrap.reg_signal, loop, coresys)
+    bootstrap.register_signal_handlers(loop, coresys)
+
+    try:
+        loop.run_until_complete(coresys.core.start())
+    except Exception as err:  # pylint: disable=broad-except
+        # Supervisor itself is running at this point, just something didn't
+        # start as expected. Log with traceback to get more insights for
+        # such cases.
+        _LOGGER.critical("Supervisor start failed: %s", err, exc_info=True)

     try:
         _LOGGER.info("Running Supervisor")

View File

@@ -33,8 +33,6 @@ from ..const import (
     ATTR_AUDIO_OUTPUT,
     ATTR_AUTO_UPDATE,
     ATTR_BOOT,
-    ATTR_DATA,
-    ATTR_EVENT,
     ATTR_IMAGE,
     ATTR_INGRESS_ENTRY,
     ATTR_INGRESS_PANEL,
@@ -50,7 +48,6 @@ from ..const import (
     ATTR_SYSTEM,
     ATTR_SYSTEM_MANAGED,
     ATTR_SYSTEM_MANAGED_CONFIG_ENTRY,
-    ATTR_TYPE,
     ATTR_USER,
     ATTR_UUID,
     ATTR_VERSION,
@@ -79,7 +76,7 @@ from ..exceptions import (
     HostAppArmorError,
 )
 from ..hardware.data import Device
-from ..homeassistant.const import WSEvent, WSType
+from ..homeassistant.const import WSEvent
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
 from ..resolution.const import ContextType, IssueType, UnhealthyReason
@@ -196,15 +193,12 @@ class Addon(AddonModel):
         ):
             self.sys_resolution.dismiss_issue(self.device_access_missing_issue)

-        self.sys_homeassistant.websocket.send_message(
+        self.sys_homeassistant.websocket.supervisor_event_custom(
+            WSEvent.ADDON,
             {
-                ATTR_TYPE: WSType.SUPERVISOR_EVENT,
-                ATTR_DATA: {
-                    ATTR_EVENT: WSEvent.ADDON,
-                    ATTR_SLUG: self.slug,
-                    ATTR_STATE: new_state,
-                },
-            }
+                ATTR_SLUG: self.slug,
+                ATTR_STATE: new_state,
+            },
         )

     @property
@@ -366,7 +360,7 @@
     @property
     def auto_update(self) -> bool:
         """Return if auto update is enable."""
-        return self.persist.get(ATTR_AUTO_UPDATE, super().auto_update)
+        return self.persist.get(ATTR_AUTO_UPDATE, False)

     @auto_update.setter
     def auto_update(self, value: bool) -> None:
@@ -852,9 +846,10 @@
             await self.sys_ingress.update_hass_panel(self)

         # Cleanup Ingress dynamic port assignment
+        need_ingress_token_cleanup = False
         if self.with_ingress:
+            need_ingress_token_cleanup = True
             await self.sys_ingress.del_dynamic_port(self.slug)
-            self.sys_create_task(self.sys_ingress.reload())

         # Cleanup discovery data
         for message in self.sys_discovery.list_messages:
@@ -869,8 +864,12 @@
             await service.del_service_data(self)

         # Remove from addon manager
-        await self.sys_addons.data.uninstall(self)
         self.sys_addons.local.pop(self.slug)
+        await self.sys_addons.data.uninstall(self)
+
+        # Cleanup Ingress tokens
+        if need_ingress_token_cleanup:
+            await self.sys_ingress.reload()

     @Job(
         name="addon_update",
@@ -1323,8 +1322,8 @@
                 arcname="data",
             )

-            # Backup config
-            if addon_config_used:
+            # Backup config (if used and existing, restore handles this gracefully)
+            if addon_config_used and self.path_config.is_dir():
                 atomic_contents_add(
                     backup,
                     self.path_config,
@@ -1360,9 +1359,7 @@
             )
             _LOGGER.info("Finish backup for addon %s", self.slug)
         except (tarfile.TarError, OSError, AddFileError) as err:
-            raise AddonsError(
-                f"Can't write tarfile {tar_file}: {err}", _LOGGER.error
-            ) from err
+            raise AddonsError(f"Can't write tarfile: {err}", _LOGGER.error) from err
         finally:
             if was_running:
                 wait_for_start = await self.end_backup()

View File

@@ -15,6 +15,7 @@ from ..const import (
     ATTR_SQUASH,
     FILE_SUFFIX_CONFIGURATION,
     META_ADDON,
+    SOCKET_DOCKER,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..docker.interface import MAP_ARCH
@@ -121,39 +122,64 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
         except HassioArchNotFound:
             return False

-    def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
-        """Create a dict with Docker build arguments.
-
-        Must be run in executor.
-        """
-        args: dict[str, Any] = {
-            "path": str(self.addon.path_location),
-            "tag": f"{image or self.addon.image}:{version!s}",
-            "dockerfile": str(self.get_dockerfile()),
-            "pull": True,
-            "forcerm": not self.sys_dev,
-            "squash": self.squash,
-            "platform": MAP_ARCH[self.arch],
-            "labels": {
-                "io.hass.version": version,
-                "io.hass.arch": self.arch,
-                "io.hass.type": META_ADDON,
-                "io.hass.name": self._fix_label("name"),
-                "io.hass.description": self._fix_label("description"),
-                **self.additional_labels,
-            },
-            "buildargs": {
-                "BUILD_FROM": self.base_image,
-                "BUILD_VERSION": version,
-                "BUILD_ARCH": self.sys_arch.default,
-                **self.additional_args,
-            },
-        }
-
-        if self.addon.url:
-            args["labels"]["io.hass.url"] = self.addon.url
-
-        return args
+    def get_docker_args(
+        self, version: AwesomeVersion, image_tag: str
+    ) -> dict[str, Any]:
+        """Create a dict with Docker run args."""
+        dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
+
+        build_cmd = [
+            "docker",
+            "buildx",
+            "build",
+            ".",
+            "--tag",
+            image_tag,
+            "--file",
+            str(dockerfile_path),
+            "--platform",
+            MAP_ARCH[self.arch],
+            "--pull",
+        ]
+
+        labels = {
+            "io.hass.version": version,
+            "io.hass.arch": self.arch,
+            "io.hass.type": META_ADDON,
+            "io.hass.name": self._fix_label("name"),
+            "io.hass.description": self._fix_label("description"),
+            **self.additional_labels,
+        }
+
+        if self.addon.url:
+            labels["io.hass.url"] = self.addon.url
+
+        for key, value in labels.items():
+            build_cmd.extend(["--label", f"{key}={value}"])
+
+        build_args = {
+            "BUILD_FROM": self.base_image,
+            "BUILD_VERSION": version,
+            "BUILD_ARCH": self.sys_arch.default,
+            **self.additional_args,
+        }
+
+        for key, value in build_args.items():
+            build_cmd.extend(["--build-arg", f"{key}={value}"])
+
+        # The addon path will be mounted from the host system
+        addon_extern_path = self.sys_config.local_to_extern_path(
+            self.addon.path_location
+        )
+
+        return {
+            "command": build_cmd,
+            "volumes": {
+                SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
+                addon_extern_path: {"bind": "/addon", "mode": "ro"},
+            },
+            "working_dir": "/addon",
+        }

     def _fix_label(self, label_name: str) -> str:
         """Remove characters they are not supported."""
@@ -67,6 +67,10 @@ class AddonManager(CoreSysAttributes):
             return self.store.get(addon_slug)
         return None

+    def get_local_only(self, addon_slug: str) -> Addon | None:
+        """Return an installed add-on from slug."""
+        return self.local.get(addon_slug)
+
     def from_token(self, token: str) -> Addon | None:
         """Return an add-on from Supervisor token."""
         for addon in self.installed:
@@ -262,7 +266,7 @@ class AddonManager(CoreSysAttributes):
         ],
         on_condition=AddonsJobError,
     )
-    async def rebuild(self, slug: str) -> asyncio.Task | None:
+    async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
         """Perform a rebuild of local build add-on.

         Returns a Task that completes when addon has state 'started' (see addon.start)
@@ -285,7 +289,7 @@ class AddonManager(CoreSysAttributes):
             raise AddonsError(
                 "Version changed, use Update instead Rebuild", _LOGGER.error
             )
-        if not addon.need_build:
+        if not force and not addon.need_build:
             raise AddonsNotSupportedError(
                 "Can't rebuild a image based add-on", _LOGGER.error
             )
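With the new `force` flag, a rebuild can be requested even when `need_build` says the add-on is image based. A hypothetical client call against the REST endpoint wired up in the add-ons API below (token and slug are placeholders):

```python
import asyncio
import aiohttp

async def force_rebuild(slug: str) -> None:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"http://supervisor/addons/{slug}/rebuild",
            headers={"Authorization": "Bearer <supervisor-token>"},
            json={"force": True},  # validated by SCHEMA_REBUILD, defaults to False
        ) as resp:
            resp.raise_for_status()

asyncio.run(force_rebuild("local_example"))
```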
@@ -664,12 +664,16 @@ class AddonModel(JobGroup, ABC):
         """Validate if addon is available for current system."""
         return self._validate_availability(self.data, logger=_LOGGER.error)

-    def __eq__(self, other):
-        """Compaired add-on objects."""
+    def __eq__(self, other: Any) -> bool:
+        """Compare add-on objects."""
         if not isinstance(other, AddonModel):
             return False
         return self.slug == other.slug

+    def __hash__(self) -> int:
+        """Hash for add-on objects."""
+        return hash(self.slug)
+
     def _validate_availability(
         self, config, *, logger: Callable[..., None] | None = None
     ) -> None:
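The `__hash__` addition is not cosmetic: a class that defines `__eq__` gets `__hash__` set to None and becomes unhashable, which breaks set membership and dict keys. A toy demonstration of the behavior restored here:

```python
class WithoutHash:
    def __init__(self, slug: str) -> None:
        self.slug = slug

    def __eq__(self, other: object) -> bool:
        return isinstance(other, WithoutHash) and self.slug == other.slug

class WithHash(WithoutHash):
    def __hash__(self) -> int:
        return hash(self.slug)

# {WithoutHash("a")}  -> TypeError: unhashable type: 'WithoutHash'
assert len({WithHash("a"), WithHash("a")}) == 1  # deduplicated by slug
```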
@@ -8,7 +8,7 @@ from typing import Any

 from aiohttp import hdrs, web

-from ..const import AddonState
+from ..const import SUPERVISOR_DOCKER_NAME, AddonState
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import APIAddonNotInstalled, HostNotSupportedError
 from ..utils.sentry import async_capture_exception
@@ -345,6 +345,9 @@ class RestAPI(CoreSysAttributes):
         api_root.coresys = self.coresys

         self.webapp.add_routes([web.get("/info", api_root.info)])
+        self.webapp.add_routes([web.post("/reload_updates", api_root.reload_updates)])
+
+        # Discouraged
         self.webapp.add_routes([web.post("/refresh_updates", api_root.refresh_updates)])
         self.webapp.add_routes(
             [web.get("/available_updates", api_root.available_updates)]
@@ -423,7 +426,7 @@ class RestAPI(CoreSysAttributes):
         async def get_supervisor_logs(*args, **kwargs):
             try:
                 return await self._api_host.advanced_logs_handler(
-                    *args, identifier="hassio_supervisor", **kwargs
+                    *args, identifier=SUPERVISOR_DOCKER_NAME, **kwargs
                 )
             except Exception as err:  # pylint: disable=broad-exception-caught
                 # Supervisor logs are critical, so catch everything, log the exception
@@ -786,6 +789,7 @@ class RestAPI(CoreSysAttributes):
         self.webapp.add_routes(
             [
                 web.get("/docker/info", api_docker.info),
+                web.post("/docker/options", api_docker.options),
                 web.get("/docker/registries", api_docker.registries),
                 web.post("/docker/registries", api_docker.create_registry),
                 web.delete("/docker/registries/{hostname}", api_docker.remove_registry),
@@ -36,6 +36,7 @@ from ..const import (
     ATTR_DNS,
     ATTR_DOCKER_API,
     ATTR_DOCUMENTATION,
+    ATTR_FORCE,
     ATTR_FULL_ACCESS,
     ATTR_GPIO,
     ATTR_HASSIO_API,
@@ -139,6 +140,8 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
 SCHEMA_UNINSTALL = vol.Schema(
     {vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
 )
+
+SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
 # pylint: enable=no-value-for-parameter
@@ -461,7 +464,11 @@ class APIAddons(CoreSysAttributes):
     async def rebuild(self, request: web.Request) -> None:
         """Rebuild local build add-on."""
         addon = self.get_addon_for_request(request)
+        body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)

-        if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
+        if start_task := await asyncio.shield(
+            self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
+        ):
             await start_task

     @api_process
@@ -3,18 +3,19 @@
 import asyncio
 from collections.abc import Awaitable
 import logging
-from typing import Any
+from typing import Any, cast

 from aiohttp import BasicAuth, web
 from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
+from aiohttp.web import FileField
 from aiohttp.web_exceptions import HTTPUnauthorized
+from multidict import MultiDictProxy
 import voluptuous as vol

 from ..addons.addon import Addon
 from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIForbidden
-from ..utils.json import json_loads
 from .const import (
@@ -24,7 +25,7 @@ from .const import (
     CONTENT_TYPE_JSON,
     CONTENT_TYPE_URL,
 )
-from .utils import api_process, api_validate
+from .utils import api_process, api_validate, json_loads

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -52,7 +53,10 @@ class APIAuth(CoreSysAttributes):
         return self.sys_auth.check_login(addon, auth.login, auth.password)

     def _process_dict(
-        self, request: web.Request, addon: Addon, data: dict[str, str]
+        self,
+        request: web.Request,
+        addon: Addon,
+        data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
     ) -> Awaitable[bool]:
         """Process login with dict data.
@@ -61,14 +65,22 @@ class APIAuth(CoreSysAttributes):
         username = data.get("username") or data.get("user")
         password = data.get("password")

-        return self.sys_auth.check_login(addon, username, password)
+        # Test that we did receive strings and not something else, raise if so
+        try:
+            _ = username.encode and password.encode  # type: ignore
+        except AttributeError:
+            raise HTTPUnauthorized(headers=REALM_HEADER) from None
+
+        return self.sys_auth.check_login(
+            addon, cast(str, username), cast(str, password)
+        )

     @api_process
     async def auth(self, request: web.Request) -> bool:
         """Process login request."""
         addon = request[REQUEST_FROM]

-        if not addon.access_auth_api:
+        if not isinstance(addon, Addon) or not addon.access_auth_api:
             raise APIForbidden("Can't use Home Assistant auth!")

         # BasicAuth
@@ -80,13 +92,18 @@ class APIAuth(CoreSysAttributes):
         # Json
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
             data = await request.json(loads=json_loads)
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

         # URL encoded
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
             data = await request.post()
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

+        # Advertise Basic authentication by default
         raise HTTPUnauthorized(headers=REALM_HEADER)

     @api_process
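The `.encode` probe above is a cheap way to reject JSON payloads where username or password is not a string (None, a number, a list) before they reach the auth backend: among the expected input types, only `str` has an `.encode` attribute. Roughly, in isolation:

```python
def is_str_like(value: object) -> bool:
    """Mimic the probe: only str exposes .encode among expected inputs."""
    try:
        _ = value.encode  # type: ignore[attr-defined]
    except AttributeError:
        return False
    return True

assert is_str_like("secret")
assert not is_str_like(None)   # missing field
assert not is_str_like(12345)  # JSON number
assert not is_str_like(["a"])  # JSON array
```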
@@ -53,7 +53,6 @@ from ..coresys import CoreSysAttributes
 from ..exceptions import APIError, APIForbidden, APINotFound
 from ..jobs import JobSchedulerOptions, SupervisorJob
 from ..mounts.const import MountUsage
-from ..mounts.mount import Mount
 from ..resolution.const import UnhealthyReason
 from .const import (
     ATTR_ADDITIONAL_LOCATIONS,
@@ -495,7 +494,7 @@ class APIBackups(CoreSysAttributes):
         """Upload a backup file."""
         location: LOCATION_TYPE = None
         locations: list[LOCATION_TYPE] | None = None
-        tmp_path = self.sys_config.path_tmp
+
         if ATTR_LOCATION in request.query:
             location_names: list[str] = request.query.getall(ATTR_LOCATION, [])
             self._validate_cloud_backup_location(
@@ -510,9 +509,6 @@ class APIBackups(CoreSysAttributes):
             ]
             location = locations.pop(0)

-        if location and location != LOCATION_CLOUD_BACKUP:
-            tmp_path = cast(Mount, location).local_where
-
         filename: str | None = None
         if ATTR_FILENAME in request.query:
             filename = request.query.get(ATTR_FILENAME)
@@ -521,13 +517,14 @@ class APIBackups(CoreSysAttributes):
             except vol.Invalid as ex:
                 raise APIError(humanize_error(filename, ex)) from None

+        tmp_path = await self.sys_backups.get_upload_path_for_location(location)
         temp_dir: TemporaryDirectory | None = None
         backup_file_stream: IOBase | None = None

         def open_backup_file() -> Path:
             nonlocal temp_dir, backup_file_stream
             temp_dir = TemporaryDirectory(dir=tmp_path.as_posix())
-            tar_file = Path(temp_dir.name, "backup.tar")
+            tar_file = Path(temp_dir.name, "upload.tar")
             backup_file_stream = tar_file.open("wb")
             return tar_file
@@ -87,4 +87,4 @@ class DetectBlockingIO(StrEnum):

     OFF = "off"
     ON = "on"
-    ON_AT_STARTUP = "on_at_startup"
+    ON_AT_STARTUP = "on-at-startup"
@@ -1,7 +1,7 @@
 """Init file for Supervisor network RESTful API."""

 import logging
-from typing import Any, cast
+from typing import Any

 from aiohttp import web
 import voluptuous as vol
@@ -56,8 +56,8 @@ class APIDiscovery(CoreSysAttributes):
                 }
                 for message in self.sys_discovery.list_messages
                 if (
-                    discovered := cast(
-                        Addon, self.sys_addons.get(message.addon, local_only=True)
+                    discovered := self.sys_addons.get_local_only(
+                        message.addon,
                     )
                 )
                 and discovered.state == AddonState.STARTED
@@ -7,6 +7,7 @@ from aiohttp import web
 import voluptuous as vol

 from ..const import (
+    ATTR_ENABLE_IPV6,
     ATTR_HOSTNAME,
     ATTR_LOGGING,
     ATTR_PASSWORD,
@@ -30,10 +31,39 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
     }
 )

+# pylint: disable=no-value-for-parameter
+SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})
+

 class APIDocker(CoreSysAttributes):
     """Handle RESTful API for Docker configuration."""

+    @api_process
+    async def info(self, request: web.Request):
+        """Get docker info."""
+        data_registries = {}
+        for hostname, registry in self.sys_docker.config.registries.items():
+            data_registries[hostname] = {
+                ATTR_USERNAME: registry[ATTR_USERNAME],
+            }
+        return {
+            ATTR_VERSION: self.sys_docker.info.version,
+            ATTR_ENABLE_IPV6: self.sys_docker.config.enable_ipv6,
+            ATTR_STORAGE: self.sys_docker.info.storage,
+            ATTR_LOGGING: self.sys_docker.info.logging,
+            ATTR_REGISTRIES: data_registries,
+        }
+
+    @api_process
+    async def options(self, request: web.Request) -> None:
+        """Set docker options."""
+        body = await api_validate(SCHEMA_OPTIONS, request)
+
+        if ATTR_ENABLE_IPV6 in body:
+            self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
+
+        await self.sys_docker.config.save_data()
+
     @api_process
     async def registries(self, request) -> dict[str, Any]:
         """Return the list of registries."""
@@ -64,18 +94,3 @@ class APIDocker(CoreSysAttributes):
         del self.sys_docker.config.registries[hostname]
         await self.sys_docker.config.save_data()
-
-    @api_process
-    async def info(self, request: web.Request):
-        """Get docker info."""
-        data_registries = {}
-        for hostname, registry in self.sys_docker.config.registries.items():
-            data_registries[hostname] = {
-                ATTR_USERNAME: registry[ATTR_USERNAME],
-            }
-        return {
-            ATTR_VERSION: self.sys_docker.info.version,
-            ATTR_STORAGE: self.sys_docker.info.storage,
-            ATTR_LOGGING: self.sys_docker.info.logging,
-            ATTR_REGISTRIES: data_registries,
-        }
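The new options handler accepts a single boolean today. Assuming `ATTR_ENABLE_IPV6` serializes as "enable_ipv6" (the usual ATTR_* convention), a request against the `/docker/options` route registered above could look like this sketch (token is a placeholder):

```python
import asyncio
import aiohttp

async def enable_docker_ipv6() -> None:
    async with aiohttp.ClientSession() as session:
        await session.post(
            "http://supervisor/docker/options",
            headers={"Authorization": "Bearer <supervisor-token>"},
            json={"enable_ipv6": True},  # persisted via sys_docker.config.save_data()
        )

asyncio.run(enable_docker_ipv6())
```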
@@ -118,7 +118,7 @@ class APIHomeAssistant(CoreSysAttributes):
         body = await api_validate(SCHEMA_OPTIONS, request)

         if ATTR_IMAGE in body:
-            self.sys_homeassistant.image = body[ATTR_IMAGE]
+            self.sys_homeassistant.set_image(body[ATTR_IMAGE])
             self.sys_homeassistant.override_image = (
                 self.sys_homeassistant.image != self.sys_homeassistant.default_image
             )
@@ -5,7 +5,7 @@ from contextlib import suppress
 import logging
 from typing import Any

-from aiohttp import ClientConnectionResetError, web
+from aiohttp import ClientConnectionResetError, ClientPayloadError, web
 from aiohttp.hdrs import ACCEPT, RANGE
 import voluptuous as vol
 from voluptuous.error import CoerceInvalid
@@ -37,6 +37,7 @@ from ..host.const import (
     LogFormat,
     LogFormatter,
 )
+from ..host.logs import SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX
 from ..utils.systemd_journal import journal_logs_reader
 from .const import (
     ATTR_AGENT_VERSION,
@@ -238,13 +239,11 @@ class APIHost(CoreSysAttributes):
             # return 2 lines at minimum.
             lines = max(2, lines)
             # entries=cursor[[:num_skip]:num_entries]
-            range_header = f"entries=:-{lines - 1}:{'' if follow else lines}"
+            range_header = f"entries=:-{lines - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else lines}"
         elif RANGE in request.headers:
             range_header = request.headers[RANGE]
         else:
-            range_header = (
-                f"entries=:-{DEFAULT_LINES - 1}:{'' if follow else DEFAULT_LINES}"
-            )
+            range_header = f"entries=:-{DEFAULT_LINES - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else DEFAULT_LINES}"

         async with self.sys_host.logs.journald_logs(
             params=params, range_header=range_header, accept=LogFormat.JOURNAL
@@ -270,7 +269,15 @@ class APIHost(CoreSysAttributes):
                         err,
                     )
                     break
-                except ConnectionResetError as ex:
+                except ConnectionError as err:
+                    _LOGGER.warning(
+                        "%s raised when returning journal logs: %s",
+                        type(err).__name__,
+                        err,
+                    )
+                    break
+                except (ConnectionResetError, ClientPayloadError) as ex:
+                    # ClientPayloadError is most likely caused by the closing the connection
                     raise APIError(
                         "Connection reset when trying to fetch data from systemd-journald."
                     ) from ex
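The Range header follows systemd-journal-gatewayd's `entries=cursor[[:num_skip]:num_entries]` syntax; the change caps follow mode at an explicit maximum instead of leaving `num_entries` empty (unbounded). A small illustration, with both constants assumed for the sketch:

```python
SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX = 10_000  # assumed value, see supervisor/host/logs.py
DEFAULT_LINES = 100                          # assumed default

def build_range_header(lines: int, follow: bool) -> str:
    # entries=cursor[[:num_skip]:num_entries]; empty cursor = tail of the journal
    return f"entries=:-{lines - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else lines}"

print(build_range_header(DEFAULT_LINES, follow=False))  # entries=:-99:100
print(build_range_header(DEFAULT_LINES, follow=True))   # entries=:-99:10000
```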
@@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):
 def _init_header(
     request: web.Request, addon: Addon, session_data: IngressSessionData | None
-) -> CIMultiDict | dict[str, str]:
+) -> CIMultiDict[str]:
     """Create initial header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     if session_data is not None:
         headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@@ -337,7 +337,7 @@ def _init_header(
             istr(HEADER_REMOTE_USER_DISPLAY_NAME),
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     # Update X-Forwarded-For
     if request.transport:
@@ -348,9 +348,9 @@ def _init_header(
     return headers

-def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
+def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
     """Create response header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     for name, value in response.headers.items():
         if name in (
@@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
             hdrs.CONTENT_ENCODING,
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     return headers
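Switching from a plain dict to CIMultiDict with `.add()` matters for proxied ingress traffic: HTTP allows repeated headers such as Set-Cookie, and dict assignment silently keeps only the last one. For example:

```python
from multidict import CIMultiDict

headers = CIMultiDict[str]()
headers.add("Set-Cookie", "session=abc")
headers.add("Set-Cookie", "theme=dark")

assert headers.getall("Set-Cookie") == ["session=abc", "theme=dark"]
assert "set-cookie" in headers  # lookups are case-insensitive, as HTTP requires
```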
@@ -20,7 +20,7 @@ from ...const import (
     ROLE_DEFAULT,
     ROLE_HOMEASSISTANT,
     ROLE_MANAGER,
-    CoreState,
+    VALID_API_STATES,
 )
 from ...coresys import CoreSys, CoreSysAttributes
 from ...utils import version_is_new_enough
@@ -200,11 +200,7 @@ class SecurityMiddleware(CoreSysAttributes):
     @middleware
     async def system_validation(self, request: Request, handler: Callable) -> Response:
         """Check if core is ready to response."""
-        if self.sys_core.state not in (
-            CoreState.STARTUP,
-            CoreState.RUNNING,
-            CoreState.FREEZE,
-        ):
+        if self.sys_core.state not in VALID_API_STATES:
             return api_return_error(
                 message=f"System is not ready with state: {self.sys_core.state}"
             )
@@ -10,6 +10,7 @@ import voluptuous as vol

 from ..const import (
     ATTR_ACCESSPOINTS,
+    ATTR_ADDR_GEN_MODE,
     ATTR_ADDRESS,
     ATTR_AUTH,
     ATTR_CONNECTED,
@@ -22,6 +23,7 @@ from ..const import (
     ATTR_ID,
     ATTR_INTERFACE,
     ATTR_INTERFACES,
+    ATTR_IP6_PRIVACY,
     ATTR_IPV4,
     ATTR_IPV6,
     ATTR_MAC,
@@ -38,15 +40,18 @@ from ..const import (
     ATTR_TYPE,
     ATTR_VLAN,
     ATTR_WIFI,
+    DOCKER_IPV4_NETWORK_MASK,
     DOCKER_NETWORK,
-    DOCKER_NETWORK_MASK,
 )
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIError, APINotFound, HostNetworkNotFound
 from ..host.configuration import (
     AccessPoint,
     Interface,
+    InterfaceAddrGenMode,
+    InterfaceIp6Privacy,
     InterfaceMethod,
+    Ip6Setting,
     IpConfig,
     IpSetting,
     VlanConfig,
@@ -68,6 +73,8 @@ _SCHEMA_IPV6_CONFIG = vol.Schema(
     {
         vol.Optional(ATTR_ADDRESS): [vol.Coerce(IPv6Interface)],
         vol.Optional(ATTR_METHOD): vol.Coerce(InterfaceMethod),
+        vol.Optional(ATTR_ADDR_GEN_MODE): vol.Coerce(InterfaceAddrGenMode),
+        vol.Optional(ATTR_IP6_PRIVACY): vol.Coerce(InterfaceIp6Privacy),
         vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv6Address),
         vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv6Address)],
     }
@@ -94,8 +101,8 @@ SCHEMA_UPDATE = vol.Schema(
 )

-def ipconfig_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
-    """Return a dict with information about ip configuration."""
+def ip4config_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
+    """Return a dict with information about IPv4 configuration."""
     return {
         ATTR_METHOD: setting.method,
         ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
@@ -105,6 +112,19 @@ def ip4config_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
     }

+def ip6config_struct(config: IpConfig, setting: Ip6Setting) -> dict[str, Any]:
+    """Return a dict with information about IPv6 configuration."""
+    return {
+        ATTR_METHOD: setting.method,
+        ATTR_ADDR_GEN_MODE: setting.addr_gen_mode,
+        ATTR_IP6_PRIVACY: setting.ip6_privacy,
+        ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
+        ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
+        ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
+        ATTR_READY: config.ready,
+    }
+
 def wifi_struct(config: WifiConfig) -> dict[str, Any]:
     """Return a dict with information about wifi configuration."""
     return {
@@ -132,10 +152,10 @@ def interface_struct(interface: Interface) -> dict[str, Any]:
         ATTR_CONNECTED: interface.connected,
         ATTR_PRIMARY: interface.primary,
         ATTR_MAC: interface.mac,
-        ATTR_IPV4: ipconfig_struct(interface.ipv4, interface.ipv4setting)
+        ATTR_IPV4: ip4config_struct(interface.ipv4, interface.ipv4setting)
         if interface.ipv4 and interface.ipv4setting
         else None,
-        ATTR_IPV6: ipconfig_struct(interface.ipv6, interface.ipv6setting)
+        ATTR_IPV6: ip6config_struct(interface.ipv6, interface.ipv6setting)
         if interface.ipv6 and interface.ipv6setting
         else None,
         ATTR_WIFI: wifi_struct(interface.wifi) if interface.wifi else None,
@@ -183,7 +203,7 @@ class APINetwork(CoreSysAttributes):
             ],
             ATTR_DOCKER: {
                 ATTR_INTERFACE: DOCKER_NETWORK,
-                ATTR_ADDRESS: str(DOCKER_NETWORK_MASK),
+                ATTR_ADDRESS: str(DOCKER_IPV4_NETWORK_MASK),
                 ATTR_GATEWAY: str(self.sys_docker.network.gateway),
                 ATTR_DNS: str(self.sys_docker.network.dns),
             },
@@ -212,25 +232,31 @@ class APINetwork(CoreSysAttributes):
         for key, config in body.items():
             if key == ATTR_IPV4:
                 interface.ipv4setting = IpSetting(
-                    config.get(ATTR_METHOD, InterfaceMethod.STATIC),
-                    config.get(ATTR_ADDRESS, []),
-                    config.get(ATTR_GATEWAY),
-                    config.get(ATTR_NAMESERVERS, []),
+                    method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
+                    address=config.get(ATTR_ADDRESS, []),
+                    gateway=config.get(ATTR_GATEWAY),
+                    nameservers=config.get(ATTR_NAMESERVERS, []),
                 )
             elif key == ATTR_IPV6:
-                interface.ipv6setting = IpSetting(
-                    config.get(ATTR_METHOD, InterfaceMethod.STATIC),
-                    config.get(ATTR_ADDRESS, []),
-                    config.get(ATTR_GATEWAY),
-                    config.get(ATTR_NAMESERVERS, []),
+                interface.ipv6setting = Ip6Setting(
+                    method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
+                    addr_gen_mode=config.get(
+                        ATTR_ADDR_GEN_MODE, InterfaceAddrGenMode.DEFAULT
+                    ),
+                    ip6_privacy=config.get(
+                        ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
+                    ),
+                    address=config.get(ATTR_ADDRESS, []),
+                    gateway=config.get(ATTR_GATEWAY),
+                    nameservers=config.get(ATTR_NAMESERVERS, []),
                 )
             elif key == ATTR_WIFI:
                 interface.wifi = WifiConfig(
-                    config.get(ATTR_MODE, WifiMode.INFRASTRUCTURE),
-                    config.get(ATTR_SSID, ""),
-                    config.get(ATTR_AUTH, AuthMethod.OPEN),
-                    config.get(ATTR_PSK, None),
-                    None,
+                    mode=config.get(ATTR_MODE, WifiMode.INFRASTRUCTURE),
+                    ssid=config.get(ATTR_SSID, ""),
+                    auth=config.get(ATTR_AUTH, AuthMethod.OPEN),
+                    psk=config.get(ATTR_PSK, None),
+                    signal=None,
                 )
             elif key == ATTR_ENABLED:
                 interface.enabled = config
@@ -277,19 +303,25 @@ class APINetwork(CoreSysAttributes):
         ipv4_setting = None
         if ATTR_IPV4 in body:
             ipv4_setting = IpSetting(
-                body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
-                body[ATTR_IPV4].get(ATTR_ADDRESS, []),
-                body[ATTR_IPV4].get(ATTR_GATEWAY, None),
-                body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
+                method=body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
+                address=body[ATTR_IPV4].get(ATTR_ADDRESS, []),
+                gateway=body[ATTR_IPV4].get(ATTR_GATEWAY, None),
+                nameservers=body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
             )

         ipv6_setting = None
         if ATTR_IPV6 in body:
-            ipv6_setting = IpSetting(
-                body[ATTR_IPV6].get(ATTR_METHOD, InterfaceMethod.AUTO),
-                body[ATTR_IPV6].get(ATTR_ADDRESS, []),
-                body[ATTR_IPV6].get(ATTR_GATEWAY, None),
-                body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
+            ipv6_setting = Ip6Setting(
+                method=body[ATTR_IPV6].get(ATTR_METHOD, InterfaceMethod.AUTO),
+                addr_gen_mode=body[ATTR_IPV6].get(
+                    ATTR_ADDR_GEN_MODE, InterfaceAddrGenMode.DEFAULT
+                ),
+                ip6_privacy=body[ATTR_IPV6].get(
+                    ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
+                ),
+                address=body[ATTR_IPV6].get(ATTR_ADDRESS, []),
+                gateway=body[ATTR_IPV6].get(ATTR_GATEWAY, None),
+                nameservers=body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
            )

         vlan_interface = Interface(
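With the new Ip6Setting fields, an interface update payload gains two optional keys. A plausible body for the update endpoint; the key strings and enum values here are illustrative assumptions based on the ATTR_* names, not confirmed by the diff:

```python
payload = {
    "ipv6": {
        "method": "static",
        "addr_gen_mode": "eui64",   # coerced to InterfaceAddrGenMode
        "ip6_privacy": "disabled",  # coerced to InterfaceIp6Privacy
        "address": ["2001:db8::42/64"],
        "gateway": "2001:db8::1",
        "nameservers": ["2001:4860:4860::8888"],
    }
}
```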
@@ -17,6 +17,7 @@ from ..const import (
     ATTR_ICON,
     ATTR_LOGGING,
     ATTR_MACHINE,
+    ATTR_MACHINE_ID,
     ATTR_NAME,
     ATTR_OPERATING_SYSTEM,
     ATTR_STATE,
@@ -48,6 +49,7 @@ class APIRoot(CoreSysAttributes):
             ATTR_OPERATING_SYSTEM: self.sys_host.info.operating_system,
             ATTR_FEATURES: self.sys_host.features,
             ATTR_MACHINE: self.sys_machine,
+            ATTR_MACHINE_ID: self.sys_machine_id,
             ATTR_ARCH: self.sys_arch.default,
             ATTR_STATE: self.sys_core.state,
             ATTR_SUPPORTED_ARCH: self.sys_arch.supported,
@@ -113,3 +115,8 @@ class APIRoot(CoreSysAttributes):
         await asyncio.shield(
             asyncio.gather(self.sys_updater.reload(), self.sys_store.reload())
         )
+
+    @api_process
+    async def reload_updates(self, request: web.Request) -> None:
+        """Refresh updater update information."""
+        await self.sys_updater.reload()
@@ -126,9 +126,7 @@ class APIStore(CoreSysAttributes):
         """Generate addon information."""

         installed = (
-            cast(Addon, self.sys_addons.get(addon.slug, local_only=True))
-            if addon.is_installed
-            else None
+            self.sys_addons.get_local_only(addon.slug) if addon.is_installed else None
         )

         data = {
@@ -17,6 +17,7 @@ from ..const import (
     ATTR_BLK_WRITE,
     ATTR_CHANNEL,
     ATTR_CONTENT_TRUST,
+    ATTR_COUNTRY,
     ATTR_CPU_PERCENT,
     ATTR_DEBUG,
     ATTR_DEBUG_BLOCK,
@@ -48,11 +49,7 @@ from ..const import (
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIError
 from ..store.validate import repositories
-from ..utils.blockbuster import (
-    activate_blockbuster,
-    blockbuster_enabled,
-    deactivate_blockbuster,
-)
+from ..utils.blockbuster import BlockBusterManager
 from ..utils.sentry import close_sentry, init_sentry
 from ..utils.validate import validate_timezone
 from ..validate import version_tag, wait_boot
@@ -76,6 +73,7 @@ SCHEMA_OPTIONS = vol.Schema(
         vol.Optional(ATTR_FORCE_SECURITY): vol.Boolean(),
         vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
         vol.Optional(ATTR_DETECT_BLOCKING_IO): vol.Coerce(DetectBlockingIO),
+        vol.Optional(ATTR_COUNTRY): str,
     }
 )
@@ -108,7 +106,8 @@ class APISupervisor(CoreSysAttributes):
             ATTR_DEBUG_BLOCK: self.sys_config.debug_block,
             ATTR_DIAGNOSTICS: self.sys_config.diagnostics,
             ATTR_AUTO_UPDATE: self.sys_updater.auto_update,
-            ATTR_DETECT_BLOCKING_IO: blockbuster_enabled(),
+            ATTR_DETECT_BLOCKING_IO: BlockBusterManager.is_enabled(),
+            ATTR_COUNTRY: self.sys_config.country,
             # Depricated
             ATTR_WAIT_BOOT: self.sys_config.wait_boot,
             ATTR_ADDONS: [
@@ -147,6 +146,9 @@ class APISupervisor(CoreSysAttributes):
         if ATTR_CHANNEL in body:
             self.sys_updater.channel = body[ATTR_CHANNEL]

+        if ATTR_COUNTRY in body:
+            self.sys_config.country = body[ATTR_COUNTRY]
+
         if ATTR_DEBUG in body:
             self.sys_config.debug = body[ATTR_DEBUG]
@@ -174,10 +176,10 @@ class APISupervisor(CoreSysAttributes):
                 detect_blocking_io = DetectBlockingIO.ON

             if detect_blocking_io == DetectBlockingIO.ON:
-                activate_blockbuster()
+                BlockBusterManager.activate()
             elif detect_blocking_io == DetectBlockingIO.OFF:
                 self.sys_config.detect_blocking_io = False
-                deactivate_blockbuster()
+                BlockBusterManager.deactivate()

         # Deprecated
         if ATTR_WAIT_BOOT in body:
@@ -40,7 +40,7 @@ class CpuArch(CoreSysAttributes):
     @property
     def supervisor(self) -> str:
         """Return supervisor arch."""
-        return self.sys_supervisor.arch
+        return self.sys_supervisor.arch or self._default_arch

     @property
     def supported(self) -> list[str]:
@@ -91,4 +91,14 @@ class CpuArch(CoreSysAttributes):
         for check, value in MAP_CPU.items():
             if cpu.startswith(check):
                 return value
-        return self.sys_supervisor.arch
+        if self.sys_supervisor.arch:
+            _LOGGER.warning(
+                "Unknown CPU architecture %s, falling back to Supervisor architecture.",
+                cpu,
+            )
+            return self.sys_supervisor.arch
+        _LOGGER.warning(
+            "Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
+            cpu,
+        )
+        return cpu
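The detection change splits the old single fallback into two explicit branches with logging. A toy version of the lookup order, with a stand-in MAP_CPU table (the real table lives elsewhere in the module):

```python
MAP_CPU = {"aarch64": "aarch64", "armv7": "armv7", "x86_64": "amd64"}  # illustrative

def detect_arch(cpu: str, supervisor_arch: str | None) -> str:
    for check, value in MAP_CPU.items():
        if cpu.startswith(check):
            return value
    if supervisor_arch:  # unknown CPU: prefer the Supervisor's own arch
        return supervisor_arch
    return cpu           # last resort: assume CPU equals Supervisor arch

assert detect_arch("x86_64", None) == "amd64"
assert detect_arch("riscv64", "amd64") == "amd64"  # first fallback
assert detect_arch("riscv64", None) == "riscv64"   # second fallback
```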
@@ -3,10 +3,10 @@
 import asyncio
 import hashlib
 import logging
-from typing import Any
+from typing import Any, TypedDict, cast

 from .addons.addon import Addon
-from .const import ATTR_ADDON, ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
+from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
 from .coresys import CoreSys, CoreSysAttributes
 from .exceptions import (
     AuthError,
@@ -21,6 +21,17 @@ from .validate import SCHEMA_AUTH_CONFIG
 _LOGGER: logging.Logger = logging.getLogger(__name__)

+class BackendAuthRequest(TypedDict):
+    """Model for a backend auth request.
+
+    https://github.com/home-assistant/core/blob/ed9503324d9d255e6fb077f1614fb6d55800f389/homeassistant/components/hassio/auth.py#L66-L73
+    """
+
+    username: str
+    password: str
+    addon: str
+
 class Auth(FileConfiguration, CoreSysAttributes):
     """Manage SSO for Add-ons with Home Assistant user."""
@@ -74,6 +85,9 @@ class Auth(FileConfiguration, CoreSysAttributes):
         """Check username login."""
         if password is None:
             raise AuthError("None as password is not supported!", _LOGGER.error)
+        if username is None:
+            raise AuthError("None as username is not supported!", _LOGGER.error)
+
         _LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)

         # Get from cache
@@ -103,11 +117,12 @@ class Auth(FileConfiguration, CoreSysAttributes):
             async with self.sys_homeassistant.api.make_request(
                 "post",
                 "api/hassio_auth",
-                json={
-                    ATTR_USERNAME: username,
-                    ATTR_PASSWORD: password,
-                    ATTR_ADDON: addon.slug,
-                },
+                json=cast(
+                    dict[str, Any],
+                    BackendAuthRequest(
+                        username=username, password=password, addon=addon.slug
+                    ),
+                ),
             ) as req:
                 if req.status == 200:
                     _LOGGER.info("Successful login for '%s'", username)
@@ -145,13 +160,21 @@ class Auth(FileConfiguration, CoreSysAttributes):
     async def list_users(self) -> list[dict[str, Any]]:
         """List users on the Home Assistant instance."""
         try:
-            return await self.sys_homeassistant.websocket.async_send_command(
+            users: (
+                list[dict[str, Any]] | None
+            ) = await self.sys_homeassistant.websocket.async_send_command(
                 {ATTR_TYPE: "config/auth/list"}
             )
-        except HomeAssistantWSError:
-            _LOGGER.error("Can't request listing users on Home Assistant!")
+        except HomeAssistantWSError as err:
+            raise AuthListUsersError(
+                f"Can't request listing users on Home Assistant: {err}", _LOGGER.error
+            ) from err

-        raise AuthListUsersError()
+        if users is not None:
+            return users
+        raise AuthListUsersError(
+            "Can't request listing users on Home Assistant!", _LOGGER.error
+        )

     @staticmethod
     def _rehash(value: str, salt2: str = "") -> str:
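On the BackendAuthRequest change: a TypedDict is a plain dict at runtime, but type checkers do not treat it as `dict[str, Any]`, hence the explicit cast when it is passed as a JSON body. A minimal reproduction of the pattern:

```python
from typing import Any, TypedDict, cast

class BackendAuthRequest(TypedDict):
    username: str
    password: str
    addon: str

def post_json(json: dict[str, Any]) -> None:
    """Stand-in for an HTTP client that wants a plain dict."""

payload = BackendAuthRequest(username="u", password="p", addon="core_example")
post_json(cast(dict[str, Any], payload))  # no-op at runtime, satisfies the checker
```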
@@ -18,8 +18,6 @@ import time
 from typing import Any, Self, cast

 from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
-from cryptography.hazmat.backends import default_backend
-from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
 from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
 import voluptuous as vol
 from voluptuous.humanize import humanize_error
@@ -62,9 +60,11 @@ from ..utils.dt import parse_datetime, utcnow
 from ..utils.json import json_bytes
 from ..utils.sentinel import DEFAULT
 from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
-from .utils import key_to_iv, password_to_key
+from .utils import password_to_key
 from .validate import SCHEMA_BACKUP

+IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
+
 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -102,7 +102,6 @@ class Backup(JobGroup):
         self._tmp: TemporaryDirectory | None = None
         self._outer_secure_tarfile: SecureTarFile | None = None
         self._key: bytes | None = None
-        self._aes: Cipher | None = None
         self._locations: dict[str | None, BackupLocation] = {
             location: BackupLocation(
                 path=tar_file,
@@ -244,11 +243,6 @@ class Backup(JobGroup):
         """Return backup size in bytes."""
         return self._locations[self.location].size_bytes

-    @property
-    def is_new(self) -> bool:
-        """Return True if there is new."""
-        return not self.tarfile.exists()
-
     @property
     def tarfile(self) -> Path:
         """Return path to backup tarfile."""
@@ -273,7 +267,7 @@ class Backup(JobGroup):
         # Compare all fields except ones about protection. Current encryption status does not affect equality
         keys = self._data.keys() | other._data.keys()
-        for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}:
+        for k in keys - IGNORED_COMPARISON_FIELDS:
             if (
                 k not in self._data
                 or k not in other._data
@@ -353,16 +347,10 @@ class Backup(JobGroup):
             self._init_password(password)
         else:
             self._key = None
-            self._aes = None

     def _init_password(self, password: str) -> None:
-        """Set password + init aes cipher."""
+        """Create key from password."""
         self._key = password_to_key(password)
-        self._aes = Cipher(
-            algorithms.AES(self._key),
-            modes.CBC(key_to_iv(self._key)),
-            backend=default_backend(),
-        )

     async def validate_backup(self, location: str | None) -> None:
         """Validate backup.
@@ -591,13 +579,21 @@ class Backup(JobGroup):
     @Job(name="backup_addon_save", cleanup=False)
     async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
         """Store an add-on into backup."""
-        self.sys_jobs.current.reference = addon.slug
+        self.sys_jobs.current.reference = slug = addon.slug

         if not self._outer_secure_tarfile:
             raise RuntimeError(
                 "Cannot backup components without initializing backup tar"
             )

-        tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
+        # Ensure it is still installed and get current data before proceeding
+        if not (curr_addon := self.sys_addons.get_local_only(slug)):
+            _LOGGER.warning(
+                "Skipping backup of add-on %s because it has been uninstalled",
+                slug,
+            )
+            return None
+
+        tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"

         addon_file = self._outer_secure_tarfile.create_inner_tar(
             f"./{tar_name}",
@@ -606,18 +602,16 @@ class Backup(JobGroup):
         )
         # Take backup
         try:
-            start_task = await addon.backup(addon_file)
+            start_task = await curr_addon.backup(addon_file)
         except AddonsError as err:
-            raise BackupError(
-                f"Can't create backup for {addon.slug}", _LOGGER.error
-            ) from err
+            raise BackupError(str(err)) from err

         # Store to config
         self._data[ATTR_ADDONS].append(
             {
-                ATTR_SLUG: addon.slug,
-                ATTR_NAME: addon.name,
-                ATTR_VERSION: addon.version,
+                ATTR_SLUG: slug,
+                ATTR_NAME: curr_addon.name,
+                ATTR_VERSION: curr_addon.version,
                 # Bug - addon_file.size used to give us this information
                 # It always returns 0 in current securetar. Skipping until fixed
                 ATTR_SIZE: 0,
@@ -639,8 +633,11 @@ class Backup(JobGroup):
             try:
                 if start_task := await self._addon_save(addon):
                     start_tasks.append(start_task)
-            except Exception as err:  # pylint: disable=broad-except
-                _LOGGER.warning("Can't save Add-on %s: %s", addon.slug, err)
+            except BackupError as err:
+                err = BackupError(
+                    f"Can't backup add-on {addon.slug}: {str(err)}", _LOGGER.error
+                )
+                self.sys_jobs.current.capture_error(err)

         return start_tasks
@@ -769,16 +766,20 @@ class Backup(JobGroup):
             if await self.sys_run_in_executor(_save):
                 self._data[ATTR_FOLDERS].append(name)
         except (tarfile.TarError, OSError, AddFileError) as err:
-            raise BackupError(
-                f"Can't backup folder {name}: {str(err)}", _LOGGER.error
-            ) from err
+            raise BackupError(f"Can't write tarfile: {str(err)}") from err

     @Job(name="backup_store_folders", cleanup=False)
     async def store_folders(self, folder_list: list[str]):
         """Backup Supervisor data into backup."""
         # Save folder sequential avoid issue on slow IO
         for folder in folder_list:
-            await self._folder_save(folder)
+            try:
+                await self._folder_save(folder)
+            except BackupError as err:
+                err = BackupError(
+                    f"Can't backup folder {folder}: {str(err)}", _LOGGER.error
+                )
+                self.sys_jobs.current.capture_error(err)
@@ -930,5 +931,5 @@ class Backup(JobGroup):
         Return a coroutine.
         """
         return self.sys_store.update_repositories(
-            self.repositories, add_with_errors=True, replace=replace
+            set(self.repositories), issue_on_error=True, replace=replace
         )
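A theme across these backup changes: per-item failures (one add-on, one folder) are no longer raised out of the loop but recorded on the running job, so one bad item degrades the backup instead of aborting it. The pattern, reduced to its core (the job object here is a stand-in, not the Supervisor's):

```python
import asyncio

class Job:
    """Stand-in for the Supervisor job that collects non-fatal errors."""

    def __init__(self) -> None:
        self.errors: list[Exception] = []

    def capture_error(self, err: Exception) -> None:
        self.errors.append(err)

async def save_item(name: str) -> None:
    if name == "broken":
        raise RuntimeError(f"can't back up {name}")

async def store_all(job: Job, items: list[str]) -> None:
    for item in items:
        try:
            await save_item(item)
        except RuntimeError as err:
            job.capture_error(err)  # record and continue with the next item

job = Job()
asyncio.run(store_all(job, ["core_ssh", "broken", "local_example"]))
assert len(job.errors) == 1  # partial backup, not a hard failure
```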
@@ -122,6 +122,25 @@ class BackupManager(FileConfiguration, JobGroup):
         return self.sys_config.path_backup

+    async def get_upload_path_for_location(self, location: LOCATION_TYPE) -> Path:
+        """Get a path (temporary) upload path for a backup location."""
+        target_path = self._get_base_path(location)
+
+        # Return target path for mounts since tmp will always be local, mounts
+        # will never be the same device.
+        if location is not None and location != LOCATION_CLOUD_BACKUP:
+            return target_path
+
+        tmp_path = self.sys_config.path_tmp
+
+        def check_same_mount() -> bool:
+            """Check if the target path is on the same mount as the backup location."""
+            return target_path.stat().st_dev == tmp_path.stat().st_dev
+
+        if await self.sys_run_in_executor(check_same_mount):
+            return tmp_path
+        return target_path
+
     async def _check_location(self, location: LOCATION_TYPE | type[DEFAULT] = DEFAULT):
         """Check if backup location is accessible."""
         if location == DEFAULT and self.sys_mounts.default_backup_mount:
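The device check above is what decides whether an upload can stream into tmp and still be moved into place cheaply: two paths share `st_dev` only when they live on the same filesystem, making the final move an atomic rename rather than a second cross-device copy. In isolation:

```python
from pathlib import Path

def same_filesystem(a: Path, b: Path) -> bool:
    """True when both paths live on the same mounted filesystem."""
    return a.stat().st_dev == b.stat().st_dev

# e.g. False on systems where /tmp is a tmpfs mount separate from /
print(same_filesystem(Path("/tmp"), Path("/")))
```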
@@ -359,66 +378,69 @@ class BackupManager(FileConfiguration, JobGroup):
         if not backup.all_locations:
             del self._backups[backup.slug]

+    @Job(name="backup_copy_to_location", cleanup=False)
+    async def _copy_to_location(
+        self, backup: Backup, location: LOCATION_TYPE
+    ) -> tuple[str | None, Path]:
+        """Copy a backup file to the default location."""
+        location_name = location.name if isinstance(location, Mount) else location
+        self.sys_jobs.current.reference = location_name
+        try:
+            if location == LOCATION_CLOUD_BACKUP:
+                destination = self.sys_config.path_core_backup
+            elif location:
+                location_mount = cast(Mount, location)
+                if not location_mount.local_where.is_mount():
+                    raise BackupMountDownError(
+                        f"{location_mount.name} is down, cannot copy to it",
+                        _LOGGER.error,
+                    )
+                destination = location_mount.local_where
+            else:
+                destination = self.sys_config.path_backup
+
+            path = await self.sys_run_in_executor(copy, backup.tarfile, destination)
+            return (location_name, Path(path))
+        except OSError as err:
+            msg = f"Could not copy backup to {location_name} due to: {err!s}"
+            if err.errno == errno.EBADMSG and location in {
+                LOCATION_CLOUD_BACKUP,
+                None,
+            }:
+                raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
+            raise BackupError(msg, _LOGGER.error) from err
+
+    @Job(name="backup_copy_to_additional_locations", cleanup=False)
     async def _copy_to_additional_locations(
         self,
         backup: Backup,
         locations: list[LOCATION_TYPE],
     ):
         """Copy a backup file to additional locations."""
         all_new_locations: dict[str | None, Path] = {}
+        for location in locations:
+            try:
+                location_name, path = await self._copy_to_location(backup, location)
+                all_new_locations[location_name] = path
+            except BackupDataDiskBadMessageError as err:
+                self.sys_resolution.add_unhealthy_reason(
+                    UnhealthyReason.OSERROR_BAD_MESSAGE
+                )
+                self.sys_jobs.current.capture_error(err)
+            except BackupError as err:
+                self.sys_jobs.current.capture_error(err)

-        def copy_to_additional_locations() -> None:
-            """Copy backup file to additional locations."""
-            nonlocal all_new_locations
-            for location in locations:
-                try:
-                    if location == LOCATION_CLOUD_BACKUP:
-                        all_new_locations[LOCATION_CLOUD_BACKUP] = Path(
-                            copy(backup.tarfile, self.sys_config.path_core_backup)
-                        )
-                    elif location:
-                        location_mount = cast(Mount, location)
-                        if not location_mount.local_where.is_mount():
-                            raise BackupMountDownError(
-                                f"{location_mount.name} is down, cannot copy to it",
-                                _LOGGER.error,
-                            )
-                        all_new_locations[location_mount.name] = Path(
-                            copy(backup.tarfile, location_mount.local_where)
-                        )
-                    else:
-                        all_new_locations[None] = Path(
-                            copy(backup.tarfile, self.sys_config.path_backup)
-                        )
-                except OSError as err:
-                    msg = f"Could not copy backup to {location.name if isinstance(location, Mount) else location} due to: {err!s}"
-                    if err.errno == errno.EBADMSG and location in {
-                        LOCATION_CLOUD_BACKUP,
-                        None,
-                    }:
-                        raise BackupDataDiskBadMessageError(msg, _LOGGER.error) from err
-                    raise BackupError(msg, _LOGGER.error) from err
-
-        try:
-            await self.sys_run_in_executor(copy_to_additional_locations)
-        except BackupDataDiskBadMessageError:
-            self.sys_resolution.add_unhealthy_reason(
-                UnhealthyReason.OSERROR_BAD_MESSAGE
-            )
-            raise
-        finally:
-            backup.all_locations.update(
-                {
-                    loc: BackupLocation(
-                        path=path,
-                        protected=backup.protected,
-                        size_bytes=backup.size_bytes,
-                    )
-                    for loc, path in all_new_locations.items()
-                }
-            )
+        backup.all_locations.update(
+            {
+                loc: BackupLocation(
+                    path=path,
+                    protected=backup.protected,
+                    size_bytes=backup.size_bytes,
+                )
+                for loc, path in all_new_locations.items()
+            }
+        )

 @Job(name="backup_manager_import_backup")
 async def import_backup(
@@ -499,7 +521,8 @@ class BackupManager(FileConfiguration, JobGroup):
     ) -> Backup | None:
         """Create a backup.

-        Must be called from an existing backup job.
+        Must be called from an existing backup job. If the backup failed, the
+        backup file is being deleted and None is returned.
         """
         addon_start_tasks: list[Awaitable[None]] | None = None
@@ -529,9 +552,12 @@ class BackupManager(FileConfiguration, JobGroup):
             self._change_stage(BackupJobStage.FINISHING_FILE, backup)

         except BackupError as err:
+            await self.sys_run_in_executor(backup.tarfile.unlink, missing_ok=True)
+            _LOGGER.error("Backup %s error: %s", backup.slug, err)
             self.sys_jobs.current.capture_error(err)
             return None
         except Exception as err:  # pylint: disable=broad-except
+            await self.sys_run_in_executor(backup.tarfile.unlink, missing_ok=True)
             _LOGGER.exception("Backup %s error", backup.slug)
             await async_capture_exception(err)
             self.sys_jobs.current.capture_error(
@@ -543,12 +569,7 @@ class BackupManager(FileConfiguration, JobGroup):
             if additional_locations:
                 self._change_stage(BackupJobStage.COPY_ADDITONAL_LOCATIONS, backup)
-                try:
-                    await self._copy_to_additional_locations(
-                        backup, additional_locations
-                    )
-                except BackupError as err:
-                    self.sys_jobs.current.capture_error(err)
+                await self._copy_to_additional_locations(backup, additional_locations)

             if addon_start_tasks:
                 self._change_stage(BackupJobStage.AWAIT_ADDON_RESTARTS, backup)
@@ -1,6 +1,7 @@
 """Bootstrap Supervisor."""
 
 # ruff: noqa: T100
+import asyncio
 from importlib import import_module
 import logging
 import os
@@ -54,6 +55,14 @@ async def initialize_coresys() -> CoreSys:
     """Initialize supervisor coresys/objects."""
     coresys = await CoreSys().load_config()
 
+    # Check if ENV is in development mode
+    if coresys.dev:
+        _LOGGER.warning("Environment variable 'SUPERVISOR_DEV' is set")
+        coresys.config.logging = LogLevel.DEBUG
+        coresys.config.debug = True
+    else:
+        coresys.config.modify_log_level()
+
     # Initialize core objects
     coresys.docker = await DockerAPI(coresys).post_init()
     coresys.resolution = await ResolutionManager(coresys).load_config()
@@ -70,7 +79,7 @@ async def initialize_coresys() -> CoreSys:
     coresys.addons = await AddonManager(coresys).load_config()
     coresys.backups = await BackupManager(coresys).load_config()
     coresys.host = await HostManager(coresys).post_init()
-    coresys.hardware = await HardwareManager(coresys).post_init()
+    coresys.hardware = await HardwareManager.create(coresys)
     coresys.ingress = await Ingress(coresys).load_config()
     coresys.tasks = Tasks(coresys)
     coresys.services = await ServiceManager(coresys).load_config()
@@ -93,15 +102,9 @@ async def initialize_coresys() -> CoreSys:
     # bootstrap config
     initialize_system(coresys)
 
-    # Check if ENV is in development mode
     if coresys.dev:
-        _LOGGER.warning("Environment variable 'SUPERVISOR_DEV' is set")
-        coresys.config.logging = LogLevel.DEBUG
-        coresys.config.debug = True
         coresys.updater.channel = UpdateChannel.DEV
         coresys.security.content_trust = False
-    else:
-        coresys.config.modify_log_level()
 
     # Convert datetime
     logging.Formatter.converter = lambda *args: coresys.now().timetuple()
@@ -282,8 +285,8 @@ def check_environment() -> None:
         _LOGGER.critical("Can't find Docker socket!")
 
 
-def reg_signal(loop, coresys: CoreSys) -> None:
-    """Register SIGTERM and SIGKILL to stop system."""
+def register_signal_handlers(loop: asyncio.AbstractEventLoop, coresys: CoreSys) -> None:
+    """Register SIGTERM, SIGHUP and SIGKILL to stop the Supervisor."""
     try:
         loop.add_signal_handler(
             signal.SIGTERM, lambda: loop.create_task(coresys.core.stop())
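
register_signal_handlers builds on the standard asyncio API used above: add_signal_handler() takes a plain callable, so the coroutine is scheduled via loop.create_task(). A self-contained sketch of the same wiring (handler names are illustrative; SIGKILL itself can never actually be trapped):

    import asyncio
    import signal


    async def stop() -> None:
        print("shutting down")
        asyncio.get_running_loop().stop()


    def register_signal_handlers(loop: asyncio.AbstractEventLoop) -> None:
        # Unix-only API; raises NotImplementedError on Windows event loops.
        for sig in (signal.SIGTERM, signal.SIGHUP, signal.SIGINT):
            loop.add_signal_handler(sig, lambda: loop.create_task(stop()))


    loop = asyncio.new_event_loop()
    register_signal_handlers(loop)
    loop.run_forever()  # send SIGTERM/SIGHUP/SIGINT to trigger the handler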

View File

@@ -2,7 +2,7 @@
 
 from __future__ import annotations
 
-from collections.abc import Awaitable, Callable
+from collections.abc import Callable, Coroutine
 import logging
 from typing import Any
 
@@ -19,7 +19,7 @@ class EventListener:
     """Event listener."""
 
     event_type: BusEvent = attr.ib()
-    callback: Callable[[Any], Awaitable[None]] = attr.ib()
+    callback: Callable[[Any], Coroutine[Any, Any, None]] = attr.ib()
 
 
 class Bus(CoreSysAttributes):
@@ -31,7 +31,7 @@ class Bus(CoreSysAttributes):
         self._listeners: dict[BusEvent, list[EventListener]] = {}
 
     def register_event(
-        self, event: BusEvent, callback: Callable[[Any], Awaitable[None]]
+        self, event: BusEvent, callback: Callable[[Any], Coroutine[Any, Any, None]]
     ) -> EventListener:
         """Register callback for an event."""
         listener = EventListener(event, callback)
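
The Awaitable-to-Coroutine change tightens the callback type: Callable[[Any], Coroutine[Any, Any, None]] is precisely the type of an async def function, and a coroutine object is what create_task() requires. A small typed sketch:

    import asyncio
    from collections.abc import Callable, Coroutine
    from typing import Any


    async def on_event(event: Any) -> None:
        print("got", event)


    def fire(
        callback: Callable[[Any], Coroutine[Any, Any, None]], event: Any
    ) -> None:
        # callback(event) is a coroutine object, valid input for create_task()
        asyncio.get_running_loop().create_task(callback(event))


    async def main() -> None:
        fire(on_event, {"state": "running"})
        await asyncio.sleep(0)  # give the scheduled task a chance to run


    asyncio.run(main())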

View File

@@ -10,6 +10,7 @@ from awesomeversion import AwesomeVersion
 
 from .const import (
     ATTR_ADDONS_CUSTOM_LIST,
+    ATTR_COUNTRY,
     ATTR_DEBUG,
     ATTR_DEBUG_BLOCK,
     ATTR_DETECT_BLOCKING_IO,
@@ -65,7 +66,7 @@ _UTC = "UTC"
 class CoreConfig(FileConfiguration):
     """Hold all core config data."""
 
-    def __init__(self):
+    def __init__(self) -> None:
         """Initialize config object."""
         super().__init__(FILE_HASSIO_CONFIG, SCHEMA_SUPERVISOR_CONFIG)
         self._timezone_tzinfo: tzinfo | None = None
@@ -93,6 +94,20 @@ class CoreConfig(FileConfiguration):
             None, get_time_zone, value
         )
 
+    @property
+    def country(self) -> str | None:
+        """Return supervisor country.
+
+        The format follows what Home Assistant Core provides, which today is
+        ISO 3166-1 alpha-2.
+        """
+        return self._data.get(ATTR_COUNTRY)
+
+    @country.setter
+    def country(self, value: str | None) -> None:
+        """Set supervisor country."""
+        self._data[ATTR_COUNTRY] = value
+
     @property
     def version(self) -> AwesomeVersion:
         """Return supervisor version."""

View File

@@ -2,16 +2,20 @@
 
 from dataclasses import dataclass
 from enum import StrEnum
-from ipaddress import ip_network
+from ipaddress import IPv4Network, IPv6Network
 from pathlib import Path
 from sys import version_info as systemversion
-from typing import Self
+from typing import NotRequired, Self, TypedDict
 
 from aiohttp import __version__ as aiohttpversion
 
 SUPERVISOR_VERSION = "9999.09.9.dev9999"
 SERVER_SOFTWARE = f"HomeAssistantSupervisor/{SUPERVISOR_VERSION} aiohttp/{aiohttpversion} Python/{systemversion[0]}.{systemversion[1]}"
 
+DOCKER_PREFIX: str = "hassio"
+OBSERVER_DOCKER_NAME: str = f"{DOCKER_PREFIX}_observer"
+SUPERVISOR_DOCKER_NAME: str = f"{DOCKER_PREFIX}_supervisor"
+
 URL_HASSIO_ADDONS = "https://github.com/home-assistant/addons"
 URL_HASSIO_APPARMOR = "https://version.home-assistant.io/apparmor_{channel}.txt"
 URL_HASSIO_VERSION = "https://version.home-assistant.io/{channel}.json"
@@ -41,8 +45,10 @@ SYSTEMD_JOURNAL_PERSISTENT = Path("/var/log/journal")
 SYSTEMD_JOURNAL_VOLATILE = Path("/run/log/journal")
 
 DOCKER_NETWORK = "hassio"
-DOCKER_NETWORK_MASK = ip_network("172.30.32.0/23")
-DOCKER_NETWORK_RANGE = ip_network("172.30.33.0/24")
+DOCKER_NETWORK_DRIVER = "bridge"
+DOCKER_IPV6_NETWORK_MASK = IPv6Network("fd0c:ac1e:2100::/48")
+DOCKER_IPV4_NETWORK_MASK = IPv4Network("172.30.32.0/23")
+DOCKER_IPV4_NETWORK_RANGE = IPv4Network("172.30.33.0/24")
 
 # This needs to match the dockerd --cpu-rt-runtime= argument.
 DOCKER_CPU_RUNTIME_TOTAL = 950_000
@@ -97,6 +103,7 @@ ATTR_ADDON = "addon"
 ATTR_ADDONS = "addons"
 ATTR_ADDONS_CUSTOM_LIST = "addons_custom_list"
 ATTR_ADDONS_REPOSITORIES = "addons_repositories"
+ATTR_ADDR_GEN_MODE = "addr_gen_mode"
 ATTR_ADDRESS = "address"
 ATTR_ADDRESS_DATA = "address-data"
 ATTR_ADMIN = "admin"
@@ -140,6 +147,7 @@ ATTR_CONNECTIONS = "connections"
 ATTR_CONTAINERS = "containers"
 ATTR_CONTENT = "content"
 ATTR_CONTENT_TRUST = "content_trust"
+ATTR_COUNTRY = "country"
 ATTR_CPE = "cpe"
 ATTR_CPU_PERCENT = "cpu_percent"
 ATTR_CRYPTO = "crypto"
@@ -170,6 +178,7 @@ ATTR_DOCKER_API = "docker_api"
 ATTR_DOCUMENTATION = "documentation"
 ATTR_DOMAINS = "domains"
 ATTR_ENABLE = "enable"
+ATTR_ENABLE_IPV6 = "enable_ipv6"
 ATTR_ENABLED = "enabled"
 ATTR_ENVIRONMENT = "environment"
 ATTR_EVENT = "event"
@@ -179,6 +188,7 @@ ATTR_FEATURES = "features"
 ATTR_FILENAME = "filename"
 ATTR_FLAGS = "flags"
 ATTR_FOLDERS = "folders"
+ATTR_FORCE = "force"
 ATTR_FORCE_SECURITY = "force_security"
 ATTR_FREQUENCY = "frequency"
 ATTR_FULL_ACCESS = "full_access"
@@ -219,6 +229,7 @@ ATTR_INSTALLED = "installed"
 ATTR_INTERFACE = "interface"
 ATTR_INTERFACES = "interfaces"
 ATTR_IP_ADDRESS = "ip_address"
+ATTR_IP6_PRIVACY = "ip6_privacy"
 ATTR_IPV4 = "ipv4"
 ATTR_IPV6 = "ipv6"
 ATTR_ISSUES = "issues"
@@ -236,6 +247,7 @@ ATTR_LOGO = "logo"
 ATTR_LONG_DESCRIPTION = "long_description"
 ATTR_MAC = "mac"
 ATTR_MACHINE = "machine"
+ATTR_MACHINE_ID = "machine_id"
 ATTR_MAINTAINER = "maintainer"
 ATTR_MAP = "map"
 ATTR_MEMORY_LIMIT = "memory_limit"
@@ -404,10 +416,12 @@ class AddonBoot(StrEnum):
     MANUAL = "manual"
 
     @classmethod
-    def _missing_(cls, value: str) -> Self | None:
+    def _missing_(cls, value: object) -> Self | None:
         """Convert 'forced' config values to their counterpart."""
         if value == AddonBootConfig.MANUAL_ONLY:
-            return AddonBoot.MANUAL
+            for member in cls:
+                if member == AddonBoot.MANUAL:
+                    return member
         return None
@@ -504,6 +518,16 @@ class CpuArch(StrEnum):
     AMD64 = "amd64"
 
 
+class IngressSessionDataUserDict(TypedDict):
+    """Response object for ingress session user."""
+
+    id: str
+    username: NotRequired[str | None]
+    # Name is an alias for displayname, only one should be used
+    displayname: NotRequired[str | None]
+    name: NotRequired[str | None]
+
+
 @dataclass
 class IngressSessionDataUser:
     """Format of an IngressSessionDataUser object."""
@@ -512,38 +536,42 @@ class IngressSessionDataUser:
     display_name: str | None = None
     username: str | None = None
 
-    def to_dict(self) -> dict[str, str | None]:
+    def to_dict(self) -> IngressSessionDataUserDict:
         """Get dictionary representation."""
-        return {
-            ATTR_ID: self.id,
-            ATTR_DISPLAYNAME: self.display_name,
-            ATTR_USERNAME: self.username,
-        }
+        return IngressSessionDataUserDict(
+            id=self.id, displayname=self.display_name, username=self.username
+        )
 
     @classmethod
-    def from_dict(cls, data: dict[str, str | None]) -> Self:
+    def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
         """Return object from dictionary representation."""
         return cls(
-            id=data[ATTR_ID],
-            display_name=data.get(ATTR_DISPLAYNAME),
-            username=data.get(ATTR_USERNAME),
+            id=data["id"],
+            display_name=data.get("displayname") or data.get("name"),
+            username=data.get("username"),
         )
 
 
+class IngressSessionDataDict(TypedDict):
+    """Response object for ingress session data."""
+
+    user: IngressSessionDataUserDict
+
+
 @dataclass
 class IngressSessionData:
     """Format of an IngressSessionData object."""
 
     user: IngressSessionDataUser
 
-    def to_dict(self) -> dict[str, dict[str, str | None]]:
+    def to_dict(self) -> IngressSessionDataDict:
         """Get dictionary representation."""
-        return {ATTR_USER: self.user.to_dict()}
+        return IngressSessionDataDict(user=self.user.to_dict())
 
     @classmethod
-    def from_dict(cls, data: dict[str, dict[str, str | None]]) -> Self:
+    def from_dict(cls, data: IngressSessionDataDict) -> Self:
         """Return object from dictionary representation."""
-        return cls(user=IngressSessionDataUser.from_dict(data[ATTR_USER]))
+        return cls(user=IngressSessionDataUser.from_dict(data["user"]))
@@ -551,3 +579,12 @@ STARTING_STATES = [
     CoreState.STARTUP,
     CoreState.SETUP,
 ]
+
+# States in which the API can be used (enforced by system_validation())
+VALID_API_STATES = frozenset(
+    {
+        CoreState.STARTUP,
+        CoreState.RUNNING,
+        CoreState.FREEZE,
+    }
+)
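
The AddonBoot._missing_ hunk relies on the enum _missing_ hook: constructing the enum with a value that is not a member falls through to _missing_, which may hand back an existing member. A toy reproduction with shortened names:

    from enum import StrEnum
    from typing import Self


    class BootConfig(StrEnum):
        AUTO = "auto"
        MANUAL = "manual"
        MANUAL_ONLY = "manual_only"


    class Boot(StrEnum):
        AUTO = "auto"
        MANUAL = "manual"

        @classmethod
        def _missing_(cls, value: object) -> Self | None:
            # Map the "forced" config-only value onto the plain member
            if value == BootConfig.MANUAL_ONLY:
                for member in cls:
                    if member == Boot.MANUAL:
                        return member
            return None


    print(repr(Boot("manual_only")))  # <Boot.MANUAL: 'manual'>
    print(repr(Boot("manual")))       # regular lookup, _missing_ not called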

View File

@@ -28,7 +28,7 @@ from .homeassistant.core import LANDINGPAGE
 from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from .utils.dt import utcnow
 from .utils.sentry import async_capture_exception
-from .utils.whoami import WhoamiData, retrieve_whoami
+from .utils.whoami import retrieve_whoami
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
 
@@ -36,7 +36,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class Core(CoreSysAttributes):
     """Main object of Supervisor."""
 
-    def __init__(self, coresys: CoreSys):
+    def __init__(self, coresys: CoreSys) -> None:
         """Initialize Supervisor object."""
         self.coresys: CoreSys = coresys
         self._state: CoreState = CoreState.INITIALIZE
@@ -91,7 +91,7 @@ class Core(CoreSysAttributes):
             "info", {"state": self._state}
         )
 
-    async def connect(self):
+    async def connect(self) -> None:
         """Connect Supervisor container."""
         # Load information from container
         await self.sys_supervisor.load()
@@ -120,10 +120,23 @@ class Core(CoreSysAttributes):
             self.sys_config.version = self.sys_supervisor.version
             await self.sys_config.save_data()
 
-    async def setup(self):
+    async def setup(self) -> None:
         """Start setting up supervisor orchestration."""
         await self.set_state(CoreState.SETUP)
 
+        # Initialize websession early. At this point we'll use the Docker DNS proxy
+        # at 127.0.0.11, which does not have the fallback feature and hence might
+        # fail in certain environments. But a websession is required to get the
+        # initial version information after a device wipe or otherwise empty state
+        # (e.g. CI environment, Supervised).
+        #
+        # An OS installation has the plug-in container images pre-installed, so
+        # setup can continue even if this early websession fails to connect to the
+        # internet. We'll reinitialize the websession when the DNS plug-in is up to
+        # make sure the DNS plug-in along with its fallback capabilities is used
+        # (see #5857).
+        await self.coresys.init_websession()
+
         # Check internet on startup
         await self.sys_supervisor.check_connectivity()
 
@@ -175,7 +188,10 @@ class Core(CoreSysAttributes):
                 await setup_task
             except Exception as err:  # pylint: disable=broad-except
                 _LOGGER.critical(
-                    "Fatal error happening on load Task %s: %s", setup_task, err
+                    "Fatal error happening on load Task %s: %s",
+                    setup_task,
+                    err,
+                    exc_info=True,
                 )
                 self.sys_resolution.add_unhealthy_reason(UnhealthyReason.SETUP)
                 await async_capture_exception(err)
@@ -200,7 +216,7 @@ class Core(CoreSysAttributes):
         # Evaluate the system
         await self.sys_resolution.evaluate.evaluate_system()
 
-    async def start(self):
+    async def start(self) -> None:
         """Start Supervisor orchestration."""
         await self.set_state(CoreState.STARTUP)
 
@@ -224,10 +240,10 @@ class Core(CoreSysAttributes):
                     await self.sys_supervisor.update()
                     return
 
-        # Start addon mark as initialize
-        await self.sys_addons.boot(AddonStartup.INITIALIZE)
-
         try:
+            # Start addon mark as initialize
+            await self.sys_addons.boot(AddonStartup.INITIALIZE)
+
             # HomeAssistant is already running, only Supervisor restarted
             if await self.sys_hardware.helper.last_boot() == self.sys_config.last_boot:
                 _LOGGER.info("Detected Supervisor restart")
@@ -294,7 +310,7 @@ class Core(CoreSysAttributes):
         )
         _LOGGER.info("Supervisor is up and running")
 
-    async def stop(self):
+    async def stop(self) -> None:
         """Stop a running orchestration."""
         # store new last boot / prevent time adjustments
         if self.state in (CoreState.RUNNING, CoreState.SHUTDOWN):
@@ -342,7 +358,7 @@ class Core(CoreSysAttributes):
         _LOGGER.info("Supervisor is down - %d", self.exit_code)
         self.sys_loop.stop()
 
-    async def shutdown(self, *, remove_homeassistant_container: bool = False):
+    async def shutdown(self, *, remove_homeassistant_container: bool = False) -> None:
         """Shutdown all running containers in correct order."""
         # don't process scheduler anymore
         if self.state == CoreState.RUNNING:
@@ -366,19 +382,15 @@ class Core(CoreSysAttributes):
         if self.state in (CoreState.STOPPING, CoreState.SHUTDOWN):
             await self.sys_plugins.shutdown()
 
-    async def _update_last_boot(self):
+    async def _update_last_boot(self) -> None:
         """Update last boot time."""
-        self.sys_config.last_boot = await self.sys_hardware.helper.last_boot()
+        if not (last_boot := await self.sys_hardware.helper.last_boot()):
+            _LOGGER.error("Could not update last boot information!")
+            return
+        self.sys_config.last_boot = last_boot
         await self.sys_config.save_data()
 
-    async def _retrieve_whoami(self, with_ssl: bool) -> WhoamiData | None:
-        try:
-            return await retrieve_whoami(self.sys_websession, with_ssl)
-        except WhoamiSSLError:
-            _LOGGER.info("Whoami service SSL error")
-        return None
-
-    async def _adjust_system_datetime(self):
+    async def _adjust_system_datetime(self) -> None:
         """Adjust system time/date on startup."""
         # If no timezone is detect or set
         # If we are not connected or time sync
@@ -390,11 +402,13 @@ class Core(CoreSysAttributes):
 
         # Get Timezone data
         try:
-            data = await self._retrieve_whoami(True)
-
-            # SSL Date Issue & possible time drift
-            if not data:
-                data = await self._retrieve_whoami(False)
+            try:
+                data = await retrieve_whoami(self.sys_websession, True)
+            except WhoamiSSLError:
+                # SSL Date Issue & possible time drift
+                _LOGGER.info("Whoami service SSL error")
+                data = await retrieve_whoami(self.sys_websession, False)
         except WhoamiError as err:
             _LOGGER.warning("Can't adjust Time/Date settings: %s", err)
             return
@@ -410,7 +424,7 @@ class Core(CoreSysAttributes):
             await self.sys_host.control.set_datetime(data.dt_utc)
             await self.sys_supervisor.check_connectivity()
 
-    async def repair(self):
+    async def repair(self) -> None:
         """Repair system integrity."""
         _LOGGER.info("Starting repair of Supervisor Environment")
         await self.sys_run_in_executor(self.sys_docker.repair)
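
The _adjust_system_datetime rewrite inlines the SSL fallback as a nested try/except: try the TLS variant first, retry once without it on the specific SSL error, and let any other whoami failure reach the outer handler. The generic shape, with stand-in names rather than supervisor.utils.whoami:

    class WhoamiError(Exception):
        pass


    class WhoamiSSLError(WhoamiError):
        pass


    def retrieve_whoami(with_ssl: bool) -> dict:
        if with_ssl:
            # e.g. certificate validation fails because the clock is wrong
            raise WhoamiSSLError("SSL date issue")
        return {"dt_utc": "2025-07-23T12:00:00+00:00"}


    try:
        try:
            data = retrieve_whoami(True)
        except WhoamiSSLError:
            # An SSL failure often means time drift, so retry without SSL
            data = retrieve_whoami(False)
    except WhoamiError as err:
        print(f"Can't adjust Time/Date settings: {err}")
    else:
        print(data)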

View File

@@ -13,6 +13,7 @@ from types import MappingProxyType
 from typing import TYPE_CHECKING, Any, Self, TypeVar
 
 import aiohttp
+from pycares import AresError
 
 from .config import CoreConfig
 from .const import (
@@ -21,6 +22,7 @@ from .const import (
     ENV_SUPERVISOR_MACHINE,
     MACHINE_ID,
     SERVER_SOFTWARE,
+    VALID_API_STATES,
 )
 
 if TYPE_CHECKING:
@@ -60,18 +62,17 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class CoreSys:
     """Class that handle all shared data."""
 
-    def __init__(self):
+    def __init__(self) -> None:
         """Initialize coresys."""
         # Static attributes protected
         self._machine_id: str | None = None
         self._machine: str | None = None
 
         # External objects
-        self._loop: asyncio.BaseEventLoop = asyncio.get_running_loop()
-        self._websession = None
+        self._loop = asyncio.get_running_loop()
 
         # Global objects
-        self._config: CoreConfig = CoreConfig()
+        self._config = CoreConfig()
 
         # Internal objects pointers
         self._docker: DockerAPI | None = None
@@ -100,9 +101,7 @@ class CoreSys:
         self._security: Security | None = None
         self._bus: Bus | None = None
         self._mounts: MountManager | None = None
-
-        # Setup aiohttp session
-        self.create_websession()
+        self._websession: aiohttp.ClientSession | None = None
 
         # Task factory attributes
         self._set_task_context: list[Callable[[Context], Context]] = []
@@ -112,7 +111,44 @@ class CoreSys:
         await self.config.read_data()
         return self
 
-    async def init_machine(self):
+    async def init_websession(self) -> None:
+        """Initialize global aiohttp ClientSession."""
+        if self.core.state in VALID_API_STATES:
+            # Make sure we don't reinitialize the session if the API is running (see #5851)
+            raise RuntimeError(
+                "Initializing ClientSession is not safe when API is running"
+            )
+
+        if self._websession:
+            await self._websession.close()
+
+        resolver: aiohttp.abc.AbstractResolver
+        try:
+            # Use "unused" kwargs to force dedicated resolver instance. Otherwise
+            # aiodns won't reload /etc/resolv.conf which we need to make our connection
+            # check work in all cases.
+            resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
+            # pylint: disable=protected-access
+            _LOGGER.debug(
+                "Initializing ClientSession with AsyncResolver. Using nameservers %s",
+                resolver._resolver.nameservers,
+            )
+        except AresError as err:
+            _LOGGER.critical(
+                "Unable to initialize async DNS resolver: %s", err, exc_info=True
+            )
+            resolver = aiohttp.ThreadedResolver(loop=self.loop)
+
+        connector = aiohttp.TCPConnector(loop=self.loop, resolver=resolver)
+        session = aiohttp.ClientSession(
+            headers=MappingProxyType({aiohttp.hdrs.USER_AGENT: SERVER_SOFTWARE}),
+            connector=connector,
+        )
+
+        self._websession = session
+
+    async def init_machine(self) -> None:
         """Initialize machine information."""
 
         def _load_machine_id() -> str | None:
@@ -135,7 +171,7 @@ class CoreSys:
     @property
     def dev(self) -> bool:
         """Return True if we run dev mode."""
-        return bool(os.environ.get(ENV_SUPERVISOR_DEV, 0))
+        return bool(os.environ.get(ENV_SUPERVISOR_DEV) == "1")
 
     @property
     def timezone(self) -> str:
@@ -156,13 +192,15 @@ class CoreSys:
         return UTC
 
     @property
-    def loop(self) -> asyncio.BaseEventLoop:
+    def loop(self) -> asyncio.AbstractEventLoop:
         """Return loop object."""
         return self._loop
 
     @property
     def websession(self) -> aiohttp.ClientSession:
         """Return websession object."""
+        if self._websession is None:
+            raise RuntimeError("WebSession not setup yet")
         return self._websession
 
     @property
@@ -552,7 +590,7 @@ class CoreSys:
         return self._machine_id
 
     @machine_id.setter
-    def machine_id(self, value: str) -> None:
+    def machine_id(self, value: str | None) -> None:
         """Set a machine-id type string."""
         if self._machine_id:
             raise RuntimeError("Machine-ID type already set!")
@@ -574,24 +612,14 @@ class CoreSys:
         self._set_task_context.append(callback)
 
     def run_in_executor(
-        self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
-    ) -> Coroutine[Any, Any, T]:
+        self, funct: Callable[..., T], *args, **kwargs
+    ) -> asyncio.Future[T]:
         """Add an job to the executor pool."""
         if kwargs:
             funct = partial(funct, **kwargs)
 
         return self.loop.run_in_executor(None, funct, *args)
 
-    def create_websession(self) -> None:
-        """Create a new aiohttp session."""
-        if self._websession:
-            self.create_task(self._websession.close())
-
-        # Create session and set default header for aiohttp
-        self._websession: aiohttp.ClientSession = aiohttp.ClientSession(
-            headers=MappingProxyType({aiohttp.hdrs.USER_AGENT: SERVER_SOFTWARE})
-        )
-
     def _create_context(self) -> Context:
         """Create a new context for a task."""
         context = copy_context()
@@ -606,9 +634,9 @@ class CoreSys:
     def call_later(
         self,
         delay: float,
-        funct: Callable[..., Coroutine[Any, Any, T]],
-        *args: tuple[Any],
-        **kwargs: dict[str, Any],
+        funct: Callable[..., Any],
+        *args,
+        **kwargs,
     ) -> asyncio.TimerHandle:
         """Start a task after a delay."""
         if kwargs:
@@ -619,9 +647,9 @@ class CoreSys:
     def call_at(
         self,
         when: datetime,
-        funct: Callable[..., Coroutine[Any, Any, T]],
-        *args: tuple[Any],
-        **kwargs: dict[str, Any],
+        funct: Callable[..., Any],
+        *args,
+        **kwargs,
     ) -> asyncio.TimerHandle:
         """Start a task at the specified datetime."""
         if kwargs:
@@ -649,7 +677,7 @@ class CoreSysAttributes:
 
     @property
     def sys_machine_id(self) -> str | None:
-        """Return machine id."""
+        """Return machine ID."""
         return self.coresys.machine_id
 
     @property
@@ -658,7 +686,7 @@ class CoreSysAttributes:
         return self.coresys.dev
 
     @property
-    def sys_loop(self) -> asyncio.BaseEventLoop:
+    def sys_loop(self) -> asyncio.AbstractEventLoop:
         """Return loop object."""
         return self.coresys.loop
 
@@ -808,7 +836,7 @@ class CoreSysAttributes:
     def sys_run_in_executor(
         self, funct: Callable[..., T], *args, **kwargs
-    ) -> Coroutine[Any, Any, T]:
+    ) -> asyncio.Future[T]:
         """Add a job to the executor pool."""
         return self.coresys.run_in_executor(funct, *args, **kwargs)
 
@@ -819,7 +847,7 @@ class CoreSysAttributes:
     def sys_call_later(
         self,
         delay: float,
-        funct: Callable[..., Coroutine[Any, Any, T]],
+        funct: Callable[..., Any],
         *args,
         **kwargs,
     ) -> asyncio.TimerHandle:
@@ -829,7 +857,7 @@ class CoreSysAttributes:
     def sys_call_at(
         self,
         when: datetime,
-        funct: Callable[..., Coroutine[Any, Any, T]],
+        funct: Callable[..., Any],
         *args,
         **kwargs,
     ) -> asyncio.TimerHandle:
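
init_websession prefers aiohttp's c-ares based AsyncResolver and drops back to the threaded resolver when the async one cannot be created. A trimmed sketch of the same pattern (assumes aiohttp is installed; AsyncResolver additionally requires aiodns, and the broad except is a simplification of catching pycares.AresError):

    import asyncio

    import aiohttp


    async def make_session() -> aiohttp.ClientSession:
        try:
            resolver = aiohttp.AsyncResolver()
        except Exception:  # simplification of the AresError handling above
            resolver = aiohttp.ThreadedResolver()

        return aiohttp.ClientSession(
            headers={"User-Agent": "ExampleSupervisor/0.0"},
            connector=aiohttp.TCPConnector(resolver=resolver),
        )


    async def main() -> None:
        session = await make_session()
        print(type(session).__name__)
        await session.close()


    asyncio.run(main())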

View File

@@ -135,6 +135,7 @@ DBUS_ATTR_LAST_ERROR = "LastError"
 DBUS_ATTR_LLMNR = "LLMNR"
 DBUS_ATTR_LLMNR_HOSTNAME = "LLMNRHostname"
 DBUS_ATTR_LOADER_TIMESTAMP_MONOTONIC = "LoaderTimestampMonotonic"
+DBUS_ATTR_LOCAL_RTC = "LocalRTC"
 DBUS_ATTR_MANAGED = "Managed"
 DBUS_ATTR_MODE = "Mode"
 DBUS_ATTR_MODEL = "Model"
@@ -210,6 +211,24 @@ class InterfaceMethod(StrEnum):
     LINK_LOCAL = "link-local"
 
 
+class InterfaceAddrGenMode(IntEnum):
+    """Interface addr_gen_mode."""
+
+    EUI64 = 0
+    STABLE_PRIVACY = 1
+    DEFAULT_OR_EUI64 = 2
+    DEFAULT = 3
+
+
+class InterfaceIp6Privacy(IntEnum):
+    """Interface ip6_privacy."""
+
+    DEFAULT = -1
+    DISABLED = 0
+    ENABLED_PREFER_PUBLIC = 1
+    ENABLED = 2
+
+
 class ConnectionType(StrEnum):
     """Connection type."""

View File

@@ -117,7 +117,7 @@ class DBusInterfaceProxy(DBusInterface, ABC):
         """Initialize object with already connected dbus object."""
         await super().initialize(connected_dbus)
 
-        if not self.connected_dbus.properties:
+        if not self.connected_dbus.supports_properties:
             self.disconnect()
             raise DBusInterfaceError(
                 f"D-Bus object {self.object_path} is not usable, introspection is missing required properties interface"

View File

@@ -8,7 +8,7 @@ from dbus_fast.aio.message_bus import MessageBus
 
 from ..const import SOCKET_DBUS
 from ..coresys import CoreSys, CoreSysAttributes
-from ..exceptions import DBusFatalError
+from ..exceptions import DBusFatalError, DBusNotConnectedError
 from .agent import OSAgent
 from .hostname import Hostname
 from .interface import DBusInterface
@@ -91,6 +91,13 @@ class DBusManager(CoreSysAttributes):
         """Return the message bus."""
         return self._bus
 
+    @property
+    def connected_bus(self) -> MessageBus:
+        """Return the message bus. Raise if not connected."""
+        if not self._bus:
+            raise DBusNotConnectedError()
+        return self._bus
+
     @property
     def all(self) -> list[DBusInterface]:
         """Return all managed dbus interfaces."""

View File

@@ -185,10 +185,14 @@ class NetworkManager(DBusInterfaceProxy):
         if not changed and self.dns.is_connected:
             await self.dns.update()
 
-        if changed and (
-            DBUS_ATTR_DEVICES not in changed
-            or {intr.object_path for intr in self.interfaces if intr.managed}.issubset(
-                set(changed[DBUS_ATTR_DEVICES])
+        if (
+            changed
+            and DBUS_ATTR_PRIMARY_CONNECTION not in changed
+            and (
+                DBUS_ATTR_DEVICES not in changed
+                or {
+                    intr.object_path for intr in self.interfaces if intr.managed
+                }.issubset(set(changed[DBUS_ATTR_DEVICES]))
             )
         ):
             # If none of our managed devices were removed then most likely this is just veths changing.
@@ -255,7 +259,7 @@ class NetworkManager(DBusInterfaceProxy):
             else:
                 interface.primary = False
 
-            interfaces[interface.name] = interface
+            interfaces[interface.interface_name] = interface
             interfaces[interface.hw_address] = interface
 
         # Disconnect removed devices

View File

@@ -1,5 +1,6 @@
 """NetworkConnection objects for Network Manager."""
 
+from abc import ABC
 from dataclasses import dataclass
 from ipaddress import IPv4Address, IPv6Address
 
@@ -29,7 +30,7 @@ class ConnectionProperties:
 class WirelessProperties:
     """Wireless Properties object for Network Manager."""
 
-    ssid: str | None
+    ssid: str
     assigned_mac: str | None
     mode: str | None
     powersave: int | None
@@ -55,7 +56,7 @@ class EthernetProperties:
 class VlanProperties:
     """Ethernet properties object for Network Manager."""
 
-    id: int | None
+    id: int
     parent: str | None
 
@@ -67,14 +68,29 @@ class IpAddress:
     prefix: int
 
 
-@dataclass(slots=True)
-class IpProperties:
+@dataclass
+class IpProperties(ABC):
     """IP properties object for Network Manager."""
 
     method: str | None
     address_data: list[IpAddress] | None
     gateway: str | None
-    dns: list[bytes | int] | None
+
+
+@dataclass(slots=True)
+class Ip4Properties(IpProperties):
+    """IPv4 properties object."""
+
+    dns: list[int] | None
+
+
+@dataclass(slots=True)
+class Ip6Properties(IpProperties):
+    """IPv6 properties object for Network Manager."""
+
+    addr_gen_mode: int
+    ip6_privacy: int
+    dns: list[bytes] | None
 
 
 @dataclass(slots=True)
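
The IpProperties split moves the shared fields onto an abstract dataclass base and lets each IP family narrow its own dns type (with IPv6 adding the two new fields). A minimal reproduction:

    from abc import ABC
    from dataclasses import dataclass


    @dataclass
    class IpProperties(ABC):
        method: str | None
        gateway: str | None


    @dataclass(slots=True)
    class Ip4Properties(IpProperties):
        dns: list[int] | None


    @dataclass(slots=True)
    class Ip6Properties(IpProperties):
        addr_gen_mode: int
        ip6_privacy: int
        dns: list[bytes] | None


    v4 = Ip4Properties(method="auto", gateway=None, dns=[16885952])
    v6 = Ip6Properties(
        method="auto", gateway=None, addr_gen_mode=1, ip6_privacy=-1, dns=None
    )
    print(v4, v6, sep="\n")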

View File

@@ -96,7 +96,7 @@ class NetworkConnection(DBusInterfaceProxy):
 
     @ipv4.setter
     def ipv4(self, ipv4: IpConfiguration | None) -> None:
-        """Set ipv4 configuration."""
+        """Set IPv4 configuration."""
         if self._ipv4 and self._ipv4 is not ipv4:
             self._ipv4.shutdown()
 
@@ -109,7 +109,7 @@ class NetworkConnection(DBusInterfaceProxy):
 
     @ipv6.setter
     def ipv6(self, ipv6: IpConfiguration | None) -> None:
-        """Set ipv6 configuration."""
+        """Set IPv6 configuration."""
         if self._ipv6 and self._ipv6 is not ipv6:
             self._ipv6.shutdown()

View File

@@ -49,7 +49,7 @@ class NetworkInterface(DBusInterfaceProxy):
 
     @property
     @dbus_property
-    def name(self) -> str:
+    def interface_name(self) -> str:
         """Return interface name."""
         return self.properties[DBUS_ATTR_DEVICE_INTERFACE]

View File

@@ -12,8 +12,9 @@ from ...utils import dbus_connected
 from ..configuration import (
     ConnectionProperties,
     EthernetProperties,
+    Ip4Properties,
+    Ip6Properties,
     IpAddress,
-    IpProperties,
     MatchProperties,
     VlanProperties,
     WirelessProperties,
@@ -58,6 +59,8 @@ CONF_ATTR_IPV4_GATEWAY = "gateway"
 CONF_ATTR_IPV4_DNS = "dns"
 
 CONF_ATTR_IPV6_METHOD = "method"
+CONF_ATTR_IPV6_ADDR_GEN_MODE = "addr-gen-mode"
+CONF_ATTR_IPV6_PRIVACY = "ip6-privacy"
 CONF_ATTR_IPV6_ADDRESS_DATA = "address-data"
 CONF_ATTR_IPV6_GATEWAY = "gateway"
 CONF_ATTR_IPV6_DNS = "dns"
@@ -69,6 +72,8 @@ IPV4_6_IGNORE_FIELDS = [
     "dns-data",
     "gateway",
     "method",
+    "addr-gen-mode",
+    "ip6-privacy",
 ]
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -110,8 +115,8 @@ class NetworkSetting(DBusInterface):
         self._wireless_security: WirelessSecurityProperties | None = None
         self._ethernet: EthernetProperties | None = None
         self._vlan: VlanProperties | None = None
-        self._ipv4: IpProperties | None = None
-        self._ipv6: IpProperties | None = None
+        self._ipv4: Ip4Properties | None = None
+        self._ipv6: Ip6Properties | None = None
         self._match: MatchProperties | None = None
         super().__init__()
@@ -146,13 +151,13 @@ class NetworkSetting(DBusInterface):
         return self._vlan
 
     @property
-    def ipv4(self) -> IpProperties | None:
-        """Return ipv4 properties if any."""
+    def ipv4(self) -> Ip4Properties | None:
+        """Return IPv4 properties if any."""
         return self._ipv4
 
     @property
-    def ipv6(self) -> IpProperties | None:
-        """Return ipv6 properties if any."""
+    def ipv6(self) -> Ip6Properties | None:
+        """Return IPv6 properties if any."""
         return self._ipv6
 
     @property
@@ -223,66 +228,83 @@ class NetworkSetting(DBusInterface):
         # See: https://developer-old.gnome.org/NetworkManager/stable/ch01.html
         if CONF_ATTR_CONNECTION in data:
             self._connection = ConnectionProperties(
-                data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_ID),
-                data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_UUID),
-                data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_TYPE),
-                data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_INTERFACE_NAME),
+                id=data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_ID),
+                uuid=data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_UUID),
+                type=data[CONF_ATTR_CONNECTION].get(CONF_ATTR_CONNECTION_TYPE),
+                interface_name=data[CONF_ATTR_CONNECTION].get(
+                    CONF_ATTR_CONNECTION_INTERFACE_NAME
+                ),
             )
 
         if CONF_ATTR_802_ETHERNET in data:
             self._ethernet = EthernetProperties(
-                data[CONF_ATTR_802_ETHERNET].get(CONF_ATTR_802_ETHERNET_ASSIGNED_MAC),
+                assigned_mac=data[CONF_ATTR_802_ETHERNET].get(
+                    CONF_ATTR_802_ETHERNET_ASSIGNED_MAC
+                ),
             )
 
         if CONF_ATTR_802_WIRELESS in data:
             self._wireless = WirelessProperties(
-                bytes(
+                ssid=bytes(
                     data[CONF_ATTR_802_WIRELESS].get(CONF_ATTR_802_WIRELESS_SSID, [])
                 ).decode(),
-                data[CONF_ATTR_802_WIRELESS].get(CONF_ATTR_802_WIRELESS_ASSIGNED_MAC),
-                data[CONF_ATTR_802_WIRELESS].get(CONF_ATTR_802_WIRELESS_MODE),
-                data[CONF_ATTR_802_WIRELESS].get(CONF_ATTR_802_WIRELESS_POWERSAVE),
+                assigned_mac=data[CONF_ATTR_802_WIRELESS].get(
+                    CONF_ATTR_802_WIRELESS_ASSIGNED_MAC
+                ),
+                mode=data[CONF_ATTR_802_WIRELESS].get(CONF_ATTR_802_WIRELESS_MODE),
+                powersave=data[CONF_ATTR_802_WIRELESS].get(
+                    CONF_ATTR_802_WIRELESS_POWERSAVE
+                ),
            )
 
         if CONF_ATTR_802_WIRELESS_SECURITY in data:
             self._wireless_security = WirelessSecurityProperties(
-                data[CONF_ATTR_802_WIRELESS_SECURITY].get(
+                auth_alg=data[CONF_ATTR_802_WIRELESS_SECURITY].get(
                     CONF_ATTR_802_WIRELESS_SECURITY_AUTH_ALG
                 ),
-                data[CONF_ATTR_802_WIRELESS_SECURITY].get(
+                key_mgmt=data[CONF_ATTR_802_WIRELESS_SECURITY].get(
                     CONF_ATTR_802_WIRELESS_SECURITY_KEY_MGMT
                 ),
-                data[CONF_ATTR_802_WIRELESS_SECURITY].get(
+                psk=data[CONF_ATTR_802_WIRELESS_SECURITY].get(
                     CONF_ATTR_802_WIRELESS_SECURITY_PSK
                 ),
             )
 
         if CONF_ATTR_VLAN in data:
-            self._vlan = VlanProperties(
-                data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_ID),
-                data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_PARENT),
-            )
+            if CONF_ATTR_VLAN_ID in data[CONF_ATTR_VLAN]:
+                self._vlan = VlanProperties(
+                    data[CONF_ATTR_VLAN][CONF_ATTR_VLAN_ID],
+                    data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_PARENT),
+                )
+            else:
+                self._vlan = None
+                _LOGGER.warning(
+                    "Network settings for vlan connection %s missing required vlan id, cannot process it",
+                    self.connection.interface_name,
+                )
 
         if CONF_ATTR_IPV4 in data:
             address_data = None
             if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_ADDRESS_DATA):
                 address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
 
-            self._ipv4 = IpProperties(
-                data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_METHOD),
-                address_data,
-                data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_GATEWAY),
-                data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_DNS),
+            self._ipv4 = Ip4Properties(
+                method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_METHOD),
+                address_data=address_data,
+                gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_GATEWAY),
+                dns=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_DNS),
             )
 
         if CONF_ATTR_IPV6 in data:
             address_data = None
             if ips := data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDRESS_DATA):
                 address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
 
-            self._ipv6 = IpProperties(
-                data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_METHOD),
-                address_data,
-                data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_GATEWAY),
-                data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_DNS),
+            self._ipv6 = Ip6Properties(
+                method=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_METHOD),
+                addr_gen_mode=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_ADDR_GEN_MODE),
+                ip6_privacy=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_PRIVACY),
+                address_data=address_data,
+                gateway=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_GATEWAY),
+                dns=data[CONF_ATTR_IPV6].get(CONF_ATTR_IPV6_DNS),
             )
 
         if CONF_ATTR_MATCH in data:

View File

@@ -8,8 +8,13 @@ from uuid import uuid4
 
 from dbus_fast import Variant
 
-from ....host.configuration import VlanConfig
-from ....host.const import InterfaceMethod, InterfaceType
+from ....host.configuration import Ip6Setting, IpSetting, VlanConfig
+from ....host.const import (
+    InterfaceAddrGenMode,
+    InterfaceIp6Privacy,
+    InterfaceMethod,
+    InterfaceType,
+)
 from .. import NetworkManager
 from . import (
     CONF_ATTR_802_ETHERNET,
@@ -36,10 +41,12 @@ from . import (
     CONF_ATTR_IPV4_GATEWAY,
     CONF_ATTR_IPV4_METHOD,
     CONF_ATTR_IPV6,
+    CONF_ATTR_IPV6_ADDR_GEN_MODE,
     CONF_ATTR_IPV6_ADDRESS_DATA,
     CONF_ATTR_IPV6_DNS,
     CONF_ATTR_IPV6_GATEWAY,
     CONF_ATTR_IPV6_METHOD,
+    CONF_ATTR_IPV6_PRIVACY,
     CONF_ATTR_MATCH,
     CONF_ATTR_MATCH_PATH,
     CONF_ATTR_VLAN,
@@ -51,7 +58,7 @@ if TYPE_CHECKING:
     from ....host.configuration import Interface
 
 
-def _get_ipv4_connection_settings(ipv4setting) -> dict:
+def _get_ipv4_connection_settings(ipv4setting: IpSetting | None) -> dict:
     ipv4 = {}
     if not ipv4setting or ipv4setting.method == InterfaceMethod.AUTO:
         ipv4[CONF_ATTR_IPV4_METHOD] = Variant("s", "auto")
@@ -93,10 +100,32 @@ def _get_ipv4_connection_settings(ipv4setting) -> dict:
     return ipv4
 
 
-def _get_ipv6_connection_settings(ipv6setting) -> dict:
+def _get_ipv6_connection_settings(
+    ipv6setting: Ip6Setting | None, support_addr_gen_mode_defaults: bool = False
+) -> dict:
     ipv6 = {}
     if not ipv6setting or ipv6setting.method == InterfaceMethod.AUTO:
         ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "auto")
+        if ipv6setting:
+            if ipv6setting.addr_gen_mode == InterfaceAddrGenMode.EUI64:
+                ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 0)
+            elif (
+                not support_addr_gen_mode_defaults
+                or ipv6setting.addr_gen_mode == InterfaceAddrGenMode.STABLE_PRIVACY
+            ):
+                ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 1)
+            elif ipv6setting.addr_gen_mode == InterfaceAddrGenMode.DEFAULT_OR_EUI64:
+                ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 2)
+            else:
+                ipv6[CONF_ATTR_IPV6_ADDR_GEN_MODE] = Variant("i", 3)
+            if ipv6setting.ip6_privacy == InterfaceIp6Privacy.DISABLED:
+                ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 0)
+            elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC:
+                ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 1)
+            elif ipv6setting.ip6_privacy == InterfaceIp6Privacy.ENABLED:
+                ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", 2)
+            else:
+                ipv6[CONF_ATTR_IPV6_PRIVACY] = Variant("i", -1)
     elif ipv6setting.method == InterfaceMethod.DISABLED:
         ipv6[CONF_ATTR_IPV6_METHOD] = Variant("s", "link-local")
     elif ipv6setting.method == InterfaceMethod.STATIC:
@@ -183,7 +212,9 @@ def get_connection_from_interface(
 
     conn[CONF_ATTR_IPV4] = _get_ipv4_connection_settings(interface.ipv4setting)
 
-    conn[CONF_ATTR_IPV6] = _get_ipv6_connection_settings(interface.ipv6setting)
+    conn[CONF_ATTR_IPV6] = _get_ipv6_connection_settings(
+        interface.ipv6setting, network_manager.version >= "1.40.0"
+    )
 
     if interface.type == InterfaceType.ETHERNET:
         conn[CONF_ATTR_802_ETHERNET] = {
@@ -191,8 +222,10 @@ def get_connection_from_interface(
         }
     elif interface.type == "vlan":
         parent = cast(VlanConfig, interface.vlan).interface
-        if parent in network_manager and (
-            parent_connection := network_manager.get(parent).connection
+        if (
+            parent
+            and parent in network_manager
+            and (parent_connection := network_manager.get(parent).connection)
         ):
             parent = parent_connection.uuid
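
_get_ipv6_connection_settings encodes the enums as the integer codes NetworkManager expects, and only emits the newer "default" codes when the daemon supports them (the caller passes network_manager.version >= "1.40.0"). A condensed, dependency-free version of the addr-gen-mode branch (the dbus_fast Variant wrapper is omitted):

    from enum import IntEnum


    class InterfaceAddrGenMode(IntEnum):
        EUI64 = 0
        STABLE_PRIVACY = 1
        DEFAULT_OR_EUI64 = 2
        DEFAULT = 3


    def addr_gen_mode_setting(mode: InterfaceAddrGenMode, support_defaults: bool) -> int:
        if mode == InterfaceAddrGenMode.EUI64:
            return 0
        # Without "default" support, coerce everything else to stable-privacy,
        # which every supported NetworkManager version understands.
        if not support_defaults or mode == InterfaceAddrGenMode.STABLE_PRIVACY:
            return 1
        if mode == InterfaceAddrGenMode.DEFAULT_OR_EUI64:
            return 2
        return 3


    print(addr_gen_mode_setting(InterfaceAddrGenMode.DEFAULT, support_defaults=False))  # 1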

View File

@@ -10,6 +10,7 @@ from dbus_fast.aio.message_bus import MessageBus
 
 from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
 from ..utils.dt import get_time_zone, utc_from_timestamp
 from .const import (
+    DBUS_ATTR_LOCAL_RTC,
     DBUS_ATTR_NTP,
     DBUS_ATTR_NTPSYNCHRONIZED,
     DBUS_ATTR_TIMEUSEC,
@@ -46,6 +47,12 @@ class TimeDate(DBusInterfaceProxy):
         """Return host timezone."""
         return self.properties[DBUS_ATTR_TIMEZONE]
 
+    @property
+    @dbus_property
+    def local_rtc(self) -> bool:
+        """Return whether rtc is local time or utc."""
+        return self.properties[DBUS_ATTR_LOCAL_RTC]
+
     @property
     @dbus_property
     def ntp(self) -> bool:

View File

@@ -28,6 +28,8 @@ class DeviceSpecificationDataType(TypedDict, total=False):
     path: str
     label: str
     uuid: str
+    partuuid: str
+    partlabel: str
 
 
 @dataclass(slots=True)
@@ -40,6 +42,8 @@ class DeviceSpecification:
     path: Path | None = None
     label: str | None = None
     uuid: str | None = None
+    partuuid: str | None = None
+    partlabel: str | None = None
 
     @staticmethod
     def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
@@ -48,6 +52,8 @@ class DeviceSpecification:
             path=Path(data["path"]) if "path" in data else None,
             label=data.get("label"),
             uuid=data.get("uuid"),
+            partuuid=data.get("partuuid"),
+            partlabel=data.get("partlabel"),
         )
 
     def to_dict(self) -> dict[str, Variant]:
@@ -56,6 +62,8 @@ class DeviceSpecification:
             "path": Variant("s", self.path.as_posix()) if self.path else None,
             "label": _optional_variant("s", self.label),
             "uuid": _optional_variant("s", self.uuid),
+            "partuuid": _optional_variant("s", self.partuuid),
+            "partlabel": _optional_variant("s", self.partlabel),
         }
         return {k: v for k, v in data.items() if v}
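
DeviceSpecification.to_dict keeps the build-then-filter pattern: every key is computed, optional ones via _optional_variant, and the None placeholders are stripped so D-Bus only receives the fields that were actually set. A sketch with Variant stubbed out (the real class comes from dbus_fast):

    from dataclasses import dataclass


    @dataclass
    class Variant:
        """Stub of dbus_fast.Variant, to keep the sketch dependency-free."""

        signature: str
        value: object


    def _optional_variant(signature: str, value: object | None) -> Variant | None:
        return Variant(signature, value) if value is not None else None


    def to_dict(uuid: str | None, partuuid: str | None, partlabel: str | None) -> dict:
        data = {
            "uuid": _optional_variant("s", uuid),
            "partuuid": _optional_variant("s", partuuid),
            "partlabel": _optional_variant("s", partlabel),
        }
        # Drop unset entries so D-Bus only sees the keys that were provided
        return {k: v for k, v in data.items() if v}


    print(to_dict(None, "1234-abcd", None))  # only "partuuid" survives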

View File

@ -2,17 +2,17 @@
from __future__ import annotations from __future__ import annotations
from collections.abc import Awaitable
from contextlib import suppress from contextlib import suppress
from ipaddress import IPv4Address, ip_address from ipaddress import IPv4Address
import logging import logging
import os import os
from pathlib import Path from pathlib import Path
from typing import TYPE_CHECKING from typing import TYPE_CHECKING, cast
from attr import evolve from attr import evolve
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
import docker import docker
import docker.errors
from docker.types import Mount from docker.types import Mount
import requests import requests
@ -44,6 +44,7 @@ from ..jobs.decorator import Job
from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception from ..utils.sentry import async_capture_exception
from .const import ( from .const import (
ADDON_BUILDER_IMAGE,
ENV_TIME, ENV_TIME,
ENV_TOKEN, ENV_TOKEN,
ENV_TOKEN_OLD, ENV_TOKEN_OLD,
@ -73,7 +74,7 @@ if TYPE_CHECKING:
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
NO_ADDDRESS = ip_address("0.0.0.0") NO_ADDDRESS = IPv4Address("0.0.0.0")
class DockerAddon(DockerInterface): class DockerAddon(DockerInterface):
@ -101,10 +102,12 @@ class DockerAddon(DockerInterface):
"""Return IP address of this container.""" """Return IP address of this container."""
if self.addon.host_network: if self.addon.host_network:
return self.sys_docker.network.gateway return self.sys_docker.network.gateway
if not self._meta:
return NO_ADDDRESS
# Extract IP-Address # Extract IP-Address
try: try:
return ip_address( return IPv4Address(
self._meta["NetworkSettings"]["Networks"]["hassio"]["IPAddress"] self._meta["NetworkSettings"]["Networks"]["hassio"]["IPAddress"]
) )
except (KeyError, TypeError, ValueError): except (KeyError, TypeError, ValueError):
@ -121,7 +124,7 @@ class DockerAddon(DockerInterface):
return self.addon.version return self.addon.version
@property @property
def arch(self) -> str: def arch(self) -> str | None:
"""Return arch of Docker image.""" """Return arch of Docker image."""
if self.addon.legacy: if self.addon.legacy:
return self.sys_arch.default return self.sys_arch.default
@ -133,9 +136,9 @@ class DockerAddon(DockerInterface):
return DockerAddon.slug_to_name(self.addon.slug) return DockerAddon.slug_to_name(self.addon.slug)
@property @property
def environment(self) -> dict[str, str | None]: def environment(self) -> dict[str, str | int | None]:
"""Return environment for Docker add-on.""" """Return environment for Docker add-on."""
addon_env = self.addon.environment or {} addon_env = cast(dict[str, str | int | None], self.addon.environment or {})
# Provide options for legacy add-ons # Provide options for legacy add-ons
if self.addon.legacy: if self.addon.legacy:
@ -336,14 +339,14 @@ class DockerAddon(DockerInterface):
"""Return mounts for container.""" """Return mounts for container."""
addon_mapping = self.addon.map_volumes addon_mapping = self.addon.map_volumes
target_data_path = "" target_data_path: str | None = None
if MappingType.DATA in addon_mapping: if MappingType.DATA in addon_mapping:
target_data_path = addon_mapping[MappingType.DATA].path target_data_path = addon_mapping[MappingType.DATA].path
mounts = [ mounts = [
MOUNT_DEV, MOUNT_DEV,
Mount( Mount(
type=MountType.BIND, type=MountType.BIND.value,
source=self.addon.path_extern_data.as_posix(), source=self.addon.path_extern_data.as_posix(),
target=target_data_path or PATH_PRIVATE_DATA.as_posix(), target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
read_only=False, read_only=False,
@ -354,7 +357,7 @@ class DockerAddon(DockerInterface):
if MappingType.CONFIG in addon_mapping: if MappingType.CONFIG in addon_mapping:
mounts.append( mounts.append(
Mount( Mount(
type=MountType.BIND, type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(), source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.CONFIG].path target=addon_mapping[MappingType.CONFIG].path
or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(), or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
@ -367,7 +370,7 @@ class DockerAddon(DockerInterface):
if self.addon.addon_config_used: if self.addon.addon_config_used:
mounts.append( mounts.append(
Mount( Mount(
type=MountType.BIND, type=MountType.BIND.value,
source=self.addon.path_extern_config.as_posix(), source=self.addon.path_extern_config.as_posix(),
target=addon_mapping[MappingType.ADDON_CONFIG].path target=addon_mapping[MappingType.ADDON_CONFIG].path
or PATH_PUBLIC_CONFIG.as_posix(), or PATH_PUBLIC_CONFIG.as_posix(),
@ -379,7 +382,7 @@ class DockerAddon(DockerInterface):
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping: if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
mounts.append( mounts.append(
Mount( Mount(
type=MountType.BIND, type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(), source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or PATH_HOMEASSISTANT_CONFIG.as_posix(), or PATH_HOMEASSISTANT_CONFIG.as_posix(),
@ -392,7 +395,7 @@ class DockerAddon(DockerInterface):
         if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_addon_configs.as_posix(),
                     target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
                     or PATH_ALL_ADDON_CONFIGS.as_posix(),
@@ -403,7 +406,7 @@ class DockerAddon(DockerInterface):
         if MappingType.SSL in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
                     read_only=addon_mapping[MappingType.SSL].read_only,
@@ -413,7 +416,7 @@ class DockerAddon(DockerInterface):
         if MappingType.ADDONS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_addons_local.as_posix(),
                     target=addon_mapping[MappingType.ADDONS].path
                     or PATH_LOCAL_ADDONS.as_posix(),
@@ -424,7 +427,7 @@ class DockerAddon(DockerInterface):
         if MappingType.BACKUP in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_backup.as_posix(),
                     target=addon_mapping[MappingType.BACKUP].path
                     or PATH_BACKUP.as_posix(),
@@ -435,7 +438,7 @@ class DockerAddon(DockerInterface):
         if MappingType.SHARE in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target=addon_mapping[MappingType.SHARE].path
                     or PATH_SHARE.as_posix(),
@@ -447,7 +450,7 @@ class DockerAddon(DockerInterface):
         if MappingType.MEDIA in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_media.as_posix(),
                     target=addon_mapping[MappingType.MEDIA].path
                     or PATH_MEDIA.as_posix(),
@@ -465,7 +468,7 @@ class DockerAddon(DockerInterface):
                     continue
                 mounts.append(
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=gpio_path,
                         target=gpio_path,
                         read_only=False,
@@ -476,7 +479,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_devicetree:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source="/sys/firmware/devicetree/base",
                     target="/device-tree",
                     read_only=True,
@@ -491,7 +494,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_kernel_modules:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source="/lib/modules",
                     target="/lib/modules",
                     read_only=True,
@@ -510,19 +513,19 @@ class DockerAddon(DockerInterface):
         if self.addon.with_audio:
             mounts += [
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.addon.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,
@@ -533,13 +536,13 @@ class DockerAddon(DockerInterface):
         if self.addon.with_journald:
             mounts += [
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     read_only=True,
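
Note on the repeated one-line change above: `MountType.BIND` becomes `MountType.BIND.value`, so docker-py receives the plain string "bind" rather than an enum member. A minimal sketch of the distinction, assuming `MountType` is a plain `enum.Enum` stand-in rather than the Supervisor's actual class:

```python
from enum import Enum

from docker.types import Mount


class MountType(Enum):
    """Hypothetical stand-in for the Supervisor's MountType enum."""

    BIND = "bind"


# docker-py copies `type` straight into the API payload, so it should be the
# raw string "bind"; passing the member itself only works when the enum also
# subclasses `str`. Using `.value` is correct either way.
mount = Mount(
    target="/device-tree",
    source="/sys/firmware/devicetree/base",
    type=MountType.BIND.value,
    read_only=True,
)
print(mount["Type"])  # -> "bind"
```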
@@ -672,36 +675,63 @@ class DockerAddon(DockerInterface):
         _LOGGER.info("Starting build for %s:%s", self.image, version)

         def build_image():
-            return self.sys_docker.images.build(
-                use_config_proxy=False, **build_env.get_docker_args(version, image)
-            )
+            if build_env.squash:
+                _LOGGER.warning(
+                    "Ignoring squash build option for %s as Docker BuildKit does not support it.",
+                    self.addon.slug,
+                )
+
+            addon_image_tag = f"{image or self.addon.image}:{version!s}"
+
+            docker_version = self.sys_docker.info.version
+            builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
+
+            builder_name = f"addon_builder_{self.addon.slug}"
+
+            # Remove dangling builder container if it exists by any chance
+            # E.g. because of an abrupt host shutdown/reboot during a build
+            with suppress(docker.errors.NotFound):
+                self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
+
+            result = self.sys_docker.run_command(
+                ADDON_BUILDER_IMAGE,
+                version=builder_version_tag,
+                name=builder_name,
+                **build_env.get_docker_args(version, addon_image_tag),
+            )
+
+            logs = result.output.decode("utf-8")
+
+            if result.exit_code != 0:
+                error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
+                raise docker.errors.DockerException(error_message)
+
+            addon_image = self.sys_docker.images.get(addon_image_tag)
+
+            return addon_image, logs

         try:
-            image, log = await self.sys_run_in_executor(build_image)
+            docker_image, log = await self.sys_run_in_executor(build_image)

             _LOGGER.debug("Build %s:%s done: %s", self.image, version, log)

             # Update meta data
-            self._meta = image.attrs
+            self._meta = docker_image.attrs

         except (docker.errors.DockerException, requests.RequestException) as err:
             _LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
-            if hasattr(err, "build_log"):
-                log = "\n".join(
-                    [
-                        x["stream"]
-                        for x in err.build_log  # pylint: disable=no-member
-                        if isinstance(x, dict) and "stream" in x
-                    ]
-                )
-                _LOGGER.error("Build log: \n%s", log)
             raise DockerError() from err

         _LOGGER.info("Build %s:%s done", self.image, version)

-    def export_image(self, tar_file: Path) -> Awaitable[None]:
-        """Export current images into a tar file."""
-        return self.sys_docker.export_image(self.image, self.version, tar_file)
+    def export_image(self, tar_file: Path) -> None:
+        """Export current images into a tar file.
+
+        Must be run in executor.
+        """
+        if not self.image:
+            raise RuntimeError("Cannot export without image!")
+        self.sys_docker.export_image(self.image, self.version, tar_file)
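
The rewritten `build_image` drops the legacy `images.build()` API and instead runs the build through a Docker CLI helper container whose tag is pinned to the daemon's own version, avoiding client/daemon skew. A sketch of how the builder tag is derived, assuming `info.version` exposes major/minor/micro components as the hunk suggests (the version number here is illustrative):

```python
from dataclasses import dataclass


@dataclass
class DaemonVersion:
    """Hypothetical stand-in for the Docker daemon version object."""

    major: int
    minor: int
    micro: int


ADDON_BUILDER_IMAGE = "docker.io/library/docker"  # added in the const hunk below

docker_version = DaemonVersion(28, 3, 2)  # e.g. what the daemon reports
builder_version_tag = (
    f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
)
print(f"{ADDON_BUILDER_IMAGE}:{builder_version_tag}")
# -> docker.io/library/docker:28.3.2-cli
```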
     @Job(
         name="docker_addon_import_image",
@@ -805,15 +835,15 @@ class DockerAddon(DockerInterface):
         ):
             self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)

-    async def _validate_trust(
-        self, image_id: str, image: str, version: AwesomeVersion
-    ) -> None:
+    async def _validate_trust(self, image_id: str) -> None:
         """Validate trust of content."""
         if not self.addon.signed:
             return

         checksum = image_id.partition(":")[2]
-        return await self.sys_security.verify_content(self.addon.codenotary, checksum)
+        return await self.sys_security.verify_content(
+            cast(str, self.addon.codenotary), checksum
+        )

     @Job(
         name="docker_addon_hardware_events",
@@ -834,7 +864,8 @@ class DockerAddon(DockerInterface):
                 self.sys_docker.containers.get, self.name
             )
         except docker.errors.NotFound:
-            self.sys_bus.remove_listener(self._hw_listener)
+            if self._hw_listener:
+                self.sys_bus.remove_listener(self._hw_listener)
             self._hw_listener = None
             return
         except (docker.errors.DockerException, requests.RequestException) as err:


@@ -47,7 +47,7 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         mounts = [
             MOUNT_DEV,
             Mount(
-                type=MountType.BIND,
+                type=MountType.BIND.value,
                 source=self.sys_config.path_extern_audio.as_posix(),
                 target=PATH_PRIVATE_DATA.as_posix(),
                 read_only=False,


@@ -74,24 +74,26 @@ ENV_TOKEN_OLD = "HASSIO_TOKEN"
 LABEL_MANAGED = "supervisor_managed"

 MOUNT_DBUS = Mount(
-    type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
+    type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
 )
-MOUNT_DEV = Mount(type=MountType.BIND, source="/dev", target="/dev", read_only=True)
+MOUNT_DEV = Mount(
+    type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
+)
 MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
 MOUNT_DOCKER = Mount(
-    type=MountType.BIND,
+    type=MountType.BIND.value,
     source="/run/docker.sock",
     target="/run/docker.sock",
     read_only=True,
 )
 MOUNT_MACHINE_ID = Mount(
-    type=MountType.BIND,
+    type=MountType.BIND.value,
     source=MACHINE_ID.as_posix(),
     target=MACHINE_ID.as_posix(),
     read_only=True,
 )
 MOUNT_UDEV = Mount(
-    type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
+    type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
 )

 PATH_PRIVATE_DATA = PurePath("/data")
@@ -105,3 +107,6 @@ PATH_BACKUP = PurePath("/backup")
 PATH_SHARE = PurePath("/share")
 PATH_MEDIA = PurePath("/media")
 PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
+
+# https://hub.docker.com/_/docker
+ADDON_BUILDER_IMAGE = "docker.io/library/docker"


@@ -48,7 +48,7 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
             environment={ENV_TIME: self.sys_timezone},
             mounts=[
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_dns.as_posix(),
                     target="/config",
                     read_only=False,


@@ -99,7 +99,7 @@ class DockerHomeAssistant(DockerInterface):
             MOUNT_UDEV,
             # HA config folder
             Mount(
-                type=MountType.BIND,
+                type=MountType.BIND.value,
                 source=self.sys_config.path_extern_homeassistant.as_posix(),
                 target=PATH_PUBLIC_CONFIG.as_posix(),
                 read_only=False,
@@ -112,20 +112,20 @@ class DockerHomeAssistant(DockerInterface):
                 [
                     # All other folders
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_config.path_extern_ssl.as_posix(),
                         target=PATH_SSL.as_posix(),
                         read_only=True,
                     ),
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_config.path_extern_share.as_posix(),
                         target=PATH_SHARE.as_posix(),
                         read_only=False,
                         propagation=PropagationMode.RSLAVE.value,
                     ),
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_config.path_extern_media.as_posix(),
                         target=PATH_MEDIA.as_posix(),
                         read_only=False,
@@ -133,19 +133,19 @@ class DockerHomeAssistant(DockerInterface):
                     ),
                     # Configuration audio
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_homeassistant.path_extern_pulse.as_posix(),
                         target="/etc/pulse/client.conf",
                         read_only=True,
                     ),
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                         target="/run/audio",
                         read_only=True,
                     ),
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                         target="/etc/asound.conf",
                         read_only=True,
@@ -213,24 +213,21 @@ class DockerHomeAssistant(DockerInterface):
             privileged=True,
             init=True,
             entrypoint=[],
-            detach=True,
-            stdout=True,
-            stderr=True,
             mounts=[
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target="/config",
                     read_only=False,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target="/ssl",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target="/share",
                     read_only=False,
@@ -248,14 +245,12 @@ class DockerHomeAssistant(DockerInterface):
             self.sys_homeassistant.version,
         )

-    async def _validate_trust(
-        self, image_id: str, image: str, version: AwesomeVersion
-    ) -> None:
+    async def _validate_trust(self, image_id: str) -> None:
         """Validate trust of content."""
         try:
-            if version != LANDINGPAGE and version < _VERIFY_TRUST:
+            if self.version in {None, LANDINGPAGE} or self.version < _VERIFY_TRUST:
                 return
         except AwesomeVersionCompareException:
             return
-        await super()._validate_trust(image_id, image, version)
+        await super()._validate_trust(image_id)
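
The rewritten guard checks `self.version in {None, LANDINGPAGE}` before any numeric comparison because comparing the landing page sentinel can raise. A minimal sketch of the failure mode being guarded against (the cutoff version here is illustrative, not necessarily the real `_VERIFY_TRUST`):

```python
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException

LANDINGPAGE = AwesomeVersion("landingpage")
_VERIFY_TRUST = AwesomeVersion("2021.5.0")  # illustrative cutoff

try:
    # "landingpage" is not a comparable calendar version, so this
    # comparison may raise instead of returning a bool.
    print(LANDINGPAGE < _VERIFY_TRUST)
except AwesomeVersionCompareException:
    print("not comparable; the membership check runs first")
```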


@@ -2,13 +2,14 @@

 from __future__ import annotations

+from abc import ABC, abstractmethod
 from collections import defaultdict
 from collections.abc import Awaitable
 from contextlib import suppress
 import logging
 import re
 from time import time
-from typing import Any
+from typing import Any, cast
 from uuid import uuid4

 from awesomeversion import AwesomeVersion
@@ -79,7 +80,7 @@ def _container_state_from_model(docker_container: Container) -> ContainerState:
     return ContainerState.STOPPED


-class DockerInterface(JobGroup):
+class DockerInterface(JobGroup, ABC):
     """Docker Supervisor interface."""

     def __init__(self, coresys: CoreSys):
@@ -100,9 +101,9 @@ class DockerInterface(JobGroup):
         return 10

     @property
-    def name(self) -> str | None:
+    @abstractmethod
+    def name(self) -> str:
         """Return name of Docker container."""
-        return None

     @property
     def meta_config(self) -> dict[str, Any]:
@@ -153,7 +154,7 @@ class DockerInterface(JobGroup):
     @property
     def in_progress(self) -> bool:
         """Return True if a task is in progress."""
-        return self.active_job
+        return self.active_job is not None

     @property
     def restart_policy(self) -> RestartPolicy | None:
@@ -230,7 +231,10 @@ class DockerInterface(JobGroup):
     ) -> None:
         """Pull docker image."""
         image = image or self.image
-        arch = arch or self.sys_arch.supervisor
+        if not image:
+            raise ValueError("Cannot pull without an image!")
+
+        image_arch = str(arch) if arch else self.sys_arch.supervisor

         _LOGGER.info("Downloading docker image %s with tag %s.", image, version)
         try:
@@ -242,12 +246,12 @@ class DockerInterface(JobGroup):
             docker_image = await self.sys_run_in_executor(
                 self.sys_docker.images.pull,
                 f"{image}:{version!s}",
-                platform=MAP_ARCH[arch],
+                platform=MAP_ARCH[image_arch],
             )

             # Validate content
             try:
-                await self._validate_trust(docker_image.id, image, version)
+                await self._validate_trust(cast(str, docker_image.id))
             except CodeNotaryError:
                 with suppress(docker.errors.DockerException):
                     await self.sys_run_in_executor(
@@ -355,7 +359,7 @@ class DockerInterface(JobGroup):
         self.sys_bus.fire_event(
             BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
             DockerContainerStateEvent(
-                self.name, state, docker_container.id, int(time())
+                self.name, state, cast(str, docker_container.id), int(time())
             ),
         )
@@ -451,10 +455,12 @@ class DockerInterface(JobGroup):
         self,
         version: AwesomeVersion,
         expected_image: str,
-        expected_arch: CpuArch | None = None,
+        expected_cpu_arch: CpuArch | None = None,
     ) -> None:
         """Check we have expected image with correct arch."""
-        expected_arch = expected_arch or self.sys_arch.supervisor
+        expected_image_cpu_arch = (
+            str(expected_cpu_arch) if expected_cpu_arch else self.sys_arch.supervisor
+        )
         image_name = f"{expected_image}:{version!s}"
         if self.image == expected_image:
             try:
@@ -472,13 +478,22 @@ class DockerInterface(JobGroup):
                 image_arch = f"{image_arch}/{image.attrs['Variant']}"

             # If we have an image and its the right arch, all set
-            if MAP_ARCH[expected_arch] == image_arch:
+            # It seems that newer Docker version return a variant for arm64 images.
+            # Make sure we match linux/arm64 and linux/arm64/v8.
+            expected_image_arch = MAP_ARCH[expected_image_cpu_arch]
+            if image_arch.startswith(expected_image_arch):
                 return

+        _LOGGER.info(
+            "Image %s has arch %s, expected %s. Reinstalling.",
+            image_name,
+            image_arch,
+            expected_image_arch,
+        )
+
         # We're missing the image we need. Stop and clean up what we have then pull the right one
         with suppress(DockerError):
             await self.remove()
-        await self.install(version, expected_image, arch=expected_arch)
+        await self.install(version, expected_image, arch=expected_image_cpu_arch)

     @Job(
         name="docker_interface_update",
@@ -613,9 +628,7 @@ class DockerInterface(JobGroup):
             self.sys_docker.container_run_inside, self.name, command
         )

-    async def _validate_trust(
-        self, image_id: str, image: str, version: AwesomeVersion
-    ) -> None:
+    async def _validate_trust(self, image_id: str) -> None:
         """Validate trust of content."""
         checksum = image_id.partition(":")[2]
         return await self.sys_security.verify_own_content(checksum)
@@ -634,4 +647,4 @@ class DockerInterface(JobGroup):
         except (docker.errors.DockerException, requests.RequestException):
             return

-        await self._validate_trust(image.id, self.image, self.version)
+        await self._validate_trust(cast(str, image.id))
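
The arch check switches from equality to a prefix match because, as the new comment notes, newer Docker engines report a variant for arm64 images. A tiny sketch of the comparison, using the platform strings from the diff's own comment:

```python
expected_image_arch = "linux/arm64"  # what MAP_ARCH would resolve for aarch64

for reported in ("linux/arm64", "linux/arm64/v8"):
    # Exact equality would reject the variant form; the prefix match
    # accepts both spellings of the same architecture.
    assert reported.startswith(expected_image_arch)
print("both variants accepted")
```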


@@ -7,7 +7,7 @@ from ipaddress import IPv4Address
 import logging
 import os
 from pathlib import Path
-from typing import Any, Final, Self
+from typing import Any, Final, Self, cast

 import attr
 from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
@@ -22,6 +22,7 @@ from docker.types.daemon import CancellableStream
 import requests

 from ..const import (
+    ATTR_ENABLE_IPV6,
     ATTR_REGISTRIES,
     DNS_SUFFIX,
     DOCKER_NETWORK,
@@ -83,7 +84,7 @@ class DockerInfo:
         """Return true, if CONFIG_RT_GROUP_SCHED is loaded."""
         if not Path("/sys/fs/cgroup/cpu/cpu.rt_runtime_us").exists():
             return False
-        return bool(os.environ.get(ENV_SUPERVISOR_CPU_RT, 0))
+        return bool(os.environ.get(ENV_SUPERVISOR_CPU_RT) == "1")
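
The `ENV_SUPERVISOR_CPU_RT` change fixes a truthiness bug: any non-empty string, including "0", is truthy in Python, so the old check treated an explicit opt-out as an opt-in. A self-contained demonstration (using the constant's likely value as a stand-in):

```python
import os

os.environ["SUPERVISOR_CPU_RT"] = "0"  # user explicitly disables the feature

# Old check: bool() of any non-empty string is True, so "0" enabled it.
print(bool(os.environ.get("SUPERVISOR_CPU_RT", 0)))      # True (bug)

# New check: only the literal "1" counts as enabled.
print(bool(os.environ.get("SUPERVISOR_CPU_RT") == "1"))  # False
```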
 class DockerConfig(FileConfiguration):
@@ -93,6 +94,16 @@ class DockerConfig(FileConfiguration):
         """Initialize the JSON configuration."""
         super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)

+    @property
+    def enable_ipv6(self) -> bool:
+        """Return IPv6 configuration for docker network."""
+        return self._data.get(ATTR_ENABLE_IPV6, False)
+
+    @enable_ipv6.setter
+    def enable_ipv6(self, value: bool) -> None:
+        """Set IPv6 configuration for docker network."""
+        self._data[ATTR_ENABLE_IPV6] = value
+
     @property
     def registries(self) -> dict[str, Any]:
         """Return credentials for docker registries."""
@@ -124,9 +135,11 @@ class DockerAPI:
                 timeout=900,
             ),
         )
-        self._network = DockerNetwork(self._docker)
         self._info = DockerInfo.new(self.docker.info())
         await self.config.read_data()
+        self._network = await DockerNetwork(self.docker).post_init(
+            self.config.enable_ipv6
+        )
         return self

     @property
@@ -202,7 +215,7 @@ class DockerAPI:
         if "labels" not in kwargs:
             kwargs["labels"] = {}
         elif isinstance(kwargs["labels"], list):
-            kwargs["labels"] = {label: "" for label in kwargs["labels"]}
+            kwargs["labels"] = dict.fromkeys(kwargs["labels"], "")

         kwargs["labels"][LABEL_MANAGED] = ""

@@ -255,7 +268,7 @@ class DockerAPI:
         # Check if container is register on host
         # https://github.com/moby/moby/issues/23302
-        if name in (
+        if name and name in (
             val.get("Name")
             for val in host_network.attrs.get("Containers", {}).values()
         ):
@@ -281,8 +294,8 @@ class DockerAPI:
     def run_command(
         self,
         image: str,
-        tag: str = "latest",
-        command: str | None = None,
+        version: str = "latest",
+        command: str | list[str] | None = None,
         **kwargs: Any,
     ) -> CommandReturn:
         """Create a temporary container and run command.
@@ -292,12 +305,15 @@ class DockerAPI:
         stdout = kwargs.get("stdout", True)
         stderr = kwargs.get("stderr", True)

-        _LOGGER.info("Runing command '%s' on %s", command, image)
+        image_with_tag = f"{image}:{version}"
+
+        _LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
         container = None
         try:
             container = self.docker.containers.run(
-                f"{image}:{tag}",
+                image_with_tag,
                 command=command,
+                detach=True,
                 network=self.network.name,
                 use_config_proxy=False,
                 **kwargs,
@@ -314,9 +330,9 @@ class DockerAPI:
             # cleanup container
             if container:
                 with suppress(docker_errors.DockerException, requests.RequestException):
-                    container.remove(force=True)
+                    container.remove(force=True, v=True)

-        return CommandReturn(result.get("StatusCode"), output)
+        return CommandReturn(result["StatusCode"], output)

     def repair(self) -> None:
         """Repair local docker overlayfs2 issues."""
@@ -405,7 +421,8 @@ class DockerAPI:
         # Check the image is correct and state is good
         return (
-            docker_container.image.id == docker_image.id
+            docker_container.image is not None
+            and docker_container.image.id == docker_image.id
             and docker_container.status in ("exited", "running", "created")
         )

@@ -428,7 +445,7 @@ class DockerAPI:
         if remove_container:
             with suppress(DockerException, requests.RequestException):
                 _LOGGER.info("Cleaning %s application", name)
-                docker_container.remove(force=True)
+                docker_container.remove(force=True, v=True)

     def start_container(self, name: str) -> None:
         """Start Docker container."""
@@ -540,7 +557,7 @@ class DockerAPI:
         """Import a tar file as image."""
         try:
             with tar_file.open("rb") as read_tar:
-                docker_image_list: list[Image] = self.images.load(read_tar)
+                docker_image_list: list[Image] = self.images.load(read_tar)  # type: ignore

             if len(docker_image_list) != 1:
                 _LOGGER.warning(
@@ -557,7 +574,7 @@ class DockerAPI:
     def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None:
         """Export current images into a tar file."""
         try:
-            image = self.api.get_image(f"{image}:{version}")
+            docker_image = self.api.get_image(f"{image}:{version}")
         except (DockerException, requests.RequestException) as err:
             raise DockerError(
                 f"Can't fetch image {image}: {err}", _LOGGER.error
@@ -566,7 +583,7 @@ class DockerAPI:
         _LOGGER.info("Export image %s to %s", image, tar_file)
         try:
             with tar_file.open("wb") as write_tar:
-                for chunk in image:
+                for chunk in docker_image:
                     write_tar.write(chunk)
         except (OSError, requests.RequestException) as err:
             raise DockerError(
@@ -586,7 +603,7 @@ class DockerAPI:
         """Clean up old versions of an image."""
         image = f"{current_image}:{current_version!s}"
         try:
-            keep: set[str] = {self.images.get(image).id}
+            keep = {cast(str, self.images.get(image).id)}
         except ImageNotFound:
             raise DockerNotFound(
                 f"{current_image} not found for cleanup", _LOGGER.warning
@@ -602,7 +619,7 @@ class DockerAPI:
                 for image in keep_images:
                     # If its not found, no need to preserve it from getting removed
                     with suppress(ImageNotFound):
-                        keep.add(self.images.get(image).id)
+                        keep.add(cast(str, self.images.get(image).id))
         except (DockerException, requests.RequestException) as err:
             raise DockerError(
                 f"Failed to get one or more images from {keep} during cleanup",
@@ -614,16 +631,18 @@ class DockerAPI:
             old_images | {current_image} if old_images else {current_image}
         )
         try:
-            images_list = self.images.list(name=image_names)
+            # This API accepts a list of image names. Tested and confirmed working on docker==7.1.0
+            # Its typing does say only `str` though. Bit concerning, could an update break this?
+            images_list = self.images.list(name=image_names)  # type: ignore
         except (DockerException, requests.RequestException) as err:
             raise DockerError(
                 f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
             ) from err

-        for image in images_list:
-            if image.id in keep:
+        for docker_image in images_list:
+            if docker_image.id in keep:
                 continue

             with suppress(DockerException, requests.RequestException):
-                _LOGGER.info("Cleanup images: %s", image.tags)
-                self.images.remove(image.id, force=True)
+                _LOGGER.info("Cleanup images: %s", docker_image.tags)
+                self.images.remove(docker_image.id, force=True)
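
`run_command()` now takes a `version` keyword instead of `tag`, accepts list-form commands, always detaches, and removes the helper container together with its anonymous volumes (`v=True`). A usage sketch based on the signature above; `docker_api` stands in for an initialized `DockerAPI` instance and is not code from the repository:

```python
# Hypothetical call site.
result = docker_api.run_command(
    "docker.io/library/docker",
    version="28.3.2-cli",
    command=["version", "--format", "{{.Client.Version}}"],
)
if result.exit_code != 0:
    raise RuntimeError("helper container failed")
print(result.output.decode("utf-8"))
```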


@@ -37,7 +37,7 @@ class DockerMonitor(CoreSysAttributes, Thread):

     def watch_container(self, container: Container):
         """If container is missing the managed label, add name to list."""
-        if LABEL_MANAGED not in container.labels:
+        if LABEL_MANAGED not in container.labels and container.name:
             self._unlabeled_managed_containers += [container.name]

     async def load(self):
@@ -54,8 +54,11 @@ class DockerMonitor(CoreSysAttributes, Thread):

         _LOGGER.info("Stopped docker events monitor")

-    def run(self):
+    def run(self) -> None:
         """Monitor and process docker events."""
+        if not self._events:
+            raise RuntimeError("Monitor has not been loaded!")
+
         for event in self._events:
             attributes: dict[str, str] = event.get("Actor", {}).get("Attributes", {})


@@ -1,17 +1,52 @@
 """Internal network manager for Supervisor."""

+import asyncio
 from contextlib import suppress
 from ipaddress import IPv4Address
 import logging
+from typing import Self

 import docker
 import requests

-from ..const import DOCKER_NETWORK, DOCKER_NETWORK_MASK, DOCKER_NETWORK_RANGE
+from ..const import (
+    ATTR_AUDIO,
+    ATTR_CLI,
+    ATTR_DNS,
+    ATTR_ENABLE_IPV6,
+    ATTR_OBSERVER,
+    ATTR_SUPERVISOR,
+    DOCKER_IPV4_NETWORK_MASK,
+    DOCKER_IPV4_NETWORK_RANGE,
+    DOCKER_IPV6_NETWORK_MASK,
+    DOCKER_NETWORK,
+    DOCKER_NETWORK_DRIVER,
+    DOCKER_PREFIX,
+    OBSERVER_DOCKER_NAME,
+    SUPERVISOR_DOCKER_NAME,
+)
 from ..exceptions import DockerError

 _LOGGER: logging.Logger = logging.getLogger(__name__)

+DOCKER_ENABLEIPV6 = "EnableIPv6"
+DOCKER_NETWORK_PARAMS = {
+    "name": DOCKER_NETWORK,
+    "driver": DOCKER_NETWORK_DRIVER,
+    "ipam": docker.types.IPAMConfig(
+        pool_configs=[
+            docker.types.IPAMPool(subnet=str(DOCKER_IPV6_NETWORK_MASK)),
+            docker.types.IPAMPool(
+                subnet=str(DOCKER_IPV4_NETWORK_MASK),
+                gateway=str(DOCKER_IPV4_NETWORK_MASK[1]),
+                iprange=str(DOCKER_IPV4_NETWORK_RANGE),
+            ),
+        ]
+    ),
+    ATTR_ENABLE_IPV6: True,
+    "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
+}
+

 class DockerNetwork:
     """Internal Supervisor Network.
@@ -22,7 +57,14 @@ class DockerNetwork:
     def __init__(self, docker_client: docker.DockerClient):
         """Initialize internal Supervisor network."""
         self.docker: docker.DockerClient = docker_client
-        self._network: docker.models.networks.Network = self._get_network()
+        self._network: docker.models.networks.Network
+
+    async def post_init(self, enable_ipv6: bool = False) -> Self:
+        """Post init actions that must be done in event loop."""
+        self._network = await asyncio.get_running_loop().run_in_executor(
+            None, self._get_network, enable_ipv6
+        )
+        return self

     @property
     def name(self) -> str:
@@ -42,55 +84,101 @@ class DockerNetwork:
     @property
     def gateway(self) -> IPv4Address:
         """Return gateway of the network."""
-        return DOCKER_NETWORK_MASK[1]
+        return DOCKER_IPV4_NETWORK_MASK[1]

     @property
     def supervisor(self) -> IPv4Address:
         """Return supervisor of the network."""
-        return DOCKER_NETWORK_MASK[2]
+        return DOCKER_IPV4_NETWORK_MASK[2]

     @property
     def dns(self) -> IPv4Address:
         """Return dns of the network."""
-        return DOCKER_NETWORK_MASK[3]
+        return DOCKER_IPV4_NETWORK_MASK[3]

     @property
     def audio(self) -> IPv4Address:
         """Return audio of the network."""
-        return DOCKER_NETWORK_MASK[4]
+        return DOCKER_IPV4_NETWORK_MASK[4]

     @property
     def cli(self) -> IPv4Address:
         """Return cli of the network."""
-        return DOCKER_NETWORK_MASK[5]
+        return DOCKER_IPV4_NETWORK_MASK[5]

     @property
     def observer(self) -> IPv4Address:
         """Return observer of the network."""
-        return DOCKER_NETWORK_MASK[6]
+        return DOCKER_IPV4_NETWORK_MASK[6]

-    def _get_network(self) -> docker.models.networks.Network:
+    def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network:
         """Get supervisor network."""
         try:
-            return self.docker.networks.get(DOCKER_NETWORK)
+            if network := self.docker.networks.get(DOCKER_NETWORK):
+                if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6:
+                    return network
+                _LOGGER.info(
+                    "Migrating Supervisor network to %s",
+                    "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
+                )
+                if (containers := network.containers) and (
+                    containers_all := all(
+                        container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
+                        for container in containers
+                    )
+                ):
+                    for container in containers:
+                        with suppress(
+                            docker.errors.APIError,
+                            docker.errors.DockerException,
+                            requests.RequestException,
+                        ):
+                            network.disconnect(container, force=True)
+                if not containers or containers_all:
+                    try:
+                        network.remove()
+                    except docker.errors.APIError:
+                        _LOGGER.warning("Failed to remove existing Supervisor network")
+                        return network
+                else:
+                    _LOGGER.warning(
+                        "System appears to be running, "
+                        "not applying Supervisor network change. "
+                        "Reboot your system to apply the change."
+                    )
+                    return network
         except docker.errors.NotFound:
             _LOGGER.info("Can't find Supervisor network, creating a new network")

-        ipam_pool = docker.types.IPAMPool(
-            subnet=str(DOCKER_NETWORK_MASK),
-            gateway=str(self.gateway),
-            iprange=str(DOCKER_NETWORK_RANGE),
-        )
+        network_params = DOCKER_NETWORK_PARAMS.copy()
+        network_params[ATTR_ENABLE_IPV6] = enable_ipv6

-        ipam_config = docker.types.IPAMConfig(pool_configs=[ipam_pool])
+        try:
+            self._network = self.docker.networks.create(**network_params)  # type: ignore
+        except docker.errors.APIError as err:
+            raise DockerError(
+                f"Can't create Supervisor network: {err}", _LOGGER.error
+            ) from err

-        return self.docker.networks.create(
-            DOCKER_NETWORK,
-            driver="bridge",
-            ipam=ipam_config,
-            enable_ipv6=False,
-            options={"com.docker.network.bridge.name": DOCKER_NETWORK},
-        )
+        with suppress(DockerError):
+            self.attach_container_by_name(
+                SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
+            )
+
+        with suppress(DockerError):
+            self.attach_container_by_name(
+                OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
+            )
+
+        for name, ip in (
+            (ATTR_CLI, self.cli),
+            (ATTR_DNS, self.dns),
+            (ATTR_AUDIO, self.audio),
+        ):
+            with suppress(DockerError):
+                self.attach_container_by_name(f"{DOCKER_PREFIX}_{name}", [name], ip)
+
+        return self._network

     def attach_container(
         self,
@@ -102,26 +190,55 @@ class DockerNetwork:

         Need run inside executor.
         """
-        ipv4_address = str(ipv4) if ipv4 else None
-
         # Reload Network information
         with suppress(docker.errors.DockerException, requests.RequestException):
             self.network.reload()

         # Check stale Network
-        if container.name in (
+        if container.name and container.name in (
             val.get("Name") for val in self.network.attrs.get("Containers", {}).values()
         ):
             self.stale_cleanup(container.name)

         # Attach Network
         try:
-            self.network.connect(container, aliases=alias, ipv4_address=ipv4_address)
-        except docker.errors.APIError as err:
+            self.network.connect(
+                container, aliases=alias, ipv4_address=str(ipv4) if ipv4 else None
+            )
+        except (
+            docker.errors.NotFound,
+            docker.errors.APIError,
+            docker.errors.DockerException,
+            requests.RequestException,
+        ) as err:
             raise DockerError(
-                f"Can't link container to hassio-net: {err}", _LOGGER.error
+                f"Can't connect {container.name} to Supervisor network: {err}",
+                _LOGGER.error,
             ) from err

+    def attach_container_by_name(
+        self,
+        name: str,
+        alias: list[str] | None = None,
+        ipv4: IPv4Address | None = None,
+    ) -> None:
+        """Attach container to Supervisor network.
+
+        Need run inside executor.
+        """
+        try:
+            container = self.docker.containers.get(name)
+        except (
+            docker.errors.NotFound,
+            docker.errors.APIError,
+            docker.errors.DockerException,
+            requests.RequestException,
+        ) as err:
+            raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err
+
+        if container.id not in self.containers:
+            self.attach_container(container, alias, ipv4)
+
     def detach_default_bridge(
         self, container: docker.models.containers.Container
     ) -> None:
@@ -130,25 +247,33 @@ class DockerNetwork:

         Need run inside executor.
         """
         try:
-            default_network = self.docker.networks.get("bridge")
+            default_network = self.docker.networks.get(DOCKER_NETWORK_DRIVER)
             default_network.disconnect(container)

         except docker.errors.NotFound:
-            return
-
-        except docker.errors.APIError as err:
+            pass
+        except (
+            docker.errors.APIError,
+            docker.errors.DockerException,
+            requests.RequestException,
+        ) as err:
             raise DockerError(
-                f"Can't disconnect container from default: {err}", _LOGGER.warning
+                f"Can't disconnect {container.name} from default network: {err}",
+                _LOGGER.warning,
             ) from err

-    def stale_cleanup(self, container_name: str):
-        """Remove force a container from Network.
+    def stale_cleanup(self, name: str) -> None:
+        """Force remove a container from Network.

         Fix: https://github.com/moby/moby/issues/23302
         """
         try:
-            self.network.disconnect(container_name, force=True)
-        except docker.errors.NotFound:
-            pass
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError() from err
+            self.network.disconnect(name, force=True)
+        except (
+            docker.errors.APIError,
+            docker.errors.DockerException,
+            requests.RequestException,
+        ) as err:
+            raise DockerError(
+                f"Can't disconnect {name} from Supervisor network: {err}",
+                _LOGGER.warning,
+            ) from err
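
The renamed `DOCKER_IPV4_NETWORK_MASK` makes the address arithmetic in the properties above easier to follow: indexing an `IPv4Network` yields its n-th address, which is also how the gateway in `DOCKER_NETWORK_PARAMS` is derived. A sketch using the subnet the Supervisor conventionally uses (the exact value lives in `const.py` and is an assumption here):

```python
from ipaddress import IPv4Network

DOCKER_IPV4_NETWORK_MASK = IPv4Network("172.30.32.0/23")  # assumed value

print(DOCKER_IPV4_NETWORK_MASK[1])  # 172.30.32.1 -> gateway
print(DOCKER_IPV4_NETWORK_MASK[2])  # 172.30.32.2 -> supervisor
print(DOCKER_IPV4_NETWORK_MASK[3])  # 172.30.32.3 -> dns
```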


@@ -2,7 +2,7 @@

 import logging

-from ..const import DOCKER_NETWORK_MASK
+from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
 from ..jobs.const import JobExecutionLimit
@@ -12,7 +12,6 @@ from .interface import DockerInterface

 _LOGGER: logging.Logger = logging.getLogger(__name__)

-OBSERVER_DOCKER_NAME: str = "hassio_observer"
 ENV_NETWORK_MASK: str = "NETWORK_MASK"
@@ -49,7 +48,7 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
             environment={
                 ENV_TIME: self.sys_timezone,
                 ENV_TOKEN: self.sys_plugins.observer.supervisor_token,
-                ENV_NETWORK_MASK: DOCKER_NETWORK_MASK,
+                ENV_NETWORK_MASK: DOCKER_IPV4_NETWORK_MASK,
             },
             mounts=[MOUNT_DOCKER],
             ports={"80/tcp": 4357},


@@ -39,7 +39,7 @@ class DockerSupervisor(DockerInterface):
     @property
     def host_mounts_available(self) -> bool:
         """Return True if container can see mounts on host within its data directory."""
-        return self._meta and any(
+        return self._meta is not None and any(
             mount.get("Propagation") == PropagationMode.SLAVE
             for mount in self.meta_mounts
             if mount.get("Destination") == "/data"
@@ -89,7 +89,18 @@ class DockerSupervisor(DockerInterface):
         """
         try:
             docker_container = self.sys_docker.containers.get(self.name)
+        except (docker.errors.DockerException, requests.RequestException) as err:
+            raise DockerError(
+                f"Could not get Supervisor container for retag: {err}", _LOGGER.error
+            ) from err
+
+        if not self.image or not docker_container.image:
+            raise DockerError(
+                "Could not locate image from container metadata for retag",
+                _LOGGER.error,
+            )

+        try:
             docker_container.image.tag(self.image, tag=str(self.version))
             docker_container.image.tag(self.image, tag="latest")
         except (docker.errors.DockerException, requests.RequestException) as err:
@@ -110,7 +121,18 @@ class DockerSupervisor(DockerInterface):
         try:
             docker_container = self.sys_docker.containers.get(self.name)
             docker_image = self.sys_docker.images.get(f"{image}:{version!s}")
+        except (docker.errors.DockerException, requests.RequestException) as err:
+            raise DockerError(
+                f"Can't get image or container to fix start tag: {err}", _LOGGER.error
+            ) from err
+
+        if not docker_container.image:
+            raise DockerError(
+                "Cannot locate image from container metadata to fix start tag",
+                _LOGGER.error,
+            )

+        try:
             # Find start tag
             for tag in docker_container.image.tags:
                 start_image = tag.partition(":")[0]


@@ -84,10 +84,6 @@ class HomeAssistantWSError(HomeAssistantAPIError):
     """Home Assistant websocket error."""


-class HomeAssistantWSNotSupported(HomeAssistantWSError):
-    """Raise when WebSockets are not supported."""
-
-
 class HomeAssistantWSConnectionError(HomeAssistantWSError):
     """Raise when the WebSocket connection has an error."""


@@ -72,7 +72,7 @@ class HwDisk(CoreSysAttributes):
         _, _, free = shutil.disk_usage(path)
         return round(free / (1024.0**3), 1)

-    def _get_mountinfo(self, path: str) -> str:
+    def _get_mountinfo(self, path: str) -> list[str] | None:
         mountinfo = _MOUNTINFO.read_text(encoding="utf-8")
         for line in mountinfo.splitlines():
             mountinfoarr = line.split()
@@ -80,7 +80,7 @@ class HwDisk(CoreSysAttributes):
                 return mountinfoarr
         return None

-    def _get_mount_source(self, path: str) -> str:
+    def _get_mount_source(self, path: str) -> str | None:
         mountinfoarr = self._get_mountinfo(path)

         if mountinfoarr is None:
@@ -92,7 +92,7 @@ class HwDisk(CoreSysAttributes):
             optionsep += 1
         return mountinfoarr[optionsep + 2]

-    def _try_get_emmc_life_time(self, device_name: str) -> float:
+    def _try_get_emmc_life_time(self, device_name: str) -> float | None:
         # Get eMMC life_time
         life_time_path = Path(_BLOCK_DEVICE_EMMC_LIFE_TIME.format(device_name))
@@ -121,13 +121,13 @@ class HwDisk(CoreSysAttributes):
         # Return the pessimistic estimate (0x02 -> 10%-20%, return 20%)
         return life_time_value * 10.0

-    def get_disk_life_time(self, path: str | Path) -> float:
+    def get_disk_life_time(self, path: str | Path) -> float | None:
         """Return life time estimate of the underlying SSD drive.

         Must be run in executor.
         """
         mount_source = self._get_mount_source(str(path))
-        if mount_source == "overlay":
+        if not mount_source or mount_source == "overlay":
             return None

         mount_source_path = Path(mount_source)


@@ -1,8 +1,9 @@
 """Hardware Manager of Supervisor."""

+from __future__ import annotations
+
 import logging
 from pathlib import Path
-from typing import Self

 import pyudev

@@ -48,28 +49,30 @@ _STATIC_NODES: list[Device] = [
 class HardwareManager(CoreSysAttributes):
     """Hardware manager for supervisor."""

-    def __init__(self, coresys: CoreSys):
+    def __init__(self, coresys: CoreSys, udev: pyudev.Context) -> None:
         """Initialize Hardware Monitor object."""
         self.coresys: CoreSys = coresys
         self._devices: dict[str, Device] = {}
-        self._udev: pyudev.Context | None = None
+        self._udev: pyudev.Context = udev

-        self._monitor: HwMonitor | None = None
+        self._monitor: HwMonitor = HwMonitor(coresys, udev)
         self._helper: HwHelper = HwHelper(coresys)
         self._policy: HwPolicy = HwPolicy(coresys)
         self._disk: HwDisk = HwDisk(coresys)

-    async def post_init(self) -> Self:
-        """Complete initialization of obect within event loop."""
-        self._udev = await self.sys_run_in_executor(pyudev.Context)
-        self._monitor: HwMonitor = HwMonitor(self.coresys, self._udev)
-        return self
+    @classmethod
+    async def create(cls: type[HardwareManager], coresys: CoreSys) -> HardwareManager:
+        """Complete initialization of a HardwareManager object within event loop."""
+        return cls(coresys, await coresys.run_in_executor(pyudev.Context))
+
+    @property
+    def udev(self) -> pyudev.Context:
+        """Return Udev context instance."""
+        return self._udev

     @property
     def monitor(self) -> HwMonitor:
         """Return Hardware Monitor instance."""
-        if not self._monitor:
-            raise RuntimeError("Hardware monitor not initialized!")
         return self._monitor

     @property
@@ -129,7 +132,7 @@ class HardwareManager(CoreSysAttributes):
     def check_subsystem_parents(self, device: Device, subsystem: UdevSubsystem) -> bool:
         """Return True if the device is part of the given subsystem parent."""
         udev_device: pyudev.Device = pyudev.Devices.from_sys_path(
-            self._udev, str(device.sysfs)
+            self.udev, str(device.sysfs)
         )
         return udev_device.find_parent(subsystem) is not None

@@ -138,7 +141,7 @@ class HardwareManager(CoreSysAttributes):
         self._devices.clear()

         # Exctract all devices
-        for device in self._udev.list_devices():
+        for device in self.udev.list_devices():
             # Skip devices without mapping
             try:
                 if not device.device_node or self.helper.hide_virtual_device(device):
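
`HardwareManager` now follows an async-factory pattern: the blocking `pyudev.Context()` construction happens in an executor inside `create()`, and `__init__` receives a ready context, so no attribute needs an `| None` type or a runtime guard. A minimal sketch of the pattern with stand-in names:

```python
import asyncio

import pyudev


class HardwareManagerSketch:
    """Illustrative stand-in, not the Supervisor class itself."""

    def __init__(self, udev: pyudev.Context) -> None:
        self._udev = udev  # always set: no Optional, no RuntimeError guard

    @classmethod
    async def create(cls) -> "HardwareManagerSketch":
        # pyudev.Context() does blocking I/O, so build it off the event loop.
        loop = asyncio.get_running_loop()
        return cls(await loop.run_in_executor(None, pyudev.Context))
```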


@@ -51,15 +51,17 @@ class HomeAssistantAPI(CoreSysAttributes):
     )
     async def ensure_access_token(self) -> None:
         """Ensure there is an access token."""
-        if self.access_token is not None and self._access_token_expires > datetime.now(
-            tz=UTC
+        if (
+            self.access_token
+            and self._access_token_expires
+            and self._access_token_expires > datetime.now(tz=UTC)
         ):
             return

         with suppress(asyncio.TimeoutError, aiohttp.ClientError):
             async with self.sys_websession.post(
                 f"{self.sys_homeassistant.api_url}/auth/token",
-                timeout=30,
+                timeout=aiohttp.ClientTimeout(total=30),
                 data={
                     "grant_type": "refresh_token",
                     "refresh_token": self.sys_homeassistant.refresh_token,


@@ -32,6 +32,7 @@ class WSType(StrEnum):
     SUPERVISOR_EVENT = "supervisor/event"
     BACKUP_START = "backup/start"
     BACKUP_END = "backup/end"
+    HASSIO_UPDATE_ADDON = "hassio/update/addon"


 class WSEvent(StrEnum):


@ -87,29 +87,29 @@ class HomeAssistantCore(JobGroup):
try: try:
# Evaluate Version if we lost this information # Evaluate Version if we lost this information
if not self.sys_homeassistant.version: if self.sys_homeassistant.version:
version = self.sys_homeassistant.version
else:
self.sys_homeassistant.version = ( self.sys_homeassistant.version = (
await self.instance.get_latest_version() version
) ) = await self.instance.get_latest_version()
await self.instance.attach( await self.instance.attach(version=version, skip_state_event_if_down=True)
version=self.sys_homeassistant.version, skip_state_event_if_down=True
)
# Ensure we are using correct image for this system (unless user has overridden it) # Ensure we are using correct image for this system (unless user has overridden it)
if not self.sys_homeassistant.override_image: if not self.sys_homeassistant.override_image:
await self.instance.check_image( await self.instance.check_image(
self.sys_homeassistant.version, self.sys_homeassistant.default_image version, self.sys_homeassistant.default_image
) )
self.sys_homeassistant.image = self.sys_homeassistant.default_image self.sys_homeassistant.set_image(self.sys_homeassistant.default_image)
except DockerError: except DockerError:
_LOGGER.info( _LOGGER.info(
"No Home Assistant Docker image %s found.", self.sys_homeassistant.image "No Home Assistant Docker image %s found.", self.sys_homeassistant.image
) )
await self.install_landingpage() await self.install_landingpage()
else: else:
self.sys_homeassistant.version = self.instance.version self.sys_homeassistant.version = self.instance.version or version
self.sys_homeassistant.image = self.instance.image self.sys_homeassistant.set_image(self.instance.image)
await self.sys_homeassistant.save_data() await self.sys_homeassistant.save_data()
# Start landingpage # Start landingpage
@ -138,7 +138,7 @@ class HomeAssistantCore(JobGroup):
else: else:
_LOGGER.info("Using preinstalled landingpage") _LOGGER.info("Using preinstalled landingpage")
self.sys_homeassistant.version = LANDINGPAGE self.sys_homeassistant.version = LANDINGPAGE
self.sys_homeassistant.image = self.instance.image self.sys_homeassistant.set_image(self.instance.image)
await self.sys_homeassistant.save_data() await self.sys_homeassistant.save_data()
return return
@ -166,7 +166,7 @@ class HomeAssistantCore(JobGroup):
await asyncio.sleep(30) await asyncio.sleep(30)
self.sys_homeassistant.version = LANDINGPAGE self.sys_homeassistant.version = LANDINGPAGE
self.sys_homeassistant.image = self.sys_updater.image_homeassistant self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
await self.sys_homeassistant.save_data() await self.sys_homeassistant.save_data()
@Job( @Job(
@ -182,12 +182,13 @@ class HomeAssistantCore(JobGroup):
if not self.sys_homeassistant.latest_version: if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload() await self.sys_updater.reload()
if self.sys_homeassistant.latest_version: if to_version := self.sys_homeassistant.latest_version:
try: try:
await self.instance.update( await self.instance.update(
self.sys_homeassistant.latest_version, to_version,
image=self.sys_updater.image_homeassistant, image=self.sys_updater.image_homeassistant,
) )
self.sys_homeassistant.version = self.instance.version or to_version
break break
            except (DockerError, JobException):
                pass

@@ -198,8 +199,7 @@ class HomeAssistantCore(JobGroup):
            await asyncio.sleep(30)

        _LOGGER.info("Home Assistant docker now installed")
-        self.sys_homeassistant.version = self.instance.version
-        self.sys_homeassistant.image = self.sys_updater.image_homeassistant
+        self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
        await self.sys_homeassistant.save_data()

        # finishing
@@ -231,15 +231,21 @@ class HomeAssistantCore(JobGroup):
        backup: bool | None = False,
    ) -> None:
        """Update HomeAssistant version."""
-        version = version or self.sys_homeassistant.latest_version
+        to_version = version or self.sys_homeassistant.latest_version
+        if not to_version:
+            raise HomeAssistantUpdateError(
+                "Cannot determine latest version of Home Assistant for update",
+                _LOGGER.error,
+            )
+
        old_image = self.sys_homeassistant.image
        rollback = self.sys_homeassistant.version if not self.error_state else None
        running = await self.instance.is_running()
        exists = await self.instance.exists()

-        if exists and version == self.instance.version:
+        if exists and to_version == self.instance.version:
            raise HomeAssistantUpdateError(
-                f"Version {version!s} is already installed", _LOGGER.warning
+                f"Version {to_version!s} is already installed", _LOGGER.warning
            )

        if backup:
@@ -262,8 +268,8 @@ class HomeAssistantCore(JobGroup):
                    "Updating Home Assistant image failed", _LOGGER.warning
                ) from err

-            self.sys_homeassistant.version = self.instance.version
-            self.sys_homeassistant.image = self.sys_updater.image_homeassistant
+            self.sys_homeassistant.version = self.instance.version or to_version
+            self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)

            if running:
                await self.start()
@@ -276,7 +282,7 @@ class HomeAssistantCore(JobGroup):
        # Update Home Assistant
        with suppress(HomeAssistantError):
-            await _update(version)
+            await _update(to_version)

        if not self.error_state and rollback:
            try:
@@ -303,11 +309,11 @@ class HomeAssistantCore(JobGroup):
            # Make a copy of the current log file if it exists
            logfile = self.sys_config.path_homeassistant / "home-assistant.log"
            if logfile.exists():
-                backup = (
+                rollback_log = (
                    self.sys_config.path_homeassistant / "home-assistant-rollback.log"
                )

-                shutil.copy(logfile, backup)
+                shutil.copy(logfile, rollback_log)
                _LOGGER.info(
                    "A backup of the logfile is stored in /config/home-assistant-rollback.log"
                )
@@ -334,7 +340,7 @@ class HomeAssistantCore(JobGroup):
            except DockerError as err:
                raise HomeAssistantError() from err

-            await self._block_till_run(self.sys_homeassistant.version)
+            await self._block_till_run()
        # No Instance/Container found, extended start
        else:
            # Create new API token
@@ -349,7 +355,7 @@ class HomeAssistantCore(JobGroup):
            except DockerError as err:
                raise HomeAssistantError() from err

-            await self._block_till_run(self.sys_homeassistant.version)
+            await self._block_till_run()

    @Job(
        name="home_assistant_core_stop",
@@ -382,7 +388,7 @@ class HomeAssistantCore(JobGroup):
        except DockerError as err:
            raise HomeAssistantError() from err

-        await self._block_till_run(self.sys_homeassistant.version)
+        await self._block_till_run()

    @Job(
        name="home_assistant_core_rebuild",
@@ -440,7 +446,7 @@ class HomeAssistantCore(JobGroup):
    @property
    def in_progress(self) -> bool:
        """Return True if a task is in progress."""
-        return self.instance.in_progress or self.active_job
+        return self.instance.in_progress or self.active_job is not None

    async def check_config(self) -> ConfigResult:
        """Run Home Assistant config check."""
@@ -467,10 +473,10 @@ class HomeAssistantCore(JobGroup):
        _LOGGER.info("Home Assistant config is valid")
        return ConfigResult(True, log)

-    async def _block_till_run(self, version: AwesomeVersion) -> None:
+    async def _block_till_run(self) -> None:
        """Block until Home-Assistant is booting up or startup timeout."""
        # Skip landingpage
-        if version == LANDINGPAGE:
+        if self.sys_homeassistant.version == LANDINGPAGE:
            return
        _LOGGER.info("Wait until Home Assistant is ready")
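
Note: the new `to_version` guard makes update() fail fast when neither an explicit version nor a cached latest version is known, instead of passing None downstream. A minimal sketch of that resolve-or-fail-early pattern, with stand-in names (UpdateError and resolve_target are illustrative, not Supervisor APIs):

    class UpdateError(Exception):
        pass

    def resolve_target(requested: str | None, latest_version: str | None) -> str:
        # Fall back to the latest known version, but fail loudly if neither
        # an explicit version nor a cached latest version is available.
        to_version = requested or latest_version
        if not to_version:
            raise UpdateError("Cannot determine latest version for update")
        return to_version

    assert resolve_target("2025.7.1", None) == "2025.7.1"
    assert resolve_target(None, "2025.7.2") == "2025.7.2"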


@@ -35,6 +35,7 @@ from ..const import (
    FILE_HASSIO_HOMEASSISTANT,
    BusEvent,
    IngressSessionDataUser,
+    IngressSessionDataUserDict,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
@@ -112,12 +113,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
        return self._secrets

    @property
-    def machine(self) -> str:
+    def machine(self) -> str | None:
        """Return the system machines."""
        return self.core.instance.machine

    @property
-    def arch(self) -> str:
+    def arch(self) -> str | None:
        """Return arch of running Home Assistant."""
        return self.core.instance.arch
@@ -190,8 +191,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
            return self._data[ATTR_IMAGE]
        return self.default_image

-    @image.setter
-    def image(self, value: str | None) -> None:
+    def set_image(self, value: str | None) -> None:
        """Set image name of Home Assistant container."""
        self._data[ATTR_IMAGE] = value
@@ -284,7 +284,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
    def need_update(self) -> bool:
        """Return true if a Home Assistant update is available."""
        try:
-            return self.version < self.latest_version
+            return self.version is not None and self.version < self.latest_version
        except (AwesomeVersionException, TypeError):
            return False
@@ -347,7 +347,9 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
        ):
            return

-        configuration = await self.sys_homeassistant.websocket.async_send_command(
+        configuration: (
+            dict[str, Any] | None
+        ) = await self.sys_homeassistant.websocket.async_send_command(
            {ATTR_TYPE: "get_config"}
        )
        if not configuration or "usb" not in configuration.get("components", []):
@@ -359,7 +361,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
    async def begin_backup(self) -> None:
        """Inform Home Assistant a backup is beginning."""
        try:
-            resp = await self.websocket.async_send_command(
+            resp: dict[str, Any] | None = await self.websocket.async_send_command(
                {ATTR_TYPE: WSType.BACKUP_START}
            )
        except HomeAssistantWSError as err:
@@ -378,7 +380,7 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
    async def end_backup(self) -> None:
        """Inform Home Assistant the backup is ending."""
        try:
-            resp = await self.websocket.async_send_command(
+            resp: dict[str, Any] | None = await self.websocket.async_send_command(
                {ATTR_TYPE: WSType.BACKUP_END}
            )
        except HomeAssistantWSError as err:
@@ -555,17 +557,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
    )
    async def get_users(self) -> list[IngressSessionDataUser]:
        """Get list of all configured users."""
-        list_of_users = await self.sys_homeassistant.websocket.async_send_command(
+        list_of_users: (
+            list[IngressSessionDataUserDict] | None
+        ) = await self.sys_homeassistant.websocket.async_send_command(
            {ATTR_TYPE: "config/auth/list"}
        )

        if list_of_users:
-            return [
-                IngressSessionDataUser(
-                    id=data["id"],
-                    username=data.get("username"),
-                    display_name=data.get("name"),
-                )
-                for data in list_of_users
-            ]
+            return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
        return []
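
Note: the inline construction of IngressSessionDataUser is replaced by a from_dict classmethod that is defined elsewhere in this changeset and not shown here. Based on the removed keyword arguments, a plausible sketch of it looks like this (field mapping inferred, hypothetical):

    from dataclasses import dataclass
    from typing import TypedDict

    class IngressSessionDataUserDict(TypedDict, total=False):
        id: str
        username: str | None
        name: str | None

    @dataclass
    class IngressSessionDataUser:
        id: str
        username: str | None = None
        display_name: str | None = None

        @classmethod
        def from_dict(cls, data: IngressSessionDataUserDict) -> "IngressSessionDataUser":
            # Mirrors the keyword mapping in the removed list comprehension:
            # the WS API reports the display name under "name".
            return cls(
                id=data["id"],
                username=data.get("username"),
                display_name=data.get("name"),
            )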


@@ -4,7 +4,7 @@ from __future__ import annotations
 import asyncio
 import logging
-from typing import Any
+from typing import Any, TypeVar, cast

 import aiohttp
 from aiohttp.http_websocket import WSMsgType
@@ -25,7 +25,6 @@ from ..exceptions import (
    HomeAssistantAPIError,
    HomeAssistantWSConnectionError,
    HomeAssistantWSError,
-    HomeAssistantWSNotSupported,
 )
 from ..utils.json import json_dumps
 from .const import CLOSING_STATES, WSEvent, WSType
@@ -38,6 +37,8 @@ MIN_VERSION = {
 _LOGGER: logging.Logger = logging.getLogger(__name__)

+T = TypeVar("T")
+

 class WSClient:
    """Home Assistant Websocket client."""
@@ -53,7 +54,7 @@ class WSClient:
        self._client = client
        self._message_id: int = 0
        self._loop = loop
-        self._futures: dict[int, asyncio.Future[dict]] = {}
+        self._futures: dict[int, asyncio.Future[T]] = {}  # type: ignore

    @property
    def connected(self) -> bool:
@@ -78,9 +79,9 @@ class WSClient:
        try:
            await self._client.send_json(message, dumps=json_dumps)
        except ConnectionError as err:
-            raise HomeAssistantWSConnectionError(err) from err
+            raise HomeAssistantWSConnectionError(str(err)) from err

-    async def async_send_command(self, message: dict[str, Any]) -> dict | None:
+    async def async_send_command(self, message: dict[str, Any]) -> T | None:
        """Send a websocket message, and return the response."""
        self._message_id += 1
        message["id"] = self._message_id
@@ -89,7 +90,7 @@ class WSClient:
        try:
            await self._client.send_json(message, dumps=json_dumps)
        except ConnectionError as err:
-            raise HomeAssistantWSConnectionError(err) from err
+            raise HomeAssistantWSConnectionError(str(err)) from err

        try:
            return await self._futures[message["id"]]
@@ -206,7 +207,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
            self.sys_websession,
            self.sys_loop,
            self.sys_homeassistant.ws_url,
-            self.sys_homeassistant.api.access_token,
+            cast(str, self.sys_homeassistant.api.access_token),
        )

        self.sys_create_task(client.start_listener())
@@ -252,7 +253,7 @@
    )
    async def async_send_message(self, message: dict[str, Any]) -> None:
-        """Send a command with the WS client."""
+        """Send a message with the WS client."""
        # Only commands allowed during startup as those tell Home Assistant to do something.
        # Messages may cause clients to make follow-up API calls so those wait.
        if self.sys_core.state in STARTING_STATES:
@@ -264,84 +265,89 @@
            return

        try:
-            await self._client.async_send_command(message)
+            if self._client:
+                await self._client.async_send_command(message)
        except HomeAssistantWSConnectionError:
            if self._client:
                await self._client.close()
                self._client = None

-    async def async_send_command(self, message: dict[str, Any]) -> dict[str, Any]:
+    async def async_send_command(self, message: dict[str, Any]) -> T | None:
        """Send a command with the WS client and wait for the response."""
        if not await self._can_send(message):
-            return
+            return None

        try:
-            return await self._client.async_send_command(message)
+            if self._client:
+                return await self._client.async_send_command(message)
        except HomeAssistantWSConnectionError:
            if self._client:
                await self._client.close()
                self._client = None
            raise

-    async def async_supervisor_update_event(
-        self,
-        key: str,
-        data: dict[str, Any] | None = None,
-    ) -> None:
-        """Send a supervisor/event command."""
-        try:
-            await self.async_send_message(
-                {
-                    ATTR_TYPE: WSType.SUPERVISOR_EVENT,
-                    ATTR_DATA: {
-                        ATTR_EVENT: WSEvent.SUPERVISOR_UPDATE,
-                        ATTR_UPDATE_KEY: key,
-                        ATTR_DATA: data or {},
-                    },
-                }
-            )
-        except HomeAssistantWSNotSupported:
-            pass
-        except HomeAssistantWSError as err:
-            _LOGGER.error("Could not send message to Home Assistant due to %s", err)
-
-    def supervisor_update_event(
-        self,
-        key: str,
-        data: dict[str, Any] | None = None,
-    ) -> None:
-        """Send a supervisor/event command."""
-        if self.sys_core.state in CLOSING_STATES:
-            return
-        self.sys_create_task(self.async_supervisor_update_event(key, data))
+        return None

    def send_message(self, message: dict[str, Any]) -> None:
-        """Send a supervisor/event command."""
+        """Send a supervisor/event message."""
        if self.sys_core.state in CLOSING_STATES:
            return
        self.sys_create_task(self.async_send_message(message))

-    async def async_supervisor_event(
-        self, event: WSEvent, data: dict[str, Any] | None = None
-    ):
-        """Send a supervisor/event command to Home Assistant."""
+    async def async_supervisor_event_custom(
+        self, event: WSEvent, extra_data: dict[str, Any] | None = None
+    ) -> None:
+        """Send a supervisor/event message to Home Assistant with custom data."""
        try:
            await self.async_send_message(
                {
                    ATTR_TYPE: WSType.SUPERVISOR_EVENT,
                    ATTR_DATA: {
                        ATTR_EVENT: event,
-                        ATTR_DATA: data or {},
+                        **(extra_data or {}),
                    },
                }
            )
-        except HomeAssistantWSNotSupported:
-            pass
        except HomeAssistantWSError as err:
            _LOGGER.error("Could not send message to Home Assistant due to %s", err)

-    def supervisor_event(self, event: WSEvent, data: dict[str, Any] | None = None):
-        """Send a supervisor/event command to Home Assistant."""
+    def supervisor_event_custom(
+        self, event: WSEvent, extra_data: dict[str, Any] | None = None
+    ) -> None:
+        """Send a supervisor/event message to Home Assistant with custom data."""
        if self.sys_core.state in CLOSING_STATES:
            return
-        self.sys_create_task(self.async_supervisor_event(event, data))
+        self.sys_create_task(self.async_supervisor_event_custom(event, extra_data))
+
+    def supervisor_event(
+        self, event: WSEvent, data: dict[str, Any] | None = None
+    ) -> None:
+        """Send a supervisor/event message to Home Assistant."""
+        if self.sys_core.state in CLOSING_STATES:
+            return
+        self.sys_create_task(
+            self.async_supervisor_event_custom(event, {ATTR_DATA: data or {}})
+        )
+
+    async def async_supervisor_update_event(
+        self,
+        key: str,
+        data: dict[str, Any] | None = None,
+    ) -> None:
+        """Send an update supervisor/event message."""
+        await self.async_supervisor_event_custom(
+            WSEvent.SUPERVISOR_UPDATE,
+            {
+                ATTR_UPDATE_KEY: key,
+                ATTR_DATA: data or {},
+            },
+        )
+
+    def supervisor_update_event(
+        self,
+        key: str,
+        data: dict[str, Any] | None = None,
+    ) -> None:
+        """Send an update supervisor/event message."""
+        if self.sys_core.state in CLOSING_STATES:
+            return
+        self.sys_create_task(self.async_supervisor_update_event(key, data))
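
Note: WSClient.async_send_command now returns `T | None`, but the module-level TypeVar is unbound in that signature, so the concrete type has to be supplied at the call site. Both call-site styles used in this diff, shown against a simplified stand-in (async_send_command below is a stub, not the real client):

    import asyncio
    from typing import Any, TypeVar, cast

    T = TypeVar("T")

    async def async_send_command(message: dict[str, Any]) -> T | None:
        # Stub standing in for the real WS round-trip.
        responses: dict[str, Any] = {
            "get_config": {"components": ["usb"]},
            "config/auth/list": [{"id": "a1", "name": "Admin"}],
        }
        return cast(T, responses.get(message["type"]))

    async def demo() -> None:
        # Style 1: annotate the assignment target (as the get_config caller does).
        configuration: dict[str, Any] | None = await async_send_command(
            {"type": "get_config"}
        )
        assert configuration and "usb" in configuration.get("components", [])

        # Style 2: cast the result where a concrete type is required.
        users = cast(
            list[dict[str, Any]],
            await async_send_command({"type": "config/auth/list"}),
        )
        assert users[0]["id"] == "a1"

    asyncio.run(demo())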


@@ -2,17 +2,29 @@
 from dataclasses import dataclass
 from ipaddress import IPv4Address, IPv4Interface, IPv6Address, IPv6Interface
+import logging
 import socket

 from ..dbus.const import (
    ConnectionStateFlags,
    ConnectionStateType,
    DeviceType,
+    InterfaceAddrGenMode as NMInterfaceAddrGenMode,
+    InterfaceIp6Privacy as NMInterfaceIp6Privacy,
    InterfaceMethod as NMInterfaceMethod,
 )
 from ..dbus.network.connection import NetworkConnection
 from ..dbus.network.interface import NetworkInterface
-from .const import AuthMethod, InterfaceMethod, InterfaceType, WifiMode
+from .const import (
+    AuthMethod,
+    InterfaceAddrGenMode,
+    InterfaceIp6Privacy,
+    InterfaceMethod,
+    InterfaceType,
+    WifiMode,
+)
+
+_LOGGER: logging.Logger = logging.getLogger(__name__)

 @dataclass(slots=True)
@@ -46,6 +58,14 @@ class IpSetting:
    nameservers: list[IPv4Address | IPv6Address]

+@dataclass(slots=True)
+class Ip6Setting(IpSetting):
+    """Represent a user IPv6 setting."""
+
+    addr_gen_mode: InterfaceAddrGenMode = InterfaceAddrGenMode.DEFAULT
+    ip6_privacy: InterfaceIp6Privacy = InterfaceIp6Privacy.DEFAULT
+
+
 @dataclass(slots=True)
 class WifiConfig:
    """Represent a wifi configuration."""
@@ -62,7 +82,7 @@ class VlanConfig:
    """Represent a vlan configuration."""

    id: int
-    interface: str
+    interface: str | None

 @dataclass(slots=True)
@@ -79,7 +99,7 @@ class Interface:
    ipv4: IpConfig | None
    ipv4setting: IpSetting | None
    ipv6: IpConfig | None
-    ipv6setting: IpSetting | None
+    ipv6setting: Ip6Setting | None
    wifi: WifiConfig | None
    vlan: VlanConfig | None
@@ -91,7 +111,10 @@ class Interface:
        if inet.settings.match and inet.settings.match.path:
            return inet.settings.match.path == [self.path]

-        return inet.settings.connection.interface_name == self.name
+        return (
+            inet.settings.connection is not None
+            and inet.settings.connection.interface_name == self.name
+        )

    @staticmethod
    def from_dbus_interface(inet: NetworkInterface) -> "Interface":
@@ -118,8 +141,14 @@ class Interface:
            ipv4_setting = IpSetting(InterfaceMethod.DISABLED, [], None, [])

        if inet.settings and inet.settings.ipv6:
-            ipv6_setting = IpSetting(
+            ipv6_setting = Ip6Setting(
                method=Interface._map_nm_method(inet.settings.ipv6.method),
+                addr_gen_mode=Interface._map_nm_addr_gen_mode(
+                    inet.settings.ipv6.addr_gen_mode
+                ),
+                ip6_privacy=Interface._map_nm_ip6_privacy(
+                    inet.settings.ipv6.ip6_privacy
+                ),
                address=[
                    IPv6Interface(f"{ip.address}/{ip.prefix}")
                    for ip in inet.settings.ipv6.address_data
@@ -134,26 +163,26 @@ class Interface:
                else [],
            )
        else:
-            ipv6_setting = IpSetting(InterfaceMethod.DISABLED, [], None, [])
+            ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, [])

        ipv4_ready = (
-            bool(inet.connection)
+            inet.connection is not None
            and ConnectionStateFlags.IP4_READY in inet.connection.state_flags
        )
        ipv6_ready = (
-            bool(inet.connection)
+            inet.connection is not None
            and ConnectionStateFlags.IP6_READY in inet.connection.state_flags
        )

        return Interface(
-            inet.name,
-            inet.hw_address,
-            inet.path,
-            inet.settings is not None,
-            Interface._map_nm_connected(inet.connection),
-            inet.primary,
-            Interface._map_nm_type(inet.type),
-            IpConfig(
+            name=inet.interface_name,
+            mac=inet.hw_address,
+            path=inet.path,
+            enabled=inet.settings is not None,
+            connected=Interface._map_nm_connected(inet.connection),
+            primary=inet.primary,
+            type=Interface._map_nm_type(inet.type),
+            ipv4=IpConfig(
                address=inet.connection.ipv4.address
                if inet.connection.ipv4.address
                else [],
@@ -165,8 +194,8 @@ class Interface:
            )
            if inet.connection and inet.connection.ipv4
            else IpConfig([], None, [], ipv4_ready),
-            ipv4_setting,
-            IpConfig(
+            ipv4setting=ipv4_setting,
+            ipv6=IpConfig(
                address=inet.connection.ipv6.address
                if inet.connection.ipv6.address
                else [],
@@ -178,22 +207,42 @@ class Interface:
            )
            if inet.connection and inet.connection.ipv6
            else IpConfig([], None, [], ipv6_ready),
-            ipv6_setting,
-            Interface._map_nm_wifi(inet),
-            Interface._map_nm_vlan(inet),
+            ipv6setting=ipv6_setting,
+            wifi=Interface._map_nm_wifi(inet),
+            vlan=Interface._map_nm_vlan(inet),
        )

    @staticmethod
-    def _map_nm_method(method: str) -> InterfaceMethod:
+    def _map_nm_method(method: str | None) -> InterfaceMethod:
        """Map IP interface method."""
-        mapping = {
-            NMInterfaceMethod.AUTO: InterfaceMethod.AUTO,
-            NMInterfaceMethod.DISABLED: InterfaceMethod.DISABLED,
-            NMInterfaceMethod.MANUAL: InterfaceMethod.STATIC,
-            NMInterfaceMethod.LINK_LOCAL: InterfaceMethod.DISABLED,
-        }
-
-        return mapping.get(method, InterfaceMethod.DISABLED)
+        match method:
+            case NMInterfaceMethod.AUTO.value:
+                return InterfaceMethod.AUTO
+            case NMInterfaceMethod.MANUAL:
+                return InterfaceMethod.STATIC
+
+        return InterfaceMethod.DISABLED
+
+    @staticmethod
+    def _map_nm_addr_gen_mode(addr_gen_mode: int) -> InterfaceAddrGenMode:
+        """Map IPv6 interface addr_gen_mode."""
+        mapping = {
+            NMInterfaceAddrGenMode.EUI64.value: InterfaceAddrGenMode.EUI64,
+            NMInterfaceAddrGenMode.STABLE_PRIVACY.value: InterfaceAddrGenMode.STABLE_PRIVACY,
+            NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value: InterfaceAddrGenMode.DEFAULT_OR_EUI64,
+        }
+
+        return mapping.get(addr_gen_mode, InterfaceAddrGenMode.DEFAULT)
+
+    @staticmethod
+    def _map_nm_ip6_privacy(ip6_privacy: int) -> InterfaceIp6Privacy:
+        """Map IPv6 interface ip6_privacy."""
+        mapping = {
+            NMInterfaceIp6Privacy.DISABLED.value: InterfaceIp6Privacy.DISABLED,
+            NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value: InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC,
+            NMInterfaceIp6Privacy.ENABLED.value: InterfaceIp6Privacy.ENABLED,
+        }
+
+        return mapping.get(ip6_privacy, InterfaceIp6Privacy.DEFAULT)

    @staticmethod
    def _map_nm_connected(connection: NetworkConnection | None) -> bool:
@@ -208,12 +257,14 @@ class Interface:
    @staticmethod
    def _map_nm_type(device_type: int) -> InterfaceType:
-        mapping = {
-            DeviceType.ETHERNET: InterfaceType.ETHERNET,
-            DeviceType.WIRELESS: InterfaceType.WIRELESS,
-            DeviceType.VLAN: InterfaceType.VLAN,
-        }
-        return mapping[device_type]
+        match device_type:
+            case DeviceType.ETHERNET.value:
+                return InterfaceType.ETHERNET
+            case DeviceType.WIRELESS.value:
+                return InterfaceType.WIRELESS
+            case DeviceType.VLAN.value:
+                return InterfaceType.VLAN
+        raise ValueError(f"Invalid device type: {device_type}")

    @staticmethod
    def _map_nm_wifi(inet: NetworkInterface) -> WifiConfig | None:
@@ -222,15 +273,22 @@ class Interface:
            return None

        # Authentication and PSK
-        auth = None
+        auth = AuthMethod.OPEN
        psk = None
-        if not inet.settings.wireless_security:
-            auth = AuthMethod.OPEN
-        elif inet.settings.wireless_security.key_mgmt == "none":
-            auth = AuthMethod.WEP
-        elif inet.settings.wireless_security.key_mgmt == "wpa-psk":
-            auth = AuthMethod.WPA_PSK
-            psk = inet.settings.wireless_security.psk
+        if inet.settings.wireless_security:
+            match inet.settings.wireless_security.key_mgmt:
+                case "none":
+                    auth = AuthMethod.WEP
+                case "wpa-psk":
+                    auth = AuthMethod.WPA_PSK
+                    psk = inet.settings.wireless_security.psk
+                case _:
+                    _LOGGER.warning(
+                        "Auth method %s for network interface %s unsupported, skipping",
+                        inet.settings.wireless_security.key_mgmt,
+                        inet.interface_name,
+                    )
+                    return None

        # WifiMode
        mode = WifiMode.INFRASTRUCTURE
@@ -244,17 +302,17 @@ class Interface:
            signal = None

        return WifiConfig(
-            mode,
-            inet.settings.wireless.ssid,
-            auth,
-            psk,
-            signal,
+            mode=mode,
+            ssid=inet.settings.wireless.ssid if inet.settings.wireless else "",
+            auth=auth,
+            psk=psk,
+            signal=signal,
        )

    @staticmethod
-    def _map_nm_vlan(inet: NetworkInterface) -> WifiConfig | None:
+    def _map_nm_vlan(inet: NetworkInterface) -> VlanConfig | None:
        """Create mapping to nm vlan property."""
-        if inet.type != DeviceType.VLAN or not inet.settings:
+        if inet.type != DeviceType.VLAN or not inet.settings or not inet.settings.vlan:
            return None

        return VlanConfig(inet.settings.vlan.id, inet.settings.vlan.parent)
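
Note: the rewritten `_map_nm_method` now falls through to DISABLED for None and for any NetworkManager method it does not recognize (including "link-local", which previously had an explicit dict entry). A self-contained sketch mirroring that behavior with stand-in enums (the real NMInterfaceMethod lives in the Supervisor's dbus constants):

    from enum import StrEnum

    class NMInterfaceMethod(StrEnum):  # stand-in for the dbus constant
        AUTO = "auto"
        MANUAL = "manual"
        DISABLED = "disabled"
        LINK_LOCAL = "link-local"

    class InterfaceMethod(StrEnum):
        AUTO = "auto"
        STATIC = "static"
        DISABLED = "disabled"

    def map_nm_method(method: str | None) -> InterfaceMethod:
        match method:
            case NMInterfaceMethod.AUTO.value:
                return InterfaceMethod.AUTO
            case NMInterfaceMethod.MANUAL.value:
                return InterfaceMethod.STATIC
        # None, "link-local" and anything unknown all collapse to DISABLED.
        return InterfaceMethod.DISABLED

    assert map_nm_method("auto") is InterfaceMethod.AUTO
    assert map_nm_method("link-local") is InterfaceMethod.DISABLED
    assert map_nm_method(None) is InterfaceMethod.DISABLED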


@@ -15,6 +15,24 @@ class InterfaceMethod(StrEnum):
    AUTO = "auto"

+class InterfaceAddrGenMode(StrEnum):
+    """Configuration of an interface."""
+
+    EUI64 = "eui64"
+    STABLE_PRIVACY = "stable-privacy"
+    DEFAULT_OR_EUI64 = "default-or-eui64"
+    DEFAULT = "default"
+
+
+class InterfaceIp6Privacy(StrEnum):
+    """Configuration of an interface."""
+
+    DEFAULT = "default"
+    DISABLED = "disabled"
+    ENABLED_PREFER_PUBLIC = "enabled-prefer-public"
+    ENABLED = "enabled"
+
+
 class InterfaceType(StrEnum):
    """Configuration of an interface."""
@@ -62,6 +80,7 @@ class LogFormat(StrEnum):
    JOURNAL = "application/vnd.fdo.journal"
    JSON = "application/json"
+    JSON_SEQ = "application/json-seq"
    TEXT = "text/plain"
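
Note: both new enums follow this file's existing StrEnum convention, so members are plain strings and serialize directly over the Supervisor API. A quick illustration:

    from enum import StrEnum

    class InterfaceAddrGenMode(StrEnum):
        EUI64 = "eui64"
        STABLE_PRIVACY = "stable-privacy"
        DEFAULT_OR_EUI64 = "default-or-eui64"
        DEFAULT = "default"

    # StrEnum members compare and format as their string values.
    assert InterfaceAddrGenMode.EUI64 == "eui64"
    assert f"{InterfaceAddrGenMode.DEFAULT_OR_EUI64}" == "default-or-eui64"
    assert InterfaceAddrGenMode("stable-privacy") is InterfaceAddrGenMode.STABLE_PRIVACY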


@@ -133,7 +133,7 @@ class InfoCenter(CoreSysAttributes):
            self.coresys.config.path_supervisor,
        )

-    async def disk_life_time(self) -> float:
+    async def disk_life_time(self) -> float | None:
        """Return the estimated life-time usage (in %) of the SSD storing the data directory."""
        return await self.sys_run_in_executor(
            self.sys_hardware.disk.get_disk_life_time,


@@ -2,12 +2,13 @@
 from __future__ import annotations

-from collections.abc import AsyncGenerator
+from collections.abc import AsyncGenerator, Mapping
 from contextlib import asynccontextmanager
 import json
 import logging
 import os
 from pathlib import Path
+import re
 from typing import Self

 from aiohttp import ClientError, ClientSession, ClientTimeout
@@ -24,6 +25,7 @@ from ..exceptions import (
    HostServiceError,
 )
 from ..utils.json import read_json_file
+from ..utils.systemd_journal import journal_boots_reader
 from .const import PARAM_BOOT_ID, PARAM_SYSLOG_IDENTIFIER, LogFormat

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -34,6 +36,8 @@ SYSLOG_IDENTIFIERS_JSON: Path = (
 )
 # pylint: enable=no-member

+SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX = (1 << 64) - 1
+
 SYSTEMD_JOURNAL_GATEWAYD_SOCKET: Path = Path("/run/systemd-journal-gatewayd.sock")

 # From systemd catalog for message IDs (`journalctl --dump-catalog``)
@@ -42,6 +46,10 @@ SYSTEMD_JOURNAL_GATEWAYD_SOCKET: Path = Path("/run/systemd-journal-gatewayd.sock
 # Defined-By: systemd
 BOOT_IDS_QUERY = {"MESSAGE_ID": "b07a249cd024414a82dd00cd181378ff"}

+RE_ENTRIES_HEADER = re.compile(
+    r"^entries=(?P<cursor>[^:]*):(?P<num_skip>-?\d+):(?P<num_lines>\d*)$"
+)
+

 class LogsControl(CoreSysAttributes):
    """Handle systemd-journal logs."""
@@ -101,12 +109,8 @@ class LogsControl(CoreSysAttributes):
        return boot_ids[offset]

-    async def get_boot_ids(self) -> list[str]:
-        """Get boot IDs from oldest to newest."""
-        if self._boot_ids:
-            # Doesn't change without a reboot, no reason to query again once cached
-            return self._boot_ids
-
+    async def _get_boot_ids_legacy(self) -> list[str]:
+        """Get boots IDs using suboptimal method where /boots is not available."""
        try:
            async with self.journald_logs(
                params=BOOT_IDS_QUERY,
@@ -135,13 +139,51 @@ class LogsControl(CoreSysAttributes):
                _LOGGER.error,
            ) from err

-        self._boot_ids = []
+        _boot_ids = []
        for entry in text.split("\n"):
-            if (
-                entry
-                and (boot_id := json.loads(entry)[PARAM_BOOT_ID]) not in self._boot_ids
-            ):
-                self._boot_ids.append(boot_id)
+            if entry and (boot_id := json.loads(entry)[PARAM_BOOT_ID]) not in _boot_ids:
+                _boot_ids.append(boot_id)
+
+        return _boot_ids
+
+    async def _get_boot_ids_native(self):
+        """Get boot IDs using /boots endpoint."""
+        try:
+            async with self.journald_logs(
+                path="/boots",
+                accept=LogFormat.JSON_SEQ,
+                timeout=ClientTimeout(total=20),
+            ) as resp:
+                if resp.status != 200:
+                    raise HostLogError(
+                        f"Got HTTP {resp.status} from /boots.",
+                        _LOGGER.debug,
+                    )
+                # Don't rely solely on the order of boots in the response,
+                # sort the boots by index returned in the response.
+                boot_id_tuples = [boot async for boot in journal_boots_reader(resp)]
+                return [
+                    boot_id for _, boot_id in sorted(boot_id_tuples, key=lambda x: x[0])
+                ]
+        except (ClientError, TimeoutError) as err:
+            raise HostLogError(
+                "Could not get a list of boot IDs from systemd-journal-gatewayd",
+                _LOGGER.error,
+            ) from err
+
+    async def get_boot_ids(self) -> list[str]:
+        """Get boot IDs from oldest to newest."""
+        if self._boot_ids:
+            # Doesn't change without a reboot, no reason to query again once cached
+            return self._boot_ids
+
+        try:
+            self._boot_ids = await self._get_boot_ids_native()
+        except HostLogError:
+            _LOGGER.info(
+                "Could not get /boots from systemd-journal-gatewayd, using fallback."
+            )
+            self._boot_ids = await self._get_boot_ids_legacy()

        return self._boot_ids
@@ -163,7 +205,7 @@ class LogsControl(CoreSysAttributes):
    async def journald_logs(
        self,
        path: str = "/entries",
-        params: dict[str, str | list[str]] | None = None,
+        params: Mapping[str, str | list[str]] | None = None,
        range_header: str | None = None,
        accept: LogFormat = LogFormat.TEXT,
        timeout: ClientTimeout | None = None,
@@ -184,8 +226,18 @@ class LogsControl(CoreSysAttributes):
        base_url = "http://localhost/"
        connector = UnixConnector(path=str(SYSTEMD_JOURNAL_GATEWAYD_SOCKET))
        async with ClientSession(base_url=base_url, connector=connector) as session:
-            headers = {ACCEPT: accept}
+            headers = {ACCEPT: accept.value}
            if range_header:
+                if range_header.endswith(":"):
+                    # Make sure that num_entries is always set - before Systemd v256 it was
+                    # possible to omit it, which made sense when the "follow" option was used,
+                    # but this syntax is now invalid and triggers HTTP 400.
+                    # See: https://github.com/systemd/systemd/issues/37172
+                    if not (matches := re.match(RE_ENTRIES_HEADER, range_header)):
+                        raise HostNotSupportedError(
+                            f"Invalid range header: {range_header}"
+                        )
+                    range_header = f"entries={matches.group('cursor')}:{matches.group('num_skip')}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX}"
                headers[RANGE] = range_header
            async with session.get(
                f"{path}",
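
Note: the Range-header normalization above can be exercised on its own; this sketch reuses the same regex and constant to show a trailing-colon header being rewritten (the header value is illustrative):

    import re

    SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX = (1 << 64) - 1

    RE_ENTRIES_HEADER = re.compile(
        r"^entries=(?P<cursor>[^:]*):(?P<num_skip>-?\d+):(?P<num_lines>\d*)$"
    )

    def normalize(range_header: str) -> str:
        # Only headers that omit num_entries (trailing ":") are rewritten.
        if range_header.endswith(":"):
            if not (matches := re.match(RE_ENTRIES_HEADER, range_header)):
                raise ValueError(f"Invalid range header: {range_header}")
            return (
                f"entries={matches.group('cursor')}:{matches.group('num_skip')}"
                f":{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX}"
            )
        return range_header

    assert normalize("entries=:-99:") == f"entries=:-99:{(1 << 64) - 1}"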


@@ -8,11 +8,11 @@ from typing import Any
 from ..const import ATTR_HOST_INTERNET
 from ..coresys import CoreSys, CoreSysAttributes
 from ..dbus.const import (
+    DBUS_ATTR_CONFIGURATION,
    DBUS_ATTR_CONNECTION_ENABLED,
    DBUS_ATTR_CONNECTIVITY,
-    DBUS_ATTR_PRIMARY_CONNECTION,
+    DBUS_IFACE_DNS,
    DBUS_IFACE_NM,
-    DBUS_OBJECT_BASE,
    DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
    ConnectionStateType,
    ConnectivityState,
@@ -46,6 +46,8 @@ class NetworkManager(CoreSysAttributes):
        """Initialize system center handling."""
        self.coresys: CoreSys = coresys
        self._connectivity: bool | None = None
+        # No event need on initial change (NetworkManager initializes with empty list)
+        self._dns_configuration: list = []

    @property
    def connectivity(self) -> bool | None:
@@ -87,7 +89,7 @@ class NetworkManager(CoreSysAttributes):
        for config in self.sys_dbus.network.dns.configuration:
            if config.vpn or not config.nameservers:
                continue
-            servers.extend(config.nameservers)
+            servers.extend([str(ns) for ns in config.nameservers])

        return list(dict.fromkeys(servers))
@@ -138,8 +140,12 @@ class NetworkManager(CoreSysAttributes):
            ]
        )

-        self.sys_dbus.network.dbus.properties.on_properties_changed(
-            self._check_connectivity_changed
+        self.sys_dbus.network.dbus.properties.on(
+            "properties_changed", self._check_connectivity_changed
+        )
+
+        self.sys_dbus.network.dns.dbus.properties.on(
+            "properties_changed", self._check_dns_changed
        )

    async def _check_connectivity_changed(
@@ -152,15 +158,6 @@ class NetworkManager(CoreSysAttributes):
        connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
        connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)

-        # This potentially updated the DNS configuration. Make sure the DNS plug-in
-        # picks up the latest settings.
-        if (
-            DBUS_ATTR_PRIMARY_CONNECTION in changed
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION]
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
-        ):
-            await self.sys_plugins.dns.restart()
-
        if (
            connectivity_check is True
            or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@@ -174,6 +171,20 @@ class NetworkManager(CoreSysAttributes):
        elif connectivity is not None:
            self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL

+    async def _check_dns_changed(
+        self, interface: str, changed: dict[str, Any], invalidated: list[str]
+    ):
+        """Check if DNS properties have changed."""
+        if interface != DBUS_IFACE_DNS:
+            return
+
+        if (
+            DBUS_ATTR_CONFIGURATION in changed
+            and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
+        ):
+            self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
+            self.sys_plugins.dns.notify_locals_changed()
+
    async def update(self, *, force_connectivity_check: bool = False):
        """Update properties over dbus."""
        _LOGGER.info("Updating local network information")
@@ -196,10 +207,16 @@ class NetworkManager(CoreSysAttributes):
        with suppress(NetworkInterfaceNotFound):
            inet = self.sys_dbus.network.get(interface.name)

-        con: NetworkConnection = None
+        con: NetworkConnection | None = None

        # Update exist configuration
-        if inet and interface.equals_dbus_interface(inet) and interface.enabled:
+        if (
+            inet
+            and inet.settings
+            and inet.settings.connection
+            and interface.equals_dbus_interface(inet)
+            and interface.enabled
+        ):
            _LOGGER.debug("Updating existing configuration for %s", interface.name)
            settings = get_connection_from_interface(
                interface,
@@ -210,12 +227,12 @@ class NetworkManager(CoreSysAttributes):
            try:
                await inet.settings.update(settings)
-                con = await self.sys_dbus.network.activate_connection(
+                con = activated = await self.sys_dbus.network.activate_connection(
                    inet.settings.object_path, inet.object_path
                )
                _LOGGER.debug(
                    "activate_connection returns %s",
-                    con.object_path,
+                    activated.object_path,
                )
            except DBusError as err:
                raise HostNetworkError(
@@ -235,12 +252,16 @@ class NetworkManager(CoreSysAttributes):
            settings = get_connection_from_interface(interface, self.sys_dbus.network)

            try:
-                settings, con = await self.sys_dbus.network.add_and_activate_connection(
+                (
+                    settings,
+                    activated,
+                ) = await self.sys_dbus.network.add_and_activate_connection(
                    settings, inet.object_path
                )
+                con = activated
                _LOGGER.debug(
                    "add_and_activate_connection returns %s",
-                    con.object_path,
+                    activated.object_path,
                )
            except DBusError as err:
                raise HostNetworkError(
@@ -276,7 +297,7 @@ class NetworkManager(CoreSysAttributes):
            )

        if con:
-            async with con.dbus.signal(
+            async with con.connected_dbus.signal(
                DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED
            ) as signal:
                # From this point we monitor signals. However, it might be that
@@ -302,7 +323,7 @@ class NetworkManager(CoreSysAttributes):
        """Scan on Interface for AccessPoint."""
        inet = self.sys_dbus.network.get(interface.name)

-        if inet.type != DeviceType.WIRELESS:
+        if inet.type != DeviceType.WIRELESS or not inet.wireless:
            raise HostNotSupportedError(
                f"Can only scan with wireless card - {interface.name}", _LOGGER.error
            )
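
Note: the new `_check_dns_changed` handler only notifies the DNS plug-in when the configuration actually differs from the last value seen, replacing the old unconditional plug-in restart on primary-connection changes. A stand-alone sketch of that compare-and-store debounce (class and attribute names here are illustrative):

    from typing import Any

    class DnsWatcher:
        """Minimal sketch of the compare-and-store debounce used above."""

        def __init__(self) -> None:
            # NetworkManager initializes with an empty list, so starting
            # empty avoids a spurious notification on the first signal.
            self._dns_configuration: list = []
            self.notifications = 0

        def on_properties_changed(self, changed: dict[str, Any]) -> None:
            configuration = changed.get("Configuration")
            if configuration is not None and configuration != self._dns_configuration:
                self._dns_configuration = configuration
                self.notifications += 1  # stands in for notify_locals_changed()

    watcher = DnsWatcher()
    watcher.on_properties_changed({"Configuration": [{"nameservers": ["192.168.1.1"]}]})
    watcher.on_properties_changed({"Configuration": [{"nameservers": ["192.168.1.1"]}]})
    assert watcher.notifications == 1  # duplicate signal is ignored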


@@ -12,6 +12,7 @@ from .const import (
    ATTR_SESSION_DATA,
    FILE_HASSIO_INGRESS,
    IngressSessionData,
+    IngressSessionDataDict,
 )
 from .coresys import CoreSys, CoreSysAttributes
 from .utils import check_port
@@ -35,7 +36,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        """Return addon they have this ingress token."""
        if token not in self.tokens:
            return None
-        return self.sys_addons.get(self.tokens[token], local_only=True)
+        return self.sys_addons.get_local_only(self.tokens[token])

    def get_session_data(self, session_id: str) -> IngressSessionData | None:
        """Return complementary data of current session or None."""
@@ -49,7 +50,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        return self._data[ATTR_SESSION]

    @property
-    def sessions_data(self) -> dict[str, dict[str, str | None]]:
+    def sessions_data(self) -> dict[str, IngressSessionDataDict]:
        """Return sessions_data."""
        return self._data[ATTR_SESSION_DATA]
@@ -89,7 +90,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        now = utcnow()

        sessions = {}
-        sessions_data: dict[str, dict[str, str | None]] = {}
+        sessions_data: dict[str, IngressSessionDataDict] = {}
        for session, valid in self.sessions.items():
            # check if timestamp valid, to avoid crash on malformed timestamp
            try:
@@ -118,7 +119,8 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        # Read all ingress token and build a map
        for addon in self.addons:
-            self.tokens[addon.ingress_token] = addon.slug
+            if addon.ingress_token:
+                self.tokens[addon.ingress_token] = addon.slug

    def create_session(self, data: IngressSessionData | None = None) -> str:
        """Create new session."""
@@ -141,7 +143,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
        try:
            valid_until = utc_from_timestamp(self.sessions[session])
        except OverflowError:
-            self.sessions[session] = utcnow() + timedelta(minutes=15)
+            self.sessions[session] = (utcnow() + timedelta(minutes=15)).timestamp()
            return True

        # Is still valid?
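
Note: the OverflowError branch previously stored a datetime in a map where every other entry holds a POSIX timestamp; the added `.timestamp()` keeps the value type consistent for the next validation pass. A compact illustration with a simplified session store:

    from datetime import UTC, datetime, timedelta

    sessions: dict[str, float] = {}

    def reset_session(session: str) -> None:
        # Store a float timestamp, matching the rest of the session map;
        # storing the datetime itself (the old behavior) would break
        # timestamp-based validation on the next pass.
        sessions[session] = (datetime.now(UTC) + timedelta(minutes=15)).timestamp()

    reset_session("abc")
    assert isinstance(sessions["abc"], float)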


@@ -1,13 +1,13 @@
 """Supervisor job manager."""

 import asyncio
-from collections.abc import Awaitable, Callable
-from contextlib import contextmanager
+from collections.abc import Callable, Coroutine, Generator
+from contextlib import contextmanager, suppress
 from contextvars import Context, ContextVar, Token
 from dataclasses import dataclass
 from datetime import datetime
 import logging
-from typing import Any
+from typing import Any, Self
 from uuid import uuid4

 from attrs import Attribute, define, field
@@ -27,7 +27,7 @@ from .validate import SCHEMA_JOBS_CONFIG
 # When a new asyncio task is started the current context is copied over.
 # Modifications to it in one task are not visible to others though.
 # This allows us to track what job is currently in progress in each task.
-_CURRENT_JOB: ContextVar[str] = ContextVar("current_job")
+_CURRENT_JOB: ContextVar[str | None] = ContextVar("current_job", default=None)

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -75,7 +75,7 @@ class SupervisorJobError:
    message: str = "Unknown error, see supervisor logs"
    stage: str | None = None

-    def as_dict(self) -> dict[str, str]:
+    def as_dict(self) -> dict[str, str | None]:
        """Return dictionary representation."""
        return {
            "type": self.type_.__name__,
@@ -101,9 +101,7 @@ class SupervisorJob:
    stage: str | None = field(
        default=None, validator=[_invalid_if_done], on_setattr=_on_change
    )
-    parent_id: str | None = field(
-        factory=lambda: _CURRENT_JOB.get(None), on_setattr=frozen
-    )
+    parent_id: str | None = field(factory=_CURRENT_JOB.get, on_setattr=frozen)
    done: bool | None = field(init=False, default=None, on_setattr=_on_change)
    on_change: Callable[["SupervisorJob", Attribute, Any], None] | None = field(
        default=None, on_setattr=frozen
@@ -137,7 +135,7 @@ class SupervisorJob:
        self.errors += [new_error]

    @contextmanager
-    def start(self):
+    def start(self) -> Generator[Self]:
        """Start the job in the current task.

        This can only be called if the parent ID matches the job running in the current task.
@@ -146,11 +144,11 @@ class SupervisorJob:
        """
        if self.done is not None:
            raise JobStartException("Job has already been started")
-        if _CURRENT_JOB.get(None) != self.parent_id:
+        if _CURRENT_JOB.get() != self.parent_id:
            raise JobStartException("Job has a different parent from current job")

        self.done = False
-        token: Token[str] | None = None
+        token: Token[str | None] | None = None
        try:
            token = _CURRENT_JOB.set(self.uuid)
            yield self
@@ -193,17 +191,15 @@ class JobManager(FileConfiguration, CoreSysAttributes):
        Must be called from within a job. Raises RuntimeError if there is no current job.
        """
-        try:
-            return self.get_job(_CURRENT_JOB.get())
-        except (LookupError, JobNotFound):
-            raise RuntimeError(
-                "No job for the current asyncio task!", _LOGGER.critical
-            ) from None
+        if job_id := _CURRENT_JOB.get():
+            with suppress(JobNotFound):
+                return self.get_job(job_id)
+        raise RuntimeError("No job for the current asyncio task!", _LOGGER.critical)

    @property
    def is_job(self) -> bool:
        """Return true if there is an active job for the current asyncio task."""
-        return bool(_CURRENT_JOB.get(None))
+        return _CURRENT_JOB.get() is not None

    def _notify_on_job_change(
        self, job: SupervisorJob, attribute: Attribute, value: Any
@@ -265,7 +261,7 @@ class JobManager(FileConfiguration, CoreSysAttributes):
    def schedule_job(
        self,
-        job_method: Callable[..., Awaitable[Any]],
+        job_method: Callable[..., Coroutine],
        options: JobSchedulerOptions,
        *args,
        **kwargs,
View File

@ -1,12 +1,12 @@
"""Job decorator.""" """Job decorator."""
import asyncio import asyncio
from collections.abc import Callable from collections.abc import Awaitable, Callable
from contextlib import suppress from contextlib import suppress
from datetime import datetime, timedelta from datetime import datetime, timedelta
from functools import wraps from functools import wraps
import logging import logging
from typing import Any from typing import Any, cast
from ..const import CoreState from ..const import CoreState
from ..coresys import CoreSys, CoreSysAttributes from ..coresys import CoreSys, CoreSysAttributes
@ -43,7 +43,22 @@ class Job(CoreSysAttributes):
throttle_max_calls: int | None = None, throttle_max_calls: int | None = None,
internal: bool = False, internal: bool = False,
): ):
"""Initialize the Job class.""" """Initialize the Job decorator.
Args:
name (str): Unique name for the job. Must not be duplicated.
conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
limit (JobExecutionLimit | None): Execution limit policy for the job (e.g., throttle, once, group-based).
throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for rate-limited jobs).
throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
Raises:
RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected limit.
"""
if name in _JOB_NAMES: if name in _JOB_NAMES:
raise RuntimeError(f"A job already exists with name {name}!") raise RuntimeError(f"A job already exists with name {name}!")
@ -54,11 +69,10 @@ class Job(CoreSysAttributes):
self.on_condition = on_condition self.on_condition = on_condition
self.limit = limit self.limit = limit
self._throttle_period = throttle_period self._throttle_period = throttle_period
self.throttle_max_calls = throttle_max_calls self._throttle_max_calls = throttle_max_calls
self._lock: asyncio.Semaphore | None = None self._lock: asyncio.Semaphore | None = None
self._method = None
self._last_call: dict[str | None, datetime] = {} self._last_call: dict[str | None, datetime] = {}
self._rate_limited_calls: dict[str, list[datetime]] | None = None self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
self._internal = internal self._internal = internal
# Validate Options # Validate Options
@ -82,13 +96,29 @@ class Job(CoreSysAttributes):
JobExecutionLimit.THROTTLE_RATE_LIMIT, JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT, JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
): ):
if self.throttle_max_calls is None: if self._throttle_max_calls is None:
raise RuntimeError( raise RuntimeError(
f"Job {name} is using execution limit {limit} without throttle max calls!" f"Job {name} is using execution limit {limit} without throttle max calls!"
) )
self._rate_limited_calls = {} self._rate_limited_calls = {}
@property
def throttle_max_calls(self) -> int:
"""Return max calls for throttle."""
if self._throttle_max_calls is None:
raise RuntimeError("No throttle max calls set for job!")
return self._throttle_max_calls
@property
def lock(self) -> asyncio.Semaphore:
"""Return lock for limits."""
# asyncio.Semaphore objects must be created in event loop
# Since this is sync code it is not safe to create if missing here
if not self._lock:
raise RuntimeError("Lock has not been created yet!")
return self._lock
def last_call(self, group_name: str | None = None) -> datetime: def last_call(self, group_name: str | None = None) -> datetime:
"""Return last call datetime.""" """Return last call datetime."""
return self._last_call.get(group_name, datetime.min) return self._last_call.get(group_name, datetime.min)
@ -97,12 +127,12 @@ class Job(CoreSysAttributes):
"""Set last call datetime.""" """Set last call datetime."""
self._last_call[group_name] = value self._last_call[group_name] = value
def rate_limited_calls( def rate_limited_calls(self, group_name: str | None = None) -> list[datetime]:
self, group_name: str | None = None
) -> list[datetime] | None:
"""Return rate limited calls if used.""" """Return rate limited calls if used."""
if self._rate_limited_calls is None: if self._rate_limited_calls is None:
return None raise RuntimeError(
f"Rate limited calls not available for limit type {self.limit}"
)
return self._rate_limited_calls.get(group_name, []) return self._rate_limited_calls.get(group_name, [])
@ -131,10 +161,10 @@ class Job(CoreSysAttributes):
self._rate_limited_calls[group_name] = value self._rate_limited_calls[group_name] = value
def throttle_period(self, group_name: str | None = None) -> timedelta | None: def throttle_period(self, group_name: str | None = None) -> timedelta:
"""Return throttle period.""" """Return throttle period."""
if self._throttle_period is None: if self._throttle_period is None:
return None raise RuntimeError("No throttle period set for Job!")
if isinstance(self._throttle_period, timedelta): if isinstance(self._throttle_period, timedelta):
return self._throttle_period return self._throttle_period
@ -142,7 +172,7 @@ class Job(CoreSysAttributes):
return self._throttle_period( return self._throttle_period(
self.coresys, self.coresys,
self.last_call(group_name), self.last_call(group_name),
self.rate_limited_calls(group_name), self.rate_limited_calls(group_name) if self._rate_limited_calls else None,
) )
def _post_init(self, obj: JobGroup | CoreSysAttributes) -> JobGroup | None: def _post_init(self, obj: JobGroup | CoreSysAttributes) -> JobGroup | None:
@ -158,12 +188,12 @@ class Job(CoreSysAttributes):
self._lock = asyncio.Semaphore() self._lock = asyncio.Semaphore()
# Job groups # Job groups
try: job_group: JobGroup | None = None
is_job_group = obj.acquire and obj.release with suppress(AttributeError):
except AttributeError: if obj.acquire and obj.release: # type: ignore
is_job_group = False job_group = cast(JobGroup, obj)
if not is_job_group and self.limit in ( if not job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE, JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT, JobExecutionLimit.GROUP_WAIT,
JobExecutionLimit.GROUP_THROTTLE, JobExecutionLimit.GROUP_THROTTLE,
@ -174,7 +204,7 @@ class Job(CoreSysAttributes):
f"Job on {self.name} need to be a JobGroup to use group based limits!" f"Job on {self.name} need to be a JobGroup to use group based limits!"
) from None ) from None
return obj if is_job_group else None return job_group
def _handle_job_condition_exception(self, err: JobConditionException) -> None: def _handle_job_condition_exception(self, err: JobConditionException) -> None:
"""Handle a job condition failure.""" """Handle a job condition failure."""
@ -184,9 +214,8 @@ class Job(CoreSysAttributes):
return return
raise self.on_condition(error_msg, _LOGGER.warning) from None raise self.on_condition(error_msg, _LOGGER.warning) from None
def __call__(self, method): def __call__(self, method: Callable[..., Awaitable]):
"""Call the wrapper logic.""" """Call the wrapper logic."""
self._method = method
@wraps(method) @wraps(method)
async def wrapper( async def wrapper(
@ -221,7 +250,7 @@ class Job(CoreSysAttributes):
if self.conditions: if self.conditions:
try: try:
await Job.check_conditions( await Job.check_conditions(
self, set(self.conditions), self._method.__qualname__ self, set(self.conditions), method.__qualname__
) )
except JobConditionException as err: except JobConditionException as err:
return self._handle_job_condition_exception(err) return self._handle_job_condition_exception(err)
@ -237,7 +266,7 @@ class Job(CoreSysAttributes):
JobExecutionLimit.GROUP_WAIT, JobExecutionLimit.GROUP_WAIT,
): ):
try: try:
await obj.acquire( await cast(JobGroup, job_group).acquire(
job, self.limit == JobExecutionLimit.GROUP_WAIT job, self.limit == JobExecutionLimit.GROUP_WAIT
) )
except JobGroupExecutionLimitExceeded as err: except JobGroupExecutionLimitExceeded as err:
@ -296,12 +325,12 @@ class Job(CoreSysAttributes):
with job.start(): with job.start():
try: try:
self.set_last_call(datetime.now(), group_name) self.set_last_call(datetime.now(), group_name)
if self.rate_limited_calls(group_name) is not None: if self._rate_limited_calls is not None:
self.add_rate_limited_call( self.add_rate_limited_call(
self.last_call(group_name), group_name self.last_call(group_name), group_name
) )
return await self._method(obj, *args, **kwargs) return await method(obj, *args, **kwargs)
# If a method has a conditional JobCondition, they must check it in the method # If a method has a conditional JobCondition, they must check it in the method
# These should be handled like normal JobConditions as much as possible # These should be handled like normal JobConditions as much as possible
@ -317,11 +346,11 @@ class Job(CoreSysAttributes):
raise JobException() from err raise JobException() from err
finally: finally:
self._release_exception_limits() self._release_exception_limits()
if self.limit in ( if job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE, JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT, JobExecutionLimit.GROUP_WAIT,
): ):
obj.release() job_group.release()
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required # Jobs that weren't started are always cleaned up. Also clean up done jobs if required
finally: finally:
@ -473,13 +502,13 @@ class Job(CoreSysAttributes):
): ):
return return
if self.limit == JobExecutionLimit.ONCE and self._lock.locked(): if self.limit == JobExecutionLimit.ONCE and self.lock.locked():
on_condition = ( on_condition = (
JobException if self.on_condition is None else self.on_condition JobException if self.on_condition is None else self.on_condition
) )
raise on_condition("Another job is running") raise on_condition("Another job is running")
await self._lock.acquire() await self.lock.acquire()
def _release_exception_limits(self) -> None: def _release_exception_limits(self) -> None:
"""Release possible exception limits.""" """Release possible exception limits."""
@ -490,4 +519,4 @@ class Job(CoreSysAttributes):
JobExecutionLimit.GROUP_THROTTLE_WAIT, JobExecutionLimit.GROUP_THROTTLE_WAIT,
): ):
return return
self._lock.release() self.lock.release()
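
Note: the new `lock` property makes an invariant explicit: the semaphore is created lazily inside the event loop and must not be conjured up from sync code. A sketch of that lazy-create-in-loop plus guarded-access pattern in isolation (class name is illustrative):

    import asyncio

    class LoopBoundLock:
        """Sketch: create asyncio primitives in the loop, guard sync access."""

        def __init__(self) -> None:
            self._lock: asyncio.Semaphore | None = None

        @property
        def lock(self) -> asyncio.Semaphore:
            # Sync code cannot safely create the semaphore on demand,
            # so fail loudly instead of creating it outside the loop.
            if not self._lock:
                raise RuntimeError("Lock has not been created yet!")
            return self._lock

        async def ensure(self) -> None:
            # Called from async code, i.e. with a running event loop.
            if self._lock is None:
                self._lock = asyncio.Semaphore()

    async def main() -> None:
        holder = LoopBoundLock()
        await holder.ensure()
        async with holder.lock:
            pass

    asyncio.run(main())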


@@ -41,7 +41,7 @@ class JobGroup(CoreSysAttributes):
    def has_lock(self) -> bool:
        """Return true if current task has the lock on this job group."""
        return (
-            self.active_job
+            self.active_job is not None
            and self.sys_jobs.is_job
            and self.active_job == self.sys_jobs.current
        )

@@ -3,11 +3,13 @@
 import ipaddress
 import os
 import re
+from typing import cast

 from aiohttp import hdrs
 import attr
+from sentry_sdk.types import Event, Hint

-from ..const import DOCKER_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
+from ..const import DOCKER_IPV4_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
 from ..coresys import CoreSys
 from ..exceptions import AddonConfigurationError
@@ -19,7 +21,7 @@ def sanitize_host(host: str) -> str:
     try:
         # Allow internal URLs
         ip = ipaddress.ip_address(host)
-        if ip in ipaddress.ip_network(DOCKER_NETWORK_MASK):
+        if ip in ipaddress.ip_network(DOCKER_IPV4_NETWORK_MASK):
             return host
     except ValueError:
         pass
@@ -39,7 +41,7 @@ def sanitize_url(url: str) -> str:
     return f"{match.group(1)}{host}{match.group(3)}"


-def filter_data(coresys: CoreSys, event: dict, hint: dict) -> dict:
+def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
     """Filter event data before sending to sentry."""
     # Ignore some exceptions
     if "exc_info" in hint:
@@ -53,11 +55,12 @@ def filter_data(coresys: CoreSys, event: dict, hint: dict) -> dict:
     event.setdefault("extra", {}).update({"os.environ": dict(os.environ)})
     event.setdefault("user", {}).update({"id": coresys.machine_id})

-    event.setdefault("tags", {}).update(
-        {
-            "machine": coresys.machine,
-        }
-    )
+    if coresys.machine:
+        event.setdefault("tags", {}).update(
+            {
+                "machine": coresys.machine,
+            }
+        )

     # Not full startup - missing information
     if coresys.core.state in (CoreState.INITIALIZE, CoreState.SETUP):
@@ -122,22 +125,22 @@ def filter_data(coresys: CoreSys, event: dict, hint: dict) -> dict:
         }
     )

-    if event.get("request"):
-        if event["request"].get("url"):
-            event["request"]["url"] = sanitize_url(event["request"]["url"])
+    if request := event.get("request"):
+        if request.get("url"):
+            request["url"] = sanitize_url(cast(str, request["url"]))

-        headers = event["request"].get("headers", {})
-        if hdrs.REFERER in headers:
-            headers[hdrs.REFERER] = sanitize_url(headers[hdrs.REFERER])
-        if HEADER_TOKEN in headers:
-            headers[HEADER_TOKEN] = "XXXXXXXXXXXXXXXXXXX"
-        if HEADER_TOKEN_OLD in headers:
-            headers[HEADER_TOKEN_OLD] = "XXXXXXXXXXXXXXXXXXX"
-        if hdrs.HOST in headers:
-            headers[hdrs.HOST] = sanitize_host(headers[hdrs.HOST])
-        if hdrs.X_FORWARDED_HOST in headers:
-            headers[hdrs.X_FORWARDED_HOST] = sanitize_host(
-                headers[hdrs.X_FORWARDED_HOST]
-            )
+        if headers := cast(dict, request.get("headers")):
+            if hdrs.REFERER in headers:
+                headers[hdrs.REFERER] = sanitize_url(headers[hdrs.REFERER])
+            if HEADER_TOKEN in headers:
+                headers[HEADER_TOKEN] = "XXXXXXXXXXXXXXXXXXX"
+            if HEADER_TOKEN_OLD in headers:
+                headers[HEADER_TOKEN_OLD] = "XXXXXXXXXXXXXXXXXXX"
+            if hdrs.HOST in headers:
+                headers[hdrs.HOST] = sanitize_host(headers[hdrs.HOST])
+            if hdrs.X_FORWARDED_HOST in headers:
+                headers[hdrs.X_FORWARDED_HOST] = sanitize_host(
+                    headers[hdrs.X_FORWARDED_HOST]
+                )

     return event

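The new `-> Event | None` return type matches how sentry-python treats `before_send` hooks: returning `None` drops the event. A minimal sketch of wiring a filter like `filter_data` into the SDK (the DSN is a placeholder; the Supervisor registers its hook elsewhere):

# Sketch: how a before_send filter plugs into sentry-python.
import sentry_sdk


def before_send(event, hint):
    if "exc_info" not in hint:  # example policy: only report exceptions
        return None
    return event


sentry_sdk.init(
    dsn="https://public@example.ingest.sentry.io/1",  # placeholder DSN
    before_send=before_send,
)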

@@ -1,13 +1,12 @@
 """A collection of tasks."""

-import asyncio
-from collections.abc import Awaitable
 from datetime import datetime, timedelta
 import logging
+from typing import cast

 from ..addons.const import ADDON_UPDATE_CONDITIONS
-from ..backups.const import LOCATION_CLOUD_BACKUP
-from ..const import AddonState
+from ..backups.const import LOCATION_CLOUD_BACKUP, LOCATION_TYPE
+from ..const import ATTR_TYPE, AddonState
 from ..coresys import CoreSysAttributes
 from ..exceptions import (
     AddonsError,
@@ -15,7 +14,7 @@ from ..exceptions import (
     HomeAssistantError,
     ObserverError,
 )
-from ..homeassistant.const import LANDINGPAGE
+from ..homeassistant.const import LANDINGPAGE, WSType
 from ..jobs.decorator import Job, JobCondition, JobExecutionLimit
 from ..plugins.const import PLUGIN_UPDATE_CONDITIONS
 from ..utils.dt import utcnow
@@ -106,7 +105,6 @@ class Tasks(CoreSysAttributes):
     )
     async def _update_addons(self):
         """Check if an update is available for an Add-on and update it."""
-        start_tasks: list[Awaitable[None]] = []
         for addon in self.sys_addons.all:
             if not addon.is_installed or not addon.auto_update:
                 continue
@@ -124,6 +122,12 @@ class Tasks(CoreSysAttributes):
                 continue
             # Delay auto-updates for a day in case of issues
             if utcnow() < addon.latest_version_timestamp + timedelta(days=1):
+                _LOGGER.debug(
+                    "Not updating add-on %s from %s to %s as the latest version is less than a day old",
+                    addon.slug,
+                    addon.version,
+                    addon.latest_version,
+                )
                 continue
             if not addon.test_update_schema():
                 _LOGGER.warning(
@@ -131,16 +135,21 @@ class Tasks(CoreSysAttributes):
                 )
                 continue

-            # Run Add-on update sequential
-            # avoid issue on slow IO
             _LOGGER.info("Add-on auto update process %s", addon.slug)
-            try:
-                if start_task := await self.sys_addons.update(addon.slug, backup=True):
-                    start_tasks.append(start_task)
-            except AddonsError:
-                _LOGGER.error("Can't auto update Add-on %s", addon.slug)
-
-        await asyncio.gather(*start_tasks)
+            # Call Home Assistant Core to update add-on to make sure that backups
+            # get created through the Home Assistant Core API (categorized correctly).
+            # Ultimately auto updates should be handled by Home Assistant Core itself
+            # through a update entity feature.
+            message = {
+                ATTR_TYPE: WSType.HASSIO_UPDATE_ADDON,
+                "addon": addon.slug,
+                "backup": True,
+            }
+            _LOGGER.debug(
+                "Sending update add-on WebSocket command to Home Assistant Core: %s",
+                message,
+            )
+            await self.sys_homeassistant.websocket.async_send_command(message)

     @Job(
         name="tasks_update_supervisor",
@@ -370,6 +379,8 @@ class Tasks(CoreSysAttributes):
         ]
         for backup in old_backups:
             try:
-                await self.sys_backups.remove(backup, [LOCATION_CLOUD_BACKUP])
+                await self.sys_backups.remove(
+                    backup, [cast(LOCATION_TYPE, LOCATION_CLOUD_BACKUP)]
+                )
             except BackupFileNotFoundError as err:
                 _LOGGER.debug("Can't remove backup %s: %s", backup.slug, err)

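For reference, the add-on auto-update is now delegated to Home Assistant Core as a plain WebSocket command dict. A sketch of the payload shape (the slug is hypothetical, and the literal command string behind `WSType.HASSIO_UPDATE_ADDON` is an assumption here — only the dict shape is taken from the diff above):

message = {
    "type": "hassio/update/addon",  # assumed value of WSType.HASSIO_UPDATE_ADDON
    "addon": "core_mosquitto",      # hypothetical add-on slug
    "backup": True,                 # ask Core to create a backup before updating
}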

@@ -56,7 +56,7 @@ class MountManager(FileConfiguration, CoreSysAttributes):
     async def load_config(self) -> Self:
         """Load config in executor."""
         await super().load_config()
-        self._mounts: dict[str, Mount] = {
+        self._mounts = {
             mount[ATTR_NAME]: Mount.from_dict(self.coresys, mount)
             for mount in self._data[ATTR_MOUNTS]
         }
@@ -172,12 +172,12 @@
         errors = await asyncio.gather(*mount_tasks, return_exceptions=True)

         for i in range(len(errors)):  # pylint: disable=consider-using-enumerate
-            if not errors[i]:
+            if not (err := errors[i]):
                 continue
             if mounts[i].failed_issue in self.sys_resolution.issues:
                 continue
-            if not isinstance(errors[i], MountError):
-                await async_capture_exception(errors[i])
+            if not isinstance(err, MountError):
+                await async_capture_exception(err)

             self.sys_resolution.add_issue(
                 evolve(mounts[i].failed_issue),
@@ -219,7 +219,7 @@
         conditions=[JobCondition.MOUNT_AVAILABLE],
         on_condition=MountJobError,
     )
-    async def remove_mount(self, name: str, *, retain_entry: bool = False) -> None:
+    async def remove_mount(self, name: str, *, retain_entry: bool = False) -> Mount:
         """Remove a mount."""
         # Add mount name to job
         self.sys_jobs.current.reference = name

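The error loop relies on `asyncio.gather(return_exceptions=True)` keeping results positional, so `errors[i]` lines up with `mounts[i]`; the walrus operator just names the current entry. A standalone sketch of the pattern (success returns a falsy `None`, as in the mount tasks):

import asyncio


async def task_ok() -> None:
    return None


async def task_fail() -> None:
    raise RuntimeError("boom")


async def main() -> None:
    errors = await asyncio.gather(task_ok(), task_fail(), return_exceptions=True)
    for i in range(len(errors)):
        if not (err := errors[i]):
            continue  # falsy result means the task succeeded
        print(f"task {i} failed: {err!r}")


asyncio.run(main())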

@@ -2,6 +2,7 @@

 from abc import ABC, abstractmethod
 import asyncio
+from collections.abc import Callable
 from functools import cached_property
 import logging
 from pathlib import Path, PurePath
@@ -9,14 +10,6 @@ from pathlib import Path, PurePath
 from dbus_fast import Variant
 from voluptuous import Coerce

-from ..const import (
-    ATTR_NAME,
-    ATTR_PASSWORD,
-    ATTR_PORT,
-    ATTR_TYPE,
-    ATTR_USERNAME,
-    ATTR_VERSION,
-)
 from ..coresys import CoreSys, CoreSysAttributes
 from ..dbus.const import (
     DBUS_ATTR_ACTIVE_STATE,
@@ -41,22 +34,13 @@ from ..exceptions import (
 from ..resolution.const import ContextType, IssueType
 from ..resolution.data import Issue
 from ..utils.sentry import async_capture_exception
-from .const import (
-    ATTR_PATH,
-    ATTR_READ_ONLY,
-    ATTR_SERVER,
-    ATTR_SHARE,
-    ATTR_USAGE,
-    MountCifsVersion,
-    MountType,
-    MountUsage,
-)
+from .const import MountCifsVersion, MountType, MountUsage
 from .validate import MountData

 _LOGGER: logging.Logger = logging.getLogger(__name__)

-COERCE_MOUNT_TYPE = Coerce(MountType)
-COERCE_MOUNT_USAGE = Coerce(MountUsage)
+COERCE_MOUNT_TYPE: Callable[[str], MountType] = Coerce(MountType)
+COERCE_MOUNT_USAGE: Callable[[str], MountUsage] = Coerce(MountUsage)


 class Mount(CoreSysAttributes, ABC):
@@ -80,7 +64,7 @@ class Mount(CoreSysAttributes, ABC):
         if cls not in [Mount, NetworkMount]:
             return cls(coresys, data)

-        type_ = COERCE_MOUNT_TYPE(data[ATTR_TYPE])
+        type_ = COERCE_MOUNT_TYPE(data["type"])
         if type_ == MountType.CIFS:
             return CIFSMount(coresys, data)
         if type_ == MountType.NFS:
@@ -90,32 +74,33 @@ class Mount(CoreSysAttributes, ABC):
     def to_dict(self, *, skip_secrets: bool = True) -> MountData:
         """Return dictionary representation."""
         return MountData(
-            name=self.name, type=self.type, usage=self.usage, read_only=self.read_only
+            name=self.name,
+            type=self.type,
+            usage=self.usage and self.usage.value,
+            read_only=self.read_only,
         )

     @property
     def name(self) -> str:
         """Get name."""
-        return self._data[ATTR_NAME]
+        return self._data["name"]

     @property
     def type(self) -> MountType:
         """Get mount type."""
-        return COERCE_MOUNT_TYPE(self._data[ATTR_TYPE])
+        return COERCE_MOUNT_TYPE(self._data["type"])

     @property
     def usage(self) -> MountUsage | None:
         """Get mount usage."""
-        return (
-            COERCE_MOUNT_USAGE(self._data[ATTR_USAGE])
-            if ATTR_USAGE in self._data
-            else None
-        )
+        if self._data["usage"] is None:
+            return None
+        return COERCE_MOUNT_USAGE(self._data["usage"])

     @property
     def read_only(self) -> bool:
         """Is mount read-only."""
-        return self._data.get(ATTR_READ_ONLY, False)
+        return self._data.get("read_only", False)

     @property
     @abstractmethod
@@ -186,20 +171,20 @@ class Mount(CoreSysAttributes, ABC):
     async def load(self) -> None:
         """Initialize object."""
         # If there's no mount unit, mount it to make one
-        if not await self._update_unit():
+        if not (unit := await self._update_unit()):
             await self.mount()
             return

-        await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
+        await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)

         # If mount is not available, try to reload it
         if not await self.is_mounted():
             await self.reload()

-    async def _update_state(self) -> UnitActiveState | None:
+    async def _update_state(self, unit: SystemdUnit) -> None:
         """Update mount unit state."""
         try:
-            self._state = await self.unit.get_active_state()
+            self._state = await unit.get_active_state()
         except DBusError as err:
             await async_capture_exception(err)
             raise MountError(
@@ -220,10 +205,10 @@ class Mount(CoreSysAttributes, ABC):
     async def update(self) -> bool:
         """Update info about mount from dbus. Return true if it is mounted and available."""
-        if not await self._update_unit():
+        if not (unit := await self._update_unit()):
             return False

-        await self._update_state()
+        await self._update_state(unit)

         # If active, dismiss corresponding failed mount issue if found
         if (
@@ -235,16 +220,14 @@ class Mount(CoreSysAttributes, ABC):
     async def _update_state_await(
         self,
+        unit: SystemdUnit,
         expected_states: list[UnitActiveState] | None = None,
         not_state: UnitActiveState = UnitActiveState.ACTIVATING,
     ) -> None:
         """Update state info about mount from dbus. Wait for one of expected_states to appear or state to change from not_state."""
-        if not self.unit:
-            return
-
         try:
-            async with asyncio.timeout(30), self.unit.properties_changed() as signal:
-                await self._update_state()
+            async with asyncio.timeout(30), unit.properties_changed() as signal:
+                await self._update_state(unit)
                 while (
                     expected_states
                     and self.state not in expected_states
@@ -312,8 +295,8 @@ class Mount(CoreSysAttributes, ABC):
                 f"Could not mount {self.name} due to: {err!s}", _LOGGER.error
             ) from err

-        if await self._update_unit():
-            await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
+        if unit := await self._update_unit():
+            await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)

         if not await self.is_mounted():
             raise MountActivationError(
@@ -323,17 +306,17 @@ class Mount(CoreSysAttributes, ABC):
     async def unmount(self) -> None:
         """Unmount using systemd."""
-        if not await self._update_unit():
+        if not (unit := await self._update_unit()):
             _LOGGER.info("Mount %s is not mounted, skipping unmount", self.name)
             return

-        await self._update_state()
+        await self._update_state(unit)
         try:
             if self.state != UnitActiveState.FAILED:
                 await self.sys_dbus.systemd.stop_unit(self.unit_name, StopUnitMode.FAIL)
                 await self._update_state_await(
-                    [UnitActiveState.INACTIVE, UnitActiveState.FAILED]
+                    unit, [UnitActiveState.INACTIVE, UnitActiveState.FAILED]
                 )

             if self.state == UnitActiveState.FAILED:
@@ -360,8 +343,10 @@ class Mount(CoreSysAttributes, ABC):
                 f"Could not reload mount {self.name} due to: {err!s}", _LOGGER.error
             ) from err
         else:
-            if await self._update_unit():
-                await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
+            if unit := await self._update_unit():
+                await self._update_state_await(
+                    unit, not_state=UnitActiveState.ACTIVATING
+                )

         if not await self.is_mounted():
             raise MountActivationError(
@@ -381,18 +366,18 @@ class NetworkMount(Mount, ABC):
         """Return dictionary representation."""
         out = MountData(server=self.server, **super().to_dict())
         if self.port is not None:
-            out[ATTR_PORT] = self.port
+            out["port"] = self.port
         return out

     @property
     def server(self) -> str:
         """Get server."""
-        return self._data[ATTR_SERVER]
+        return self._data["server"]

     @property
     def port(self) -> int | None:
         """Get port, returns none if using the protocol default."""
-        return self._data.get(ATTR_PORT)
+        return self._data.get("port")

     @property
     def where(self) -> PurePath:
@@ -420,31 +405,31 @@ class CIFSMount(NetworkMount):
     def to_dict(self, *, skip_secrets: bool = True) -> MountData:
         """Return dictionary representation."""
         out = MountData(share=self.share, **super().to_dict())
-        if not skip_secrets and self.username is not None:
-            out[ATTR_USERNAME] = self.username
-            out[ATTR_PASSWORD] = self.password
-            out[ATTR_VERSION] = self.version
+        if not skip_secrets and self.username is not None and self.password is not None:
+            out["username"] = self.username
+            out["password"] = self.password
+            out["version"] = self.version
         return out

     @property
     def share(self) -> str:
         """Get share."""
-        return self._data[ATTR_SHARE]
+        return self._data["share"]

     @property
     def username(self) -> str | None:
         """Get username, returns none if auth is not used."""
-        return self._data.get(ATTR_USERNAME)
+        return self._data.get("username")

     @property
     def password(self) -> str | None:
         """Get password, returns none if auth is not used."""
-        return self._data.get(ATTR_PASSWORD)
+        return self._data.get("password")

     @property
     def version(self) -> str | None:
-        """Get password, returns none if auth is not used."""
-        version = self._data.get(ATTR_VERSION)
+        """Get cifs version, returns none if using default."""
+        version = self._data.get("version")
         if version == MountCifsVersion.LEGACY_1_0:
             return "1.0"
         if version == MountCifsVersion.LEGACY_2_0:
@@ -513,7 +498,7 @@ class NFSMount(NetworkMount):
     @property
     def path(self) -> PurePath:
         """Get path."""
-        return PurePath(self._data[ATTR_PATH])
+        return PurePath(self._data["path"])

     @property
     def what(self) -> str:
@@ -543,7 +528,7 @@ class BindMount(Mount):
     def create(
         coresys: CoreSys,
         name: str,
-        path: Path,
+        path: PurePath,
         usage: MountUsage | None = None,
         where: PurePath | None = None,
         read_only: bool = False,
@@ -568,7 +553,7 @@ class BindMount(Mount):
     @property
     def path(self) -> PurePath:
         """Get path."""
-        return PurePath(self._data[ATTR_PATH])
+        return PurePath(self._data["path"])

     @property
     def what(self) -> str:


@@ -103,7 +103,7 @@ class MountData(TypedDict):
     name: str
     type: str
     read_only: bool
-    usage: NotRequired[str]
+    usage: str | None

     # CIFS and NFS fields
     server: NotRequired[str]
@@ -113,6 +113,7 @@ class MountData(TypedDict):
     share: NotRequired[str]
     username: NotRequired[str]
     password: NotRequired[str]
+    version: NotRequired[str | None]

     # NFS and Bind fields
     path: NotRequired[str]

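The `usage` change leans on a typing distinction worth spelling out: `NotRequired[str]` means the key may be absent from the dict, while `str | None` means the key must be present but may hold None. A standalone sketch, independent of the Supervisor code:

from typing import NotRequired, TypedDict


class Example(TypedDict):
    usage: str | None          # {} is invalid, {"usage": None} is valid
    server: NotRequired[str]   # key may be omitted entirely


ok: Example = {"usage": None}
also_ok: Example = {"usage": "backup", "server": "nas.local"}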

@@ -5,7 +5,7 @@ from contextlib import suppress
 from dataclasses import dataclass
 import logging
 from pathlib import Path
-from typing import Any, Final
+from typing import Any, Final, cast

 from awesomeversion import AwesomeVersion
@@ -24,6 +24,7 @@ from ..exceptions import (
 )
 from ..jobs.const import JobCondition, JobExecutionLimit
 from ..jobs.decorator import Job
+from ..resolution.checks.base import CheckBase
 from ..resolution.checks.disabled_data_disk import CheckDisabledDataDisk
 from ..resolution.checks.multiple_data_disks import CheckMultipleDataDisks
 from ..utils.sentry import async_capture_exception
@@ -149,7 +150,7 @@ class DataDisk(CoreSysAttributes):
         Available disks are drives where nothing on it has been mounted
         and it can be formatted.
         """
-        available: list[UDisks2Drive] = []
+        available: list[Disk] = []
         for drive in self.sys_dbus.udisks2.drives:
             block_devices = self._get_block_devices_for_drive(drive)
             primary = _get_primary_block_device(block_devices)
@@ -166,12 +167,16 @@ class DataDisk(CoreSysAttributes):
     @property
     def check_multiple_data_disks(self) -> CheckMultipleDataDisks:
         """Resolution center check for multiple data disks."""
-        return self.sys_resolution.check.get("multiple_data_disks")
+        return cast(
+            CheckMultipleDataDisks, self.sys_resolution.check.get("multiple_data_disks")
+        )

     @property
     def check_disabled_data_disk(self) -> CheckDisabledDataDisk:
         """Resolution center check for disabled data disk."""
-        return self.sys_resolution.check.get("disabled_data_disk")
+        return cast(
+            CheckDisabledDataDisk, self.sys_resolution.check.get("disabled_data_disk")
+        )

     def _get_block_devices_for_drive(self, drive: UDisks2Drive) -> list[UDisks2Block]:
         """Get block devices for a drive."""
@@ -361,7 +366,7 @@ class DataDisk(CoreSysAttributes):
         try:
             partition_block = await UDisks2Block.new(
-                partition, self.sys_dbus.bus, sync_properties=False
+                partition, self.sys_dbus.connected_bus, sync_properties=False
             )
         except DBusError as err:
             raise HassOSDataDiskError(
@@ -388,7 +393,7 @@ class DataDisk(CoreSysAttributes):
             properties[DBUS_IFACE_BLOCK][DBUS_ATTR_ID_LABEL]
             == FILESYSTEM_LABEL_DATA_DISK
         ):
-            check = self.check_multiple_data_disks
+            check: CheckBase = self.check_multiple_data_disks
         elif (
             properties[DBUS_IFACE_BLOCK][DBUS_ATTR_ID_LABEL]
             == FILESYSTEM_LABEL_DISABLED_DATA_DISK
@@ -411,7 +416,7 @@ class DataDisk(CoreSysAttributes):
             and issue.context == self.check_multiple_data_disks.context
             for issue in self.sys_resolution.issues
         ):
-            check = self.check_multiple_data_disks
+            check: CheckBase = self.check_multiple_data_disks
         elif any(
             issue.type == self.check_disabled_data_disk.issue
             and issue.context == self.check_disabled_data_disk.context

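The `cast()` calls exist because the check registry is typed against the base class while these properties promise concrete subclasses. A sketch with stand-in classes (not the Supervisor's):

from typing import cast


class CheckBase:
    pass


class CheckMultipleDataDisks(CheckBase):
    pass


def get_check(name: str) -> CheckBase:
    # Registry lookups can only promise the base interface.
    return CheckMultipleDataDisks()


# The caller knows which concrete check lives under this name.
check = cast(CheckMultipleDataDisks, get_check("multiple_data_disks"))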

@@ -1,11 +1,11 @@
 """OS support on supervisor."""

-from collections.abc import Awaitable
 from dataclasses import dataclass
 from datetime import datetime
 import errno
 import logging
 from pathlib import Path, PurePath
+from typing import cast

 import aiohttp
 from awesomeversion import AwesomeVersion, AwesomeVersionException
@@ -61,8 +61,8 @@ class SlotStatus:
             device=PurePath(data["device"]),
             bundle_compatible=data.get("bundle.compatible"),
             sha256=data.get("sha256"),
-            size=data.get("size"),
-            installed_count=data.get("installed.count"),
+            size=cast(int | None, data.get("size")),
+            installed_count=cast(int | None, data.get("installed.count")),
             bundle_version=AwesomeVersion(data["bundle.version"])
             if "bundle.version" in data
             else None,
@@ -70,51 +70,17 @@ class SlotStatus:
             if "installed.timestamp" in data
             else None,
             status=data.get("status"),
-            activated_count=data.get("activated.count"),
+            activated_count=cast(int | None, data.get("activated.count")),
             activated_timestamp=datetime.fromisoformat(data["activated.timestamp"])
             if "activated.timestamp" in data
             else None,
-            boot_status=data.get("boot-status"),
+            boot_status=RaucState(data["boot-status"])
+            if "boot-status" in data
+            else None,
             bootname=data.get("bootname"),
             parent=data.get("parent"),
         )

-    def to_dict(self) -> SlotStatusDataType:
-        """Get dictionary representation."""
-        out: SlotStatusDataType = {
-            "class": self.class_,
-            "type": self.type_,
-            "state": self.state,
-            "device": self.device.as_posix(),
-        }
-
-        if self.bundle_compatible is not None:
-            out["bundle.compatible"] = self.bundle_compatible
-        if self.sha256 is not None:
-            out["sha256"] = self.sha256
-        if self.size is not None:
-            out["size"] = self.size
-        if self.installed_count is not None:
-            out["installed.count"] = self.installed_count
-        if self.bundle_version is not None:
-            out["bundle.version"] = str(self.bundle_version)
-        if self.installed_timestamp is not None:
-            out["installed.timestamp"] = str(self.installed_timestamp)
-        if self.status is not None:
-            out["status"] = self.status
-        if self.activated_count is not None:
-            out["activated.count"] = self.activated_count
-        if self.activated_timestamp:
-            out["activated.timestamp"] = str(self.activated_timestamp)
-        if self.boot_status:
-            out["boot-status"] = self.boot_status
-        if self.bootname is not None:
-            out["bootname"] = self.bootname
-        if self.parent is not None:
-            out["parent"] = self.parent
-
-        return out
-

 class OSManager(CoreSysAttributes):
     """OS interface inside supervisor."""
@@ -148,7 +114,11 @@ class OSManager(CoreSysAttributes):
     def need_update(self) -> bool:
         """Return true if a HassOS update is available."""
         try:
-            return self.version < self.latest_version
+            return (
+                self.version is not None
+                and self.latest_version is not None
+                and self.version < self.latest_version
+            )
         except (AwesomeVersionException, TypeError):
             return False
@@ -176,6 +146,9 @@ class OSManager(CoreSysAttributes):
     def get_slot_name(self, boot_name: str) -> str:
         """Get slot name from boot name."""
+        if not self._slots:
+            raise HassOSSlotNotFound()
+
         for name, status in self._slots.items():
             if status.bootname == boot_name:
                 return name
@@ -288,11 +261,8 @@ class OSManager(CoreSysAttributes):
         conditions=[JobCondition.HAOS],
         on_condition=HassOSJobError,
     )
-    async def config_sync(self) -> Awaitable[None]:
-        """Trigger a host config reload from usb.
-
-        Return a coroutine.
-        """
+    async def config_sync(self) -> None:
+        """Trigger a host config reload from usb."""
         _LOGGER.info(
             "Synchronizing configuration from USB with Home Assistant Operating System."
         )
@@ -314,6 +284,10 @@ class OSManager(CoreSysAttributes):
         version = version or self.latest_version

         # Check installed version
+        if not version:
+            raise HassOSUpdateError(
+                "No version information available, cannot update", _LOGGER.error
+            )
+
         if version == self.version:
             raise HassOSUpdateError(
                 f"Version {version!s} is already installed", _LOGGER.warning

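The `need_update()` guard matters because comparing None against an AwesomeVersion raises TypeError; the exception was already caught, but the explicit None checks also narrow the types for the checker. A standalone sketch:

from awesomeversion import AwesomeVersion, AwesomeVersionException


def need_update(current: AwesomeVersion | None, latest: AwesomeVersion | None) -> bool:
    try:
        return current is not None and latest is not None and current < latest
    except (AwesomeVersionException, TypeError):
        return False


print(need_update(AwesomeVersion("15.1"), AwesomeVersion("15.2")))  # True
print(need_update(None, AwesomeVersion("15.2")))                    # False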

@@ -22,6 +22,7 @@ from ..exceptions import (
     AudioUpdateError,
     ConfigurationFileError,
     DockerError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -127,7 +128,7 @@ class PluginAudio(PluginBase):
         """Update Audio plugin."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise AudioUpdateError("Audio update failed", _LOGGER.error) from err

     async def restart(self) -> None:


@@ -63,7 +63,11 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
     def need_update(self) -> bool:
         """Return True if an update is available."""
         try:
-            return self.version < self.latest_version
+            return (
+                self.version is not None
+                and self.latest_version is not None
+                and self.version < self.latest_version
+            )
         except (AwesomeVersionException, TypeError):
             return False
@@ -153,6 +157,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
     async def start(self) -> None:
         """Start system plugin."""

+    @abstractmethod
+    async def stop(self) -> None:
+        """Stop system plugin."""
+
     async def load(self) -> None:
         """Load system plugin."""
         self.start_watchdog()
@@ -160,14 +168,14 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         # Check plugin state
         try:
             # Evaluate Version if we lost this information
-            if not self.version:
-                self.version = await self.instance.get_latest_version()
+            if self.version:
+                version = self.version
+            else:
+                self.version = version = await self.instance.get_latest_version()

-            await self.instance.attach(
-                version=self.version, skip_state_event_if_down=True
-            )
+            await self.instance.attach(version=version, skip_state_event_if_down=True)

-            await self.instance.check_image(self.version, self.default_image)
+            await self.instance.check_image(version, self.default_image)
         except DockerError:
             _LOGGER.info(
                 "No %s plugin Docker image %s found.", self.slug, self.instance.image
@@ -177,7 +185,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
             with suppress(PluginError):
                 await self.install()
         else:
-            self.version = self.instance.version
+            self.version = self.instance.version or version
             self.image = self.default_image
             await self.save_data()
@@ -194,11 +202,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
             if not self.latest_version:
                 await self.sys_updater.reload()

-            if self.latest_version:
+            if to_version := self.latest_version:
                 with suppress(DockerError):
-                    await self.instance.install(
-                        self.latest_version, image=self.default_image
-                    )
+                    await self.instance.install(to_version, image=self.default_image)
+                    self.version = self.instance.version or to_version
                     break
             _LOGGER.warning(
                 "Error on installing %s plugin, retrying in 30sec", self.slug
@@ -206,23 +213,28 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
             await asyncio.sleep(30)

         _LOGGER.info("%s plugin now installed", self.slug)
-        self.version = self.instance.version
         self.image = self.default_image
         await self.save_data()

     async def update(self, version: str | None = None) -> None:
         """Update system plugin."""
-        version = version or self.latest_version
+        to_version = AwesomeVersion(version) if version else self.latest_version
+        if not to_version:
+            raise PluginError(
+                f"Cannot determine latest version of plugin {self.slug} for update",
+                _LOGGER.error,
+            )
+
         old_image = self.image

-        if version == self.version:
+        if to_version == self.version:
             _LOGGER.warning(
-                "Version %s is already installed for %s", version, self.slug
+                "Version %s is already installed for %s", to_version, self.slug
             )
             return

-        await self.instance.update(version, image=self.default_image)
-        self.version = self.instance.version
+        await self.instance.update(to_version, image=self.default_image)
+        self.version = self.instance.version or to_version
         self.image = self.default_image
         await self.save_data()


@@ -14,7 +14,7 @@ from ..coresys import CoreSys
 from ..docker.cli import DockerCli
 from ..docker.const import ContainerState
 from ..docker.stats import DockerStats
-from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError
+from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
 from ..utils.sentry import async_capture_exception
@@ -53,7 +53,7 @@ class PluginCli(PluginBase):
         return self.sys_updater.version_cli

     @property
-    def supervisor_token(self) -> str:
+    def supervisor_token(self) -> str | None:
         """Return an access token for the Supervisor API."""
         return self._data.get(ATTR_ACCESS_TOKEN)
@@ -66,7 +66,7 @@ class PluginCli(PluginBase):
         """Update local HA cli."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise CliUpdateError("CLI update failed", _LOGGER.error) from err

     async def start(self) -> None:


@@ -15,7 +15,8 @@ from awesomeversion import AwesomeVersion
 import jinja2
 import voluptuous as vol

-from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel
+from ..bus import EventListener
+from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
 from ..coresys import CoreSys
 from ..dbus.const import MulticastProtocolEnabled
 from ..docker.const import ContainerState
@@ -28,6 +29,7 @@ from ..exceptions import (
     CoreDNSJobError,
     CoreDNSUpdateError,
     DockerError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -71,11 +73,17 @@ class PluginDns(PluginBase):
         self.slug = "dns"
         self.coresys: CoreSys = coresys
         self.instance: DockerDNS = DockerDNS(coresys)
-        self.resolv_template: jinja2.Template | None = None
-        self.hosts_template: jinja2.Template | None = None
+        self._resolv_template: jinja2.Template | None = None
+        self._hosts_template: jinja2.Template | None = None

         self._hosts: list[HostEntry] = []
         self._loop: bool = False
+        self._cached_locals: list[str] | None = None
+
+        # Debouncing system for rapid local changes
+        self._locals_changed_handle: asyncio.TimerHandle | None = None
+        self._restart_after_locals_change_handle: asyncio.Task | None = None
+        self._connectivity_check_listener: EventListener | None = None

     @property
     def hosts(self) -> Path:
@@ -90,6 +98,12 @@ class PluginDns(PluginBase):
     @property
     def locals(self) -> list[str]:
         """Return list of local system DNS servers."""
+        if self._cached_locals is None:
+            self._cached_locals = self._compute_locals()
+        return self._cached_locals
+
+    def _compute_locals(self) -> list[str]:
+        """Compute list of local system DNS servers."""
         servers: list[str] = []
         for server in [
             f"dns://{server!s}" for server in self.sys_host.network.dns_servers
@@ -99,6 +113,52 @@ class PluginDns(PluginBase):

         return servers

+    async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
+        """Handle DNS container state change to running and trigger connectivity check."""
+        if event.name == self.instance.name and event.state == ContainerState.RUNNING:
+            # Wait before CoreDNS actually becomes available
+            await asyncio.sleep(5)
+
+            _LOGGER.debug("CoreDNS started, checking connectivity")
+            await self.sys_supervisor.check_connectivity()
+
+    async def _restart_dns_after_locals_change(self) -> None:
+        """Restart DNS after a debounced delay for local changes."""
+        old_locals = self._cached_locals
+        new_locals = self._compute_locals()
+        if old_locals == new_locals:
+            return
+
+        _LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
+        self._cached_locals = new_locals
+        if not await self.instance.is_running():
+            return
+
+        await self.restart()
+        self._restart_after_locals_change_handle = None
+
+    def _trigger_restart_dns_after_locals_change(self) -> None:
+        """Trigger a restart of DNS after local changes."""
+        # Cancel existing restart task if any
+        if self._restart_after_locals_change_handle:
+            self._restart_after_locals_change_handle.cancel()
+
+        self._restart_after_locals_change_handle = self.sys_create_task(
+            self._restart_dns_after_locals_change()
+        )
+        self._locals_changed_handle = None
+
+    def notify_locals_changed(self) -> None:
+        """Schedule a debounced DNS restart for local changes."""
+        # Cancel existing timer if any
+        if self._locals_changed_handle:
+            self._locals_changed_handle.cancel()
+
+        # Schedule new timer with 1 second delay
+        self._locals_changed_handle = self.sys_call_later(
+            1.0, self._trigger_restart_dns_after_locals_change
+        )
+
     @property
     def servers(self) -> list[str]:
         """Return list of DNS servers."""
@@ -147,11 +207,25 @@ class PluginDns(PluginBase):
         """Set fallback DNS enabled."""
         self._data[ATTR_FALLBACK] = value

+    @property
+    def hosts_template(self) -> jinja2.Template:
+        """Get hosts jinja template."""
+        if not self._hosts_template:
+            raise RuntimeError("Hosts template not set!")
+        return self._hosts_template
+
+    @property
+    def resolv_template(self) -> jinja2.Template:
+        """Get resolv jinja template."""
+        if not self._resolv_template:
+            raise RuntimeError("Resolv template not set!")
+        return self._resolv_template
+
     async def load(self) -> None:
         """Load DNS setup."""
         # Initialize CoreDNS Template
         try:
-            self.resolv_template = jinja2.Template(
+            self._resolv_template = jinja2.Template(
                 await self.sys_run_in_executor(RESOLV_TMPL.read_text, encoding="utf-8")
             )
         except OSError as err:
@@ -162,7 +236,7 @@ class PluginDns(PluginBase):
             _LOGGER.error("Can't read resolve.tmpl: %s", err)

         try:
-            self.hosts_template = jinja2.Template(
+            self._hosts_template = jinja2.Template(
                 await self.sys_run_in_executor(HOSTS_TMPL.read_text, encoding="utf-8")
             )
         except OSError as err:
@@ -173,11 +247,26 @@ class PluginDns(PluginBase):
             _LOGGER.error("Can't read hosts.tmpl: %s", err)

         await self._init_hosts()
+
+        # Register Docker event listener for connectivity checks
+        if not self._connectivity_check_listener:
+            self._connectivity_check_listener = self.sys_bus.register_event(
+                BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
+            )
+
         await super().load()

         # Update supervisor
-        await self._write_resolv(HOST_RESOLV)
-        await self.sys_supervisor.check_connectivity()
+        # Resolv template should always be set but just in case don't fail load
+        if self._resolv_template:
+            await self._write_resolv(HOST_RESOLV)
+
+        # Reinitializing aiohttp.ClientSession after DNS setup makes sure that
+        # aiodns is using the right DNS servers (see #5857).
+        # At this point it should be fairly safe to replace the session since
+        # we only use the session synchronously during setup and not thorugh the
+        # API which previously caused issues (see #5851).
+        await self.coresys.init_websession()

     async def install(self) -> None:
         """Install CoreDNS."""
@@ -195,7 +284,7 @@ class PluginDns(PluginBase):
         """Update CoreDNS plugin."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err

     async def restart(self) -> None:
@@ -205,7 +294,7 @@ class PluginDns(PluginBase):
         try:
             await self.instance.restart()
         except DockerError as err:
-            raise CoreDNSError("Can't start CoreDNS plugin", _LOGGER.error) from err
+            raise CoreDNSError("Can't restart CoreDNS plugin", _LOGGER.error) from err

     async def start(self) -> None:
         """Run CoreDNS."""
@@ -220,6 +309,16 @@ class PluginDns(PluginBase):
     async def stop(self) -> None:
         """Stop CoreDNS."""
+        # Cancel any pending locals change timer
+        if self._locals_changed_handle:
+            self._locals_changed_handle.cancel()
+            self._locals_changed_handle = None
+
+        # Wait for any pending restart before stopping
+        if self._restart_after_locals_change_handle:
+            self._restart_after_locals_change_handle.cancel()
+            self._restart_after_locals_change_handle = None
+
         _LOGGER.info("Stopping CoreDNS plugin")
         try:
             await self.instance.stop()
@@ -422,12 +521,6 @@ class PluginDns(PluginBase):
     async def _write_resolv(self, resolv_conf: Path) -> None:
         """Update/Write resolv.conf file."""
-        if not self.resolv_template:
-            _LOGGER.warning(
-                "Resolv template is missing, cannot write/update %s", resolv_conf
-            )
-            return
-
         nameservers = [str(self.sys_docker.network.dns), "127.0.0.11"]

         # Read resolv config

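The debouncing added above follows a common asyncio pattern: a timer handle is reset on every notification, and only the last one schedules the actual restart task. A generic sketch of the same pattern (`sys_call_later` and `sys_create_task` are Supervisor helpers, so plain asyncio equivalents are used here):

import asyncio
from collections.abc import Awaitable, Callable


class Debouncer:
    def __init__(self, delay: float, action: Callable[[], Awaitable[None]]) -> None:
        self._delay = delay
        self._action = action
        self._handle: asyncio.TimerHandle | None = None
        self._task: asyncio.Task | None = None

    def notify(self) -> None:
        # Every notification restarts the timer; only the last burst fires.
        if self._handle:
            self._handle.cancel()
        self._handle = asyncio.get_running_loop().call_later(self._delay, self._fire)

    def _fire(self) -> None:
        # Supersede an action still running from a previous burst.
        if self._task:
            self._task.cancel()
        self._task = asyncio.get_running_loop().create_task(self._action())
        self._handle = None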

@@ -16,6 +16,7 @@ from ..exceptions import (
     MulticastError,
     MulticastJobError,
     MulticastUpdateError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -63,7 +64,7 @@ class PluginMulticast(PluginBase):
         """Update Multicast plugin."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise MulticastUpdateError(
                 "Multicast update failed", _LOGGER.error
             ) from err


@@ -19,6 +19,7 @@ from ..exceptions import (
     ObserverError,
     ObserverJobError,
     ObserverUpdateError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -58,7 +59,7 @@ class PluginObserver(PluginBase):
         return self.sys_updater.version_observer

     @property
-    def supervisor_token(self) -> str:
+    def supervisor_token(self) -> str | None:
         """Return an access token for the Observer API."""
         return self._data.get(ATTR_ACCESS_TOKEN)
@@ -71,7 +72,7 @@ class PluginObserver(PluginBase):
         """Update local HA observer."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise ObserverUpdateError(
                 "HA observer update failed", _LOGGER.error
             ) from err
@@ -90,6 +91,10 @@ class PluginObserver(PluginBase):
             _LOGGER.error("Can't start observer plugin")
             raise ObserverError() from err

+    async def stop(self) -> None:
+        """Raise. Supervisor should not stop observer."""
+        raise RuntimeError("Stopping observer without a restart is not supported!")
+
     async def stats(self) -> DockerStats:
         """Return stats of observer."""
         try:
