Compare commits


89 Commits

Author SHA1 Message Date
Stefan Agner
deac85bddb
Scrub WiFi fields from Sentry events (#6048)
Make sure WiFi fields are scrubbed from Sentry events to prevent
accidental exposure of sensitive information.
2025-07-29 17:42:43 +02:00
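The scrubbing described above can be sketched as a `before_send`-style filter. This is a minimal illustration only: the field names below are assumptions for the example, not the Supervisor's actual scrub list.

```python
# Hypothetical sketch of scrubbing WiFi-related fields from a Sentry event.
# The field names below are illustrative assumptions, not Supervisor's list.
SENSITIVE_KEYS = {"ssid", "psk", "password"}

def scrub_wifi_fields(event):
    """Recursively replace sensitive WiFi values before the event is sent."""
    if isinstance(event, dict):
        return {
            key: "<redacted>" if key.lower() in SENSITIVE_KEYS
            else scrub_wifi_fields(value)
            for key, value in event.items()
        }
    if isinstance(event, list):
        return [scrub_wifi_fields(item) for item in event]
    return event

# A function like this can be registered as sentry-sdk's before_send hook:
# sentry_sdk.init(dsn=..., before_send=lambda event, hint: scrub_wifi_fields(event))
```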
Stefan Agner
7dcf5ba631
Enable IPv6 for containers on new installations (#6029)
* Enable IPv6 by default for new installations

Enable IPv6 by default for new Supervisor installations. Let's also
make the `enable_ipv6` attribute nullable, so we can distinguish
between "not set" and "set to false".

* Add pytest

* Add log message that system restart is required for IPv6 changes

* Fix API pytest

* Create resolution center issue when reboot is required

* Order log after actual setter call
2025-07-29 15:59:03 +02:00
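The nullable `enable_ipv6` attribute described above amounts to a tri-state setting. A minimal sketch of the idea, with names and the default value assumed for illustration rather than taken from the Supervisor's schema:

```python
# Tri-state IPv6 setting: None = "not set", True/False = explicit choice.
# Names and the default below are illustrative, not Supervisor's actual schema.
from __future__ import annotations

DEFAULT_ENABLE_IPV6 = True  # assumed default for new installations

def effective_ipv6(enable_ipv6: bool | None) -> bool:
    """Resolve the stored value, falling back to the default only when unset."""
    if enable_ipv6 is None:
        return DEFAULT_ENABLE_IPV6
    return enable_ipv6
```

Making the attribute nullable is what lets "the user never touched this" be distinguished from "the user explicitly disabled it".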
dependabot[bot]
a004830131
Bump orjson from 3.11.0 to 3.11.1 (#6045)
Bumps [orjson](https://github.com/ijl/orjson) from 3.11.0 to 3.11.1.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.11.0...3.11.1)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:42 +02:00
dependabot[bot]
a8cc6c416d
Bump coverage from 7.10.0 to 7.10.1 (#6044)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.10.0 to 7.10.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.10.0...7.10.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.10.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 10:41:19 +02:00
dependabot[bot]
74b26642b0
Bump ruff from 0.12.4 to 0.12.5 (#6042)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:20:27 +02:00
dependabot[bot]
5e26ab5f4a
Bump gitpython from 3.1.44 to 3.1.45 (#6039)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-27 20:14:24 +02:00
dependabot[bot]
a841cb8282
Bump coverage from 7.9.2 to 7.10.0 (#6043) 2025-07-27 10:31:48 +02:00
dependabot[bot]
3b1b03c8a7
Bump dbus-fast from 2.44.1 to 2.44.2 (#6038)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.44.1 to 2.44.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.44.1...v2.44.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-version: 2.44.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 16:06:19 -04:00
dependabot[bot]
680428f304
Bump sentry-sdk from 2.33.0 to 2.33.2 (#6037)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.33.0 to 2.33.2.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.33.0...2.33.2)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:44:35 -04:00
dependabot[bot]
f34128c37e
Bump ruff from 0.12.3 to 0.12.4 (#6031)
---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 12:43:56 -04:00
dependabot[bot]
2ed0682b34
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2 (#6032) 2025-07-18 10:00:58 +02:00
Stefan Agner
fbb0915ef8
Mark system as unhealthy if multiple OS installations are found (#6024)
* Add resolution check for duplicate OS installations

* Only create single issue/use separate unhealthy type

* Check MBR partition UUIDs as well

* Use partlabel

* Use generator to avoid code duplication

* Add list of devices, avoid unnecessary exception handling

* Run check only on HAOS

* Fix message formatting

* Fix and simplify pytests

* Fix UnhealthyReason sort order
2025-07-17 10:06:35 +02:00
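The duplicate-installation check above (using partlabels and a generator) can be sketched roughly as follows; the label and device names are made up for the example and do not mirror the actual resolution check:

```python
# Illustrative sketch of detecting duplicate OS installations by partition
# label; label and device names are assumptions, not Supervisor's actual code.
from collections import Counter

def duplicate_os_partlabels(devices):
    """Yield partition labels that appear on more than one device."""
    counts = Counter(dev["partlabel"] for dev in devices if dev.get("partlabel"))
    for label, count in counts.items():
        if count > 1:
            yield label

devices = [
    {"device": "/dev/sda1", "partlabel": "hassos-system"},
    {"device": "/dev/sdb1", "partlabel": "hassos-system"},
    {"device": "/dev/sdb2", "partlabel": "hassos-data"},
]
```

Seeing the same OS partition label on two block devices is the signal that a second installation is present, at which point the system is marked unhealthy.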
Stefan Agner
780ae1e15c
Check for duplicate data disks only when the OS is available (#6025)
* Check for duplicate data disks only when the OS is available

Supervised installations do not have a specific data disk, so only
check for duplicate data disks on Home Assistant OS.

* Enable OS for multiple data disks check test
2025-07-16 10:43:15 +02:00
dependabot[bot]
c617358855
Bump orjson from 3.10.18 to 3.11.0 (#6028)
Bumps [orjson](https://github.com/ijl/orjson) from 3.10.18 to 3.11.0.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.10.18...3.11.0)

---
updated-dependencies:
- dependency-name: orjson
  dependency-version: 3.11.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:24:34 +02:00
dependabot[bot]
b679c4f4d8
Bump sentry-sdk from 2.32.0 to 2.33.0 (#6027)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.32.0 to 2.33.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.32.0...2.33.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:20:28 +02:00
dependabot[bot]
c946c421f2
Bump debugpy from 1.8.14 to 1.8.15 (#6026)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.8.14 to 1.8.15.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.8.14...v1.8.15)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-version: 1.8.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:19:44 +02:00
dependabot[bot]
aeabf7ea25
Bump blockbuster from 1.5.24 to 1.5.25 (#6020)
Bumps [blockbuster](https://github.com/cbornet/blockbuster) from 1.5.24 to 1.5.25.
- [Release notes](https://github.com/cbornet/blockbuster/releases)
- [Commits](https://github.com/cbornet/blockbuster/commits/v1.5.25)

---
updated-dependencies:
- dependency-name: blockbuster
  dependency-version: 1.5.25
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:18:57 +02:00
dependabot[bot]
365b838abf
Bump mypy from 1.16.1 to 1.17.0 (#6019)
Bumps [mypy](https://github.com/python/mypy) from 1.16.1 to 1.17.0.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.16.1...v1.17.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-version: 1.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-16 09:08:57 +02:00
Stefan Agner
99c040520e
Drop ensure_builtin_repositories() (#6012)
* Drop ensure_builtin_repositories

With the new Repository classes we have the is_builtin property, so we
can easily make sure that built-ins are not removed. This allows us to
further cleanup the code by removing the ensure_builtin_repositories
function and the ALL_BUILTIN_REPOSITORIES constant.

* Make sure we add built-ins on load

* Reuse default set and avoid unnecessary copy

Reuse default set and avoid unnecessary copying during validation if
the default is not being used.
2025-07-14 22:19:06 +02:00
dependabot[bot]
eefe2f2e06
Bump aiohttp from 3.12.13 to 3.12.14 (#6014)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:43:55 +02:00
dependabot[bot]
a366e36b37
Bump ruff from 0.12.2 to 0.12.3 (#6016)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:19:08 +02:00
dependabot[bot]
27a2fde9e1
Bump astroid from 3.3.10 to 3.3.11 (#6017)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-14 11:18:54 +02:00
Stefan Agner
9a0f530a2f
Add Supervisor connectivity check after DNS restart (#6005)
* Add Supervisor connectivity check after DNS restart

When the DNS plug-in got restarted, check Supervisor connectivity
in case the DNS plug-in configuration change influenced Supervisor
connectivity. This is helpful when a DHCP server gets started after
Home Assistant is up. In that case the network provided DNS server
(local DNS server) becomes available after the DNS plug-in restart.

Without this change, the Supervisor connectivity will remain false
until a Job triggers a connectivity check, for example the
periodic update check (which causes an updater and store reload) by
Core.

* Fix pytest and add coverage for new functionality
2025-07-10 11:08:10 +02:00
Stefan Agner
baf9695cf7
Refactoring around add-on store Repository classes (#5990)
* Rename repository fixture to test_repository

Also don't remove the built-in repositories. The list was incomplete,
and tests don't seem to require that anymore.

* Get rid of StoreType

The type doesn't have much value; we have constant strings anyway.

* Introduce types.py

* Use slug to determine which repository urls to return

* Simplify BuiltinRepository enum

* Mock GitRepo load

* Improve URL handling and repository creation logic

* Refactor update_repositories

* Get rid of get_from_url

It is no longer used in production code.

* More refactoring

* Address pylint

* Introduce is_git_based property to Repository class

Return all git based URLs, including the Core repository.

* Revert "Introduce is_git_based property to Repository class"

This reverts commit dfd5ad79bf23e0e127fc45d97d6f8de0e796faa0.

* Fold type.py into const.py

Align more with how Supervisor code is typically structured.

* Update supervisor/store/__init__.py

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Apply repository remove suggestion

* Fix tests

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-10 11:07:53 +02:00
Stefan Agner
7873c457d5
Small improvement to Copilot instructions (#6011) 2025-07-10 11:05:59 +02:00
Stefan Agner
cbc48c381f
Return 401 Unauthorized when using json/url encoded auth fails (#5844)
When authentication using JSON payload or URL encoded payload fails,
use the generic HTTP response code 401 Unauthorized instead of 400
Bad Request.

This is a more appropriate response code for authentication errors
and is consistent with the behavior of other authentication methods.
2025-07-10 08:38:00 +02:00
Franck Nijhof
11e37011bd
Add Task issue form (#6007) 2025-07-09 16:58:10 +02:00
Franck Nijhof
cfda559a90
Adjust feature request links in issue reporting (#6009) 2025-07-09 16:44:35 +02:00
Mike Degatano
806bd9f52c
Apply store reload suggestion automatically on connectivity change (#6004)
* Apply store reload suggestion automatically on connectivity change

* Use sys_bus not coresys.bus

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-09 16:43:51 +02:00
Stefan Agner
953f7d01d7
Improve DNS plug-in restart (#5999)
* Improve DNS plug-in restart

Instead of simply going by PrimaryConnection change, use the DnsManager
Configuration property. This property is ultimately used to write the
DNS plug-in configuration, so it is really the relevant information
we pass on to the plug-in.

* Check for changes and restart DNS plugin

* Check for changes in plug-in DNS

Cache last local (NetworkManager) provided DNS servers. Check against
this DNS server list when deciding when to restart the DNS plug-in.

* Check connectivity unthrottled in certain situations

* Fix pytest

* Fix pytest

* Improve test coverage for DNS plugins restart functionality

* Apply suggestions from code review

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Debounce local DNS changes and event based connectivity checks

* Remove connection check logic

* Remove unthrottled connectivity check

* Fix delayed call

* Store restart task and cancel in case a restart is running

* Improve DNS configuration change tests

* Remove stale code

* Improve DNS plug-in tests, less mocking

* Cover multiple private functions at once

Improve tests around notify_locals_changed() to cover multiple
functions at once.

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2025-07-09 11:35:03 +02:00
Felipe Santos
381e719a0e
Allow to force rebuild of add-ons (#6002) 2025-07-07 21:41:18 +02:00
Ruben van Dijk
296071067d
Fix multiple set-cookie headers with addons ingress (#5996) 2025-07-07 19:27:39 +02:00
dependabot[bot]
8336537f51
Bump types-docker from 7.1.0.20250523 to 7.1.0.20250705 (#6003)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 10:00:26 +02:00
Stefan Agner
5c90a00263
Force reload of /etc/resolv.conf on WebSession init (#6000) 2025-07-05 12:18:02 +02:00
dependabot[bot]
1f2bf77784
Bump coverage from 7.9.1 to 7.9.2 (#5992)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:54:36 +02:00
dependabot[bot]
9aa4f381b8
Bump ruff from 0.12.1 to 0.12.2 (#5993)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-04 08:47:35 +02:00
Mike Degatano
ae036ceffe
Don't backup uninstalled addons (#5988)
* Don't backup uninstalled addons

* Remove hash in backup
2025-07-04 07:05:53 +02:00
Stefan Agner
f0ea0d4a44
Add GitHub Copilot/Claude instruction (#5986)
* Add GitHub Copilot/Claude instruction

This adds an initial instruction file for GitHub Copilot and Claude
(CLAUDE.md symlinked to the same file).

* Add --ignore-missing-imports to mypy, add note to run pre-commit
2025-07-04 07:05:05 +02:00
Mike Degatano
abc44946bb
Refactor addon git repo (#5987)
* Refactor Repository into setup with inheritance

* Remove subclasses of GitRepo
2025-07-03 13:53:52 +02:00
dependabot[bot]
3e20a0937d
Bump cryptography from 45.0.4 to 45.0.5 (#5989)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.4 to 45.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/45.0.4...45.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 45.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-03 09:52:50 +02:00
Mike Degatano
6cebf52249
Store reset only deletes git cache after clone was successful (#5984)
* Store reset only deletes git cache after clone was successful

* Add test and fix fallback error handling

* Fix when lock is grabbed
2025-07-02 14:34:18 -04:00
Felipe Santos
bc57deb474
Use Docker BuildKit to build addons (#5974)
* Use Docker BuildKit to build addons

* Improve error message as suggested by CodeRabbit

* Fix container.remove() tests missing v=True

* Ignore squash rather than falling back to legacy builder

* Use version rather than tag to avoid confusion in run_command()

* Fix tests differently

* Use PropertyMock like other tests

* Restore position of fix_label fn

* Exempt addon builder image from unsupported checks

* Refactor tests

* Fix tests expecting wrong builder image

* Remove hardcoded paths

* Fix tests

* Remove get_addon_host_path() function

* Use docker buildx build rather than docker build

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-07-02 17:33:41 +02:00
Mike Degatano
38750d74a8
Refactor builtin repositories to enum (#5976) 2025-06-30 13:22:00 -04:00
Felipe Santos
d1c1a2d418
Fix docker.run_command() needing detach but not enforcing it (#5979)
* Fix `docker.run_command()` needing `detach` but not enforcing it

* Fix test
2025-06-30 16:09:19 +02:00
Felipe Santos
cf32f036c0
Fix docker_home_assistant_execute_command not honoring HA version (#5978)
* Fix `docker_home_assistant_execute_command` not honoring HA version

* Change variable name to image_with_tag

* Fix test
2025-06-30 16:08:05 +02:00
Felipe Santos
b8852872fe
Remove anonymous volumes when removing containers (#5977)
* Remove anonymous volumes when removing containers

* Add tests for docker.run_command()
2025-06-30 13:31:41 +02:00
dependabot[bot]
779f47e25d
Bump sentry-sdk from 2.31.0 to 2.32.0 (#5982)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-30 10:16:41 +02:00
dependabot[bot]
be8b36b560
Bump ruff from 0.12.0 to 0.12.1 (#5981)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 09:08:50 +02:00
dependabot[bot]
8378d434d4
Bump sentry-sdk from 2.30.0 to 2.31.0 (#5975)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.30.0 to 2.31.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.30.0...2.31.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 08:57:12 +02:00
Stefan Agner
0b79e09bc0
Add code documentation for Jobs decorator (#5965)
Add basic code documentation to the Jobs decorator.
2025-06-24 15:48:04 +02:00
Stefan Agner
d747a59696
Fix CLI/Observer access token property (#5973)
The token_validation() code in the security middleware
potentially accesses the access token property before the Supervisor
starts the CLI/Observer plugins, which leads to a KeyError when
trying to access the `access_token` property. This change ensures
that no KeyError is raised; None is returned instead.
2025-06-24 12:10:36 +02:00
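The fix above follows a common pattern: guard the dictionary lookup so an unset token yields `None` instead of raising `KeyError`. A minimal sketch with assumed attribute names, not the actual plugin code:

```python
# Illustrative stand-in for a CLI/Observer-style plugin; names are assumed.
from __future__ import annotations

class PluginBase:
    def __init__(self):
        self._data: dict[str, str] = {}  # token not stored until plugin starts

    @property
    def access_token(self) -> str | None:
        # dict.get() returns None for a missing key instead of raising KeyError
        return self._data.get("access_token")

plugin = PluginBase()
assert plugin.access_token is None  # safe before the plugin has started
plugin._data["access_token"] = "abc123"
assert plugin.access_token == "abc123"
```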
Mike Degatano
3ee7c082ec
Add mypy to ci and precommit (#5969)
* Add mypy to ci and precommit

* Run precommit mypy in venv

* Fix issues raised in latest version of mypy
2025-06-24 11:48:03 +02:00
dependabot[bot]
3f921e50b3
Bump getsentry/action-release from 3.1.2 to 3.2.0 (#5972)
Bumps [getsentry/action-release](https://github.com/getsentry/action-release) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/getsentry/action-release/releases)
- [Changelog](https://github.com/getsentry/action-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/action-release/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: getsentry/action-release
  dependency-version: 3.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:27 +02:00
dependabot[bot]
0370320f75
Bump sigstore/cosign-installer from 3.9.0 to 3.9.1 (#5971)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.0 to 3.9.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.9.0...v3.9.1)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 10:08:19 +02:00
Stefan Agner
1e19e26ef3
Update request feature link (#5968)
Feature requests are now collected using the org-wide GitHub Community.
Update the link accordingly.

While at it, also remove the unused ISSUE_TEMPLATE.md and align the
title to create issues with what is used in Home Assistant Core's
template.
2025-06-23 13:00:55 +02:00
Stefan Agner
e1a18eeba8
Use aiodns explicit close method (#5966) 2025-06-23 10:13:43 +02:00
Stefan Agner
b030879efd
Rename detect-blocking-io API value to match other APIs (#5964)
* Rename detect-blocking-io API value to match other APIs

For the new detect-blocking-io option, use dashes instead of
underscores in `on-at-startup` for consistency with other API
endpoints.

This is a breaking change, but since the API is new and not widely
used yet, it is fairly safe to do so.

* Fix pytest
2025-06-20 12:52:12 +02:00
dependabot[bot]
dfa1602ac6
Bump getsentry/action-release from 3.1.1 to 3.1.2 (#5963)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 10:33:47 +02:00
dependabot[bot]
bbda943583
Bump urllib3 from 2.4.0 to 2.5.0 (#5962)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 10:33:33 +02:00
Mike Degatano
aea15b65b7
Fix mypy issues in store, utils and all other source files (#5957)
* Fix mypy issues in store module

* Fix mypy issues in utils module

* Fix mypy issues in all remaining source files

* Fix ingress user typeddict

* Fixes from feedback

* Fix mypy issues after installing docker-types
2025-06-18 12:40:12 -04:00
dependabot[bot]
5c04249e41
Bump pytest from 8.4.0 to 8.4.1 (#5960)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.4.0 to 8.4.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.4.0...8.4.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 8.4.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 15:43:22 +02:00
dependabot[bot]
456cec7ed1
Bump ruff from 0.11.13 to 0.12.0 (#5959)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.13 to 0.12.0.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.13...0.12.0)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 12:06:45 +02:00
dependabot[bot]
52a519e55c
Bump sigstore/cosign-installer from 3.8.2 to 3.9.0 (#5958)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.8.2 to 3.9.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.8.2...v3.9.0)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-18 10:57:20 +02:00
Stefan Agner
fcb20d0ae8
Remove bug label from issue template (#5955)
Don't label new issues with the bug label by default. We started making
use of issue types, so if anything, this should be type "Bug". However,
we prefer to leave the type unspecified until the issue has been triaged.
2025-06-17 13:10:52 +02:00
Stefan Agner
9b3f2b17bd
Remove AES cipher from backup (#5954)
AES cipher is no longer needed since Docker repository authentication
has been removed from backups in #5605.
2025-06-16 20:14:21 +02:00
Stefan Agner
3d026b9534
Expose machine ID (#5953)
Expose the unique machine ID of the local system via the Supervisor
API. This allows to identify a particular machine across reboots,
backup restores and updates. The machine ID is a stable identifier
that does not change unless the underlying hardware is changed or
the operating system is reinstalled.
2025-06-16 20:14:13 +02:00
Mike Degatano
0e8ace949a
Fix mypy issues in plugins and resolution (#5946)
* Fix mypy issues in plugins

* Fix mypy issues in resolution module

* fix misses in resolution check

* Fix signatures on evaluate methods

* nitpick fix suggestions
2025-06-16 14:12:47 -04:00
Stefan Agner
1fe6f8ad99
Bump cosign to v2.4.3 (#5945)
Follow the builder bump of 2025.02.0 and use cosign v2.4.3 for
Supervisor too.
2025-06-16 20:12:27 +02:00
dependabot[bot]
9ef2352d12
Bump sentry-sdk from 2.29.1 to 2.30.0 (#5947)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.29.1 to 2.30.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.29.1...2.30.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.30.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 11:48:34 +02:00
dependabot[bot]
2543bcae29
Bump aiohttp from 3.12.12 to 3.12.13 (#5952)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 11:48:20 +02:00
dependabot[bot]
ad9de9f73c
Bump coverage from 7.9.0 to 7.9.1 (#5949)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.9.0 to 7.9.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.9.0...7.9.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.9.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:26:18 +02:00
dependabot[bot]
a5556651ae
Bump pytest-cov from 6.2.0 to 6.2.1 (#5948)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.2.0 to 6.2.1.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.2.0...v6.2.1)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 6.2.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:26:05 +02:00
dependabot[bot]
ac28deff6d
Bump aiodns from 3.4.0 to 3.5.0 (#5951)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/aio-libs/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/v3.4.0...v3.5.0)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 10:06:18 +02:00
Mike Degatano
82ee4bc441
Fix mypy issues in misc, mounts and os modules (#5942)
* Fix mypy errors in misc and mounts

* Fix mypy issues in os module

* Fix typing of capture_exception

* avoid unnecessary property call

* Fixes from feedback
2025-06-12 18:06:57 -04:00
Stefan Agner
bdbd09733a
Avoid aiodns resolver memory leak (#5941)
* Avoid aiodns resolver memory leak

In certain cases, the aiodns resolver can leak memory. This can also
lead to a fatal `Python error… ffi.from_handle()`. This change addresses
the issue by ensuring that the resolver is properly closed
when it is no longer needed.

* Address coderabbitai feedback

* Fix pytest

* Fix pytest
2025-06-12 11:32:53 +02:00
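The cleanup pattern above, closing the resolver once it is no longer needed, can be sketched with a stand-in class; `FakeResolver` and `resolve_with_cleanup` are invented for the example and are not aiodns or Supervisor APIs:

```python
# Sketch of explicitly closing a resolver when done, to avoid leaking it.
# FakeResolver stands in for an aiodns-style resolver; only close() matters here.
import asyncio

class FakeResolver:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

async def resolve_with_cleanup(resolver, name):
    """Run a lookup and guarantee the resolver is closed afterwards."""
    try:
        # a real implementation would await the resolver's query here
        return name
    finally:
        resolver.close()  # explicit close prevents the leak described above

resolver = FakeResolver()
result = asyncio.run(resolve_with_cleanup(resolver, "example.com"))
```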
David Rapan
d5b5a328d7
feat: Add opt-in IPv6 for containers (#5879)
Configurable and w/ migrations between IPv4-Only and Dual-Stack

Signed-off-by: David Rapan <david@rapan.cz>
Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-06-12 11:32:24 +02:00
dependabot[bot]
52b24e177f
Bump coverage from 7.8.2 to 7.9.0 (#5944)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.8.2 to 7.9.0.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.8.2...7.9.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-version: 7.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 09:27:56 +02:00
dependabot[bot]
e10c58c424
Bump pytest-cov from 6.1.1 to 6.2.0 (#5943)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.1.1 to 6.2.0.
- [Changelog](https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest-cov/compare/v6.1.1...v6.2.0)

---
updated-dependencies:
- dependency-name: pytest-cov
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-12 09:27:44 +02:00
Mike Degatano
9682870c2c
Fix mypy issues in host and jobs (#5939)
* Fix mypy issues in host

* Fix mypy issues in job module

* Fix mypy issues introduced in previously fixed modules

* Apply suggestions from code review

Co-authored-by: Stefan Agner <stefan@agner.ch>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2025-06-11 12:04:25 -04:00
Stefan Agner
fd0b894d6a
Fix dynamic port pytest (#5940) 2025-06-11 15:10:31 +02:00
dependabot[bot]
697515b81f
Bump aiohttp from 3.12.9 to 3.12.12 (#5937)
---
updated-dependencies:
- dependency-name: aiohttp
  dependency-version: 3.12.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 11:52:14 +02:00
dependabot[bot]
d912c234fa
Bump requests from 2.32.3 to 2.32.4 (#5935)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 10:53:45 +02:00
dependabot[bot]
e8445ae8f2
Bump cryptography from 45.0.3 to 45.0.4 (#5936)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 09:23:40 +02:00
dependabot[bot]
6710439ce5
Bump ruff from 0.11.12 to 0.11.13 (#5932)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 00:41:04 -05:00
dependabot[bot]
95eec03c91
Bump aiohttp from 3.12.6 to 3.12.9 (#5930) 2025-06-05 07:43:55 +01:00
dependabot[bot]
9b686a2d9a
Bump pytest from 8.3.5 to 8.4.0 (#5929)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 10:23:35 +02:00
dependabot[bot]
063d69da90
Bump aiohttp from 3.12.4 to 3.12.6 (#5928)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 12:35:37 -05:00
dependabot[bot]
baaf04981f
Bump awesomeversion from 24.6.0 to 25.5.0 (#5926)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 01:23:53 -05:00
dependabot[bot]
bdb25a7ff8
Bump ruff from 0.11.11 to 0.11.12 (#5927)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-31 01:23:06 -05:00
175 changed files with 3848 additions and 1339 deletions

View File

@ -1,69 +0,0 @@
---
name: Report a bug with the Supervisor on a supported System
about: Report an issue related to the Home Assistant Supervisor.
labels: bug
---
<!-- READ THIS FIRST:
- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/
- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
- Provide as many details as possible. Paste logs, configuration sample and code into the backticks. Do not delete any text from this template!
- If you have a problem with an add-on, make an issue in its repository.
-->
<!--
Important: You can only file a bug report for a supported system! If you run an unsupported installation, this report will be closed without comment.
-->
### Describe the issue
<!-- Provide as many details as possible. -->
### Steps to reproduce
<!-- What do you do to encounter the issue? -->
1. ...
2. ...
3. ...
### Environment details
<!-- You can find these details in the system tab of the supervisor panel, or by using the `ha` CLI. -->
- **Operating System**: xxx
- **Supervisor version**: xxx
- **Home Assistant version**: xxx
### Supervisor logs
<details>
<summary>Supervisor logs</summary>
<!--
- Frontend -> Supervisor -> System
- Or use this command: ha supervisor logs
- Logs are more than just errors; even if you don't think they're important, they are.
-->
```
Paste supervisor logs here
```
</details>
### System Information
<details>
<summary>System Information</summary>
<!--
- Use this command: ha info
-->
```
Paste system info here
```
</details>

View File

@ -1,6 +1,5 @@
name: Bug Report Form
name: Report an issue with Home Assistant Supervisor
description: Report an issue related to the Home Assistant Supervisor.
labels: bug
body:
- type: markdown
attributes:
@ -9,7 +8,7 @@ body:
If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].
[fr]: https://community.home-assistant.io/c/feature-requests
[fr]: https://github.com/orgs/home-assistant/discussions
- type: textarea
validations:
required: true
@ -76,7 +75,7 @@ body:
description: >
The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
Click the copy button at the bottom of the pop-up and paste it here.
[![Open your Home Assistant instance and show health information about your system.](https://my.home-assistant.io/badges/system_health.svg)](https://my.home-assistant.io/redirect/system_health/)
- type: textarea
attributes:
@ -86,7 +85,7 @@ body:
Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
and select 'Download diagnostics'.
**Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
- type: textarea
attributes:

View File

@ -13,7 +13,7 @@ contact_links:
about: Our documentation has its own issue tracker. Please report issues with the website there.
- name: Request a feature for the Supervisor
url: https://community.home-assistant.io/c/feature-requests
url: https://github.com/orgs/home-assistant/discussions
about: Request a new feature for the Supervisor.
- name: I have a question or need support

53
.github/ISSUE_TEMPLATE/task.yml vendored Normal file
View File

@ -0,0 +1,53 @@
name: Task
description: For staff only - Create a task
type: Task
body:
- type: markdown
attributes:
value: |
## ⚠️ RESTRICTED ACCESS
**This form is restricted to Open Home Foundation staff and authorized contributors only.**
If you are a community member wanting to contribute, please:
- For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
- For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)
---
### For authorized contributors
Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
- type: textarea
id: description
attributes:
label: Description
description: |
Provide a clear and detailed description of the task that needs to be accomplished.
Be specific about what needs to be done, why it's important, and any constraints or requirements.
placeholder: |
Describe the task, including:
- What needs to be done
- Why this task is needed
- Expected outcome
- Any constraints or requirements
validations:
required: true
- type: textarea
id: additional_context
attributes:
label: Additional context
description: |
Any additional information, links, research, or context that would be helpful.
Include links to related issues, research, prototypes, roadmap opportunities etc.
placeholder: |
- Roadmap opportunity: [link]
- Epic: [link]
- Feature request: [link]
- Technical design documents: [link]
- Prototype/mockup: [link]
- Dependencies: [links]
validations:
required: false

288
.github/copilot-instructions.md vendored Normal file
View File

@ -0,0 +1,288 @@
# GitHub Copilot & Claude Code Instructions
This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.
## Supervisor Capabilities & Features
### Architecture Overview
Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.
**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
connectivity
- **Storage & Backup**: Data persistence and backup management across all containers
**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System
**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user-configurable options.
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.
**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development
**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.
**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System
**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.
**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.
**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
### Backup & Recovery System
**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations
**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues
---
## Supervisor Development
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
### Code Quality Standards
- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization
**Core Structure**:
```
supervisor/
├── __init__.py # Package initialization
├── const.py # Constants and enums
├── coresys.py # Core system management
├── bootstrap.py # System initialization
├── exceptions.py # Custom exception classes
├── api/ # REST API endpoints
├── addons/ # Add-on management
├── backups/ # Backup system
├── docker/ # Docker integration
├── host/ # Host system interface
├── homeassistant/ # Home Assistant Core management
├── dbus/ # D-Bus system integration
├── hardware/ # Hardware detection and management
├── plugins/ # Plugin system
├── resolution/ # Issue detection and resolution
├── security/ # Security management
├── services/ # Service discovery and management
├── store/ # Add-on store management
└── utils/ # Utility functions
```
**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
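A minimal sketch of what "group related constants together" can look like in practice; these names are illustrative examples only, not actual Supervisor constants:

```python
# Illustrative sketch of grouping related constants following the
# const.py convention described above; ATTR_* names and ExampleState
# are hypothetical, not actual Supervisor constants.
from enum import Enum

# Related string constants grouped together
ATTR_VERSION = "version"
ATTR_ARCH = "arch"


class ExampleState(str, Enum):
    """Example enum grouping related state values."""

    STARTED = "started"
    STOPPED = "stopped"


print(ExampleState.STARTED.value)  # → started
```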
### Supervisor Architecture Patterns
**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.
```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
"""Manage my functionality."""
def __init__(self, coresys: CoreSys):
"""Initialize manager."""
self.coresys: CoreSys = coresys
self._component: MyComponent = MyComponent(coresys)
@property
def component(self) -> MyComponent:
"""Return component handler."""
return self._component
# Access system components via inherited properties
async def do_something(self):
await self.sys_docker.containers.get("my_container")
self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```
**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface
**Load Pattern**: Many components implement a `load()` method that initializes
the component from external sources (containers, files, D-Bus services).
### API Development
**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
`{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`
**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.
```python
from ..api.utils import api_process, api_validate
@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
"""Create full backup."""
body = await api_validate(SCHEMA_BACKUP_FULL, request)
job = await self.sys_backups.do_backup_full(**body)
return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration
- **Container Management**: Use Supervisor's Docker manager instead of direct
Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
isolation
- **Health Checks**: Implement health monitoring for all managed containers
### D-Bus Integration
- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions
### Async Programming
- **All I/O operations must be async**: File operations, network calls, subprocess
execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
initialization
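The `asyncio.gather()` preference above can be sketched as follows; `fetch()` is a stand-in for a real I/O coroutine, not a Supervisor function:

```python
# Illustrative only: shows the asyncio.gather() pattern recommended
# above. fetch() is a hypothetical stand-in for real I/O.
import asyncio


async def fetch(value: int) -> int:
    await asyncio.sleep(0.01)  # placeholder for network or disk I/O
    return value * 2


async def main() -> list[int]:
    # Concurrent: all three coroutines run at once, so total wall time
    # is roughly one sleep, not three sequential sleeps.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))


print(asyncio.run(main()))  # → [2, 4, 6]
```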
### Testing
- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
### Error Handling
- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
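The `from` clause convention above can be sketched like this; `SupervisorSpecificError` is a made-up name standing in for the real exceptions defined in `exceptions.py`:

```python
# Minimal sketch of the exception-chaining convention described above.
# SupervisorSpecificError is hypothetical; real custom exceptions live
# in supervisor/exceptions.py.
class SupervisorSpecificError(Exception):
    """Example custom exception."""


def parse_config(raw: str) -> int:
    try:
        return int(raw)
    except ValueError as err:
        # `from err` preserves the original traceback as __cause__,
        # so the low-level failure stays visible when debugging.
        raise SupervisorSpecificError(f"Invalid config value: {raw}") from err


print(parse_config("42"))  # → 42
```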
### Security Considerations
- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
input validation
### Development Commands
```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/
# Linting and formatting
ruff check supervisor/
ruff format supervisor/
# Type checking
mypy --ignore-missing-imports supervisor/
# Pre-commit hooks
pre-commit run --all-files
```
Always run the pre-commit hooks at the end of code editing.
### Common Patterns to Follow
**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.

View File

@ -131,9 +131,9 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@v3.8.2
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: "v2.4.0"
cosign-release: "v2.4.3"
- name: Install dirhash and calc hash
if: needs.init.outputs.publish == 'true'

View File

@ -10,6 +10,7 @@ on:
env:
DEFAULT_PYTHON: "3.13"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
@ -286,6 +287,52 @@ jobs:
. venv/bin/activate
pylint supervisor tests
mypy:
name: Check mypy
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Generate partial mypy restore key
id: generate-mypy-key
run: |
mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
echo "version=$mypy_version" >> $GITHUB_OUTPUT
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@v4.2.3
with:
path: .mypy_cache
key: >-
${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
restore-keys: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
- name: Register mypy problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/mypy.json"
- name: Run mypy
run: |
. venv/bin/activate
mypy --ignore-missing-imports supervisor
pytest:
runs-on: ubuntu-latest
needs: prepare
@ -299,9 +346,9 @@ jobs:
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@v3.8.2
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: "v2.4.0"
cosign-release: "v2.4.3"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3

16
.github/workflows/matchers/mypy.json vendored Normal file
View File

@ -0,0 +1,16 @@
{
"problemMatcher": [
{
"owner": "mypy",
"pattern": [
{
"regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
}
]
}
]
}
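The matcher's `regexp` can be sanity-checked against a typical mypy output line; the file path and message below are made up for illustration:

```python
# Sanity check of the problem-matcher regexp above against a typical
# mypy output line (the path and message are invented examples).
import re

PATTERN = re.compile(r"^(.+):(\d+):\s(error|warning):\s(.+)$")

line = 'supervisor/api/auth.py:42: error: Argument 1 has incompatible type "bytes"'
match = PATTERN.match(line)
assert match is not None
# Groups map to file=1, line=2, severity=3, message=4 in the matcher
print(match.group(1), match.group(2), match.group(3))
```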

View File

@ -0,0 +1,58 @@
name: Restrict task creation
# yamllint disable-line rule:truthy
on:
issues:
types: [opened]
jobs:
check-authorization:
runs-on: ubuntu-latest
# Only run if this is a Task issue type (from the issue form)
if: github.event.issue.issue_type == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@v7
with:
script: |
const issueAuthor = context.payload.issue.user.login;
// Check if user is an organization member
try {
await github.rest.orgs.checkMembershipForUser({
org: 'home-assistant',
username: issueAuthor
});
console.log(`✅ ${issueAuthor} is an organization member`);
return; // Authorized
} catch (error) {
console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
}
// Close the issue with a comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
`Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
`If you would like to:\n` +
`- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
`- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
`If you believe you should have access to create Task issues, please contact the maintainers.`
});
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
state: 'closed'
});
// Add a label to indicate this was auto-closed
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['auto-closed']
});

View File

@ -12,7 +12,7 @@ jobs:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Sentry Release
uses: getsentry/action-release@v3.1.1
uses: getsentry/action-release@v3.2.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@ -13,3 +13,15 @@ repos:
- id: check-executables-have-shebangs
stages: [manual]
- id: check-json
- repo: local
hooks:
# Run mypy through our wrapper script in order to get the possible
# pyenv and/or virtualenv activated; it may not have been e.g. if
# committing from a GUI tool that was not launched from an activated
# shell.
- id: mypy
name: mypy
entry: script/run-in-env.sh mypy --ignore-missing-imports
language: script
types_or: [python, pyi]
files: ^supervisor/.+\.(py|pyi)$

1
CLAUDE.md Symbolic link
View File

@ -0,0 +1 @@
.github/copilot-instructions.md

View File

@ -12,7 +12,7 @@ cosign:
base_identity: https://github.com/home-assistant/docker-base/.*
identity: https://github.com/home-assistant/supervisor/.*
args:
COSIGN_VERSION: 2.4.0
COSIGN_VERSION: 2.4.3
labels:
io.hass.type: supervisor
org.opencontainers.image.title: Home Assistant Supervisor

View File

@ -1,30 +1,30 @@
aiodns==3.4.0
aiohttp==3.12.4
aiodns==3.5.0
aiohttp==3.12.14
atomicwrites-homeassistant==1.4.1
attrs==25.3.0
awesomeversion==24.6.0
blockbuster==1.5.24
awesomeversion==25.5.0
blockbuster==1.5.25
brotli==1.1.0
ciso8601==2.3.2
colorlog==6.9.0
cpe==1.3.1
cryptography==45.0.3
debugpy==1.8.14
cryptography==45.0.5
debugpy==1.8.15
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.44
gitpython==3.1.45
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.10.18
orjson==3.11.1
pulsectl==24.12.0
pyudev==0.24.3
PyYAML==6.0.2
requests==2.32.3
requests==2.32.4
securetar==2025.2.1
sentry-sdk==2.29.1
sentry-sdk==2.33.2
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.44.1
dbus-fast==2.44.2
zlib-fast==0.2.1

View File

@ -1,12 +1,16 @@
astroid==3.3.10
coverage==7.8.2
astroid==3.3.11
coverage==7.10.1
mypy==1.17.0
pre-commit==4.2.0
pylint==3.3.7
pytest-aiohttp==1.1.0
pytest-asyncio==0.25.2
pytest-cov==6.1.1
pytest-cov==6.2.1
pytest-timeout==2.4.0
pytest==8.3.5
ruff==0.11.11
pytest==8.4.1
ruff==0.12.5
time-machine==2.16.0
urllib3==2.4.0
types-docker==7.1.0.20250705
types-pyyaml==6.0.12.20250516
types-requests==2.32.4.20250611
urllib3==2.5.0

30
script/run-in-env.sh Executable file
View File

@ -0,0 +1,30 @@
#!/usr/bin/env sh
set -eu
# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"
# Activate pyenv and virtualenv if present, then run the specified command
# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
PYENV_VERSION=$(head -n 1 .python-version)
export PYENV_VERSION
fi
if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
. "${VIRTUAL_ENV}/bin/activate"
else
# other common virtualenvs
my_path=$(git rev-parse --show-toplevel)
for venv in venv .venv .; do
if [ -f "${my_path}/${venv}/bin/activate" ]; then
. "${my_path}/${venv}/bin/activate"
break
fi
done
fi
exec "$@"

View File

@ -360,7 +360,7 @@ class Addon(AddonModel):
@property
def auto_update(self) -> bool:
"""Return if auto update is enable."""
return self.persist.get(ATTR_AUTO_UPDATE, super().auto_update)
return self.persist.get(ATTR_AUTO_UPDATE, False)
@auto_update.setter
def auto_update(self, value: bool) -> None:

View File

@ -15,6 +15,7 @@ from ..const import (
ATTR_SQUASH,
FILE_SUFFIX_CONFIGURATION,
META_ADDON,
SOCKET_DOCKER,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.interface import MAP_ARCH
@ -121,39 +122,64 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
except HassioArchNotFound:
return False
def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
"""Create a dict with Docker build arguments.
def get_docker_args(
self, version: AwesomeVersion, image_tag: str
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
Must be run in executor.
"""
args: dict[str, Any] = {
"path": str(self.addon.path_location),
"tag": f"{image or self.addon.image}:{version!s}",
"dockerfile": str(self.get_dockerfile()),
"pull": True,
"forcerm": not self.sys_dev,
"squash": self.squash,
"platform": MAP_ARCH[self.arch],
"labels": {
"io.hass.version": version,
"io.hass.arch": self.arch,
"io.hass.type": META_ADDON,
"io.hass.name": self._fix_label("name"),
"io.hass.description": self._fix_label("description"),
**self.additional_labels,
},
"buildargs": {
"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default,
**self.additional_args,
},
build_cmd = [
"docker",
"buildx",
"build",
".",
"--tag",
image_tag,
"--file",
str(dockerfile_path),
"--platform",
MAP_ARCH[self.arch],
"--pull",
]
labels = {
"io.hass.version": version,
"io.hass.arch": self.arch,
"io.hass.type": META_ADDON,
"io.hass.name": self._fix_label("name"),
"io.hass.description": self._fix_label("description"),
**self.additional_labels,
}
if self.addon.url:
args["labels"]["io.hass.url"] = self.addon.url
labels["io.hass.url"] = self.addon.url
return args
for key, value in labels.items():
build_cmd.extend(["--label", f"{key}={value}"])
build_args = {
"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default,
**self.additional_args,
}
for key, value in build_args.items():
build_cmd.extend(["--build-arg", f"{key}={value}"])
# The addon path will be mounted from the host system
addon_extern_path = self.sys_config.local_to_extern_path(
self.addon.path_location
)
return {
"command": build_cmd,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"working_dir": "/addon",
}
def _fix_label(self, label_name: str) -> str:
"""Remove characters they are not supported."""

View File

@ -67,6 +67,10 @@ class AddonManager(CoreSysAttributes):
return self.store.get(addon_slug)
return None
def get_local_only(self, addon_slug: str) -> Addon | None:
"""Return an installed add-on from slug."""
return self.local.get(addon_slug)
def from_token(self, token: str) -> Addon | None:
"""Return an add-on from Supervisor token."""
for addon in self.installed:
@ -262,7 +266,7 @@ class AddonManager(CoreSysAttributes):
],
on_condition=AddonsJobError,
)
async def rebuild(self, slug: str) -> asyncio.Task | None:
async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
"""Perform a rebuild of local build add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)
@ -285,7 +289,7 @@ class AddonManager(CoreSysAttributes):
raise AddonsError(
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not addon.need_build:
if not force and not addon.need_build:
raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)

View File

@ -664,12 +664,16 @@ class AddonModel(JobGroup, ABC):
"""Validate if addon is available for current system."""
return self._validate_availability(self.data, logger=_LOGGER.error)
def __eq__(self, other):
"""Compaired add-on objects."""
def __eq__(self, other: Any) -> bool:
"""Compare add-on objects."""
if not isinstance(other, AddonModel):
return False
return self.slug == other.slug
def __hash__(self) -> int:
"""Hash for add-on objects."""
return hash(self.slug)
def _validate_availability(
self, config, *, logger: Callable[..., None] | None = None
) -> None:

View File

@ -8,7 +8,7 @@ from typing import Any
from aiohttp import hdrs, web
from ..const import AddonState
from ..const import SUPERVISOR_DOCKER_NAME, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIAddonNotInstalled, HostNotSupportedError
from ..utils.sentry import async_capture_exception
@ -426,7 +426,7 @@ class RestAPI(CoreSysAttributes):
async def get_supervisor_logs(*args, **kwargs):
try:
return await self._api_host.advanced_logs_handler(
*args, identifier="hassio_supervisor", **kwargs
*args, identifier=SUPERVISOR_DOCKER_NAME, **kwargs
)
except Exception as err: # pylint: disable=broad-exception-caught
# Supervisor logs are critical, so catch everything, log the exception
@ -789,6 +789,7 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/docker/info", api_docker.info),
web.post("/docker/options", api_docker.options),
web.get("/docker/registries", api_docker.registries),
web.post("/docker/registries", api_docker.create_registry),
web.delete("/docker/registries/{hostname}", api_docker.remove_registry),

View File

@ -36,6 +36,7 @@ from ..const import (
ATTR_DNS,
ATTR_DOCKER_API,
ATTR_DOCUMENTATION,
ATTR_FORCE,
ATTR_FULL_ACCESS,
ATTR_GPIO,
ATTR_HASSIO_API,
@ -139,6 +140,8 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
SCHEMA_UNINSTALL = vol.Schema(
{vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
)
SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
# pylint: enable=no-value-for-parameter
@ -461,7 +464,11 @@ class APIAddons(CoreSysAttributes):
async def rebuild(self, request: web.Request) -> None:
"""Rebuild local build add-on."""
addon = self.get_addon_for_request(request)
if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
if start_task := await asyncio.shield(
self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
):
await start_task
@api_process


@ -3,11 +3,13 @@
import asyncio
from collections.abc import Awaitable
import logging
from typing import Any
from typing import Any, cast
from aiohttp import BasicAuth, web
from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
from aiohttp.web import FileField
from aiohttp.web_exceptions import HTTPUnauthorized
from multidict import MultiDictProxy
import voluptuous as vol
from ..addons.addon import Addon
@ -51,7 +53,10 @@ class APIAuth(CoreSysAttributes):
return self.sys_auth.check_login(addon, auth.login, auth.password)
def _process_dict(
self, request: web.Request, addon: Addon, data: dict[str, str]
self,
request: web.Request,
addon: Addon,
data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
) -> Awaitable[bool]:
"""Process login with dict data.
@ -60,7 +65,15 @@ class APIAuth(CoreSysAttributes):
username = data.get("username") or data.get("user")
password = data.get("password")
return self.sys_auth.check_login(addon, username, password)
# Test that we did receive strings and not something else, raise if so
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)
)
@api_process
async def auth(self, request: web.Request) -> bool:
@ -79,13 +92,18 @@ class APIAuth(CoreSysAttributes):
# Json
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
data = await request.json(loads=json_loads)
return await self._process_dict(request, addon, data)
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
# URL encoded
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
data = await request.post()
return await self._process_dict(request, addon, data)
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
# Advertise Basic authentication by default
raise HTTPUnauthorized(headers=REALM_HEADER)
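Aside: the `.encode` duck-typing check introduced in `_process_dict` above can be reproduced standalone. Names here are hypothetical; only the pattern matches the diff.

```python
# Hypothetical standalone sketch of the duck-typing check: anything without
# a str-like .encode attribute (None, lists, ints) is rejected before the
# values are treated as credentials.
def coerce_credentials(username, password):
    try:
        # Accessing .encode raises AttributeError for non-string values
        _ = username.encode and password.encode
    except AttributeError:
        raise PermissionError("credentials must be strings") from None
    return username, password

assert coerce_credentials("alice", "s3cret") == ("alice", "s3cret")
```

This mirrors why the handler raises `HTTPUnauthorized` instead of crashing later: form posts can deliver `FileField` or list values where strings were expected.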
@api_process


@ -87,4 +87,4 @@ class DetectBlockingIO(StrEnum):
OFF = "off"
ON = "on"
ON_AT_STARTUP = "on_at_startup"
ON_AT_STARTUP = "on-at-startup"


@ -1,7 +1,7 @@
"""Init file for Supervisor network RESTful API."""
import logging
from typing import Any, cast
from typing import Any
from aiohttp import web
import voluptuous as vol
@ -56,8 +56,8 @@ class APIDiscovery(CoreSysAttributes):
}
for message in self.sys_discovery.list_messages
if (
discovered := cast(
Addon, self.sys_addons.get(message.addon, local_only=True)
discovered := self.sys_addons.get_local_only(
message.addon,
)
)
and discovered.state == AddonState.STARTED


@ -6,7 +6,10 @@ from typing import Any
from aiohttp import web
import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from ..const import (
ATTR_ENABLE_IPV6,
ATTR_HOSTNAME,
ATTR_LOGGING,
ATTR_PASSWORD,
@ -30,10 +33,48 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
}
)
# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})
class APIDocker(CoreSysAttributes):
"""Handle RESTful API for Docker configuration."""
@api_process
async def info(self, request: web.Request):
"""Get docker info."""
data_registries = {}
for hostname, registry in self.sys_docker.config.registries.items():
data_registries[hostname] = {
ATTR_USERNAME: registry[ATTR_USERNAME],
}
return {
ATTR_VERSION: self.sys_docker.info.version,
ATTR_ENABLE_IPV6: self.sys_docker.config.enable_ipv6,
ATTR_STORAGE: self.sys_docker.info.storage,
ATTR_LOGGING: self.sys_docker.info.logging,
ATTR_REGISTRIES: data_registries,
}
@api_process
async def options(self, request: web.Request) -> None:
"""Set docker options."""
body = await api_validate(SCHEMA_OPTIONS, request)
if (
ATTR_ENABLE_IPV6 in body
and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
):
self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
_LOGGER.info("Host system reboot required to apply new IPv6 configuration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)
await self.sys_docker.config.save_data()
@api_process
async def registries(self, request) -> dict[str, Any]:
"""Return the list of registries."""
@ -64,18 +105,3 @@ class APIDocker(CoreSysAttributes):
del self.sys_docker.config.registries[hostname]
await self.sys_docker.config.save_data()
@api_process
async def info(self, request: web.Request):
"""Get docker info."""
data_registries = {}
for hostname, registry in self.sys_docker.config.registries.items():
data_registries[hostname] = {
ATTR_USERNAME: registry[ATTR_USERNAME],
}
return {
ATTR_VERSION: self.sys_docker.info.version,
ATTR_STORAGE: self.sys_docker.info.storage,
ATTR_LOGGING: self.sys_docker.info.logging,
ATTR_REGISTRIES: data_registries,
}
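Aside: the new `options` handler only flags a reboot when the stored value actually changes. A minimal sketch of that change-detection pattern (all names hypothetical, plain dicts standing in for the config and resolution objects):

```python
# Hypothetical reduced model of the /docker/options handler: persist the
# new enable_ipv6 value and record a reboot-required issue only when the
# submitted value differs from the stored one.
def apply_ipv6_option(config: dict, body: dict, issues: list) -> None:
    if "enable_ipv6" in body and config.get("enable_ipv6") != body["enable_ipv6"]:
        config["enable_ipv6"] = body["enable_ipv6"]
        issues.append("reboot_required")

cfg, issues = {"enable_ipv6": None}, []
apply_ipv6_option(cfg, {"enable_ipv6": True}, issues)
assert cfg["enable_ipv6"] is True and issues == ["reboot_required"]
apply_ipv6_option(cfg, {"enable_ipv6": True}, issues)
assert issues == ["reboot_required"]  # unchanged value adds no new issue
```

Making `enable_ipv6` nullable (see the commit message) is what lets `None` mean "not set" here, distinct from an explicit `False`.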


@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):
def _init_header(
request: web.Request, addon: Addon, session_data: IngressSessionData | None
) -> CIMultiDict | dict[str, str]:
) -> CIMultiDict[str]:
"""Create initial header."""
headers = {}
headers = CIMultiDict[str]()
if session_data is not None:
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@ -337,7 +337,7 @@ def _init_header(
istr(HEADER_REMOTE_USER_DISPLAY_NAME),
):
continue
headers[name] = value
headers.add(name, value)
# Update X-Forwarded-For
if request.transport:
@ -348,9 +348,9 @@ def _init_header(
return headers
def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
"""Create response header."""
headers = {}
headers = CIMultiDict[str]()
for name, value in response.headers.items():
if name in (
@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
hdrs.CONTENT_ENCODING,
):
continue
headers[name] = value
headers.add(name, value)
return headers
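Aside: the switch from a plain dict to `CIMultiDict` with `.add()` matters because HTTP headers are case-insensitive and may legitimately repeat (e.g. multiple `Set-Cookie` lines). A dict assignment silently overwrites; `CIMultiDict` (from the `multidict` package) keeps every value. A pure-stdlib sketch of the behaviour difference:

```python
# Minimal emulation of case-insensitive, duplicate-preserving header
# storage; CIMultiDict provides this for real in the diff above.
class HeaderBag:
    def __init__(self):
        self._items: list[tuple[str, str]] = []

    def add(self, name: str, value: str) -> None:
        self._items.append((name, value))

    def getall(self, name: str) -> list[str]:
        return [v for n, v in self._items if n.lower() == name.lower()]

bag = HeaderBag()
bag.add("Set-Cookie", "a=1")
bag.add("set-cookie", "b=2")  # a dict assignment would overwrite this key
assert bag.getall("SET-COOKIE") == ["a=1", "b=2"]
```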


@ -40,8 +40,8 @@ from ..const import (
ATTR_TYPE,
ATTR_VLAN,
ATTR_WIFI,
DOCKER_IPV4_NETWORK_MASK,
DOCKER_NETWORK,
DOCKER_NETWORK_MASK,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, APINotFound, HostNetworkNotFound
@ -203,7 +203,7 @@ class APINetwork(CoreSysAttributes):
],
ATTR_DOCKER: {
ATTR_INTERFACE: DOCKER_NETWORK,
ATTR_ADDRESS: str(DOCKER_NETWORK_MASK),
ATTR_ADDRESS: str(DOCKER_IPV4_NETWORK_MASK),
ATTR_GATEWAY: str(self.sys_docker.network.gateway),
ATTR_DNS: str(self.sys_docker.network.dns),
},


@ -17,6 +17,7 @@ from ..const import (
ATTR_ICON,
ATTR_LOGGING,
ATTR_MACHINE,
ATTR_MACHINE_ID,
ATTR_NAME,
ATTR_OPERATING_SYSTEM,
ATTR_STATE,
@ -48,6 +49,7 @@ class APIRoot(CoreSysAttributes):
ATTR_OPERATING_SYSTEM: self.sys_host.info.operating_system,
ATTR_FEATURES: self.sys_host.features,
ATTR_MACHINE: self.sys_machine,
ATTR_MACHINE_ID: self.sys_machine_id,
ATTR_ARCH: self.sys_arch.default,
ATTR_STATE: self.sys_core.state,
ATTR_SUPPORTED_ARCH: self.sys_arch.supported,


@ -126,9 +126,7 @@ class APIStore(CoreSysAttributes):
"""Generate addon information."""
installed = (
cast(Addon, self.sys_addons.get(addon.slug, local_only=True))
if addon.is_installed
else None
self.sys_addons.get_local_only(addon.slug) if addon.is_installed else None
)
data = {


@ -40,7 +40,7 @@ class CpuArch(CoreSysAttributes):
@property
def supervisor(self) -> str:
"""Return supervisor arch."""
return self.sys_supervisor.arch
return self.sys_supervisor.arch or self._default_arch
@property
def supported(self) -> list[str]:
@ -91,4 +91,14 @@ class CpuArch(CoreSysAttributes):
for check, value in MAP_CPU.items():
if cpu.startswith(check):
return value
return self.sys_supervisor.arch
if self.sys_supervisor.arch:
_LOGGER.warning(
"Unknown CPU architecture %s, falling back to Supervisor architecture.",
cpu,
)
return self.sys_supervisor.arch
_LOGGER.warning(
"Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
cpu,
)
return cpu
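Aside: the fallback chain above (prefix map, then Supervisor arch, then the raw CPU string) can be reduced to a few lines. The `MAP_CPU` entries below are illustrative, not the real table:

```python
# Hypothetical reduced version of the CPU-architecture fallback: try a
# prefix map first, then the Supervisor's own architecture, and as a last
# resort assume the raw CPU string already names the architecture.
MAP_CPU = {"armv7": "armv7", "armv6": "armhf", "aarch64": "aarch64", "x86_64": "amd64"}

def detect_arch(cpu, supervisor_arch):
    for prefix, value in MAP_CPU.items():
        if cpu.startswith(prefix):
            return value
    return supervisor_arch or cpu

assert detect_arch("x86_64", None) == "amd64"
assert detect_arch("riscv64", "amd64") == "amd64"  # unknown CPU, use Supervisor arch
assert detect_arch("riscv64", None) == "riscv64"   # assume cpu string is the arch
```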


@ -3,10 +3,10 @@
import asyncio
import hashlib
import logging
from typing import Any
from typing import Any, TypedDict, cast
from .addons.addon import Addon
from .const import ATTR_ADDON, ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthError,
@ -21,6 +21,17 @@ from .validate import SCHEMA_AUTH_CONFIG
_LOGGER: logging.Logger = logging.getLogger(__name__)
class BackendAuthRequest(TypedDict):
"""Model for a backend auth request.
https://github.com/home-assistant/core/blob/ed9503324d9d255e6fb077f1614fb6d55800f389/homeassistant/components/hassio/auth.py#L66-L73
"""
username: str
password: str
addon: str
class Auth(FileConfiguration, CoreSysAttributes):
"""Manage SSO for Add-ons with Home Assistant user."""
@ -74,6 +85,9 @@ class Auth(FileConfiguration, CoreSysAttributes):
"""Check username login."""
if password is None:
raise AuthError("None as password is not supported!", _LOGGER.error)
if username is None:
raise AuthError("None as username is not supported!", _LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
# Get from cache
@ -103,11 +117,12 @@ class Auth(FileConfiguration, CoreSysAttributes):
async with self.sys_homeassistant.api.make_request(
"post",
"api/hassio_auth",
json={
ATTR_USERNAME: username,
ATTR_PASSWORD: password,
ATTR_ADDON: addon.slug,
},
json=cast(
dict[str, Any],
BackendAuthRequest(
username=username, password=password, addon=addon.slug
),
),
) as req:
if req.status == 200:
_LOGGER.info("Successful login for '%s'", username)
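Aside: the `BackendAuthRequest` TypedDict plus `cast` pattern above documents the exact payload shape while still satisfying an API typed as `dict[str, Any]`. A reduced, self-contained model (names hypothetical):

```python
from typing import Any, TypedDict, cast

# A TypedDict constructor builds a plain dict, so cast() is purely a
# static-typing device; nothing changes at runtime.
class AuthRequest(TypedDict):
    username: str
    password: str
    addon: str

def make_payload(username: str, password: str, addon: str) -> dict[str, Any]:
    return cast(
        dict[str, Any],
        AuthRequest(username=username, password=password, addon=addon),
    )

payload = make_payload("alice", "pw", "core_ssh")
assert payload == {"username": "alice", "password": "pw", "addon": "core_ssh"}
```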


@ -18,8 +18,6 @@ import time
from typing import Any, Self, cast
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from securetar import AddFileError, SecureTarFile, atomic_contents_add, secure_path
import voluptuous as vol
from voluptuous.humanize import humanize_error
@ -62,9 +60,11 @@ from ..utils.dt import parse_datetime, utcnow
from ..utils.json import json_bytes
from ..utils.sentinel import DEFAULT
from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from .utils import key_to_iv, password_to_key
from .utils import password_to_key
from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -102,7 +102,6 @@ class Backup(JobGroup):
self._tmp: TemporaryDirectory | None = None
self._outer_secure_tarfile: SecureTarFile | None = None
self._key: bytes | None = None
self._aes: Cipher | None = None
self._locations: dict[str | None, BackupLocation] = {
location: BackupLocation(
path=tar_file,
@ -268,7 +267,7 @@ class Backup(JobGroup):
# Compare all fields except ones about protection. Current encryption status does not affect equality
keys = self._data.keys() | other._data.keys()
for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}:
for k in keys - IGNORED_COMPARISON_FIELDS:
if (
k not in self._data
or k not in other._data
@ -348,16 +347,10 @@ class Backup(JobGroup):
self._init_password(password)
else:
self._key = None
self._aes = None
def _init_password(self, password: str) -> None:
"""Set password + init aes cipher."""
"""Create key from password."""
self._key = password_to_key(password)
self._aes = Cipher(
algorithms.AES(self._key),
modes.CBC(key_to_iv(self._key)),
backend=default_backend(),
)
async def validate_backup(self, location: str | None) -> None:
"""Validate backup.
@ -586,13 +579,21 @@ class Backup(JobGroup):
@Job(name="backup_addon_save", cleanup=False)
async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
"""Store an add-on into backup."""
self.sys_jobs.current.reference = addon.slug
self.sys_jobs.current.reference = slug = addon.slug
if not self._outer_secure_tarfile:
raise RuntimeError(
"Cannot backup components without initializing backup tar"
)
tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
# Ensure it is still installed and get current data before proceeding
if not (curr_addon := self.sys_addons.get_local_only(slug)):
_LOGGER.warning(
"Skipping backup of add-on %s because it has been uninstalled",
slug,
)
return None
tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
@ -601,16 +602,16 @@ class Backup(JobGroup):
)
# Take backup
try:
start_task = await addon.backup(addon_file)
start_task = await curr_addon.backup(addon_file)
except AddonsError as err:
raise BackupError(str(err)) from err
# Store to config
self._data[ATTR_ADDONS].append(
{
ATTR_SLUG: addon.slug,
ATTR_NAME: addon.name,
ATTR_VERSION: addon.version,
ATTR_SLUG: slug,
ATTR_NAME: curr_addon.name,
ATTR_VERSION: curr_addon.version,
# Bug - addon_file.size used to give us this information
# It always returns 0 in current securetar. Skipping until fixed
ATTR_SIZE: 0,
@ -930,5 +931,5 @@ class Backup(JobGroup):
Return a coroutine.
"""
return self.sys_store.update_repositories(
self.repositories, add_with_errors=True, replace=replace
set(self.repositories), issue_on_error=True, replace=replace
)

View File

@ -285,7 +285,7 @@ def check_environment() -> None:
_LOGGER.critical("Can't find Docker socket!")
def register_signal_handlers(loop: asyncio.BaseEventLoop, coresys: CoreSys) -> None:
def register_signal_handlers(loop: asyncio.AbstractEventLoop, coresys: CoreSys) -> None:
"""Register SIGTERM, SIGHUP and SIGKILL to stop the Supervisor."""
try:
loop.add_signal_handler(


@ -2,7 +2,7 @@
from __future__ import annotations
from collections.abc import Awaitable, Callable
from collections.abc import Callable, Coroutine
import logging
from typing import Any
@ -19,7 +19,7 @@ class EventListener:
"""Event listener."""
event_type: BusEvent = attr.ib()
callback: Callable[[Any], Awaitable[None]] = attr.ib()
callback: Callable[[Any], Coroutine[Any, Any, None]] = attr.ib()
class Bus(CoreSysAttributes):
@ -31,7 +31,7 @@ class Bus(CoreSysAttributes):
self._listeners: dict[BusEvent, list[EventListener]] = {}
def register_event(
self, event: BusEvent, callback: Callable[[Any], Awaitable[None]]
self, event: BusEvent, callback: Callable[[Any], Coroutine[Any, Any, None]]
) -> EventListener:
"""Register callback for an event."""
listener = EventListener(event, callback)


@ -66,7 +66,7 @@ _UTC = "UTC"
class CoreConfig(FileConfiguration):
"""Hold all core config data."""
def __init__(self):
def __init__(self) -> None:
"""Initialize config object."""
super().__init__(FILE_HASSIO_CONFIG, SCHEMA_SUPERVISOR_CONFIG)
self._timezone_tzinfo: tzinfo | None = None


@ -2,16 +2,20 @@
from dataclasses import dataclass
from enum import StrEnum
from ipaddress import IPv4Network
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import Self
from typing import NotRequired, Self, TypedDict
from aiohttp import __version__ as aiohttpversion
SUPERVISOR_VERSION = "9999.09.9.dev9999"
SERVER_SOFTWARE = f"HomeAssistantSupervisor/{SUPERVISOR_VERSION} aiohttp/{aiohttpversion} Python/{systemversion[0]}.{systemversion[1]}"
DOCKER_PREFIX: str = "hassio"
OBSERVER_DOCKER_NAME: str = f"{DOCKER_PREFIX}_observer"
SUPERVISOR_DOCKER_NAME: str = f"{DOCKER_PREFIX}_supervisor"
URL_HASSIO_ADDONS = "https://github.com/home-assistant/addons"
URL_HASSIO_APPARMOR = "https://version.home-assistant.io/apparmor_{channel}.txt"
URL_HASSIO_VERSION = "https://version.home-assistant.io/{channel}.json"
@ -41,8 +45,10 @@ SYSTEMD_JOURNAL_PERSISTENT = Path("/var/log/journal")
SYSTEMD_JOURNAL_VOLATILE = Path("/run/log/journal")
DOCKER_NETWORK = "hassio"
DOCKER_NETWORK_MASK = IPv4Network("172.30.32.0/23")
DOCKER_NETWORK_RANGE = IPv4Network("172.30.33.0/24")
DOCKER_NETWORK_DRIVER = "bridge"
DOCKER_IPV6_NETWORK_MASK = IPv6Network("fd0c:ac1e:2100::/48")
DOCKER_IPV4_NETWORK_MASK = IPv4Network("172.30.32.0/23")
DOCKER_IPV4_NETWORK_RANGE = IPv4Network("172.30.33.0/24")
# This needs to match the dockerd --cpu-rt-runtime= argument.
DOCKER_CPU_RUNTIME_TOTAL = 950_000
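Aside: the new `DOCKER_IPV6_NETWORK_MASK` sits in the IPv6 unique-local (ULA) range, which the stdlib `ipaddress` module can verify directly:

```python
from ipaddress import IPv4Network, IPv6Network

# Constants as introduced in const.py above.
DOCKER_IPV6_NETWORK_MASK = IPv6Network("fd0c:ac1e:2100::/48")
DOCKER_IPV4_NETWORK_MASK = IPv4Network("172.30.32.0/23")
DOCKER_IPV4_NETWORK_RANGE = IPv4Network("172.30.33.0/24")

# fd00::/8 is the unique-local (ULA) block, so container IPv6 addresses
# are private by construction; the IPv4 range stays inside the /23 mask.
assert DOCKER_IPV6_NETWORK_MASK.subnet_of(IPv6Network("fd00::/8"))
assert DOCKER_IPV6_NETWORK_MASK.is_private
assert DOCKER_IPV4_NETWORK_RANGE.subnet_of(DOCKER_IPV4_NETWORK_MASK)
```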
@ -172,6 +178,7 @@ ATTR_DOCKER_API = "docker_api"
ATTR_DOCUMENTATION = "documentation"
ATTR_DOMAINS = "domains"
ATTR_ENABLE = "enable"
ATTR_ENABLE_IPV6 = "enable_ipv6"
ATTR_ENABLED = "enabled"
ATTR_ENVIRONMENT = "environment"
ATTR_EVENT = "event"
@ -181,6 +188,7 @@ ATTR_FEATURES = "features"
ATTR_FILENAME = "filename"
ATTR_FLAGS = "flags"
ATTR_FOLDERS = "folders"
ATTR_FORCE = "force"
ATTR_FORCE_SECURITY = "force_security"
ATTR_FREQUENCY = "frequency"
ATTR_FULL_ACCESS = "full_access"
@ -239,6 +247,7 @@ ATTR_LOGO = "logo"
ATTR_LONG_DESCRIPTION = "long_description"
ATTR_MAC = "mac"
ATTR_MACHINE = "machine"
ATTR_MACHINE_ID = "machine_id"
ATTR_MAINTAINER = "maintainer"
ATTR_MAP = "map"
ATTR_MEMORY_LIMIT = "memory_limit"
@ -407,10 +416,12 @@ class AddonBoot(StrEnum):
MANUAL = "manual"
@classmethod
def _missing_(cls, value: str) -> Self | None:
def _missing_(cls, value: object) -> Self | None:
"""Convert 'forced' config values to their counterpart."""
if value == AddonBootConfig.MANUAL_ONLY:
return AddonBoot.MANUAL
for member in cls:
if member == AddonBoot.MANUAL:
return member
return None
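Aside: the `_missing_` hook above maps the config-only value `manual_only` onto an existing member. A reduced model of the pattern (using a plain `str`-mixin `Enum` so it also runs on Python versions without `StrEnum`):

```python
from enum import Enum

# Hypothetical reduced model of the AddonBoot._missing_ pattern: a value
# that is not a member's value gets converted to an existing member, and
# anything else falls through to the normal ValueError.
class Boot(str, Enum):
    AUTO = "auto"
    MANUAL = "manual"

    @classmethod
    def _missing_(cls, value: object):
        if value == "manual_only":  # forced config-only value
            return cls.MANUAL
        return None

assert Boot("manual_only") is Boot.MANUAL
assert Boot("auto") is Boot.AUTO
```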
@ -507,6 +518,16 @@ class CpuArch(StrEnum):
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
@ -515,38 +536,42 @@ class IngressSessionDataUser:
display_name: str | None = None
username: str | None = None
def to_dict(self) -> dict[str, str | None]:
def to_dict(self) -> IngressSessionDataUserDict:
"""Get dictionary representation."""
return {
ATTR_ID: self.id,
ATTR_DISPLAYNAME: self.display_name,
ATTR_USERNAME: self.username,
}
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
@classmethod
def from_dict(cls, data: dict[str, str | None]) -> Self:
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data[ATTR_ID],
display_name=data.get(ATTR_DISPLAYNAME),
username=data.get(ATTR_USERNAME),
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
)
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
user: IngressSessionDataUser
def to_dict(self) -> dict[str, dict[str, str | None]]:
def to_dict(self) -> IngressSessionDataDict:
"""Get dictionary representation."""
return {ATTR_USER: self.user.to_dict()}
return IngressSessionDataDict(user=self.user.to_dict())
@classmethod
def from_dict(cls, data: dict[str, dict[str, str | None]]) -> Self:
def from_dict(cls, data: IngressSessionDataDict) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data[ATTR_USER]))
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
STARTING_STATES = [


@ -28,7 +28,7 @@ from .homeassistant.core import LANDINGPAGE
from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from .utils.dt import utcnow
from .utils.sentry import async_capture_exception
from .utils.whoami import WhoamiData, retrieve_whoami
from .utils.whoami import retrieve_whoami
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -36,7 +36,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class Core(CoreSysAttributes):
"""Main object of Supervisor."""
def __init__(self, coresys: CoreSys):
def __init__(self, coresys: CoreSys) -> None:
"""Initialize Supervisor object."""
self.coresys: CoreSys = coresys
self._state: CoreState = CoreState.INITIALIZE
@ -91,7 +91,7 @@ class Core(CoreSysAttributes):
"info", {"state": self._state}
)
async def connect(self):
async def connect(self) -> None:
"""Connect Supervisor container."""
# Load information from container
await self.sys_supervisor.load()
@ -120,7 +120,7 @@ class Core(CoreSysAttributes):
self.sys_config.version = self.sys_supervisor.version
await self.sys_config.save_data()
async def setup(self):
async def setup(self) -> None:
"""Start setting up supervisor orchestration."""
await self.set_state(CoreState.SETUP)
@ -216,7 +216,7 @@ class Core(CoreSysAttributes):
# Evaluate the system
await self.sys_resolution.evaluate.evaluate_system()
async def start(self):
async def start(self) -> None:
"""Start Supervisor orchestration."""
await self.set_state(CoreState.STARTUP)
@ -310,7 +310,7 @@ class Core(CoreSysAttributes):
)
_LOGGER.info("Supervisor is up and running")
async def stop(self):
async def stop(self) -> None:
"""Stop a running orchestration."""
# store new last boot / prevent time adjustments
if self.state in (CoreState.RUNNING, CoreState.SHUTDOWN):
@ -358,7 +358,7 @@ class Core(CoreSysAttributes):
_LOGGER.info("Supervisor is down - %d", self.exit_code)
self.sys_loop.stop()
async def shutdown(self, *, remove_homeassistant_container: bool = False):
async def shutdown(self, *, remove_homeassistant_container: bool = False) -> None:
"""Shutdown all running containers in correct order."""
# don't process scheduler anymore
if self.state == CoreState.RUNNING:
@ -382,19 +382,15 @@ class Core(CoreSysAttributes):
if self.state in (CoreState.STOPPING, CoreState.SHUTDOWN):
await self.sys_plugins.shutdown()
async def _update_last_boot(self):
async def _update_last_boot(self) -> None:
"""Update last boot time."""
self.sys_config.last_boot = await self.sys_hardware.helper.last_boot()
if not (last_boot := await self.sys_hardware.helper.last_boot()):
_LOGGER.error("Could not update last boot information!")
return
self.sys_config.last_boot = last_boot
await self.sys_config.save_data()
async def _retrieve_whoami(self, with_ssl: bool) -> WhoamiData | None:
try:
return await retrieve_whoami(self.sys_websession, with_ssl)
except WhoamiSSLError:
_LOGGER.info("Whoami service SSL error")
return None
async def _adjust_system_datetime(self):
async def _adjust_system_datetime(self) -> None:
"""Adjust system time/date on startup."""
# If no timezone is detected or set
# If we are not connected or time sync
@ -406,11 +402,13 @@ class Core(CoreSysAttributes):
# Get Timezone data
try:
data = await self._retrieve_whoami(True)
try:
data = await retrieve_whoami(self.sys_websession, True)
except WhoamiSSLError:
# SSL Date Issue & possible time drift
_LOGGER.info("Whoami service SSL error")
data = await retrieve_whoami(self.sys_websession, False)
# SSL Date Issue & possible time drift
if not data:
data = await self._retrieve_whoami(False)
except WhoamiError as err:
_LOGGER.warning("Can't adjust Time/Date settings: %s", err)
return
@ -426,7 +424,7 @@ class Core(CoreSysAttributes):
await self.sys_host.control.set_datetime(data.dt_utc)
await self.sys_supervisor.check_connectivity()
async def repair(self):
async def repair(self) -> None:
"""Repair system integrity."""
_LOGGER.info("Starting repair of Supervisor Environment")
await self.sys_run_in_executor(self.sys_docker.repair)


@ -62,17 +62,17 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class CoreSys:
"""Class that handle all shared data."""
def __init__(self):
def __init__(self) -> None:
"""Initialize coresys."""
# Static attributes protected
self._machine_id: str | None = None
self._machine: str | None = None
# External objects
self._loop: asyncio.BaseEventLoop = asyncio.get_running_loop()
self._loop = asyncio.get_running_loop()
# Global objects
self._config: CoreConfig = CoreConfig()
self._config = CoreConfig()
# Internal objects pointers
self._docker: DockerAPI | None = None
@ -122,8 +122,12 @@ class CoreSys:
if self._websession:
await self._websession.close()
resolver: aiohttp.abc.AbstractResolver
try:
resolver = aiohttp.AsyncResolver(loop=self.loop)
# Use "unused" kwargs to force dedicated resolver instance. Otherwise
# aiodns won't reload /etc/resolv.conf which we need to make our connection
# check work in all cases.
resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
# pylint: disable=protected-access
_LOGGER.debug(
"Initializing ClientSession with AsyncResolver. Using nameservers %s",
@ -144,7 +148,7 @@ class CoreSys:
self._websession = session
async def init_machine(self):
async def init_machine(self) -> None:
"""Initialize machine information."""
def _load_machine_id() -> str | None:
@ -188,7 +192,7 @@ class CoreSys:
return UTC
@property
def loop(self) -> asyncio.BaseEventLoop:
def loop(self) -> asyncio.AbstractEventLoop:
"""Return loop object."""
return self._loop
@ -586,7 +590,7 @@ class CoreSys:
return self._machine_id
@machine_id.setter
def machine_id(self, value: str) -> None:
def machine_id(self, value: str | None) -> None:
"""Set a machine-id type string."""
if self._machine_id:
raise RuntimeError("Machine-ID type already set!")
@ -608,8 +612,8 @@ class CoreSys:
self._set_task_context.append(callback)
def run_in_executor(
self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
) -> Coroutine[Any, Any, T]:
self, funct: Callable[..., T], *args, **kwargs
) -> asyncio.Future[T]:
"""Add a job to the executor pool."""
if kwargs:
funct = partial(funct, **kwargs)
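Aside: the return-type change above reflects what the event loop actually hands back: `loop.run_in_executor` returns an `asyncio.Future`, not a coroutine, and it forwards only positional arguments, hence the `functools.partial` binding for kwargs. A runnable sketch:

```python
import asyncio
from functools import partial

# run_in_executor forwards only *args, so keyword arguments must be bound
# with functools.partial before submission; the result is a Future that
# can be inspected or cancelled before awaiting.
def blocking_add(a: int, *, b: int = 0) -> int:
    return a + b

async def main() -> int:
    loop = asyncio.get_running_loop()
    fut = loop.run_in_executor(None, partial(blocking_add, 1, b=2))
    assert isinstance(fut, asyncio.Future)
    return await fut

assert asyncio.run(main()) == 3
```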
@ -630,9 +634,9 @@ class CoreSys:
def call_later(
self,
delay: float,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
funct: Callable[..., Any],
*args,
**kwargs,
) -> asyncio.TimerHandle:
"""Start a task after a delay."""
if kwargs:
@ -643,9 +647,9 @@ class CoreSys:
def call_at(
self,
when: datetime,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
funct: Callable[..., Any],
*args,
**kwargs,
) -> asyncio.TimerHandle:
"""Start a task at the specified datetime."""
if kwargs:
@ -673,7 +677,7 @@ class CoreSysAttributes:
@property
def sys_machine_id(self) -> str | None:
"""Return machine id."""
"""Return machine ID."""
return self.coresys.machine_id
@property
@ -682,7 +686,7 @@ class CoreSysAttributes:
return self.coresys.dev
@property
def sys_loop(self) -> asyncio.BaseEventLoop:
def sys_loop(self) -> asyncio.AbstractEventLoop:
"""Return loop object."""
return self.coresys.loop
@ -832,7 +836,7 @@ class CoreSysAttributes:
def sys_run_in_executor(
self, funct: Callable[..., T], *args, **kwargs
) -> Coroutine[Any, Any, T]:
) -> asyncio.Future[T]:
"""Add a job to the executor pool."""
return self.coresys.run_in_executor(funct, *args, **kwargs)
@ -843,7 +847,7 @@ class CoreSysAttributes:
def sys_call_later(
self,
delay: float,
funct: Callable[..., Coroutine[Any, Any, T]],
funct: Callable[..., Any],
*args,
**kwargs,
) -> asyncio.TimerHandle:
@ -853,7 +857,7 @@ class CoreSysAttributes:
def sys_call_at(
self,
when: datetime,
funct: Callable[..., Coroutine[Any, Any, T]],
funct: Callable[..., Any],
*args,
**kwargs,
) -> asyncio.TimerHandle:


@ -135,6 +135,7 @@ DBUS_ATTR_LAST_ERROR = "LastError"
DBUS_ATTR_LLMNR = "LLMNR"
DBUS_ATTR_LLMNR_HOSTNAME = "LLMNRHostname"
DBUS_ATTR_LOADER_TIMESTAMP_MONOTONIC = "LoaderTimestampMonotonic"
DBUS_ATTR_LOCAL_RTC = "LocalRTC"
DBUS_ATTR_MANAGED = "Managed"
DBUS_ATTR_MODE = "Mode"
DBUS_ATTR_MODEL = "Model"


@ -117,7 +117,7 @@ class DBusInterfaceProxy(DBusInterface, ABC):
"""Initialize object with already connected dbus object."""
await super().initialize(connected_dbus)
if not self.connected_dbus.properties:
if not self.connected_dbus.supports_properties:
self.disconnect()
raise DBusInterfaceError(
f"D-Bus object {self.object_path} is not usable, introspection is missing required properties interface"


@ -8,7 +8,7 @@ from dbus_fast.aio.message_bus import MessageBus
from ..const import SOCKET_DBUS
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusFatalError
from ..exceptions import DBusFatalError, DBusNotConnectedError
from .agent import OSAgent
from .hostname import Hostname
from .interface import DBusInterface
@ -91,6 +91,13 @@ class DBusManager(CoreSysAttributes):
"""Return the message bus."""
return self._bus
@property
def connected_bus(self) -> MessageBus:
"""Return the message bus. Raise if not connected."""
if not self._bus:
raise DBusNotConnectedError()
return self._bus
@property
def all(self) -> list[DBusInterface]:
"""Return all managed dbus interfaces."""


@ -259,7 +259,7 @@ class NetworkManager(DBusInterfaceProxy):
else:
interface.primary = False
interfaces[interface.name] = interface
interfaces[interface.interface_name] = interface
interfaces[interface.hw_address] = interface
# Disconnect removed devices


@ -1,5 +1,6 @@
"""NetworkConnection objects for Network Manager."""
from abc import ABC
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv6Address
@ -29,7 +30,7 @@ class ConnectionProperties:
class WirelessProperties:
"""Wireless Properties object for Network Manager."""
ssid: str | None
ssid: str
assigned_mac: str | None
mode: str | None
powersave: int | None
@ -55,7 +56,7 @@ class EthernetProperties:
class VlanProperties:
"""Vlan properties object for Network Manager."""
id: int | None
id: int
parent: str | None
@ -67,14 +68,20 @@ class IpAddress:
prefix: int
@dataclass(slots=True)
class IpProperties:
@dataclass
class IpProperties(ABC):
"""IP properties object for Network Manager."""
method: str | None
address_data: list[IpAddress] | None
gateway: str | None
dns: list[bytes | int] | None
@dataclass(slots=True)
class Ip4Properties(IpProperties):
"""IPv4 properties object."""
dns: list[int] | None
@dataclass(slots=True)
@ -83,6 +90,7 @@ class Ip6Properties(IpProperties):
addr_gen_mode: int
ip6_privacy: int
dns: list[bytes] | None
@dataclass(slots=True)


@ -96,7 +96,7 @@ class NetworkConnection(DBusInterfaceProxy):
@ipv4.setter
def ipv4(self, ipv4: IpConfiguration | None) -> None:
"""Set ipv4 configuration."""
"""Set IPv4 configuration."""
if self._ipv4 and self._ipv4 is not ipv4:
self._ipv4.shutdown()
@ -109,7 +109,7 @@ class NetworkConnection(DBusInterfaceProxy):
@ipv6.setter
def ipv6(self, ipv6: IpConfiguration | None) -> None:
"""Set ipv6 configuration."""
"""Set IPv6 configuration."""
if self._ipv6 and self._ipv6 is not ipv6:
self._ipv6.shutdown()


@ -49,7 +49,7 @@ class NetworkInterface(DBusInterfaceProxy):
@property
@dbus_property
def name(self) -> str:
def interface_name(self) -> str:
"""Return interface name."""
return self.properties[DBUS_ATTR_DEVICE_INTERFACE]


@ -12,9 +12,9 @@ from ...utils import dbus_connected
from ..configuration import (
ConnectionProperties,
EthernetProperties,
Ip4Properties,
Ip6Properties,
IpAddress,
IpProperties,
MatchProperties,
VlanProperties,
WirelessProperties,
@ -115,7 +115,7 @@ class NetworkSetting(DBusInterface):
self._wireless_security: WirelessSecurityProperties | None = None
self._ethernet: EthernetProperties | None = None
self._vlan: VlanProperties | None = None
self._ipv4: IpProperties | None = None
self._ipv4: Ip4Properties | None = None
self._ipv6: Ip6Properties | None = None
self._match: MatchProperties | None = None
super().__init__()
@ -151,13 +151,13 @@ class NetworkSetting(DBusInterface):
return self._vlan
@property
def ipv4(self) -> IpProperties | None:
"""Return ipv4 properties if any."""
def ipv4(self) -> Ip4Properties | None:
"""Return IPv4 properties if any."""
return self._ipv4
@property
def ipv6(self) -> Ip6Properties | None:
"""Return ipv6 properties if any."""
"""Return IPv6 properties if any."""
return self._ipv6
@property
@@ -271,16 +271,23 @@ class NetworkSetting(DBusInterface):
)
if CONF_ATTR_VLAN in data:
self._vlan = VlanProperties(
id=data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_ID),
parent=data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_PARENT),
)
if CONF_ATTR_VLAN_ID in data[CONF_ATTR_VLAN]:
self._vlan = VlanProperties(
data[CONF_ATTR_VLAN][CONF_ATTR_VLAN_ID],
data[CONF_ATTR_VLAN].get(CONF_ATTR_VLAN_PARENT),
)
else:
self._vlan = None
_LOGGER.warning(
"Network settings for vlan connection %s missing required vlan id, cannot process it",
self.connection.interface_name,
)
if CONF_ATTR_IPV4 in data:
address_data = None
if ips := data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_ADDRESS_DATA):
address_data = [IpAddress(ip["address"], ip["prefix"]) for ip in ips]
self._ipv4 = IpProperties(
self._ipv4 = Ip4Properties(
method=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_METHOD),
address_data=address_data,
gateway=data[CONF_ATTR_IPV4].get(CONF_ATTR_IPV4_GATEWAY),


@@ -222,8 +222,10 @@ def get_connection_from_interface(
}
elif interface.type == "vlan":
parent = cast(VlanConfig, interface.vlan).interface
if parent in network_manager and (
parent_connection := network_manager.get(parent).connection
if (
parent
and parent in network_manager
and (parent_connection := network_manager.get(parent).connection)
):
parent = parent_connection.uuid


@@ -10,6 +10,7 @@ from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..utils.dt import get_time_zone, utc_from_timestamp
from .const import (
DBUS_ATTR_LOCAL_RTC,
DBUS_ATTR_NTP,
DBUS_ATTR_NTPSYNCHRONIZED,
DBUS_ATTR_TIMEUSEC,
@@ -46,6 +47,12 @@ class TimeDate(DBusInterfaceProxy):
"""Return host timezone."""
return self.properties[DBUS_ATTR_TIMEZONE]
@property
@dbus_property
def local_rtc(self) -> bool:
"""Return whether rtc is local time or utc."""
return self.properties[DBUS_ATTR_LOCAL_RTC]
@property
@dbus_property
def ntp(self) -> bool:


@@ -28,6 +28,8 @@ class DeviceSpecificationDataType(TypedDict, total=False):
path: str
label: str
uuid: str
partuuid: str
partlabel: str
@dataclass(slots=True)
@@ -40,6 +42,8 @@ class DeviceSpecification:
path: Path | None = None
label: str | None = None
uuid: str | None = None
partuuid: str | None = None
partlabel: str | None = None
@staticmethod
def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
@@ -48,6 +52,8 @@ class DeviceSpecification:
path=Path(data["path"]) if "path" in data else None,
label=data.get("label"),
uuid=data.get("uuid"),
partuuid=data.get("partuuid"),
partlabel=data.get("partlabel"),
)
def to_dict(self) -> dict[str, Variant]:
@@ -56,6 +62,8 @@ class DeviceSpecification:
"path": Variant("s", self.path.as_posix()) if self.path else None,
"label": _optional_variant("s", self.label),
"uuid": _optional_variant("s", self.uuid),
"partuuid": _optional_variant("s", self.partuuid),
"partlabel": _optional_variant("s", self.partlabel),
}
return {k: v for k, v in data.items() if v}


@@ -12,6 +12,7 @@ from typing import TYPE_CHECKING, cast
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
import docker.errors
from docker.types import Mount
import requests
@@ -43,6 +44,7 @@ from ..jobs.decorator import Job
from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import (
ADDON_BUILDER_IMAGE,
ENV_TIME,
ENV_TOKEN,
ENV_TOKEN_OLD,
@@ -344,7 +346,7 @@ class DockerAddon(DockerInterface):
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.addon.path_extern_data.as_posix(),
target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
read_only=False,
@@ -355,7 +357,7 @@ class DockerAddon(DockerInterface):
if MappingType.CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.CONFIG].path
or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
@@ -368,7 +370,7 @@ class DockerAddon(DockerInterface):
if self.addon.addon_config_used:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.addon.path_extern_config.as_posix(),
target=addon_mapping[MappingType.ADDON_CONFIG].path
or PATH_PUBLIC_CONFIG.as_posix(),
@@ -380,7 +382,7 @@ class DockerAddon(DockerInterface):
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or PATH_HOMEASSISTANT_CONFIG.as_posix(),
@@ -393,7 +395,7 @@ class DockerAddon(DockerInterface):
if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_addon_configs.as_posix(),
target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
or PATH_ALL_ADDON_CONFIGS.as_posix(),
@@ -404,7 +406,7 @@ class DockerAddon(DockerInterface):
if MappingType.SSL in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_ssl.as_posix(),
target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
read_only=addon_mapping[MappingType.SSL].read_only,
@@ -414,7 +416,7 @@ class DockerAddon(DockerInterface):
if MappingType.ADDONS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_addons_local.as_posix(),
target=addon_mapping[MappingType.ADDONS].path
or PATH_LOCAL_ADDONS.as_posix(),
@@ -425,7 +427,7 @@ class DockerAddon(DockerInterface):
if MappingType.BACKUP in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_backup.as_posix(),
target=addon_mapping[MappingType.BACKUP].path
or PATH_BACKUP.as_posix(),
@@ -436,7 +438,7 @@ class DockerAddon(DockerInterface):
if MappingType.SHARE in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_share.as_posix(),
target=addon_mapping[MappingType.SHARE].path
or PATH_SHARE.as_posix(),
@@ -448,7 +450,7 @@ class DockerAddon(DockerInterface):
if MappingType.MEDIA in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_media.as_posix(),
target=addon_mapping[MappingType.MEDIA].path
or PATH_MEDIA.as_posix(),
@@ -466,7 +468,7 @@ class DockerAddon(DockerInterface):
continue
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=gpio_path,
target=gpio_path,
read_only=False,
@@ -477,7 +479,7 @@ class DockerAddon(DockerInterface):
if self.addon.with_devicetree:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source="/sys/firmware/devicetree/base",
target="/device-tree",
read_only=True,
@@ -492,7 +494,7 @@ class DockerAddon(DockerInterface):
if self.addon.with_kernel_modules:
mounts.append(
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source="/lib/modules",
target="/lib/modules",
read_only=True,
@@ -511,19 +513,19 @@ class DockerAddon(DockerInterface):
if self.addon.with_audio:
mounts += [
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.addon.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@@ -534,13 +536,13 @@ class DockerAddon(DockerInterface):
if self.addon.with_journald:
mounts += [
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
read_only=True,
@@ -673,10 +675,41 @@ class DockerAddon(DockerInterface):
_LOGGER.info("Starting build for %s:%s", self.image, version)
def build_image():
return self.sys_docker.images.build(
use_config_proxy=False, **build_env.get_docker_args(version, image)
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
addon_image_tag = f"{image or self.addon.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
with suppress(docker.errors.NotFound):
self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(version, addon_image_tag),
)
logs = result.output.decode("utf-8")
if result.exit_code != 0:
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
addon_image = self.sys_docker.images.get(addon_image_tag)
return addon_image, logs
try:
docker_image, log = await self.sys_run_in_executor(build_image)
@@ -687,15 +720,6 @@ class DockerAddon(DockerInterface):
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
if hasattr(err, "build_log"):
log = "\n".join(
[
x["stream"]
for x in err.build_log # pylint: disable=no-member
if isinstance(x, dict) and "stream" in x
]
)
_LOGGER.error("Build log: \n%s", log)
raise DockerError() from err
_LOGGER.info("Build %s:%s done", self.image, version)


@@ -47,7 +47,7 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_audio.as_posix(),
target=PATH_PRIVATE_DATA.as_posix(),
read_only=False,

View File

@@ -74,24 +74,26 @@ ENV_TOKEN_OLD = "HASSIO_TOKEN"
LABEL_MANAGED = "supervisor_managed"
MOUNT_DBUS = Mount(
type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
)
MOUNT_DEV = Mount(
type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
)
MOUNT_DEV = Mount(type=MountType.BIND, source="/dev", target="/dev", read_only=True)
MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
MOUNT_DOCKER = Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source="/run/docker.sock",
target="/run/docker.sock",
read_only=True,
)
MOUNT_MACHINE_ID = Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=MACHINE_ID.as_posix(),
target=MACHINE_ID.as_posix(),
read_only=True,
)
MOUNT_UDEV = Mount(
type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
)
PATH_PRIVATE_DATA = PurePath("/data")
@@ -105,3 +107,6 @@ PATH_BACKUP = PurePath("/backup")
PATH_SHARE = PurePath("/share")
PATH_MEDIA = PurePath("/media")
PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
# https://hub.docker.com/_/docker
ADDON_BUILDER_IMAGE = "docker.io/library/docker"
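The hunks in this file (and the `DockerAddon` changes above) consistently switch `type=MountType.BIND` to `type=MountType.BIND.value`, presumably so docker-py receives a plain string rather than an enum member. A small illustrative sketch (the `MountType` stand-in here is an assumption modeled on a `str`-mixin enum) shows why the two are not interchangeable everywhere:

```python
from enum import Enum


# Illustrative stand-in for the MountType enum used with docker-py Mounts.
class MountType(str, Enum):
    BIND = "bind"


# Thanks to the str mix-in, the member compares equal to the raw literal...
assert MountType.BIND == "bind"

# ...but str() still renders the member name, not its value, so any code
# path that stringifies the field (logging, str-based serialization) sees
# the enum name instead of the Docker API literal. Using .value is exact.
print(str(MountType.BIND))   # MountType.BIND
print(MountType.BIND.value)  # bind
```

Passing `.value` removes any dependence on how a given docker-py or Python version renders the enum member.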


@@ -48,7 +48,7 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
environment={ENV_TIME: self.sys_timezone},
mounts=[
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_dns.as_posix(),
target="/config",
read_only=False,

View File

@@ -99,7 +99,7 @@ class DockerHomeAssistant(DockerInterface):
MOUNT_UDEV,
# HA config folder
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=PATH_PUBLIC_CONFIG.as_posix(),
read_only=False,
@@ -112,20 +112,20 @@ class DockerHomeAssistant(DockerInterface):
[
# All other folders
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_ssl.as_posix(),
target=PATH_SSL.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_share.as_posix(),
target=PATH_SHARE.as_posix(),
read_only=False,
propagation=PropagationMode.RSLAVE.value,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_media.as_posix(),
target=PATH_MEDIA.as_posix(),
read_only=False,
@@ -133,19 +133,19 @@ class DockerHomeAssistant(DockerInterface):
),
# Configuration audio
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_homeassistant.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@@ -213,24 +213,21 @@ class DockerHomeAssistant(DockerInterface):
privileged=True,
init=True,
entrypoint=[],
detach=True,
stdout=True,
stderr=True,
mounts=[
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config",
read_only=False,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl",
read_only=True,
),
Mount(
type=MountType.BIND,
type=MountType.BIND.value,
source=self.sys_config.path_extern_share.as_posix(),
target="/share",
read_only=False,


@@ -22,6 +22,7 @@ from docker.types.daemon import CancellableStream
import requests
from ..const import (
ATTR_ENABLE_IPV6,
ATTR_REGISTRIES,
DNS_SUFFIX,
DOCKER_NETWORK,
@@ -93,6 +94,16 @@ class DockerConfig(FileConfiguration):
"""Initialize the JSON configuration."""
super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)
@property
def enable_ipv6(self) -> bool | None:
"""Return IPv6 configuration for docker network."""
return self._data.get(ATTR_ENABLE_IPV6, None)
@enable_ipv6.setter
def enable_ipv6(self, value: bool | None) -> None:
"""Set IPv6 configuration for docker network."""
self._data[ATTR_ENABLE_IPV6] = value
@property
def registries(self) -> dict[str, Any]:
"""Return credentials for docker registries."""
@@ -124,9 +135,11 @@ class DockerAPI:
timeout=900,
),
)
self._network = DockerNetwork(self._docker)
self._info = DockerInfo.new(self.docker.info())
await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init(
self.config.enable_ipv6
)
return self
@property
@@ -281,8 +294,8 @@ class DockerAPI:
def run_command(
self,
image: str,
tag: str = "latest",
command: str | None = None,
version: str = "latest",
command: str | list[str] | None = None,
**kwargs: Any,
) -> CommandReturn:
"""Create a temporary container and run command.
@@ -292,12 +305,15 @@ class DockerAPI:
stdout = kwargs.get("stdout", True)
stderr = kwargs.get("stderr", True)
_LOGGER.info("Runing command '%s' on %s", command, image)
image_with_tag = f"{image}:{version}"
_LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
container = None
try:
container = self.docker.containers.run(
f"{image}:{tag}",
image_with_tag,
command=command,
detach=True,
network=self.network.name,
use_config_proxy=False,
**kwargs,
@@ -314,9 +330,9 @@ class DockerAPI:
# cleanup container
if container:
with suppress(docker_errors.DockerException, requests.RequestException):
container.remove(force=True)
container.remove(force=True, v=True)
return CommandReturn(result.get("StatusCode"), output)
return CommandReturn(result["StatusCode"], output)
def repair(self) -> None:
"""Repair local docker overlayfs2 issues."""
@@ -429,7 +445,7 @@ class DockerAPI:
if remove_container:
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True)
docker_container.remove(force=True, v=True)
def start_container(self, name: str) -> None:
"""Start Docker container."""


@@ -1,17 +1,54 @@
"""Internal network manager for Supervisor."""
import asyncio
from contextlib import suppress
from ipaddress import IPv4Address
import logging
from typing import Self
import docker
import requests
from ..const import DOCKER_NETWORK, DOCKER_NETWORK_MASK, DOCKER_NETWORK_RANGE
from ..const import (
ATTR_AUDIO,
ATTR_CLI,
ATTR_DNS,
ATTR_ENABLE_IPV6,
ATTR_OBSERVER,
ATTR_SUPERVISOR,
DOCKER_IPV4_NETWORK_MASK,
DOCKER_IPV4_NETWORK_RANGE,
DOCKER_IPV6_NETWORK_MASK,
DOCKER_NETWORK,
DOCKER_NETWORK_DRIVER,
DOCKER_PREFIX,
OBSERVER_DOCKER_NAME,
SUPERVISOR_DOCKER_NAME,
)
from ..exceptions import DockerError
_LOGGER: logging.Logger = logging.getLogger(__name__)
DOCKER_ENABLEIPV6 = "EnableIPv6"
DOCKER_NETWORK_PARAMS = {
"name": DOCKER_NETWORK,
"driver": DOCKER_NETWORK_DRIVER,
"ipam": docker.types.IPAMConfig(
pool_configs=[
docker.types.IPAMPool(subnet=str(DOCKER_IPV6_NETWORK_MASK)),
docker.types.IPAMPool(
subnet=str(DOCKER_IPV4_NETWORK_MASK),
gateway=str(DOCKER_IPV4_NETWORK_MASK[1]),
iprange=str(DOCKER_IPV4_NETWORK_RANGE),
),
]
),
ATTR_ENABLE_IPV6: True,
"options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
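The IPAM pool above derives the gateway as `DOCKER_IPV4_NETWORK_MASK[1]`, and the `gateway`/`supervisor`/`dns`/... properties below index the same network. A short sketch of that indexing pattern; the subnet value used here is an illustrative assumption, the real constant lives in supervisor's `const` module:

```python
from ipaddress import IPv4Network

# Illustrative subnet; the actual DOCKER_IPV4_NETWORK_MASK constant is
# defined in supervisor/const.py.
DOCKER_IPV4_NETWORK_MASK = IPv4Network("172.30.32.0/23")

# Indexing an IPv4Network yields its addresses in order: [0] is the
# network address, [1] the conventional gateway, then the fixed service
# addresses (supervisor, dns, audio, ...) used by the properties below.
gateway = DOCKER_IPV4_NETWORK_MASK[1]
supervisor = DOCKER_IPV4_NETWORK_MASK[2]
print(gateway)     # 172.30.32.1
print(supervisor)  # 172.30.32.2
```

Keeping these as indices into one network constant means changing the subnet automatically moves every fixed service address with it.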
DOCKER_ENABLE_IPV6_DEFAULT = True
class DockerNetwork:
"""Internal Supervisor Network.
@@ -22,7 +59,14 @@ class DockerNetwork:
def __init__(self, docker_client: docker.DockerClient):
"""Initialize internal Supervisor network."""
self.docker: docker.DockerClient = docker_client
self._network: docker.models.networks.Network = self._get_network()
self._network: docker.models.networks.Network
async def post_init(self, enable_ipv6: bool | None = None) -> Self:
"""Post init actions that must be done in event loop."""
self._network = await asyncio.get_running_loop().run_in_executor(
None, self._get_network, enable_ipv6
)
return self
@property
def name(self) -> str:
@@ -42,55 +86,112 @@ class DockerNetwork:
@property
def gateway(self) -> IPv4Address:
"""Return gateway of the network."""
return DOCKER_NETWORK_MASK[1]
return DOCKER_IPV4_NETWORK_MASK[1]
@property
def supervisor(self) -> IPv4Address:
"""Return supervisor of the network."""
return DOCKER_NETWORK_MASK[2]
return DOCKER_IPV4_NETWORK_MASK[2]
@property
def dns(self) -> IPv4Address:
"""Return dns of the network."""
return DOCKER_NETWORK_MASK[3]
return DOCKER_IPV4_NETWORK_MASK[3]
@property
def audio(self) -> IPv4Address:
"""Return audio of the network."""
return DOCKER_NETWORK_MASK[4]
return DOCKER_IPV4_NETWORK_MASK[4]
@property
def cli(self) -> IPv4Address:
"""Return cli of the network."""
return DOCKER_NETWORK_MASK[5]
return DOCKER_IPV4_NETWORK_MASK[5]
@property
def observer(self) -> IPv4Address:
"""Return observer of the network."""
return DOCKER_NETWORK_MASK[6]
return DOCKER_IPV4_NETWORK_MASK[6]
def _get_network(self) -> docker.models.networks.Network:
def _get_network(
self, enable_ipv6: bool | None = None
) -> docker.models.networks.Network:
"""Get supervisor network."""
try:
return self.docker.networks.get(DOCKER_NETWORK)
if network := self.docker.networks.get(DOCKER_NETWORK):
current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
# If the network exists and we don't have an explicit setting,
# simply stick with what we have.
if enable_ipv6 is None or current_ipv6 == enable_ipv6:
return network
# We have an explicit setting which differs from the current state.
_LOGGER.info(
"Migrating Supervisor network to %s",
"IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
)
if (containers := network.containers) and (
containers_all := all(
container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
for container in containers
)
):
for container in containers:
with suppress(
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
):
network.disconnect(container, force=True)
if not containers or containers_all:
try:
network.remove()
except docker.errors.APIError:
_LOGGER.warning("Failed to remove existing Supervisor network")
return network
else:
_LOGGER.warning(
"System appears to be running, "
"not applying Supervisor network change. "
"Reboot your system to apply the change."
)
return network
except docker.errors.NotFound:
_LOGGER.info("Can't find Supervisor network, creating a new network")
ipam_pool = docker.types.IPAMPool(
subnet=str(DOCKER_NETWORK_MASK),
gateway=str(self.gateway),
iprange=str(DOCKER_NETWORK_RANGE),
network_params = DOCKER_NETWORK_PARAMS.copy()
network_params[ATTR_ENABLE_IPV6] = (
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
)
ipam_config = docker.types.IPAMConfig(pool_configs=[ipam_pool])
try:
self._network = self.docker.networks.create(**network_params) # type: ignore
except docker.errors.APIError as err:
raise DockerError(
f"Can't create Supervisor network: {err}", _LOGGER.error
) from err
return self.docker.networks.create(
DOCKER_NETWORK,
driver="bridge",
ipam=ipam_config,
enable_ipv6=False,
options={"com.docker.network.bridge.name": DOCKER_NETWORK},
)
with suppress(DockerError):
self.attach_container_by_name(
SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
)
with suppress(DockerError):
self.attach_container_by_name(
OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
)
for name, ip in (
(ATTR_CLI, self.cli),
(ATTR_DNS, self.dns),
(ATTR_AUDIO, self.audio),
):
with suppress(DockerError):
self.attach_container_by_name(f"{DOCKER_PREFIX}_{name}", [name], ip)
return self._network
def attach_container(
self,
@@ -102,8 +203,6 @@ class DockerNetwork:
Need run inside executor.
"""
ipv4_address = str(ipv4) if ipv4 else None
# Reload Network information
with suppress(docker.errors.DockerException, requests.RequestException):
self.network.reload()
@@ -116,12 +215,43 @@ class DockerNetwork:
# Attach Network
try:
self.network.connect(container, aliases=alias, ipv4_address=ipv4_address)
except docker.errors.APIError as err:
self.network.connect(
container, aliases=alias, ipv4_address=str(ipv4) if ipv4 else None
)
except (
docker.errors.NotFound,
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(
f"Can't link container to hassio-net: {err}", _LOGGER.error
f"Can't connect {container.name} to Supervisor network: {err}",
_LOGGER.error,
) from err
def attach_container_by_name(
self,
name: str,
alias: list[str] | None = None,
ipv4: IPv4Address | None = None,
) -> None:
"""Attach container to Supervisor network.
Need run inside executor.
"""
try:
container = self.docker.containers.get(name)
except (
docker.errors.NotFound,
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err
if container.id not in self.containers:
self.attach_container(container, alias, ipv4)
def detach_default_bridge(
self, container: docker.models.containers.Container
) -> None:
@@ -130,25 +260,33 @@ class DockerNetwork:
Need run inside executor.
"""
try:
default_network = self.docker.networks.get("bridge")
default_network = self.docker.networks.get(DOCKER_NETWORK_DRIVER)
default_network.disconnect(container)
except docker.errors.NotFound:
return
except docker.errors.APIError as err:
pass
except (
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(
f"Can't disconnect container from default: {err}", _LOGGER.warning
f"Can't disconnect {container.name} from default network: {err}",
_LOGGER.warning,
) from err
def stale_cleanup(self, container_name: str):
"""Remove force a container from Network.
def stale_cleanup(self, name: str) -> None:
"""Force remove a container from Network.
Fix: https://github.com/moby/moby/issues/23302
"""
try:
self.network.disconnect(container_name, force=True)
except docker.errors.NotFound:
pass
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
self.network.disconnect(name, force=True)
except (
docker.errors.APIError,
docker.errors.DockerException,
requests.RequestException,
) as err:
raise DockerError(
f"Can't disconnect {name} from Supervisor network: {err}",
_LOGGER.warning,
) from err


@@ -2,7 +2,7 @@
import logging
from ..const import DOCKER_NETWORK_MASK
from ..const import DOCKER_IPV4_NETWORK_MASK, OBSERVER_DOCKER_NAME
from ..coresys import CoreSysAttributes
from ..exceptions import DockerJobError
from ..jobs.const import JobExecutionLimit
@@ -12,7 +12,6 @@ from .interface import DockerInterface
_LOGGER: logging.Logger = logging.getLogger(__name__)
OBSERVER_DOCKER_NAME: str = "hassio_observer"
ENV_NETWORK_MASK: str = "NETWORK_MASK"
@@ -49,7 +48,7 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
environment={
ENV_TIME: self.sys_timezone,
ENV_TOKEN: self.sys_plugins.observer.supervisor_token,
ENV_NETWORK_MASK: DOCKER_NETWORK_MASK,
ENV_NETWORK_MASK: DOCKER_IPV4_NETWORK_MASK,
},
mounts=[MOUNT_DOCKER],
ports={"80/tcp": 4357},


@@ -87,19 +87,19 @@ class HomeAssistantCore(JobGroup):
try:
# Evaluate Version if we lost this information
if not self.sys_homeassistant.version:
if self.sys_homeassistant.version:
version = self.sys_homeassistant.version
else:
self.sys_homeassistant.version = (
await self.instance.get_latest_version()
)
version
) = await self.instance.get_latest_version()
await self.instance.attach(
version=self.sys_homeassistant.version, skip_state_event_if_down=True
)
await self.instance.attach(version=version, skip_state_event_if_down=True)
# Ensure we are using correct image for this system (unless user has overridden it)
if not self.sys_homeassistant.override_image:
await self.instance.check_image(
self.sys_homeassistant.version, self.sys_homeassistant.default_image
version, self.sys_homeassistant.default_image
)
self.sys_homeassistant.set_image(self.sys_homeassistant.default_image)
except DockerError:
@@ -108,7 +108,7 @@ class HomeAssistantCore(JobGroup):
)
await self.install_landingpage()
else:
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.version = self.instance.version or version
self.sys_homeassistant.set_image(self.instance.image)
await self.sys_homeassistant.save_data()
@@ -182,12 +182,13 @@ class HomeAssistantCore(JobGroup):
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
if self.sys_homeassistant.latest_version:
if to_version := self.sys_homeassistant.latest_version:
try:
await self.instance.update(
self.sys_homeassistant.latest_version,
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = self.instance.version or to_version
break
except (DockerError, JobException):
pass
@@ -198,7 +199,6 @@ class HomeAssistantCore(JobGroup):
await asyncio.sleep(30)
_LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
await self.sys_homeassistant.save_data()
@@ -231,8 +231,8 @@ class HomeAssistantCore(JobGroup):
backup: bool | None = False,
) -> None:
"""Update HomeAssistant version."""
version = version or self.sys_homeassistant.latest_version
if not version:
to_version = version or self.sys_homeassistant.latest_version
if not to_version:
raise HomeAssistantUpdateError(
"Cannot determine latest version of Home Assistant for update",
_LOGGER.error,
@@ -243,9 +243,9 @@ class HomeAssistantCore(JobGroup):
running = await self.instance.is_running()
exists = await self.instance.exists()
if exists and version == self.instance.version:
if exists and to_version == self.instance.version:
raise HomeAssistantUpdateError(
f"Version {version!s} is already installed", _LOGGER.warning
f"Version {to_version!s} is already installed", _LOGGER.warning
)
if backup:
@@ -268,7 +268,7 @@ class HomeAssistantCore(JobGroup):
"Updating Home Assistant image failed", _LOGGER.warning
) from err
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.version = self.instance.version or to_version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
if running:
@@ -282,7 +282,7 @@ class HomeAssistantCore(JobGroup):
# Update Home Assistant
with suppress(HomeAssistantError):
await _update(version)
await _update(to_version)
if not self.error_state and rollback:
try:


@@ -35,6 +35,7 @@ from ..const import (
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
@@ -557,18 +558,11 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[dict[str, Any]] | None
list[IngressSessionDataUserDict] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [
IngressSessionDataUser(
id=data["id"],
username=data.get("username"),
display_name=data.get("name"),
)
for data in list_of_users
]
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return []


@@ -2,6 +2,7 @@
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv4Interface, IPv6Address, IPv6Interface
import logging
import socket
from ..dbus.const import (
@@ -23,6 +24,8 @@ from .const import (
WifiMode,
)
_LOGGER: logging.Logger = logging.getLogger(__name__)
@dataclass(slots=True)
class AccessPoint:
@@ -79,7 +82,7 @@ class VlanConfig:
"""Represent a vlan configuration."""
id: int
interface: str
interface: str | None
@dataclass(slots=True)
@@ -108,7 +111,10 @@ class Interface:
if inet.settings.match and inet.settings.match.path:
return inet.settings.match.path == [self.path]
return inet.settings.connection.interface_name == self.name
return (
inet.settings.connection is not None
and inet.settings.connection.interface_name == self.name
)
@staticmethod
def from_dbus_interface(inet: NetworkInterface) -> "Interface":
@@ -160,23 +166,23 @@ class Interface:
ipv6_setting = Ip6Setting(InterfaceMethod.DISABLED, [], None, [])
ipv4_ready = (
bool(inet.connection)
inet.connection is not None
and ConnectionStateFlags.IP4_READY in inet.connection.state_flags
)
ipv6_ready = (
bool(inet.connection)
inet.connection is not None
and ConnectionStateFlags.IP6_READY in inet.connection.state_flags
)
return Interface(
inet.name,
inet.hw_address,
inet.path,
inet.settings is not None,
Interface._map_nm_connected(inet.connection),
inet.primary,
Interface._map_nm_type(inet.type),
IpConfig(
name=inet.interface_name,
mac=inet.hw_address,
path=inet.path,
enabled=inet.settings is not None,
connected=Interface._map_nm_connected(inet.connection),
primary=inet.primary,
type=Interface._map_nm_type(inet.type),
ipv4=IpConfig(
address=inet.connection.ipv4.address
if inet.connection.ipv4.address
else [],
@@ -188,8 +194,8 @@ class Interface:
)
if inet.connection and inet.connection.ipv4
else IpConfig([], None, [], ipv4_ready),
ipv4_setting,
IpConfig(
ipv4setting=ipv4_setting,
ipv6=IpConfig(
address=inet.connection.ipv6.address
if inet.connection.ipv6.address
else [],
@@ -201,30 +207,28 @@ class Interface:
)
if inet.connection and inet.connection.ipv6
else IpConfig([], None, [], ipv6_ready),
ipv6_setting,
Interface._map_nm_wifi(inet),
Interface._map_nm_vlan(inet),
ipv6setting=ipv6_setting,
wifi=Interface._map_nm_wifi(inet),
vlan=Interface._map_nm_vlan(inet),
)
@staticmethod
def _map_nm_method(method: str) -> InterfaceMethod:
def _map_nm_method(method: str | None) -> InterfaceMethod:
"""Map IP interface method."""
mapping = {
NMInterfaceMethod.AUTO: InterfaceMethod.AUTO,
NMInterfaceMethod.DISABLED: InterfaceMethod.DISABLED,
NMInterfaceMethod.MANUAL: InterfaceMethod.STATIC,
NMInterfaceMethod.LINK_LOCAL: InterfaceMethod.DISABLED,
}
return mapping.get(method, InterfaceMethod.DISABLED)
match method:
case NMInterfaceMethod.AUTO.value:
return InterfaceMethod.AUTO
case NMInterfaceMethod.MANUAL:
return InterfaceMethod.STATIC
return InterfaceMethod.DISABLED
@staticmethod
def _map_nm_addr_gen_mode(addr_gen_mode: int) -> InterfaceAddrGenMode:
"""Map IPv6 interface addr_gen_mode."""
mapping = {
NMInterfaceAddrGenMode.EUI64: InterfaceAddrGenMode.EUI64,
NMInterfaceAddrGenMode.STABLE_PRIVACY: InterfaceAddrGenMode.STABLE_PRIVACY,
NMInterfaceAddrGenMode.DEFAULT_OR_EUI64: InterfaceAddrGenMode.DEFAULT_OR_EUI64,
NMInterfaceAddrGenMode.EUI64.value: InterfaceAddrGenMode.EUI64,
NMInterfaceAddrGenMode.STABLE_PRIVACY.value: InterfaceAddrGenMode.STABLE_PRIVACY,
NMInterfaceAddrGenMode.DEFAULT_OR_EUI64.value: InterfaceAddrGenMode.DEFAULT_OR_EUI64,
}
return mapping.get(addr_gen_mode, InterfaceAddrGenMode.DEFAULT)
@@ -233,9 +237,9 @@ class Interface:
def _map_nm_ip6_privacy(ip6_privacy: int) -> InterfaceIp6Privacy:
"""Map IPv6 interface ip6_privacy."""
mapping = {
NMInterfaceIp6Privacy.DISABLED: InterfaceIp6Privacy.DISABLED,
NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC: InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC,
NMInterfaceIp6Privacy.ENABLED: InterfaceIp6Privacy.ENABLED,
NMInterfaceIp6Privacy.DISABLED.value: InterfaceIp6Privacy.DISABLED,
NMInterfaceIp6Privacy.ENABLED_PREFER_PUBLIC.value: InterfaceIp6Privacy.ENABLED_PREFER_PUBLIC,
NMInterfaceIp6Privacy.ENABLED.value: InterfaceIp6Privacy.ENABLED,
}
return mapping.get(ip6_privacy, InterfaceIp6Privacy.DEFAULT)
@@ -253,12 +257,14 @@ class Interface:
@staticmethod
def _map_nm_type(device_type: int) -> InterfaceType:
mapping = {
DeviceType.ETHERNET: InterfaceType.ETHERNET,
DeviceType.WIRELESS: InterfaceType.WIRELESS,
DeviceType.VLAN: InterfaceType.VLAN,
}
return mapping[device_type]
match device_type:
case DeviceType.ETHERNET.value:
return InterfaceType.ETHERNET
case DeviceType.WIRELESS.value:
return InterfaceType.WIRELESS
case DeviceType.VLAN.value:
return InterfaceType.VLAN
raise ValueError(f"Invalid device type: {device_type}")
@staticmethod
def _map_nm_wifi(inet: NetworkInterface) -> WifiConfig | None:
@@ -267,15 +273,22 @@ class Interface:
return None
# Authentication and PSK
auth = None
auth = AuthMethod.OPEN
psk = None
if not inet.settings.wireless_security:
auth = AuthMethod.OPEN
elif inet.settings.wireless_security.key_mgmt == "none":
auth = AuthMethod.WEP
elif inet.settings.wireless_security.key_mgmt == "wpa-psk":
auth = AuthMethod.WPA_PSK
psk = inet.settings.wireless_security.psk
if inet.settings.wireless_security:
match inet.settings.wireless_security.key_mgmt:
case "none":
auth = AuthMethod.WEP
case "wpa-psk":
auth = AuthMethod.WPA_PSK
psk = inet.settings.wireless_security.psk
case _:
_LOGGER.warning(
"Auth method %s for network interface %s unsupported, skipping",
inet.settings.wireless_security.key_mgmt,
inet.interface_name,
)
return None
# WifiMode
mode = WifiMode.INFRASTRUCTURE
@@ -289,17 +302,17 @@ class Interface:
signal = None
return WifiConfig(
mode,
inet.settings.wireless.ssid,
auth,
psk,
signal,
mode=mode,
ssid=inet.settings.wireless.ssid if inet.settings.wireless else "",
auth=auth,
psk=psk,
signal=signal,
)
@staticmethod
def _map_nm_vlan(inet: NetworkInterface) -> WifiConfig | None:
def _map_nm_vlan(inet: NetworkInterface) -> VlanConfig | None:
"""Create mapping to nm vlan property."""
if inet.type != DeviceType.VLAN or not inet.settings:
if inet.type != DeviceType.VLAN or not inet.settings or not inet.settings.vlan:
return None
return VlanConfig(inet.settings.vlan.id, inet.settings.vlan.parent)


@@ -2,7 +2,7 @@
from __future__ import annotations
from collections.abc import AsyncGenerator
from collections.abc import AsyncGenerator, Mapping
from contextlib import asynccontextmanager
import json
import logging
@@ -205,7 +205,7 @@ class LogsControl(CoreSysAttributes):
async def journald_logs(
self,
path: str = "/entries",
params: dict[str, str | list[str]] | None = None,
params: Mapping[str, str | list[str]] | None = None,
range_header: str | None = None,
accept: LogFormat = LogFormat.TEXT,
timeout: ClientTimeout | None = None,
@@ -226,7 +226,7 @@ class LogsControl(CoreSysAttributes):
base_url = "http://localhost/"
connector = UnixConnector(path=str(SYSTEMD_JOURNAL_GATEWAYD_SOCKET))
async with ClientSession(base_url=base_url, connector=connector) as session:
headers = {ACCEPT: accept}
headers = {ACCEPT: accept.value}
if range_header:
if range_header.endswith(":"):
# Make sure that num_entries is always set - before Systemd v256 it was


@@ -8,11 +8,11 @@ from typing import Any
from ..const import ATTR_HOST_INTERNET
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_CONFIGURATION,
DBUS_ATTR_CONNECTION_ENABLED,
DBUS_ATTR_CONNECTIVITY,
DBUS_ATTR_PRIMARY_CONNECTION,
DBUS_IFACE_DNS,
DBUS_IFACE_NM,
DBUS_OBJECT_BASE,
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
ConnectionStateType,
ConnectivityState,
@@ -46,6 +46,8 @@ class NetworkManager(CoreSysAttributes):
"""Initialize system center handling."""
self.coresys: CoreSys = coresys
self._connectivity: bool | None = None
# No event need on initial change (NetworkManager initializes with empty list)
self._dns_configuration: list = []
@property
def connectivity(self) -> bool | None:
@@ -87,7 +89,7 @@ class NetworkManager(CoreSysAttributes):
for config in self.sys_dbus.network.dns.configuration:
if config.vpn or not config.nameservers:
continue
servers.extend(config.nameservers)
servers.extend([str(ns) for ns in config.nameservers])
return list(dict.fromkeys(servers))
@@ -138,8 +140,12 @@
]
)
self.sys_dbus.network.dbus.properties.on_properties_changed(
self._check_connectivity_changed
self.sys_dbus.network.dbus.properties.on(
"properties_changed", self._check_connectivity_changed
)
self.sys_dbus.network.dns.dbus.properties.on(
"properties_changed", self._check_dns_changed
)
async def _check_connectivity_changed(
@@ -152,16 +158,6 @@
connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)
# This potentially updated the DNS configuration. Make sure the DNS plug-in
# picks up the latest settings.
if (
DBUS_ATTR_PRIMARY_CONNECTION in changed
and changed[DBUS_ATTR_PRIMARY_CONNECTION]
and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
and await self.sys_plugins.dns.is_running()
):
await self.sys_plugins.dns.restart()
if (
connectivity_check is True
or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@@ -175,6 +171,20 @@
elif connectivity is not None:
self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL
async def _check_dns_changed(
self, interface: str, changed: dict[str, Any], invalidated: list[str]
):
"""Check if DNS properties have changed."""
if interface != DBUS_IFACE_DNS:
return
if (
DBUS_ATTR_CONFIGURATION in changed
and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
):
self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
self.sys_plugins.dns.notify_locals_changed()
async def update(self, *, force_connectivity_check: bool = False):
"""Update properties over dbus."""
_LOGGER.info("Updating local network information")
@@ -197,10 +207,16 @@
with suppress(NetworkInterfaceNotFound):
inet = self.sys_dbus.network.get(interface.name)
con: NetworkConnection = None
con: NetworkConnection | None = None
# Update exist configuration
if inet and interface.equals_dbus_interface(inet) and interface.enabled:
if (
inet
and inet.settings
and inet.settings.connection
and interface.equals_dbus_interface(inet)
and interface.enabled
):
_LOGGER.debug("Updating existing configuration for %s", interface.name)
settings = get_connection_from_interface(
interface,
@@ -211,12 +227,12 @@
try:
await inet.settings.update(settings)
con = await self.sys_dbus.network.activate_connection(
con = activated = await self.sys_dbus.network.activate_connection(
inet.settings.object_path, inet.object_path
)
_LOGGER.debug(
"activate_connection returns %s",
con.object_path,
activated.object_path,
)
except DBusError as err:
raise HostNetworkError(
@@ -236,12 +252,16 @@
settings = get_connection_from_interface(interface, self.sys_dbus.network)
try:
settings, con = await self.sys_dbus.network.add_and_activate_connection(
(
settings,
activated,
) = await self.sys_dbus.network.add_and_activate_connection(
settings, inet.object_path
)
con = activated
_LOGGER.debug(
"add_and_activate_connection returns %s",
con.object_path,
activated.object_path,
)
except DBusError as err:
raise HostNetworkError(
@@ -277,7 +297,7 @@
)
if con:
async with con.dbus.signal(
async with con.connected_dbus.signal(
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED
) as signal:
# From this point we monitor signals. However, it might be that
@@ -303,7 +323,7 @@
"""Scan on Interface for AccessPoint."""
inet = self.sys_dbus.network.get(interface.name)
if inet.type != DeviceType.WIRELESS:
if inet.type != DeviceType.WIRELESS or not inet.wireless:
raise HostNotSupportedError(
f"Can only scan with wireless card - {interface.name}", _LOGGER.error
)
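The DNS hunk above stringifies nameservers before collecting them and deduplicates with `list(dict.fromkeys(servers))`. That idiom preserves first-seen order, which a `set()` round-trip would not:

```python
def unique_ordered(servers: list[str]) -> list[str]:
    # dict keys are insertion-ordered, so fromkeys() drops duplicates
    # while keeping each server's first-seen position.
    return list(dict.fromkeys(servers))
```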


@@ -12,6 +12,7 @@ from .const import (
ATTR_SESSION_DATA,
FILE_HASSIO_INGRESS,
IngressSessionData,
IngressSessionDataDict,
)
from .coresys import CoreSys, CoreSysAttributes
from .utils import check_port
@@ -35,7 +36,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
"""Return addon they have this ingress token."""
if token not in self.tokens:
return None
return self.sys_addons.get(self.tokens[token], local_only=True)
return self.sys_addons.get_local_only(self.tokens[token])
def get_session_data(self, session_id: str) -> IngressSessionData | None:
"""Return complementary data of current session or None."""
@@ -49,7 +50,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
return self._data[ATTR_SESSION]
@property
def sessions_data(self) -> dict[str, dict[str, str | None]]:
def sessions_data(self) -> dict[str, IngressSessionDataDict]:
"""Return sessions_data."""
return self._data[ATTR_SESSION_DATA]
@@ -89,7 +90,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
now = utcnow()
sessions = {}
sessions_data: dict[str, dict[str, str | None]] = {}
sessions_data: dict[str, IngressSessionDataDict] = {}
for session, valid in self.sessions.items():
# check if timestamp valid, to avoid crash on malformed timestamp
try:
@@ -118,7 +119,8 @@
# Read all ingress token and build a map
for addon in self.addons:
self.tokens[addon.ingress_token] = addon.slug
if addon.ingress_token:
self.tokens[addon.ingress_token] = addon.slug
def create_session(self, data: IngressSessionData | None = None) -> str:
"""Create new session."""
@@ -141,7 +143,7 @@
try:
valid_until = utc_from_timestamp(self.sessions[session])
except OverflowError:
self.sessions[session] = utcnow() + timedelta(minutes=15)
self.sessions[session] = (utcnow() + timedelta(minutes=15)).timestamp()
return True
# Is still valid?
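The last ingress hunk fixes a type bug: the sessions store holds float timestamps, but the `OverflowError` fallback used to write a `datetime` back. A standalone sketch of the corrected pattern (names and the exact exception set are illustrative, not the Supervisor API):

```python
from datetime import datetime, timedelta, timezone


def session_valid_until(sessions: dict[str, float], session: str) -> datetime:
    # Malformed or absurdly large timestamps raise on conversion; reset
    # such entries to a short validity window, stored as a float
    # timestamp so the mapping stays homogeneous and JSON-serializable.
    try:
        return datetime.fromtimestamp(sessions[session], tz=timezone.utc)
    except (OverflowError, OSError, ValueError):
        valid_until = datetime.now(timezone.utc) + timedelta(minutes=15)
        sessions[session] = valid_until.timestamp()
        return valid_until
```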


@@ -1,13 +1,13 @@
"""Supervisor job manager."""
import asyncio
from collections.abc import Awaitable, Callable
from contextlib import contextmanager
from collections.abc import Callable, Coroutine, Generator
from contextlib import contextmanager, suppress
from contextvars import Context, ContextVar, Token
from dataclasses import dataclass
from datetime import datetime
import logging
from typing import Any
from typing import Any, Self
from uuid import uuid4
from attrs import Attribute, define, field
@@ -27,7 +27,7 @@ from .validate import SCHEMA_JOBS_CONFIG
# When a new asyncio task is started the current context is copied over.
# Modifications to it in one task are not visible to others though.
# This allows us to track what job is currently in progress in each task.
_CURRENT_JOB: ContextVar[str] = ContextVar("current_job")
_CURRENT_JOB: ContextVar[str | None] = ContextVar("current_job", default=None)
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -75,7 +75,7 @@ class SupervisorJobError:
message: str = "Unknown error, see supervisor logs"
stage: str | None = None
def as_dict(self) -> dict[str, str]:
def as_dict(self) -> dict[str, str | None]:
"""Return dictionary representation."""
return {
"type": self.type_.__name__,
@@ -101,9 +101,7 @@ class SupervisorJob:
stage: str | None = field(
default=None, validator=[_invalid_if_done], on_setattr=_on_change
)
parent_id: str | None = field(
factory=lambda: _CURRENT_JOB.get(None), on_setattr=frozen
)
parent_id: str | None = field(factory=_CURRENT_JOB.get, on_setattr=frozen)
done: bool | None = field(init=False, default=None, on_setattr=_on_change)
on_change: Callable[["SupervisorJob", Attribute, Any], None] | None = field(
default=None, on_setattr=frozen
@@ -137,7 +135,7 @@ class SupervisorJob:
self.errors += [new_error]
@contextmanager
def start(self):
def start(self) -> Generator[Self]:
"""Start the job in the current task.
This can only be called if the parent ID matches the job running in the current task.
@@ -146,11 +144,11 @@
"""
if self.done is not None:
raise JobStartException("Job has already been started")
if _CURRENT_JOB.get(None) != self.parent_id:
if _CURRENT_JOB.get() != self.parent_id:
raise JobStartException("Job has a different parent from current job")
self.done = False
token: Token[str] | None = None
token: Token[str | None] | None = None
try:
token = _CURRENT_JOB.set(self.uuid)
yield self
@@ -193,17 +191,15 @@ class JobManager(FileConfiguration, CoreSysAttributes):
Must be called from within a job. Raises RuntimeError if there is no current job.
"""
try:
return self.get_job(_CURRENT_JOB.get())
except (LookupError, JobNotFound):
raise RuntimeError(
"No job for the current asyncio task!", _LOGGER.critical
) from None
if job_id := _CURRENT_JOB.get():
with suppress(JobNotFound):
return self.get_job(job_id)
raise RuntimeError("No job for the current asyncio task!", _LOGGER.critical)
@property
def is_job(self) -> bool:
"""Return true if there is an active job for the current asyncio task."""
return bool(_CURRENT_JOB.get(None))
return _CURRENT_JOB.get() is not None
def _notify_on_job_change(
self, job: SupervisorJob, attribute: Attribute, value: Any
@@ -265,7 +261,7 @@ class JobManager(FileConfiguration, CoreSysAttributes):
def schedule_job(
self,
job_method: Callable[..., Awaitable[Any]],
job_method: Callable[..., Coroutine],
options: JobSchedulerOptions,
*args,
**kwargs,


@@ -1,12 +1,12 @@
"""Job decorator."""
import asyncio
from collections.abc import Callable
from collections.abc import Awaitable, Callable
from contextlib import suppress
from datetime import datetime, timedelta
from functools import wraps
import logging
from typing import Any
from typing import Any, cast
from ..const import CoreState
from ..coresys import CoreSys, CoreSysAttributes
@@ -43,7 +43,22 @@ class Job(CoreSysAttributes):
throttle_max_calls: int | None = None,
internal: bool = False,
):
"""Initialize the Job class."""
"""Initialize the Job decorator.
Args:
name (str): Unique name for the job. Must not be duplicated.
conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
limit (JobExecutionLimit | None): Execution limit policy for the job (e.g., throttle, once, group-based).
throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for rate-limited jobs).
throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
Raises:
RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected limit.
"""
if name in _JOB_NAMES:
raise RuntimeError(f"A job already exists with name {name}!")
@@ -54,11 +69,10 @@
self.on_condition = on_condition
self.limit = limit
self._throttle_period = throttle_period
self.throttle_max_calls = throttle_max_calls
self._throttle_max_calls = throttle_max_calls
self._lock: asyncio.Semaphore | None = None
self._method = None
self._last_call: dict[str | None, datetime] = {}
self._rate_limited_calls: dict[str, list[datetime]] | None = None
self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
self._internal = internal
# Validate Options
@@ -82,13 +96,29 @@
JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
if self.throttle_max_calls is None:
if self._throttle_max_calls is None:
raise RuntimeError(
f"Job {name} is using execution limit {limit} without throttle max calls!"
)
self._rate_limited_calls = {}
@property
def throttle_max_calls(self) -> int:
"""Return max calls for throttle."""
if self._throttle_max_calls is None:
raise RuntimeError("No throttle max calls set for job!")
return self._throttle_max_calls
@property
def lock(self) -> asyncio.Semaphore:
"""Return lock for limits."""
# asyncio.Semaphore objects must be created in event loop
# Since this is sync code it is not safe to create if missing here
if not self._lock:
raise RuntimeError("Lock has not been created yet!")
return self._lock
def last_call(self, group_name: str | None = None) -> datetime:
"""Return last call datetime."""
return self._last_call.get(group_name, datetime.min)
@@ -97,12 +127,12 @@
"""Set last call datetime."""
self._last_call[group_name] = value
def rate_limited_calls(
self, group_name: str | None = None
) -> list[datetime] | None:
def rate_limited_calls(self, group_name: str | None = None) -> list[datetime]:
"""Return rate limited calls if used."""
if self._rate_limited_calls is None:
return None
raise RuntimeError(
f"Rate limited calls not available for limit type {self.limit}"
)
return self._rate_limited_calls.get(group_name, [])
@@ -131,10 +161,10 @@
self._rate_limited_calls[group_name] = value
def throttle_period(self, group_name: str | None = None) -> timedelta | None:
def throttle_period(self, group_name: str | None = None) -> timedelta:
"""Return throttle period."""
if self._throttle_period is None:
return None
raise RuntimeError("No throttle period set for Job!")
if isinstance(self._throttle_period, timedelta):
return self._throttle_period
@@ -142,7 +172,7 @@
return self._throttle_period(
self.coresys,
self.last_call(group_name),
self.rate_limited_calls(group_name),
self.rate_limited_calls(group_name) if self._rate_limited_calls else None,
)
def _post_init(self, obj: JobGroup | CoreSysAttributes) -> JobGroup | None:
@@ -158,12 +188,12 @@
self._lock = asyncio.Semaphore()
# Job groups
try:
is_job_group = obj.acquire and obj.release
except AttributeError:
is_job_group = False
job_group: JobGroup | None = None
with suppress(AttributeError):
if obj.acquire and obj.release: # type: ignore
job_group = cast(JobGroup, obj)
if not is_job_group and self.limit in (
if not job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
JobExecutionLimit.GROUP_THROTTLE,
@@ -174,7 +204,7 @@
f"Job on {self.name} need to be a JobGroup to use group based limits!"
) from None
return obj if is_job_group else None
return job_group
def _handle_job_condition_exception(self, err: JobConditionException) -> None:
"""Handle a job condition failure."""
@@ -184,9 +214,8 @@
return
raise self.on_condition(error_msg, _LOGGER.warning) from None
def __call__(self, method):
def __call__(self, method: Callable[..., Awaitable]):
"""Call the wrapper logic."""
self._method = method
@wraps(method)
async def wrapper(
@@ -221,7 +250,7 @@
if self.conditions:
try:
await Job.check_conditions(
self, set(self.conditions), self._method.__qualname__
self, set(self.conditions), method.__qualname__
)
except JobConditionException as err:
return self._handle_job_condition_exception(err)
@@ -237,7 +266,7 @@
JobExecutionLimit.GROUP_WAIT,
):
try:
await obj.acquire(
await cast(JobGroup, job_group).acquire(
job, self.limit == JobExecutionLimit.GROUP_WAIT
)
except JobGroupExecutionLimitExceeded as err:
@@ -296,12 +325,12 @@
with job.start():
try:
self.set_last_call(datetime.now(), group_name)
if self.rate_limited_calls(group_name) is not None:
if self._rate_limited_calls is not None:
self.add_rate_limited_call(
self.last_call(group_name), group_name
)
return await self._method(obj, *args, **kwargs)
return await method(obj, *args, **kwargs)
# If a method has a conditional JobCondition, they must check it in the method
# These should be handled like normal JobConditions as much as possible
@@ -317,11 +346,11 @@
raise JobException() from err
finally:
self._release_exception_limits()
if self.limit in (
if job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
):
obj.release()
job_group.release()
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required
finally:
@@ -473,13 +502,13 @@
):
return
if self.limit == JobExecutionLimit.ONCE and self._lock.locked():
if self.limit == JobExecutionLimit.ONCE and self.lock.locked():
on_condition = (
JobException if self.on_condition is None else self.on_condition
)
raise on_condition("Another job is running")
await self._lock.acquire()
await self.lock.acquire()
def _release_exception_limits(self) -> None:
"""Release possible exception limits."""
@@ -490,4 +519,4 @@
JobExecutionLimit.GROUP_THROTTLE_WAIT,
):
return
self._lock.release()
self.lock.release()
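A notable cleanup in this file: the decorator no longer stashes the wrapped coroutine on `self._method` but closes over the `method` argument, so the reference cannot be clobbered by later decorations. A stripped-down sketch of that shape (conditions, limits and throttling omitted; this is not the real `Job` implementation):

```python
import asyncio
from collections.abc import Awaitable, Callable
from functools import wraps


def job(name: str):
    """Hypothetical minimal job decorator."""

    def decorator(method: Callable[..., Awaitable]):
        @wraps(method)
        async def wrapper(*args, **kwargs):
            # `method` comes from the closure, not from mutable decorator
            # state, so concurrent or repeated use stays consistent.
            return await method(*args, **kwargs)

        return wrapper

    return decorator


@job("double")
async def double(x: int) -> int:
    return x * 2
```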


@@ -41,7 +41,7 @@ class JobGroup(CoreSysAttributes):
def has_lock(self) -> bool:
"""Return true if current task has the lock on this job group."""
return (
self.active_job
self.active_job is not None
and self.sys_jobs.is_job
and self.active_job == self.sys_jobs.current
)


@@ -9,7 +9,7 @@ from aiohttp import hdrs
import attr
from sentry_sdk.types import Event, Hint
from ..const import DOCKER_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
from ..const import DOCKER_IPV4_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
from ..coresys import CoreSys
from ..exceptions import AddonConfigurationError
@@ -21,7 +21,7 @@ def sanitize_host(host: str) -> str:
try:
# Allow internal URLs
ip = ipaddress.ip_address(host)
if ip in ipaddress.ip_network(DOCKER_NETWORK_MASK):
if ip in ipaddress.ip_network(DOCKER_IPV4_NETWORK_MASK):
return host
except ValueError:
pass
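The rename to `DOCKER_IPV4_NETWORK_MASK` makes the sanitizer's membership test explicit about address family. The check itself is plain stdlib `ipaddress`; the network value below is an assumption for illustration, not the real constant:

```python
import ipaddress

# Assumed example network; the actual constant lives in supervisor const.
DOCKER_IPV4_NETWORK_MASK = ipaddress.ip_network("172.30.32.0/23")


def is_internal_host(host: str) -> bool:
    # ip_address() raises ValueError for hostnames, so anything that is
    # not a literal IP simply fails the internal-network test. A version
    # mismatch (IPv6 address vs IPv4 network) also evaluates to False.
    try:
        return ipaddress.ip_address(host) in DOCKER_IPV4_NETWORK_MASK
    except ValueError:
        return False
```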


@@ -2,9 +2,10 @@
from datetime import datetime, timedelta
import logging
from typing import cast
from ..addons.const import ADDON_UPDATE_CONDITIONS
from ..backups.const import LOCATION_CLOUD_BACKUP
from ..backups.const import LOCATION_CLOUD_BACKUP, LOCATION_TYPE
from ..const import ATTR_TYPE, AddonState
from ..coresys import CoreSysAttributes
from ..exceptions import (
@@ -378,6 +379,8 @@ class Tasks(CoreSysAttributes):
]
for backup in old_backups:
try:
await self.sys_backups.remove(backup, [LOCATION_CLOUD_BACKUP])
await self.sys_backups.remove(
backup, [cast(LOCATION_TYPE, LOCATION_CLOUD_BACKUP)]
)
except BackupFileNotFoundError as err:
_LOGGER.debug("Can't remove backup %s: %s", backup.slug, err)


@@ -56,7 +56,7 @@ class MountManager(FileConfiguration, CoreSysAttributes):
async def load_config(self) -> Self:
"""Load config in executor."""
await super().load_config()
self._mounts: dict[str, Mount] = {
self._mounts = {
mount[ATTR_NAME]: Mount.from_dict(self.coresys, mount)
for mount in self._data[ATTR_MOUNTS]
}
@@ -172,12 +172,12 @@ class MountManager(FileConfiguration, CoreSysAttributes):
errors = await asyncio.gather(*mount_tasks, return_exceptions=True)
for i in range(len(errors)): # pylint: disable=consider-using-enumerate
if not errors[i]:
if not (err := errors[i]):
continue
if mounts[i].failed_issue in self.sys_resolution.issues:
continue
if not isinstance(errors[i], MountError):
await async_capture_exception(errors[i])
if not isinstance(err, MountError):
await async_capture_exception(err)
self.sys_resolution.add_issue(
evolve(mounts[i].failed_issue),
@@ -219,7 +219,7 @@ class MountManager(FileConfiguration, CoreSysAttributes):
conditions=[JobCondition.MOUNT_AVAILABLE],
on_condition=MountJobError,
)
async def remove_mount(self, name: str, *, retain_entry: bool = False) -> None:
async def remove_mount(self, name: str, *, retain_entry: bool = False) -> Mount:
"""Remove a mount."""
# Add mount name to job
self.sys_jobs.current.reference = name
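In the `mount_tasks` loop above, the walrus binding (`err := errors[i]`) avoids repeated indexing, though the index is still needed to pair each error with its mount. A generic sketch of the underlying `gather(..., return_exceptions=True)` pattern, written with `enumerate` (names are illustrative):

```python
import asyncio


async def run_all(coros) -> list[tuple[int, BaseException]]:
    # return_exceptions=True yields results and exceptions positionally,
    # so index i still identifies which task produced each failure.
    results = await asyncio.gather(*coros, return_exceptions=True)
    return [
        (i, result)
        for i, result in enumerate(results)
        if isinstance(result, BaseException)
    ]
```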


@@ -2,6 +2,7 @@
from abc import ABC, abstractmethod
import asyncio
from collections.abc import Callable
from functools import cached_property
import logging
from pathlib import Path, PurePath
@@ -9,14 +10,6 @@ from pathlib import Path, PurePath
from dbus_fast import Variant
from voluptuous import Coerce
from ..const import (
ATTR_NAME,
ATTR_PASSWORD,
ATTR_PORT,
ATTR_TYPE,
ATTR_USERNAME,
ATTR_VERSION,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_ACTIVE_STATE,
@@ -41,22 +34,13 @@ from ..exceptions import (
from ..resolution.const import ContextType, IssueType
from ..resolution.data import Issue
from ..utils.sentry import async_capture_exception
from .const import (
ATTR_PATH,
ATTR_READ_ONLY,
ATTR_SERVER,
ATTR_SHARE,
ATTR_USAGE,
MountCifsVersion,
MountType,
MountUsage,
)
from .const import MountCifsVersion, MountType, MountUsage
from .validate import MountData
_LOGGER: logging.Logger = logging.getLogger(__name__)
COERCE_MOUNT_TYPE = Coerce(MountType)
COERCE_MOUNT_USAGE = Coerce(MountUsage)
COERCE_MOUNT_TYPE: Callable[[str], MountType] = Coerce(MountType)
COERCE_MOUNT_USAGE: Callable[[str], MountUsage] = Coerce(MountUsage)
class Mount(CoreSysAttributes, ABC):
@@ -80,7 +64,7 @@ class Mount(CoreSysAttributes, ABC):
if cls not in [Mount, NetworkMount]:
return cls(coresys, data)
type_ = COERCE_MOUNT_TYPE(data[ATTR_TYPE])
type_ = COERCE_MOUNT_TYPE(data["type"])
if type_ == MountType.CIFS:
return CIFSMount(coresys, data)
if type_ == MountType.NFS:
@@ -90,32 +74,33 @@ class Mount(CoreSysAttributes, ABC):
def to_dict(self, *, skip_secrets: bool = True) -> MountData:
"""Return dictionary representation."""
return MountData(
name=self.name, type=self.type, usage=self.usage, read_only=self.read_only
name=self.name,
type=self.type,
usage=self.usage and self.usage.value,
read_only=self.read_only,
)
@property
def name(self) -> str:
"""Get name."""
return self._data[ATTR_NAME]
return self._data["name"]
@property
def type(self) -> MountType:
"""Get mount type."""
return COERCE_MOUNT_TYPE(self._data[ATTR_TYPE])
return COERCE_MOUNT_TYPE(self._data["type"])
@property
def usage(self) -> MountUsage | None:
"""Get mount usage."""
return (
COERCE_MOUNT_USAGE(self._data[ATTR_USAGE])
if ATTR_USAGE in self._data
else None
)
if self._data["usage"] is None:
return None
return COERCE_MOUNT_USAGE(self._data["usage"])
@property
def read_only(self) -> bool:
"""Is mount read-only."""
return self._data.get(ATTR_READ_ONLY, False)
return self._data.get("read_only", False)
@property
@abstractmethod
@@ -186,20 +171,20 @@ class Mount(CoreSysAttributes, ABC):
async def load(self) -> None:
"""Initialize object."""
# If there's no mount unit, mount it to make one
if not await self._update_unit():
if not (unit := await self._update_unit()):
await self.mount()
return
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)
# If mount is not available, try to reload it
if not await self.is_mounted():
await self.reload()
async def _update_state(self) -> UnitActiveState | None:
async def _update_state(self, unit: SystemdUnit) -> None:
"""Update mount unit state."""
try:
self._state = await self.unit.get_active_state()
self._state = await unit.get_active_state()
except DBusError as err:
await async_capture_exception(err)
raise MountError(
@@ -220,10 +205,10 @@ class Mount(CoreSysAttributes, ABC):
async def update(self) -> bool:
"""Update info about mount from dbus. Return true if it is mounted and available."""
if not await self._update_unit():
if not (unit := await self._update_unit()):
return False
await self._update_state()
await self._update_state(unit)
# If active, dismiss corresponding failed mount issue if found
if (
@@ -235,16 +220,14 @@ class Mount(CoreSysAttributes, ABC):
async def _update_state_await(
self,
unit: SystemdUnit,
expected_states: list[UnitActiveState] | None = None,
not_state: UnitActiveState = UnitActiveState.ACTIVATING,
) -> None:
"""Update state info about mount from dbus. Wait for one of expected_states to appear or state to change from not_state."""
if not self.unit:
return
try:
async with asyncio.timeout(30), self.unit.properties_changed() as signal:
await self._update_state()
async with asyncio.timeout(30), unit.properties_changed() as signal:
await self._update_state(unit)
while (
expected_states
and self.state not in expected_states
@@ -312,8 +295,8 @@ class Mount(CoreSysAttributes, ABC):
f"Could not mount {self.name} due to: {err!s}", _LOGGER.error
) from err
if await self._update_unit():
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
if unit := await self._update_unit():
await self._update_state_await(unit, not_state=UnitActiveState.ACTIVATING)
if not await self.is_mounted():
raise MountActivationError(
@@ -323,17 +306,17 @@ class Mount(CoreSysAttributes, ABC):
async def unmount(self) -> None:
"""Unmount using systemd."""
if not await self._update_unit():
if not (unit := await self._update_unit()):
_LOGGER.info("Mount %s is not mounted, skipping unmount", self.name)
return
await self._update_state()
await self._update_state(unit)
try:
if self.state != UnitActiveState.FAILED:
await self.sys_dbus.systemd.stop_unit(self.unit_name, StopUnitMode.FAIL)
await self._update_state_await(
[UnitActiveState.INACTIVE, UnitActiveState.FAILED]
unit, [UnitActiveState.INACTIVE, UnitActiveState.FAILED]
)
if self.state == UnitActiveState.FAILED:
@@ -360,8 +343,10 @@ class Mount(CoreSysAttributes, ABC):
f"Could not reload mount {self.name} due to: {err!s}", _LOGGER.error
) from err
else:
if await self._update_unit():
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
if unit := await self._update_unit():
await self._update_state_await(
unit, not_state=UnitActiveState.ACTIVATING
)
if not await self.is_mounted():
raise MountActivationError(
@@ -381,18 +366,18 @@ class NetworkMount(Mount, ABC):
"""Return dictionary representation."""
out = MountData(server=self.server, **super().to_dict())
if self.port is not None:
out[ATTR_PORT] = self.port
out["port"] = self.port
return out
@property
def server(self) -> str:
"""Get server."""
return self._data[ATTR_SERVER]
return self._data["server"]
@property
def port(self) -> int | None:
"""Get port, returns none if using the protocol default."""
return self._data.get(ATTR_PORT)
return self._data.get("port")
@property
def where(self) -> PurePath:
@@ -420,31 +405,31 @@ class CIFSMount(NetworkMount):
def to_dict(self, *, skip_secrets: bool = True) -> MountData:
"""Return dictionary representation."""
out = MountData(share=self.share, **super().to_dict())
if not skip_secrets and self.username is not None:
out[ATTR_USERNAME] = self.username
out[ATTR_PASSWORD] = self.password
out[ATTR_VERSION] = self.version
if not skip_secrets and self.username is not None and self.password is not None:
out["username"] = self.username
out["password"] = self.password
out["version"] = self.version
return out
@property
def share(self) -> str:
"""Get share."""
return self._data[ATTR_SHARE]
return self._data["share"]
@property
def username(self) -> str | None:
"""Get username, returns none if auth is not used."""
return self._data.get(ATTR_USERNAME)
return self._data.get("username")
@property
def password(self) -> str | None:
"""Get password, returns none if auth is not used."""
return self._data.get(ATTR_PASSWORD)
return self._data.get("password")
@property
def version(self) -> str | None:
"""Get password, returns none if auth is not used."""
version = self._data.get(ATTR_VERSION)
"""Get cifs version, returns none if using default."""
version = self._data.get("version")
if version == MountCifsVersion.LEGACY_1_0:
return "1.0"
if version == MountCifsVersion.LEGACY_2_0:
@ -513,7 +498,7 @@ class NFSMount(NetworkMount):
@property
def path(self) -> PurePath:
"""Get path."""
return PurePath(self._data[ATTR_PATH])
return PurePath(self._data["path"])
@property
def what(self) -> str:
@ -543,7 +528,7 @@ class BindMount(Mount):
def create(
coresys: CoreSys,
name: str,
path: Path,
path: PurePath,
usage: MountUsage | None = None,
where: PurePath | None = None,
read_only: bool = False,
@ -568,7 +553,7 @@ class BindMount(Mount):
@property
def path(self) -> PurePath:
"""Get path."""
return PurePath(self._data[ATTR_PATH])
return PurePath(self._data["path"])
@property
def what(self) -> str:


@ -103,7 +103,7 @@ class MountData(TypedDict):
name: str
type: str
read_only: bool
usage: NotRequired[str]
usage: str | None
# CIFS and NFS fields
server: NotRequired[str]
@ -113,6 +113,7 @@ class MountData(TypedDict):
share: NotRequired[str]
username: NotRequired[str]
password: NotRequired[str]
version: NotRequired[str | None]
# NFS and Bind fields
path: NotRequired[str]


@ -5,7 +5,7 @@ from contextlib import suppress
from dataclasses import dataclass
import logging
from pathlib import Path
from typing import Any, Final
from typing import Any, Final, cast
from awesomeversion import AwesomeVersion
@ -24,6 +24,7 @@ from ..exceptions import (
)
from ..jobs.const import JobCondition, JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.checks.base import CheckBase
from ..resolution.checks.disabled_data_disk import CheckDisabledDataDisk
from ..resolution.checks.multiple_data_disks import CheckMultipleDataDisks
from ..utils.sentry import async_capture_exception
@ -149,7 +150,7 @@ class DataDisk(CoreSysAttributes):
Available disks are drives where nothing on it has been mounted
and it can be formatted.
"""
available: list[UDisks2Drive] = []
available: list[Disk] = []
for drive in self.sys_dbus.udisks2.drives:
block_devices = self._get_block_devices_for_drive(drive)
primary = _get_primary_block_device(block_devices)
@ -166,12 +167,16 @@ class DataDisk(CoreSysAttributes):
@property
def check_multiple_data_disks(self) -> CheckMultipleDataDisks:
"""Resolution center check for multiple data disks."""
return self.sys_resolution.check.get("multiple_data_disks")
return cast(
CheckMultipleDataDisks, self.sys_resolution.check.get("multiple_data_disks")
)
@property
def check_disabled_data_disk(self) -> CheckDisabledDataDisk:
"""Resolution center check for disabled data disk."""
return self.sys_resolution.check.get("disabled_data_disk")
return cast(
CheckDisabledDataDisk, self.sys_resolution.check.get("disabled_data_disk")
)
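The `cast()` calls above narrow the return type of the check registry for the type checker; `cast` is a no-op at runtime. A self-contained sketch of the idiom (registry and class names are illustrative):

```python
from typing import cast


class CheckBase:
    slug = "base"


class CheckMultipleDataDisks(CheckBase):
    slug = "multiple_data_disks"


_CHECKS: dict[str, CheckBase] = {"multiple_data_disks": CheckMultipleDataDisks()}


def get_check(slug: str) -> CheckBase:
    """Registry lookup typed with the base class, like sys_resolution.check.get."""
    return _CHECKS[slug]


# cast() only tells the type checker which concrete subclass this
# registry slot is known to hold; it performs no runtime conversion.
check = cast(CheckMultipleDataDisks, get_check("multiple_data_disks"))
assert isinstance(check, CheckMultipleDataDisks)
```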
def _get_block_devices_for_drive(self, drive: UDisks2Drive) -> list[UDisks2Block]:
"""Get block devices for a drive."""
@ -361,7 +366,7 @@ class DataDisk(CoreSysAttributes):
try:
partition_block = await UDisks2Block.new(
partition, self.sys_dbus.bus, sync_properties=False
partition, self.sys_dbus.connected_bus, sync_properties=False
)
except DBusError as err:
raise HassOSDataDiskError(
@ -388,7 +393,7 @@ class DataDisk(CoreSysAttributes):
properties[DBUS_IFACE_BLOCK][DBUS_ATTR_ID_LABEL]
== FILESYSTEM_LABEL_DATA_DISK
):
check = self.check_multiple_data_disks
check: CheckBase = self.check_multiple_data_disks
elif (
properties[DBUS_IFACE_BLOCK][DBUS_ATTR_ID_LABEL]
== FILESYSTEM_LABEL_DISABLED_DATA_DISK
@ -411,7 +416,7 @@ class DataDisk(CoreSysAttributes):
and issue.context == self.check_multiple_data_disks.context
for issue in self.sys_resolution.issues
):
check = self.check_multiple_data_disks
check: CheckBase = self.check_multiple_data_disks
elif any(
issue.type == self.check_disabled_data_disk.issue
and issue.context == self.check_disabled_data_disk.context


@ -1,11 +1,11 @@
"""OS support on supervisor."""
from collections.abc import Awaitable
from dataclasses import dataclass
from datetime import datetime
import errno
import logging
from pathlib import Path, PurePath
from typing import cast
import aiohttp
from awesomeversion import AwesomeVersion, AwesomeVersionException
@ -61,8 +61,8 @@ class SlotStatus:
device=PurePath(data["device"]),
bundle_compatible=data.get("bundle.compatible"),
sha256=data.get("sha256"),
size=data.get("size"),
installed_count=data.get("installed.count"),
size=cast(int | None, data.get("size")),
installed_count=cast(int | None, data.get("installed.count")),
bundle_version=AwesomeVersion(data["bundle.version"])
if "bundle.version" in data
else None,
@ -70,51 +70,17 @@ class SlotStatus:
if "installed.timestamp" in data
else None,
status=data.get("status"),
activated_count=data.get("activated.count"),
activated_count=cast(int | None, data.get("activated.count")),
activated_timestamp=datetime.fromisoformat(data["activated.timestamp"])
if "activated.timestamp" in data
else None,
boot_status=data.get("boot-status"),
boot_status=RaucState(data["boot-status"])
if "boot-status" in data
else None,
bootname=data.get("bootname"),
parent=data.get("parent"),
)
def to_dict(self) -> SlotStatusDataType:
"""Get dictionary representation."""
out: SlotStatusDataType = {
"class": self.class_,
"type": self.type_,
"state": self.state,
"device": self.device.as_posix(),
}
if self.bundle_compatible is not None:
out["bundle.compatible"] = self.bundle_compatible
if self.sha256 is not None:
out["sha256"] = self.sha256
if self.size is not None:
out["size"] = self.size
if self.installed_count is not None:
out["installed.count"] = self.installed_count
if self.bundle_version is not None:
out["bundle.version"] = str(self.bundle_version)
if self.installed_timestamp is not None:
out["installed.timestamp"] = str(self.installed_timestamp)
if self.status is not None:
out["status"] = self.status
if self.activated_count is not None:
out["activated.count"] = self.activated_count
if self.activated_timestamp:
out["activated.timestamp"] = str(self.activated_timestamp)
if self.boot_status:
out["boot-status"] = self.boot_status
if self.bootname is not None:
out["bootname"] = self.bootname
if self.parent is not None:
out["parent"] = self.parent
return out
class OSManager(CoreSysAttributes):
"""OS interface inside supervisor."""
@ -148,7 +114,11 @@ class OSManager(CoreSysAttributes):
def need_update(self) -> bool:
"""Return true if a HassOS update is available."""
try:
return self.version < self.latest_version
return (
self.version is not None
and self.latest_version is not None
and self.version < self.latest_version
)
except (AwesomeVersionException, TypeError):
return False
@ -176,6 +146,9 @@ class OSManager(CoreSysAttributes):
def get_slot_name(self, boot_name: str) -> str:
"""Get slot name from boot name."""
if not self._slots:
raise HassOSSlotNotFound()
for name, status in self._slots.items():
if status.bootname == boot_name:
return name
@ -288,11 +261,8 @@ class OSManager(CoreSysAttributes):
conditions=[JobCondition.HAOS],
on_condition=HassOSJobError,
)
async def config_sync(self) -> Awaitable[None]:
"""Trigger a host config reload from usb.
Return a coroutine.
"""
async def config_sync(self) -> None:
"""Trigger a host config reload from usb."""
_LOGGER.info(
"Synchronizing configuration from USB with Home Assistant Operating System."
)
@ -314,6 +284,10 @@ class OSManager(CoreSysAttributes):
version = version or self.latest_version
# Check installed version
if not version:
raise HassOSUpdateError(
"No version information available, cannot update", _LOGGER.error
)
if version == self.version:
raise HassOSUpdateError(
f"Version {version!s} is already installed", _LOGGER.warning


@ -22,6 +22,7 @@ from ..exceptions import (
AudioUpdateError,
ConfigurationFileError,
DockerError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -127,7 +128,7 @@ class PluginAudio(PluginBase):
"""Update Audio plugin."""
try:
await super().update(version)
except DockerError as err:
except (DockerError, PluginError) as err:
raise AudioUpdateError("Audio update failed", _LOGGER.error) from err
async def restart(self) -> None:


@ -63,7 +63,11 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
def need_update(self) -> bool:
"""Return True if an update is available."""
try:
return self.version < self.latest_version
return (
self.version is not None
and self.latest_version is not None
and self.version < self.latest_version
)
except (AwesomeVersionException, TypeError):
return False
@ -153,6 +157,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
async def start(self) -> None:
"""Start system plugin."""
@abstractmethod
async def stop(self) -> None:
"""Stop system plugin."""
async def load(self) -> None:
"""Load system plugin."""
self.start_watchdog()
@ -160,14 +168,14 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
# Check plugin state
try:
# Evaluate Version if we lost this information
if not self.version:
self.version = await self.instance.get_latest_version()
if self.version:
version = self.version
else:
self.version = version = await self.instance.get_latest_version()
await self.instance.attach(
version=self.version, skip_state_event_if_down=True
)
await self.instance.attach(version=version, skip_state_event_if_down=True)
await self.instance.check_image(self.version, self.default_image)
await self.instance.check_image(version, self.default_image)
except DockerError:
_LOGGER.info(
"No %s plugin Docker image %s found.", self.slug, self.instance.image
@ -177,7 +185,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
with suppress(PluginError):
await self.install()
else:
self.version = self.instance.version
self.version = self.instance.version or version
self.image = self.default_image
await self.save_data()
@ -194,11 +202,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
if not self.latest_version:
await self.sys_updater.reload()
if self.latest_version:
if to_version := self.latest_version:
with suppress(DockerError):
await self.instance.install(
self.latest_version, image=self.default_image
)
await self.instance.install(to_version, image=self.default_image)
self.version = self.instance.version or to_version
break
_LOGGER.warning(
"Error on installing %s plugin, retrying in 30sec", self.slug
@ -206,23 +213,28 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
await asyncio.sleep(30)
_LOGGER.info("%s plugin now installed", self.slug)
self.version = self.instance.version
self.image = self.default_image
await self.save_data()
async def update(self, version: str | None = None) -> None:
"""Update system plugin."""
version = version or self.latest_version
to_version = AwesomeVersion(version) if version else self.latest_version
if not to_version:
raise PluginError(
f"Cannot determine latest version of plugin {self.slug} for update",
_LOGGER.error,
)
old_image = self.image
if version == self.version:
if to_version == self.version:
_LOGGER.warning(
"Version %s is already installed for %s", version, self.slug
"Version %s is already installed for %s", to_version, self.slug
)
return
await self.instance.update(version, image=self.default_image)
self.version = self.instance.version
await self.instance.update(to_version, image=self.default_image)
self.version = self.instance.version or to_version
self.image = self.default_image
await self.save_data()


@ -14,7 +14,7 @@ from ..coresys import CoreSys
from ..docker.cli import DockerCli
from ..docker.const import ContainerState
from ..docker.stats import DockerStats
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception
@ -53,7 +53,7 @@ class PluginCli(PluginBase):
return self.sys_updater.version_cli
@property
def supervisor_token(self) -> str:
def supervisor_token(self) -> str | None:
"""Return an access token for the Supervisor API."""
return self._data.get(ATTR_ACCESS_TOKEN)
@ -66,7 +66,7 @@ class PluginCli(PluginBase):
"""Update local HA cli."""
try:
await super().update(version)
except DockerError as err:
except (DockerError, PluginError) as err:
raise CliUpdateError("CLI update failed", _LOGGER.error) from err
async def start(self) -> None:


@ -15,7 +15,8 @@ from awesomeversion import AwesomeVersion
import jinja2
import voluptuous as vol
from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel
from ..bus import EventListener
from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
from ..coresys import CoreSys
from ..dbus.const import MulticastProtocolEnabled
from ..docker.const import ContainerState
@ -28,6 +29,7 @@ from ..exceptions import (
CoreDNSJobError,
CoreDNSUpdateError,
DockerError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -71,11 +73,17 @@ class PluginDns(PluginBase):
self.slug = "dns"
self.coresys: CoreSys = coresys
self.instance: DockerDNS = DockerDNS(coresys)
self.resolv_template: jinja2.Template | None = None
self.hosts_template: jinja2.Template | None = None
self._resolv_template: jinja2.Template | None = None
self._hosts_template: jinja2.Template | None = None
self._hosts: list[HostEntry] = []
self._loop: bool = False
self._cached_locals: list[str] | None = None
# Debouncing system for rapid local changes
self._locals_changed_handle: asyncio.TimerHandle | None = None
self._restart_after_locals_change_handle: asyncio.Task | None = None
self._connectivity_check_listener: EventListener | None = None
@property
def hosts(self) -> Path:
@ -90,6 +98,12 @@ class PluginDns(PluginBase):
@property
def locals(self) -> list[str]:
"""Return list of local system DNS servers."""
if self._cached_locals is None:
self._cached_locals = self._compute_locals()
return self._cached_locals
def _compute_locals(self) -> list[str]:
"""Compute list of local system DNS servers."""
servers: list[str] = []
for server in [
f"dns://{server!s}" for server in self.sys_host.network.dns_servers
@ -99,6 +113,52 @@ class PluginDns(PluginBase):
return servers
async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
"""Handle DNS container state change to running and trigger connectivity check."""
if event.name == self.instance.name and event.state == ContainerState.RUNNING:
# Wait before CoreDNS actually becomes available
await asyncio.sleep(5)
_LOGGER.debug("CoreDNS started, checking connectivity")
await self.sys_supervisor.check_connectivity()
async def _restart_dns_after_locals_change(self) -> None:
"""Restart DNS after a debounced delay for local changes."""
old_locals = self._cached_locals
new_locals = self._compute_locals()
if old_locals == new_locals:
return
_LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
self._cached_locals = new_locals
if not await self.instance.is_running():
return
await self.restart()
self._restart_after_locals_change_handle = None
def _trigger_restart_dns_after_locals_change(self) -> None:
"""Trigger a restart of DNS after local changes."""
# Cancel existing restart task if any
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = self.sys_create_task(
self._restart_dns_after_locals_change()
)
self._locals_changed_handle = None
def notify_locals_changed(self) -> None:
"""Schedule a debounced DNS restart for local changes."""
# Cancel existing timer if any
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
# Schedule new timer with 1 second delay
self._locals_changed_handle = self.sys_call_later(
1.0, self._trigger_restart_dns_after_locals_change
)
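The timer-plus-task machinery above debounces rapid `notify_locals_changed()` calls: each call cancels the pending timer and rearms it, so only a quiet period triggers a restart. A self-contained stdlib sketch of the same debounce shape (class and names hypothetical):

```python
import asyncio


class Debouncer:
    """Collapse rapid notifications into one action after a quiet period."""

    def __init__(self, delay: float, action) -> None:
        self._delay = delay
        self._action = action
        self._handle: asyncio.TimerHandle | None = None

    def notify(self) -> None:
        # Cancel the pending timer and rearm it; only the last call fires.
        if self._handle:
            self._handle.cancel()
        loop = asyncio.get_running_loop()
        self._handle = loop.call_later(
            self._delay, lambda: asyncio.ensure_future(self._action())
        )


async def main() -> int:
    restarts = 0

    async def restart() -> None:
        nonlocal restarts
        restarts += 1

    debouncer = Debouncer(0.05, restart)
    for _ in range(5):  # five rapid notifications...
        debouncer.notify()
    await asyncio.sleep(0.2)  # ...yield exactly one restart after the delay
    return restarts


print(asyncio.run(main()))  # → 1
```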
@property
def servers(self) -> list[str]:
"""Return list of DNS servers."""
@ -147,11 +207,25 @@ class PluginDns(PluginBase):
"""Set fallback DNS enabled."""
self._data[ATTR_FALLBACK] = value
@property
def hosts_template(self) -> jinja2.Template:
"""Get hosts jinja template."""
if not self._hosts_template:
raise RuntimeError("Hosts template not set!")
return self._hosts_template
@property
def resolv_template(self) -> jinja2.Template:
"""Get resolv jinja template."""
if not self._resolv_template:
raise RuntimeError("Resolv template not set!")
return self._resolv_template
async def load(self) -> None:
"""Load DNS setup."""
# Initialize CoreDNS Template
try:
self.resolv_template = jinja2.Template(
self._resolv_template = jinja2.Template(
await self.sys_run_in_executor(RESOLV_TMPL.read_text, encoding="utf-8")
)
except OSError as err:
@ -162,7 +236,7 @@ class PluginDns(PluginBase):
_LOGGER.error("Can't read resolve.tmpl: %s", err)
try:
self.hosts_template = jinja2.Template(
self._hosts_template = jinja2.Template(
await self.sys_run_in_executor(HOSTS_TMPL.read_text, encoding="utf-8")
)
except OSError as err:
@ -173,10 +247,19 @@ class PluginDns(PluginBase):
_LOGGER.error("Can't read hosts.tmpl: %s", err)
await self._init_hosts()
# Register Docker event listener for connectivity checks
if not self._connectivity_check_listener:
self._connectivity_check_listener = self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
)
await super().load()
# Update supervisor
await self._write_resolv(HOST_RESOLV)
# Resolv template should always be set but just in case don't fail load
if self._resolv_template:
await self._write_resolv(HOST_RESOLV)
# Reinitializing aiohttp.ClientSession after DNS setup makes sure that
# aiodns is using the right DNS servers (see #5857).
@ -201,7 +284,7 @@ class PluginDns(PluginBase):
"""Update CoreDNS plugin."""
try:
await super().update(version)
except DockerError as err:
except (DockerError, PluginError) as err:
raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err
async def restart(self) -> None:
@ -226,6 +309,16 @@ class PluginDns(PluginBase):
async def stop(self) -> None:
"""Stop CoreDNS."""
# Cancel any pending locals change timer
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
self._locals_changed_handle = None
# Wait for any pending restart before stopping
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = None
_LOGGER.info("Stopping CoreDNS plugin")
try:
await self.instance.stop()
@ -428,12 +521,6 @@ class PluginDns(PluginBase):
async def _write_resolv(self, resolv_conf: Path) -> None:
"""Update/Write resolv.conf file."""
if not self.resolv_template:
_LOGGER.warning(
"Resolv template is missing, cannot write/update %s", resolv_conf
)
return
nameservers = [str(self.sys_docker.network.dns), "127.0.0.11"]
# Read resolv config


@ -16,6 +16,7 @@ from ..exceptions import (
MulticastError,
MulticastJobError,
MulticastUpdateError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -63,7 +64,7 @@ class PluginMulticast(PluginBase):
"""Update Multicast plugin."""
try:
await super().update(version)
except DockerError as err:
except (DockerError, PluginError) as err:
raise MulticastUpdateError(
"Multicast update failed", _LOGGER.error
) from err


@ -19,6 +19,7 @@ from ..exceptions import (
ObserverError,
ObserverJobError,
ObserverUpdateError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -58,7 +59,7 @@ class PluginObserver(PluginBase):
return self.sys_updater.version_observer
@property
def supervisor_token(self) -> str:
def supervisor_token(self) -> str | None:
"""Return an access token for the Observer API."""
return self._data.get(ATTR_ACCESS_TOKEN)
@ -71,7 +72,7 @@ class PluginObserver(PluginBase):
"""Update local HA observer."""
try:
await super().update(version)
except DockerError as err:
except (DockerError, PluginError) as err:
raise ObserverUpdateError(
"HA observer update failed", _LOGGER.error
) from err
@ -90,6 +91,10 @@ class PluginObserver(PluginBase):
_LOGGER.error("Can't start observer plugin")
raise ObserverError() from err
async def stop(self) -> None:
"""Raise. Supervisor should not stop observer."""
raise RuntimeError("Stopping observer without a restart is not supported!")
async def stats(self) -> DockerStats:
"""Return stats of observer."""
try:


@ -67,10 +67,11 @@ class CheckAddonPwned(CheckBase):
@Job(name="check_addon_pwned_approve", conditions=[JobCondition.INTERNET_SYSTEM])
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
addon = self.sys_addons.get(reference)
if not reference:
return False
# Uninstalled
if not addon or not addon.is_installed:
if not (addon := self.sys_addons.get_local_only(reference)):
return False
# Not in use anymore


@ -29,9 +29,11 @@ class CheckDetachedAddonMissing(CheckBase):
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
return (
addon := self.sys_addons.get(reference, local_only=True)
) and addon.is_detached
if not reference:
return False
addon = self.sys_addons.get_local_only(reference)
return addon is not None and addon.is_detached
@property
def issue(self) -> IssueType:


@ -27,9 +27,11 @@ class CheckDetachedAddonRemoved(CheckBase):
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
return (
addon := self.sys_addons.get(reference, local_only=True)
) and addon.is_detached
if not reference:
return False
addon = self.sys_addons.get_local_only(reference)
return addon is not None and addon.is_detached
@property
def issue(self) -> IssueType:


@ -35,6 +35,9 @@ class CheckDisabledDataDisk(CheckBase):
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
if not reference:
return False
resolved = await self.sys_dbus.udisks2.resolve_device(
DeviceSpecification(path=Path(reference))
)
@ -43,7 +46,7 @@ class CheckDisabledDataDisk(CheckBase):
def _is_disabled_data_disk(self, block_device: UDisks2Block) -> bool:
"""Return true if filesystem block device has name indicating it was disabled by OS."""
return (
block_device.filesystem
block_device.filesystem is not None
and block_device.id_label == FILESYSTEM_LABEL_DISABLED_DATA_DISK
)


@ -2,6 +2,7 @@
import asyncio
from datetime import timedelta
from typing import Literal
from aiodns import DNSResolver
from aiodns.error import DNSError
@ -15,6 +16,15 @@ from ..const import DNS_CHECK_HOST, ContextType, IssueType
from .base import CheckBase
async def check_server(
loop: asyncio.AbstractEventLoop, server: str, qtype: Literal["A"] | Literal["AAAA"]
) -> None:
"""Check a DNS server and report issues."""
ip_addr = server[6:] if server.startswith("dns://") else server
async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
await resolver.query(DNS_CHECK_HOST, qtype)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckDNSServer(coresys)
@ -33,16 +43,18 @@ class CheckDNSServer(CheckBase):
"""Run check if not affected by issue."""
dns_servers = self.dns_servers
results = await asyncio.gather(
*[self._check_server(server) for server in dns_servers],
*[check_server(self.sys_loop, server, "A") for server in dns_servers],
return_exceptions=True,
)
for i in (r for r in range(len(results)) if isinstance(results[r], DNSError)):
self.sys_resolution.create_issue(
IssueType.DNS_SERVER_FAILED,
ContextType.DNS_SERVER,
reference=dns_servers[i],
)
await async_capture_exception(results[i])
# pylint: disable-next=consider-using-enumerate
for i in range(len(results)):
if isinstance(result := results[i], DNSError):
self.sys_resolution.create_issue(
IssueType.DNS_SERVER_FAILED,
ContextType.DNS_SERVER,
reference=dns_servers[i],
)
await async_capture_exception(result)
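The index-based loop above keeps each result aligned with `dns_servers[i]`, so a failure can be reported against the server that produced it while `return_exceptions=True` prevents one failure from aborting the whole gather. A runnable sketch of the pattern (server addresses and the check are made up):

```python
import asyncio


async def check(server: str) -> str:
    """Toy stand-in for a per-server DNS query."""
    if server.endswith(".13"):
        raise OSError(f"{server} unreachable")
    return "ok"


async def main() -> list[str]:
    servers = ["10.0.0.12", "10.0.0.13"]
    # return_exceptions=True keeps one result slot per server instead of
    # raising out of gather on the first failure.
    results = await asyncio.gather(
        *[check(s) for s in servers], return_exceptions=True
    )
    failed = []
    for i in range(len(results)):  # index loop keeps the servers[i] pairing
        if isinstance(results[i], OSError):
            failed.append(servers[i])
    return failed


print(asyncio.run(main()))  # → ['10.0.0.13']
```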
@Job(name="check_dns_server_approve", conditions=[JobCondition.INTERNET_SYSTEM])
async def approve_check(self, reference: str | None = None) -> bool:
@ -51,18 +63,12 @@ class CheckDNSServer(CheckBase):
return False
try:
await self._check_server(reference)
await check_server(self.sys_loop, reference, "A")
except DNSError:
return True
return False
async def _check_server(self, server: str):
"""Check a DNS server and report issues."""
ip_addr = server[6:] if server.startswith("dns://") else server
resolver = DNSResolver(nameservers=[ip_addr])
await resolver.query(DNS_CHECK_HOST, "A")
@property
def dns_servers(self) -> list[str]:
"""All user and system provided dns servers."""


@ -3,7 +3,6 @@
import asyncio
from datetime import timedelta
from aiodns import DNSResolver
from aiodns.error import DNSError
from ...const import CoreState
@ -11,8 +10,9 @@ from ...coresys import CoreSys
from ...jobs.const import JobCondition, JobExecutionLimit
from ...jobs.decorator import Job
from ...utils.sentry import async_capture_exception
from ..const import DNS_CHECK_HOST, DNS_ERROR_NO_DATA, ContextType, IssueType
from ..const import DNS_ERROR_NO_DATA, ContextType, IssueType
from .base import CheckBase
from .dns_server import check_server
def setup(coresys: CoreSys) -> CheckBase:
@ -33,21 +33,21 @@ class CheckDNSServerIPv6(CheckBase):
"""Run check if not affected by issue."""
dns_servers = self.dns_servers
results = await asyncio.gather(
*[self._check_server(server) for server in dns_servers],
*[check_server(self.sys_loop, server, "AAAA") for server in dns_servers],
return_exceptions=True,
)
for i in (
r
for r in range(len(results))
if isinstance(results[r], DNSError)
and results[r].args[0] != DNS_ERROR_NO_DATA
):
self.sys_resolution.create_issue(
IssueType.DNS_SERVER_IPV6_ERROR,
ContextType.DNS_SERVER,
reference=dns_servers[i],
)
await async_capture_exception(results[i])
# pylint: disable-next=consider-using-enumerate
for i in range(len(results)):
if (
isinstance(result := results[i], DNSError)
and result.args[0] != DNS_ERROR_NO_DATA
):
self.sys_resolution.create_issue(
IssueType.DNS_SERVER_IPV6_ERROR,
ContextType.DNS_SERVER,
reference=dns_servers[i],
)
await async_capture_exception(result)
@Job(
name="check_dns_server_ipv6_approve", conditions=[JobCondition.INTERNET_SYSTEM]
@ -58,19 +58,13 @@ class CheckDNSServerIPv6(CheckBase):
return False
try:
await self._check_server(reference)
await check_server(self.sys_loop, reference, "AAAA")
except DNSError as dns_error:
if dns_error.args[0] != DNS_ERROR_NO_DATA:
return True
return False
async def _check_server(self, server: str):
"""Check a DNS server and report issues."""
ip_addr = server[6:] if server.startswith("dns://") else server
resolver = DNSResolver(nameservers=[ip_addr])
await resolver.query(DNS_CHECK_HOST, "AAAA")
@property
def dns_servers(self) -> list[str]:
"""All user and system provided dns servers."""


@ -0,0 +1,108 @@
"""Helpers to check for duplicate OS installations."""
import logging
from ...const import CoreState
from ...coresys import CoreSys
from ...dbus.udisks2.data import DeviceSpecification
from ..const import ContextType, IssueType, UnhealthyReason
from .base import CheckBase
_LOGGER: logging.Logger = logging.getLogger(__name__)
# Partition labels to check for duplicates (GPT-based installations)
HAOS_PARTITIONS = [
"hassos-boot",
"hassos-kernel0",
"hassos-kernel1",
"hassos-system0",
"hassos-system1",
]
# Partition UUIDs to check for duplicates (MBR-based installations)
HAOS_PARTITION_UUIDS = [
"48617373-01", # hassos-boot
"48617373-05", # hassos-kernel0
"48617373-06", # hassos-system0
"48617373-07", # hassos-kernel1
"48617373-08", # hassos-system1
]
def _get_device_specifications():
"""Generate DeviceSpecification objects for both GPT and MBR partitions."""
# GPT-based installations (partition labels)
for partition_label in HAOS_PARTITIONS:
yield (
DeviceSpecification(partlabel=partition_label),
"partition",
partition_label,
)
# MBR-based installations (partition UUIDs)
for partition_uuid in HAOS_PARTITION_UUIDS:
yield (
DeviceSpecification(partuuid=partition_uuid),
"partition UUID",
partition_uuid,
)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckDuplicateOSInstallation(coresys)
class CheckDuplicateOSInstallation(CheckBase):
"""CheckDuplicateOSInstallation class for check."""
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_os.available:
_LOGGER.debug(
"Skipping duplicate OS installation check, OS is not available"
)
return
for device_spec, spec_type, identifier in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
_LOGGER.warning(
"Found duplicate OS installation: %s %s exists on %d devices (%s)",
identifier,
spec_type,
len(resolved),
", ".join(str(device.device) for device in resolved),
)
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.DUPLICATE_OS_INSTALLATION
)
self.sys_resolution.create_issue(
IssueType.DUPLICATE_OS_INSTALLATION,
ContextType.SYSTEM,
)
return
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
# Check all partitions for duplicates since issue is created without reference
for device_spec, _, _ in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
return True
return False
@property
def issue(self) -> IssueType:
"""Return a IssueType enum."""
return IssueType.DUPLICATE_OS_INSTALLATION
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.SYSTEM
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this check can run."""
return [CoreState.SETUP]


@ -21,6 +21,9 @@ class CheckMultipleDataDisks(CheckBase):
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_os.available:
return
for block_device in self.sys_dbus.udisks2.block_devices:
if self._block_device_has_name_issue(block_device):
self.sys_resolution.create_issue(
@ -35,6 +38,9 @@ class CheckMultipleDataDisks(CheckBase):
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
if not reference:
return False
resolved = await self.sys_dbus.udisks2.resolve_device(
DeviceSpecification(path=Path(reference))
)
@ -43,7 +49,7 @@ class CheckMultipleDataDisks(CheckBase):
def _block_device_has_name_issue(self, block_device: UDisks2Block) -> bool:
"""Return true if filesystem block device incorrectly has data disk name."""
return (
block_device.filesystem
block_device.filesystem is not None
and block_device.id_label == FILESYSTEM_LABEL_DATA_DISK
and block_device.device != self.sys_dbus.agent.datadisk.current_device
)


@ -19,12 +19,12 @@ class CheckNetworkInterfaceIPV4(CheckBase):
async def run_check(self) -> None:
"""Run check if not affected by issue."""
for interface in self.sys_dbus.network.interfaces:
if CheckNetworkInterfaceIPV4.check_interface(interface):
for inet in self.sys_dbus.network.interfaces:
if CheckNetworkInterfaceIPV4.check_interface(inet):
self.sys_resolution.create_issue(
IssueType.IPV4_CONNECTION_PROBLEM,
ContextType.SYSTEM,
interface.name,
inet.interface_name,
)
async def approve_check(self, reference: str | None = None) -> bool:

View File

@@ -64,10 +64,11 @@ class UnhealthyReason(StrEnum):
"""Reasons for unsupported status."""
DOCKER = "docker"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
OSERROR_BAD_MESSAGE = "oserror_bad_message"
PRIVILEGED = "privileged"
SUPERVISOR = "supervisor"
SETUP = "setup"
SUPERVISOR = "supervisor"
UNTRUSTED = "untrusted"
@@ -83,6 +84,7 @@ class IssueType(StrEnum):
DEVICE_ACCESS_MISSING = "device_access_missing"
DISABLED_DATA_DISK = "disabled_data_disk"
DNS_LOOP = "dns_loop"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
DNS_SERVER_FAILED = "dns_server_failed"
DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error"
DOCKER_CONFIG = "docker_config"

View File

@@ -1,6 +1,6 @@
"""Data objects."""
from uuid import UUID, uuid4
from uuid import uuid4
import attr
@@ -20,7 +20,7 @@ class Issue:
type: IssueType = attr.ib()
context: ContextType = attr.ib()
reference: str | None = attr.ib(default=None)
uuid: UUID = attr.ib(factory=lambda: uuid4().hex, eq=False, init=False)
uuid: str = attr.ib(factory=lambda: uuid4().hex, eq=False, init=False)
@attr.s(frozen=True, slots=True)
@@ -30,7 +30,7 @@ class Suggestion:
type: SuggestionType = attr.ib()
context: ContextType = attr.ib()
reference: str | None = attr.ib(default=None)
uuid: UUID = attr.ib(factory=lambda: uuid4().hex, eq=False, init=False)
uuid: str = attr.ib(factory=lambda: uuid4().hex, eq=False, init=False)
@attr.s(frozen=True, slots=True)
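The `uuid` annotation on `Issue` and `Suggestion` is corrected from `UUID` to `str` because `uuid4().hex` already produces the 32-character hex string, not a `UUID` instance, so the old annotation never matched the runtime value:

```python
from uuid import UUID, uuid4

token = uuid4().hex

print(type(token).__name__)  # str — .hex is the plain hex string
print(len(token))            # 32 characters, no dashes
assert UUID(hex=token).hex == token  # a UUID object is still recoverable
```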

View File

@@ -33,7 +33,7 @@ class EvaluateAppArmor(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self) -> None:
async def evaluate(self) -> bool:
"""Run evaluation."""
try:
apparmor = await self.sys_run_in_executor(

View File

@@ -5,6 +5,8 @@ import logging
from docker.errors import DockerException
from requests import RequestException
from supervisor.docker.const import ADDON_BUILDER_IMAGE
from ...const import CoreState
from ...coresys import CoreSys
from ..const import (
@@ -38,7 +40,7 @@ class EvaluateContainer(EvaluateBase):
"""Initialize the evaluation class."""
super().__init__(coresys)
self.coresys = coresys
self._images = set()
self._images: set[str] = set()
@property
def reason(self) -> UnsupportedReason:
@@ -60,9 +62,10 @@ class EvaluateContainer(EvaluateBase):
"""Return a set of all known images."""
return {
self.sys_homeassistant.image,
self.sys_supervisor.image,
*(plugin.image for plugin in self.sys_plugins.all_plugins),
*(addon.image for addon in self.sys_addons.installed),
self.sys_supervisor.image or self.sys_supervisor.default_image,
*(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image),
*(addon.image for addon in self.sys_addons.installed if addon.image),
ADDON_BUILDER_IMAGE,
}
async def evaluate(self) -> bool:
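The rewritten set comprehension guards every source of image names: a `None` supervisor image falls back to a default, and plugins or add-ons without an image are filtered out instead of injecting `None` into the set. The same pattern in isolation (all names below are illustrative stand-ins, not the real Supervisor attributes):

```python
supervisor_image = None  # e.g. not yet known on a fresh installation
default_image = "ghcr.io/home-assistant/amd64-hassio-supervisor"
plugin_images = ["hassio-dns", None, "hassio-audio"]  # one plugin has no image

images: set[str] = {
    supervisor_image or default_image,       # fall back when unset
    *(img for img in plugin_images if img),  # drop missing images
}

print(sorted(images))
# ['ghcr.io/home-assistant/amd64-hassio-supervisor', 'hassio-audio', 'hassio-dns']
```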

View File

@@ -29,6 +29,6 @@ class EvaluateContentTrust(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE, CoreState.SETUP, CoreState.RUNNING]
async def evaluate(self) -> None:
async def evaluate(self) -> bool:
"""Run evaluation."""
return not self.sys_security.content_trust

View File

@@ -29,6 +29,6 @@ class EvaluateDbus(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self) -> None:
async def evaluate(self) -> bool:
"""Run evaluation."""
return not SOCKET_DBUS.exists()
return not await self.sys_run_in_executor(SOCKET_DBUS.exists)
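`Path.exists()` performs a blocking `stat()` call, so the D-Bus check is moved off the event loop. Stripped of the Supervisor's `sys_run_in_executor` helper, the same pattern looks like this with plain `asyncio` (the socket path matches the usual D-Bus location but is an assumption here):

```python
import asyncio
from pathlib import Path

SOCKET_DBUS = Path("/run/dbus/system_bus_socket")  # assumed location


async def evaluate() -> bool:
    loop = asyncio.get_running_loop()
    # run the blocking stat() in the default thread-pool executor
    exists = await loop.run_in_executor(None, SOCKET_DBUS.exists)
    return not exists  # "unsupported" when the socket is absent


result = asyncio.run(evaluate())
print(result)
```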

View File

@@ -29,7 +29,7 @@ class EvaluateDNSServer(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.RUNNING]
async def evaluate(self) -> None:
async def evaluate(self) -> bool:
"""Run evaluation."""
return (
not self.sys_plugins.dns.fallback

View File

@@ -36,7 +36,7 @@ class EvaluateDockerConfiguration(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self):
async def evaluate(self) -> bool:
"""Run evaluation."""
storage_driver = self.sys_docker.info.storage
logging_driver = self.sys_docker.info.logging

View File

@@ -29,6 +29,6 @@ class EvaluateDockerVersion(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self):
async def evaluate(self) -> bool:
"""Run evaluation."""
return not self.sys_docker.info.supported_version

View File

@@ -29,6 +29,6 @@ class EvaluateJobConditions(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE, CoreState.SETUP, CoreState.RUNNING]
async def evaluate(self) -> None:
async def evaluate(self) -> bool:
"""Run evaluation."""
return len(self.sys_jobs.ignore_conditions) > 0

View File

@@ -32,10 +32,10 @@ class EvaluateLxc(EvaluateBase):
"""Return a list of valid states when this evaluation can run."""
return [CoreState.INITIALIZE]
async def evaluate(self):
async def evaluate(self) -> bool:
"""Run evaluation."""
def check_lxc():
def check_lxc() -> bool:
with suppress(OSError):
if "container=lxc" in Path("/proc/1/environ").read_text(
encoding="utf-8"
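The hunk is cut off above, but the pattern it touches is simple: `check_lxc` inspects PID 1's environment for the `container=lxc` marker, suppressing `OSError` when `/proc/1/environ` is unreadable. A self-contained sketch of that helper (a plausible reconstruction, not the exact Supervisor code):

```python
from contextlib import suppress
from pathlib import Path


def check_lxc() -> bool:
    """Return True when PID 1's environment marks an LXC container."""
    with suppress(OSError):  # /proc may be unreadable or absent
        # /proc/1/environ holds NUL-separated KEY=VALUE pairs
        if "container=lxc" in Path("/proc/1/environ").read_text(
            encoding="utf-8"
        ):
            return True
    return False


print(check_lxc())
```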

Some files were not shown because too many files have changed in this diff.