Compare commits

...

259 Commits

Author SHA1 Message Date
J. Nick Koston
af3256e41e Significantly speed up creating backups with isal via zlib-fast
isal is a drop-in replacement for zlib with the
caveat that the compression level mappings are different.
zlib-fast is a tiny piece of middleware that converts
the standard zlib compression levels to isal compression
levels to allow for drop-in replacement.

https://github.com/bdraco/zlib-fast/releases/tag/v0.1.0
https://github.com/pycompression/python-isal

Compression for backups is ~5x faster than the baseline

https://github.com/powturbo/TurboBench/issues/43
2024-01-27 13:06:41 -10:00
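A minimal sketch of the level-translation idea behind zlib-fast, assuming python-isal's `isal_zlib` module (a zlib-compatible API that only accepts compression levels 0-3); the mapping table here is illustrative, not necessarily the one zlib-fast ships:

```python
from isal import isal_zlib

# Illustrative mapping of standard zlib levels (0-9) onto isal levels (0-3)
ZLIB_TO_ISAL = {0: 0, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2, 7: 3, 8: 3, 9: 3}


def compress(data: bytes, level: int = 6) -> bytes:
    """Compress with isal while accepting standard zlib compression levels."""
    return isal_zlib.compress(data, ZLIB_TO_ISAL[level])
```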
J. Nick Koston
a163121ad4 Fix dirhash failing to import pkg_resources
dirhash needs pkg_resources, which is provided by setuptools.

https://github.com/home-assistant/supervisor/actions/runs/7513346221/job/20454994962
2024-01-14 00:02:12 -10:00
J. Nick Koston
eb85be2770 Improve json performance by porting core orjson utils (#4816)
* Improve json performance by porting core orjson utils

* port relevant tests

* pylint

* add test for read_json_file

* remove workaround for core issue we do not have here

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2024-01-13 19:19:01 +01:00
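The ported helper plausibly looks like this sketch, assuming `orjson`; the `JsonFileError` wrapper name is illustrative:

```python
from pathlib import Path
from typing import Any

import orjson


class JsonFileError(Exception):
    """Raised when a JSON file can't be read or parsed (illustrative)."""


def read_json_file(jsonfile: Path) -> Any:
    """Read and parse a JSON file with orjson for speed."""
    try:
        return orjson.loads(jsonfile.read_bytes())
    except (OSError, ValueError) as err:
        raise JsonFileError(f"Can't read json from {jsonfile}: {err}") from err
```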
Mike Degatano
2da27937a5 Update python to 3.12 (#4815)
* Update python to 3.12

* Fix tests and deprecations

* Fix other references to 3.11

* build.json doesn't exist
2024-01-13 16:35:07 +01:00
dependabot[bot]
2a29b801a4 Bump jinja2 from 3.1.2 to 3.1.3 (#4810)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:52:44 +01:00
dependabot[bot]
57e65714b0 Bump actions/download-artifact from 4.1.0 to 4.1.1 (#4809)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:35:09 +01:00
dependabot[bot]
0ae40cb51c Bump gitpython from 3.1.40 to 3.1.41 (#4808)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:31:31 +01:00
dependabot[bot]
ddd195dfc6 Bump sentry-sdk from 1.39.1 to 1.39.2 (#4811)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:29:34 +01:00
dependabot[bot]
54b9f23ec5 Bump actions/cache from 3.3.2 to 3.3.3 (#4813)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:25:51 +01:00
dependabot[bot]
242dd3e626 Bump home-assistant/wheels from 2023.10.5 to 2024.01.0 (#4804) 2024-01-08 08:16:11 +01:00
dependabot[bot]
1b8acb5b60 Bump home-assistant/builder from 2023.12.0 to 2024.01.0 (#4800)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 08:19:40 +01:00
dependabot[bot]
a7ab96ab12 Bump flake8 from 6.1.0 to 7.0.0 (#4799)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 08:18:05 +01:00
dependabot[bot]
06ab11cf87 Bump attrs from 23.1.0 to 23.2.0 (#4793)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-02 16:57:05 +01:00
dependabot[bot]
1410a1b06e Bump pytest from 7.4.3 to 7.4.4 (#4792) 2024-01-01 14:55:37 +01:00
Mike Degatano
5baf19f7a3 Migrate to pyproject.toml where possible (#4770)
* Migrate to pyproject.toml where possible

* Share requirements and fix version import

* Fix issues with timezone in tests
2023-12-29 11:46:01 +01:00
Mike Degatano
6c66a7ba17 Improve error handling in backup restore (#4791) 2023-12-29 11:45:50 +01:00
dependabot[bot]
37b6e09475 Bump coverage from 7.3.4 to 7.4.0 (#4790)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.3.4 to 7.4.0.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.3.4...7.4.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-28 09:22:22 +01:00
Jeff Oakley
e08c8ca26d Add support for setting target path in map config (#4694)
* Added support for setting addon target path in map config

* Updated addon target path mapping to use dataclass

* Added check before adding string folder maps

* Moved enum to addon/const, updated map_volumes logic, fixed test

* Removed log used for debugging

* Use more readable approach to determine addon_config_used

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use cleaner approach for checking volume config

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use dict syntax and ATTR_TYPE

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use coerce for validating mapping type

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Default read_only to true in schema

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use ATTR_TYPE and ATTR_READ_ONLY instead of static strings

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use constants instead of in-line strings

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Correct type for path

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Added read_only and path constants

* Fixed small syntax error and added includes for constants

* Simplify logic for handling string and dict entries in map config

* Use ATTR_PATH instead of inline string

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Add missing ATTR_PATH reference

* Moved FolderMapping dataclass to data.py

* Fix edge case where "data" map type is used but optional path is not set

* Move FolderMapping dataclass to configuration.py to prevent circular reference

---------

Co-authored-by: Jeff Oakley <jeff.oakley@LearningCircleSoftware.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2023-12-27 15:14:23 -05:00
dependabot[bot]
2c09e7929f Bump black from 23.12.0 to 23.12.1 (#4788) 2023-12-26 08:30:35 +01:00
Stefan Agner
3e760f0d85 Always pass explicit architecture of installed add-ons (#4786)
* Pass architecture of installed add-on on update

When using multi-architecture container images, the architecture of the
add-on is not passed to Docker in all cases. This causes the
architecture of the Supervisor container to be used, which potentially
is not supported by the add-on in question.

This commit passes the architecture of the add-on to Docker, so that
the correct image is pulled.

* Call update with architecture

* Also pass architecture on add-on restore

* Fix pytest
2023-12-21 16:52:25 -05:00
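With docker-py, an explicit `platform` argument on pull is what selects the correct image from a multi-arch manifest; a hedged sketch (repository, tag, and platform are placeholders):

```python
import docker

client = docker.from_env()

# Without platform=..., the daemon falls back to its own (Supervisor's)
# architecture, which the add-on may not support.
image = client.images.pull(
    "ghcr.io/example/addon-image",  # placeholder repository
    tag="1.0.0",
    platform="linux/arm64",  # the add-on's architecture, passed explicitly
)
```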
Mike Degatano
3cc6bd19ad Mark system as unhealthy on OSError Bad message errors (#4750)
* Bad message error marks system as unhealthy

* Finish adding test cases for changes

* Rename test file for uniqueness

* bad_message to oserror_bad_message

* Omit some checks and check for network mounts
2023-12-21 18:05:29 +01:00
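The detection presumably keys off `errno.EBADMSG`, which surfaces as an OSError "Bad message" and typically indicates filesystem corruption; a minimal sketch (the `mark_unhealthy` callback is hypothetical):

```python
import errno
from typing import Callable


def handle_os_error(err: OSError, mark_unhealthy: Callable[[str], None]) -> None:
    """Escalate OSError Bad message errors to an unhealthy system state (sketch)."""
    if err.errno == errno.EBADMSG:
        # EBADMSG usually points at corrupted storage, not a transient failure
        mark_unhealthy("oserror_bad_message")
    else:
        raise err
```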
Mike Degatano
b7ddfba71d Set max reanimation attempts on HA watchdog (#4784) 2023-12-21 16:44:39 +01:00
dependabot[bot]
32f21d208f Bump coverage from 7.3.3 to 7.3.4 (#4785)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.3.3 to 7.3.4.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.3.3...7.3.4)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 09:01:49 +01:00
Jan Čermák
ed7edd9fe0 Adjust "retry in ..." log messages to avoid confusion (#4783)
As shown in home-assistant/operating-system#3007, error messages printed
to logs when container installation fails can cause some confusion,
because they are sometimes printed to the log on the landing page.
Adjust all wordings of "retry in" to "retrying in" to make it obvious
this happens automatically.
2023-12-20 18:34:42 +01:00
Stefan Agner
fd3c995c7c Fix WiFi WEP configuration (#4781)
It seems that the values for auth-alg and key-mgmt got mixed up.
Trying to save a WEP configuration currently leads to the error:
23-12-19 10:56:37 ERROR (MainThread) [supervisor.host.network] Can't create config and activate wlan0: 802-11-wireless-security.key-mgmt: 'open' is not a valid value for the property
2023-12-19 13:53:51 +01:00
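For reference, NetworkManager expects static WEP with `key-mgmt` set to `none` and the authentication algorithm in `auth-alg`; a sketch of the corrected 802-11-wireless-security settings (key material elided):

```python
# Corrected 802-11-wireless-security settings for static WEP (sketch)
wireless_security = {
    "key-mgmt": "none",  # static WEP uses key-mgmt "none"
    "auth-alg": "open",  # "open" belongs in auth-alg, not key-mgmt
    "wep-key0": "...",   # key material elided
}
```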
dependabot[bot]
c0d1a2d53b Bump actions/download-artifact from 4.0.0 to 4.1.0 (#4780)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.0.0 to 4.1.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4.0.0...v4.1.0)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:59:43 +01:00
dependabot[bot]
76bc3015a7 Bump deepmerge from 1.1.0 to 1.1.1 (#4779)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:45:48 +01:00
dependabot[bot]
ad2896243b Bump sentry-sdk from 1.39.0 to 1.39.1 (#4774)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-18 15:45:05 +01:00
dependabot[bot]
d0dcded42d Bump actions/download-artifact from 3 to 4 (#4777)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Franck Nijhof <git@frenck.dev>
2023-12-15 08:27:10 +01:00
dependabot[bot]
a0dfa01287 Bump actions/upload-artifact from 3.1.3 to 4.0.0 (#4776)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-15 08:12:31 +01:00
dependabot[bot]
4ec5c90180 Bump coverage from 7.3.2 to 7.3.3 (#4775) 2023-12-15 07:25:10 +01:00
dependabot[bot]
a0c813bfc1 Bump securetar from 2023.3.0 to 2023.12.0 (#4771)
Bumps [securetar](https://github.com/pvizeli/securetar) from 2023.3.0 to 2023.12.0.
- [Release notes](https://github.com/pvizeli/securetar/releases)
- [Commits](https://github.com/pvizeli/securetar/compare/2023.3.0...2023.12.0)

---
updated-dependencies:
- dependency-name: securetar
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-14 10:05:08 -05:00
dependabot[bot]
5f7b3a7087 Bump sentry-sdk from 1.38.0 to 1.39.0 (#4766)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 08:17:19 +01:00
dependabot[bot]
6426f02a2c Bump black from 23.11.0 to 23.12.0 (#4767)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 08:17:01 +01:00
Stefan Agner
7fef92c480 Fix fallback to non-SSL whoami call (#4751)
* Fix fallback to non-SSL whoami call

In case of an exception, "data" is not set, leading to an error:
cannot access local variable 'data' where it is not associated with a value

Make sure to fallback to the non-SSL whoami call properly.

* Add pytests

* Ignore protected access in pytests

* Add test when system time is behind by more than 3 days

* Fix test_adjust_system_datetime_if_time_behind test and cleanup
2023-12-12 15:24:46 -05:00
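The underlying bug is a local assigned only inside `try` but read after the `except`; a minimal sketch of the fixed shape (`retrieve_whoami` stands in for the real request helper):

```python
async def retrieve_whoami(with_ssl: bool) -> dict:
    """Stand-in for the actual whoami request helper."""
    raise NotImplementedError


async def fetch_whoami() -> dict:
    try:
        data = await retrieve_whoami(with_ssl=True)
    except Exception:
        # The broken version fell through here without assigning `data`,
        # triggering "cannot access local variable 'data' ...".
        # The fix: actually retry without SSL and assign the result.
        data = await retrieve_whoami(with_ssl=False)
    return data
```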
Mike Degatano
c64744dedf Refactor addons init to addons manager (#4760)
Co-authored-by: Stefan Agner <stefan@agner.ch>
2023-12-12 09:36:05 +01:00
dependabot[bot]
72a2088931 Bump dbus-fast from 2.20.0 to 2.21.0 (#4761)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.20.0 to 2.21.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.20.0...v2.21.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:46:05 +01:00
dependabot[bot]
db54556b0f Bump docker from 6.1.3 to 7.0.0 (#4756)
Bumps [docker](https://github.com/docker/docker-py) from 6.1.3 to 7.0.0.
- [Release notes](https://github.com/docker/docker-py/releases)
- [Commits](https://github.com/docker/docker-py/compare/6.1.3...7.0.0)

---
updated-dependencies:
- dependency-name: docker
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:45:57 +01:00
dependabot[bot]
a2653d8462 Bump sigstore/cosign-installer from 3.2.0 to 3.3.0 (#4764)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:18:58 +01:00
dependabot[bot]
ef778238f6 Bump home-assistant/builder from 2023.09.0 to 2023.12.0 (#4763)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:18:19 +01:00
dependabot[bot]
4cc0ddc35d Bump pylint from 3.0.2 to 3.0.3 (#4762) 2023-12-12 07:31:55 +01:00
Stefan Agner
a0429179a0 Add Raspberry Pi 5 (#4757) 2023-12-11 11:14:04 +01:00
dependabot[bot]
5cfb45c668 Bump pre-commit from 3.5.0 to 3.6.0 (#4754) 2023-12-11 08:05:17 +01:00
dependabot[bot]
a53b7041f5 Bump typing-extensions from 4.8.0 to 4.9.0 (#4755) 2023-12-11 07:50:37 +01:00
dependabot[bot]
f534fae293 Bump actions/stale from 8.0.0 to 9.0.0 (#4752)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-08 08:50:38 +01:00
dependabot[bot]
f7cbd968d2 Bump getsentry/action-release from 1.4.1 to 1.6.0 (#4747)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-07 09:23:08 +01:00
dependabot[bot]
844d76290c Bump actions/setup-python from 4.8.0 to 5.0.0 (#4748) 2023-12-07 08:09:32 +01:00
Stefan Agner
8c8122eee0 Fix pre-commit GitHub Action cache (#4746)
Currently pre-commit caching does not seem to work properly: there is
no cache stored according to the GitHub Actions tab, and the Prepare
Python dependencies job shows the following warning:
Warning: Path Validation Error: Path(s) specified in the action for caching do(es) not exist, hence no cache is being saved.

This seems to be similar to what has been observed and solved in
Home Assistant Core with https://github.com/home-assistant/core/pull/46696.
Use PRE_COMMIT_CACHE instead of PRE_COMMIT_HOME as well.
2023-12-06 11:30:57 +01:00
Stefan Agner
d63f0d5e0b Address GitHub action deprecation warnings (#4745)
* Remove deprecated set-output from GitHub actions

* Replace get-changed-files GitHub action

The GitHub action jitterbit/get-changed-files@v1 seems abandoned.
Use masesgroup/retrieve-changed-files@v3.0.0, which can be used as
a drop-in replacement.
2023-12-06 10:47:08 +01:00
Stefan Agner
96f4ba5d25 Check/get ingress port on add-on load (#4744)
Instead of setting the ingress port on install, make sure to set
the port when the add-on gets loaded (on Supervisor startup and
before installation). This is necessary since the dynamic ingress
ports are not stored as part of the add-on data storage itself
but in the ingress data store. So on every Supervisor start the
port needs to be transferred to the add-on model.

Note that we still need to check the port on add-on update since
the add-on potentially added (dynamic) ingress on update. Same
applies to add-on restore (the restored version might use a dynamic
ingress port).
2023-12-06 10:46:47 +01:00
dependabot[bot]
72e64676da Bump actions/setup-python from 4.7.1 to 4.8.0 (#4743) 2023-12-06 07:23:17 +01:00
Stefan Agner
883e54f989 Make check_port an async function (#4677)
* Make check_port asyncio

This requires changing the ingress_port property to an async method.

* Avoid using wait_for

* Add missing async

* Really await

* Set dynamic ingress port on add-on installation/update

* Fix pytest issue

* Rename async_check_port back to check_port

* Raise RuntimeError in case port is not set

* Make sure port gets set on add-on restore

* Drop unnecessary async

* Simplify check_port by using asyncio.get_running_loop()
2023-12-05 15:49:35 -05:00
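A minimal sketch of an event-loop-friendly port check along these lines, using `asyncio.get_running_loop()` and pushing the blocking connect into the default executor:

```python
import asyncio
import socket


async def check_port(host: str, port: int) -> bool:
    """Return True if something accepts TCP connections on host:port (sketch)."""
    loop = asyncio.get_running_loop()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex blocks, so run it in the default executor
        result = await loop.run_in_executor(None, sock.connect_ex, (host, port))
    return result == 0
```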
dependabot[bot]
c2d4be3304 Bump dbus-fast from 2.15.0 to 2.20.0 (#4741)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.15.0 to 2.20.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.15.0...v2.20.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 14:41:18 +01:00
dependabot[bot]
de737ddb91 Bump colorlog from 6.7.0 to 6.8.0 (#4739)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 08:54:58 +01:00
Stefan Agner
11ec6dd9ac Wait until mount unit is deactivated on unmount (#4733)
* Wait until mount unit is deactivated on unmount

The current code does not wait until the (bind) mount unit has been
actually deactivated (state "inactive"). This is especially problematic
when restoring a backup, where we deactivate all bind mounts before
restoring the target folder. Before the tarball is actually restored,
we delete all contents of the target folder. This led to the situation
where the "rm -rf" command got executed before the bind mount actually
got unmounted.

The fix polls the state using an exponentially increasing
delay, waiting up to 30s for the bind mount to actually deactivate.

* Fix function name

* Fix missing await

* Address pytest errors

Change state of systemd unit according to use cases. Note that this
is currently rather fragile, and ideally we should have a smarter
mock service instead.

* Fix pylint

* Fix remaining

* Check transition to failed as well

* Used alternative mocking mechanism

* Remove state lists in test_manager

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2023-12-01 00:35:15 +01:00
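A sketch of the polling described above, with an exponentially increasing delay capped by an overall 30s timeout (`get_unit_state` is a hypothetical coroutine returning the systemd unit's ActiveState):

```python
import asyncio


async def wait_unit_inactive(get_unit_state, timeout: float = 30.0) -> None:
    """Wait until a (bind) mount unit reports 'inactive' (sketch)."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    delay = 0.1
    while loop.time() < deadline:
        if await get_unit_state() == "inactive":
            return
        await asyncio.sleep(delay)
        delay = min(delay * 2, 5.0)  # exponentially increasing poll delay
    raise TimeoutError("Mount unit did not deactivate within timeout")
```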
dependabot[bot]
df7541e397 Bump sentry-sdk from 1.37.1 to 1.38.0 (#4737) 2023-11-30 07:36:12 +01:00
Erik Montnemery
95ac53d780 Bump core shutdown timeout for new pre-stopping core state (#4736)
* Bump core shutdown timeout

* Clarify comment

* Update tests
2023-11-28 15:03:25 -05:00
dependabot[bot]
e8c4b32a65 Bump cryptography from 41.0.5 to 41.0.7 (#4734)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 20:51:17 +01:00
dependabot[bot]
eca535c978 Bump aiohttp-fast-url-dispatcher from 0.1.1 to 0.3.0 (#4735)
Bumps [aiohttp-fast-url-dispatcher](https://github.com/bdraco/aiohttp-fast-url-dispatcher) from 0.1.1 to 0.3.0.
- [Release notes](https://github.com/bdraco/aiohttp-fast-url-dispatcher/releases)
- [Changelog](https://github.com/bdraco/aiohttp-fast-url-dispatcher/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bdraco/aiohttp-fast-url-dispatcher/compare/v0.1.1...v0.3.0)

---
updated-dependencies:
- dependency-name: aiohttp-fast-url-dispatcher
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 11:54:35 +01:00
Stefan Agner
9088810b49 Improve D-Bus error handling for NetworkManager (#4720)
* Improve D-Bus error handling for NetworkManager

Quite a few errors are captured which seem to be related to a suddenly
missing NetworkManager. Errors appear as:
23-11-21 17:42:50 ERROR (MainThread) [supervisor.dbus.network] Error while processing /org/freedesktop/NetworkManager/Devices/10: Remote peer disconnected
...
23-11-21 17:42:50 ERROR (MainThread) [supervisor.dbus.network] Error while processing /org/freedesktop/NetworkManager/Devices/35: The name is not activatable

Both errors seem to already happen at introspection time; however,
the current code doesn't convert these errors to Supervisor issues.
This PR uses the already existing `DBus.from_dbus_error()`.

Furthermore this adds a new Exception `DBusNoReplyError` for the
`ErrorType.NO_REPLY` (or `org.freedesktop.DBus.Error.NoReply` in
D-Bus terms, which is the type of the first of the two issues above).

And finally it separates the `ErrorType.SERVICE_UNKNOWN` (or
`org.freedesktop.DBus.Error.ServiceUnknown` in D-Bus terms, which is
the second of the above issue) from `DBusInterfaceError` into a new
`DBusServiceUnkownError`.

This allows handling errors more specifically.

To avoid too much churn, in all instances where `DBusInterfaceError`
got handled, we are now also handling `DBusServiceUnkownError`.

The `DBusNoReplyError` and `DBusServiceUnkownError` appear when
the NetworkManager service stops or crashes. Instead of retrying
every interface we know, just give up if one of these errors appears.
This should significantly reduce the error messages users are seeing
and the Sentry events.

* Remove unnecessary statement

* Fix pytests

* Make sure error strings are compared correctly

* Fix typo/remove unnecessary pylint exception

* Fix DBusError typing

* Add pytest for from_dbus_error

* Revert "Make sure error strings are compared correctly"

This reverts commit 10dc2e4c3887532921414b4291fe3987186db408.

* Add test cases

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2023-11-27 23:32:11 +01:00
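A sketch of the error-type split the commit describes, keeping the class names from the message (including the `DBusServiceUnkownError` spelling); the mapping helper is illustrative:

```python
class DBusError(Exception):
    """Base D-Bus error."""


class DBusInterfaceError(DBusError):
    """Introspection or interface problem."""


class DBusNoReplyError(DBusError):
    """org.freedesktop.DBus.Error.NoReply."""


class DBusServiceUnkownError(DBusError):
    """org.freedesktop.DBus.Error.ServiceUnknown."""


ERROR_MAP = {
    "org.freedesktop.DBus.Error.NoReply": DBusNoReplyError,
    "org.freedesktop.DBus.Error.ServiceUnknown": DBusServiceUnkownError,
}


def from_dbus_error(error_name: str, message: str) -> DBusError:
    """Convert a raw D-Bus error into a specific exception (sketch)."""
    return ERROR_MAP.get(error_name, DBusError)(message)
```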
dependabot[bot]
172a7053ed Bump dbus-fast from 2.14.0 to 2.15.0 (#4724)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 17:58:53 +01:00
Stefan Agner
3d5bd2adef Use find to delete files recursively (#4732)
* Use find to delete files recursively

Instead of using rm -rf, use find to delete files recursively. This
has the added benefit that we do not need to rely on shell expansion.

In particular, shell expansion caused the --one-file-system flag to
not work as intended: The idea was that the content of a (left-over)
bind mounted directory would not get deleted. However, since shell
expansion passed the directory to rm, rm happily also deleted files in
that bind-mounted directory.

* Pass arguments correctly

* Fix argument order and stderr output

* Improve error handling

Log with exception level if there is an OS level error. Decode the
stderr output correctly.

* Remove unnecessary newline
2023-11-27 11:36:30 -05:00
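A sketch of the subprocess call; `-xdev` keeps find on one filesystem so the contents of a still-bind-mounted directory survive, and `-mindepth 1` spares the folder itself (binary path and exact flags assumed):

```python
import asyncio


async def remove_folder_content(folder: str) -> None:
    """Recursively delete a folder's contents with find (sketch)."""
    proc = await asyncio.create_subprocess_exec(
        "/usr/bin/find", folder, "-xdev", "-mindepth", "1", "-delete",
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.PIPE,
    )
    _, stderr = await proc.communicate()
    if proc.returncode != 0:
        # Decode stderr instead of logging raw bytes
        raise OSError(stderr.decode("utf-8", errors="replace").strip())
```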
J. Nick Koston
cb03d039f4 Bump aiohttp to 3.9.1 (#4729)
* Revert "Revert "Bump aiohttp to 3.9.0 (#4714)" (#4722)"

This reverts commit c0868d9dac.

* Bump aiohttp to 3.9.1

changelog: https://github.com/aio-libs/aiohttp/compare/v3.8.6...v3.9.1

The issues that caused us to revert 3.9.0 have been fixed
2023-11-27 13:44:54 +01:00
dependabot[bot]
bb31b1bc6e Bump sentry-sdk from 1.36.0 to 1.37.1 (#4730) 2023-11-27 07:42:31 +01:00
dependabot[bot]
727532858e Bump dessant/lock-threads from 5.0.0 to 5.0.1 (#4723) 2023-11-23 08:21:15 +01:00
J. Nick Koston
c0868d9dac Revert "Bump aiohttp to 3.9.0 (#4714)" (#4722)
This reverts commit f8f51740c1.
2023-11-22 14:27:00 +01:00
dependabot[bot]
ce26e1dac6 Bump sentry-sdk from 1.35.0 to 1.36.0 (#4721)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-22 10:15:02 +01:00
Mike Degatano
c74f87ca12 Fix ingress session cleanup (#4719) 2023-11-21 11:56:01 -05:00
dependabot[bot]
043111b91c Bump urllib3 from 2.0.7 to 2.1.0 (#4707)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.0.7 to 2.1.0.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.0.7...2.1.0)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 10:59:43 -05:00
J. Nick Koston
5c579e557c Port core async safe logging to supervisor (#4716)
fixes #4715
2023-11-20 20:33:36 +01:00
J. Nick Koston
f8f51740c1 Bump aiohttp to 3.9.0 (#4714)
* Bump aiohttp to 3.9.0

changelog: https://github.com/aio-libs/aiohttp/compare/v3.8.6...v3.9.0

* DeprecationWarning: shutdown_timeout should be set on BaseRunner
2023-11-20 20:31:16 +01:00
dependabot[bot]
176b63df52 Bump voluptuous from 0.14.0 to 0.14.1 (#4717)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 08:38:29 +01:00
dependabot[bot]
e1979357a5 Bump aiohttp-fast-url-dispatcher from 0.1.0 to 0.1.1 (#4713)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-17 12:59:54 +01:00
Paulus Schoutsen
030527a4f2 Use aiohttp-fast-url-dispatcher to avoid linear searching to route urls (#4705)
Co-authored-by: J. Nick Koston <nick@koston.org>
2023-11-16 16:52:31 -05:00
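Hooking the dispatcher into an aiohttp application is a one-liner; a sketch based on the library's documented usage (treat the exact helper names as assumptions):

```python
from aiohttp import web
from aiohttp_fast_url_dispatcher import FastUrlDispatcher, attach_fast_url_dispatcher

app = web.Application()
# Replace the default linear-scan URL resolver with an indexed dispatcher
attach_fast_url_dispatcher(app, FastUrlDispatcher())
```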
J. Nick Koston
cca74da1f3 Ensure empty body responses never generate an invalid chunked response (#4710) 2023-11-15 11:44:36 +01:00
Stefan Agner
928aff342f Address pytest warnings (#4695) 2023-11-15 10:45:36 +01:00
dependabot[bot]
60a97235df Bump sentry-sdk from 1.34.0 to 1.35.0 (#4708)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 17:11:07 +01:00
dependabot[bot]
c77779cf9d Bump dessant/lock-threads from 4.0.1 to 5.0.0 (#4706)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 17:08:55 +01:00
Stefan Agner
9351796ba8 Avoid empty newlines in Supervisor logs (#4698) 2023-11-13 20:12:17 +01:00
Franck Nijhof
bef0f023d4 Revert "Revert Home Assistant configuration to /config" (#4702) 2023-11-13 20:11:04 +01:00
dependabot[bot]
3116f183f5 Bump voluptuous from 0.13.1 to 0.14.0 (#4701)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 10:29:25 +01:00
Stefan Agner
16b71a22d1 Revert Home Assistant configuration to /config (#4697)
* Revert Home Assistant configuration to `/config`

With the new add-on config feature the intention is to provide a good
location for add-on specific configurations. Currently, add-ons such
as Node RED or ESPHome use the Home Assistant config directory because
this location is accessible to the user (via the Samba or VS Code add-ons, etc.).

To make it clear to add-on developers that the new intention is to use
add-on specific config, the implementation now bind mounts the add-on
configuration directory to `/config`. And since some add-ons still need
access to the Home Assistant configuration, its config folder is mounted
to `/homeassistant` under the new scheme.

However, users do know the path `/config`, and edit things e.g. through
the SSH or VS Code add-on. Also `/config` is still the
directory from inside the Core container.

For the SSH/VS Code add-ons we could work around this using a symlink,
but that only works as long as these add-ons don't have an add-on
config themselves.

This all has very high confusion potential, for not much gain. The
renaming is mainly "developer friendly", but not really user friendly.

Let's minimize potential confusion, and keep things where they are.
The Home Assistant config directory stays at `/config`, in all cases,
everywhere.

Map the new add-on configuration directory to `/addon_config`.

* Adjust tests/comments
2023-11-11 13:41:56 +01:00
Stefan Agner
5f4581042c Don't remove add-on config on add-on removal (#4696) 2023-11-11 13:23:35 +01:00
dependabot[bot]
6976a4cf2e Bump dbus-fast from 2.12.0 to 2.14.0 (#4688)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.12.0 to 2.14.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.12.0...v2.14.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-11 12:02:55 +01:00
J. Nick Koston
68d86b3b7b Small speed up to arch is_supported (#4674)
* Small speed up to arch is_supported

* update tests

* mocking

* mocking
2023-11-11 11:58:16 +01:00
Stefan Agner
d7d34d36c8 Create add-on config folder on add-on start (#4690) 2023-11-10 19:55:48 +01:00
Stefan Agner
68da328cc5 Warn users only when old config is actually used (#4691) 2023-11-10 19:37:02 +01:00
J. Nick Koston
78870186d7 Use content-type fast path for common case (#4685)
This is a port of https://github.com/home-assistant/core/pull/103477
from core
2023-11-10 14:31:01 +01:00
Stefan Agner
d634273b48 Add log entries of level INFO to Sentry breadcrumbs (#4676) 2023-11-08 12:59:19 +01:00
dependabot[bot]
2d970eee02 Bump cryptography from 41.0.4 to 41.0.5 (#4649)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 10:26:17 +01:00
dependabot[bot]
1f0ea3c6f7 Bump awesomeversion from 23.8.0 to 23.11.0 (#4680)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 10:12:03 +01:00
dependabot[bot]
d736913f7f Bump black from 23.10.1 to 23.11.0 (#4682)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 10:11:43 +01:00
dependabot[bot]
3e95a9d282 Bump sigstore/cosign-installer from 3.1.2 to 3.2.0 (#4679)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.1.2 to 3.2.0.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-08 09:37:55 +01:00
J. Nick Koston
7cd7259992 Cache common version checks (#4673)
* Cache common version checks

We check the core version quite frequently in the code, and it's a bit expensive to do
all the comparisons everywhere. Since it's mostly the same check happening over and
over, we can cache it

* fix import
2023-11-07 16:14:09 -05:00
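A plausible shape for the cache, assuming comparisons go through awesomeversion (names illustrative):

```python
from functools import lru_cache

from awesomeversion import AwesomeVersion


@lru_cache(maxsize=128)
def version_is_new_enough(version: str, want_version: str) -> bool:
    """Cached 'is the core version at least X' check (sketch)."""
    return AwesomeVersion(version) >= AwesomeVersion(want_version)
```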
Mike Degatano
87385cf28e Fix saving ingress data on supervisor shutdown (#4672)
* Fix saving ingress data on supervisor shutdown

* Fix ci issues
2023-11-07 13:07:16 -05:00
dependabot[bot]
3a00c94325 Bump dbus-fast from 2.11.1 to 2.12.0 (#4641)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.11.1 to 2.12.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.11.1...v2.12.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 10:57:52 -05:00
Stefan Agner
38d5d2307f Bump tooling to target Python version 3.11 (#4666) 2023-11-03 12:02:55 +01:00
Stefan Agner
a0c12e7228 Update devcontainer.json to use the new format (#4665) 2023-11-03 12:01:48 +01:00
dependabot[bot]
b6625ad909 Bump sentry-sdk from 1.33.1 to 1.34.0 (#4667)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 12:01:05 +01:00
Stefan Agner
6f01341055 Fix Home Assistant Core API check (#4663)
* Fix Home Assistant Core API check

* Remove check_api_state mock to improve test coverage
2023-11-02 13:21:54 +01:00
Stefan Agner
6762a4153a Revert "Revert "Update base images to 3.11-alpine3.18 (#4639)" (#4646)" (#4657)
This reverts commit 7c576da32c.

With the AppArmor profile updated, Supervisor on Alpine 3.18 should work
fine now.
2023-11-02 11:29:15 +01:00
Mike Degatano
31200df89f Addon methods interfacing with docker are job groups (#4659)
* Addon methods interfacing with docker are job groups

* Add test for install
2023-11-02 11:28:48 +01:00
dependabot[bot]
18e422ca77 Bump sentry-sdk from 1.32.0 to 1.33.1 (#4660)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.32.0 to 1.33.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.32.0...1.33.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 16:02:58 -04:00
dependabot[bot]
1b362716e3 Bump ciso8601 from 2.3.0 to 2.3.1 (#4656)
Bumps [ciso8601](https://github.com/closeio/ciso8601) from 2.3.0 to 2.3.1.
- [Release notes](https://github.com/closeio/ciso8601/releases)
- [Changelog](https://github.com/closeio/ciso8601/blob/master/CHANGELOG.md)
- [Commits](https://github.com/closeio/ciso8601/compare/v2.3.0...v2.3.1)

---
updated-dependencies:
- dependency-name: ciso8601
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 16:02:24 -04:00
Stefan Agner
1e49129197 Use longer timeouts for API checks before trigger a rollback (#4658)
* Don't check if Core is running to trigger rollback

Currently we check for Core API access and that the state is running. If
this is not fulfilled within 5 minutes, we roll back to the previous
version.

It can take quite a while until Home Assistant Core is in state running.
In fact, after going through bootstrap, it can theoretically take
indefinitely long (as in, there is no timeout on the Core side).

So to trigger a rollback, rather than checking that the state is running,
just check if the API is accessible in this case. This prevents spurious
rollbacks.

* Check Core status and time out after a longer time

Instead of checking the Core API just for response, do check the
state. Use a timeout which is long enough to cover all stages and
other timeouts during Core startup.

* Introduce get_api_state and better status messages

* Update supervisor/homeassistant/api.py

Co-authored-by: J. Nick Koston <nick@koston.org>

* Add successful start test

---------

Co-authored-by: J. Nick Koston <nick@koston.org>
2023-11-01 16:01:38 -04:00
Mike Degatano
a8f818fca5 Don't remove folder itself on restore (#4654)
* Don't remove folder itself on restore

* Allow dirs to exist on copytree
2023-10-30 08:42:23 +01:00
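The copytree half maps onto a stdlib flag added in Python 3.8; restoring into an existing target directory then only needs (paths are placeholders):

```python
import shutil

# dirs_exist_ok=True lets copytree merge into an existing target folder
# instead of failing with FileExistsError
shutil.copytree("/tmp/restore_source", "/mnt/data/target", dirs_exist_ok=True)
```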
Mike Degatano
0f600da096 Add a public config folder per addon (#4650)
* Add a public config folder per addon

* Finish addon_configs map option

* Rename map values and add addon_config
2023-10-27 15:43:57 +02:00
Mike Degatano
b04efe4eac Remove folder only deletes from current filesystem (#4653) 2023-10-26 16:55:42 -04:00
Joakim Plate
7361d39231 Catch unicode decode errors (#4651)
The YAML loader did not catch Unicode decode errors as the JSON loader did.

Related to: https://github.com/home-assistant/core/issues/102818
2023-10-26 09:27:05 +02:00
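A sketch of the widened error handling, assuming PyYAML; the `YamlFileError` wrapper name is illustrative:

```python
from pathlib import Path

import yaml


class YamlFileError(Exception):
    """Raised when a YAML file can't be read or parsed (illustrative)."""


def read_yaml_file(yamlfile: Path):
    """Parse YAML, treating undecodable bytes like malformed YAML."""
    try:
        return yaml.safe_load(yamlfile.read_text(encoding="utf-8"))
    except (yaml.YAMLError, UnicodeDecodeError, OSError) as err:
        raise YamlFileError(f"Can't read YAML file {yamlfile}: {err}") from err
```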
dependabot[bot]
059c0df16c Bump pytest from 7.4.2 to 7.4.3 (#4648)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-25 11:51:06 +02:00
dependabot[bot]
6f6b849335 Bump black from 23.10.0 to 23.10.1 (#4647)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-24 08:43:00 +02:00
Stefan Agner
a390500309 Reload Pulseaudio modules on hardware change (#4638)
* Reload Pulseaudio modules on hardware change

In the past the audio plug-in restarted Pulseaudio on hardware change.
This broke with the s6 updates. However, it also turns out that this is
quite racy: The Supervisor reloads audio data much too quickly, when
Supervisor isn't restarted yet.

Instead, let's reload the relevant modules from Supervisor itself.

This works well with a USB microphone on Home Assistant Green.

Related change: https://github.com/home-assistant/plugin-audio/pull/153

* Fix linter issue
2023-10-23 15:57:57 -04:00
Stefan Agner
7c576da32c Revert "Update base images to 3.11-alpine3.18 (#4639)" (#4646)
This reverts commit b1010c3c61.

It seems that the git version deployed with the latest Alpine doesn't
play nice with Supervisor. Specifically it leads to "fatal: cannot exec
'remote-https': Permission denied" errors.
2023-10-23 15:48:50 -04:00
dependabot[bot]
6d021c1659 Bump pylint from 3.0.1 to 3.0.2 (#4645)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-23 08:59:44 +02:00
Mike Degatano
37c1c89d44 Remove race with watchdog during backup, restore and update (#4635)
* Remove race with watchdog during backup, restore and update

* Fix pylint issues and test

* Stop after image pull during update

* Add test for max failed attempts for plugin watchdog
2023-10-19 22:01:56 -04:00
Mike Degatano
010043f116 Don't warn for removing unstarted jobs (#4632) 2023-10-19 17:35:16 +02:00
Franck Nijhof
b1010c3c61 Update base images to 3.11-alpine3.18 (#4639)
* Update base images to 3.11-alpine3.18

* Adjust hadolint
2023-10-19 10:53:58 +02:00
dependabot[bot]
7f0204bfc3 Bump gitpython from 3.1.38 to 3.1.40 (#4642)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-19 08:54:58 +02:00
dependabot[bot]
a508cc5efd Bump home-assistant/wheels from 2023.10.4 to 2023.10.5 (#4640)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-19 08:46:18 +02:00
dependabot[bot]
65c90696d5 Bump urllib3 from 2.0.6 to 2.0.7 (#4634)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-18 14:34:40 +02:00
dependabot[bot]
b9f47898d6 Bump actions/checkout from 4.1.0 to 4.1.1 (#4636)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-18 12:30:48 +02:00
dependabot[bot]
26f554e46a Bump black from 23.9.1 to 23.10.0 (#4637)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-18 12:30:24 +02:00
Mike Degatano
b57889c84f Use UUID for setting parent interface in vlans (#4633)
* Use UUID for setting parent interface in vlans

* Fix vlan test using interface name
2023-10-17 16:38:27 -04:00
Mike Degatano
77fd1b4017 Capture exception if image is missing on run (#4621)
* Retry run if image missing and handle fixup

* Fix lint and run error test

* Remove retry and just capture exception
2023-10-17 13:55:12 +02:00
dependabot[bot]
ab6745bc99 Bump gitpython from 3.1.37 to 3.1.38 (#4630)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.37 to 3.1.38.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.37...3.1.38)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-17 13:54:59 +02:00
dependabot[bot]
a5ea3cae72 Bump aiodns from 3.1.0 to 3.1.1 (#4629)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/saghul/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/v3.1.0...v3.1.1)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-17 13:54:23 +02:00
dependabot[bot]
8bcd1b4efd Bump release-drafter/release-drafter from 5.24.0 to 5.25.0 (#4631)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-17 12:26:37 +02:00
Mike Degatano
a24657e565 Handle get users API returning None (#4628)
* Handle get users API returning None

* Skip throttle during test
2023-10-16 21:54:50 +02:00
dependabot[bot]
b7721420fa Bump pre-commit from 3.4.0 to 3.5.0 (#4627)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-16 09:25:35 +02:00
Erwin Douna
6c564fe4fd Fixing multiple typos (#4626) 2023-10-15 22:27:51 +02:00
Mike Degatano
012bfd7e6c Support proxy of binary messages from addons to HA (#4605)
* Support proxy of binary messages from addons to HA

* Added tests for proxy

* Move instantiation into init

* Mock close method on server

* Add invalid auth test and remove auth mock
2023-10-14 18:07:49 +02:00
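The proxy loop just needs to branch on the websocket message type; a minimal aiohttp sketch (both websocket objects are assumed to be connected already):

```python
import aiohttp


async def proxy_messages(source_ws, target_ws) -> None:
    """Forward text and binary frames between websockets (sketch)."""
    async for msg in source_ws:
        if msg.type == aiohttp.WSMsgType.TEXT:
            await target_ws.send_str(msg.data)
        elif msg.type == aiohttp.WSMsgType.BINARY:
            await target_ws.send_bytes(msg.data)
        elif msg.type in (aiohttp.WSMsgType.CLOSE, aiohttp.WSMsgType.ERROR):
            break
```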
dependabot[bot]
a70f81aa01 Bump sentry-sdk from 1.31.0 to 1.32.0 (#4623)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-12 09:12:23 +02:00
Mike Degatano
1376a38de5 Eliminate possible addon data race condition during update (#4619)
* Eliminate possible addon data race condition during update

* Fix pylint error

* Use Self type instead of quotes
2023-10-11 12:22:04 -04:00
Mike Degatano
1827ecda65 Call save data after removing mount in fixup (#4620) 2023-10-11 18:18:30 +02:00
Mike Degatano
994c981228 Allow home assistant backups to exclude database (#4591)
* Allow home assistant backups to exclude database

* Tweak

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>

---------

Co-authored-by: Franck Nijhof <git@frenck.dev>
Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2023-10-11 08:52:19 +02:00
dependabot[bot]
5bbfbf44ae Bump aiodns from 3.0.0 to 3.1.0 (#4613)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/saghul/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/aiodns-3.0.0...v3.1.0)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 12:11:09 +02:00
Mike Degatano
ace58ba735 Unstarted jobs should always be cleaned up (#4604) 2023-10-09 11:57:04 +02:00
dependabot[bot]
f9840306a0 Bump pyupgrade from 3.14.0 to 3.15.0 (#4614)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Franck Nijhof <frenck@frenck.nl>
2023-10-09 08:59:37 +02:00
dependabot[bot]
322b3bbb4e Bump pytest-timeout from 2.1.0 to 2.2.0 (#4615)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 08:33:55 +02:00
dependabot[bot]
501318f468 Bump aiohttp from 3.8.5 to 3.8.6 (#4612)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 08:31:23 +02:00
dependabot[bot]
0234f38b23 Bump home-assistant/wheels from 2023.10.1 to 2023.10.4 (#4616)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-09 08:30:43 +02:00
dependabot[bot]
8743e0072f Bump pylint from 3.0.0 to 3.0.1 (#4608)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-06 09:05:12 +02:00
dependabot[bot]
a79e06afa7 Bump dbus-fast from 2.10.0 to 2.11.1 (#4603)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.10.0 to 2.11.1.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.10.0...v2.11.1)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-05 15:50:34 -04:00
Mike Degatano
682b8e0535 Core API check during startup can timeout (#4595)
* Core API check during startup can timeout

* Use a more specific exception so caller can differentiate
2023-10-04 18:54:42 +02:00
Mike Degatano
d70aa5f9a9 JobGroups check active job to determine if in progress (#4602) 2023-10-04 18:53:10 +02:00
Chris Carini
1c815dcad1 Remove all PyPi classifiers as this package is not published to PyPi (#4574)
* Update PyPi classifier to Python 3.11

* remove all classifiers

* Update setup.py

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

---------

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2023-10-03 16:22:12 -04:00
dependabot[bot]
afa467a32b Bump pyupgrade from 3.13.0 to 3.14.0 (#4599)
Bumps [pyupgrade](https://github.com/asottile/pyupgrade) from 3.13.0 to 3.14.0.
- [Commits](https://github.com/asottile/pyupgrade/compare/v3.13.0...v3.14.0)

---
updated-dependencies:
- dependency-name: pyupgrade
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-03 13:35:10 -04:00
dependabot[bot]
274218d48e Bump pylint from 2.17.7 to 3.0.0 (#4600)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-03 19:32:18 +02:00
dependabot[bot]
7e73df26ab Bump coverage from 7.3.1 to 7.3.2 (#4598)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-03 19:29:46 +02:00
dependabot[bot]
ef8fc80c95 Bump actions/setup-python from 4.7.0 to 4.7.1 (#4597)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-03 08:51:10 +02:00
dependabot[bot]
05c39144e3 Bump urllib3 from 2.0.5 to 2.0.6 (#4596)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-03 08:18:08 +02:00
dependabot[bot]
f5cd35af47 Bump pylint from 2.17.6 to 2.17.7 (#4594)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-02 08:56:07 +02:00
dependabot[bot]
c69ecdafd0 Bump home-assistant/wheels from 2023.09.1 to 2023.10.1 (#4593)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-02 08:50:39 +02:00
Mike Degatano
fa90c247ec Correct /store/addons api output (#4589) 2023-09-29 09:17:39 -04:00
Mike Degatano
0cd7bd47bb Skip watchdog API test on landingpage (#4588)
* Skip watchdog API test on landingpage

* Skip check from task
2023-09-29 09:17:22 -04:00
dependabot[bot]
36d48d19fc Bump dbus-fast from 2.2.0 to 2.10.0 (#4583)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 2.2.0 to 2.10.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v2.2.0...v2.10.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-26 16:46:18 -04:00
Mike Degatano
9322b68d47 Change User LED to System Health LED (#4586) 2023-09-26 14:54:41 -04:00
dependabot[bot]
e11ff64b15 Bump pyupgrade from 3.10.1 to 3.13.0 (#4581)
Bumps [pyupgrade](https://github.com/asottile/pyupgrade) from 3.10.1 to 3.13.0.
- [Commits](https://github.com/asottile/pyupgrade/compare/v3.10.1...v3.13.0)

---
updated-dependencies:
- dependency-name: pyupgrade
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-26 14:36:00 -04:00
dependabot[bot]
3776dabfcf Bump cryptography from 41.0.3 to 41.0.4 (#4571)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-26 20:23:31 +02:00
dependabot[bot]
d4e5831f0f Bump gitpython from 3.1.36 to 3.1.37 (#4582)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-26 13:42:59 +02:00
dependabot[bot]
7b3b478e88 Bump pylint from 2.17.5 to 2.17.6 (#4584)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-26 09:41:08 +02:00
Chris Carini
f5afe13e91 Fix typos in docstrings (#4546)
* [typo] `Assitant` -> `Assistant`

* [typo] `an` -> `a`
2023-09-26 09:21:57 +02:00
dependabot[bot]
49ce468d83 Bump home-assistant/wheels from 2023.04.0 to 2023.09.1 (#4580)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-25 11:12:20 +02:00
dependabot[bot]
b26551c812 Bump home-assistant/builder from 2023.08.0 to 2023.09.0 (#4579)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-25 11:11:20 +02:00
dependabot[bot]
394ba580d2 Bump actions/checkout from 4.0.0 to 4.1.0 (#4578)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-25 10:53:19 +02:00
dependabot[bot]
2f7a54f5fd Bump urllib3 from 2.0.4 to 2.0.5 (#4572)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-21 11:04:19 +02:00
dependabot[bot]
360e085926 Bump time-machine from 2.12.0 to 2.13.0 (#4569)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-20 11:50:01 +02:00
dependabot[bot]
042921925d Bump typing-extensions from 4.7.1 to 4.8.0 (#4566)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-19 17:44:22 +02:00
Mike Degatano
dcf024387b Network backups skip free space check (#4563) 2023-09-19 16:28:39 +02:00
Mike Degatano
e1232bc9e7 Add support for green LEDs to API (#4556)
* Add support for green LEDs to API

* Save board config in supervisor and post on start

* Ignore no-value-for-parameter in validate
2023-09-14 09:27:12 -04:00
dependabot[bot]
d96598b5dd Bump sentry-sdk from 1.30.0 to 1.31.0 (#4562)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-14 09:03:51 +02:00
dependabot[bot]
2605f85668 Bump debugpy from 1.7.0 to 1.8.0 (#4559)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.7.0 to 1.8.0.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.7.0...v1.8.0)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-13 08:57:38 +02:00
Mike Degatano
2c8e6ca0cd Switch from ruamel.yaml to pyyaml (#4555)
* Switch from ruamel.yaml to pyyaml

* Use CLoader and CDumper when available
2023-09-13 08:57:01 +02:00
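The CLoader/CDumper fallback is the standard PyYAML pattern, since the LibYAML-backed C classes aren't always compiled in:

```python
import yaml

try:
    # LibYAML-backed implementations, much faster when available
    from yaml import CSafeLoader as SafeLoader
    from yaml import CSafeDumper as SafeDumper
except ImportError:
    # Pure-Python fallback
    from yaml import SafeDumper, SafeLoader


def load_yaml(text: str):
    return yaml.load(text, Loader=SafeLoader)


def dump_yaml(data) -> str:
    return yaml.dump(data, Dumper=SafeDumper)
```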
Mike Degatano
0225f574be Only tell HA to refresh ingress on restore on change (#4552)
* Only tell HA to refresh ingress on restore on change

* Fix test expecting ingress change

* Assume ingress_panel is false for new addons
2023-09-13 08:50:32 +02:00
dependabot[bot]
34090bf2eb Bump docker/login-action from 2.2.0 to 3.0.0 (#4558)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-13 08:41:52 +02:00
Mike Degatano
5ae585ce13 Unmount mounts before backup restore (#4557) 2023-09-12 18:56:24 -04:00
dependabot[bot]
2bb10a32d7 Bump gitpython from 3.1.35 to 3.1.36 (#4553)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-12 10:22:55 +02:00
dependabot[bot]
435743dd2c Bump dbus-fast from 1.94.1 to 2.2.0 (#4550)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.94.1 to 2.2.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.94.1...v2.2.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-11 14:00:54 -04:00
dependabot[bot]
98589fba6d Bump black from 23.7.0 to 23.9.1 (#4549)
Bumps [black](https://github.com/psf/black) from 23.7.0 to 23.9.1.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/23.7.0...23.9.1)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-11 12:43:50 -04:00
Mike Degatano
32da679e02 Ingress does not break when username missing (#4551) 2023-09-11 10:42:31 -04:00
Mike Degatano
44daffc65b Add freeze/thaw apis for external snapshots (#4538)
* Add freeze/thaw apis for external backups

* Error when thaw called before freeze

* Timeout must be > 0
2023-09-09 10:54:19 +02:00
Mike Degatano
0aafda1477 Mount names cannot include non-alphanumerics (#4545) 2023-09-09 10:54:04 +02:00
dependabot[bot]
60604e33b9 Bump debugpy from 1.6.7 to 1.7.0 (#4542)
Bumps [debugpy](https://github.com/microsoft/debugpy) from 1.6.7 to 1.7.0.
- [Release notes](https://github.com/microsoft/debugpy/releases)
- [Commits](https://github.com/microsoft/debugpy/compare/v1.6.7...v1.7.0)

---
updated-dependencies:
- dependency-name: debugpy
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 10:34:20 -04:00
dependabot[bot]
98268b377a Bump pytest from 7.4.1 to 7.4.2 (#4541)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.4.1 to 7.4.2.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.4.1...7.4.2)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 10:33:23 -04:00
dependabot[bot]
de54979471 Bump brotli from 1.0.9 to 1.1.0 (#4540)
Bumps [brotli](https://github.com/google/brotli) from 1.0.9 to 1.1.0.
- [Release notes](https://github.com/google/brotli/releases)
- [Changelog](https://github.com/google/brotli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/brotli/compare/v1.0.9...v1.1.0)

---
updated-dependencies:
- dependency-name: brotli
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 10:32:45 -04:00
dependabot[bot]
ee6e339587 Bump pytest-aiohttp from 1.0.4 to 1.0.5 (#4535)
Bumps [pytest-aiohttp](https://github.com/aio-libs/pytest-aiohttp) from 1.0.4 to 1.0.5.
- [Release notes](https://github.com/aio-libs/pytest-aiohttp/releases)
- [Changelog](https://github.com/aio-libs/pytest-aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/pytest-aiohttp/compare/v1.0.4...v1.0.5)

---
updated-dependencies:
- dependency-name: pytest-aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 10:21:11 -04:00
dependabot[bot]
c16cf89318 Bump coverage from 7.3.0 to 7.3.1 (#4534)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.3.0 to 7.3.1.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.3.0...7.3.1)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 10:04:31 -04:00
dependabot[bot]
c66cb7423e Bump actions/cache from 3.3.1 to 3.3.2 (#4544)
Bumps [actions/cache](https://github.com/actions/cache) from 3.3.1 to 3.3.2.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.3.1...v3.3.2)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 09:37:38 -04:00
dependabot[bot]
f5bd95a519 Bump actions/upload-artifact from 3.1.2 to 3.1.3 (#4532)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3.1.2...v3.1.3)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 09:27:49 -04:00
dependabot[bot]
500f9ec1c1 Bump gitpython from 3.1.34 to 3.1.35 (#4539)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.34 to 3.1.35.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.34...3.1.35)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 09:19:54 -04:00
dependabot[bot]
a4713d4a1e Bump actions/checkout from 3.6.0 to 4.0.0 (#4527)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.6.0 to 4.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3.6.0...v4.0.0)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-06 15:35:10 -04:00
dependabot[bot]
04452dfb1a Bump pytest from 7.4.0 to 7.4.1 (#4523)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.4.0 to 7.4.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.4.0...7.4.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-06 14:58:56 -04:00
dependabot[bot]
69d09851d9 Bump pre-commit from 3.3.3 to 3.4.0 (#4522)
Bumps [pre-commit](https://github.com/pre-commit/pre-commit) from 3.3.3 to 3.4.0.
- [Release notes](https://github.com/pre-commit/pre-commit/releases)
- [Changelog](https://github.com/pre-commit/pre-commit/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pre-commit/pre-commit/compare/v3.3.3...v3.4.0)

---
updated-dependencies:
- dependency-name: pre-commit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-06 12:54:42 -04:00
Mike Degatano
1b649fe5cd Use newer StrEnum and IntEnum over Enum (#4521)
* Use newer StrEnum and IntEnum over Enum

* Fix validation issue and remove unnecessary .value calls

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2023-09-06 12:21:04 -04:00
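
StrEnum members are real strings, so comparisons, serialization, and formatting work without the .value calls this commit removes. A small illustration (Python 3.11+, not Supervisor code):

    from enum import StrEnum


    class CoreState(StrEnum):
        RUNNING = "running"
        CLOSE = "close"


    # The member is usable anywhere a str is expected; no .value needed.
    assert CoreState.RUNNING == "running"
    assert f"{CoreState.RUNNING}" == "running"
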
dependabot[bot]
38572a5a86 Bump gitpython from 3.1.32 to 3.1.34 (#4524)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.32 to 3.1.34.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.34)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-06 12:19:01 -04:00
dependabot[bot]
f5f51169e6 Bump sigstore/cosign-installer from 3.1.1 to 3.1.2 (#4525)
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.1.1 to 3.1.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.1.1...v3.1.2)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-05 11:55:41 +02:00
Mike Degatano
07c2178ae1 Add jobs to docker supervisor and use group wait (#4520) 2023-09-03 18:22:17 +02:00
Mike Degatano
f30d21361f Skip unnecessary mounts and privileges for landingpage (#4518)
* Skip unnecessary mounts for landingpage

* Remove privileged and cgroup rules from landingpage
2023-09-03 18:21:35 +02:00
Mike Degatano
6adb4fbcf7 Update store data in one task to prevent races (#4519)
* Update store data in one task to prevent races

* Always return a dictionary
2023-09-03 18:20:26 +02:00
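
Funneling every store refresh through one shared task is a standard asyncio way to keep concurrent callers from racing each other. A sketch of the pattern, with assumed names:

    import asyncio


    class StoreUpdater:
        """Sketch: concurrent update requests share one in-flight task."""

        def __init__(self) -> None:
            self._task: asyncio.Task | None = None

        async def _do_update(self) -> dict:
            await asyncio.sleep(0.1)  # stand-in for fetching repository data
            return {"repositories": []}  # always return a dictionary

        async def update(self) -> dict:
            # If an update is already running, await its result instead of
            # starting a second, racing update.
            if self._task is None or self._task.done():
                self._task = asyncio.create_task(self._do_update())
            return await self._task
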
dependabot[bot]
d73962bd7d Bump faust-cchardet from 2.1.18 to 2.1.19 (#4484)
Bumps [faust-cchardet](https://github.com/faust-streaming/cChardet) from 2.1.18 to 2.1.19.
- [Release notes](https://github.com/faust-streaming/cChardet/releases)
- [Changelog](https://github.com/faust-streaming/cChardet/blob/master/CHANGES.rst)
- [Commits](https://github.com/faust-streaming/cChardet/compare/v2.1.18...v2.1.19)

---
updated-dependencies:
- dependency-name: faust-cchardet
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-01 11:18:51 +02:00
Mike Degatano
f4b43739da Skip plugin update on startup if supervisor out of date (#4515) 2023-09-01 11:18:22 +02:00
Mike Degatano
4838b280ad List current job tree in api (#4514) 2023-08-31 10:01:42 +02:00
Mike Degatano
f93b753c03 Backup and restore track progress in job (#4503)
* Backup and restore track progress in job

* Change to stage only updates and fix tests

* Leave HA alone if it wasn't restored

* skip check HA stage message when we don't check

* Change to helper to get current job

* Fix tests

* Mark jobs as internal to skip notifying HA
2023-08-30 16:01:03 -04:00
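
The model here is that a restore job advances through named stages rather than a numeric percentage, and jobs flagged as internal skip notifying Home Assistant. A rough sketch under assumed names:

    from enum import StrEnum


    class RestoreStage(StrEnum):
        """Illustrative stages; the real ones live in the backup manager."""

        ADDONS = "addons"
        FOLDERS = "folders"
        HOME_ASSISTANT = "home_assistant"
        FINISHING = "finishing"


    class Job:
        def __init__(self, name: str, internal: bool = False) -> None:
            self.name = name
            self.internal = internal  # internal jobs skip notifying HA
            self.stage: RestoreStage | None = None

        def update_stage(self, stage: RestoreStage) -> None:
            self.stage = stage
            if not self.internal:
                # Stand-in for pushing a job_change event to Home Assistant.
                print(f"job {self.name}: stage={stage}")
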
dependabot[bot]
de06361cb0 Bump dbus-fast from 1.93.0 to 1.94.1 (#4511)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.93.0 to 1.94.1.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.93.0...v1.94.1)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-30 15:48:27 -04:00
dependabot[bot]
15ce48c8aa Bump sentry-sdk from 1.29.2 to 1.30.0 (#4513)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.29.2 to 1.30.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.29.2...1.30.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-30 15:28:51 -04:00
dependabot[bot]
38758d05a8 Bump actions/checkout from 3.5.3 to 3.6.0 (#4508)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-25 09:54:14 +02:00
Mike Degatano
a79fa14ee7 Don't notify listeners on CoreState.CLOSE (#4506) 2023-08-25 07:22:49 +02:00
Pascal Vizeli
1eb95b4d33 Remove old add-on state refresh (#4504) 2023-08-24 11:04:31 -04:00
dependabot[bot]
d04e47f5b3 Bump dbus-fast from 1.92.0 to 1.93.0 (#4501)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.92.0 to 1.93.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.92.0...v1.93.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-23 17:12:49 -04:00
Mike Degatano
dad5118f21 Use dataclasses asdict for dataclasses (#4502) 2023-08-22 17:50:48 -04:00
Florian Bachmann
acc0e5c989 Allows the supervisor to send a session's user to addon with header X-Remote-User (#4152)
* Working draft for x-remote-user

* Renames prop to remote_user

* Allows to set in addon description whether it requests the username

* Fixes addon-options schema

* Sends user ID instead of username to addons

* Adds tests

* Removes configurability of remote-user forwarding

* Update const.py

* Also adds username header

* Fetches full user info object from homeassistant

* Cleaner validation and dataclasses

* Fixes linting

* Fixes linting

* Tries to fix test

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Updates tests

* Resolves PR comments

* Linting

* Fixes tests

* Update const.py

* Removes header keys if not required

* Moves ignoring user ID headers if no session_data is given

* simplify

* fix lint with new job

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
Co-authored-by: Pascal Vizeli <pascal.vizeli@syshack.ch>
2023-08-22 10:11:13 +02:00
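
Conceptually, the ingress proxy resolves the session to a Home Assistant user and injects identity headers into the proxied request, stripping anything a client may have spoofed. A hedged sketch; the header names and session shape below are assumptions, not the exact Supervisor code:

    X_REMOTE_USER_ID = "X-Remote-User-Id"
    X_REMOTE_USER_NAME = "X-Remote-User-Name"
    X_REMOTE_USER_DISPLAY_NAME = "X-Remote-User-Display-Name"
    IDENTITY_HEADERS = (
        X_REMOTE_USER_ID,
        X_REMOTE_USER_NAME,
        X_REMOTE_USER_DISPLAY_NAME,
    )


    def build_ingress_headers(
        base: dict[str, str], session_data: dict | None
    ) -> dict[str, str]:
        headers = dict(base)
        # Never trust identity headers supplied by the client.
        for key in IDENTITY_HEADERS:
            headers.pop(key, None)
        # Only forward identity when the session carries a resolved user.
        if session_data and (user := session_data.get("user")):
            headers[X_REMOTE_USER_ID] = user["id"]
            if username := user.get("username"):
                headers[X_REMOTE_USER_NAME] = username
            if display_name := user.get("display_name"):
                headers[X_REMOTE_USER_DISPLAY_NAME] = display_name
        return headers
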
dependabot[bot]
204fcdf479 Bump dbus-fast from 1.91.2 to 1.92.0 (#4500)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.91.2 to 1.92.0.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.91.2...v1.92.0)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-21 11:03:02 +02:00
Mike Degatano
93ba8a3574 Add job names and references everywhere (#4495)
* Add job names and references everywhere

* Remove group names check and switch to const

* Ensure unique job names in decorator tests
2023-08-21 09:15:37 +02:00
dependabot[bot]
f2f9e3b514 Bump dbus-fast from 1.86.0 to 1.91.2 (#4485)
Bumps [dbus-fast](https://github.com/bluetooth-devices/dbus-fast) from 1.86.0 to 1.91.2.
- [Release notes](https://github.com/bluetooth-devices/dbus-fast/releases)
- [Changelog](https://github.com/Bluetooth-Devices/dbus-fast/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bluetooth-devices/dbus-fast/compare/v1.86.0...v1.91.2)

---
updated-dependencies:
- dependency-name: dbus-fast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:32:27 -04:00
Mike Degatano
61288559b3 Always stop the addon before restoring it (#4492)
* Always stop the addon before restoring it

* patch ingress refresh to avoid timeout
2023-08-15 13:08:45 +02:00
dependabot[bot]
bd2c99a455 Bump awesomeversion from 23.5.0 to 23.8.0 (#4494)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:38:43 +02:00
dependabot[bot]
1937348b24 Bump time-machine from 2.11.0 to 2.12.0 (#4493)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:55:26 +02:00
dependabot[bot]
b7b2fae325 Bump coverage from 7.2.7 to 7.3.0 (#4491)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-14 09:49:04 +02:00
dependabot[bot]
11115923b2 Bump async-timeout from 4.0.2 to 4.0.3 (#4488)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-11 12:13:28 +02:00
dependabot[bot]
295133d2e9 Bump home-assistant/builder from 2023.06.1 to 2023.08.0 (#4489)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-11 10:23:47 +02:00
Mike Degatano
3018b851c8 Missing an await in addon data (#4487) 2023-08-10 16:31:43 -04:00
Mike Degatano
222c3fd485 Address addon storage race condition (#4481)
* Address addon storage race condition

* Add some error test cases
2023-08-10 15:24:43 -04:00
Stefan Agner
9650fd2ba1 Extend container image name validator (#4480)
* Extend container image name validator

The current validator allows certain invalid names (e.g. upper
case), but disallows valid cases (such as ttl.sh/myimage).
Improve the container image validator to support more valid
options and at the same time disallow some of the invalid
options.

Note that this is still not a complete/perfect validation. A much
more sophisticated regex would be necessary to be 100% accurate.
Also, we format the string and replace {machine}/{arch} using Python
format strings. In that regard the image format in Supervisor deviates
from the Docker/OCI container image name format.

* Use an actual invalid image name in config validation
2023-08-10 12:58:33 -04:00
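
In that spirit, a toy validator that accepts lowercase registry-qualified names such as ttl.sh/myimage plus the {arch}/{machine} placeholders, while rejecting upper case, could look like this; the real Supervisor regex differs:

    import re

    RE_IMAGE = re.compile(
        r"^([a-z0-9][a-z0-9.\-]*(:[0-9]+)?/)?"  # optional registry host[:port]/
        r"[a-z0-9{][a-z0-9{}._\-/]*$"           # lowercase path, {arch} allowed
    )


    def valid_image(image: str) -> bool:
        return bool(RE_IMAGE.match(image))


    assert valid_image("ttl.sh/myimage")
    assert valid_image("ghcr.io/home-assistant/{arch}-hassio-supervisor")
    assert not valid_image("UpperCase/NotAllowed")
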
Stefan Agner
c88fd9a7d9 Add Home Assistant Green (#4486) 2023-08-10 17:31:37 +02:00
Mike Degatano
1611beccd1 Add job group execution limit option (#4457)
* Add job group execution limit option

* Fix pylint issues

* Assign variable before usage

* Cleanup jobs when done

* Remove isinstance check for performance

* Explicitly raise from None

* Add some more documentation info
2023-08-08 16:49:17 -04:00
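
A job group execution limit means jobs that share a group run one at a time, while unrelated groups stay concurrent. The smallest asyncio sketch of that idea (the real JobManager also tracks names, references, and cleanup):

    import asyncio
    from collections import defaultdict
    from collections.abc import Awaitable

    _group_locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


    async def run_in_group(group: str, coro: Awaitable) -> object:
        # Jobs in the same group queue behind one lock; other groups
        # are unaffected.
        async with _group_locks[group]:
            return await coro

    # Usage: result = await run_in_group("backup_manager", do_backup())
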
Mike Degatano
71077fb0f7 Fallback on interface name if path is missing (#4479) 2023-08-07 20:53:25 -04:00
dependabot[bot]
9647fba98f Bump cryptography from 41.0.2 to 41.0.3 (#4468)
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.2...41.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-04 17:50:46 -04:00
Mike Degatano
86f004e45a Use udev path instead of mac or name for nm match (#4476) 2023-08-04 17:39:35 -04:00
Mike Degatano
a98334ede8 Cancel startup wait task on addon uninstallation (#4475)
* Cancel startup wait task on addon uninstallation

* Await startup task instead

* Suppress cancelled error
2023-08-04 16:28:44 -04:00
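
The three bullets map onto a standard asyncio cleanup idiom: cancel the task, await it so the cancellation actually completes, and suppress the resulting CancelledError. A sketch with assumed names:

    import asyncio
    from contextlib import suppress


    async def cancel_startup_wait(startup_task: asyncio.Task | None) -> None:
        """Sketch of the uninstall-time cleanup described above."""
        if startup_task is None or startup_task.done():
            return
        startup_task.cancel()
        # Awaiting lets the cancellation propagate and finish; suppress()
        # swallows the CancelledError it raises in the awaiting coroutine.
        with suppress(asyncio.CancelledError):
            await startup_task
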
dependabot[bot]
e19c2d6805 Bump aiohttp from 3.8.4 to 3.8.5 (#4467)
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.4 to 3.8.5.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.8.4...v3.8.5)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-03 11:01:58 -04:00
dependabot[bot]
847736dab8 Bump sentry-sdk from 1.29.0 to 1.29.2 (#4470)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-02 09:24:42 +02:00
Mike Degatano
45f930ab21 Revert "Bump aiohttp from 3.8.4 to 3.8.5 (#4448)" (#4466)
This reverts commit b8178414a4.
2023-08-01 17:37:34 -04:00
dependabot[bot]
6ea54f1ddb Bump sentry-sdk from 1.28.1 to 1.29.0 (#4464)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-01 10:14:38 +02:00
Mike Degatano
81ce0a60f6 Missing an await on dns.write_hosts call (#4463) 2023-07-31 23:29:40 +02:00
dependabot[bot]
bf5d839c22 Bump flake8 from 6.0.0 to 6.1.0 (#4461)
Bumps [flake8](https://github.com/pycqa/flake8) from 6.0.0 to 6.1.0.
- [Commits](https://github.com/pycqa/flake8/compare/6.0.0...6.1.0)

---
updated-dependencies:
- dependency-name: flake8
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:49:04 -04:00
dependabot[bot]
fc385cfac0 Bump pyupgrade from 3.9.0 to 3.10.1 (#4460)
Bumps [pyupgrade](https://github.com/asottile/pyupgrade) from 3.9.0 to 3.10.1.
- [Commits](https://github.com/asottile/pyupgrade/compare/v3.9.0...v3.10.1)

---
updated-dependencies:
- dependency-name: pyupgrade
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 10:22:21 -04:00
dependabot[bot]
12d55b8411 Bump pylint from 2.17.4 to 2.17.5 (#4456)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-27 09:01:16 +02:00
Mike Degatano
e60af93e2b List discovery only includes healthy addons (#4451)
Co-authored-by: Stefan Agner <stefan@agner.ch>
2023-07-22 23:27:42 +02:00
Mike Degatano
1691f0eac7 Temporary operations for backups take place in destination folder (#4452) 2023-07-21 22:49:31 -04:00
Mike Degatano
be4a6a1564 Allow discovery messages for unknown services with a warning (#4449)
* Allow discovery messages for unknown services with a warning

* Log at warning level and skip sentry report
2023-07-21 15:05:51 -04:00
J. Nick Koston
24c5613a50 Switch to using the get core state api call to check if the API is up (#4445)
Co-authored-by: Joakim Sørensen <joasoe@gmail.com>
Co-authored-by: Franck Nijhof <frenck@frenck.nl>
2023-07-20 13:43:44 +02:00
J. Nick Koston
5266927bf7 Avoid check_api_state in websocket _can_send if we are already connected (#4446) 2023-07-20 11:20:02 +02:00
dependabot[bot]
4bd2000174 Bump urllib3 from 2.0.3 to 2.0.4 (#4447)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-20 11:14:27 +02:00
dependabot[bot]
b8178414a4 Bump aiohttp from 3.8.4 to 3.8.5 (#4448)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-20 11:02:15 +02:00
J. Nick Koston
f9bc2f5993 Increase core websocket proxy maximum message size to 64MiB (#4443) 2023-07-19 20:02:42 +02:00
Mike Degatano
f1a72ee418 Include interface name in match-device settings (#4444) 2023-07-19 10:15:59 +02:00
dependabot[bot]
b19dcef5b7 Bump cryptography from 41.0.1 to 41.0.2 (#4434)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-18 21:58:35 +02:00
Mike Degatano
1f92ab42ca Reduce executor code for docker (#4438)
* Reduce executor code for docker

* Fix pylint errors and move import/export image

* Fix test and a couple other risky executor calls

* Fix dataclass and return

* Fix test case and add one for corrupt docker

* Add some coverage

* Undo changes to docker manager startup
2023-07-18 11:39:39 -04:00
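
docker-py is synchronous, so every call has to leave the event loop; centralizing that in a single helper is the usual way to cut down scattered executor boilerplate. A sketch of the pattern (assumed names, not the Supervisor API):

    import asyncio
    from functools import partial


    async def run_blocking(func, *args, **kwargs):
        """Run a blocking docker-py call without stalling the event loop."""
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, partial(func, *args, **kwargs))

    # Usage (hypothetical client):
    #   containers = await run_blocking(docker.containers.list, all=True)
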
dependabot[bot]
1f940a04fd Bump actions/setup-python from 4.6.1 to 4.7.0 (#4439)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 09:30:20 +02:00
dependabot[bot]
f771eaab5f Bump sentry-sdk from 1.28.0 to 1.28.1 (#4440)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-17 09:22:26 +02:00
dependabot[bot]
d1379a8154 Bump black from 23.3.0 to 23.7.0 (#4433)
Bumps [black](https://github.com/psf/black) from 23.3.0 to 23.7.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/23.3.0...23.7.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-13 15:27:00 -04:00
dependabot[bot]
e488f02557 Bump gitpython from 3.1.31 to 3.1.32 (#4435)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.31 to 3.1.32.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.31...3.1.32)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 13:55:17 +02:00
dependabot[bot]
f11cc86254 Bump time-machine from 2.10.0 to 2.11.0 (#4432)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 13:27:46 +02:00
dependabot[bot]
175667bfe8 Bump sentry-sdk from 1.27.1 to 1.28.0 (#4436)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 08:46:39 +02:00
dependabot[bot]
0a0f14ddea Bump pyupgrade from 3.8.0 to 3.9.0 (#4430)
Bumps [pyupgrade](https://github.com/asottile/pyupgrade) from 3.8.0 to 3.9.0.
- [Commits](https://github.com/asottile/pyupgrade/compare/v3.8.0...v3.9.0)

---
updated-dependencies:
- dependency-name: pyupgrade
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-10 08:54:19 +02:00
dependabot[bot]
9e08677ade Bump sentry-sdk from 1.27.0 to 1.27.1 (#4427)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-07 14:23:02 +02:00
Mike Degatano
abbf8b9b65 Identify network interfaces by mac over name (#4416)
* Identify network interfaces by mac over name

* Refactor long if statement into method
2023-07-06 16:26:19 -04:00
Mike Degatano
96d5fc244e Separate startup event from update check event (#4425)
* Separate startup event from update check event

* Add a queue for messages sent during startup
2023-07-06 12:45:37 -04:00
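
A sketch of the queued-messages idea: anything sent while Core is still starting is buffered and flushed, in order, once the connection is usable (assumed names, not the actual Supervisor websocket client):

    class WSClient:
        """Sketch: buffer messages sent before Core finishes starting."""

        def __init__(self) -> None:
            self._startup_queue: list[dict] = []
            self._connected = False

        async def _send(self, message: dict) -> None:
            """Stand-in for the real websocket send."""

        async def async_send_message(self, message: dict) -> None:
            if not self._connected:
                # Core is still starting: hold the message instead of
                # dropping it or failing.
                self._startup_queue.append(message)
                return
            await self._send(message)

        async def on_connected(self) -> None:
            self._connected = True
            for message in self._startup_queue:
                await self._send(message)
            self._startup_queue.clear()
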
dependabot[bot]
3b38047fd4 Bump pyupgrade from 3.7.0 to 3.8.0 (#4418)
Bumps [pyupgrade](https://github.com/asottile/pyupgrade) from 3.7.0 to 3.8.0.
- [Commits](https://github.com/asottile/pyupgrade/compare/v3.7.0...v3.8.0)

---
updated-dependencies:
- dependency-name: pyupgrade
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-06 10:07:25 -04:00
262 changed files with 10740 additions and 3705 deletions

View File

@@ -7,34 +7,32 @@
   "appPort": ["9123:8123", "7357:4357"],
   "postCreateCommand": "bash devcontainer_bootstrap",
   "runArgs": ["-e", "GIT_EDITOR=code --wait", "--privileged"],
-  "extensions": [
-    "ms-python.python",
-    "ms-python.vscode-pylance",
-    "visualstudioexptteam.vscodeintellicode",
-    "esbenp.prettier-vscode"
-  ],
-  "mounts": ["type=volume,target=/var/lib/docker"],
-  "settings": {
-    "terminal.integrated.profiles.linux": {
-      "zsh": {
-        "path": "/usr/bin/zsh"
+  "customizations": {
+    "vscode": {
+      "extensions": [
+        "ms-python.python",
+        "ms-python.pylint",
+        "ms-python.vscode-pylance",
+        "visualstudioexptteam.vscodeintellicode",
+        "esbenp.prettier-vscode"
+      ],
+      "settings": {
+        "terminal.integrated.profiles.linux": {
+          "zsh": {
+            "path": "/usr/bin/zsh"
+          }
+        },
+        "terminal.integrated.defaultProfile.linux": "zsh",
+        "editor.formatOnPaste": false,
+        "editor.formatOnSave": true,
+        "editor.formatOnType": true,
+        "files.trimTrailingWhitespace": true,
+        "python.pythonPath": "/usr/local/bin/python3",
+        "python.formatting.provider": "black",
+        "python.formatting.blackArgs": ["--target-version", "py312"],
+        "python.formatting.blackPath": "/usr/local/bin/black"
       }
-    },
-    "terminal.integrated.defaultProfile.linux": "zsh",
-    "editor.formatOnPaste": false,
-    "editor.formatOnSave": true,
-    "editor.formatOnType": true,
-    "files.trimTrailingWhitespace": true,
-    "python.pythonPath": "/usr/local/bin/python3",
-    "python.linting.pylintEnabled": true,
-    "python.linting.enabled": true,
-    "python.formatting.provider": "black",
-    "python.formatting.blackArgs": ["--target-version", "py310"],
-    "python.formatting.blackPath": "/usr/local/bin/black",
-    "python.linting.banditPath": "/usr/local/bin/bandit",
-    "python.linting.flake8Path": "/usr/local/bin/flake8",
-    "python.linting.mypyPath": "/usr/local/bin/mypy",
-    "python.linting.pylintPath": "/usr/local/bin/pylint",
-    "python.linting.pydocstylePath": "/usr/local/bin/pydocstyle"
-  }
+    }
+  },
+  "mounts": ["type=volume,target=/var/lib/docker"]
 }

View File

@@ -33,7 +33,7 @@ on:
       - setup.py
 env:
-  DEFAULT_PYTHON: "3.11"
+  DEFAULT_PYTHON: "3.12"
   BUILD_NAME: supervisor
   BUILD_TYPE: supervisor

@@ -53,7 +53,7 @@ jobs:
       requirements: ${{ steps.requirements.outputs.changed }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
         with:
           fetch-depth: 0

@@ -70,13 +70,13 @@ jobs:
       - name: Get changed files
         id: changed_files
         if: steps.version.outputs.publish == 'false'
-        uses: jitterbit/get-changed-files@v1
+        uses: masesgroup/retrieve-changed-files@v3.0.0
       - name: Check if requirements files changed
         id: requirements
         run: |
-          if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.json) ]]; then
-            echo "::set-output name=changed::true"
+          if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
+            echo "changed=true" >> "$GITHUB_OUTPUT"
           fi
   build:

@@ -92,7 +92,7 @@ jobs:
         arch: ${{ fromJson(needs.init.outputs.architectures) }}
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
         with:
           fetch-depth: 0

@@ -106,13 +106,13 @@ jobs:
       - name: Build wheels
         if: needs.init.outputs.requirements == 'true'
-        uses: home-assistant/wheels@2023.04.0
+        uses: home-assistant/wheels@2024.01.0
         with:
-          abi: cp311
+          abi: cp312
           tag: musllinux_1_2
           arch: ${{ matrix.arch }}
           wheels-key: ${{ secrets.WHEELS_KEY }}
-          apk: "libffi-dev;openssl-dev"
+          apk: "libffi-dev;openssl-dev;yaml-dev"
           skip-binary: aiohttp
           env-file: true
           requirements: "requirements.txt"

@@ -125,20 +125,20 @@ jobs:
       - name: Set up Python ${{ env.DEFAULT_PYTHON }}
         if: needs.init.outputs.publish == 'true'
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.1.1
+        uses: sigstore/cosign-installer@v3.3.0
         with:
           cosign-release: "v2.0.2"
       - name: Install dirhash and calc hash
         if: needs.init.outputs.publish == 'true'
         run: |
-          pip3 install dirhash
+          pip3 install setuptools dirhash
           dir_hash="$(dirhash "${{ github.workspace }}/supervisor" -a sha256 --match "*.py")"
           echo "${dir_hash}" > rootfs/supervisor.sha256

@@ -149,7 +149,7 @@ jobs:
       - name: Login to GitHub Container Registry
         if: needs.init.outputs.publish == 'true'
-        uses: docker/login-action@v2.2.0
+        uses: docker/login-action@v3.0.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}

@@ -160,7 +160,7 @@ jobs:
         run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
       - name: Build supervisor
-        uses: home-assistant/builder@2023.06.1
+        uses: home-assistant/builder@2024.01.0
         with:
           args: |
             $BUILD_ARGS \

@@ -178,7 +178,7 @@ jobs:
     steps:
       - name: Checkout the repository
         if: needs.init.outputs.publish == 'true'
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Initialize git
         if: needs.init.outputs.publish == 'true'

@@ -203,11 +203,11 @@ jobs:
     timeout-minutes: 60
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Build the Supervisor
         if: needs.init.outputs.publish != 'true'
-        uses: home-assistant/builder@2023.06.1
+        uses: home-assistant/builder@2024.01.0
         with:
           args: |
             --test \

@@ -324,7 +324,7 @@ jobs:
           if [ "$(echo $test | jq -r '.result')" != "ok" ]; then
             exit 1
           fi
-          echo "::set-output name=slug::$(echo $test | jq -r '.data.slug')"
+          echo "slug=$(echo $test | jq -r '.data.slug')" >> "$GITHUB_OUTPUT"
       - name: Uninstall SSH add-on
         run: |

View File

@@ -8,8 +8,8 @@ on:
   pull_request: ~
 env:
-  DEFAULT_PYTHON: "3.11"
-  PRE_COMMIT_HOME: ~/.cache/pre-commit
+  DEFAULT_PYTHON: "3.12"
+  PRE_COMMIT_CACHE: ~/.cache/pre-commit
 concurrency:
   group: "${{ github.workflow }}-${{ github.ref }}"

@@ -25,15 +25,15 @@ jobs:
     name: Prepare Python dependencies
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python
         id: python
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         with:
           python-version: ${{ env.DEFAULT_PYTHON }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -47,9 +47,10 @@ jobs:
           pip install -r requirements.txt -r requirements_tests.txt
       - name: Restore pre-commit environment from cache
         id: cache-precommit
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
-          path: ${{ env.PRE_COMMIT_HOME }}
+          path: ${{ env.PRE_COMMIT_CACHE }}
+          lookup-only: true
           key: |
             ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
           restore-keys: |

@@ -66,15 +67,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -87,7 +88,7 @@ jobs:
       - name: Run black
         run: |
           . venv/bin/activate
-          black --target-version py38 --check supervisor tests setup.py
+          black --target-version py312 --check supervisor tests setup.py
   lint-dockerfile:
     name: Check Dockerfile

@@ -95,7 +96,7 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Register hadolint problem matcher
         run: |
           echo "::add-matcher::.github/workflows/matchers/hadolint.json"

@@ -110,15 +111,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -130,9 +131,9 @@ jobs:
           exit 1
       - name: Restore pre-commit environment from cache
         id: cache-precommit
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
-          path: ${{ env.PRE_COMMIT_HOME }}
+          path: ${{ env.PRE_COMMIT_CACHE }}
           key: |
             ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
       - name: Fail job if cache restore failed

@@ -154,15 +155,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -186,15 +187,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -206,9 +207,9 @@ jobs:
           exit 1
       - name: Restore pre-commit environment from cache
         id: cache-precommit
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
-          path: ${{ env.PRE_COMMIT_HOME }}
+          path: ${{ env.PRE_COMMIT_CACHE }}
           key: |
             ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
       - name: Fail job if cache restore failed

@@ -227,15 +228,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -247,9 +248,9 @@ jobs:
           exit 1
       - name: Restore pre-commit environment from cache
         id: cache-precommit
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
-          path: ${{ env.PRE_COMMIT_HOME }}
+          path: ${{ env.PRE_COMMIT_CACHE }}
           key: |
             ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
       - name: Fail job if cache restore failed

@@ -271,15 +272,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -303,15 +304,15 @@ jobs:
     needs: prepare
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -323,9 +324,9 @@ jobs:
           exit 1
       - name: Restore pre-commit environment from cache
         id: cache-precommit
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
-          path: ${{ env.PRE_COMMIT_HOME }}
+          path: ${{ env.PRE_COMMIT_CACHE }}
           key: |
             ${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
       - name: Fail job if cache restore failed

@@ -344,19 +345,19 @@ jobs:
     name: Run tests Python ${{ needs.prepare.outputs.python-version }}
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.1.1
+        uses: sigstore/cosign-installer@v3.3.0
         with:
           cosign-release: "v2.0.2"
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -391,7 +392,7 @@ jobs:
             -o console_output_style=count \
             tests
       - name: Upload coverage artifact
-        uses: actions/upload-artifact@v3.1.2
+        uses: actions/upload-artifact@v4.0.0
         with:
           name: coverage-${{ matrix.python-version }}
           path: .coverage

@@ -402,15 +403,15 @@ jobs:
     needs: ["pytest", "prepare"]
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Set up Python ${{ needs.prepare.outputs.python-version }}
-        uses: actions/setup-python@v4.6.1
+        uses: actions/setup-python@v5.0.0
         id: python
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Restore Python virtual environment
         id: cache-venv
-        uses: actions/cache@v3.3.1
+        uses: actions/cache@v3.3.3
         with:
           path: venv
           key: |

@@ -421,7 +422,7 @@ jobs:
           echo "Failed to restore Python virtual environment from cache"
           exit 1
       - name: Download all coverage artifacts
-        uses: actions/download-artifact@v3
+        uses: actions/download-artifact@v4.1.1
       - name: Combine coverage results
         run: |
           . venv/bin/activate

View File

@@ -9,7 +9,7 @@ jobs:
   lock:
     runs-on: ubuntu-latest
     steps:
-      - uses: dessant/lock-threads@v4.0.1
+      - uses: dessant/lock-threads@v5.0.1
        with:
          github-token: ${{ github.token }}
          issue-inactive-days: "30"

View File

@@ -11,7 +11,7 @@ jobs:
     name: Release Drafter
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
         with:
           fetch-depth: 0

@@ -33,10 +33,10 @@ jobs:
           echo Current version: $latest
           echo New target version: $datepre.$newpost
-          echo "::set-output name=version::$datepre.$newpost"
+          echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
       - name: Run Release Drafter
-        uses: release-drafter/release-drafter@v5.24.0
+        uses: release-drafter/release-drafter@v5.25.0
         with:
           tag: ${{ steps.version.outputs.version }}
           name: ${{ steps.version.outputs.version }}

View File

@@ -10,9 +10,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check out code from GitHub
-        uses: actions/checkout@v3.5.3
+        uses: actions/checkout@v4.1.1
       - name: Sentry Release
-        uses: getsentry/action-release@v1.4.1
+        uses: getsentry/action-release@v1.6.0
         env:
           SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
           SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

View File

@@ -9,7 +9,7 @@ jobs:
   stale:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/stale@v8.0.0
+      - uses: actions/stale@v9.0.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 30

View File

@@ -3,4 +3,5 @@ ignored:
   - DL3006
   - DL3013
   - DL3018
+  - DL3042
   - SC2155

View File

@@ -1,16 +1,16 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 23.1.0
+    rev: 23.12.1
     hooks:
       - id: black
         args:
           - --safe
           - --quiet
           - --target-version
-          - py310
+          - py312
         files: ^((supervisor|tests)/.+)?[^/]+\.py$
   - repo: https://github.com/PyCQA/flake8
-    rev: 6.0.0
+    rev: 7.0.0
     hooks:
       - id: flake8
         additional_dependencies:

@@ -18,17 +18,17 @@ repos:
           - pydocstyle==6.3.0
         files: ^(supervisor|script|tests)/.+\.py$
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.3.0
+    rev: v4.5.0
     hooks:
       - id: check-executables-have-shebangs
         stages: [manual]
       - id: check-json
   - repo: https://github.com/PyCQA/isort
-    rev: 5.12.0
+    rev: 5.13.2
     hooks:
       - id: isort
   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.4.0
+    rev: v3.15.0
     hooks:
       - id: pyupgrade
-        args: [--py310-plus]
+        args: [--py312-plus]

View File

@@ -15,6 +15,7 @@ WORKDIR /usr/src
 RUN \
     set -x \
     && apk add --no-cache \
+        findutils \
         eudev \
         eudev-libs \
         git \

@@ -22,6 +23,7 @@ RUN \
         libpulse \
         musl \
         openssl \
+        yaml \
     \
     && curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
     && chmod a+x /usr/bin/cosign

@@ -30,15 +32,14 @@ RUN \
 COPY requirements.txt .
 RUN \
     export MAKEFLAGS="-j$(nproc)" \
-    && pip3 install --no-cache-dir --no-index --only-binary=:all: --find-links \
-        "https://wheels.home-assistant.io/musllinux/" \
+    && pip3 install --only-binary=:all: \
         -r ./requirements.txt \
     && rm -f requirements.txt

 # Install Home Assistant Supervisor
 COPY . supervisor
 RUN \
-    pip3 install --no-cache-dir -e ./supervisor \
+    pip3 install -e ./supervisor \
     && python3 -m compileall ./supervisor/supervisor

View File

@@ -1,10 +1,10 @@
 image: ghcr.io/home-assistant/{arch}-hassio-supervisor
 build_from:
-  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.11-alpine3.16
-  armhf: ghcr.io/home-assistant/armhf-base-python:3.11-alpine3.16
-  armv7: ghcr.io/home-assistant/armv7-base-python:3.11-alpine3.16
-  amd64: ghcr.io/home-assistant/amd64-base-python:3.11-alpine3.16
-  i386: ghcr.io/home-assistant/i386-base-python:3.11-alpine3.16
+  aarch64: ghcr.io/home-assistant/aarch64-base-python:3.12-alpine3.18
+  armhf: ghcr.io/home-assistant/armhf-base-python:3.12-alpine3.18
+  armv7: ghcr.io/home-assistant/armv7-base-python:3.12-alpine3.18
+  amd64: ghcr.io/home-assistant/amd64-base-python:3.12-alpine3.18
+  i386: ghcr.io/home-assistant/i386-base-python:3.12-alpine3.18
 codenotary:
   signer: notary@home-assistant.io
   base_image: notary@home-assistant.io

View File

@@ -1,45 +0,0 @@
[MASTER]
reports=no
jobs=2

good-names=id,i,j,k,ex,Run,_,fp,T,os

extension-pkg-whitelist=
    ciso8601

# Reasons disabled:
# format - handled by black
# locally-disabled - it spams too much
# duplicate-code - unavoidable
# cyclic-import - doesn't test if both import on load
# abstract-class-not-used - is flaky, should not show up but does
# unused-argument - generic callbacks and setup methods create a lot of warnings
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# abstract-method - with intro of async there are always methods missing
disable=
    format,
    abstract-method,
    cyclic-import,
    duplicate-code,
    locally-disabled,
    no-else-return,
    not-context-manager,
    too-few-public-methods,
    too-many-arguments,
    too-many-branches,
    too-many-instance-attributes,
    too-many-lines,
    too-many-locals,
    too-many-public-methods,
    too-many-return-statements,
    too-many-statements,
    unused-argument,
    consider-using-with

[EXCEPTIONS]
overgeneral-exceptions=builtins.Exception

[TYPECHECK]
ignored-modules = distutils

pyproject.toml (new file, 112 lines)
View File

@@ -0,0 +1,112 @@
[build-system]
requires = ["setuptools~=68.0.0", "wheel~=0.40.0"]
build-backend = "setuptools.build_meta"

[project]
name = "Supervisor"
dynamic = ["version", "dependencies"]
license = { text = "Apache-2.0" }
description = "Open-source private cloud os for Home-Assistant based on HassOS"
readme = "README.md"
authors = [
    { name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
requires-python = ">=3.12.0"

[project.urls]
"Homepage" = "https://www.home-assistant.io/"
"Source Code" = "https://github.com/home-assistant/supervisor"
"Bug Reports" = "https://github.com/home-assistant/supervisor/issues"
"Docs: Dev" = "https://developers.home-assistant.io/"
"Discord" = "https://www.home-assistant.io/join-chat/"
"Forum" = "https://community.home-assistant.io/"

[tool.setuptools]
platforms = ["any"]
zip-safe = false
include-package-data = true

[tool.setuptools.packages.find]
include = ["supervisor*"]

[tool.pylint.MAIN]
py-version = "3.11"
# Use a conservative default here; 2 should speed up most setups and not hurt
# any too bad. Override on command line as appropriate.
jobs = 2
persistent = false
extension-pkg-allow-list = ["ciso8601"]

[tool.pylint.BASIC]
class-const-naming-style = "any"
good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]

[tool.pylint."MESSAGES CONTROL"]
# Reasons disabled:
# format - handled by black
# abstract-method - with intro of async there are always methods missing
# cyclic-import - doesn't test if both import on load
# duplicate-code - unavoidable
# locally-disabled - it spams too much
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# unused-argument - generic callbacks and setup methods create a lot of warnings
disable = [
    "format",
    "abstract-method",
    "cyclic-import",
    "duplicate-code",
    "locally-disabled",
    "no-else-return",
    "not-context-manager",
    "too-few-public-methods",
    "too-many-arguments",
    "too-many-branches",
    "too-many-instance-attributes",
    "too-many-lines",
    "too-many-locals",
    "too-many-public-methods",
    "too-many-return-statements",
    "too-many-statements",
    "unused-argument",
    "consider-using-with",
]

[tool.pylint.REPORTS]
score = false

[tool.pylint.TYPECHECK]
ignored-modules = ["distutils"]

[tool.pylint.FORMAT]
expected-line-ending-format = "LF"

[tool.pylint.EXCEPTIONS]
overgeneral-exceptions = ["builtins.BaseException", "builtins.Exception"]

[tool.pytest.ini_options]
testpaths = ["tests"]
norecursedirs = [".git"]
log_format = "%(asctime)s.%(msecs)03d %(levelname)-8s %(threadName)s %(name)s:%(filename)s:%(lineno)s %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
asyncio_mode = "auto"
filterwarnings = [
    "error",
    "ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
    "ignore::pytest.PytestUnraisableExceptionWarning",
]

[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
line_length = 88
indent = "    "
force_sort_within_sections = true
sections = ["FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"]
default_section = "THIRDPARTY"
forced_separate = "tests"
combine_as_imports = true
use_parentheses = true
known_first_party = ["supervisor", "tests"]

View File

@@ -1,2 +0,0 @@
[pytest]
asyncio_mode = auto

View File

@@ -1,26 +1,30 @@
-aiodns==3.0.0
-aiohttp==3.8.4
-async_timeout==4.0.2
+aiodns==3.1.1
+aiohttp==3.9.1
+aiohttp-fast-url-dispatcher==0.3.0
+async_timeout==4.0.3
 atomicwrites-homeassistant==1.4.1
-attrs==23.1.0
-awesomeversion==23.5.0
-brotli==1.0.9
-ciso8601==2.3.0
-colorlog==6.7.0
+attrs==23.2.0
+awesomeversion==23.11.0
+brotli==1.1.0
+ciso8601==2.3.1
+colorlog==6.8.0
 cpe==1.2.1
-cryptography==41.0.1
-debugpy==1.6.7
-deepmerge==1.1.0
+cryptography==41.0.7
+debugpy==1.8.0
+deepmerge==1.1.1
 dirhash==0.2.1
-docker==6.1.3
-faust-cchardet==2.1.18
-gitpython==3.1.31
-jinja2==3.1.2
+docker==7.0.0
+faust-cchardet==2.1.19
+gitpython==3.1.41
+jinja2==3.1.3
+orjson==3.9.10
 pulsectl==23.5.2
 pyudev==0.24.1
-ruamel.yaml==0.17.21
-securetar==2023.3.0
-sentry-sdk==1.27.0
-voluptuous==0.13.1
-dbus-fast==1.86.0
-typing_extensions==4.7.1
+PyYAML==6.0.1
+securetar==2023.12.0
+sentry-sdk==1.39.2
+setuptools==69.0.3
+voluptuous==0.14.1
+dbus-fast==2.21.0
+typing_extensions==4.9.0
+zlib-fast==0.1.0

View File

@@ -1,16 +1,16 @@
-black==23.3.0
+black==23.12.1
-coverage==7.2.7
+coverage==7.4.0
flake8-docstrings==1.7.0
-flake8==6.0.0
+flake8==7.0.0
-pre-commit==3.3.3
+pre-commit==3.6.0
pydocstyle==6.3.0
-pylint==2.17.4
+pylint==3.0.3
-pytest-aiohttp==1.0.4
+pytest-aiohttp==1.0.5
-pytest-asyncio==0.18.3
+pytest-asyncio==0.23.3
pytest-cov==4.1.0
-pytest-timeout==2.1.0
+pytest-timeout==2.2.0
-pytest==7.4.0
+pytest==7.4.4
-pyupgrade==3.7.0
+pyupgrade==3.15.0
-time-machine==2.10.0
+time-machine==2.13.0
-typing_extensions==4.7.1
+typing_extensions==4.9.0
-urllib3==2.0.3
+urllib3==2.1.0

View File

@@ -15,7 +15,7 @@ do
if [[ "${supervisor_state}" = "running" ]]; then if [[ "${supervisor_state}" = "running" ]]; then
# Check API # Check API
if bashio::supervisor.ping; then if bashio::supervisor.ping > /dev/null; then
failed_count=0 failed_count=0
else else
bashio::log.warning "Maybe found an issue on API healthy" bashio::log.warning "Maybe found an issue on API healthy"

View File

@@ -1,17 +1,3 @@
-[isort]
-multi_line_output = 3
-include_trailing_comma=True
-force_grid_wrap=0
-line_length=88
-indent = "    "
-force_sort_within_sections = true
-sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
-default_section = THIRDPARTY
-forced_separate = tests
-combine_as_imports = true
-use_parentheses = true
-known_first_party = supervisor,tests
[flake8]
exclude = .venv,.git,.tox,docs,venv,bin,lib,deps,build
doctests = True

View File

@@ -1,60 +1,27 @@
"""Home Assistant Supervisor setup.""" """Home Assistant Supervisor setup."""
from pathlib import Path
import re
from setuptools import setup from setuptools import setup
from supervisor.const import SUPERVISOR_VERSION RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
SUPERVISOR_DIR = Path(__file__).parent
REQUIREMENTS_FILE = SUPERVISOR_DIR / "requirements.txt"
CONST_FILE = SUPERVISOR_DIR / "supervisor/const.py"
REQUIREMENTS = REQUIREMENTS_FILE.read_text(encoding="utf-8")
CONSTANTS = CONST_FILE.read_text(encoding="utf-8")
def _get_supervisor_version():
for line in CONSTANTS.split("/n"):
if match := RE_SUPERVISOR_VERSION.match(line):
return match.group(1)
return "99.9.9dev"
setup( setup(
name="Supervisor", version=_get_supervisor_version(),
version=SUPERVISOR_VERSION, dependencies=REQUIREMENTS.split("/n"),
license="BSD License",
author="The Home Assistant Authors",
author_email="hello@home-assistant.io",
url="https://home-assistant.io/",
description=("Open-source private cloud os for Home-Assistant" " based on HassOS"),
long_description=(
"A maintainless private cloud operator system that"
"setup a Home-Assistant instance. Based on HassOS"
),
classifiers=[
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Atmospheric Science",
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.8",
],
keywords=["docker", "home-assistant", "api"],
zip_safe=False,
platforms="any",
packages=[
"supervisor.addons",
"supervisor.api",
"supervisor.backups",
"supervisor.dbus.network",
"supervisor.dbus.network.setting",
"supervisor.dbus",
"supervisor.discovery.services",
"supervisor.discovery",
"supervisor.docker",
"supervisor.homeassistant",
"supervisor.host",
"supervisor.jobs",
"supervisor.misc",
"supervisor.plugins",
"supervisor.resolution.checks",
"supervisor.resolution.evaluations",
"supervisor.resolution.fixups",
"supervisor.resolution",
"supervisor.security",
"supervisor.services.modules",
"supervisor.services",
"supervisor.store",
"supervisor.utils",
"supervisor",
],
include_package_data=True,
) )
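The new setup.py no longer imports the package to learn its version; it scans supervisor/const.py line by line with RE_SUPERVISOR_VERSION and falls back to a dev version. A standalone sketch of that extraction, with sample constants inlined for illustration:

import re

RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
SAMPLE_CONSTANTS = 'API = "v1"\nSUPERVISOR_VERSION = "2024.01.0"\n'


def get_version(constants: str) -> str:
    for line in constants.split("\n"):
        if match := RE_SUPERVISOR_VERSION.match(line):
            return match.group(1)
    return "99.9.9dev"


# Prints the captured group exactly as it appears in const.py
# (quotes included), or the dev fallback when no line matches.
print(get_version(SAMPLE_CONSTANTS))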

View File

@@ -5,7 +5,13 @@ import logging
from pathlib import Path
import sys

-from supervisor import bootstrap
+import zlib_fast
+
+# Enable fast zlib before importing supervisor
+zlib_fast.enable()
+
+from supervisor import bootstrap  # noqa: E402
+from supervisor.utils.logging import activate_log_queue_handler  # noqa: E402

_LOGGER: logging.Logger = logging.getLogger(__name__)

@@ -38,6 +44,8 @@ if __name__ == "__main__":
    executor = ThreadPoolExecutor(thread_name_prefix="SyncWorker")
    loop.set_default_executor(executor)

+    activate_log_queue_handler()
+
    _LOGGER.info("Initializing Supervisor setup")
    coresys = loop.run_until_complete(bootstrap.initialize_coresys())
    loop.set_debug(coresys.config.debug)
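The ordering here matters: zlib_fast.enable() swaps in the isal-backed implementation before any Supervisor module binds names from zlib, which is why the bootstrap import moved below it and carries noqa: E402. A minimal sketch of the pattern, assuming the zlib-fast and isal packages are installed:

import zlib_fast

zlib_fast.enable()

import zlib  # noqa: E402  # now served by isal through zlib-fast

payload = b"supervisor backup payload" * 1024
# Standard zlib API, accelerated backend; the compression-level mapping
# between zlib and isal is handled by zlib-fast.
print(len(payload), "->", len(zlib.compress(payload, 6)))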

View File

@@ -1,457 +1 @@
"""Init file for Supervisor add-ons.""" """Init file for Supervisor add-ons."""
import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import logging
import tarfile
from typing import Union
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
AddonConfigurationError,
AddonsError,
AddonsJobError,
AddonsNotSupportedError,
CoreDNSError,
DockerAPIError,
DockerError,
DockerNotFound,
HomeAssistantAPIError,
HostAppArmorError,
)
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
from ..utils import check_exception_chain
from ..utils.sentry import capture_exception
from .addon import Addon
from .const import ADDON_UPDATE_CONDITIONS
from .data import AddonsData
_LOGGER: logging.Logger = logging.getLogger(__name__)
AnyAddon = Union[Addon, AddonStore]
class AddonManager(CoreSysAttributes):
"""Manage add-ons inside Supervisor."""
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys: CoreSys = coresys
self.data: AddonsData = AddonsData(coresys)
self.local: dict[str, Addon] = {}
self.store: dict[str, AddonStore] = {}
@property
def all(self) -> list[AnyAddon]:
"""Return a list of all add-ons."""
addons: dict[str, AnyAddon] = {**self.store, **self.local}
return list(addons.values())
@property
def installed(self) -> list[Addon]:
"""Return a list of all installed add-ons."""
return list(self.local.values())
def get(self, addon_slug: str, local_only: bool = False) -> AnyAddon | None:
"""Return an add-on from slug.
Prio:
1 - Local
2 - Store
"""
if addon_slug in self.local:
return self.local[addon_slug]
if not local_only:
return self.store.get(addon_slug)
return None
def from_token(self, token: str) -> Addon | None:
"""Return an add-on from Supervisor token."""
for addon in self.installed:
if token == addon.supervisor_token:
return addon
return None
async def load(self) -> None:
"""Start up add-on management."""
tasks = []
for slug in self.data.system:
addon = self.local[slug] = Addon(self.coresys, slug)
tasks.append(self.sys_create_task(addon.load()))
# Run initial tasks
_LOGGER.info("Found %d installed add-ons", len(tasks))
if tasks:
await asyncio.wait(tasks)
# Sync DNS
await self.sync_dns()
async def boot(self, stage: AddonStartup) -> None:
"""Boot add-ons with mode auto."""
tasks: list[Addon] = []
for addon in self.installed:
if addon.boot != AddonBoot.AUTO or addon.startup != stage:
continue
tasks.append(addon)
# Evaluate add-ons which need to be started
_LOGGER.info("Phase '%s' starting %d add-ons", stage, len(tasks))
if not tasks:
return
# Start Add-ons sequential
# avoid issue on slow IO
# Config.wait_boot is deprecated. Until addons update with healthchecks,
# add a sleep task for it to keep the same minimum amount of wait time
wait_boot: list[Awaitable[None]] = [asyncio.sleep(self.sys_config.wait_boot)]
for addon in tasks:
try:
if start_task := await addon.start():
wait_boot.append(start_task)
except AddonsError as err:
# Check if there is an system/user issue
if check_exception_chain(
err, (DockerAPIError, DockerNotFound, AddonConfigurationError)
):
addon.boot = AddonBoot.MANUAL
addon.save_persist()
except Exception as err: # pylint: disable=broad-except
capture_exception(err)
else:
continue
_LOGGER.warning("Can't start Add-on %s", addon.slug)
# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
await asyncio.gather(*wait_boot, return_exceptions=True)
async def shutdown(self, stage: AddonStartup) -> None:
"""Shutdown addons."""
tasks: list[Addon] = []
for addon in self.installed:
if addon.state != AddonState.STARTED or addon.startup != stage:
continue
tasks.append(addon)
# Evaluate add-ons which need to be stopped
_LOGGER.info("Phase '%s' stopping %d add-ons", stage, len(tasks))
if not tasks:
return
# Stop Add-ons sequential
# avoid issue on slow IO
for addon in tasks:
try:
await addon.stop()
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't stop Add-on %s: %s", addon.slug, err)
capture_exception(err)
@Job(
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
)
async def install(self, slug: str) -> None:
"""Install an add-on."""
if slug in self.local:
raise AddonsError(f"Add-on {slug} is already installed", _LOGGER.warning)
store = self.store.get(slug)
if not store:
raise AddonsError(f"Add-on {slug} does not exist", _LOGGER.error)
store.validate_availability()
self.data.install(store)
addon = Addon(self.coresys, slug)
await addon.load()
if not addon.path_data.is_dir():
_LOGGER.info(
"Creating Home Assistant add-on data folder %s", addon.path_data
)
addon.path_data.mkdir()
# Setup/Fix AppArmor profile
await addon.install_apparmor()
try:
await addon.instance.install(store.version, store.image, arch=addon.arch)
except DockerError as err:
self.data.uninstall(addon)
raise AddonsError() from err
self.local[slug] = addon
# Reload ingress tokens
if addon.with_ingress:
await self.sys_ingress.reload()
_LOGGER.info("Add-on '%s' successfully installed", slug)
async def uninstall(self, slug: str) -> None:
"""Remove an add-on."""
if slug not in self.local:
_LOGGER.warning("Add-on %s is not installed", slug)
return
addon = self.local[slug]
try:
await addon.instance.remove()
except DockerError as err:
raise AddonsError() from err
addon.state = AddonState.UNKNOWN
await addon.unload()
# Cleanup audio settings
if addon.path_pulse.exists():
with suppress(OSError):
addon.path_pulse.unlink()
# Cleanup AppArmor profile
with suppress(HostAppArmorError):
await addon.uninstall_apparmor()
# Cleanup Ingress panel from sidebar
if addon.ingress_panel:
addon.ingress_panel = False
with suppress(HomeAssistantAPIError):
await self.sys_ingress.update_hass_panel(addon)
# Cleanup Ingress dynamic port assignment
if addon.with_ingress:
self.sys_create_task(self.sys_ingress.reload())
self.sys_ingress.del_dynamic_port(slug)
# Cleanup discovery data
for message in self.sys_discovery.list_messages:
if message.addon != addon.slug:
continue
self.sys_discovery.remove(message)
# Cleanup services data
for service in self.sys_services.list_services:
if addon.slug not in service.active:
continue
service.del_service_data(addon)
self.data.uninstall(addon)
self.local.pop(slug)
_LOGGER.info("Add-on '%s' successfully removed", slug)
@Job(
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
)
async def update(
self, slug: str, backup: bool | None = False
) -> Awaitable[None] | None:
"""Update add-on.
Returns a coroutine that completes when addon has state 'started' (see addon.start)
if addon is started after update. Else nothing is returned.
"""
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
store = self.store[slug]
if addon.version == store.version:
raise AddonsError(f"No update available for add-on {slug}", _LOGGER.warning)
# Check if available, Maybe something have changed
store.validate_availability()
if backup:
await self.sys_backups.do_backup_partial(
name=f"addon_{addon.slug}_{addon.version}",
homeassistant=False,
addons=[addon.slug],
)
# Update instance
last_state: AddonState = addon.state
old_image = addon.image
try:
await addon.instance.update(store.version, store.image)
except DockerError as err:
raise AddonsError() from err
_LOGGER.info("Add-on '%s' successfully updated", slug)
self.data.update(store)
# Cleanup
with suppress(DockerError):
await addon.instance.cleanup(old_image=old_image)
# Setup/Fix AppArmor profile
await addon.install_apparmor()
# restore state
return (
await addon.start()
if last_state in [AddonState.STARTED, AddonState.STARTUP]
else None
)
@Job(
conditions=[
JobCondition.FREE_SPACE,
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
)
async def rebuild(self, slug: str) -> Awaitable[None] | None:
"""Perform a rebuild of local build add-on.
Returns a coroutine that completes when addon has state 'started' (see addon.start)
if addon is started after rebuild. Else nothing is returned.
"""
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
store = self.store[slug]
# Check if a rebuild is possible now
if addon.version != store.version:
raise AddonsError(
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not addon.need_build:
raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)
# remove docker container but not addon config
last_state: AddonState = addon.state
try:
await addon.instance.remove()
await addon.instance.install(addon.version)
except DockerError as err:
raise AddonsError() from err
self.data.update(store)
_LOGGER.info("Add-on '%s' successfully rebuilt", slug)
# restore state
return (
await addon.start()
if last_state in [AddonState.STARTED, AddonState.STARTUP]
else None
)
@Job(
conditions=[
JobCondition.FREE_SPACE,
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
)
async def restore(
self, slug: str, tar_file: tarfile.TarFile
) -> Awaitable[None] | None:
"""Restore state of an add-on.
Returns a coroutine that completes when addon has state 'started' (see addon.start)
if addon is started after restore. Else nothing is returned.
"""
if slug not in self.local:
_LOGGER.debug("Add-on %s is not local available for restore", slug)
addon = Addon(self.coresys, slug)
else:
_LOGGER.debug("Add-on %s is local available for restore", slug)
addon = self.local[slug]
wait_for_start = await addon.restore(tar_file)
# Check if new
if slug not in self.local:
_LOGGER.info("Detect new Add-on after restore %s", slug)
self.local[slug] = addon
# Update ingress
if addon.with_ingress:
await self.sys_ingress.reload()
with suppress(HomeAssistantAPIError):
await self.sys_ingress.update_hass_panel(addon)
return wait_for_start
@Job(conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST])
async def repair(self) -> None:
"""Repair local add-ons."""
needs_repair: list[Addon] = []
# Evaluate Add-ons to repair
for addon in self.installed:
if await addon.instance.exists():
continue
needs_repair.append(addon)
_LOGGER.info("Found %d add-ons to repair", len(needs_repair))
if not needs_repair:
return
for addon in needs_repair:
_LOGGER.info("Repairing for add-on: %s", addon.slug)
with suppress(DockerError, KeyError):
# Need pull a image again
if not addon.need_build:
await addon.instance.install(addon.version, addon.image)
continue
# Need local lookup
if addon.need_build and not addon.is_detached:
store = self.store[addon.slug]
# If this add-on is available for rebuild
if addon.version == store.version:
await addon.instance.install(addon.version, addon.image)
continue
_LOGGER.error("Can't repair %s", addon.slug)
with suppress(AddonsError):
await self.uninstall(addon.slug)
async def sync_dns(self) -> None:
"""Sync add-ons DNS names."""
# Update hosts
for addon in self.installed:
try:
if not await addon.instance.is_running():
continue
except DockerError as err:
_LOGGER.warning("Add-on %s is corrupt: %s", addon.slug, err)
self.sys_resolution.create_issue(
IssueType.CORRUPT_DOCKER,
ContextType.ADDON,
reference=addon.slug,
suggestions=[SuggestionType.EXECUTE_REPAIR],
)
capture_exception(err)
else:
self.sys_plugins.dns.add_host(
ipv4=addon.ip_address, names=[addon.hostname], write=False
)
# Write hosts files
with suppress(CoreDNSError):
self.sys_plugins.dns.write_hosts()

View File

@@ -3,6 +3,7 @@ import asyncio
from collections.abc import Awaitable
from contextlib import suppress
from copy import deepcopy
+import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path, PurePath
@@ -64,12 +65,15 @@ from ..exceptions import (
    AddonsNotSupportedError,
    ConfigurationFileError,
    DockerError,
+    HomeAssistantAPIError,
    HostAppArmorError,
)
from ..hardware.data import Device
from ..homeassistant.const import WSEvent, WSType
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
+from ..resolution.const import UnhealthyReason
+from ..store.addon import AddonStore
from ..utils import check_port
from ..utils.apparmor import adjust_profile
from ..utils.json import read_json_file, write_json_file
@@ -80,6 +84,7 @@ from .const import (
    WATCHDOG_THROTTLE_MAX_CALLS,
    WATCHDOG_THROTTLE_PERIOD,
    AddonBackupMode,
+    MappingType,
)
from .model import AddonModel, Data
from .options import AddonOptions
@@ -129,54 +134,7 @@ class Addon(AddonModel):
        )
        self._listeners: list[EventListener] = []
        self._startup_event = asyncio.Event()
self._startup_task: asyncio.Task | None = None
@Job(
name=f"addon_{slug}_restart_after_problem",
limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=AddonsJobError,
)
async def restart_after_problem(addon: Addon, state: ContainerState):
"""Restart unhealthy or failed addon."""
attempts = 0
while await addon.instance.current_state() == state:
if not addon.in_progress:
_LOGGER.warning(
"Watchdog found addon %s is %s, restarting...",
addon.name,
state.value,
)
try:
if state == ContainerState.FAILED:
# Ensure failed container is removed before attempting reanimation
if attempts == 0:
with suppress(DockerError):
await addon.instance.stop(remove_container=True)
await (await addon.start())
else:
await (await addon.restart())
except AddonsError as err:
attempts = attempts + 1
_LOGGER.error(
"Watchdog restart of addon %s failed!", addon.name
)
capture_exception(err)
else:
break
if attempts >= WATCHDOG_MAX_ATTEMPTS:
_LOGGER.critical(
"Watchdog cannot restart addon %s, failed all %s attempts",
addon.name,
attempts,
)
break
await asyncio.sleep(WATCHDOG_RETRY_SECONDS)
self._restart_after_problem = restart_after_problem
    def __repr__(self) -> str:
        """Return internal representation."""
@@ -228,6 +186,7 @@ class Addon(AddonModel):
            )
        )

+        await self._check_ingress_port()
        with suppress(DockerError):
            await self.instance.attach(version=self.version)
@@ -246,6 +205,11 @@ class Addon(AddonModel):
"""Return add-on data from store.""" """Return add-on data from store."""
return self.sys_store.data.addons.get(self.slug, self.data) return self.sys_store.data.addons.get(self.slug, self.data)
@property
def addon_store(self) -> AddonStore | None:
"""Return store representation of addon."""
return self.sys_addons.store.get(self.slug)
@property @property
def persist(self) -> Data: def persist(self) -> Data:
"""Return add-on data/config.""" """Return add-on data/config."""
@@ -434,7 +398,7 @@ class Addon(AddonModel):
        port = self.data[ATTR_INGRESS_PORT]
        if port == 0:
-            return self.sys_ingress.get_dynamic_port(self.slug)
+            raise RuntimeError(f"No port set for add-on {self.slug}")
        return port

    @property
@@ -500,6 +464,21 @@ class Addon(AddonModel):
"""Return add-on data path external for Docker.""" """Return add-on data path external for Docker."""
return PurePath(self.sys_config.path_extern_addons_data, self.slug) return PurePath(self.sys_config.path_extern_addons_data, self.slug)
@property
def addon_config_used(self) -> bool:
"""Add-on is using its public config folder."""
return MappingType.ADDON_CONFIG in self.map_volumes
@property
def path_config(self) -> Path:
"""Return add-on config path inside Supervisor."""
return Path(self.sys_config.path_addon_configs, self.slug)
@property
def path_extern_config(self) -> PurePath:
"""Return add-on config path external for Docker."""
return PurePath(self.sys_config.path_extern_addon_configs, self.slug)
@property @property
def path_options(self) -> Path: def path_options(self) -> Path:
"""Return path to add-on options.""" """Return path to add-on options."""
@@ -563,7 +542,7 @@ class Addon(AddonModel):
        # TCP monitoring
        if s_prefix == "tcp":
-            return await self.sys_run_in_executor(check_port, self.ip_address, port)
+            return await check_port(self.ip_address, port)

        # lookup the correct protocol from config
        if t_proto:
@@ -579,7 +558,7 @@ class Addon(AddonModel):
            ) as req:
                if req.status < 300:
                    return True
-        except (asyncio.TimeoutError, aiohttp.ClientError):
+        except (TimeoutError, aiohttp.ClientError):
            pass

        return False
@@ -606,16 +585,201 @@ class Addon(AddonModel):
            raise AddonConfigurationError()

+    @Job(
+        name="addon_unload",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
    async def unload(self) -> None:
        """Unload add-on and remove data."""
+        if self._startup_task:
+            # If we were waiting on startup, cancel that and let the task finish before proceeding
+            self._startup_task.cancel(f"Removing add-on {self.name} from system")
+            with suppress(asyncio.CancelledError):
+                await self._startup_task
+
        for listener in self._listeners:
            self.sys_bus.remove_listener(listener)

-        if not self.path_data.is_dir():
-            return
-        _LOGGER.info("Removing add-on data folder %s", self.path_data)
-        await remove_data(self.path_data)
+        if self.path_data.is_dir():
+            _LOGGER.info("Removing add-on data folder %s", self.path_data)
+            await remove_data(self.path_data)
+
+    async def _check_ingress_port(self):
+        """Assign a ingress port if dynamic port selection is used."""
+        if not self.with_ingress:
+            return
+
+        if self.data[ATTR_INGRESS_PORT] == 0:
+            self.data[ATTR_INGRESS_PORT] = await self.sys_ingress.get_dynamic_port(
+                self.slug
+            )
@Job(
name="addon_install",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def install(self) -> None:
"""Install and setup this addon."""
self.sys_addons.data.install(self.addon_store)
await self.load()
if not self.path_data.is_dir():
_LOGGER.info(
"Creating Home Assistant add-on data folder %s", self.path_data
)
self.path_data.mkdir()
# Setup/Fix AppArmor profile
await self.install_apparmor()
# Install image
try:
await self.instance.install(
self.latest_version, self.addon_store.image, arch=self.arch
)
except DockerError as err:
self.sys_addons.data.uninstall(self)
raise AddonsError() from err
# Add to addon manager
self.sys_addons.local[self.slug] = self
# Reload ingress tokens
if self.with_ingress:
await self.sys_ingress.reload()
@Job(
name="addon_uninstall",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def uninstall(self) -> None:
"""Uninstall and cleanup this addon."""
try:
await self.instance.remove()
except DockerError as err:
raise AddonsError() from err
self.state = AddonState.UNKNOWN
await self.unload()
# Cleanup audio settings
if self.path_pulse.exists():
with suppress(OSError):
self.path_pulse.unlink()
# Cleanup AppArmor profile
with suppress(HostAppArmorError):
await self.uninstall_apparmor()
# Cleanup Ingress panel from sidebar
if self.ingress_panel:
self.ingress_panel = False
with suppress(HomeAssistantAPIError):
await self.sys_ingress.update_hass_panel(self)
# Cleanup Ingress dynamic port assignment
if self.with_ingress:
self.sys_create_task(self.sys_ingress.reload())
self.sys_ingress.del_dynamic_port(self.slug)
# Cleanup discovery data
for message in self.sys_discovery.list_messages:
if message.addon != self.slug:
continue
self.sys_discovery.remove(message)
# Cleanup services data
for service in self.sys_services.list_services:
if self.slug not in service.active:
continue
service.del_service_data(self)
# Remove from addon manager
self.sys_addons.data.uninstall(self)
self.sys_addons.local.pop(self.slug)
@Job(
name="addon_update",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def update(self) -> asyncio.Task | None:
"""Update this addon to latest version.
Returns a Task that completes when addon has state 'started' (see start)
if it was running. Else nothing is returned.
"""
old_image = self.image
# Cache data to prevent races with other updates to global
store = self.addon_store.clone()
try:
await self.instance.update(store.version, store.image, arch=self.arch)
except DockerError as err:
raise AddonsError() from err
# Stop the addon if running
if (last_state := self.state) in {AddonState.STARTED, AddonState.STARTUP}:
await self.stop()
try:
_LOGGER.info("Add-on '%s' successfully updated", self.slug)
self.sys_addons.data.update(store)
await self._check_ingress_port()
# Cleanup
with suppress(DockerError):
await self.instance.cleanup(
old_image=old_image, image=store.image, version=store.version
)
# Setup/Fix AppArmor profile
await self.install_apparmor()
finally:
# restore state. Return Task for caller if no exception
out = (
await self.start()
if last_state in {AddonState.STARTED, AddonState.STARTUP}
else None
)
return out
@Job(
name="addon_rebuild",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def rebuild(self) -> asyncio.Task | None:
"""Rebuild this addons container and image.
Returns a Task that completes when addon has state 'started' (see start)
if it was running. Else nothing is returned.
"""
last_state: AddonState = self.state
try:
# remove docker container but not addon config
try:
await self.instance.remove()
await self.instance.install(self.version)
except DockerError as err:
raise AddonsError() from err
self.sys_addons.data.update(self.addon_store)
_LOGGER.info("Add-on '%s' successfully rebuilt", self.slug)
finally:
# restore state
out = (
await self.start()
if last_state in [AddonState.STARTED, AddonState.STARTUP]
else None
)
return out
    def write_pulse(self) -> None:
        """Write asound config to file and return True on success."""
@@ -631,6 +795,8 @@ class Addon(AddonModel):
        try:
            self.path_pulse.write_text(pulse_config, encoding="utf-8")
        except OSError as err:
+            if err.errno == errno.EBADMSG:
+                self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
            _LOGGER.error(
                "Add-on %s can't write pulse/client.config: %s", self.slug, err
            )
@@ -699,24 +865,34 @@ class Addon(AddonModel):
    async def _wait_for_startup(self) -> None:
        """Wait for startup event to be set with timeout."""
        try:
-            await asyncio.wait_for(self._startup_event.wait(), STARTUP_TIMEOUT)
-        except asyncio.TimeoutError:
+            self._startup_task = self.sys_create_task(self._startup_event.wait())
+            await asyncio.wait_for(self._startup_task, STARTUP_TIMEOUT)
+        except TimeoutError:
            _LOGGER.warning(
-                "Timeout while waiting for addon %s to start, took more then %s seconds",
+                "Timeout while waiting for addon %s to start, took more than %s seconds",
                self.name,
                STARTUP_TIMEOUT,
            )
+        except asyncio.CancelledError as err:
+            _LOGGER.info("Wait for addon startup task cancelled due to: %s", err)
+        finally:
+            self._startup_task = None

+    @Job(
+        name="addon_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
-    async def start(self) -> Awaitable[None]:
+    async def start(self) -> asyncio.Task:
        """Set options and start add-on.

-        Returns a coroutine that completes when addon has state 'started'.
+        Returns a Task that completes when addon has state 'started'.
        For addons with a healthcheck, that is when they become healthy or unhealthy.
        Addons without a healthcheck have state 'started' immediately.
        """
        if await self.instance.is_running():
            _LOGGER.warning("%s is already running!", self.slug)
-            return self._wait_for_startup()
+            return self.sys_create_task(self._wait_for_startup())

        # Access Token
        self.persist[ATTR_ACCESS_TOKEN] = secrets.token_hex(56)
@@ -729,6 +905,18 @@ class Addon(AddonModel):
        if self.with_audio:
            self.write_pulse()

+        def _check_addon_config_dir():
+            if self.path_config.is_dir():
+                return
+
+            _LOGGER.info(
+                "Creating Home Assistant add-on config folder %s", self.path_config
+            )
+            self.path_config.mkdir()
+
+        if self.addon_config_used:
+            await self.sys_run_in_executor(_check_addon_config_dir)
+
        # Start Add-on
        self._startup_event.clear()
try: try:
@@ -737,8 +925,13 @@ class Addon(AddonModel):
            self.state = AddonState.ERROR
            raise AddonsError() from err

-        return self._wait_for_startup()
+        return self.sys_create_task(self._wait_for_startup())

+    @Job(
+        name="addon_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
    async def stop(self) -> None:
        """Stop add-on."""
        self._manual_stop = True
@@ -748,10 +941,15 @@ class Addon(AddonModel):
            self.state = AddonState.ERROR
            raise AddonsError() from err

+    @Job(
+        name="addon_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
-    async def restart(self) -> Awaitable[None]:
+    async def restart(self) -> asyncio.Task:
        """Restart add-on.

-        Returns a coroutine that completes when addon has state 'started' (see start).
+        Returns a Task that completes when addon has state 'started' (see start).
        """
        with suppress(AddonsError):
            await self.stop()
@@ -778,11 +976,13 @@ class Addon(AddonModel):
        except DockerError as err:
            raise AddonsError() from err

+    @Job(
+        name="addon_write_stdin",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
    async def write_stdin(self, data) -> None:
-        """Write data to add-on stdin.
-
-        Return a coroutine.
-        """
+        """Write data to add-on stdin."""
        if not self.with_stdin:
            raise AddonsNotSupportedError(
                f"Add-on {self.slug} does not support writing to stdin!", _LOGGER.error
@@ -810,14 +1010,59 @@ class Addon(AddonModel):
                _LOGGER.error,
            ) from err

+    @Job(
name="addon_begin_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def begin_backup(self) -> bool:
"""Execute pre commands or stop addon if necessary.
Returns value of `is_running`. Caller should not call `end_backup` if return is false.
"""
if not await self.is_running():
return False
if self.backup_mode == AddonBackupMode.COLD:
_LOGGER.info("Shutdown add-on %s for cold backup", self.slug)
await self.stop()
elif self.backup_pre is not None:
await self._backup_command(self.backup_pre)
return True
@Job(
name="addon_end_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
async def end_backup(self) -> asyncio.Task | None:
"""Execute post commands or restart addon if necessary.
Returns a Task that completes when addon has state 'started' (see start)
for cold backup. Else nothing is returned.
"""
if self.backup_mode is AddonBackupMode.COLD:
_LOGGER.info("Starting add-on %s again", self.slug)
return await self.start()
if self.backup_post is not None:
await self._backup_command(self.backup_post)
return None
@Job(
name="addon_backup",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=AddonsJobError,
)
-    async def backup(self, tar_file: tarfile.TarFile) -> Awaitable[None] | None:
+    async def backup(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
        """Backup state of an add-on.

-        Returns a coroutine that completes when addon has state 'started' (see start)
+        Returns a Task that completes when addon has state 'started' (see start)
        for cold backup. Else nothing is returned.
        """
        wait_for_start: Awaitable[None] | None = None
-        is_running = await self.is_running()

        with TemporaryDirectory(dir=self.sys_config.path_tmp) as temp:
            temp_path = Path(temp)
@@ -869,16 +1114,16 @@ class Addon(AddonModel):
arcname="data", arcname="data",
) )
if ( # Backup config
is_running if self.addon_config_used:
and self.backup_mode == AddonBackupMode.HOT atomic_contents_add(
and self.backup_pre is not None backup,
): self.path_config,
await self._backup_command(self.backup_pre) excludes=self.backup_exclude,
elif is_running and self.backup_mode == AddonBackupMode.COLD: arcname="config",
_LOGGER.info("Shutdown add-on %s for cold backup", self.slug) )
await self.instance.stop()
is_running = await self.begin_backup()
try: try:
_LOGGER.info("Building backup for add-on %s", self.slug) _LOGGER.info("Building backup for add-on %s", self.slug)
await self.sys_run_in_executor(_write_tarfile) await self.sys_run_in_executor(_write_tarfile)
@@ -887,23 +1132,21 @@ class Addon(AddonModel):
f"Can't write tarfile {tar_file}: {err}", _LOGGER.error f"Can't write tarfile {tar_file}: {err}", _LOGGER.error
) from err ) from err
finally: finally:
if ( if is_running:
is_running wait_for_start = await self.end_backup()
and self.backup_mode == AddonBackupMode.HOT
and self.backup_post is not None
):
await self._backup_command(self.backup_post)
elif is_running and self.backup_mode is AddonBackupMode.COLD:
_LOGGER.info("Starting add-on %s again", self.slug)
wait_for_start = await self.start()
_LOGGER.info("Finish backup for addon %s", self.slug) _LOGGER.info("Finish backup for addon %s", self.slug)
return wait_for_start return wait_for_start
+    @Job(
+        name="addon_restore",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=AddonsJobError,
+    )
-    async def restore(self, tar_file: tarfile.TarFile) -> Awaitable[None] | None:
+    async def restore(self, tar_file: tarfile.TarFile) -> asyncio.Task | None:
        """Restore state of an add-on.

-        Returns a coroutine that completes when addon has state 'started' (see start)
+        Returns a Task that completes when addon has state 'started' (see start)
        if addon is started after restore. Else nothing is returned.
        """
        wait_for_start: Awaitable[None] | None = None
@@ -912,7 +1155,11 @@ class Addon(AddonModel):
        def _extract_tarfile():
            """Extract tar backup."""
            with tar_file as backup:
-                backup.extractall(path=Path(temp), members=secure_path(backup))
+                backup.extractall(
+                    path=Path(temp),
+                    members=secure_path(backup),
+                    filter="fully_trusted",
+                )

        try:
            await self.sys_run_in_executor(_extract_tarfile)
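The extra filter argument tracks Python 3.12's tarfile extraction filters: without it, extractall() emits a DeprecationWarning (which the pytest filterwarnings above turn into an error), and the future default "data" filter would reject members a Supervisor backup may legitimately contain. A standalone sketch of the call, with a hypothetical archive path:

import tarfile

# "backup.tar" is illustrative. "fully_trusted" keeps pre-3.12 behavior for
# archives we created ourselves, while "data" would strip or deny device
# nodes, absolute paths, and links escaping the destination.
with tarfile.open("backup.tar") as backup:
    backup.extractall(path="restore", filter="fully_trusted")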
@@ -950,64 +1197,81 @@ class Addon(AddonModel):
            self.slug, data[ATTR_USER], data[ATTR_SYSTEM], restore_image
        )

-        # Check version / restore image
-        version = data[ATTR_VERSION]
-        if not await self.instance.exists():
-            _LOGGER.info("Restore/Install of image for addon %s", self.slug)
-
-            image_file = Path(temp, "image.tar")
-            if image_file.is_file():
-                with suppress(DockerError):
-                    await self.instance.import_image(image_file)
-            else:
-                with suppress(DockerError):
-                    await self.instance.install(version, restore_image)
-                    await self.instance.cleanup()
-        elif self.instance.version != version or self.legacy:
-            _LOGGER.info("Restore/Update of image for addon %s", self.slug)
-            with suppress(DockerError):
-                await self.instance.update(version, restore_image)
-        else:
-            with suppress(DockerError):
-                await self.instance.stop()
-
-        # Restore data
-        def _restore_data():
-            """Restore data."""
-            temp_data = Path(temp, "data")
-            if temp_data.is_dir():
-                shutil.copytree(temp_data, self.path_data, symlinks=True)
-            else:
-                self.path_data.mkdir()
-
-        _LOGGER.info("Restoring data for addon %s", self.slug)
-        if self.path_data.is_dir():
-            await remove_data(self.path_data)
-        try:
-            await self.sys_run_in_executor(_restore_data)
-        except shutil.Error as err:
-            raise AddonsError(
-                f"Can't restore origin data: {err}", _LOGGER.error
-            ) from err
-
-        # Restore AppArmor
-        profile_file = Path(temp, "apparmor.txt")
-        if profile_file.exists():
-            try:
-                await self.sys_host.apparmor.load_profile(self.slug, profile_file)
-            except HostAppArmorError as err:
-                _LOGGER.error(
-                    "Can't restore AppArmor profile for add-on %s", self.slug
-                )
-                raise AddonsError() from err
-
-        # Is add-on loaded
-        if not self.loaded:
-            await self.load()
-
-        # Run add-on
-        if data[ATTR_STATE] == AddonState.STARTED:
-            wait_for_start = await self.start()
+        # Stop it first if its running
+        if await self.instance.is_running():
+            await self.stop()
+
+        try:
+            # Check version / restore image
+            version = data[ATTR_VERSION]
+            if not await self.instance.exists():
+                _LOGGER.info("Restore/Install of image for addon %s", self.slug)
+
+                image_file = Path(temp, "image.tar")
+                if image_file.is_file():
+                    with suppress(DockerError):
+                        await self.instance.import_image(image_file)
+                else:
+                    with suppress(DockerError):
+                        await self.instance.install(
+                            version, restore_image, self.arch
+                        )
+                        await self.instance.cleanup()
+            elif self.instance.version != version or self.legacy:
+                _LOGGER.info("Restore/Update of image for addon %s", self.slug)
+                with suppress(DockerError):
+                    await self.instance.update(version, restore_image, self.arch)
+
+            self._check_ingress_port()
+
+            # Restore data and config
+            def _restore_data():
+                """Restore data and config."""
+                temp_data = Path(temp, "data")
+                if temp_data.is_dir():
+                    shutil.copytree(temp_data, self.path_data, symlinks=True)
+                else:
+                    self.path_data.mkdir()
+
+                temp_config = Path(temp, "config")
+                if temp_config.is_dir():
+                    shutil.copytree(temp_config, self.path_config, symlinks=True)
+                elif self.addon_config_used:
+                    self.path_config.mkdir()
+
+            _LOGGER.info("Restoring data and config for addon %s", self.slug)
+            if self.path_data.is_dir():
+                await remove_data(self.path_data)
+            if self.path_config.is_dir():
+                await remove_data(self.path_config)
+
+            try:
+                await self.sys_run_in_executor(_restore_data)
+            except shutil.Error as err:
+                raise AddonsError(
+                    f"Can't restore origin data: {err}", _LOGGER.error
+                ) from err
+
+            # Restore AppArmor
+            profile_file = Path(temp, "apparmor.txt")
+            if profile_file.exists():
+                try:
+                    await self.sys_host.apparmor.load_profile(
+                        self.slug, profile_file
+                    )
+                except HostAppArmorError as err:
+                    _LOGGER.error(
+                        "Can't restore AppArmor profile for add-on %s", self.slug
+                    )
+                    raise AddonsError() from err
+
+            # Is add-on loaded
+            if not self.loaded:
+                await self.load()
+        finally:
+            # Run add-on
+            if data[ATTR_STATE] == AddonState.STARTED:
+                wait_for_start = await self.start()

        _LOGGER.info("Finished restore for add-on %s", self.slug)
        return wait_for_start
@@ -1019,6 +1283,50 @@ class Addon(AddonModel):
""" """
return self.instance.check_trust() return self.instance.check_trust()
@Job(
name="addon_restart_after_problem",
limit=JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
throttle_period=WATCHDOG_THROTTLE_PERIOD,
throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
on_condition=AddonsJobError,
)
async def _restart_after_problem(self, state: ContainerState):
"""Restart unhealthy or failed addon."""
attempts = 0
while await self.instance.current_state() == state:
if not self.in_progress:
_LOGGER.warning(
"Watchdog found addon %s is %s, restarting...",
self.name,
state,
)
try:
if state == ContainerState.FAILED:
# Ensure failed container is removed before attempting reanimation
if attempts == 0:
with suppress(DockerError):
await self.instance.stop(remove_container=True)
await (await self.start())
else:
await (await self.restart())
except AddonsError as err:
attempts = attempts + 1
_LOGGER.error("Watchdog restart of addon %s failed!", self.name)
capture_exception(err)
else:
break
if attempts >= WATCHDOG_MAX_ATTEMPTS:
_LOGGER.critical(
"Watchdog cannot restart addon %s, failed all %s attempts",
self.name,
attempts,
)
break
await asyncio.sleep(WATCHDOG_RETRY_SECONDS)
    async def container_state_changed(self, event: DockerContainerStateEvent) -> None:
        """Set addon state from container state."""
        if event.name != self.instance.name:
@@ -1053,4 +1361,4 @@ class Addon(AddonModel):
            ContainerState.STOPPED,
            ContainerState.UNHEALTHY,
        ]:
-            await self._restart_after_problem(self, event.state)
+            await self._restart_after_problem(event.state)

View File

@@ -0,0 +1,11 @@
"""Confgiuration Objects for Addon Config."""
from dataclasses import dataclass
@dataclass(slots=True)
class FolderMapping:
"""Represent folder mapping configuration."""
path: str | None
read_only: bool

View File

@@ -1,19 +1,36 @@
"""Add-on static data.""" """Add-on static data."""
from datetime import timedelta from datetime import timedelta
from enum import Enum from enum import StrEnum
from ..jobs.const import JobCondition from ..jobs.const import JobCondition
class AddonBackupMode(str, Enum): class AddonBackupMode(StrEnum):
"""Backup mode of an Add-on.""" """Backup mode of an Add-on."""
HOT = "hot" HOT = "hot"
COLD = "cold" COLD = "cold"
class MappingType(StrEnum):
"""Mapping type of an Add-on Folder."""
DATA = "data"
CONFIG = "config"
SSL = "ssl"
ADDONS = "addons"
BACKUP = "backup"
SHARE = "share"
MEDIA = "media"
HOMEASSISTANT_CONFIG = "homeassistant_config"
ALL_ADDON_CONFIGS = "all_addon_configs"
ADDON_CONFIG = "addon_config"
ATTR_BACKUP = "backup" ATTR_BACKUP = "backup"
ATTR_CODENOTARY = "codenotary" ATTR_CODENOTARY = "codenotary"
ATTR_READ_ONLY = "read_only"
ATTR_PATH = "path"
WATCHDOG_RETRY_SECONDS = 10 WATCHDOG_RETRY_SECONDS = 10
WATCHDOG_MAX_ATTEMPTS = 5 WATCHDOG_MAX_ATTEMPTS = 5
WATCHDOG_THROTTLE_PERIOD = timedelta(minutes=30) WATCHDOG_THROTTLE_PERIOD = timedelta(minutes=30)
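Moving from class AddonBackupMode(str, Enum) to StrEnum is behavior-preserving on Python 3.11+: members are real strings, so the string comparisons and serialization used elsewhere keep working. A minimal sketch:

from enum import StrEnum


class AddonBackupMode(StrEnum):
    HOT = "hot"
    COLD = "cold"


assert AddonBackupMode.COLD == "cold"       # members compare equal to their value
assert f"{AddonBackupMode.COLD}" == "cold"  # str()/format() yield the value, unlike plain Enum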

View File

@@ -0,0 +1,374 @@
"""Supervisor add-on manager."""
import asyncio
from collections.abc import Awaitable
from contextlib import suppress
import logging
import tarfile
from typing import Union
from ..const import AddonBoot, AddonStartup, AddonState
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
AddonConfigurationError,
AddonsError,
AddonsJobError,
AddonsNotSupportedError,
CoreDNSError,
DockerAPIError,
DockerError,
DockerNotFound,
HassioError,
HomeAssistantAPIError,
)
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..store.addon import AddonStore
from ..utils import check_exception_chain
from ..utils.sentry import capture_exception
from .addon import Addon
from .const import ADDON_UPDATE_CONDITIONS
from .data import AddonsData
_LOGGER: logging.Logger = logging.getLogger(__name__)
AnyAddon = Union[Addon, AddonStore]
class AddonManager(CoreSysAttributes):
"""Manage add-ons inside Supervisor."""
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys: CoreSys = coresys
self.data: AddonsData = AddonsData(coresys)
self.local: dict[str, Addon] = {}
self.store: dict[str, AddonStore] = {}
@property
def all(self) -> list[AnyAddon]:
"""Return a list of all add-ons."""
addons: dict[str, AnyAddon] = {**self.store, **self.local}
return list(addons.values())
@property
def installed(self) -> list[Addon]:
"""Return a list of all installed add-ons."""
return list(self.local.values())
def get(self, addon_slug: str, local_only: bool = False) -> AnyAddon | None:
"""Return an add-on from slug.
Prio:
1 - Local
2 - Store
"""
if addon_slug in self.local:
return self.local[addon_slug]
if not local_only:
return self.store.get(addon_slug)
return None
def from_token(self, token: str) -> Addon | None:
"""Return an add-on from Supervisor token."""
for addon in self.installed:
if token == addon.supervisor_token:
return addon
return None
async def load(self) -> None:
"""Start up add-on management."""
tasks = []
for slug in self.data.system:
addon = self.local[slug] = Addon(self.coresys, slug)
tasks.append(self.sys_create_task(addon.load()))
# Run initial tasks
_LOGGER.info("Found %d installed add-ons", len(tasks))
if tasks:
await asyncio.wait(tasks)
# Sync DNS
await self.sync_dns()
async def boot(self, stage: AddonStartup) -> None:
"""Boot add-ons with mode auto."""
tasks: list[Addon] = []
for addon in self.installed:
if addon.boot != AddonBoot.AUTO or addon.startup != stage:
continue
tasks.append(addon)
# Evaluate add-ons which need to be started
_LOGGER.info("Phase '%s' starting %d add-ons", stage, len(tasks))
if not tasks:
return
# Start Add-ons sequential
# avoid issue on slow IO
# Config.wait_boot is deprecated. Until addons update with healthchecks,
# add a sleep task for it to keep the same minimum amount of wait time
wait_boot: list[Awaitable[None]] = [asyncio.sleep(self.sys_config.wait_boot)]
for addon in tasks:
try:
if start_task := await addon.start():
wait_boot.append(start_task)
except AddonsError as err:
# Check if there is an system/user issue
if check_exception_chain(
err, (DockerAPIError, DockerNotFound, AddonConfigurationError)
):
addon.boot = AddonBoot.MANUAL
addon.save_persist()
except HassioError:
pass # These are already handled
else:
continue
_LOGGER.warning("Can't start Add-on %s", addon.slug)
# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
await asyncio.gather(*wait_boot, return_exceptions=True)
async def shutdown(self, stage: AddonStartup) -> None:
"""Shutdown addons."""
tasks: list[Addon] = []
for addon in self.installed:
if addon.state != AddonState.STARTED or addon.startup != stage:
continue
tasks.append(addon)
# Evaluate add-ons which need to be stopped
_LOGGER.info("Phase '%s' stopping %d add-ons", stage, len(tasks))
if not tasks:
return
# Stop Add-ons sequential
# avoid issue on slow IO
for addon in tasks:
try:
await addon.stop()
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't stop Add-on %s: %s", addon.slug, err)
capture_exception(err)
@Job(
name="addon_manager_install",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
)
async def install(self, slug: str) -> None:
"""Install an add-on."""
self.sys_jobs.current.reference = slug
if slug in self.local:
raise AddonsError(f"Add-on {slug} is already installed", _LOGGER.warning)
store = self.store.get(slug)
if not store:
raise AddonsError(f"Add-on {slug} does not exist", _LOGGER.error)
store.validate_availability()
await Addon(self.coresys, slug).install()
_LOGGER.info("Add-on '%s' successfully installed", slug)
async def uninstall(self, slug: str) -> None:
"""Remove an add-on."""
if slug not in self.local:
_LOGGER.warning("Add-on %s is not installed", slug)
return
await self.local[slug].uninstall()
_LOGGER.info("Add-on '%s' successfully removed", slug)
@Job(
name="addon_manager_update",
conditions=ADDON_UPDATE_CONDITIONS,
on_condition=AddonsJobError,
)
async def update(
self, slug: str, backup: bool | None = False
) -> asyncio.Task | None:
"""Update add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after update. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
store = self.store[slug]
if addon.version == store.version:
raise AddonsError(f"No update available for add-on {slug}", _LOGGER.warning)
# Check if available, Maybe something have changed
store.validate_availability()
if backup:
await self.sys_backups.do_backup_partial(
name=f"addon_{addon.slug}_{addon.version}",
homeassistant=False,
addons=[addon.slug],
)
return await addon.update()
@Job(
name="addon_manager_rebuild",
conditions=[
JobCondition.FREE_SPACE,
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
)
async def rebuild(self, slug: str) -> asyncio.Task | None:
"""Perform a rebuild of local build add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after rebuild. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
addon = self.local[slug]
if addon.is_detached:
raise AddonsError(
f"Add-on {slug} is not available inside store", _LOGGER.error
)
store = self.store[slug]
# Check if a rebuild is possible now
if addon.version != store.version:
raise AddonsError(
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not addon.need_build:
raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)
return await addon.rebuild()
@Job(
name="addon_manager_restore",
conditions=[
JobCondition.FREE_SPACE,
JobCondition.INTERNET_HOST,
JobCondition.HEALTHY,
],
on_condition=AddonsJobError,
)
async def restore(
self, slug: str, tar_file: tarfile.TarFile
) -> asyncio.Task | None:
"""Restore state of an add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)
if addon is started after restore. Else nothing is returned.
"""
self.sys_jobs.current.reference = slug
if slug not in self.local:
_LOGGER.debug("Add-on %s is not local available for restore", slug)
addon = Addon(self.coresys, slug)
had_ingress = False
else:
_LOGGER.debug("Add-on %s is local available for restore", slug)
addon = self.local[slug]
had_ingress = addon.ingress_panel
wait_for_start = await addon.restore(tar_file)
# Check if new
if slug not in self.local:
_LOGGER.info("Detect new Add-on after restore %s", slug)
self.local[slug] = addon
# Update ingress
if had_ingress != addon.ingress_panel:
await self.sys_ingress.reload()
with suppress(HomeAssistantAPIError):
await self.sys_ingress.update_hass_panel(addon)
return wait_for_start
@Job(
name="addon_manager_repair",
conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST],
)
async def repair(self) -> None:
"""Repair local add-ons."""
needs_repair: list[Addon] = []
# Evaluate Add-ons to repair
for addon in self.installed:
if await addon.instance.exists():
continue
needs_repair.append(addon)
_LOGGER.info("Found %d add-ons to repair", len(needs_repair))
if not needs_repair:
return
for addon in needs_repair:
_LOGGER.info("Repairing for add-on: %s", addon.slug)
with suppress(DockerError, KeyError):
# Need pull a image again
if not addon.need_build:
await addon.instance.install(addon.version, addon.image)
continue
# Need local lookup
if addon.need_build and not addon.is_detached:
store = self.store[addon.slug]
# If this add-on is available for rebuild
if addon.version == store.version:
await addon.instance.install(addon.version, addon.image)
continue
_LOGGER.error("Can't repair %s", addon.slug)
with suppress(AddonsError):
await self.uninstall(addon.slug)
async def sync_dns(self) -> None:
"""Sync add-ons DNS names."""
# Update hosts
add_host_coros: list[Awaitable[None]] = []
for addon in self.installed:
try:
if not await addon.instance.is_running():
continue
except DockerError as err:
_LOGGER.warning("Add-on %s is corrupt: %s", addon.slug, err)
self.sys_resolution.create_issue(
IssueType.CORRUPT_DOCKER,
ContextType.ADDON,
reference=addon.slug,
suggestions=[SuggestionType.EXECUTE_REPAIR],
)
capture_exception(err)
else:
add_host_coros.append(
self.sys_plugins.dns.add_host(
ipv4=addon.ip_address, names=[addon.hostname], write=False
)
)
await asyncio.gather(*add_host_coros)
# Write hosts files
with suppress(CoreDNSError):
await self.sys_plugins.dns.write_hosts()
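sync_dns now batches the add_host coroutines with write=False and persists the hosts file once at the end, instead of writing once per add-on. A self-contained sketch of that batching pattern (add_host and write_hosts below are illustrative stand-ins, not the Supervisor API):

import asyncio

HOSTS: dict[str, str] = {}


async def write_hosts() -> None:
    print("writing hosts:", HOSTS)


async def add_host(ipv4: str, name: str, write: bool = True) -> None:
    HOSTS[name] = ipv4
    if write:
        await write_hosts()


async def main() -> None:
    coros = [add_host(f"172.30.33.{i}", f"addon-{i}", write=False) for i in range(3)]
    await asyncio.gather(*coros)
    await write_hosts()  # one write instead of three


asyncio.run(main())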

View File

@@ -1,6 +1,7 @@
"""Init file for Supervisor add-ons.""" """Init file for Supervisor add-ons."""
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from collections.abc import Awaitable, Callable from collections import defaultdict
from collections.abc import Callable
from contextlib import suppress from contextlib import suppress
import logging import logging
from pathlib import Path from pathlib import Path
@@ -64,6 +65,7 @@ from ..const import (
    ATTR_TIMEOUT,
    ATTR_TMPFS,
    ATTR_TRANSLATIONS,
+    ATTR_TYPE,
    ATTR_UART,
    ATTR_UDEV,
    ATTR_URL,
@@ -79,24 +81,37 @@ from ..const import (
    AddonStage,
    AddonStartup,
)
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
from ..docker.const import Capabilities
from ..exceptions import AddonsNotSupportedError
-from .const import ATTR_BACKUP, ATTR_CODENOTARY, AddonBackupMode
+from ..jobs.const import JOB_GROUP_ADDON
+from ..jobs.job_group import JobGroup
+from ..utils import version_is_new_enough
+from .configuration import FolderMapping
+from .const import (
+    ATTR_BACKUP,
+    ATTR_CODENOTARY,
+    ATTR_PATH,
+    ATTR_READ_ONLY,
+    AddonBackupMode,
+    MappingType,
+)
from .options import AddonOptions, UiOptions
-from .validate import RE_SERVICE, RE_VOLUME
+from .validate import RE_SERVICE

_LOGGER: logging.Logger = logging.getLogger(__name__)

Data = dict[str, Any]


-class AddonModel(CoreSysAttributes, ABC):
+class AddonModel(JobGroup, ABC):
    """Add-on Data layout."""

    def __init__(self, coresys: CoreSys, slug: str):
        """Initialize data holder."""
-        self.coresys: CoreSys = coresys
+        super().__init__(
+            coresys, JOB_GROUP_ADDON.format_map(defaultdict(str, slug=slug)), slug
+        )
        self.slug: str = slug

    @property
@@ -532,14 +547,13 @@ class AddonModel(CoreSysAttributes, ABC):
        return ATTR_IMAGE not in self.data

    @property
-    def map_volumes(self) -> dict[str, bool]:
-        """Return a dict of {volume: read-only} from add-on."""
+    def map_volumes(self) -> dict[MappingType, FolderMapping]:
+        """Return a dict of {MappingType: FolderMapping} from add-on."""
        volumes = {}
        for volume in self.data[ATTR_MAP]:
-            result = RE_VOLUME.match(volume)
-            if not result:
-                continue
-            volumes[result.group(1)] = result.group(2) != "rw"
+            volumes[MappingType(volume[ATTR_TYPE])] = FolderMapping(
+                volume.get(ATTR_PATH), volume[ATTR_READ_ONLY]
+            )

        return volumes
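map_volumes now returns structured FolderMapping objects keyed by MappingType instead of a {name: read_only} dict parsed out of "name:rw" strings. A standalone sketch of the new shape on Python 3.11+ (raw_map and the trimmed-down enum are illustrative stand-ins for add-on config data):

from dataclasses import dataclass
from enum import StrEnum


class MappingType(StrEnum):
    CONFIG = "config"
    SHARE = "share"


@dataclass(slots=True)
class FolderMapping:
    path: str | None
    read_only: bool


raw_map = [
    {"type": "config", "read_only": True, "path": None},
    {"type": "share", "read_only": False, "path": "/custom/share"},
]

volumes = {
    MappingType(entry["type"]): FolderMapping(entry.get("path"), entry["read_only"])
    for entry in raw_map
}
print(volumes[MappingType.SHARE].path)  # /custom/share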
@@ -640,7 +654,9 @@ class AddonModel(CoreSysAttributes, ABC):
# Home Assistant # Home Assistant
version: AwesomeVersion | None = config.get(ATTR_HOMEASSISTANT) version: AwesomeVersion | None = config.get(ATTR_HOMEASSISTANT)
with suppress(AwesomeVersionException, TypeError): with suppress(AwesomeVersionException, TypeError):
if self.sys_homeassistant.version < version: if version and not version_is_new_enough(
self.sys_homeassistant.version, version
):
raise AddonsNotSupportedError( raise AddonsNotSupportedError(
f"Add-on {self.slug} not supported on this system, requires Home Assistant version {version} or greater", f"Add-on {self.slug} not supported on this system, requires Home Assistant version {version} or greater",
logger, logger,
@@ -664,19 +680,3 @@ class AddonModel(CoreSysAttributes, ABC):
# local build # local build
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default}-addon-{config[ATTR_SLUG]}" return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default}-addon-{config[ATTR_SLUG]}"
def install(self) -> Awaitable[None]:
"""Install this add-on."""
return self.sys_addons.install(self.slug)
def uninstall(self) -> Awaitable[None]:
"""Uninstall this add-on."""
return self.sys_addons.uninstall(self.slug)
def update(self, backup: bool | None = False) -> Awaitable[Awaitable[None] | None]:
"""Update this add-on."""
return self.sys_addons.update(self.slug, backup=backup)
def rebuild(self) -> Awaitable[Awaitable[None] | None]:
"""Rebuild this add-on."""
return self.sys_addons.rebuild(self.slug)

View File

@@ -81,6 +81,7 @@ from ..const import (
     ATTR_TIMEOUT,
     ATTR_TMPFS,
     ATTR_TRANSLATIONS,
+    ATTR_TYPE,
     ATTR_UART,
     ATTR_UDEV,
     ATTR_URL,
@@ -109,12 +110,22 @@ from ..validate import (
     uuid_match,
     version_tag,
 )
-from .const import ATTR_BACKUP, ATTR_CODENOTARY, RE_SLUG, AddonBackupMode
+from .const import (
+    ATTR_BACKUP,
+    ATTR_CODENOTARY,
+    ATTR_PATH,
+    ATTR_READ_ONLY,
+    RE_SLUG,
+    AddonBackupMode,
+    MappingType,
+)
 from .options import RE_SCHEMA_ELEMENT
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
 
-RE_VOLUME = re.compile(r"^(config|ssl|addons|backup|share|media)(?::(rw|ro))?$")
+RE_VOLUME = re.compile(
+    r"^(data|config|ssl|addons|backup|share|media|homeassistant_config|all_addon_configs|addon_config)(?::(rw|ro))?$"
+)
 RE_SERVICE = re.compile(r"^(?P<service>mqtt|mysql):(?P<rights>provide|want|need)$")
@@ -143,6 +154,9 @@ RE_MACHINE = re.compile(
     r"|raspberrypi3"
     r"|raspberrypi4-64"
     r"|raspberrypi4"
+    r"|raspberrypi5-64"
+    r"|yellow"
+    r"|green"
     r"|tinker"
     r")$"
 )
@@ -175,6 +189,20 @@ def _warn_addon_config(config: dict[str, Any]):
             name,
         )
 
+    invalid_services: list[str] = []
+    for service in config.get(ATTR_DISCOVERY, []):
+        try:
+            valid_discovery_service(service)
+        except vol.Invalid:
+            invalid_services.append(service)
+
+    if invalid_services:
+        _LOGGER.warning(
+            "Add-on lists the following unknown services for discovery: %s. Please report this to the maintainer of %s",
+            ", ".join(invalid_services),
+            name,
+        )
+
     return config
@@ -196,9 +224,9 @@ def _migrate_addon_config(protocol=False):
                 name,
             )
             if value == "before":
-                config[ATTR_STARTUP] = AddonStartup.SERVICES.value
+                config[ATTR_STARTUP] = AddonStartup.SERVICES
             elif value == "after":
-                config[ATTR_STARTUP] = AddonStartup.APPLICATION.value
+                config[ATTR_STARTUP] = AddonStartup.APPLICATION
 
         # UART 2021-01-20
         if "auto_uart" in config:
@@ -244,6 +272,48 @@ def _migrate_addon_config(protocol=False):
                 name,
             )
 
+        # 2023-11 "map" entries can also be dict to allow path configuration
+        volumes = []
+        for entry in config.get(ATTR_MAP, []):
+            if isinstance(entry, dict):
+                volumes.append(entry)
+            if isinstance(entry, str):
+                result = RE_VOLUME.match(entry)
+                if not result:
+                    continue
+                volumes.append(
+                    {
+                        ATTR_TYPE: result.group(1),
+                        ATTR_READ_ONLY: result.group(2) != "rw",
+                    }
+                )
+
+        if volumes:
+            config[ATTR_MAP] = volumes
+
+        # 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
+        if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
+            if any(
+                volume
+                and volume[ATTR_TYPE]
+                in {MappingType.ADDON_CONFIG, MappingType.HOMEASSISTANT_CONFIG}
+                for volume in volumes
+            ):
+                _LOGGER.warning(
+                    "Add-on config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
+                    MappingType.ADDON_CONFIG,
+                    MappingType.HOMEASSISTANT_CONFIG,
+                    MappingType.CONFIG,
+                    name,
+                )
+            else:
+                _LOGGER.debug(
+                    "Add-on config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
+                    MappingType.CONFIG,
+                    MappingType.HOMEASSISTANT_CONFIG,
+                    name,
+                )
+
         return config
 
     return _migrate
@@ -292,7 +362,15 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
         vol.Optional(ATTR_DEVICES): [str],
         vol.Optional(ATTR_UDEV, default=False): vol.Boolean(),
         vol.Optional(ATTR_TMPFS, default=False): vol.Boolean(),
-        vol.Optional(ATTR_MAP, default=list): [vol.Match(RE_VOLUME)],
+        vol.Optional(ATTR_MAP, default=list): [
+            vol.Schema(
+                {
+                    vol.Required(ATTR_TYPE): vol.Coerce(MappingType),
+                    vol.Optional(ATTR_READ_ONLY, default=True): bool,
+                    vol.Optional(ATTR_PATH): str,
+                }
+            )
+        ],
         vol.Optional(ATTR_ENVIRONMENT): {vol.Match(r"\w*"): str},
         vol.Optional(ATTR_PRIVILEGED): [vol.Coerce(Capabilities)],
         vol.Optional(ATTR_APPARMOR, default=True): vol.Boolean(),
@@ -313,7 +391,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
         vol.Optional(ATTR_DOCKER_API, default=False): vol.Boolean(),
         vol.Optional(ATTR_AUTH_API, default=False): vol.Boolean(),
         vol.Optional(ATTR_SERVICES): [vol.Match(RE_SERVICE)],
-        vol.Optional(ATTR_DISCOVERY): [valid_discovery_service],
+        vol.Optional(ATTR_DISCOVERY): [str],
         vol.Optional(ATTR_BACKUP_EXCLUDE): [str],
         vol.Optional(ATTR_BACKUP_PRE): str,
         vol.Optional(ATTR_BACKUP_POST): str,
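For add-on authors, the practical effect of the `map` changes above is that legacy `"<folder>:<rw|ro>"` strings keep working and are rewritten into the new dict form during migration. A self-contained sketch of that conversion (reduced names, the real logic is `_migrate_addon_config` above):

import re

RE_VOLUME = re.compile(r"^(config|ssl|addons|backup|share|media)(?::(rw|ro))?$")


def migrate_map_entry(entry: str | dict) -> dict:
    """Rewrite a legacy map string into the new dict layout."""
    if isinstance(entry, dict):
        return entry  # already in the 2023-11 format
    result = RE_VOLUME.match(entry)
    return {"type": result.group(1), "read_only": result.group(2) != "rw"}


assert migrate_map_entry("config:rw") == {"type": "config", "read_only": False}
assert migrate_map_entry("ssl") == {"type": "ssl", "read_only": True}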

View File

@@ -5,6 +5,7 @@ from pathlib import Path
 from typing import Any
 
 from aiohttp import web
+from aiohttp_fast_url_dispatcher import FastUrlDispatcher, attach_fast_url_dispatcher
 
 from ..const import AddonState
 from ..coresys import CoreSys, CoreSysAttributes
@@ -64,9 +65,10 @@ class RestAPI(CoreSysAttributes):
                 "max_field_size": MAX_LINE_SIZE,
             },
         )
+        attach_fast_url_dispatcher(self.webapp, FastUrlDispatcher())
 
         # service stuff
-        self._runner: web.AppRunner = web.AppRunner(self.webapp)
+        self._runner: web.AppRunner = web.AppRunner(self.webapp, shutdown_timeout=5)
         self._site: web.TCPSite | None = None
 
     async def load(self) -> None:
@@ -186,6 +188,8 @@
         # Boards endpoints
         self.webapp.add_routes(
             [
+                web.get("/os/boards/green", api_os.boards_green_info),
+                web.post("/os/boards/green", api_os.boards_green_options),
                 web.get("/os/boards/yellow", api_os.boards_yellow_info),
                 web.post("/os/boards/yellow", api_os.boards_yellow_options),
                 web.get("/os/boards/{board}", api_os.boards_other_info),
@@ -485,6 +489,8 @@
                 web.get("/backups/info", api_backups.info),
                 web.post("/backups/options", api_backups.options),
                 web.post("/backups/reload", api_backups.reload),
+                web.post("/backups/freeze", api_backups.freeze),
+                web.post("/backups/thaw", api_backups.thaw),
                 web.post("/backups/new/full", api_backups.backup_full),
                 web.post("/backups/new/partial", api_backups.backup_partial),
                 web.post("/backups/new/upload", api_backups.upload),
@@ -667,9 +673,7 @@
     async def start(self) -> None:
         """Run RESTful API webserver."""
         await self._runner.setup()
-        self._site = web.TCPSite(
-            self._runner, host="0.0.0.0", port=80, shutdown_timeout=5
-        )
+        self._site = web.TCPSite(self._runner, host="0.0.0.0", port=80)
 
         try:
             await self._site.start()

View File

@@ -8,8 +8,8 @@ from aiohttp import web
 import voluptuous as vol
 from voluptuous.humanize import humanize_error
 
-from ..addons import AnyAddon
 from ..addons.addon import Addon
+from ..addons.manager import AnyAddon
 from ..addons.utils import rating_security
 from ..const import (
     ATTR_ADDONS,
@@ -388,7 +388,7 @@
     def uninstall(self, request: web.Request) -> Awaitable[None]:
         """Uninstall add-on."""
         addon = self._extract_addon(request)
-        return asyncio.shield(addon.uninstall())
+        return asyncio.shield(self.sys_addons.uninstall(addon.slug))
 
     @api_process
     async def start(self, request: web.Request) -> None:
@@ -414,7 +414,7 @@
     async def rebuild(self, request: web.Request) -> None:
         """Rebuild local build add-on."""
        addon = self._extract_addon(request)
-        if start_task := await asyncio.shield(addon.rebuild()):
+        if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
             await start_task
 
     @api_process_raw(CONTENT_TYPE_BINARY)

View File

@@ -1,11 +1,11 @@
 """Init file for Supervisor Audio RESTful API."""
 import asyncio
 from collections.abc import Awaitable
+from dataclasses import asdict
 import logging
 from typing import Any
 
 from aiohttp import web
-import attr
 import voluptuous as vol
 
 from ..const import (
@@ -76,15 +76,11 @@
             ATTR_UPDATE_AVAILABLE: self.sys_plugins.audio.need_update,
             ATTR_HOST: str(self.sys_docker.network.audio),
             ATTR_AUDIO: {
-                ATTR_CARD: [attr.asdict(card) for card in self.sys_host.sound.cards],
-                ATTR_INPUT: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.inputs
-                ],
-                ATTR_OUTPUT: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.outputs
-                ],
+                ATTR_CARD: [asdict(card) for card in self.sys_host.sound.cards],
+                ATTR_INPUT: [asdict(stream) for stream in self.sys_host.sound.inputs],
+                ATTR_OUTPUT: [asdict(stream) for stream in self.sys_host.sound.outputs],
                 ATTR_APPLICATION: [
-                    attr.asdict(stream) for stream in self.sys_host.sound.applications
+                    asdict(stream) for stream in self.sys_host.sound.applications
                 ],
             },
         }

View File

@@ -11,6 +11,7 @@ from ..addons.addon import Addon
 from ..const import ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIForbidden
+from ..utils.json import json_loads
 from .const import CONTENT_TYPE_JSON, CONTENT_TYPE_URL
 from .utils import api_process, api_validate
@@ -67,7 +68,7 @@ class APIAuth(CoreSysAttributes):
         # Json
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
-            data = await request.json()
+            data = await request.json(loads=json_loads)
             return await self._process_dict(request, addon, data)
 
         # URL encoded
View File

@@ -1,5 +1,6 @@
 """Backups RESTful API."""
 import asyncio
+import errno
 import logging
 from pathlib import Path
 import re
@@ -20,6 +21,7 @@ from ..const import (
     ATTR_DAYS_UNTIL_STALE,
     ATTR_FOLDERS,
     ATTR_HOMEASSISTANT,
+    ATTR_HOMEASSISTANT_EXCLUDE_DATABASE,
     ATTR_LOCATON,
     ATTR_NAME,
     ATTR_PASSWORD,
@@ -28,12 +30,14 @@ from ..const import (
     ATTR_SIZE,
     ATTR_SLUG,
     ATTR_SUPERVISOR_VERSION,
+    ATTR_TIMEOUT,
     ATTR_TYPE,
     ATTR_VERSION,
 )
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIError
 from ..mounts.const import MountUsage
+from ..resolution.const import UnhealthyReason
 from .const import CONTENT_TYPE_TAR
 from .utils import api_process, api_validate
@@ -63,6 +67,7 @@ SCHEMA_BACKUP_FULL = vol.Schema(
         vol.Optional(ATTR_PASSWORD): vol.Maybe(str),
         vol.Optional(ATTR_COMPRESSED): vol.Maybe(vol.Boolean()),
         vol.Optional(ATTR_LOCATON): vol.Maybe(str),
+        vol.Optional(ATTR_HOMEASSISTANT_EXCLUDE_DATABASE): vol.Boolean(),
     }
 )
@@ -80,6 +85,12 @@ SCHEMA_OPTIONS = vol.Schema(
     }
 )
 
+SCHEMA_FREEZE = vol.Schema(
+    {
+        vol.Optional(ATTR_TIMEOUT): vol.All(int, vol.Range(min=1)),
+    }
+)
+
 
 class APIBackups(CoreSysAttributes):
     """Handle RESTful API for backups functions."""
@@ -142,7 +153,7 @@
         self.sys_backups.save_data()
 
     @api_process
-    async def reload(self, request):
+    async def reload(self, _):
         """Reload backup list."""
         await asyncio.shield(self.sys_backups.reload())
         return True
@@ -177,6 +188,7 @@
             ATTR_ADDONS: data_addons,
             ATTR_REPOSITORIES: backup.repositories,
             ATTR_FOLDERS: backup.folders,
+            ATTR_HOMEASSISTANT_EXCLUDE_DATABASE: backup.homeassistant_exclude_database,
         }
 
     def _location_to_mount(self, body: dict[str, Any]) -> dict[str, Any]:
@@ -233,6 +245,17 @@
         return await asyncio.shield(self.sys_backups.do_restore_partial(backup, **body))
 
+    @api_process
+    async def freeze(self, request):
+        """Initiate manual freeze for external backup."""
+        body = await api_validate(SCHEMA_FREEZE, request)
+        await asyncio.shield(self.sys_backups.freeze_all(**body))
+
+    @api_process
+    async def thaw(self, request):
+        """Begin thaw after manual freeze."""
+        await self.sys_backups.thaw_all()
+
     @api_process
     async def remove(self, request):
         """Remove a backup."""
@@ -267,6 +290,8 @@
                     backup.write(chunk)
 
         except OSError as err:
+            if err.errno == errno.EBADMSG:
+                self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
             _LOGGER.error("Can't write new backup file: %s", err)
             return False
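The new freeze/thaw endpoints let an external snapshot tool quiesce Supervisor-managed state around its own backup window. A hedged usage sketch (the paths match the routes registered in api/__init__.py and the `timeout` key follows SCHEMA_FREEZE above; host name and token wiring are illustrative):

import asyncio

import aiohttp


async def snapshot_with_freeze(token: str) -> None:
    """Freeze, take an external snapshot, then thaw."""
    headers = {"Authorization": f"Bearer {token}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        # timeout acts as a safety net in case the thaw call never arrives
        await session.post("http://supervisor/backups/freeze", json={"timeout": 300})
        try:
            ...  # take the external disk or VM snapshot here
        finally:
            await session.post("http://supervisor/backups/thaw")


asyncio.run(snapshot_with_freeze("<SUPERVISOR_TOKEN>"))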

View File

@@ -23,7 +23,6 @@ ATTR_CONNECTION_BUS = "connection_bus"
 ATTR_DATA_DISK = "data_disk"
 ATTR_DEVICE = "device"
 ATTR_DEV_PATH = "dev_path"
-ATTR_DISK_LED = "disk_led"
 ATTR_DISKS = "disks"
 ATTR_DRIVES = "drives"
 ATTR_DT_SYNCHRONIZED = "dt_synchronized"
@@ -31,8 +30,8 @@ ATTR_DT_UTC = "dt_utc"
 ATTR_EJECTABLE = "ejectable"
 ATTR_FALLBACK = "fallback"
 ATTR_FILESYSTEMS = "filesystems"
-ATTR_HEARTBEAT_LED = "heartbeat_led"
 ATTR_IDENTIFIERS = "identifiers"
+ATTR_JOBS = "jobs"
 ATTR_LLMNR = "llmnr"
 ATTR_LLMNR_HOSTNAME = "llmnr_hostname"
 ATTR_MDNS = "mdns"
@@ -40,7 +39,6 @@ ATTR_MODEL = "model"
 ATTR_MOUNTS = "mounts"
 ATTR_MOUNT_POINTS = "mount_points"
 ATTR_PANEL_PATH = "panel_path"
-ATTR_POWER_LED = "power_led"
 ATTR_REMOVABLE = "removable"
 ATTR_REVISION = "revision"
 ATTR_SEAT = "seat"
@@ -48,6 +46,7 @@ ATTR_SIGNED = "signed"
 ATTR_STARTUP_TIME = "startup_time"
 ATTR_SUBSYSTEM = "subsystem"
 ATTR_SYSFS = "sysfs"
+ATTR_SYSTEM_HEALTH_LED = "system_health_led"
 ATTR_TIME_DETECTED = "time_detected"
 ATTR_UPDATE_TYPE = "update_type"
 ATTR_USE_NTP = "use_ntp"

View File

@@ -1,6 +1,9 @@
 """Init file for Supervisor network RESTful API."""
+import logging
+
 import voluptuous as vol
 
+from ..addons.addon import Addon
 from ..const import (
     ATTR_ADDON,
     ATTR_CONFIG,
@@ -9,15 +12,18 @@ from ..const import (
     ATTR_SERVICES,
     ATTR_UUID,
     REQUEST_FROM,
+    AddonState,
 )
 from ..coresys import CoreSysAttributes
 from ..discovery.validate import valid_discovery_service
 from ..exceptions import APIError, APIForbidden
 from .utils import api_process, api_validate, require_home_assistant
 
+_LOGGER: logging.Logger = logging.getLogger(__name__)
+
 SCHEMA_DISCOVERY = vol.Schema(
     {
-        vol.Required(ATTR_SERVICE): valid_discovery_service,
+        vol.Required(ATTR_SERVICE): str,
         vol.Optional(ATTR_CONFIG): vol.Maybe(dict),
     }
 )
@@ -36,19 +42,19 @@
     @api_process
     @require_home_assistant
     async def list(self, request):
-        """Show register services."""
+        """Show registered and available services."""
 
         # Get available discovery
-        discovery = []
-        for message in self.sys_discovery.list_messages:
-            discovery.append(
-                {
-                    ATTR_ADDON: message.addon,
-                    ATTR_SERVICE: message.service,
-                    ATTR_UUID: message.uuid,
-                    ATTR_CONFIG: message.config,
-                }
-            )
+        discovery = [
+            {
+                ATTR_ADDON: message.addon,
+                ATTR_SERVICE: message.service,
+                ATTR_UUID: message.uuid,
+                ATTR_CONFIG: message.config,
+            }
+            for message in self.sys_discovery.list_messages
+            if (addon := self.sys_addons.get(message.addon, local_only=True))
+            and addon.state == AddonState.STARTED
+        ]
 
         # Get available services/add-ons
         services = {}
@@ -62,11 +68,28 @@
     async def set_discovery(self, request):
         """Write data into a discovery pipeline."""
         body = await api_validate(SCHEMA_DISCOVERY, request)
-        addon = request[REQUEST_FROM]
+        addon: Addon = request[REQUEST_FROM]
+        service = body[ATTR_SERVICE]
+
+        try:
+            valid_discovery_service(service)
+        except vol.Invalid:
+            _LOGGER.warning(
+                "Received discovery message for unknown service %s from addon %s. Please report this to the maintainer of the add-on",
+                service,
+                addon.name,
+            )
 
         # Access?
         if body[ATTR_SERVICE] not in addon.discovery:
-            raise APIForbidden("Can't use discovery!")
+            _LOGGER.error(
+                "Add-on %s attempted to send discovery for service %s which is not listed in its config. Please report this to the maintainer of the add-on",
+                addon.name,
+                service,
+            )
+            raise APIForbidden(
+                "Add-ons must list services they provide via discovery in their config!"
+            )
 
         # Process discovery message
         message = self.sys_discovery.send(addon, **body)
View File

@@ -12,6 +12,7 @@ from ..const import (
     ATTR_AUDIO_INPUT,
     ATTR_AUDIO_OUTPUT,
     ATTR_BACKUP,
+    ATTR_BACKUPS_EXCLUDE_DATABASE,
     ATTR_BLK_READ,
     ATTR_BLK_WRITE,
     ATTR_BOOT,
@@ -51,6 +52,7 @@ SCHEMA_OPTIONS = vol.Schema(
         vol.Optional(ATTR_REFRESH_TOKEN): vol.Maybe(str),
         vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
         vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
+        vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE): vol.Boolean(),
     }
 )
@@ -82,6 +84,7 @@ class APIHomeAssistant(CoreSysAttributes):
             ATTR_WATCHDOG: self.sys_homeassistant.watchdog,
             ATTR_AUDIO_INPUT: self.sys_homeassistant.audio_input,
             ATTR_AUDIO_OUTPUT: self.sys_homeassistant.audio_output,
+            ATTR_BACKUPS_EXCLUDE_DATABASE: self.sys_homeassistant.backups_exclude_database,
         }
 
     @api_process
@@ -113,6 +116,11 @@
         if ATTR_AUDIO_OUTPUT in body:
             self.sys_homeassistant.audio_output = body[ATTR_AUDIO_OUTPUT]
 
+        if ATTR_BACKUPS_EXCLUDE_DATABASE in body:
+            self.sys_homeassistant.backups_exclude_database = body[
+                ATTR_BACKUPS_EXCLUDE_DATABASE
+            ]
+
         self.sys_homeassistant.save_data()
 
     @api_process

View File

@@ -21,11 +21,18 @@ from ..const import (
     ATTR_ICON,
     ATTR_PANELS,
     ATTR_SESSION,
+    ATTR_SESSION_DATA_USER_ID,
     ATTR_TITLE,
+    HEADER_REMOTE_USER_DISPLAY_NAME,
+    HEADER_REMOTE_USER_ID,
+    HEADER_REMOTE_USER_NAME,
     HEADER_TOKEN,
     HEADER_TOKEN_OLD,
+    IngressSessionData,
+    IngressSessionDataUser,
 )
 from ..coresys import CoreSysAttributes
+from ..exceptions import HomeAssistantAPIError
 from .const import COOKIE_INGRESS
 from .utils import api_process, api_validate, require_home_assistant
@@ -33,10 +40,46 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 VALIDATE_SESSION_DATA = vol.Schema({ATTR_SESSION: str})
 
+"""Expected optional payload of create session request"""
+SCHEMA_INGRESS_CREATE_SESSION_DATA = vol.Schema(
+    {
+        vol.Optional(ATTR_SESSION_DATA_USER_ID): str,
+    }
+)
+
+
+# from https://github.com/aio-libs/aiohttp/blob/8ae650bee4add9f131d49b96a0a150311ea58cd1/aiohttp/helpers.py#L1059C1-L1079C1
+def must_be_empty_body(method: str, code: int) -> bool:
+    """Check if a request must return an empty body."""
+    return (
+        status_code_must_be_empty_body(code)
+        or method_must_be_empty_body(method)
+        or (200 <= code < 300 and method.upper() == hdrs.METH_CONNECT)
+    )
+
+
+def method_must_be_empty_body(method: str) -> bool:
+    """Check if a method must return an empty body."""
+    # https://datatracker.ietf.org/doc/html/rfc9112#section-6.3-2.1
+    # https://datatracker.ietf.org/doc/html/rfc9112#section-6.3-2.2
+    return method.upper() == hdrs.METH_HEAD
+
+
+def status_code_must_be_empty_body(code: int) -> bool:
+    """Check if a status code must return an empty body."""
+    # https://datatracker.ietf.org/doc/html/rfc9112#section-6.3-2.1
+    return code in {204, 304} or 100 <= code < 200
+
+
 class APIIngress(CoreSysAttributes):
     """Ingress view to handle add-on webui routing."""
 
+    _list_of_users: list[IngressSessionDataUser]
+
+    def __init__(self) -> None:
+        """Initialize APIIngress."""
+        self._list_of_users = []
+
     def _extract_addon(self, request: web.Request) -> Addon:
         """Return addon, throw an exception it it doesn't exist."""
         token = request.match_info.get("token")
@@ -71,7 +114,19 @@
     @require_home_assistant
     async def create_session(self, request: web.Request) -> dict[str, Any]:
         """Create a new session."""
-        session = self.sys_ingress.create_session()
+        schema_ingress_config_session_data = await api_validate(
+            SCHEMA_INGRESS_CREATE_SESSION_DATA, request
+        )
+        data: IngressSessionData | None = None
+
+        if ATTR_SESSION_DATA_USER_ID in schema_ingress_config_session_data:
+            user = await self._find_user_by_id(
+                schema_ingress_config_session_data[ATTR_SESSION_DATA_USER_ID]
+            )
+            if user:
+                data = IngressSessionData(user)
+
+        session = self.sys_ingress.create_session(data)
         return {ATTR_SESSION: session}
 
     @api_process
@@ -99,13 +154,14 @@
         # Process requests
         addon = self._extract_addon(request)
         path = request.match_info.get("path")
+        session_data = self.sys_ingress.get_session_data(session)
         try:
             # Websocket
             if _is_websocket(request):
-                return await self._handle_websocket(request, addon, path)
+                return await self._handle_websocket(request, addon, path, session_data)
 
             # Request
-            return await self._handle_request(request, addon, path)
+            return await self._handle_request(request, addon, path, session_data)
 
         except aiohttp.ClientError as err:
             _LOGGER.error("Ingress error: %s", err)
@@ -113,7 +169,11 @@
         raise HTTPBadGateway()
 
     async def _handle_websocket(
-        self, request: web.Request, addon: Addon, path: str
+        self,
+        request: web.Request,
+        addon: Addon,
+        path: str,
+        session_data: IngressSessionData | None,
     ) -> web.WebSocketResponse:
         """Ingress route for websocket."""
         if hdrs.SEC_WEBSOCKET_PROTOCOL in request.headers:
@@ -131,7 +191,7 @@
         # Preparing
         url = self._create_url(addon, path)
-        source_header = _init_header(request, addon)
+        source_header = _init_header(request, addon, session_data)
 
         # Support GET query
         if request.query_string:
@@ -157,11 +217,15 @@
         return ws_server
 
     async def _handle_request(
-        self, request: web.Request, addon: Addon, path: str
+        self,
+        request: web.Request,
+        addon: Addon,
+        path: str,
+        session_data: IngressSessionData | None,
     ) -> web.Response | web.StreamResponse:
         """Ingress route for request."""
         url = self._create_url(addon, path)
-        source_header = _init_header(request, addon)
+        source_header = _init_header(request, addon, session_data)
 
         # Passing the raw stream breaks requests for some webservers
         # since we just need it for POST requests really, for all other methods
@@ -184,10 +248,18 @@
             skip_auto_headers={hdrs.CONTENT_TYPE},
         ) as result:
             headers = _response_header(result)
+            # Avoid parsing content_type in simple cases for better performance
+            if maybe_content_type := result.headers.get(hdrs.CONTENT_TYPE):
+                content_type = (maybe_content_type.partition(";"))[0].strip()
+            else:
+                content_type = result.content_type
 
             # Simple request
             if (
-                hdrs.CONTENT_LENGTH in result.headers
+                # empty body responses should not be streamed,
+                # otherwise aiohttp < 3.9.0 may generate
+                # an invalid "0\r\n\r\n" chunk instead of an empty response.
+                must_be_empty_body(request.method, result.status)
+                or hdrs.CONTENT_LENGTH in result.headers
                 and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
             ):
                 # Return Response
@@ -195,13 +267,13 @@
                 return web.Response(
                     headers=headers,
                     status=result.status,
-                    content_type=result.content_type,
+                    content_type=content_type,
                     body=body,
                 )
 
             # Stream response
             response = web.StreamResponse(status=result.status, headers=headers)
-            response.content_type = result.content_type
+            response.content_type = content_type
 
             try:
                 await response.prepare(request)
@@ -217,11 +289,35 @@
         return response
 
+    async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
+        """Find user object by the user's ID."""
+        try:
+            list_of_users = await self.sys_homeassistant.get_users()
+        except (HomeAssistantAPIError, TypeError) as err:
+            _LOGGER.error(
+                "%s error occurred while requesting list of users: %s", type(err), err
+            )
+            return None
+
+        if list_of_users is not None:
+            self._list_of_users = list_of_users
+
+        return next((user for user in self._list_of_users if user.id == user_id), None)
+
 
-def _init_header(request: web.Request, addon: str) -> CIMultiDict | dict[str, str]:
+def _init_header(
+    request: web.Request, addon: Addon, session_data: IngressSessionData | None
+) -> CIMultiDict | dict[str, str]:
     """Create initial header."""
     headers = {}
 
+    if session_data is not None:
+        headers[HEADER_REMOTE_USER_ID] = session_data.user.id
+        if session_data.user.username is not None:
+            headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
+        if session_data.user.display_name is not None:
+            headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
+
     # filter flags
     for name, value in request.headers.items():
         if name in (
@@ -234,6 +330,9 @@ def _init_header(
             hdrs.SEC_WEBSOCKET_KEY,
             istr(HEADER_TOKEN),
             istr(HEADER_TOKEN_OLD),
+            istr(HEADER_REMOTE_USER_ID),
+            istr(HEADER_REMOTE_USER_NAME),
+            istr(HEADER_REMOTE_USER_DISPLAY_NAME),
         ):
             continue
         headers[name] = value
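The `must_be_empty_body` helper vendored above decides when the ingress proxy must not stream a body at all. A few checks that follow directly from its definition:

# HEAD and 1xx/204/304 responses must stay body-less per RFC 9112,
# so the proxy returns them directly instead of streaming chunks.
assert must_be_empty_body("HEAD", 200) is True
assert must_be_empty_body("GET", 204) is True
assert must_be_empty_body("GET", 304) is True
assert must_be_empty_body("GET", 200) is False  # normal responses may stream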

View File

@@ -6,7 +6,9 @@ from aiohttp import web
 import voluptuous as vol
 
 from ..coresys import CoreSysAttributes
+from ..jobs import SupervisorJob
 from ..jobs.const import ATTR_IGNORE_CONDITIONS, JobCondition
+from .const import ATTR_JOBS
 from .utils import api_process, api_validate
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -19,11 +21,45 @@
 class APIJobs(CoreSysAttributes):
     """Handle RESTful API for OS functions."""
 
+    def _list_jobs(self) -> list[dict[str, Any]]:
+        """Return current job tree."""
+        jobs_by_parent: dict[str | None, list[SupervisorJob]] = {}
+        for job in self.sys_jobs.jobs:
+            if job.internal:
+                continue
+            if job.parent_id not in jobs_by_parent:
+                jobs_by_parent[job.parent_id] = [job]
+            else:
+                jobs_by_parent[job.parent_id].append(job)
+
+        job_list: list[dict[str, Any]] = []
+        queue: list[tuple[list[dict[str, Any]], SupervisorJob]] = [
+            (job_list, job) for job in jobs_by_parent.get(None, [])
+        ]
+
+        while queue:
+            (current_list, current_job) = queue.pop(0)
+            child_jobs: list[dict[str, Any]] = []
+
+            # We remove parent_id and instead use that info to represent jobs as a tree
+            job_dict = current_job.as_dict() | {"child_jobs": child_jobs}
+            job_dict.pop("parent_id")
+            current_list.append(job_dict)
+
+            if current_job.uuid in jobs_by_parent:
+                queue.extend(
+                    [(child_jobs, job) for job in jobs_by_parent.get(current_job.uuid)]
+                )
+
+        return job_list
+
     @api_process
     async def info(self, request: web.Request) -> dict[str, Any]:
         """Return JobManager information."""
         return {
             ATTR_IGNORE_CONDITIONS: self.sys_jobs.ignore_conditions,
+            ATTR_JOBS: self._list_jobs(),
         }
 
     @api_process
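`_list_jobs` turns the flat job list into a tree by bucketing jobs on `parent_id` and walking a queue from the roots. A self-contained miniature of the same traversal (field names reduced for the example):

def build_tree(jobs: list[dict]) -> list[dict]:
    """Rebuild the parent/child tree the way _list_jobs does (sketch)."""
    by_parent: dict[str | None, list[dict]] = {}
    for job in jobs:
        by_parent.setdefault(job["parent_id"], []).append(job)

    roots: list[dict] = []
    queue = [(roots, job) for job in by_parent.get(None, [])]
    while queue:
        target, job = queue.pop(0)
        node = {"uuid": job["uuid"], "child_jobs": []}
        target.append(node)
        queue.extend(
            (node["child_jobs"], child) for child in by_parent.get(job["uuid"], [])
        )
    return roots


jobs = [
    {"uuid": "a", "parent_id": None},
    {"uuid": "b", "parent_id": "a"},
    {"uuid": "c", "parent_id": None},
]
assert build_tree(jobs) == [
    {"uuid": "a", "child_jobs": [{"uuid": "b", "child_jobs": []}]},
    {"uuid": "c", "child_jobs": []},
]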

View File

@@ -19,6 +19,7 @@ from ...const import (
     CoreState,
 )
 from ...coresys import CoreSys, CoreSysAttributes
+from ...utils import version_is_new_enough
 from ..utils import api_return_error, excract_supervisor_token
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -195,7 +196,7 @@
             CoreState.FREEZE,
         ):
             return api_return_error(
-                message=f"System is not ready with state: {self.sys_core.state.value}"
+                message=f"System is not ready with state: {self.sys_core.state}"
             )
 
         return await handler(request)
@@ -273,9 +274,8 @@
     @middleware
     async def core_proxy(self, request: Request, handler: RequestHandler) -> Response:
         """Validate user from Core API proxy."""
-        if (
-            request[REQUEST_FROM] != self.sys_homeassistant
-            or self.sys_homeassistant.version >= _CORE_VERSION
+        if request[REQUEST_FROM] != self.sys_homeassistant or version_is_new_enough(
+            self.sys_homeassistant.version, _CORE_VERSION
         ):
             return await handler(request)

View File

@@ -1,11 +1,11 @@
 """REST API for network."""
 import asyncio
 from collections.abc import Awaitable
+from dataclasses import replace
 from ipaddress import ip_address, ip_interface
 from typing import Any
 
 from aiohttp import web
-import attr
 import voluptuous as vol
 
 from ..const import (
@@ -43,8 +43,7 @@ from ..const import (
 )
 from ..coresys import CoreSysAttributes
 from ..exceptions import APIError, HostNetworkNotFound
-from ..host.const import AuthMethod, InterfaceType, WifiMode
-from ..host.network import (
+from ..host.configuration import (
     AccessPoint,
     Interface,
     InterfaceMethod,
@@ -52,6 +51,7 @@ from ..host.network import (
     VlanConfig,
     WifiConfig,
 )
+from ..host.const import AuthMethod, InterfaceType, WifiMode
 from .utils import api_process, api_validate
 
 _SCHEMA_IP_CONFIG = vol.Schema(
@@ -121,6 +121,7 @@ def interface_struct(interface: Interface) -> dict[str, Any]:
         ATTR_ENABLED: interface.enabled,
         ATTR_CONNECTED: interface.connected,
         ATTR_PRIMARY: interface.primary,
+        ATTR_MAC: interface.mac,
         ATTR_IPV4: ipconfig_struct(interface.ipv4) if interface.ipv4 else None,
         ATTR_IPV6: ipconfig_struct(interface.ipv6) if interface.ipv6 else None,
         ATTR_WIFI: wifi_struct(interface.wifi) if interface.wifi else None,
@@ -196,19 +197,19 @@
         # Apply config
         for key, config in body.items():
             if key == ATTR_IPV4:
-                interface.ipv4 = attr.evolve(
+                interface.ipv4 = replace(
                     interface.ipv4
                     or IpConfig(InterfaceMethod.STATIC, [], None, [], None),
                     **config,
                 )
             elif key == ATTR_IPV6:
-                interface.ipv6 = attr.evolve(
+                interface.ipv6 = replace(
                     interface.ipv6
                     or IpConfig(InterfaceMethod.STATIC, [], None, [], None),
                     **config,
                 )
             elif key == ATTR_WIFI:
-                interface.wifi = attr.evolve(
+                interface.wifi = replace(
                     interface.wifi
                     or WifiConfig(
                         WifiMode.INFRASTRUCTURE, "", AuthMethod.OPEN, None, None
@@ -276,6 +277,8 @@
         )
 
         vlan_interface = Interface(
+            "",
+            "",
             "",
             True,
             True,

View File

@@ -8,11 +8,15 @@ from aiohttp import web
 import voluptuous as vol
 
 from ..const import (
+    ATTR_ACTIVITY_LED,
     ATTR_BOARD,
     ATTR_BOOT,
     ATTR_DEVICES,
+    ATTR_DISK_LED,
+    ATTR_HEARTBEAT_LED,
     ATTR_ID,
     ATTR_NAME,
+    ATTR_POWER_LED,
     ATTR_SERIAL,
     ATTR_SIZE,
     ATTR_UPDATE_AVAILABLE,
@@ -27,21 +31,19 @@ from .const import (
     ATTR_DATA_DISK,
     ATTR_DEV_PATH,
     ATTR_DEVICE,
-    ATTR_DISK_LED,
     ATTR_DISKS,
-    ATTR_HEARTBEAT_LED,
     ATTR_MODEL,
-    ATTR_POWER_LED,
+    ATTR_SYSTEM_HEALTH_LED,
     ATTR_VENDOR,
 )
 from .utils import api_process, api_validate
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
 
-# pylint: disable=no-value-for-parameter
 SCHEMA_VERSION = vol.Schema({vol.Optional(ATTR_VERSION): version_tag})
 SCHEMA_DISK = vol.Schema({vol.Required(ATTR_DEVICE): str})
 
+# pylint: disable=no-value-for-parameter
 SCHEMA_YELLOW_OPTIONS = vol.Schema(
     {
         vol.Optional(ATTR_DISK_LED): vol.Boolean(),
@@ -49,6 +51,14 @@ SCHEMA_YELLOW_OPTIONS = vol.Schema(
         vol.Optional(ATTR_POWER_LED): vol.Boolean(),
     }
 )
+SCHEMA_GREEN_OPTIONS = vol.Schema(
+    {
+        vol.Optional(ATTR_ACTIVITY_LED): vol.Boolean(),
+        vol.Optional(ATTR_POWER_LED): vol.Boolean(),
+        vol.Optional(ATTR_SYSTEM_HEALTH_LED): vol.Boolean(),
+    }
+)
+# pylint: enable=no-value-for-parameter
 
 
 class APIOS(CoreSysAttributes):
@@ -105,6 +115,31 @@
             ],
         }
 
+    @api_process
+    async def boards_green_info(self, request: web.Request) -> dict[str, Any]:
+        """Get green board settings."""
+        return {
+            ATTR_ACTIVITY_LED: self.sys_dbus.agent.board.green.activity_led,
+            ATTR_POWER_LED: self.sys_dbus.agent.board.green.power_led,
+            ATTR_SYSTEM_HEALTH_LED: self.sys_dbus.agent.board.green.user_led,
+        }
+
+    @api_process
+    async def boards_green_options(self, request: web.Request) -> None:
+        """Update green board settings."""
+        body = await api_validate(SCHEMA_GREEN_OPTIONS, request)
+
+        if ATTR_ACTIVITY_LED in body:
+            self.sys_dbus.agent.board.green.activity_led = body[ATTR_ACTIVITY_LED]
+
+        if ATTR_POWER_LED in body:
+            self.sys_dbus.agent.board.green.power_led = body[ATTR_POWER_LED]
+
+        if ATTR_SYSTEM_HEALTH_LED in body:
+            self.sys_dbus.agent.board.green.user_led = body[ATTR_SYSTEM_HEALTH_LED]
+
+        self.sys_dbus.agent.board.green.save_data()
+
     @api_process
     async def boards_yellow_info(self, request: web.Request) -> dict[str, Any]:
         """Get yellow board settings."""
@@ -128,6 +163,7 @@
         if ATTR_POWER_LED in body:
             self.sys_dbus.agent.board.yellow.power_led = body[ATTR_POWER_LED]
 
+        self.sys_dbus.agent.board.yellow.save_data()
         self.sys_resolution.create_issue(
             IssueType.REBOOT_REQUIRED,
             ContextType.SYSTEM,
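With the routes added in api/__init__.py, the Green board LEDs become controllable over the REST API using the attribute names from the schema above. An illustrative client call (host name and token wiring assumed):

import asyncio

import aiohttp


async def dim_green_leds(token: str) -> None:
    """Turn off all Home Assistant Green status LEDs, then read them back."""
    headers = {"Authorization": f"Bearer {token}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        await session.post(
            "http://supervisor/os/boards/green",
            json={
                "activity_led": False,
                "power_led": False,
                "system_health_led": False,
            },
        )
        async with session.get("http://supervisor/os/boards/green") as resp:
            print(await resp.json())


asyncio.run(dim_green_leds("<SUPERVISOR_TOKEN>"))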

View File

@@ -6,7 +6,10 @@ import logging
 import aiohttp
 from aiohttp import web
 from aiohttp.client_exceptions import ClientConnectorError
+from aiohttp.client_ws import ClientWebSocketResponse
 from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE
+from aiohttp.http import WSMessage
+from aiohttp.http_websocket import WSMsgType
 from aiohttp.web_exceptions import HTTPBadGateway, HTTPUnauthorized
 
 from ..coresys import CoreSysAttributes
@@ -18,6 +21,13 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 FORWARD_HEADERS = ("X-Speech-Content",)
 HEADER_HA_ACCESS = "X-Ha-Access"
 
+# Maximum message size for websocket messages from Home Assistant.
+# Since these are coming from core we want the largest possible size
+# that is not likely to cause a memory problem as most modern browsers
+# support large messages.
+# https://github.com/home-assistant/supervisor/issues/4392
+MAX_MESSAGE_SIZE_FROM_CORE = 64 * 1024 * 1024
+
 
 class APIProxy(CoreSysAttributes):
     """API Proxy for Home Assistant."""
@@ -67,7 +77,7 @@
             _LOGGER.error("Error on API for request %s", path)
         except aiohttp.ClientError as err:
             _LOGGER.error("Client error on API %s request %s", path, err)
-        except asyncio.TimeoutError:
+        except TimeoutError:
             _LOGGER.error("Client timeout error on API request %s", path)
 
         raise HTTPBadGateway()
@@ -107,12 +117,14 @@
             body=data, status=client.status, content_type=client.content_type
         )
 
-    async def _websocket_client(self):
+    async def _websocket_client(self) -> ClientWebSocketResponse:
         """Initialize a WebSocket API connection."""
         url = f"{self.sys_homeassistant.api_url}/api/websocket"
 
         try:
-            client = await self.sys_websession.ws_connect(url, heartbeat=30, ssl=False)
+            client = await self.sys_websession.ws_connect(
+                url, heartbeat=30, ssl=False, max_msg_size=MAX_MESSAGE_SIZE_FROM_CORE
+            )
 
             # Handle authentication
             data = await client.receive_json()
@@ -158,6 +170,25 @@
         raise APIError()
 
+    async def _proxy_message(
+        self,
+        read_task: asyncio.Task,
+        target: web.WebSocketResponse | ClientWebSocketResponse,
+    ) -> None:
+        """Proxy a message from client to server or vice versa."""
+        if read_task.exception():
+            raise read_task.exception()
+
+        msg: WSMessage = read_task.result()
+        if msg.type == WSMsgType.TEXT:
+            return await target.send_str(msg.data)
+        if msg.type == WSMsgType.BINARY:
+            return await target.send_bytes(msg.data)
+
+        raise TypeError(
+            f"Cannot proxy websocket message of unsupported type: {msg.type}"
+        )
+
     async def websocket(self, request: web.Request):
         """Initialize a WebSocket API connection."""
         if not await self.sys_homeassistant.api.check_api_state():
@@ -205,13 +236,13 @@
         _LOGGER.info("Home Assistant WebSocket API request running")
         try:
-            client_read = None
-            server_read = None
+            client_read: asyncio.Task | None = None
+            server_read: asyncio.Task | None = None
             while not server.closed and not client.closed:
                 if not client_read:
-                    client_read = self.sys_create_task(client.receive_str())
+                    client_read = self.sys_create_task(client.receive())
                 if not server_read:
-                    server_read = self.sys_create_task(server.receive_str())
+                    server_read = self.sys_create_task(server.receive())
 
                 # wait until data need to be processed
                 await asyncio.wait(
@@ -220,14 +251,12 @@
                 # server
                 if server_read.done() and not client.closed:
-                    server_read.exception()
-                    await client.send_str(server_read.result())
+                    await self._proxy_message(server_read, client)
                     server_read = None
 
                 # client
                 if client_read.done() and not server.closed:
-                    client_read.exception()
-                    await server.send_str(client_read.result())
+                    await self._proxy_message(client_read, server)
                     client_read = None
 
         except asyncio.CancelledError:
@@ -237,9 +266,9 @@
             _LOGGER.info("Home Assistant WebSocket API error: %s", err)
 
         finally:
-            if client_read:
+            if client_read and not client_read.done():
                 client_read.cancel()
-            if server_read:
+            if server_read and not server_read.done():
                 server_read.cancel()
 
             # close connections
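Switching from `receive_str()` to `receive()` is what lets `_proxy_message` forward BINARY frames as well as TEXT. Reduced to its core, the select loop follows this pattern (a sketch, with `forward` mirroring `_proxy_message`; the ws objects are supplied by the caller):

import asyncio

from aiohttp import WSMsgType


async def forward(msg, target) -> None:
    """Send one websocket message onward, as _proxy_message does."""
    if msg.type == WSMsgType.TEXT:
        await target.send_str(msg.data)
    elif msg.type == WSMsgType.BINARY:
        await target.send_bytes(msg.data)


async def proxy(client, server) -> None:
    """Keep one pending read per peer; forward whichever finishes first."""
    client_read = asyncio.ensure_future(client.receive())
    server_read = asyncio.ensure_future(server.receive())
    while not client.closed and not server.closed:
        done, _ = await asyncio.wait(
            [client_read, server_read], return_when=asyncio.FIRST_COMPLETED
        )
        if server_read in done:
            await forward(server_read.result(), client)
            server_read = asyncio.ensure_future(server.receive())
        if client_read in done:
            await forward(client_read.result(), server)
            client_read = asyncio.ensure_future(client.receive())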

View File

@@ -6,7 +6,7 @@ from typing import Any
 from aiohttp import web
 import voluptuous as vol
 
-from ..addons import AnyAddon
+from ..addons.manager import AnyAddon
 from ..addons.utils import rating_security
 from ..api.const import ATTR_SIGNED
 from ..api.utils import api_process, api_process_raw, api_validate
@@ -186,18 +186,20 @@
         }
 
     @api_process
-    async def addons_list(self, request: web.Request) -> list[dict[str, Any]]:
+    async def addons_list(self, request: web.Request) -> dict[str, Any]:
         """Return all store add-ons."""
-        return [
-            self._generate_addon_information(self.sys_addons.store[addon])
-            for addon in self.sys_addons.store
-        ]
+        return {
+            ATTR_ADDONS: [
+                self._generate_addon_information(self.sys_addons.store[addon])
+                for addon in self.sys_addons.store
+            ]
+        }
 
     @api_process
     def addons_addon_install(self, request: web.Request) -> Awaitable[None]:
         """Install add-on."""
         addon = self._extract_addon(request)
-        return asyncio.shield(addon.install())
+        return asyncio.shield(self.sys_addons.install(addon.slug))
 
     @api_process
     async def addons_addon_update(self, request: web.Request) -> None:
@@ -209,7 +211,7 @@
         body = await api_validate(SCHEMA_UPDATE, request)
 
         if start_task := await asyncio.shield(
-            addon.update(backup=body.get(ATTR_BACKUP))
+            self.sys_addons.update(addon.slug, backup=body.get(ATTR_BACKUP))
         ):
             await start_task

View File

@@ -22,7 +22,7 @@ from ..const import (
 from ..coresys import CoreSys
 from ..exceptions import APIError, APIForbidden, DockerAPIError, HassioError
 from ..utils import check_exception_chain, get_message_from_exception_chain
-from ..utils.json import JSONEncoder
+from ..utils.json import json_dumps, json_loads as json_loads_util
 from ..utils.log_format import format_message
 from .const import CONTENT_TYPE_BINARY
@@ -48,7 +48,7 @@ def json_loads(data: Any) -> dict[str, Any]:
     if not data:
         return {}
     try:
-        return json.loads(data)
+        return json_loads_util(data)
     except json.JSONDecodeError as err:
         raise APIError("Invalid json") from err
@@ -130,7 +130,7 @@ def api_return_error(
             JSON_MESSAGE: message or "Unknown error, see supervisor",
         },
         status=400,
-        dumps=lambda x: json.dumps(x, cls=JSONEncoder),
+        dumps=json_dumps,
    )
@@ -138,7 +138,7 @@ def api_return_ok(data: dict[str, Any] | None = None) -> web.Response:
     """Return an API ok answer."""
     return web.json_response(
         {JSON_RESULT: RESULT_OK, JSON_DATA: data or {}},
-        dumps=lambda x: json.dumps(x, cls=JSONEncoder),
+        dumps=json_dumps,
     )
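`json_dumps` and `json_loads` here come from the orjson-backed utils ported from core (see the "Improve json performance by porting core orjson utils" commit). Their exact option set lives in supervisor/utils/json.py; a minimal sketch of the shape, assuming plain orjson defaults:

from typing import Any

import orjson


def json_dumps(data: Any) -> str:
    """Serialize to a JSON string with orjson (illustrative)."""
    return orjson.dumps(data).decode("utf-8")


def json_loads(data: bytes | str) -> Any:
    """Parse JSON with orjson (illustrative)."""
    return orjson.loads(data)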

View File

@@ -28,6 +28,7 @@ class CpuArch(CoreSysAttributes):
         """Initialize CPU Architecture handler."""
         self.coresys = coresys
         self._supported_arch: list[str] = []
+        self._supported_set: set[str] = set()
         self._default_arch: str
 
     @property
@@ -70,9 +71,11 @@
             if native_support not in self._supported_arch:
                 self._supported_arch.append(native_support)
 
+        self._supported_set = set(self._supported_arch)
+
     def is_supported(self, arch_list: list[str]) -> bool:
         """Return True if there is a supported arch by this platform."""
-        return not set(self.supported).isdisjoint(set(arch_list))
+        return not self._supported_set.isdisjoint(arch_list)
 
     def match(self, arch_list: list[str]) -> str:
         """Return best match for this CPU/Platform."""

View File

@@ -1,4 +1,5 @@
 """Representation of a backup file."""
+import asyncio
 from base64 import b64decode, b64encode
 from collections.abc import Awaitable
 from datetime import timedelta
@@ -18,13 +19,14 @@ from securetar import SecureTarFile, atomic_contents_add, secure_path
 import voluptuous as vol
 from voluptuous.humanize import humanize_error
 
-from ..addons import Addon
+from ..addons.manager import Addon
 from ..const import (
     ATTR_ADDONS,
     ATTR_COMPRESSED,
     ATTR_CRYPTO,
     ATTR_DATE,
     ATTR_DOCKER,
+    ATTR_EXCLUDE_DATABASE,
     ATTR_FOLDERS,
     ATTR_HOMEASSISTANT,
     ATTR_NAME,
@@ -129,7 +131,14 @@
         """Return backup Home Assistant version."""
         if self.homeassistant is None:
             return None
-        return self._data[ATTR_HOMEASSISTANT][ATTR_VERSION]
+        return self.homeassistant[ATTR_VERSION]
+
+    @property
+    def homeassistant_exclude_database(self) -> bool:
+        """Return whether database was excluded from Home Assistant backup."""
+        if self.homeassistant is None:
+            return None
+        return self.homeassistant[ATTR_EXCLUDE_DATABASE]
 
     @property
     def homeassistant(self):
@@ -183,7 +192,15 @@
             days=self.sys_backups.days_until_stale
         )
 
-    def new(self, slug, name, date, sys_type, password=None, compressed=True):
+    def new(
+        self,
+        slug: str,
+        name: str,
+        date: str,
+        sys_type: BackupType,
+        password: str | None = None,
+        compressed: bool = True,
+    ):
         """Initialize a new backup."""
         # Init metadata
         self._data[ATTR_VERSION] = 2
@@ -288,7 +305,7 @@
     async def __aenter__(self):
         """Async context to open a backup."""
-        self._tmp = TemporaryDirectory(dir=str(self.sys_config.path_tmp))
+        self._tmp = TemporaryDirectory(dir=str(self.tarfile.parent))
 
         # create a backup
         if not self.tarfile.is_file():
@@ -298,7 +315,11 @@
         def _extract_backup():
             """Extract a backup."""
             with tarfile.open(self.tarfile, "r:") as tar:
-                tar.extractall(path=self._tmp.name, members=secure_path(tar))
+                tar.extractall(
+                    path=self._tmp.name,
+                    members=secure_path(tar),
+                    filter="fully_trusted",
+                )
 
         await self.sys_run_in_executor(_extract_backup)
@@ -332,14 +353,14 @@
         finally:
             self._tmp.cleanup()
 
-    async def store_addons(self, addon_list: list[str]) -> list[Awaitable[None]]:
+    async def store_addons(self, addon_list: list[str]) -> list[asyncio.Task]:
         """Add a list of add-ons into backup.
 
-        For each addon that needs to be started after backup, returns a task which
+        For each addon that needs to be started after backup, returns a Task which
         completes when that addon has state 'started' (see addon.start).
         """
 
-        async def _addon_save(addon: Addon) -> Awaitable[None] | None:
+        async def _addon_save(addon: Addon) -> asyncio.Task | None:
             """Task to store an add-on into backup."""
             tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
             addon_file = SecureTarFile(
@@ -371,7 +392,7 @@
         # Save Add-ons sequential
         # avoid issue on slow IO
-        start_tasks: list[Awaitable[None]] = []
+        start_tasks: list[asyncio.Task] = []
         for addon in addon_list:
             try:
                 if start_task := await _addon_save(addon):
@@ -381,10 +402,12 @@
         return start_tasks
 
-    async def restore_addons(self, addon_list: list[str]) -> list[Awaitable[None]]:
+    async def restore_addons(
+        self, addon_list: list[str]
+    ) -> tuple[bool, list[asyncio.Task]]:
         """Restore a list add-on from backup."""
 
-        async def _addon_restore(addon_slug: str) -> Awaitable[None] | None:
+        async def _addon_restore(addon_slug: str) -> tuple[bool, asyncio.Task | None]:
             """Task to restore an add-on into backup."""
             tar_name = f"{addon_slug}.tar{'.gz' if self.compressed else ''}"
             addon_file = SecureTarFile(
@@ -398,30 +421,36 @@
             # If exists inside backup
             if not addon_file.path.exists():
                 _LOGGER.error("Can't find backup %s", addon_slug)
-                return
+                return (False, None)
 
             # Perform a restore
             try:
-                return await self.sys_addons.restore(addon_slug, addon_file)
+                return (True, await self.sys_addons.restore(addon_slug, addon_file))
             except AddonsError:
                 _LOGGER.error("Can't restore backup %s", addon_slug)
+                return (False, None)
 
         # Save Add-ons sequential
         # avoid issue on slow IO
-        start_tasks: list[Awaitable[None]] = []
+        start_tasks: list[asyncio.Task] = []
+        success = True
         for slug in addon_list:
             try:
-                if start_task := await _addon_restore(slug):
-                    start_tasks.append(start_task)
+                addon_success, start_task = await _addon_restore(slug)
             except Exception as err:  # pylint: disable=broad-except
                 _LOGGER.warning("Can't restore Add-on %s: %s", slug, err)
+                success = False
+            else:
+                success = success and addon_success
+                if start_task:
+                    start_tasks.append(start_task)
 
-        return start_tasks
+        return (success, start_tasks)
 
     async def store_folders(self, folder_list: list[str]):
         """Backup Supervisor data into backup."""
 
-        def _folder_save(name: str):
+        async def _folder_save(name: str):
             """Take backup of a folder."""
             slug_name = name.replace("/", "_")
            tar_name = Path(
@@ -434,39 +463,43 @@ class Backup(CoreSysAttributes):
_LOGGER.warning("Can't find backup folder %s", name) _LOGGER.warning("Can't find backup folder %s", name)
return return
# Take backup def _save() -> None:
_LOGGER.info("Backing up folder %s", name) # Take backup
with SecureTarFile( _LOGGER.info("Backing up folder %s", name)
tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE with SecureTarFile(
) as tar_file: tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
atomic_contents_add( ) as tar_file:
tar_file, atomic_contents_add(
origin_dir, tar_file,
excludes=[ origin_dir,
bound.bind_mount.local_where.as_posix() excludes=[
for bound in self.sys_mounts.bound_mounts bound.bind_mount.local_where.as_posix()
if bound.bind_mount.local_where for bound in self.sys_mounts.bound_mounts
], if bound.bind_mount.local_where
arcname=".", ],
) arcname=".",
)
_LOGGER.info("Backup folder %s done", name) _LOGGER.info("Backup folder %s done", name)
await self.sys_run_in_executor(_save)
self._data[ATTR_FOLDERS].append(name) self._data[ATTR_FOLDERS].append(name)
# Save folder sequential # Save folder sequential
# avoid issue on slow IO # avoid issue on slow IO
for folder in folder_list: for folder in folder_list:
try: try:
await self.sys_run_in_executor(_folder_save, folder) await _folder_save(folder)
except (tarfile.TarError, OSError) as err: except (tarfile.TarError, OSError) as err:
raise BackupError( raise BackupError(
f"Can't backup folder {folder}: {str(err)}", _LOGGER.error f"Can't backup folder {folder}: {str(err)}", _LOGGER.error
) from err ) from err
async def restore_folders(self, folder_list: list[str]): async def restore_folders(self, folder_list: list[str]) -> bool:
"""Backup Supervisor data into backup.""" """Backup Supervisor data into backup."""
success = True
async def _folder_restore(name: str) -> None: async def _folder_restore(name: str) -> bool:
"""Intenal function to restore a folder.""" """Intenal function to restore a folder."""
slug_name = name.replace("/", "_") slug_name = name.replace("/", "_")
tar_name = Path( tar_name = Path(
@@ -477,14 +510,26 @@ class Backup(CoreSysAttributes):
# Check if exists inside backup # Check if exists inside backup
if not tar_name.exists(): if not tar_name.exists():
_LOGGER.warning("Can't find restore folder %s", name) _LOGGER.warning("Can't find restore folder %s", name)
return return False
# Unmount any mounts within folder
bind_mounts = [
bound.bind_mount
for bound in self.sys_mounts.bound_mounts
if bound.bind_mount.local_where
and bound.bind_mount.local_where.is_relative_to(origin_dir)
]
if bind_mounts:
await asyncio.gather(
*[bind_mount.unmount() for bind_mount in bind_mounts]
)
# Clean old stuff # Clean old stuff
if origin_dir.is_dir(): if origin_dir.is_dir():
await remove_folder(origin_dir, content_only=True) await remove_folder(origin_dir, content_only=True)
# Perform a restore # Perform a restore
def _restore() -> None: def _restore() -> bool:
try: try:
_LOGGER.info("Restore folder %s", name) _LOGGER.info("Restore folder %s", name)
with SecureTarFile( with SecureTarFile(
@@ -494,24 +539,39 @@ class Backup(CoreSysAttributes):
gzip=self.compressed, gzip=self.compressed,
bufsize=BUF_SIZE, bufsize=BUF_SIZE,
) as tar_file: ) as tar_file:
tar_file.extractall(path=origin_dir, members=tar_file) tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
)
_LOGGER.info("Restore folder %s done", name) _LOGGER.info("Restore folder %s done", name)
except (tarfile.TarError, OSError) as err: except (tarfile.TarError, OSError) as err:
_LOGGER.warning("Can't restore folder %s: %s", name, err) _LOGGER.warning("Can't restore folder %s: %s", name, err)
return False
return True
await self.sys_run_in_executor(_restore) try:
return await self.sys_run_in_executor(_restore)
finally:
if bind_mounts:
await asyncio.gather(
*[bind_mount.mount() for bind_mount in bind_mounts]
)
# Restore folder sequential # Restore folder sequential
# avoid issue on slow IO # avoid issue on slow IO
for folder in folder_list: for folder in folder_list:
try: try:
await _folder_restore(folder) success = success and await _folder_restore(folder)
except Exception as err: # pylint: disable=broad-except except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't restore folder %s: %s", folder, err) _LOGGER.warning("Can't restore folder %s: %s", folder, err)
success = False
return success
async def store_homeassistant(self): async def store_homeassistant(self, exclude_database: bool = False):
"""Backup Home Assitant Core configuration folder.""" """Backup Home Assistant Core configuration folder."""
self._data[ATTR_HOMEASSISTANT] = {ATTR_VERSION: self.sys_homeassistant.version} self._data[ATTR_HOMEASSISTANT] = {
ATTR_VERSION: self.sys_homeassistant.version,
ATTR_EXCLUDE_DATABASE: exclude_database,
}
# Backup Home Assistant Core config directory # Backup Home Assistant Core config directory
tar_name = Path( tar_name = Path(
@@ -521,13 +581,13 @@ class Backup(CoreSysAttributes):
tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
) )
await self.sys_homeassistant.backup(homeassistant_file) await self.sys_homeassistant.backup(homeassistant_file, exclude_database)
# Store size # Store size
self.homeassistant[ATTR_SIZE] = homeassistant_file.size self.homeassistant[ATTR_SIZE] = homeassistant_file.size
async def restore_homeassistant(self) -> Awaitable[None]: async def restore_homeassistant(self) -> Awaitable[None]:
"""Restore Home Assitant Core configuration folder.""" """Restore Home Assistant Core configuration folder."""
await self.sys_homeassistant.core.stop() await self.sys_homeassistant.core.stop()
# Restore Home Assistant Core config directory # Restore Home Assistant Core config directory
@@ -538,7 +598,9 @@ class Backup(CoreSysAttributes):
tar_name, "r", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE tar_name, "r", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
) )
await self.sys_homeassistant.restore(homeassistant_file) await self.sys_homeassistant.restore(
homeassistant_file, self.homeassistant_exclude_database
)
# Generate restore task # Generate restore task
async def _core_update(): async def _core_update():
@@ -561,12 +623,12 @@ class Backup(CoreSysAttributes):
"""Store repository list into backup.""" """Store repository list into backup."""
self.repositories = self.sys_store.repository_urls self.repositories = self.sys_store.repository_urls
async def restore_repositories(self, replace: bool = False): def restore_repositories(self, replace: bool = False) -> Awaitable[None]:
"""Restore repositories from backup. """Restore repositories from backup.
Return a coroutine. Return a coroutine.
""" """
await self.sys_store.update_repositories( return self.sys_store.update_repositories(
self.repositories, add_with_errors=True, replace=replace self.repositories, add_with_errors=True, replace=replace
) )
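
Note on the filter="fully_trusted" arguments added above: Python 3.12 introduced tarfile extraction filters (PEP 706) and warns when extractall() is called without one; the default becomes the stricter "data" filter in a later release. Passing "fully_trusted" keeps the pre-3.12 behavior, which fits here since members are already vetted via secure_path. A minimal standalone sketch (file paths are hypothetical, not from this diff):

import tarfile

with tarfile.open("backup.tar") as tar:
    # "fully_trusted" opts out of the new safety checks; the "data" filter
    # would instead reject absolute paths, link escapes, and special files.
    tar.extractall(path="restore_dir", filter="fully_trusted")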

View File

@@ -1,11 +1,38 @@
 """Backup consts."""
-from enum import Enum
+from enum import StrEnum

 BUF_SIZE = 2**20 * 4  # 4MB
+DEFAULT_FREEZE_TIMEOUT = 600


-class BackupType(str, Enum):
+class BackupType(StrEnum):
     """Backup type enum."""

     FULL = "full"
     PARTIAL = "partial"
+
+
+class BackupJobStage(StrEnum):
+    """Backup job stage enum."""
+
+    ADDON_REPOSITORIES = "addon_repositories"
+    ADDONS = "addons"
+    DOCKER_CONFIG = "docker_config"
+    FINISHING_FILE = "finishing_file"
+    FOLDERS = "folders"
+    HOME_ASSISTANT = "home_assistant"
+    AWAIT_ADDON_RESTARTS = "await_addon_restarts"
+
+
+class RestoreJobStage(StrEnum):
+    """Restore job stage enum."""
+
+    ADDON_REPOSITORIES = "addon_repositories"
+    ADDONS = "addons"
+    AWAIT_ADDON_RESTARTS = "await_addon_restarts"
+    AWAIT_HOME_ASSISTANT_RESTART = "await_home_assistant_restart"
+    CHECK_HOME_ASSISTANT = "check_home_assistant"
+    DOCKER_CONFIG = "docker_config"
+    FOLDERS = "folders"
+    HOME_ASSISTANT = "home_assistant"
+    REMOVE_DELTA_ADDONS = "remove_delta_addons"
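
Note on the StrEnum migration seen here and below: enum.StrEnum exists since Python 3.11 and, unlike the old `str, Enum` mixin, its members stringify to their value, so call sites no longer need `.value`. A quick sketch of the difference (class names hypothetical):

from enum import Enum, StrEnum

class OldStyle(str, Enum):
    FULL = "full"

class NewStyle(StrEnum):
    FULL = "full"

print(str(OldStyle.FULL))  # "OldStyle.FULL" on Python 3.11+
print(str(NewStyle.FULL))  # "full" -- the member is usable directly as a string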

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
 import asyncio
 from collections.abc import Awaitable, Iterable
+import errno
 import logging
 from pathlib import Path
@@ -13,44 +14,35 @@ from ..const import (
     FOLDER_HOMEASSISTANT,
     CoreState,
 )
-from ..coresys import CoreSysAttributes
 from ..dbus.const import UnitActiveState
-from ..exceptions import AddonsError
-from ..jobs.decorator import Job, JobCondition
+from ..exceptions import AddonsError, BackupError, BackupInvalidError, BackupJobError
+from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobCondition, JobExecutionLimit
+from ..jobs.decorator import Job
+from ..jobs.job_group import JobGroup
 from ..mounts.mount import Mount
+from ..resolution.const import UnhealthyReason
 from ..utils.common import FileConfiguration
 from ..utils.dt import utcnow
 from ..utils.sentinel import DEFAULT
 from ..utils.sentry import capture_exception
 from .backup import Backup
-from .const import BackupType
+from .const import DEFAULT_FREEZE_TIMEOUT, BackupJobStage, BackupType, RestoreJobStage
 from .utils import create_slug
 from .validate import ALL_FOLDERS, SCHEMA_BACKUPS_CONFIG

 _LOGGER: logging.Logger = logging.getLogger(__name__)


-def _list_backup_files(path: Path) -> Iterable[Path]:
-    """Return iterable of backup files, suppress and log OSError for network mounts."""
-    try:
-        # is_dir does a stat syscall which raises if the mount is down
-        if path.is_dir():
-            return path.glob("*.tar")
-    except OSError as err:
-        _LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
-
-    return []
-
-
-class BackupManager(FileConfiguration, CoreSysAttributes):
+class BackupManager(FileConfiguration, JobGroup):
     """Manage backups."""

     def __init__(self, coresys):
         """Initialize a backup manager."""
         super().__init__(FILE_HASSIO_BACKUPS, SCHEMA_BACKUPS_CONFIG)
-        self.coresys = coresys
-        self._backups = {}
-        self.lock = asyncio.Lock()
+        super(FileConfiguration, self).__init__(coresys, JOB_GROUP_BACKUP_MANAGER)
+        self._backups: dict[str, Backup] = {}
+        self._thaw_task: Awaitable[None] | None = None
+        self._thaw_event: asyncio.Event = asyncio.Event()

     @property
     def list_backups(self) -> set[Backup]:
@@ -76,7 +68,7 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             if mount.state == UnitActiveState.ACTIVE
         ]

-    def get(self, slug):
+    def get(self, slug: str) -> Backup:
         """Return backup object."""
         return self._backups.get(slug)
@@ -90,6 +82,46 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         return self.sys_config.path_backup

+    def _change_stage(
+        self,
+        stage: BackupJobStage | RestoreJobStage,
+        backup: Backup | None = None,
+    ):
+        """Change the stage of the current job during backup/restore.
+
+        Must be called from an existing backup/restore job.
+        """
+        job_name = self.sys_jobs.current.name
+        if "restore" in job_name:
+            action = "Restore"
+        elif "freeze" in job_name:
+            action = "Freeze"
+        elif "thaw" in job_name:
+            action = "Thaw"
+        else:
+            action = "Backup"
+
+        _LOGGER.info(
+            "%s %sstarting stage %s",
+            action,
+            f"{backup.slug} " if backup else "",
+            stage,
+        )
+        self.sys_jobs.current.stage = stage
+
+    def _list_backup_files(self, path: Path) -> Iterable[Path]:
+        """Return iterable of backup files, suppress and log OSError for network mounts."""
+        try:
+            # is_dir does a stat syscall which raises if the mount is down
+            if path.is_dir():
+                return path.glob("*.tar")
+        except OSError as err:
+            if err.errno == errno.EBADMSG and path == self.sys_config.path_backup:
+                self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
+            _LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
+
+        return []
+
     def _create_backup(
         self,
         name: str,
@@ -98,7 +130,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         compressed: bool = True,
         location: Mount | type[DEFAULT] | None = DEFAULT,
     ) -> Backup:
-        """Initialize a new backup object from name."""
+        """Initialize a new backup object from name.
+
+        Must be called from an existing backup job.
+        """
         date_str = utcnow().isoformat()
         slug = create_slug(name, date_str)
         tar_file = Path(self._get_base_path(location), f"{slug}.tar")
@@ -107,19 +142,24 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         backup = Backup(self.coresys, tar_file)
         backup.new(slug, name, date_str, sys_type, password, compressed)

+        # Add backup ID to job
+        self.sys_jobs.current.reference = backup.slug
+
+        self._change_stage(BackupJobStage.ADDON_REPOSITORIES, backup)
         backup.store_repositories()
+        self._change_stage(BackupJobStage.DOCKER_CONFIG, backup)
         backup.store_dockerconfig()
+
         return backup

-    def load(self):
+    def load(self) -> Awaitable[None]:
         """Load exists backups data.

         Return a coroutine.
         """
         return self.reload()

-    async def reload(self):
+    async def reload(self) -> None:
         """Load exists backups."""
         self._backups = {}
@@ -132,14 +172,14 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         tasks = [
             self.sys_create_task(_load_backup(tar_file))
             for path in self.backup_locations
-            for tar_file in _list_backup_files(path)
+            for tar_file in self._list_backup_files(path)
         ]

         _LOGGER.info("Found %d backup files", len(tasks))
         if tasks:
             await asyncio.wait(tasks)

-    def remove(self, backup):
+    def remove(self, backup: Backup) -> bool:
         """Remove a backup."""
         try:
             backup.tarfile.unlink()
@@ -147,12 +187,17 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             _LOGGER.info("Removed backup file %s", backup.slug)

         except OSError as err:
+            if (
+                err.errno == errno.EBADMSG
+                and backup.tarfile.parent == self.sys_config.path_backup
+            ):
+                self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
             _LOGGER.error("Can't remove backup %s: %s", backup.slug, err)
             return False

         return True

-    async def import_backup(self, tar_file):
+    async def import_backup(self, tar_file: Path) -> Backup | None:
         """Check backup tarfile and import it."""
         backup = Backup(self.coresys, tar_file)
@@ -171,6 +216,8 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             backup.tarfile.rename(tar_origin)

         except OSError as err:
+            if err.errno == errno.EBADMSG:
+                self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
             _LOGGER.error("Can't move backup file to storage: %s", err)
             return None
@@ -189,26 +236,39 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         addon_list: list[Addon],
         folder_list: list[str],
         homeassistant: bool,
-    ):
+        homeassistant_exclude_database: bool | None,
+    ) -> Backup | None:
+        """Create a backup.
+
+        Must be called from an existing backup job.
+        """
         addon_start_tasks: list[Awaitable[None]] | None = None
         try:
             self.sys_core.state = CoreState.FREEZE

             async with backup:
                 # Backup add-ons
                 if addon_list:
-                    _LOGGER.info("Backing up %s store Add-ons", backup.slug)
+                    self._change_stage(BackupJobStage.ADDONS, backup)
                     addon_start_tasks = await backup.store_addons(addon_list)

                 # HomeAssistant Folder is for v1
                 if homeassistant:
-                    await backup.store_homeassistant()
+                    self._change_stage(BackupJobStage.HOME_ASSISTANT, backup)
+                    await backup.store_homeassistant(
+                        self.sys_homeassistant.backups_exclude_database
+                        if homeassistant_exclude_database is None
+                        else homeassistant_exclude_database
+                    )

                 # Backup folders
                 if folder_list:
-                    _LOGGER.info("Backing up %s store folders", backup.slug)
+                    self._change_stage(BackupJobStage.FOLDERS, backup)
                     await backup.store_folders(folder_list)

+                self._change_stage(BackupJobStage.FINISHING_FILE, backup)
+
         except Exception as err:  # pylint: disable=broad-except
             _LOGGER.exception("Backup %s error", backup.slug)
             capture_exception(err)
@@ -217,6 +277,7 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             self._backups[backup.slug] = backup

             if addon_start_tasks:
+                self._change_stage(BackupJobStage.AWAIT_ADDON_RESTARTS, backup)
                 # Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
                 await asyncio.gather(*addon_start_tasks, return_exceptions=True)
@@ -224,33 +285,48 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         finally:
             self.sys_core.state = CoreState.RUNNING

-    @Job(conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING])
+    @Job(
+        name="backup_manager_full_backup",
+        conditions=[JobCondition.RUNNING],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
+    )
     async def do_backup_full(
         self,
-        name="",
-        password=None,
-        compressed=True,
+        name: str = "",
+        password: str | None = None,
+        compressed: bool = True,
         location: Mount | type[DEFAULT] | None = DEFAULT,
-    ):
+        homeassistant_exclude_database: bool | None = None,
+    ) -> Backup | None:
         """Create a full backup."""
-        if self.lock.locked():
-            _LOGGER.error("A backup/restore process is already running")
-            return None
+        if self._get_base_path(location) == self.sys_config.path_backup:
+            await Job.check_conditions(
+                self, {JobCondition.FREE_SPACE}, "BackupManager.do_backup_full"
+            )

         backup = self._create_backup(
             name, BackupType.FULL, password, compressed, location
         )

         _LOGGER.info("Creating new full backup with slug %s", backup.slug)
-        async with self.lock:
-            backup = await self._do_backup(
-                backup, self.sys_addons.installed, ALL_FOLDERS, True
-            )
-            if backup:
-                _LOGGER.info("Creating full backup with slug %s completed", backup.slug)
-            return backup
+        backup = await self._do_backup(
+            backup,
+            self.sys_addons.installed,
+            ALL_FOLDERS,
+            True,
+            homeassistant_exclude_database,
+        )
+        if backup:
+            _LOGGER.info("Creating full backup with slug %s completed", backup.slug)
+        return backup

-    @Job(conditions=[JobCondition.FREE_SPACE, JobCondition.RUNNING])
+    @Job(
+        name="backup_manager_partial_backup",
+        conditions=[JobCondition.RUNNING],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
+    )
     async def do_backup_partial(
         self,
         name: str = "",
@@ -260,11 +336,13 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         homeassistant: bool = False,
         compressed: bool = True,
         location: Mount | type[DEFAULT] | None = DEFAULT,
-    ):
+        homeassistant_exclude_database: bool | None = None,
+    ) -> Backup | None:
         """Create a partial backup."""
-        if self.lock.locked():
-            _LOGGER.error("A backup/restore process is already running")
-            return None
+        if self._get_base_path(location) == self.sys_config.path_backup:
+            await Job.check_conditions(
+                self, {JobCondition.FREE_SPACE}, "BackupManager.do_backup_partial"
+            )

         addons = addons or []
         folders = folders or []
@@ -282,21 +360,20 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         )

         _LOGGER.info("Creating new partial backup with slug %s", backup.slug)
-        async with self.lock:
-            addon_list = []
-            for addon_slug in addons:
-                addon = self.sys_addons.get(addon_slug)
-                if addon and addon.is_installed:
-                    addon_list.append(addon)
-                    continue
-                _LOGGER.warning("Add-on %s not found/installed", addon_slug)
-
-            backup = await self._do_backup(backup, addon_list, folders, homeassistant)
-            if backup:
-                _LOGGER.info(
-                    "Creating partial backup with slug %s completed", backup.slug
-                )
-            return backup
+        addon_list = []
+        for addon_slug in addons:
+            addon = self.sys_addons.get(addon_slug)
+            if addon and addon.is_installed:
+                addon_list.append(addon)
+                continue
+            _LOGGER.warning("Add-on %s not found/installed", addon_slug)
+
+        backup = await self._do_backup(
+            backup, addon_list, folders, homeassistant, homeassistant_exclude_database
+        )
+        if backup:
+            _LOGGER.info("Creating partial backup with slug %s completed", backup.slug)
+        return backup

     async def _do_restore(
         self,
@@ -305,28 +382,34 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         folder_list: list[str],
         homeassistant: bool,
         replace: bool,
-    ):
+    ) -> bool:
+        """Restore from a backup.
+
+        Must be called from an existing restore job.
+        """
         addon_start_tasks: list[Awaitable[None]] | None = None
+        success = True

         try:
             task_hass: asyncio.Task | None = None
             async with backup:
                 # Restore docker config
-                _LOGGER.info("Restoring %s Docker config", backup.slug)
+                self._change_stage(RestoreJobStage.DOCKER_CONFIG, backup)
                 backup.restore_dockerconfig(replace)

                 # Process folders
                 if folder_list:
-                    _LOGGER.info("Restoring %s folders", backup.slug)
-                    await backup.restore_folders(folder_list)
+                    self._change_stage(RestoreJobStage.FOLDERS, backup)
+                    success = await backup.restore_folders(folder_list)

                 # Process Home-Assistant
                 if homeassistant:
-                    _LOGGER.info("Restoring %s Home Assistant Core", backup.slug)
+                    self._change_stage(RestoreJobStage.HOME_ASSISTANT, backup)
                    task_hass = await backup.restore_homeassistant()

                 # Delete delta add-ons
                 if replace:
-                    _LOGGER.info("Removing Add-ons not in the backup %s", backup.slug)
+                    self._change_stage(RestoreJobStage.REMOVE_DELTA_ADDONS, backup)
                     for addon in self.sys_addons.installed:
                         if addon.slug in backup.addon_list:
                             continue
@@ -334,97 +417,123 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
                         # Remove Add-on because it's not a part of the new env
                         # Do it sequential avoid issue on slow IO
                         try:
-                            await addon.uninstall()
+                            await self.sys_addons.uninstall(addon.slug)
                         except AddonsError:
                             _LOGGER.warning("Can't uninstall Add-on %s", addon.slug)
+                            success = False

                 if addon_list:
-                    _LOGGER.info("Restoring %s Repositories", backup.slug)
+                    self._change_stage(RestoreJobStage.ADDON_REPOSITORIES, backup)
                     await backup.restore_repositories(replace)

-                    _LOGGER.info("Restoring %s Add-ons", backup.slug)
-                    addon_start_tasks = await backup.restore_addons(addon_list)
+                    self._change_stage(RestoreJobStage.ADDONS, backup)
+                    restore_success, addon_start_tasks = await backup.restore_addons(
+                        addon_list
+                    )
+                    success = success and restore_success

                 # Wait for Home Assistant Core update/downgrade
                 if task_hass:
-                    _LOGGER.info("Restore %s wait for Home-Assistant", backup.slug)
+                    self._change_stage(
+                        RestoreJobStage.AWAIT_HOME_ASSISTANT_RESTART, backup
+                    )
                     await task_hass

+        except BackupError:
+            raise
         except Exception as err:  # pylint: disable=broad-except
             _LOGGER.exception("Restore %s error", backup.slug)
             capture_exception(err)
-            return False
+            raise BackupError(
+                f"Restore {backup.slug} error, check logs for details"
+            ) from err
         else:
             if addon_start_tasks:
-                # Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
-                await asyncio.gather(*addon_start_tasks, return_exceptions=True)
+                self._change_stage(RestoreJobStage.AWAIT_ADDON_RESTARTS, backup)
+                # Failure to resume addons post restore is still a restore failure
+                if any(
+                    await asyncio.gather(*addon_start_tasks, return_exceptions=True)
+                ):
+                    return False

-            return True
+            return success
         finally:
-            # Do we need start Home Assistant Core?
-            if not await self.sys_homeassistant.core.is_running():
-                await self.sys_homeassistant.core.start()
-
-            # Check If we can access to API / otherwise restart
-            if not await self.sys_homeassistant.api.check_api_state():
-                _LOGGER.warning("Need restart HomeAssistant for API")
-                await self.sys_homeassistant.core.restart()
+            # Leave Home Assistant alone if it wasn't part of the restore
+            if homeassistant:
+                self._change_stage(RestoreJobStage.CHECK_HOME_ASSISTANT, backup)
+
+                # Do we need start Home Assistant Core?
+                if not await self.sys_homeassistant.core.is_running():
+                    await self.sys_homeassistant.core.start()
+
+                # Check If we can access to API / otherwise restart
+                if not await self.sys_homeassistant.api.check_api_state():
+                    _LOGGER.warning("Need restart HomeAssistant for API")
+                    await self.sys_homeassistant.core.restart()

     @Job(
+        name="backup_manager_full_restore",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.HEALTHY,
             JobCondition.INTERNET_HOST,
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
-        ]
+        ],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
     )
-    async def do_restore_full(self, backup: Backup, password=None):
+    async def do_restore_full(
+        self, backup: Backup, password: str | None = None
+    ) -> bool:
         """Restore a backup."""
-        if self.lock.locked():
-            _LOGGER.error("A backup/restore process is already running")
-            return False
+        # Add backup ID to job
+        self.sys_jobs.current.reference = backup.slug

         if backup.sys_type != BackupType.FULL:
-            _LOGGER.error("%s is only a partial backup!", backup.slug)
-            return False
+            raise BackupInvalidError(
+                f"{backup.slug} is only a partial backup!", _LOGGER.error
+            )

         if backup.protected and not backup.set_password(password):
-            _LOGGER.error("Invalid password for backup %s", backup.slug)
-            return False
+            raise BackupInvalidError(
+                f"Invalid password for backup {backup.slug}", _LOGGER.error
+            )

         if backup.supervisor_version > self.sys_supervisor.version:
-            _LOGGER.error(
-                "Backup was made on supervisor version %s, can't restore on %s. Must update supervisor first.",
-                backup.supervisor_version,
-                self.sys_supervisor.version,
-            )
-            return False
+            raise BackupInvalidError(
+                f"Backup was made on supervisor version {backup.supervisor_version}, "
+                f"can't restore on {self.sys_supervisor.version}. Must update supervisor first.",
+                _LOGGER.error,
+            )

         _LOGGER.info("Full-Restore %s start", backup.slug)
-        async with self.lock:
-            self.sys_core.state = CoreState.FREEZE
+        self.sys_core.state = CoreState.FREEZE
+        try:
             # Stop Home-Assistant / Add-ons
             await self.sys_core.shutdown()

             success = await self._do_restore(
                 backup, backup.addon_list, backup.folders, True, True
             )
+        finally:
             self.sys_core.state = CoreState.RUNNING

         if success:
             _LOGGER.info("Full-Restore %s done", backup.slug)
+        return success

     @Job(
+        name="backup_manager_partial_restore",
         conditions=[
             JobCondition.FREE_SPACE,
             JobCondition.HEALTHY,
             JobCondition.INTERNET_HOST,
             JobCondition.INTERNET_SYSTEM,
             JobCondition.RUNNING,
-        ]
+        ],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
    )
     async def do_restore_partial(
         self,
@@ -433,11 +542,10 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
         addons: list[str] | None = None,
         folders: list[Path] | None = None,
         password: str | None = None,
-    ):
+    ) -> bool:
         """Restore a backup."""
-        if self.lock.locked():
-            _LOGGER.error("A backup/restore process is already running")
-            return False
+        # Add backup ID to job
+        self.sys_jobs.current.reference = backup.slug

         addon_list = addons or []
         folder_list = folders or []
@@ -448,30 +556,118 @@ class BackupManager(FileConfiguration, CoreSysAttributes):
             homeassistant = True

         if backup.protected and not backup.set_password(password):
-            _LOGGER.error("Invalid password for backup %s", backup.slug)
-            return False
+            raise BackupInvalidError(
+                f"Invalid password for backup {backup.slug}", _LOGGER.error
+            )

         if backup.homeassistant is None and homeassistant:
-            _LOGGER.error("No Home Assistant Core data inside the backup")
-            return False
+            raise BackupInvalidError(
+                "No Home Assistant Core data inside the backup", _LOGGER.error
+            )

         if backup.supervisor_version > self.sys_supervisor.version:
-            _LOGGER.error(
-                "Backup was made on supervisor version %s, can't restore on %s. Must update supervisor first.",
-                backup.supervisor_version,
-                self.sys_supervisor.version,
-            )
-            return False
+            raise BackupInvalidError(
+                f"Backup was made on supervisor version {backup.supervisor_version}, "
+                f"can't restore on {self.sys_supervisor.version}. Must update supervisor first.",
+                _LOGGER.error,
+            )

         _LOGGER.info("Partial-Restore %s start", backup.slug)
-        async with self.lock:
-            self.sys_core.state = CoreState.FREEZE
+        self.sys_core.state = CoreState.FREEZE
+        try:
             success = await self._do_restore(
                 backup, addon_list, folder_list, homeassistant, False
             )
+        finally:
             self.sys_core.state = CoreState.RUNNING

         if success:
             _LOGGER.info("Partial-Restore %s done", backup.slug)
+        return success
+
+    @Job(
+        name="backup_manager_freeze_all",
+        conditions=[JobCondition.RUNNING],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
+    )
+    async def freeze_all(self, timeout: float = DEFAULT_FREEZE_TIMEOUT) -> None:
+        """Freeze system to prepare for an external backup such as an image snapshot."""
+        self.sys_core.state = CoreState.FREEZE
+
+        # Determine running addons
+        installed = self.sys_addons.installed.copy()
+        is_running: list[bool] = await asyncio.gather(
+            *[addon.is_running() for addon in installed]
+        )
+        running_addons = [
+            installed[ind] for ind in range(len(installed)) if is_running[ind]
+        ]
+
+        # Create thaw task first to ensure we eventually undo freezes even if the below fails
+        self._thaw_task = asyncio.shield(
+            self.sys_create_task(self._thaw_all(running_addons, timeout))
+        )
+
+        # Tell Home Assistant to freeze for a backup
+        self._change_stage(BackupJobStage.HOME_ASSISTANT)
+        await self.sys_homeassistant.begin_backup()
+
+        # Run all pre-backup tasks for addons
+        self._change_stage(BackupJobStage.ADDONS)
+        await asyncio.gather(*[addon.begin_backup() for addon in running_addons])
+
+    @Job(
+        name="backup_manager_thaw_all",
+        conditions=[JobCondition.FROZEN],
+        on_condition=BackupJobError,
+    )
+    async def _thaw_all(
+        self, running_addons: list[Addon], timeout: float = DEFAULT_FREEZE_TIMEOUT
+    ) -> None:
+        """Thaw system after user signal or timeout."""
+        try:
+            try:
+                await asyncio.wait_for(self._thaw_event.wait(), timeout)
+            except TimeoutError:
+                _LOGGER.warning(
+                    "Timeout waiting for signal to thaw after manual freeze, beginning thaw now"
+                )
+
+            self._change_stage(BackupJobStage.HOME_ASSISTANT)
+            await self.sys_homeassistant.end_backup()
+
+            self._change_stage(BackupJobStage.ADDONS)
+            addon_start_tasks: list[asyncio.Task] = [
+                task
+                for task in await asyncio.gather(
+                    *[addon.end_backup() for addon in running_addons]
+                )
+                if task
+            ]
+        finally:
+            self.sys_core.state = CoreState.RUNNING
+            self._thaw_event.clear()
+            self._thaw_task = None
+
+        if addon_start_tasks:
+            self._change_stage(BackupJobStage.AWAIT_ADDON_RESTARTS)
+            await asyncio.gather(*addon_start_tasks, return_exceptions=True)
+
+    @Job(
+        name="backup_manager_signal_thaw",
+        conditions=[JobCondition.FROZEN],
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=BackupJobError,
+        internal=True,
+    )
+    async def thaw_all(self) -> None:
+        """Signal thaw task to begin unfreezing the system."""
+        if not self._thaw_task:
+            raise BackupError(
+                "Freeze was not initiated by freeze API, cannot thaw this way"
+            )
+
+        self._thaw_event.set()
+        await self._thaw_task
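
How the new freeze/thaw pair is meant to be driven, as a hedged sketch (the caller and take_external_snapshot are hypothetical, not part of this diff): freeze_all() quiesces Home Assistant and running add-ons and arms a shielded _thaw_all task that fires after the timeout even if no signal arrives; thaw_all() sets the event and waits for that task.

async def snapshot_with_freeze(backups: BackupManager) -> None:
    # Quiesce HA + add-ons; a background thaw is armed with a 300s fallback
    await backups.freeze_all(timeout=300)
    try:
        await take_external_snapshot()  # hypothetical disk/VM snapshot step
    finally:
        # Signal the shielded thaw task and wait for it to complete
        await backups.thaw_all()

The asyncio.shield() around the thaw task means a cancelled caller cannot cancel the thaw itself, so the system is always unfrozen eventually.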

View File

@@ -14,6 +14,7 @@ from ..const import (
     ATTR_DATE,
     ATTR_DAYS_UNTIL_STALE,
     ATTR_DOCKER,
+    ATTR_EXCLUDE_DATABASE,
     ATTR_FOLDERS,
     ATTR_HOMEASSISTANT,
     ATTR_NAME,
@@ -103,6 +104,9 @@ SCHEMA_BACKUP = vol.Schema(
             {
                 vol.Required(ATTR_VERSION): version_tag,
                 vol.Optional(ATTR_SIZE, default=0): vol.Coerce(float),
+                vol.Optional(
+                    ATTR_EXCLUDE_DATABASE, default=False
+                ): vol.Boolean(),
             },
             extra=vol.REMOVE_EXTRA,
         )
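
Because the new key is vol.Optional with a default, metadata written by older Supervisors still validates and simply reports the database as included. A minimal sketch of that voluptuous behavior, reduced to the new key only:

import voluptuous as vol

schema = vol.Schema({vol.Optional("exclude_database", default=False): vol.Boolean()})

assert schema({}) == {"exclude_database": False}  # old metadata: default applied
assert schema({"exclude_database": True})["exclude_database"] is True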

View File

@@ -6,7 +6,7 @@ import signal
 from colorlog import ColoredFormatter

-from .addons import AddonManager
+from .addons.manager import AddonManager
 from .api import RestAPI
 from .arch import CpuArch
 from .auth import Auth
@@ -221,6 +221,14 @@ def initialize_system(coresys: CoreSys) -> None:
         )
         config.path_emergency.mkdir()

+    # Addon Configs folder
+    if not config.path_addon_configs.is_dir():
+        _LOGGER.debug(
+            "Creating Supervisor add-on configs folder at '%s'",
+            config.path_addon_configs,
+        )
+        config.path_addon_configs.mkdir()
+

 def migrate_system_env(coresys: CoreSys) -> None:
     """Cleanup some stuff after update."""

View File

@@ -1,5 +1,5 @@
 """Bootstrap Supervisor."""
-from datetime import datetime
+from datetime import UTC, datetime
 import logging
 import os
 from pathlib import Path, PurePath
@@ -48,8 +48,9 @@ MEDIA_DATA = PurePath("media")
 MOUNTS_FOLDER = PurePath("mounts")
 MOUNTS_CREDENTIALS = PurePath(".mounts_credentials")
 EMERGENCY_DATA = PurePath("emergency")
+ADDON_CONFIGS = PurePath("addon_configs")

-DEFAULT_BOOT_TIME = datetime.utcfromtimestamp(0).isoformat()
+DEFAULT_BOOT_TIME = datetime.fromtimestamp(0, UTC).isoformat()

 # We filter out UTC because it's the system default fallback
 # Core also not respect the cotnainer timezone and reset timezones
@@ -153,7 +154,7 @@ class CoreConfig(FileConfiguration):
     def modify_log_level(self) -> None:
         """Change log level."""
-        lvl = getattr(logging, str(self.logging.value).upper())
+        lvl = getattr(logging, self.logging.value.upper())
         logging.getLogger("supervisor").setLevel(lvl)

     @property
@@ -163,7 +164,7 @@ class CoreConfig(FileConfiguration):
         boot_time = parse_datetime(boot_str)
         if not boot_time:
-            return datetime.utcfromtimestamp(1)
+            return datetime.fromtimestamp(1, UTC)
         return boot_time

     @last_boot.setter
@@ -231,6 +232,16 @@ class CoreConfig(FileConfiguration):
         """Return root add-on data folder external for Docker."""
         return PurePath(self.path_extern_supervisor, ADDONS_DATA)

+    @property
+    def path_addon_configs(self) -> Path:
+        """Return root Add-on configs folder."""
+        return self.path_supervisor / ADDON_CONFIGS
+
+    @property
+    def path_extern_addon_configs(self) -> PurePath:
+        """Return root Add-on configs folder external for Docker."""
+        return PurePath(self.path_extern_supervisor, ADDON_CONFIGS)
+
     @property
     def path_audio(self) -> Path:
         """Return root audio data folder."""

View File

@@ -1,8 +1,10 @@
 """Constants file for Supervisor."""
-from enum import Enum
+from dataclasses import dataclass
+from enum import StrEnum
 from ipaddress import ip_network
 from pathlib import Path
 from sys import version_info as systemversion
+from typing import Self

 from aiohttp import __version__ as aiohttpversion
@@ -18,6 +20,7 @@ SUPERVISOR_DATA = Path("/data")
 FILE_HASSIO_ADDONS = Path(SUPERVISOR_DATA, "addons.json")
 FILE_HASSIO_AUTH = Path(SUPERVISOR_DATA, "auth.json")
 FILE_HASSIO_BACKUPS = Path(SUPERVISOR_DATA, "backups.json")
+FILE_HASSIO_BOARD = Path(SUPERVISOR_DATA, "board.json")
 FILE_HASSIO_CONFIG = Path(SUPERVISOR_DATA, "config.json")
 FILE_HASSIO_DISCOVERY = Path(SUPERVISOR_DATA, "discovery.json")
 FILE_HASSIO_DOCKER = Path(SUPERVISOR_DATA, "docker.json")
@@ -69,6 +72,9 @@ JSON_RESULT = "result"
 RESULT_ERROR = "error"
 RESULT_OK = "ok"

+HEADER_REMOTE_USER_ID = "X-Remote-User-Id"
+HEADER_REMOTE_USER_NAME = "X-Remote-User-Name"
+HEADER_REMOTE_USER_DISPLAY_NAME = "X-Remote-User-Display-Name"
 HEADER_TOKEN_OLD = "X-Hassio-Key"
 HEADER_TOKEN = "X-Supervisor-Token"
@@ -84,6 +90,7 @@ REQUEST_FROM = "HASSIO_FROM"
 ATTR_ACCESS_TOKEN = "access_token"
 ATTR_ACCESSPOINTS = "accesspoints"
 ATTR_ACTIVE = "active"
+ATTR_ACTIVITY_LED = "activity_led"
 ATTR_ADDON = "addon"
 ATTR_ADDONS = "addons"
 ATTR_ADDONS_CUSTOM_LIST = "addons_custom_list"
@@ -109,6 +116,7 @@ ATTR_BACKUP_EXCLUDE = "backup_exclude"
 ATTR_BACKUP_POST = "backup_post"
 ATTR_BACKUP_PRE = "backup_pre"
 ATTR_BACKUPS = "backups"
+ATTR_BACKUPS_EXCLUDE_DATABASE = "backups_exclude_database"
 ATTR_BLK_READ = "blk_read"
 ATTR_BLK_WRITE = "blk_write"
 ATTR_BOARD = "board"
@@ -148,9 +156,11 @@ ATTR_DIAGNOSTICS = "diagnostics"
 ATTR_DISCOVERY = "discovery"
 ATTR_DISK = "disk"
 ATTR_DISK_FREE = "disk_free"
+ATTR_DISK_LED = "disk_led"
 ATTR_DISK_LIFE_TIME = "disk_life_time"
 ATTR_DISK_TOTAL = "disk_total"
 ATTR_DISK_USED = "disk_used"
+ATTR_DISPLAYNAME = "displayname"
 ATTR_DNS = "dns"
 ATTR_DOCKER = "docker"
 ATTR_DOCKER_API = "docker_api"
@@ -160,6 +170,7 @@ ATTR_ENABLE = "enable"
 ATTR_ENABLED = "enabled"
 ATTR_ENVIRONMENT = "environment"
 ATTR_EVENT = "event"
+ATTR_EXCLUDE_DATABASE = "exclude_database"
 ATTR_FEATURES = "features"
 ATTR_FILENAME = "filename"
 ATTR_FLAGS = "flags"
@@ -173,7 +184,9 @@ ATTR_HASSIO_API = "hassio_api"
 ATTR_HASSIO_ROLE = "hassio_role"
 ATTR_HASSOS = "hassos"
 ATTR_HEALTHY = "healthy"
+ATTR_HEARTBEAT_LED = "heartbeat_led"
 ATTR_HOMEASSISTANT = "homeassistant"
+ATTR_HOMEASSISTANT_EXCLUDE_DATABASE = "homeassistant_exclude_database"
 ATTR_HOMEASSISTANT_API = "homeassistant_api"
 ATTR_HOST = "host"
 ATTR_HOST_DBUS = "host_dbus"
@@ -248,6 +261,7 @@ ATTR_PLUGINS = "plugins"
 ATTR_PORT = "port"
 ATTR_PORTS = "ports"
 ATTR_PORTS_DESCRIPTION = "ports_description"
+ATTR_POWER_LED = "power_led"
 ATTR_PREFIX = "prefix"
 ATTR_PRIMARY = "primary"
 ATTR_PRIORITY = "priority"
@@ -271,6 +285,9 @@ ATTR_SERVERS = "servers"
 ATTR_SERVICE = "service"
 ATTR_SERVICES = "services"
 ATTR_SESSION = "session"
+ATTR_SESSION_DATA = "session_data"
+ATTR_SESSION_DATA_USER = "user"
+ATTR_SESSION_DATA_USER_ID = "user_id"
 ATTR_SIGNAL = "signal"
 ATTR_SIZE = "size"
 ATTR_SLUG = "slug"
@@ -308,6 +325,7 @@ ATTR_UPDATE_KEY = "update_key"
 ATTR_URL = "url"
 ATTR_USB = "usb"
 ATTR_USER = "user"
+ATTR_USER_LED = "user_led"
 ATTR_USERNAME = "username"
 ATTR_UUID = "uuid"
 ATTR_VALID = "valid"
@@ -327,14 +345,6 @@ PROVIDE_SERVICE = "provide"
 NEED_SERVICE = "need"
 WANT_SERVICE = "want"

-MAP_CONFIG = "config"
-MAP_SSL = "ssl"
-MAP_ADDONS = "addons"
-MAP_BACKUP = "backup"
-MAP_SHARE = "share"
-MAP_MEDIA = "media"
-
 ARCH_ARMHF = "armhf"
 ARCH_ARMV7 = "armv7"
 ARCH_AARCH64 = "aarch64"
@@ -367,14 +377,14 @@ ROLE_ADMIN = "admin"
 ROLE_ALL = [ROLE_DEFAULT, ROLE_HOMEASSISTANT, ROLE_BACKUP, ROLE_MANAGER, ROLE_ADMIN]


-class AddonBoot(str, Enum):
+class AddonBoot(StrEnum):
     """Boot mode for the add-on."""

     AUTO = "auto"
     MANUAL = "manual"


-class AddonStartup(str, Enum):
+class AddonStartup(StrEnum):
     """Startup types of Add-on."""

     INITIALIZE = "initialize"
@@ -384,7 +394,7 @@ class AddonStartup(str, Enum):
     ONCE = "once"


-class AddonStage(str, Enum):
+class AddonStage(StrEnum):
     """Stage types of add-on."""

     STABLE = "stable"
@@ -392,7 +402,7 @@ class AddonStage(str, Enum):
     DEPRECATED = "deprecated"


-class AddonState(str, Enum):
+class AddonState(StrEnum):
     """State of add-on."""

     STARTUP = "startup"
@@ -402,7 +412,7 @@ class AddonState(str, Enum):
     ERROR = "error"


-class UpdateChannel(str, Enum):
+class UpdateChannel(StrEnum):
     """Core supported update channels."""

     STABLE = "stable"
@@ -410,7 +420,7 @@ class UpdateChannel(str, Enum):
     DEV = "dev"


-class CoreState(str, Enum):
+class CoreState(StrEnum):
     """Represent current loading state."""

     INITIALIZE = "initialize"
@@ -423,7 +433,7 @@ class CoreState(str, Enum):
     CLOSE = "close"


-class LogLevel(str, Enum):
+class LogLevel(StrEnum):
     """Logging level of system."""

     DEBUG = "debug"
@@ -433,7 +443,7 @@ class LogLevel(str, Enum):
     CRITICAL = "critical"


-class HostFeature(str, Enum):
+class HostFeature(StrEnum):
     """Host feature."""

     HASSOS = "hassos"
@@ -445,15 +455,16 @@ class HostFeature(str, Enum):
     TIMEDATE = "timedate"


-class BusEvent(str, Enum):
+class BusEvent(StrEnum):
     """Bus event type."""

     HARDWARE_NEW_DEVICE = "hardware_new_device"
     HARDWARE_REMOVE_DEVICE = "hardware_remove_device"
     DOCKER_CONTAINER_STATE_CHANGE = "docker_container_state_change"
+    SUPERVISOR_STATE_CHANGE = "supervisor_state_change"


-class CpuArch(str, Enum):
+class CpuArch(StrEnum):
     """Supported CPU architectures."""

     ARMV7 = "armv7"
@@ -461,3 +472,52 @@ class CpuArch(str, Enum):
     AARCH64 = "aarch64"
     I386 = "i386"
     AMD64 = "amd64"
+
+
+@dataclass
+class IngressSessionDataUser:
+    """Format of an IngressSessionDataUser object."""
+
+    id: str
+    display_name: str | None = None
+    username: str | None = None
+
+    def to_dict(self) -> dict[str, str | None]:
+        """Get dictionary representation."""
+        return {
+            ATTR_ID: self.id,
+            ATTR_DISPLAYNAME: self.display_name,
+            ATTR_USERNAME: self.username,
+        }
+
+    @classmethod
+    def from_dict(cls, data: dict[str, str | None]) -> Self:
+        """Return object from dictionary representation."""
+        return cls(
+            id=data[ATTR_ID],
+            display_name=data.get(ATTR_DISPLAYNAME),
+            username=data.get(ATTR_USERNAME),
+        )
+
+
+@dataclass
+class IngressSessionData:
+    """Format of an IngressSessionData object."""
+
+    user: IngressSessionDataUser
+
+    def to_dict(self) -> dict[str, dict[str, str | None]]:
+        """Get dictionary representation."""
+        return {ATTR_USER: self.user.to_dict()}
+
+    @classmethod
+    def from_dict(cls, data: dict[str, dict[str, str | None]]) -> Self:
+        """Return object from dictionary representation."""
+        return cls(user=IngressSessionDataUser.from_dict(data[ATTR_USER]))
+
+
+STARTING_STATES = [
+    CoreState.INITIALIZE,
+    CoreState.STARTUP,
+    CoreState.SETUP,
+]
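
A round-trip sketch for the new ingress session dataclasses (field values hypothetical): to_dict() emits the ATTR_* constant keys, and default dataclass equality makes the round trip checkable.

user = IngressSessionDataUser(id="abc123", display_name="Jane", username="jane")
data = IngressSessionData(user=user)

payload = data.to_dict()  # {"user": {"id": "abc123", "displayname": "Jane", "username": "jane"}}
assert IngressSessionData.from_dict(payload) == data  # dataclasses compare by field values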

View File

@@ -7,7 +7,14 @@ import logging
import async_timeout import async_timeout
from .const import RUN_SUPERVISOR_STATE, AddonStartup, CoreState from .const import (
ATTR_STARTUP,
RUN_SUPERVISOR_STATE,
STARTING_STATES,
AddonStartup,
BusEvent,
CoreState,
)
from .coresys import CoreSys, CoreSysAttributes from .coresys import CoreSys, CoreSysAttributes
from .exceptions import ( from .exceptions import (
HassioError, HassioError,
@@ -21,7 +28,7 @@ from .homeassistant.core import LANDINGPAGE
from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from .utils.dt import utcnow from .utils.dt import utcnow
from .utils.sentry import capture_exception from .utils.sentry import capture_exception
from .utils.whoami import retrieve_whoami from .utils.whoami import WhoamiData, retrieve_whoami
_LOGGER: logging.Logger = logging.getLogger(__name__) _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -56,16 +63,23 @@ class Core(CoreSysAttributes):
if self._state == new_state: if self._state == new_state:
return return
try: try:
RUN_SUPERVISOR_STATE.write_text(new_state.value, encoding="utf-8") RUN_SUPERVISOR_STATE.write_text(new_state, encoding="utf-8")
except OSError as err: except OSError as err:
_LOGGER.warning( _LOGGER.warning(
"Can't update the Supervisor state to %s: %s", new_state, err "Can't update the Supervisor state to %s: %s", new_state, err
) )
finally: finally:
self._state = new_state self._state = new_state
self.sys_homeassistant.websocket.supervisor_update_event(
"info", {"state": new_state} # Don't attempt to notify anyone on CLOSE as we're about to stop the event loop
) if new_state != CoreState.CLOSE:
self.sys_bus.fire_event(BusEvent.SUPERVISOR_STATE_CHANGE, new_state)
# These will be received by HA after startup has completed which won't make sense
if new_state not in STARTING_STATES:
self.sys_homeassistant.websocket.supervisor_update_event(
"info", {"state": new_state}
)
async def connect(self): async def connect(self):
"""Connect Supervisor container.""" """Connect Supervisor container."""
@@ -119,12 +133,12 @@ class Core(CoreSysAttributes):
self._adjust_system_datetime(), self._adjust_system_datetime(),
# Load mounts # Load mounts
self.sys_mounts.load(), self.sys_mounts.load(),
# Start docker monitoring # Load Docker manager
self.sys_docker.load(), self.sys_docker.load(),
# Load Plugins container
self.sys_plugins.load(),
# load last available data # load last available data
self.sys_updater.load(), self.sys_updater.load(),
# Load Plugins container
self.sys_plugins.load(),
# Load Home Assistant # Load Home Assistant
self.sys_homeassistant.load(), self.sys_homeassistant.load(),
# Load CPU/Arch # Load CPU/Arch
@@ -236,7 +250,7 @@ class Core(CoreSysAttributes):
except HomeAssistantError as err: except HomeAssistantError as err:
capture_exception(err) capture_exception(err)
else: else:
_LOGGER.info("Skiping start of Home Assistant") _LOGGER.info("Skipping start of Home Assistant")
# Core is not running # Core is not running
if self.sys_homeassistant.core.error_state: if self.sys_homeassistant.core.error_state:
@@ -266,7 +280,9 @@ class Core(CoreSysAttributes):
self.sys_create_task(self.sys_resolution.healthcheck()) self.sys_create_task(self.sys_resolution.healthcheck())
self.state = CoreState.RUNNING self.state = CoreState.RUNNING
self.sys_homeassistant.websocket.supervisor_update_event("supervisor", {}) self.sys_homeassistant.websocket.supervisor_update_event(
"supervisor", {ATTR_STARTUP: "complete"}
)
_LOGGER.info("Supervisor is up and running") _LOGGER.info("Supervisor is up and running")
async def stop(self): async def stop(self):
@@ -293,7 +309,7 @@ class Core(CoreSysAttributes):
) )
] ]
) )
except asyncio.TimeoutError: except TimeoutError:
_LOGGER.warning("Stage 1: Force Shutdown!") _LOGGER.warning("Stage 1: Force Shutdown!")
# Stage 2 # Stage 2
@@ -310,7 +326,7 @@ class Core(CoreSysAttributes):
) )
] ]
) )
except asyncio.TimeoutError: except TimeoutError:
_LOGGER.warning("Stage 2: Force Shutdown!") _LOGGER.warning("Stage 2: Force Shutdown!")
self.state = CoreState.CLOSE self.state = CoreState.CLOSE
@@ -347,6 +363,13 @@ class Core(CoreSysAttributes):
self.sys_config.last_boot = self.sys_hardware.helper.last_boot self.sys_config.last_boot = self.sys_hardware.helper.last_boot
self.sys_config.save_data() self.sys_config.save_data()
async def _retrieve_whoami(self, with_ssl: bool) -> WhoamiData | None:
try:
return await retrieve_whoami(self.sys_websession, with_ssl)
except WhoamiSSLError:
_LOGGER.info("Whoami service SSL error")
return None
async def _adjust_system_datetime(self): async def _adjust_system_datetime(self):
"""Adjust system time/date on startup.""" """Adjust system time/date on startup."""
# If no timezone is detected or set # If no timezone is detected or set
@@ -359,21 +382,15 @@ class Core(CoreSysAttributes):
# Get Timezone data # Get Timezone data
try: try:
data = await retrieve_whoami(self.sys_websession) data = await self._retrieve_whoami(True)
except WhoamiSSLError:
pass # SSL Date Issue & possible time drift
if not data:
data = await self._retrieve_whoami(False)
except WhoamiError as err: except WhoamiError as err:
_LOGGER.warning("Can't adjust Time/Date settings: %s", err) _LOGGER.warning("Can't adjust Time/Date settings: %s", err)
return return
# SSL Date Issue & possible time drift
if not data:
try:
data = await retrieve_whoami(self.sys_websession, with_ssl=False)
except WhoamiError as err:
_LOGGER.error("Can't adjust Time/Date settings: %s", err)
return
self.sys_config.timezone = self.sys_config.timezone or data.timezone self.sys_config.timezone = self.sys_config.timezone or data.timezone
# Calculate if system time is out of sync # Calculate if system time is out of sync
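The whoami hunk above folds the SSL fallback into a helper: only WhoamiSSLError (typically a certificate failure caused by clock drift) is swallowed, so the caller can retry once without SSL while every other WhoamiError still aborts. A self-contained sketch with stubbed names (the real retrieve_whoami lives in Supervisor's utils):

    import asyncio

    class WhoamiError(Exception):
        """Base whoami failure."""

    class WhoamiSSLError(WhoamiError):
        """SSL handshake/validation failure."""

    async def retrieve_whoami(session, with_ssl: bool) -> dict:
        # Stub: pretend the TLS path fails because the clock is wrong.
        if with_ssl:
            raise WhoamiSSLError("certificate not yet valid")
        return {"timezone": "UTC"}

    async def _retrieve_whoami(session, with_ssl: bool) -> dict | None:
        try:
            return await retrieve_whoami(session, with_ssl)
        except WhoamiSSLError:
            return None  # signal "retry without SSL" to the caller

    async def main() -> None:
        try:
            data = await _retrieve_whoami(None, True)
            if not data:  # SSL date issue and possible time drift
                data = await _retrieve_whoami(None, False)
        except WhoamiError as err:
            print(f"Can't adjust Time/Date settings: {err}")
            return
        print(data)

    asyncio.run(main())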

View File

@@ -3,7 +3,9 @@ from __future__ import annotations
import asyncio import asyncio
from collections.abc import Callable, Coroutine from collections.abc import Callable, Coroutine
from contextvars import Context, copy_context
from datetime import datetime from datetime import datetime
from functools import partial
import logging import logging
import os import os
from types import MappingProxyType from types import MappingProxyType
@@ -16,7 +18,7 @@ from .const import ENV_SUPERVISOR_DEV, SERVER_SOFTWARE
from .utils.dt import UTC, get_time_zone from .utils.dt import UTC, get_time_zone
if TYPE_CHECKING: if TYPE_CHECKING:
from .addons import AddonManager from .addons.manager import AddonManager
from .api import RestAPI from .api import RestAPI
from .arch import CpuArch from .arch import CpuArch
from .auth import Auth from .auth import Auth
@@ -98,6 +100,9 @@ class CoreSys:
{aiohttp.hdrs.USER_AGENT: SERVER_SOFTWARE} {aiohttp.hdrs.USER_AGENT: SERVER_SOFTWARE}
) )
# Task factory attributes
self._set_task_context: list[Callable[[Context], Context]] = []
@property @property
def dev(self) -> bool: def dev(self) -> bool:
"""Return True if we run dev mode.""" """Return True if we run dev mode."""
@@ -519,15 +524,33 @@ class CoreSys:
"""Return now in local timezone.""" """Return now in local timezone."""
return datetime.now(get_time_zone(self.timezone) or UTC) return datetime.now(get_time_zone(self.timezone) or UTC)
def add_set_task_context_callback(
self, callback: Callable[[Context], Context]
) -> None:
"""Add callback used to modify context prior to creating a task.
Only used for tasks created via CoreSys.create_task. Callback can modify the provided
context using context.run (ex. `context.run(var.set, "new_value")`). Callback should
return the context to be provided to task.
"""
self._set_task_context.append(callback)
def run_in_executor( def run_in_executor(
self, funct: Callable[..., T], *args: Any self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
) -> Coroutine[Any, Any, T]: ) -> Coroutine[Any, Any, T]:
"""Add an job to the executor pool.""" """Add an job to the executor pool."""
if kwargs:
funct = partial(funct, **kwargs)
return self.loop.run_in_executor(None, funct, *args) return self.loop.run_in_executor(None, funct, *args)
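run_in_executor gains **kwargs by binding them with functools.partial first, since loop.run_in_executor only forwards positional arguments. A minimal sketch of that pattern (function and argument names are illustrative):

    import asyncio
    from functools import partial

    def compress(path: str, *, level: int = 6) -> str:
        return f"compressed {path} at level {level}"

    async def main() -> None:
        loop = asyncio.get_running_loop()
        funct = partial(compress, level=1)  # kwargs bound before dispatch
        print(await loop.run_in_executor(None, funct, "backup.tar"))

    asyncio.run(main())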
def create_task(self, coroutine: Coroutine) -> asyncio.Task: def create_task(self, coroutine: Coroutine) -> asyncio.Task:
"""Create an async task.""" """Create an async task."""
return self.loop.create_task(coroutine) context = copy_context()
for callback in self._set_task_context:
context = callback(context)
return self.loop.create_task(coroutine, context=context)
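The new task factory copies the current contextvars Context and lets registered callbacks mutate the copy before it is attached to the task, exactly as the docstring above describes. A runnable sketch of the round trip (stub names, not Supervisor code; the context= parameter of create_task needs Python 3.11+):

    import asyncio
    from contextvars import Context, ContextVar, copy_context

    request_id: ContextVar[str] = ContextVar("request_id", default="-")

    def set_request_id(context: Context) -> Context:
        # Callbacks mutate the copied context via Context.run and return it.
        context.run(request_id.set, "req-42")
        return context

    _callbacks = [set_request_id]

    async def worker() -> None:
        print(request_id.get())  # -> req-42, not the default "-"

    async def main() -> None:
        loop = asyncio.get_running_loop()
        context = copy_context()
        for callback in _callbacks:
            context = callback(context)
        await loop.create_task(worker(), context=context)

    asyncio.run(main())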
class CoreSysAttributes: class CoreSysAttributes:
@@ -700,10 +723,10 @@ class CoreSysAttributes:
return self.coresys.now() return self.coresys.now()
def sys_run_in_executor( def sys_run_in_executor(
self, funct: Callable[..., T], *args: Any self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
) -> Coroutine[Any, Any, T]: ) -> Coroutine[Any, Any, T]:
"""Add an job to the executor pool.""" """Add a job to the executor pool."""
return self.coresys.run_in_executor(funct, *args) return self.coresys.run_in_executor(funct, *args, **kwargs)
def sys_create_task(self, coroutine: Coroutine) -> asyncio.Task: def sys_create_task(self, coroutine: Coroutine) -> asyncio.Task:
"""Create an async task.""" """Create an async task."""

View File

@@ -5,7 +5,9 @@
"raspberrypi3-64": ["aarch64", "armv7", "armhf"], "raspberrypi3-64": ["aarch64", "armv7", "armhf"],
"raspberrypi4": ["armv7", "armhf"], "raspberrypi4": ["armv7", "armhf"],
"raspberrypi4-64": ["aarch64", "armv7", "armhf"], "raspberrypi4-64": ["aarch64", "armv7", "armhf"],
"raspberrypi5-64": ["aarch64", "armv7", "armhf"],
"yellow": ["aarch64", "armv7", "armhf"], "yellow": ["aarch64", "armv7", "armhf"],
"green": ["aarch64", "armv7", "armhf"],
"tinker": ["armv7", "armhf"], "tinker": ["armv7", "armhf"],
"odroid-c2": ["aarch64", "armv7", "armhf"], "odroid-c2": ["aarch64", "armv7", "armhf"],
"odroid-c4": ["aarch64", "armv7", "armhf"], "odroid-c4": ["aarch64", "armv7", "armhf"],

View File

@@ -6,7 +6,7 @@ from typing import Any
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ...exceptions import DBusError, DBusInterfaceError from ...exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..const import ( from ..const import (
DBUS_ATTR_DIAGNOSTICS, DBUS_ATTR_DIAGNOSTICS,
DBUS_ATTR_VERSION, DBUS_ATTR_VERSION,
@@ -99,7 +99,7 @@ class OSAgent(DBusInterfaceProxy):
await asyncio.gather(*[dbus.connect(bus) for dbus in self.all]) await asyncio.gather(*[dbus.connect(bus) for dbus in self.all])
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to OS-Agent") _LOGGER.warning("Can't connect to OS-Agent")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No OS-Agent support on the host. Some Host functions have been disabled." "No OS-Agent support on the host. Some Host functions have been disabled."
) )

View File

@@ -11,7 +11,8 @@ from ...const import (
DBUS_OBJECT_HAOS_BOARDS, DBUS_OBJECT_HAOS_BOARDS,
) )
from ...interface import DBusInterfaceProxy, dbus_property from ...interface import DBusInterfaceProxy, dbus_property
from .const import BOARD_NAME_SUPERVISED, BOARD_NAME_YELLOW from .const import BOARD_NAME_GREEN, BOARD_NAME_SUPERVISED, BOARD_NAME_YELLOW
from .green import Green
from .interface import BoardProxy from .interface import BoardProxy
from .supervised import Supervised from .supervised import Supervised
from .yellow import Yellow from .yellow import Yellow
@@ -39,6 +40,14 @@ class BoardManager(DBusInterfaceProxy):
"""Get board name.""" """Get board name."""
return self.properties[DBUS_ATTR_BOARD] return self.properties[DBUS_ATTR_BOARD]
@property
def green(self) -> Green:
"""Get Green board."""
if self.board != BOARD_NAME_GREEN:
raise BoardInvalidError("Green board is not in use", _LOGGER.error)
return self._board_proxy
@property @property
def supervised(self) -> Supervised: def supervised(self) -> Supervised:
"""Get Supervised board.""" """Get Supervised board."""
@@ -61,6 +70,8 @@ class BoardManager(DBusInterfaceProxy):
if self.board == BOARD_NAME_YELLOW: if self.board == BOARD_NAME_YELLOW:
self._board_proxy = Yellow() self._board_proxy = Yellow()
elif self.board == BOARD_NAME_GREEN:
self._board_proxy = Green()
elif self.board == BOARD_NAME_SUPERVISED: elif self.board == BOARD_NAME_SUPERVISED:
self._board_proxy = Supervised() self._board_proxy = Supervised()

View File

@@ -1,4 +1,5 @@
"""Constants for boards.""" """Constants for boards."""
BOARD_NAME_GREEN = "Green"
BOARD_NAME_SUPERVISED = "Supervised" BOARD_NAME_SUPERVISED = "Supervised"
BOARD_NAME_YELLOW = "Yellow" BOARD_NAME_YELLOW = "Yellow"

View File

@@ -0,0 +1,65 @@
"""Green board management."""
import asyncio
from dbus_fast.aio.message_bus import MessageBus
from ....const import ATTR_ACTIVITY_LED, ATTR_POWER_LED, ATTR_USER_LED
from ...const import DBUS_ATTR_ACTIVITY_LED, DBUS_ATTR_POWER_LED, DBUS_ATTR_USER_LED
from ...interface import dbus_property
from .const import BOARD_NAME_GREEN
from .interface import BoardProxy
from .validate import SCHEMA_GREEN_BOARD
class Green(BoardProxy):
"""Green board manager object."""
def __init__(self) -> None:
"""Initialize properties."""
super().__init__(BOARD_NAME_GREEN, SCHEMA_GREEN_BOARD)
@property
@dbus_property
def activity_led(self) -> bool:
"""Get activity LED enabled."""
return self.properties[DBUS_ATTR_ACTIVITY_LED]
@activity_led.setter
def activity_led(self, enabled: bool) -> None:
"""Enable/disable activity LED."""
self._data[ATTR_ACTIVITY_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_activity_led(enabled))
@property
@dbus_property
def power_led(self) -> bool:
"""Get power LED enabled."""
return self.properties[DBUS_ATTR_POWER_LED]
@power_led.setter
def power_led(self, enabled: bool) -> None:
"""Enable/disable power LED."""
self._data[ATTR_POWER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_power_led(enabled))
@property
@dbus_property
def user_led(self) -> bool:
"""Get user LED enabled."""
return self.properties[DBUS_ATTR_USER_LED]
@user_led.setter
def user_led(self, enabled: bool) -> None:
"""Enable/disable disk LED."""
self._data[ATTR_USER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_user_led(enabled))
async def connect(self, bus: MessageBus) -> None:
"""Connect to D-Bus."""
await super().connect(bus)
# Set LEDs based on settings on connect
self.activity_led = self._data[ATTR_ACTIVITY_LED]
self.power_led = self._data[ATTR_POWER_LED]
self.user_led = self._data[ATTR_USER_LED]

View File

@@ -1,17 +1,23 @@
"""Board dbus proxy interface.""" """Board dbus proxy interface."""
from voluptuous import Schema
from ....const import FILE_HASSIO_BOARD
from ....utils.common import FileConfiguration
from ...const import DBUS_IFACE_HAOS_BOARDS, DBUS_NAME_HAOS, DBUS_OBJECT_HAOS_BOARDS from ...const import DBUS_IFACE_HAOS_BOARDS, DBUS_NAME_HAOS, DBUS_OBJECT_HAOS_BOARDS
from ...interface import DBusInterfaceProxy from ...interface import DBusInterfaceProxy
from .validate import SCHEMA_BASE_BOARD
class BoardProxy(DBusInterfaceProxy): class BoardProxy(FileConfiguration, DBusInterfaceProxy):
"""DBus interface proxy for os board.""" """DBus interface proxy for os board."""
bus_name: str = DBUS_NAME_HAOS bus_name: str = DBUS_NAME_HAOS
def __init__(self, name: str) -> None: def __init__(self, name: str, file_schema: Schema | None = None) -> None:
"""Initialize properties.""" """Initialize properties."""
super().__init__() super().__init__(FILE_HASSIO_BOARD, file_schema or SCHEMA_BASE_BOARD)
super(FileConfiguration, self).__init__()
self._name: str = name self._name: str = name
self.object_path: str = f"{DBUS_OBJECT_HAOS_BOARDS}/{name}" self.object_path: str = f"{DBUS_OBJECT_HAOS_BOARDS}/{name}"
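BoardProxy now mixes FileConfiguration into a D-Bus proxy, and its two super() calls are the cooperative-init trick for that: the bare super() starts the MRO walk at FileConfiguration, while super(FileConfiguration, self) resumes the walk after it, so DBusInterfaceProxy is initialized exactly once as well. A stub demonstration:

    class FileConfiguration:
        def __init__(self, file: str, schema: object) -> None:
            print(f"FileConfiguration.__init__({file!r})")

    class DBusInterfaceProxy:
        def __init__(self) -> None:
            print("DBusInterfaceProxy.__init__()")

    class BoardProxy(FileConfiguration, DBusInterfaceProxy):
        def __init__(self, name: str) -> None:
            super().__init__("board.json", None)       # hits FileConfiguration
            super(FileConfiguration, self).__init__()  # continues to DBusInterfaceProxy
            self._name = name

    BoardProxy("Yellow")  # both base initializers run exactly once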

View File

@@ -0,0 +1,32 @@
"""Validation for board config."""
import voluptuous as vol
from ....const import (
ATTR_ACTIVITY_LED,
ATTR_DISK_LED,
ATTR_HEARTBEAT_LED,
ATTR_POWER_LED,
ATTR_USER_LED,
)
# pylint: disable=no-value-for-parameter
SCHEMA_BASE_BOARD = vol.Schema({}, extra=vol.REMOVE_EXTRA)
SCHEMA_GREEN_BOARD = vol.Schema(
{
vol.Optional(ATTR_ACTIVITY_LED, default=True): vol.Boolean(),
vol.Optional(ATTR_POWER_LED, default=True): vol.Boolean(),
vol.Optional(ATTR_USER_LED, default=True): vol.Boolean(),
},
extra=vol.REMOVE_EXTRA,
)
SCHEMA_YELLOW_BOARD = vol.Schema(
{
vol.Optional(ATTR_DISK_LED, default=True): vol.Boolean(),
vol.Optional(ATTR_HEARTBEAT_LED, default=True): vol.Boolean(),
vol.Optional(ATTR_POWER_LED, default=True): vol.Boolean(),
},
extra=vol.REMOVE_EXTRA,
)
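These schemas are what makes the per-board LED settings self-healing: vol.Optional defaults fill missing keys and extra=vol.REMOVE_EXTRA silently drops stale ones. A small illustration with literal keys standing in for the ATTR_* constants:

    import voluptuous as vol

    SCHEMA_GREEN_BOARD = vol.Schema(
        {
            vol.Optional("activity_led", default=True): vol.Boolean(),
            vol.Optional("power_led", default=True): vol.Boolean(),
            vol.Optional("user_led", default=True): vol.Boolean(),
        },
        extra=vol.REMOVE_EXTRA,
    )

    # Missing keys get defaults, unknown keys are removed (key order may vary):
    print(SCHEMA_GREEN_BOARD({"power_led": False, "stale_key": 1}))
    # {'power_led': False, 'activity_led': True, 'user_led': True}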

View File

@@ -2,10 +2,14 @@
import asyncio import asyncio
from dbus_fast.aio.message_bus import MessageBus
from ....const import ATTR_DISK_LED, ATTR_HEARTBEAT_LED, ATTR_POWER_LED
from ...const import DBUS_ATTR_DISK_LED, DBUS_ATTR_HEARTBEAT_LED, DBUS_ATTR_POWER_LED from ...const import DBUS_ATTR_DISK_LED, DBUS_ATTR_HEARTBEAT_LED, DBUS_ATTR_POWER_LED
from ...interface import dbus_property from ...interface import dbus_property
from .const import BOARD_NAME_YELLOW from .const import BOARD_NAME_YELLOW
from .interface import BoardProxy from .interface import BoardProxy
from .validate import SCHEMA_YELLOW_BOARD
class Yellow(BoardProxy): class Yellow(BoardProxy):
@@ -13,7 +17,7 @@ class Yellow(BoardProxy):
def __init__(self) -> None: def __init__(self) -> None:
"""Initialize properties.""" """Initialize properties."""
super().__init__(BOARD_NAME_YELLOW) super().__init__(BOARD_NAME_YELLOW, SCHEMA_YELLOW_BOARD)
@property @property
@dbus_property @dbus_property
@@ -24,6 +28,7 @@ class Yellow(BoardProxy):
@heartbeat_led.setter @heartbeat_led.setter
def heartbeat_led(self, enabled: bool) -> None: def heartbeat_led(self, enabled: bool) -> None:
"""Enable/disable heartbeat LED.""" """Enable/disable heartbeat LED."""
self._data[ATTR_HEARTBEAT_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_heartbeat_led(enabled)) asyncio.create_task(self.dbus.Boards.Yellow.set_heartbeat_led(enabled))
@property @property
@@ -35,6 +40,7 @@ class Yellow(BoardProxy):
@power_led.setter @power_led.setter
def power_led(self, enabled: bool) -> None: def power_led(self, enabled: bool) -> None:
"""Enable/disable power LED.""" """Enable/disable power LED."""
self._data[ATTR_POWER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_power_led(enabled)) asyncio.create_task(self.dbus.Boards.Yellow.set_power_led(enabled))
@property @property
@@ -46,4 +52,14 @@ class Yellow(BoardProxy):
@disk_led.setter @disk_led.setter
def disk_led(self, enabled: bool) -> None: def disk_led(self, enabled: bool) -> None:
"""Enable/disable disk LED.""" """Enable/disable disk LED."""
self._data[ATTR_DISK_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_disk_led(enabled)) asyncio.create_task(self.dbus.Boards.Yellow.set_disk_led(enabled))
async def connect(self, bus: MessageBus) -> None:
"""Connect to D-Bus."""
await super().connect(bus)
# Set LEDs based on settings on connect
self.disk_led = self._data[ATTR_DISK_LED]
self.heartbeat_led = self._data[ATTR_HEARTBEAT_LED]
self.power_led = self._data[ATTR_POWER_LED]

View File

@@ -1,5 +1,5 @@
"""Constants for DBUS.""" """Constants for DBUS."""
from enum import Enum, IntEnum from enum import IntEnum, StrEnum
from socket import AF_INET, AF_INET6 from socket import AF_INET, AF_INET6
DBUS_NAME_HAOS = "io.hass.os" DBUS_NAME_HAOS = "io.hass.os"
@@ -64,6 +64,7 @@ DBUS_OBJECT_UDISKS2 = "/org/freedesktop/UDisks2/Manager"
DBUS_ATTR_ACTIVE_ACCESSPOINT = "ActiveAccessPoint" DBUS_ATTR_ACTIVE_ACCESSPOINT = "ActiveAccessPoint"
DBUS_ATTR_ACTIVE_CONNECTION = "ActiveConnection" DBUS_ATTR_ACTIVE_CONNECTION = "ActiveConnection"
DBUS_ATTR_ACTIVE_CONNECTIONS = "ActiveConnections" DBUS_ATTR_ACTIVE_CONNECTIONS = "ActiveConnections"
DBUS_ATTR_ACTIVITY_LED = "ActivityLED"
DBUS_ATTR_ADDRESS_DATA = "AddressData" DBUS_ATTR_ADDRESS_DATA = "AddressData"
DBUS_ATTR_BITRATE = "Bitrate" DBUS_ATTR_BITRATE = "Bitrate"
DBUS_ATTR_BOARD = "Board" DBUS_ATTR_BOARD = "Board"
@@ -144,6 +145,7 @@ DBUS_ATTR_OPERATION = "Operation"
DBUS_ATTR_OPTIONS = "Options" DBUS_ATTR_OPTIONS = "Options"
DBUS_ATTR_PARSER_VERSION = "ParserVersion" DBUS_ATTR_PARSER_VERSION = "ParserVersion"
DBUS_ATTR_PARTITIONS = "Partitions" DBUS_ATTR_PARTITIONS = "Partitions"
DBUS_ATTR_PATH = "Path"
DBUS_ATTR_POWER_LED = "PowerLED" DBUS_ATTR_POWER_LED = "PowerLED"
DBUS_ATTR_PRIMARY_CONNECTION = "PrimaryConnection" DBUS_ATTR_PRIMARY_CONNECTION = "PrimaryConnection"
DBUS_ATTR_READ_ONLY = "ReadOnly" DBUS_ATTR_READ_ONLY = "ReadOnly"
@@ -168,6 +170,7 @@ DBUS_ATTR_TIMEUSEC = "TimeUSec"
DBUS_ATTR_TIMEZONE = "Timezone" DBUS_ATTR_TIMEZONE = "Timezone"
DBUS_ATTR_TRANSACTION_STATISTICS = "TransactionStatistics" DBUS_ATTR_TRANSACTION_STATISTICS = "TransactionStatistics"
DBUS_ATTR_TYPE = "Type" DBUS_ATTR_TYPE = "Type"
DBUS_ATTR_USER_LED = "UserLED"
DBUS_ATTR_USERSPACE_TIMESTAMP_MONOTONIC = "UserspaceTimestampMonotonic" DBUS_ATTR_USERSPACE_TIMESTAMP_MONOTONIC = "UserspaceTimestampMonotonic"
DBUS_ATTR_UUID_UPPERCASE = "UUID" DBUS_ATTR_UUID_UPPERCASE = "UUID"
DBUS_ATTR_UUID = "Uuid" DBUS_ATTR_UUID = "Uuid"
@@ -180,7 +183,7 @@ DBUS_ATTR_WWN = "WWN"
DBUS_ERR_SYSTEMD_NO_SUCH_UNIT = "org.freedesktop.systemd1.NoSuchUnit" DBUS_ERR_SYSTEMD_NO_SUCH_UNIT = "org.freedesktop.systemd1.NoSuchUnit"
class RaucState(str, Enum): class RaucState(StrEnum):
"""Rauc slot states.""" """Rauc slot states."""
GOOD = "good" GOOD = "good"
@@ -188,7 +191,7 @@ class RaucState(str, Enum):
ACTIVE = "active" ACTIVE = "active"
class InterfaceMethod(str, Enum): class InterfaceMethod(StrEnum):
"""Interface method simple.""" """Interface method simple."""
AUTO = "auto" AUTO = "auto"
@@ -197,14 +200,14 @@ class InterfaceMethod(str, Enum):
LINK_LOCAL = "link-local" LINK_LOCAL = "link-local"
class ConnectionType(str, Enum): class ConnectionType(StrEnum):
"""Connection type.""" """Connection type."""
ETHERNET = "802-3-ethernet" ETHERNET = "802-3-ethernet"
WIRELESS = "802-11-wireless" WIRELESS = "802-11-wireless"
class ConnectionStateType(int, Enum): class ConnectionStateType(IntEnum):
"""Connection states. """Connection states.
https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActiveConnectionState https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActiveConnectionState
@@ -217,7 +220,7 @@ class ConnectionStateType(int, Enum):
DEACTIVATED = 4 DEACTIVATED = 4
class ConnectionStateFlags(int, Enum): class ConnectionStateFlags(IntEnum):
"""Connection state flags. """Connection state flags.
https://developer-old.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActivationStateFlags https://developer-old.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMActivationStateFlags
@@ -234,7 +237,7 @@ class ConnectionStateFlags(int, Enum):
EXTERNAL = 0x80 EXTERNAL = 0x80
class ConnectivityState(int, Enum): class ConnectivityState(IntEnum):
"""Network connectvity. """Network connectvity.
https://developer.gnome.org/NetworkManager/unstable/nm-dbus-types.html#NMConnectivityState https://developer.gnome.org/NetworkManager/unstable/nm-dbus-types.html#NMConnectivityState
@@ -247,7 +250,7 @@ class ConnectivityState(int, Enum):
CONNECTIVITY_FULL = 4 CONNECTIVITY_FULL = 4
class DeviceType(int, Enum): class DeviceType(IntEnum):
"""Device types. """Device types.
https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMDeviceType https://developer.gnome.org/NetworkManager/stable/nm-dbus-types.html#NMDeviceType
@@ -262,7 +265,7 @@ class DeviceType(int, Enum):
VETH = 20 VETH = 20
class WirelessMethodType(int, Enum): class WirelessMethodType(IntEnum):
"""Device Type.""" """Device Type."""
UNKNOWN = 0 UNKNOWN = 0
@@ -279,7 +282,7 @@ class DNSAddressFamily(IntEnum):
INET6 = AF_INET6 INET6 = AF_INET6
class MulticastProtocolEnabled(str, Enum): class MulticastProtocolEnabled(StrEnum):
"""Multicast protocol enabled or resolve.""" """Multicast protocol enabled or resolve."""
YES = "yes" YES = "yes"
@@ -287,7 +290,7 @@ class MulticastProtocolEnabled(str, Enum):
RESOLVE = "resolve" RESOLVE = "resolve"
class DNSOverTLSEnabled(str, Enum): class DNSOverTLSEnabled(StrEnum):
"""DNS over TLS enabled.""" """DNS over TLS enabled."""
YES = "yes" YES = "yes"
@@ -295,7 +298,7 @@ class DNSOverTLSEnabled(str, Enum):
OPPORTUNISTIC = "opportunistic" OPPORTUNISTIC = "opportunistic"
class DNSSECValidation(str, Enum): class DNSSECValidation(StrEnum):
"""DNSSEC validation enforced.""" """DNSSEC validation enforced."""
YES = "yes" YES = "yes"
@@ -303,7 +306,7 @@ class DNSSECValidation(str, Enum):
ALLOW_DOWNGRADE = "allow-downgrade" ALLOW_DOWNGRADE = "allow-downgrade"
class DNSStubListenerEnabled(str, Enum): class DNSStubListenerEnabled(StrEnum):
"""DNS stub listener enabled.""" """DNS stub listener enabled."""
YES = "yes" YES = "yes"
@@ -312,7 +315,7 @@ class DNSStubListenerEnabled(str, Enum):
UDP_ONLY = "udp" UDP_ONLY = "udp"
class ResolvConfMode(str, Enum): class ResolvConfMode(StrEnum):
"""Resolv.conf management mode.""" """Resolv.conf management mode."""
FOREIGN = "foreign" FOREIGN = "foreign"
@@ -322,7 +325,7 @@ class ResolvConfMode(str, Enum):
UPLINK = "uplink" UPLINK = "uplink"
class StopUnitMode(str, Enum): class StopUnitMode(StrEnum):
"""Mode for stopping the unit.""" """Mode for stopping the unit."""
REPLACE = "replace" REPLACE = "replace"
@@ -331,7 +334,7 @@ class StopUnitMode(str, Enum):
IGNORE_REQUIREMENTS = "ignore-requirements" IGNORE_REQUIREMENTS = "ignore-requirements"
class StartUnitMode(str, Enum): class StartUnitMode(StrEnum):
"""Mode for starting the unit.""" """Mode for starting the unit."""
REPLACE = "replace" REPLACE = "replace"
@@ -341,7 +344,7 @@ class StartUnitMode(str, Enum):
ISOLATE = "isolate" ISOLATE = "isolate"
class UnitActiveState(str, Enum): class UnitActiveState(StrEnum):
"""Active state of a systemd unit.""" """Active state of a systemd unit."""
ACTIVE = "active" ACTIVE = "active"

View File

@@ -3,7 +3,7 @@ import logging
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from .const import ( from .const import (
DBUS_ATTR_CHASSIS, DBUS_ATTR_CHASSIS,
DBUS_ATTR_DEPLOYMENT, DBUS_ATTR_DEPLOYMENT,
@@ -39,7 +39,7 @@ class Hostname(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to systemd-hostname") _LOGGER.warning("Can't connect to systemd-hostname")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No hostname support on the host. Hostname functions have been disabled." "No hostname support on the host. Hostname functions have been disabled."
) )

View File

@@ -3,7 +3,7 @@ import logging
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from .const import DBUS_NAME_LOGIND, DBUS_OBJECT_LOGIND from .const import DBUS_NAME_LOGIND, DBUS_OBJECT_LOGIND
from .interface import DBusInterface from .interface import DBusInterface
from .utils import dbus_connected from .utils import dbus_connected
@@ -28,8 +28,8 @@ class Logind(DBusInterface):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to systemd-logind") _LOGGER.warning("Can't connect to systemd-logind")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.info("No systemd-logind support on the host.") _LOGGER.warning("No systemd-logind support on the host.")
@dbus_connected @dbus_connected
async def reboot(self) -> None: async def reboot(self) -> None:

View File

@@ -9,7 +9,10 @@ from ...exceptions import (
DBusError, DBusError,
DBusFatalError, DBusFatalError,
DBusInterfaceError, DBusInterfaceError,
DBusNoReplyError,
DBusServiceUnkownError,
HostNotSupportedError, HostNotSupportedError,
NetworkInterfaceNotFound,
) )
from ...utils.sentry import capture_exception from ...utils.sentry import capture_exception
from ..const import ( from ..const import (
@@ -67,9 +70,9 @@ class NetworkManager(DBusInterfaceProxy):
return self._settings return self._settings
@property @property
def interfaces(self) -> dict[str, NetworkInterface]: def interfaces(self) -> set[NetworkInterface]:
"""Return a dictionary of active interfaces.""" """Return a set of active interfaces."""
return self._interfaces return set(self._interfaces.values())
@property @property
@dbus_property @dbus_property
@@ -83,6 +86,20 @@ class NetworkManager(DBusInterfaceProxy):
"""Return Network Manager version.""" """Return Network Manager version."""
return AwesomeVersion(self.properties[DBUS_ATTR_VERSION]) return AwesomeVersion(self.properties[DBUS_ATTR_VERSION])
def get(self, name_or_mac: str) -> NetworkInterface:
"""Get an interface by name or mac address."""
if name_or_mac not in self._interfaces:
raise NetworkInterfaceNotFound(
f"No interface exists with name or mac address '{name_or_mac}'"
)
return self._interfaces[name_or_mac]
def __contains__(self, item: NetworkInterface | str) -> bool:
"""Return true if specified network interface exists."""
if isinstance(item, str):
return item in self._interfaces
return item in self.interfaces
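Together with the later hunk that also indexes interfaces by hw_address, get() and __contains__ give callers one lookup that accepts either an interface name or a MAC address. A self-contained sketch of that contract (stub manager, made-up addresses):

    class NetworkInterfaceNotFound(Exception):
        """Raised when no interface matches."""

    class Manager:
        def __init__(self) -> None:
            # The real manager fills this keyed by both name and MAC address.
            self._interfaces: dict[str, object] = {}

        def get(self, name_or_mac: str) -> object:
            if name_or_mac not in self._interfaces:
                raise NetworkInterfaceNotFound(
                    f"No interface exists with name or mac address '{name_or_mac}'"
                )
            return self._interfaces[name_or_mac]

        def __contains__(self, item: object) -> bool:
            if isinstance(item, str):
                return item in self._interfaces
            return item in set(self._interfaces.values())

    mgr = Manager()
    mgr._interfaces["end0"] = mgr._interfaces["aa:bb:cc:dd:ee:ff"] = object()
    assert "end0" in mgr
    assert mgr.get("end0") is mgr.get("aa:bb:cc:dd:ee:ff")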
@dbus_connected @dbus_connected
async def activate_connection( async def activate_connection(
self, connection_object: str, device_object: str self, connection_object: str, device_object: str
@@ -128,7 +145,7 @@ class NetworkManager(DBusInterfaceProxy):
await self.settings.connect(bus) await self.settings.connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to Network Manager") _LOGGER.warning("Can't connect to Network Manager")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No Network Manager support on the host. Local network functions have been disabled." "No Network Manager support on the host. Local network functions have been disabled."
) )
@@ -167,9 +184,9 @@ class NetworkManager(DBusInterfaceProxy):
if changed and ( if changed and (
DBUS_ATTR_DEVICES not in changed DBUS_ATTR_DEVICES not in changed
or { or {intr.object_path for intr in self.interfaces if intr.managed}.issubset(
intr.object_path for intr in self.interfaces.values() if intr.managed set(changed[DBUS_ATTR_DEVICES])
}.issubset(set(changed[DBUS_ATTR_DEVICES])) )
): ):
# If none of our managed devices were removed then most likely this is just veths changing. # If none of our managed devices were removed then most likely this is just veths changing.
# We don't care about veths and reprocessing all their changes can swamp a system when # We don't care about veths and reprocessing all their changes can swamp a system when
@@ -177,8 +194,8 @@ class NetworkManager(DBusInterfaceProxy):
# on rare occasions but we'll catch it on the next host update scheduled task. # on rare occasions but we'll catch it on the next host update scheduled task.
return return
interfaces = {} interfaces: dict[str, NetworkInterface] = {}
curr_devices = {intr.object_path: intr for intr in self.interfaces.values()} curr_devices = {intr.object_path: intr for intr in self.interfaces}
for device in self.properties[DBUS_ATTR_DEVICES]: for device in self.properties[DBUS_ATTR_DEVICES]:
if device in curr_devices and curr_devices[device].is_connected: if device in curr_devices and curr_devices[device].is_connected:
interface = curr_devices[device] interface = curr_devices[device]
@@ -195,8 +212,22 @@ class NetworkManager(DBusInterfaceProxy):
# try to query it. Ignore those cases. # try to query it. Ignore those cases.
_LOGGER.debug("Can't process %s: %s", device, err) _LOGGER.debug("Can't process %s: %s", device, err)
continue continue
except (
DBusNoReplyError,
DBusServiceUnkownError,
) as err:
# This typically means that NetworkManager disappeared. Give up immediately.
_LOGGER.error(
"NetworkManager not responding while processing %s: %s. Giving up.",
device,
err,
)
capture_exception(err)
return
except Exception as err: # pylint: disable=broad-except except Exception as err: # pylint: disable=broad-except
_LOGGER.exception("Error while processing %s: %s", device, err) _LOGGER.exception(
"Unkown error while processing %s: %s", device, err
)
capture_exception(err) capture_exception(err)
continue continue
@@ -222,6 +253,7 @@ class NetworkManager(DBusInterfaceProxy):
interface.primary = False interface.primary = False
interfaces[interface.name] = interface interfaces[interface.name] = interface
interfaces[interface.hw_address] = interface
# Disconnect removed devices # Disconnect removed devices
for device in set(curr_devices.keys()) - set( for device in set(curr_devices.keys()) - set(
@@ -242,7 +274,7 @@ class NetworkManager(DBusInterfaceProxy):
def disconnect(self) -> None: def disconnect(self) -> None:
"""Disconnect from D-Bus.""" """Disconnect from D-Bus."""
for intr in self.interfaces.values(): for intr in self.interfaces:
intr.shutdown() intr.shutdown()
super().disconnect() super().disconnect()

View File

@@ -1,66 +1,72 @@
"""NetworkConnection object4s for Network Manager.""" """NetworkConnection objects for Network Manager."""
from dataclasses import dataclass
from ipaddress import IPv4Address, IPv6Address from ipaddress import IPv4Address, IPv6Address
import attr
@dataclass(slots=True)
@attr.s(slots=True)
class DNSConfiguration: class DNSConfiguration:
"""DNS configuration Object.""" """DNS configuration Object."""
nameservers: list[IPv4Address | IPv6Address] = attr.ib() nameservers: list[IPv4Address | IPv6Address]
domains: list[str] = attr.ib() domains: list[str]
interface: str = attr.ib() interface: str
priority: int = attr.ib() priority: int
vpn: bool = attr.ib() vpn: bool
@attr.s(slots=True) @dataclass(slots=True)
class ConnectionProperties: class ConnectionProperties:
"""Connection Properties object for Network Manager.""" """Connection Properties object for Network Manager."""
id: str | None = attr.ib() id: str | None
uuid: str | None = attr.ib() uuid: str | None
type: str | None = attr.ib() type: str | None
interface_name: str | None = attr.ib() interface_name: str | None
@attr.s(slots=True) @dataclass(slots=True)
class WirelessProperties: class WirelessProperties:
"""Wireless Properties object for Network Manager.""" """Wireless Properties object for Network Manager."""
ssid: str | None = attr.ib() ssid: str | None
assigned_mac: str | None = attr.ib() assigned_mac: str | None
mode: str | None = attr.ib() mode: str | None
powersave: int | None = attr.ib() powersave: int | None
@attr.s(slots=True) @dataclass(slots=True)
class WirelessSecurityProperties: class WirelessSecurityProperties:
"""Wireless Security Properties object for Network Manager.""" """Wireless Security Properties object for Network Manager."""
auth_alg: str | None = attr.ib() auth_alg: str | None
key_mgmt: str | None = attr.ib() key_mgmt: str | None
psk: str | None = attr.ib() psk: str | None
@attr.s(slots=True) @dataclass(slots=True)
class EthernetProperties: class EthernetProperties:
"""Ethernet properties object for Network Manager.""" """Ethernet properties object for Network Manager."""
assigned_mac: str | None = attr.ib() assigned_mac: str | None
@attr.s(slots=True) @dataclass(slots=True)
class VlanProperties: class VlanProperties:
"""Ethernet properties object for Network Manager.""" """Ethernet properties object for Network Manager."""
id: int | None = attr.ib() id: int | None
parent: str | None = attr.ib() parent: str | None
@attr.s(slots=True) @dataclass(slots=True)
class IpProperties: class IpProperties:
"""IP properties object for Network Manager.""" """IP properties object for Network Manager."""
method: str | None = attr.ib() method: str | None
@dataclass(slots=True)
class MatchProperties:
"""Match properties object for Network Manager."""
path: list[str] | None = None
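The attrs-to-dataclass move above keeps the same generated __init__/__repr__/__eq__ while dropping the third-party dependency; slots=True (Python 3.10+) preserves attrs' slotted behavior. A sketch with a made-up udev path value:

    from dataclasses import dataclass

    @dataclass(slots=True)
    class MatchProperties:
        """Match properties object for Network Manager."""
        path: list[str] | None = None

    props = MatchProperties(path=["platform-ff3f0000.ethernet"])  # fake path
    assert props == MatchProperties(["platform-ff3f0000.ethernet"])  # __eq__ free
    try:
        props.paht = []  # slots: typo'd attributes fail fast, not silently
    except AttributeError as err:
        print(err)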

View File

@@ -121,7 +121,7 @@ class NetworkConnection(DBusInterfaceProxy):
self._state_flags = { self._state_flags = {
flag flag
for flag in ConnectionStateFlags for flag in ConnectionStateFlags
if flag.value & self.properties[DBUS_ATTR_STATE_FLAGS] if flag & self.properties[DBUS_ATTR_STATE_FLAGS]
} or {ConnectionStateFlags.NONE} } or {ConnectionStateFlags.NONE}
# IPv4 # IPv4
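Dropping .value in the bitwise test works because ConnectionStateFlags is now an IntEnum, whose members participate in int arithmetic directly. A sketch of the flag decoding with illustrative values (the real NetworkManager constants differ):

    from enum import IntEnum

    class ConnectionStateFlags(IntEnum):
        NONE = 0
        LAYER2_READY = 0x1
        IP4_READY = 0x2
        IP6_READY = 0x4

    raw = 0x1 | 0x2  # property value as it arrives from D-Bus
    state_flags = {
        flag for flag in ConnectionStateFlags if flag & raw
    } or {ConnectionStateFlags.NONE}
    print(state_flags)  # LAYER2_READY and IP4_READY; NONE only when raw == 0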

View File

@@ -12,7 +12,7 @@ from ...const import (
ATTR_PRIORITY, ATTR_PRIORITY,
ATTR_VPN, ATTR_VPN,
) )
from ...exceptions import DBusError, DBusInterfaceError from ...exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..const import ( from ..const import (
DBUS_ATTR_CONFIGURATION, DBUS_ATTR_CONFIGURATION,
DBUS_ATTR_MODE, DBUS_ATTR_MODE,
@@ -67,7 +67,7 @@ class NetworkManagerDNS(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to DnsManager") _LOGGER.warning("Can't connect to DnsManager")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No DnsManager support on the host. Local DNS functions have been disabled." "No DnsManager support on the host. Local DNS functions have been disabled."
) )

View File

@@ -9,7 +9,9 @@ from ..const import (
DBUS_ATTR_DEVICE_INTERFACE, DBUS_ATTR_DEVICE_INTERFACE,
DBUS_ATTR_DEVICE_TYPE, DBUS_ATTR_DEVICE_TYPE,
DBUS_ATTR_DRIVER, DBUS_ATTR_DRIVER,
DBUS_ATTR_HWADDRESS,
DBUS_ATTR_MANAGED, DBUS_ATTR_MANAGED,
DBUS_ATTR_PATH,
DBUS_IFACE_DEVICE, DBUS_IFACE_DEVICE,
DBUS_NAME_NM, DBUS_NAME_NM,
DBUS_OBJECT_BASE, DBUS_OBJECT_BASE,
@@ -67,6 +69,18 @@ class NetworkInterface(DBusInterfaceProxy):
"""Return interface driver.""" """Return interface driver."""
return self.properties[DBUS_ATTR_MANAGED] return self.properties[DBUS_ATTR_MANAGED]
@property
@dbus_property
def hw_address(self) -> str:
"""Return hardware address (i.e. mac address) of device."""
return self.properties[DBUS_ATTR_HWADDRESS]
@property
@dbus_property
def path(self) -> str:
"""Return The path of the device as exposed by the udev property ID_PATH."""
return self.properties[DBUS_ATTR_PATH]
@property @property
def connection(self) -> NetworkConnection | None: def connection(self) -> NetworkConnection | None:
"""Return the connection used for this interface.""" """Return the connection used for this interface."""
@@ -98,6 +112,18 @@ class NetworkInterface(DBusInterfaceProxy):
self._wireless = wireless self._wireless = wireless
def __eq__(self, other: object) -> bool:
"""Is object equal to another."""
return (
isinstance(other, type(self))
and other.bus_name == self.bus_name
and other.object_path == self.object_path
)
def __hash__(self) -> int:
"""Hash of object."""
return hash((self.bus_name, self.object_path))
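Defining __eq__ and __hash__ on (bus_name, object_path) is what makes the manager's new set-based interfaces property workable: two proxy objects naming the same D-Bus object compare equal and deduplicate, even when a reload rebuilds them. Stub demonstration:

    class Proxy:
        def __init__(self, bus_name: str, object_path: str) -> None:
            self.bus_name = bus_name
            self.object_path = object_path

        def __eq__(self, other: object) -> bool:
            return (
                isinstance(other, type(self))
                and other.bus_name == self.bus_name
                and other.object_path == self.object_path
            )

        def __hash__(self) -> int:
            return hash((self.bus_name, self.object_path))

    a = Proxy("org.freedesktop.NetworkManager", "/Devices/1")
    b = Proxy("org.freedesktop.NetworkManager", "/Devices/1")
    assert a == b and len({a, b}) == 1  # deduplicates in sets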
async def connect(self, bus: MessageBus) -> None: async def connect(self, bus: MessageBus) -> None:
"""Connect to D-Bus.""" """Connect to D-Bus."""
await super().connect(bus) await super().connect(bus)

View File

@@ -2,6 +2,7 @@
import logging import logging
from typing import Any from typing import Any
from dbus_fast import Variant
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ....const import ATTR_METHOD, ATTR_MODE, ATTR_PSK, ATTR_SSID from ....const import ATTR_METHOD, ATTR_MODE, ATTR_PSK, ATTR_SSID
@@ -12,6 +13,7 @@ from ..configuration import (
ConnectionProperties, ConnectionProperties,
EthernetProperties, EthernetProperties,
IpProperties, IpProperties,
MatchProperties,
VlanProperties, VlanProperties,
WirelessProperties, WirelessProperties,
WirelessSecurityProperties, WirelessSecurityProperties,
@@ -24,6 +26,8 @@ CONF_ATTR_802_WIRELESS_SECURITY = "802-11-wireless-security"
CONF_ATTR_VLAN = "vlan" CONF_ATTR_VLAN = "vlan"
CONF_ATTR_IPV4 = "ipv4" CONF_ATTR_IPV4 = "ipv4"
CONF_ATTR_IPV6 = "ipv6" CONF_ATTR_IPV6 = "ipv6"
CONF_ATTR_MATCH = "match"
CONF_ATTR_PATH = "path"
ATTR_ID = "id" ATTR_ID = "id"
ATTR_UUID = "uuid" ATTR_UUID = "uuid"
@@ -34,6 +38,7 @@ ATTR_POWERSAVE = "powersave"
ATTR_AUTH_ALG = "auth-alg" ATTR_AUTH_ALG = "auth-alg"
ATTR_KEY_MGMT = "key-mgmt" ATTR_KEY_MGMT = "key-mgmt"
ATTR_INTERFACE_NAME = "interface-name" ATTR_INTERFACE_NAME = "interface-name"
ATTR_PATH = "path"
IPV4_6_IGNORE_FIELDS = [ IPV4_6_IGNORE_FIELDS = [
"addresses", "addresses",
@@ -47,8 +52,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
def _merge_settings_attribute( def _merge_settings_attribute(
base_settings: Any, base_settings: dict[str, dict[str, Variant]],
new_settings: Any, new_settings: dict[str, dict[str, Variant]],
attribute: str, attribute: str,
*, *,
ignore_current_value: list[str] = None, ignore_current_value: list[str] = None,
@@ -58,8 +63,7 @@ def _merge_settings_attribute(
if attribute in base_settings: if attribute in base_settings:
if ignore_current_value: if ignore_current_value:
for field in ignore_current_value: for field in ignore_current_value:
if field in base_settings[attribute]: base_settings[attribute].pop(field, None)
del base_settings[attribute][field]
base_settings[attribute].update(new_settings[attribute]) base_settings[attribute].update(new_settings[attribute])
else: else:
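The pop(field, None) change is behavior-preserving; the helper's overall merge semantics, reconstructed here with plain dicts instead of dbus_fast Variants (the outer guard is an assumption, since the diff truncates the function), are: current values survive unless listed in ignore_current_value, and incoming settings win on conflicts.

    def _merge_settings_attribute(base, new, attribute, *, ignore_current_value=None):
        if attribute not in new:  # assumed guard; not shown in the diff
            return
        if attribute in base:
            if ignore_current_value:
                for field in ignore_current_value:
                    base[attribute].pop(field, None)
            base[attribute].update(new[attribute])
        else:
            base[attribute] = new[attribute]

    base = {"connection": {"interface-name": "eth0", "id": "old"}}
    new = {"connection": {"id": "new"}}
    _merge_settings_attribute(base, new, "connection", ignore_current_value=["interface-name"])
    print(base)  # {'connection': {'id': 'new'}}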
@@ -85,6 +89,7 @@ class NetworkSetting(DBusInterface):
self._vlan: VlanProperties | None = None self._vlan: VlanProperties | None = None
self._ipv4: IpProperties | None = None self._ipv4: IpProperties | None = None
self._ipv6: IpProperties | None = None self._ipv6: IpProperties | None = None
self._match: MatchProperties | None = None
@property @property
def connection(self) -> ConnectionProperties | None: def connection(self) -> ConnectionProperties | None:
@@ -121,19 +126,29 @@ class NetworkSetting(DBusInterface):
"""Return ipv6 properties if any.""" """Return ipv6 properties if any."""
return self._ipv6 return self._ipv6
@property
def match(self) -> MatchProperties | None:
"""Return match properties if any."""
return self._match
@dbus_connected @dbus_connected
async def get_settings(self) -> dict[str, Any]: async def get_settings(self) -> dict[str, Any]:
"""Return connection settings.""" """Return connection settings."""
return await self.dbus.Settings.Connection.call_get_settings() return await self.dbus.Settings.Connection.call_get_settings()
@dbus_connected @dbus_connected
async def update(self, settings: Any) -> None: async def update(self, settings: dict[str, dict[str, Variant]]) -> None:
"""Update connection settings.""" """Update connection settings."""
new_settings = await self.dbus.Settings.Connection.call_get_settings( new_settings: dict[
unpack_variants=False str, dict[str, Variant]
) ] = await self.dbus.Settings.Connection.call_get_settings(unpack_variants=False)
_merge_settings_attribute(new_settings, settings, CONF_ATTR_CONNECTION) _merge_settings_attribute(
new_settings,
settings,
CONF_ATTR_CONNECTION,
ignore_current_value=[ATTR_INTERFACE_NAME],
)
_merge_settings_attribute(new_settings, settings, CONF_ATTR_802_ETHERNET) _merge_settings_attribute(new_settings, settings, CONF_ATTR_802_ETHERNET)
_merge_settings_attribute(new_settings, settings, CONF_ATTR_802_WIRELESS) _merge_settings_attribute(new_settings, settings, CONF_ATTR_802_WIRELESS)
_merge_settings_attribute( _merge_settings_attribute(
@@ -152,6 +167,7 @@ class NetworkSetting(DBusInterface):
CONF_ATTR_IPV6, CONF_ATTR_IPV6,
ignore_current_value=IPV4_6_IGNORE_FIELDS, ignore_current_value=IPV4_6_IGNORE_FIELDS,
) )
_merge_settings_attribute(new_settings, settings, CONF_ATTR_MATCH)
await self.dbus.Settings.Connection.call_update(new_settings) await self.dbus.Settings.Connection.call_update(new_settings)
@@ -217,3 +233,6 @@ class NetworkSetting(DBusInterface):
self._ipv6 = IpProperties( self._ipv6 = IpProperties(
data[CONF_ATTR_IPV6].get(ATTR_METHOD), data[CONF_ATTR_IPV6].get(ATTR_METHOD),
) )
if CONF_ATTR_MATCH in data:
self._match = MatchProperties(data[CONF_ATTR_MATCH].get(ATTR_PATH))

View File

@@ -2,7 +2,7 @@
from __future__ import annotations from __future__ import annotations
import socket import socket
from typing import TYPE_CHECKING, Any from typing import TYPE_CHECKING
from uuid import uuid4 from uuid import uuid4
from dbus_fast import Variant from dbus_fast import Variant
@@ -15,17 +15,23 @@ from . import (
CONF_ATTR_CONNECTION, CONF_ATTR_CONNECTION,
CONF_ATTR_IPV4, CONF_ATTR_IPV4,
CONF_ATTR_IPV6, CONF_ATTR_IPV6,
CONF_ATTR_MATCH,
CONF_ATTR_PATH,
CONF_ATTR_VLAN, CONF_ATTR_VLAN,
) )
from .. import NetworkManager
from ....host.const import InterfaceMethod, InterfaceType from ....host.const import InterfaceMethod, InterfaceType
if TYPE_CHECKING: if TYPE_CHECKING:
from ....host.network import Interface from ....host.configuration import Interface
def get_connection_from_interface( def get_connection_from_interface(
interface: Interface, name: str | None = None, uuid: str | None = None interface: Interface,
) -> Any: network_manager: NetworkManager,
name: str | None = None,
uuid: str | None = None,
) -> dict[str, dict[str, Variant]]:
"""Generate message argument for network interface update.""" """Generate message argument for network interface update."""
# Generate/Update ID/name # Generate/Update ID/name
@@ -39,26 +45,28 @@ def get_connection_from_interface(
elif interface.type == InterfaceType.WIRELESS: elif interface.type == InterfaceType.WIRELESS:
iftype = "802-11-wireless" iftype = "802-11-wireless"
else: else:
iftype = interface.type.value iftype = interface.type
# Generate UUID # Generate UUID
if not uuid: if not uuid:
uuid = str(uuid4()) uuid = str(uuid4())
connection = { conn: dict[str, dict[str, Variant]] = {
"id": Variant("s", name), CONF_ATTR_CONNECTION: {
"type": Variant("s", iftype), "id": Variant("s", name),
"uuid": Variant("s", uuid), "type": Variant("s", iftype),
"llmnr": Variant("i", 2), "uuid": Variant("s", uuid),
"mdns": Variant("i", 2), "llmnr": Variant("i", 2),
"autoconnect": Variant("b", True), "mdns": Variant("i", 2),
"autoconnect": Variant("b", True),
},
} }
if interface.type != InterfaceType.VLAN: if interface.type != InterfaceType.VLAN:
connection["interface-name"] = Variant("s", interface.name) if interface.path:
conn[CONF_ATTR_MATCH] = {CONF_ATTR_PATH: Variant("as", [interface.path])}
conn = {} else:
conn[CONF_ATTR_CONNECTION] = connection conn[CONF_ATTR_CONNECTION]["interface-name"] = Variant("s", interface.name)
ipv4 = {} ipv4 = {}
if not interface.ipv4 or interface.ipv4.method == InterfaceMethod.AUTO: if not interface.ipv4 or interface.ipv4.method == InterfaceMethod.AUTO:
@@ -117,9 +125,15 @@ def get_connection_from_interface(
if interface.type == InterfaceType.ETHERNET: if interface.type == InterfaceType.ETHERNET:
conn[CONF_ATTR_802_ETHERNET] = {ATTR_ASSIGNED_MAC: Variant("s", "preserve")} conn[CONF_ATTR_802_ETHERNET] = {ATTR_ASSIGNED_MAC: Variant("s", "preserve")}
elif interface.type == "vlan": elif interface.type == "vlan":
parent = interface.vlan.interface
if parent in network_manager and (
parent_connection := network_manager.get(parent).connection
):
parent = parent_connection.uuid
conn[CONF_ATTR_VLAN] = { conn[CONF_ATTR_VLAN] = {
"id": Variant("u", interface.vlan.id), "id": Variant("u", interface.vlan.id),
"parent": Variant("s", interface.vlan.interface), "parent": Variant("s", parent),
} }
elif interface.type == InterfaceType.WIRELESS: elif interface.type == InterfaceType.WIRELESS:
wireless = { wireless = {
@@ -134,8 +148,8 @@ def get_connection_from_interface(
wireless["security"] = Variant("s", CONF_ATTR_802_WIRELESS_SECURITY) wireless["security"] = Variant("s", CONF_ATTR_802_WIRELESS_SECURITY)
wireless_security = {} wireless_security = {}
if interface.wifi.auth == "wep": if interface.wifi.auth == "wep":
wireless_security["auth-alg"] = Variant("s", "none") wireless_security["auth-alg"] = Variant("s", "open")
wireless_security["key-mgmt"] = Variant("s", "open") wireless_security["key-mgmt"] = Variant("s", "none")
elif interface.wifi.auth == "wpa-psk": elif interface.wifi.auth == "wpa-psk":
wireless_security["auth-alg"] = Variant("s", "open") wireless_security["auth-alg"] = Variant("s", "open")
wireless_security["key-mgmt"] = Variant("s", "wpa-psk") wireless_security["key-mgmt"] = Variant("s", "wpa-psk")

View File

@@ -4,7 +4,7 @@ from typing import Any
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ...exceptions import DBusError, DBusInterfaceError from ...exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..const import DBUS_NAME_NM, DBUS_OBJECT_SETTINGS from ..const import DBUS_NAME_NM, DBUS_OBJECT_SETTINGS
from ..interface import DBusInterface from ..interface import DBusInterface
from ..network.setting import NetworkSetting from ..network.setting import NetworkSetting
@@ -28,7 +28,7 @@ class NetworkManagerSettings(DBusInterface):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to Network Manager Settings") _LOGGER.warning("Can't connect to Network Manager Settings")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No Network Manager Settings support on the host. Local network functions have been disabled." "No Network Manager Settings support on the host. Local network functions have been disabled."
) )

View File

@@ -4,7 +4,7 @@ from typing import Any
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..utils.dbus import DBusSignalWrapper from ..utils.dbus import DBusSignalWrapper
from .const import ( from .const import (
DBUS_ATTR_BOOT_SLOT, DBUS_ATTR_BOOT_SLOT,
@@ -49,7 +49,7 @@ class Rauc(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to rauc") _LOGGER.warning("Can't connect to rauc")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning("Host has no rauc support. OTA updates have been disabled.") _LOGGER.warning("Host has no rauc support. OTA updates have been disabled.")
@property @property
@@ -95,7 +95,7 @@ class Rauc(DBusInterfaceProxy):
@dbus_connected @dbus_connected
async def mark(self, state: RaucState, slot_identifier: str) -> tuple[str, str]: async def mark(self, state: RaucState, slot_identifier: str) -> tuple[str, str]:
"""Get slot status.""" """Get slot status."""
return await self.dbus.Installer.call_mark(state.value, slot_identifier) return await self.dbus.Installer.call_mark(state, slot_identifier)
@dbus_connected @dbus_connected
async def update(self, changed: dict[str, Any] | None = None) -> None: async def update(self, changed: dict[str, Any] | None = None) -> None:

View File

@@ -5,7 +5,7 @@ import logging
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from .const import ( from .const import (
DBUS_ATTR_CACHE_STATISTICS, DBUS_ATTR_CACHE_STATISTICS,
DBUS_ATTR_CURRENT_DNS_SERVER, DBUS_ATTR_CURRENT_DNS_SERVER,
@@ -59,7 +59,7 @@ class Resolved(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to systemd-resolved.") _LOGGER.warning("Can't connect to systemd-resolved.")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"Host has no systemd-resolved support. DNS will not work correctly." "Host has no systemd-resolved support. DNS will not work correctly."
) )

View File

@@ -10,6 +10,7 @@ from ..exceptions import (
DBusError, DBusError,
DBusFatalError, DBusFatalError,
DBusInterfaceError, DBusInterfaceError,
DBusServiceUnkownError,
DBusSystemdNoSuchUnit, DBusSystemdNoSuchUnit,
) )
from .const import ( from .const import (
@@ -86,7 +87,7 @@ class Systemd(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to systemd") _LOGGER.warning("Can't connect to systemd")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No systemd support on the host. Host control has been disabled." "No systemd support on the host. Host control has been disabled."
) )
@@ -122,25 +123,25 @@ class Systemd(DBusInterfaceProxy):
@systemd_errors @systemd_errors
async def start_unit(self, unit: str, mode: StartUnitMode) -> str: async def start_unit(self, unit: str, mode: StartUnitMode) -> str:
"""Start a systemd service unit. Returns object path of job.""" """Start a systemd service unit. Returns object path of job."""
return await self.dbus.Manager.call_start_unit(unit, mode.value) return await self.dbus.Manager.call_start_unit(unit, mode)
@dbus_connected @dbus_connected
@systemd_errors @systemd_errors
async def stop_unit(self, unit: str, mode: StopUnitMode) -> str: async def stop_unit(self, unit: str, mode: StopUnitMode) -> str:
"""Stop a systemd service unit. Returns object path of job.""" """Stop a systemd service unit. Returns object path of job."""
return await self.dbus.Manager.call_stop_unit(unit, mode.value) return await self.dbus.Manager.call_stop_unit(unit, mode)
@dbus_connected @dbus_connected
@systemd_errors @systemd_errors
async def reload_unit(self, unit: str, mode: StartUnitMode) -> str: async def reload_unit(self, unit: str, mode: StartUnitMode) -> str:
"""Reload a systemd service unit. Returns object path of job.""" """Reload a systemd service unit. Returns object path of job."""
return await self.dbus.Manager.call_reload_or_restart_unit(unit, mode.value) return await self.dbus.Manager.call_reload_or_restart_unit(unit, mode)
@dbus_connected @dbus_connected
@systemd_errors @systemd_errors
async def restart_unit(self, unit: str, mode: StartUnitMode) -> str: async def restart_unit(self, unit: str, mode: StartUnitMode) -> str:
"""Restart a systemd service unit. Returns object path of job.""" """Restart a systemd service unit. Returns object path of job."""
return await self.dbus.Manager.call_restart_unit(unit, mode.value) return await self.dbus.Manager.call_restart_unit(unit, mode)
@dbus_connected @dbus_connected
async def list_units( async def list_units(
@@ -155,7 +156,7 @@ class Systemd(DBusInterfaceProxy):
) -> str: ) -> str:
"""Start a transient unit which is released when stopped or on reboot. Returns object path of job.""" """Start a transient unit which is released when stopped or on reboot. Returns object path of job."""
return await self.dbus.Manager.call_start_transient_unit( return await self.dbus.Manager.call_start_transient_unit(
unit, mode.value, properties, [] unit, mode, properties, []
) )
@dbus_connected @dbus_connected

View File

@@ -4,7 +4,7 @@ import logging
from dbus_fast.aio.message_bus import MessageBus from dbus_fast.aio.message_bus import MessageBus
from ..exceptions import DBusError, DBusInterfaceError from ..exceptions import DBusError, DBusInterfaceError, DBusServiceUnkownError
from ..utils.dt import utc_from_timestamp from ..utils.dt import utc_from_timestamp
from .const import ( from .const import (
DBUS_ATTR_NTP, DBUS_ATTR_NTP,
@@ -63,7 +63,7 @@ class TimeDate(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to systemd-timedate") _LOGGER.warning("Can't connect to systemd-timedate")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No timedate support on the host. Time/Date functions have been disabled." "No timedate support on the host. Time/Date functions have been disabled."
) )

View File

@@ -6,7 +6,12 @@ from typing import Any
from awesomeversion import AwesomeVersion from awesomeversion import AwesomeVersion
from dbus_fast.aio import MessageBus from dbus_fast.aio import MessageBus
from ...exceptions import DBusError, DBusInterfaceError, DBusObjectError from ...exceptions import (
DBusError,
DBusInterfaceError,
DBusObjectError,
DBusServiceUnkownError,
)
from ..const import ( from ..const import (
DBUS_ATTR_SUPPORTED_FILESYSTEMS, DBUS_ATTR_SUPPORTED_FILESYSTEMS,
DBUS_ATTR_VERSION, DBUS_ATTR_VERSION,
@@ -45,7 +50,7 @@ class UDisks2(DBusInterfaceProxy):
await super().connect(bus) await super().connect(bus)
except DBusError: except DBusError:
_LOGGER.warning("Can't connect to udisks2") _LOGGER.warning("Can't connect to udisks2")
except DBusInterfaceError: except (DBusServiceUnkownError, DBusInterfaceError):
_LOGGER.warning( _LOGGER.warning(
"No udisks2 support on the host. Host control has been disabled." "No udisks2 support on the host. Host control has been disabled."
) )

View File

@@ -263,6 +263,4 @@ class UDisks2Block(DBusInterfaceProxy):
) -> None: ) -> None:
"""Format block device.""" """Format block device."""
options = options.to_dict() if options else {} options = options.to_dict() if options else {}
await self.dbus.Block.call_format( await self.dbus.Block.call_format(type_, options | UDISKS2_DEFAULT_OPTIONS)
type_.value, options | UDISKS2_DEFAULT_OPTIONS
)

View File

@@ -1,20 +1,20 @@
"""Constants for UDisks2.""" """Constants for UDisks2."""
from enum import Enum from enum import StrEnum
from dbus_fast import Variant from dbus_fast import Variant
UDISKS2_DEFAULT_OPTIONS = {"auth.no_user_interaction": Variant("b", True)} UDISKS2_DEFAULT_OPTIONS = {"auth.no_user_interaction": Variant("b", True)}
class EncryptType(str, Enum): class EncryptType(StrEnum):
"""Encryption type.""" """Encryption type."""
LUKS1 = "luks1" LUKS1 = "luks1"
LUKS2 = "luks2" LUKS2 = "luks2"
class EraseMode(str, Enum): class EraseMode(StrEnum):
"""Erase mode.""" """Erase mode."""
ZERO = "zero" ZERO = "zero"
@@ -22,7 +22,7 @@ class EraseMode(str, Enum):
ATA_SECURE_ERASE_ENHANCED = "ata-secure-erase-enhanced" ATA_SECURE_ERASE_ENHANCED = "ata-secure-erase-enhanced"
class FormatType(str, Enum): class FormatType(StrEnum):
"""Format type.""" """Format type."""
EMPTY = "empty" EMPTY = "empty"
@@ -31,7 +31,7 @@ class FormatType(str, Enum):
GPT = "gpt" GPT = "gpt"
class PartitionTableType(str, Enum): class PartitionTableType(StrEnum):
"""Partition Table type.""" """Partition Table type."""
DOS = "dos" DOS = "dos"

View File

@@ -3,10 +3,9 @@
from dataclasses import dataclass from dataclasses import dataclass
from inspect import get_annotations from inspect import get_annotations
from pathlib import Path from pathlib import Path
from typing import Any, TypedDict from typing import Any, NotRequired, TypedDict
from dbus_fast import Variant from dbus_fast import Variant
from typing_extensions import NotRequired
from .const import EncryptType, EraseMode from .const import EncryptType, EraseMode
@@ -167,10 +166,10 @@ class FormatOptions(UDisks2StandardOptions):
) )
if self.encrypt_passpharase if self.encrypt_passpharase
else None, else None,
"encrypt.type": Variant("s", self.encrypt_type.value) "encrypt.type": Variant("s", self.encrypt_type)
if self.encrypt_type if self.encrypt_type
else None, else None,
"erase": Variant("s", self.erase.value) if self.erase else None, "erase": Variant("s", self.erase) if self.erase else None,
"update-partition-type": _optional_variant("b", self.update_partition_type), "update-partition-type": _optional_variant("b", self.update_partition_type),
"no-block": _optional_variant("b", self.no_block), "no-block": _optional_variant("b", self.no_block),
"dry-run-first": _optional_variant("b", self.dry_run_first), "dry-run-first": _optional_variant("b", self.dry_run_first),

View File

@@ -33,7 +33,7 @@ SCHEMA_DISCOVERY = vol.Schema(
{ {
vol.Required(ATTR_UUID): uuid_match, vol.Required(ATTR_UUID): uuid_match,
vol.Required(ATTR_ADDON): str, vol.Required(ATTR_ADDON): str,
vol.Required(ATTR_SERVICE): valid_discovery_service, vol.Required(ATTR_SERVICE): str,
vol.Required(ATTR_CONFIG): vol.Maybe(dict), vol.Required(ATTR_CONFIG): vol.Maybe(dict),
}, },
extra=vol.REMOVE_EXTRA, extra=vol.REMOVE_EXTRA,

View File

@@ -1,7 +1,6 @@
"""Init file for Supervisor add-on Docker object.""" """Init file for Supervisor add-on Docker object."""
from __future__ import annotations from __future__ import annotations
import asyncio
from collections.abc import Awaitable from collections.abc import Awaitable
from contextlib import suppress from contextlib import suppress
from ipaddress import IPv4Address, ip_address from ipaddress import IPv4Address, ip_address
@@ -16,15 +15,10 @@ from docker.types import Mount
import requests import requests
from ..addons.build import AddonBuild from ..addons.build import AddonBuild
from ..addons.const import MappingType
from ..bus import EventListener from ..bus import EventListener
from ..const import ( from ..const import (
DOCKER_CPU_RUNTIME_ALLOCATION, DOCKER_CPU_RUNTIME_ALLOCATION,
MAP_ADDONS,
MAP_BACKUP,
MAP_CONFIG,
MAP_MEDIA,
MAP_SHARE,
MAP_SSL,
SECURITY_DISABLE, SECURITY_DISABLE,
SECURITY_PROFILE, SECURITY_PROFILE,
SYSTEMD_JOURNAL_PERSISTENT, SYSTEMD_JOURNAL_PERSISTENT,
@@ -37,14 +31,15 @@ from ..exceptions import (
     CoreDNSError,
     DBusError,
     DockerError,
+    DockerJobError,
     DockerNotFound,
     HardwareNotFound,
 )
 from ..hardware.const import PolicyGroup
 from ..hardware.data import Device
-from ..jobs.decorator import Job, JobCondition, JobExecutionLimit
+from ..jobs.const import JobCondition, JobExecutionLimit
+from ..jobs.decorator import Job
 from ..resolution.const import ContextType, IssueType, SuggestionType
-from ..utils import process_lock
 from ..utils.sentry import capture_exception
 from .const import (
     ENV_TIME,
@@ -74,8 +69,8 @@ class DockerAddon(DockerInterface):
     def __init__(self, coresys: CoreSys, addon: Addon):
         """Initialize Docker Home Assistant wrapper."""
-        super().__init__(coresys)
         self.addon: Addon = addon
+        super().__init__(coresys)

         self._hw_listener: EventListener | None = None
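The reorder matters because the `JobGroup` base class (see the docker/interface.py diff below) derives its job-group name from `self.name`, and `DockerAddon.name` reads `self.addon`; assigning `self.addon` before calling `super().__init__()` makes that lookup safe. A stripped-down sketch of the dependency, not the real classes:

class Base:
    def __init__(self) -> None:
        self.group = f"docker_{self.name}"  # calls the subclass property

class AddonWrapper(Base):
    def __init__(self, addon) -> None:
        self.addon = addon      # must exist before Base.__init__ runs
        super().__init__()

    @property
    def name(self) -> str:
        return f"addon_{self.addon}"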
@@ -278,7 +273,7 @@ class DockerAddon(DockerInterface):
         return None

     @property
-    def capabilities(self) -> list[str] | None:
+    def capabilities(self) -> list[Capabilities] | None:
         """Generate needed capabilities."""
         capabilities: set[Capabilities] = set(self.addon.privileged)
@@ -292,7 +287,7 @@ class DockerAddon(DockerInterface):
         # Return None if no capabilities is present
         if capabilities:
-            return [cap.value for cap in capabilities]
+            return list(capabilities)
         return None

     @property
@@ -329,76 +324,118 @@ class DockerAddon(DockerInterface):
         """Return mounts for container."""
         addon_mapping = self.addon.map_volumes

+        target_data_path = ""
+        if MappingType.DATA in addon_mapping:
+            target_data_path = addon_mapping[MappingType.DATA].path
+
         mounts = [
             MOUNT_DEV,
             Mount(
-                type=MountType.BIND.value,
+                type=MountType.BIND,
                 source=self.addon.path_extern_data.as_posix(),
-                target="/data",
+                target=target_data_path or "/data",
                 read_only=False,
             ),
         ]

         # setup config mappings
-        if MAP_CONFIG in addon_mapping:
+        if MappingType.CONFIG in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
-                    target="/config",
-                    read_only=addon_mapping[MAP_CONFIG],
+                    target=addon_mapping[MappingType.CONFIG].path or "/config",
+                    read_only=addon_mapping[MappingType.CONFIG].read_only,
                 )
             )

-        if MAP_SSL in addon_mapping:
+        else:
+            # Map addon's public config folder if not using deprecated config option
+            if self.addon.addon_config_used:
+                mounts.append(
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.addon.path_extern_config.as_posix(),
+                        target=addon_mapping[MappingType.ADDON_CONFIG].path
+                        or "/config",
+                        read_only=addon_mapping[MappingType.ADDON_CONFIG].read_only,
+                    )
+                )
+
+            # Map Home Assistant config in new way
+            if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
+                mounts.append(
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_config.path_extern_homeassistant.as_posix(),
+                        target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
+                        or "/homeassistant",
+                        read_only=addon_mapping[
+                            MappingType.HOMEASSISTANT_CONFIG
+                        ].read_only,
+                    )
+                )
+
+        if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
+                    source=self.sys_config.path_extern_addon_configs.as_posix(),
+                    target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
+                    or "/addon_configs",
+                    read_only=addon_mapping[MappingType.ALL_ADDON_CONFIGS].read_only,
+                )
+            )
+
+        if MappingType.SSL in addon_mapping:
+            mounts.append(
+                Mount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_ssl.as_posix(),
-                    target="/ssl",
-                    read_only=addon_mapping[MAP_SSL],
+                    target=addon_mapping[MappingType.SSL].path or "/ssl",
+                    read_only=addon_mapping[MappingType.SSL].read_only,
                 )
             )

-        if MAP_ADDONS in addon_mapping:
+        if MappingType.ADDONS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_addons_local.as_posix(),
-                    target="/addons",
-                    read_only=addon_mapping[MAP_ADDONS],
+                    target=addon_mapping[MappingType.ADDONS].path or "/addons",
+                    read_only=addon_mapping[MappingType.ADDONS].read_only,
                 )
             )

-        if MAP_BACKUP in addon_mapping:
+        if MappingType.BACKUP in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_backup.as_posix(),
-                    target="/backup",
-                    read_only=addon_mapping[MAP_BACKUP],
+                    target=addon_mapping[MappingType.BACKUP].path or "/backup",
+                    read_only=addon_mapping[MappingType.BACKUP].read_only,
                 )
             )

-        if MAP_SHARE in addon_mapping:
+        if MappingType.SHARE in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_share.as_posix(),
-                    target="/share",
-                    read_only=addon_mapping[MAP_SHARE],
-                    propagation=PropagationMode.RSLAVE.value,
+                    target=addon_mapping[MappingType.SHARE].path or "/share",
+                    read_only=addon_mapping[MappingType.SHARE].read_only,
+                    propagation=PropagationMode.RSLAVE,
                 )
             )

-        if MAP_MEDIA in addon_mapping:
+        if MappingType.MEDIA in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_media.as_posix(),
-                    target="/media",
-                    read_only=addon_mapping[MAP_MEDIA],
-                    propagation=PropagationMode.RSLAVE.value,
+                    target=addon_mapping[MappingType.MEDIA].path or "/media",
+                    read_only=addon_mapping[MappingType.MEDIA].read_only,
+                    propagation=PropagationMode.RSLAVE,
                 )
             )
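The `addon_mapping` values switch from plain booleans (a read-only flag keyed by `MAP_*` constants) to objects carrying both a target path and a read-only flag. A hypothetical sketch of the shape this code consumes; the real `MappingType` and mapping model live in `..addons.const` and may differ:

from dataclasses import dataclass
from enum import StrEnum

class MappingType(StrEnum):
    DATA = "data"
    CONFIG = "config"
    SSL = "ssl"

@dataclass
class FolderMapping:
    path: str | None      # custom target inside the container, or None for the default
    read_only: bool

addon_mapping = {MappingType.SSL: FolderMapping(path=None, read_only=True)}
target = addon_mapping[MappingType.SSL].path or "/ssl"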
@@ -411,7 +448,7 @@ class DockerAddon(DockerInterface):
                 continue
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=gpio_path,
                     target=gpio_path,
                     read_only=False,
@@ -422,7 +459,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_devicetree:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source="/sys/firmware/devicetree/base",
                     target="/device-tree",
                     read_only=True,
@@ -437,7 +474,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_kernel_modules:
             mounts.append(
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source="/lib/modules",
                     target="/lib/modules",
                     read_only=True,
@@ -456,19 +493,19 @@ class DockerAddon(DockerInterface):
         if self.addon.with_audio:
             mounts += [
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.addon.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,
@@ -479,13 +516,13 @@ class DockerAddon(DockerInterface):
         if self.addon.with_journald:
             mounts += [
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     read_only=True,
@@ -494,28 +531,23 @@ class DockerAddon(DockerInterface):

         return mounts

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
+    @Job(
+        name="docker_addon_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
         # Security check
         if not self.addon.protected:
             _LOGGER.warning("%s running with disabled protected mode!", self.addon.name)

-        # Cleanup
-        self._stop()
-
         # Don't set a hostname if no separate UTS namespace is used
         hostname = None if self.uts_mode else self.addon.hostname

         # Create & Run container
         try:
-            docker_container = self.sys_docker.run(
-                self.image,
+            await self._run(
                 tag=str(self.addon.version),
                 name=self.name,
                 hostname=hostname,
@@ -546,14 +578,13 @@ class DockerAddon(DockerInterface):
             )
             raise

-        self._meta = docker_container.attrs
         _LOGGER.info(
             "Starting Docker add-on %s with version %s", self.image, self.version
         )

         # Write data to DNS server
         try:
-            self.sys_plugins.dns.add_host(
+            await self.sys_plugins.dns.add_host(
                 ipv4=self.ip_address, names=[self.addon.hostname]
             )
         except CoreDNSError as err:
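Throughout this diff, `@process_lock` plus executor-thread method bodies become async methods guarded by `@Job(limit=JobExecutionLimit.GROUP_ONCE, on_condition=DockerJobError)`. Roughly, GROUP_ONCE means "reject a second concurrent call into this job group". A much-simplified sketch of that semantic only; the real decorator also handles conditions, throttling, and job-group inheritance:

import asyncio
from functools import wraps

def group_once(lock: asyncio.Lock, error: type[Exception]):
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            if lock.locked():           # group already busy -> raise the on_condition error
                raise error()
            async with lock:
                return await func(*args, **kwargs)
        return wrapper
    return decorator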
@@ -566,13 +597,19 @@ class DockerAddon(DockerInterface):
             BusEvent.HARDWARE_NEW_DEVICE, self._hardware_events
         )

-    def _update(
-        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
+    @Job(
+        name="docker_addon_update",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def update(
+        self,
+        version: AwesomeVersion,
+        image: str | None = None,
+        latest: bool = False,
+        arch: CpuArch | None = None,
     ) -> None:
-        """Update a docker image.
-
-        Need run inside executor.
-        """
+        """Update a docker image."""
         image = image or self.image

         _LOGGER.info(
@@ -580,15 +617,20 @@ class DockerAddon(DockerInterface):
         )

         # Update docker image
-        self._install(
-            version, image=image, latest=latest, need_build=self.addon.latest_need_build
+        await self.install(
+            version,
+            image=image,
+            latest=latest,
+            arch=arch,
+            need_build=self.addon.latest_need_build,
         )

-        # Stop container & cleanup
-        with suppress(DockerError):
-            self._stop()
-
-    def _install(
+    @Job(
+        name="docker_addon_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def install(
         self,
         version: AwesomeVersion,
         image: str | None = None,
@@ -597,20 +639,14 @@ class DockerAddon(DockerInterface):
         *,
         need_build: bool | None = None,
     ) -> None:
-        """Pull Docker image or build it.
-
-        Need run inside executor.
-        """
+        """Pull Docker image or build it."""
         if need_build is None and self.addon.need_build or need_build:
-            self._build(version)
+            await self._build(version)
         else:
-            super()._install(version, image, latest, arch)
+            await super().install(version, image, latest, arch)

-    def _build(self, version: AwesomeVersion) -> None:
-        """Build a Docker container.
-
-        Need run inside executor.
-        """
+    async def _build(self, version: AwesomeVersion) -> None:
+        """Build a Docker container."""
         build_env = AddonBuild(self.coresys, self.addon)
         if not build_env.is_valid:
             _LOGGER.error("Invalid build environment, can't build this add-on!")
@@ -618,8 +654,10 @@ class DockerAddon(DockerInterface):
         _LOGGER.info("Starting build for %s:%s", self.image, version)
         try:
-            image, log = self.sys_docker.images.build(
-                use_config_proxy=False, **build_env.get_docker_args(version)
+            image, log = await self.sys_run_in_executor(
+                self.sys_docker.images.build,
+                use_config_proxy=False,
+                **build_env.get_docker_args(version),
             )

             _LOGGER.debug("Build %s:%s done: %s", self.image, version, log)
@@ -642,77 +680,51 @@ class DockerAddon(DockerInterface):

         _LOGGER.info("Build %s:%s done", self.image, version)

-    @process_lock
+    @Job(
+        name="docker_addon_export_image",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
     def export_image(self, tar_file: Path) -> Awaitable[None]:
         """Export current images into a tar file."""
-        return self.sys_run_in_executor(self._export_image, tar_file)
+        return self.sys_run_in_executor(
+            self.sys_docker.export_image, self.image, self.version, tar_file
+        )

-    def _export_image(self, tar_file: Path) -> None:
-        """Export current images into a tar file.
-
-        Need run inside executor.
-        """
-        try:
-            image = self.sys_docker.api.get_image(f"{self.image}:{self.version}")
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            _LOGGER.error("Can't fetch image %s: %s", self.image, err)
-            raise DockerError() from err
-
-        _LOGGER.info("Export image %s to %s", self.image, tar_file)
-        try:
-            with tar_file.open("wb") as write_tar:
-                for chunk in image:
-                    write_tar.write(chunk)
-        except (OSError, requests.RequestException) as err:
-            _LOGGER.error("Can't write tar file %s: %s", tar_file, err)
-            raise DockerError() from err
-
-        _LOGGER.info("Export image %s done", self.image)
-
-    @process_lock
-    def import_image(self, tar_file: Path) -> Awaitable[None]:
+    @Job(
+        name="docker_addon_import_image",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def import_image(self, tar_file: Path) -> None:
         """Import a tar file as image."""
-        return self.sys_run_in_executor(self._import_image, tar_file)
+        docker_image = await self.sys_run_in_executor(
+            self.sys_docker.import_image, tar_file
+        )
+        if docker_image:
+            self._meta = docker_image.attrs
+            _LOGGER.info("Importing image %s and version %s", tar_file, self.version)

-    def _import_image(self, tar_file: Path) -> None:
-        """Import a tar file as image.
-
-        Need run inside executor.
-        """
-        try:
-            with tar_file.open("rb") as read_tar:
-                docker_image_list = self.sys_docker.images.load(read_tar)
-
-            if len(docker_image_list) != 1:
-                _LOGGER.warning(
-                    "Unexpected image count %d while importing image from tar",
-                    len(docker_image_list),
-                )
-                return
-            docker_image = docker_image_list[0]
-        except (docker.errors.DockerException, OSError) as err:
-            _LOGGER.error("Can't import image %s: %s", self.image, err)
-            raise DockerError() from err
-
-        self._meta = docker_image.attrs
-        _LOGGER.info("Importing image %s and version %s", tar_file, self.version)
-
-        with suppress(DockerError):
-            self._cleanup()
+            with suppress(DockerError):
+                await self.cleanup()

-    @process_lock
-    def write_stdin(self, data: bytes) -> Awaitable[None]:
+    @Job(
+        name="docker_addon_write_stdin",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def write_stdin(self, data: bytes) -> None:
         """Write to add-on stdin."""
-        return self.sys_run_in_executor(self._write_stdin, data)
+        if not await self.is_running():
+            raise DockerError()
+
+        await self.sys_run_in_executor(self._write_stdin, data)

     def _write_stdin(self, data: bytes) -> None:
         """Write to add-on stdin.

         Need run inside executor.
         """
-        if not self._is_running():
-            raise DockerError()
-
         try:
             # Load needed docker objects
             container = self.sys_docker.containers.get(self.name)
@@ -730,15 +742,17 @@ class DockerAddon(DockerInterface):
             _LOGGER.error("Can't write to %s stdin: %s", self.name, err)
             raise DockerError() from err

-    def _stop(self, remove_container=True) -> None:
-        """Stop/remove Docker container.
-
-        Need run inside executor.
-        """
+    @Job(
+        name="docker_addon_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def stop(self, remove_container: bool = True) -> None:
+        """Stop/remove Docker container."""
         # DNS
         if self.ip_address != NO_ADDDRESS:
             try:
-                self.sys_plugins.dns.delete_host(self.addon.hostname)
+                await self.sys_plugins.dns.delete_host(self.addon.hostname)
             except CoreDNSError as err:
                 _LOGGER.warning("Can't update DNS for %s", self.name)
                 capture_exception(err)
@@ -748,9 +762,9 @@ class DockerAddon(DockerInterface):
             self.sys_bus.remove_listener(self._hw_listener)
             self._hw_listener = None

-        super()._stop(remove_container)
+        await super().stop(remove_container)

-    def _validate_trust(
+    async def _validate_trust(
         self, image_id: str, image: str, version: AwesomeVersion
     ) -> None:
         """Validate trust of content."""
@@ -758,13 +772,14 @@ class DockerAddon(DockerInterface):
             return

         checksum = image_id.partition(":")[2]
-        job = asyncio.run_coroutine_threadsafe(
-            self.sys_security.verify_content(self.addon.codenotary, checksum),
-            self.sys_loop,
-        )
-        job.result()
+        return await self.sys_security.verify_content(self.addon.codenotary, checksum)

-    @Job(conditions=[JobCondition.OS_AGENT], limit=JobExecutionLimit.SINGLE_WAIT)
+    @Job(
+        name="docker_addon_hardware_events",
+        conditions=[JobCondition.OS_AGENT],
+        limit=JobExecutionLimit.SINGLE_WAIT,
+        internal=True,
+    )
     async def _hardware_events(self, device: Device) -> None:
         """Process Hardware events for adjust device access."""
         if not any(


@@ -6,7 +6,10 @@ from docker.types import Mount

 from ..const import DOCKER_CPU_RUNTIME_ALLOCATION, MACHINE_ID
 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
     MOUNT_DBUS,
@@ -42,7 +45,7 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         mounts = [
             MOUNT_DEV,
             Mount(
-                type=MountType.BIND.value,
+                type=MountType.BIND,
                 source=self.sys_config.path_extern_audio.as_posix(),
                 target="/data",
                 read_only=False,
@@ -65,9 +68,9 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         ) + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.BLUETOOTH)

     @property
-    def capabilities(self) -> list[str]:
+    def capabilities(self) -> list[Capabilities]:
         """Generate needed capabilities."""
-        return [cap.value for cap in (Capabilities.SYS_NICE, Capabilities.SYS_RESOURCE)]
+        return [Capabilities.SYS_NICE, Capabilities.SYS_RESOURCE]

     @property
     def ulimits(self) -> list[docker.types.Ulimit]:
@@ -82,20 +85,14 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
             return None
         return DOCKER_CPU_RUNTIME_ALLOCATION

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_audio_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
             tag=str(self.sys_plugins.audio.version),
             init=False,
             ipv4=self.sys_docker.network.audio,
@@ -112,8 +109,6 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
             },
             mounts=self.mounts,
         )
-        self._meta = docker_container.attrs
-
         _LOGGER.info(
             "Starting Audio %s with version %s - %s",
             self.image,


@@ -2,6 +2,9 @@
 import logging

 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, ENV_TOKEN
 from .interface import DockerInterface
@@ -23,20 +26,14 @@ class DockerCli(DockerInterface, CoreSysAttributes):
         """Return name of Docker container."""
         return CLI_DOCKER_NAME

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_cli_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
             entrypoint=["/init"],
             tag=str(self.sys_plugins.cli.version),
             init=False,
@@ -54,8 +51,6 @@ class DockerCli(DockerInterface, CoreSysAttributes):
                 ENV_TOKEN: self.sys_plugins.cli.supervisor_token,
             },
         )
-        self._meta = docker_container.attrs
-
         _LOGGER.info(
             "Starting CLI %s with version %s - %s",
             self.image,


@@ -1,12 +1,12 @@
 """Docker constants."""
-from enum import Enum
+from enum import StrEnum

 from docker.types import Mount

 from ..const import MACHINE_ID

-class Capabilities(str, Enum):
+class Capabilities(StrEnum):
     """Linux Capabilities."""

     BPF = "BPF"
@@ -24,7 +24,7 @@ class Capabilities(str, Enum):
     SYS_TIME = "SYS_TIME"

-class ContainerState(str, Enum):
+class ContainerState(StrEnum):
     """State of supervisor managed docker container."""

     FAILED = "failed"
@@ -35,7 +35,7 @@ class ContainerState(str, Enum):
     UNKNOWN = "unknown"

-class RestartPolicy(str, Enum):
+class RestartPolicy(StrEnum):
     """Restart policy of container."""

     NO = "no"
@@ -44,7 +44,7 @@ class RestartPolicy(str, Enum):
     ALWAYS = "always"

-class MountType(str, Enum):
+class MountType(StrEnum):
     """Mount type."""

     BIND = "bind"
@@ -53,7 +53,7 @@ class MountType(str, Enum):
     NPIPE = "npipe"

-class PropagationMode(str, Enum):
+class PropagationMode(StrEnum):
     """Propagation mode, only for bind type mounts."""

     PRIVATE = "private"
@@ -71,23 +71,21 @@ ENV_TOKEN_OLD = "HASSIO_TOKEN"
 LABEL_MANAGED = "supervisor_managed"

 MOUNT_DBUS = Mount(
-    type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
+    type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
 )
-MOUNT_DEV = Mount(
-    type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
-)
+MOUNT_DEV = Mount(type=MountType.BIND, source="/dev", target="/dev", read_only=True)
 MOUNT_DOCKER = Mount(
-    type=MountType.BIND.value,
+    type=MountType.BIND,
     source="/run/docker.sock",
     target="/run/docker.sock",
     read_only=True,
 )
 MOUNT_MACHINE_ID = Mount(
-    type=MountType.BIND.value,
+    type=MountType.BIND,
     source=MACHINE_ID.as_posix(),
     target=MACHINE_ID.as_posix(),
     read_only=True,
 )
 MOUNT_UDEV = Mount(
-    type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
+    type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
 )


@@ -4,6 +4,9 @@ import logging
 from docker.types import Mount

 from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import ENV_TIME, MOUNT_DBUS, MountType
 from .interface import DockerInterface
@@ -25,20 +28,14 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
         """Return name of Docker container."""
         return DNS_DOCKER_NAME

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_dns_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
             tag=str(self.sys_plugins.dns.version),
             init=False,
             dns=False,
@@ -50,7 +47,7 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
             environment={ENV_TIME: self.sys_timezone},
             mounts=[
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_dns.as_posix(),
                     target="/config",
                     read_only=False,
@@ -59,8 +56,6 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
             ],
             oom_score_adj=-300,
         )
-        self._meta = docker_container.attrs
-
         _LOGGER.info(
             "Starting DNS %s with version %s - %s",
             self.image,


@@ -4,14 +4,14 @@ from ipaddress import IPv4Address
 import logging

 from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
-import docker
 from docker.types import Mount
-import requests

 from ..const import LABEL_MACHINE, MACHINE_ID
-from ..exceptions import DockerError
+from ..exceptions import DockerJobError
 from ..hardware.const import PolicyGroup
 from ..homeassistant.const import LANDINGPAGE
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
 from .const import (
     ENV_TIME,
     ENV_TOKEN,
@@ -53,9 +53,10 @@ class DockerHomeAssistant(DockerInterface):
     @property
     def timeout(self) -> int:
         """Return timeout for Docker actions."""
-        # Synchronized homeassistant's S6_SERVICES_GRACETIME
-        # to avoid killing Home Assistant Core
-        return 220 + 20
+        # Synchronized with the homeassistant core container's S6_SERVICES_GRACETIME
+        # to avoid killing Home Assistant Core, see
+        # https://github.com/home-assistant/core/tree/dev/Dockerfile
+        return 240 + 20

     @property
     def ip_address(self) -> IPv4Address:
@@ -66,10 +67,14 @@ class DockerHomeAssistant(DockerInterface):
     def cgroups_rules(self) -> list[str]:
         """Return a list of needed cgroups permission."""
         return (
-            self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.UART)
-            + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.VIDEO)
-            + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.GPIO)
-            + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.USB)
+            []
+            if self.sys_homeassistant.version == LANDINGPAGE
+            else (
+                self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.UART)
+                + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.VIDEO)
+                + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.GPIO)
+                + self.sys_hardware.policy.get_cgroups_rules(PolicyGroup.USB)
+            )
         )

     @property
@@ -79,79 +84,81 @@ class DockerHomeAssistant(DockerInterface):
             MOUNT_DEV,
             MOUNT_DBUS,
             MOUNT_UDEV,
-            # Add folders
+            # HA config folder
             Mount(
-                type=MountType.BIND.value,
+                type=MountType.BIND,
                 source=self.sys_config.path_extern_homeassistant.as_posix(),
                 target="/config",
                 read_only=False,
             ),
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_config.path_extern_ssl.as_posix(),
-                target="/ssl",
-                read_only=True,
-            ),
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_config.path_extern_share.as_posix(),
-                target="/share",
-                read_only=False,
-                propagation=PropagationMode.RSLAVE.value,
-            ),
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_config.path_extern_media.as_posix(),
-                target="/media",
-                read_only=False,
-                propagation=PropagationMode.RSLAVE.value,
-            ),
-            # Configuration audio
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_homeassistant.path_extern_pulse.as_posix(),
-                target="/etc/pulse/client.conf",
-                read_only=True,
-            ),
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
-                target="/run/audio",
-                read_only=True,
-            ),
-            Mount(
-                type=MountType.BIND.value,
-                source=self.sys_plugins.audio.path_extern_asound.as_posix(),
-                target="/etc/asound.conf",
-                read_only=True,
-            ),
         ]

+        # Landingpage does not need all this access
+        if self.sys_homeassistant.version != LANDINGPAGE:
+            mounts.extend(
+                [
+                    # All other folders
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_config.path_extern_ssl.as_posix(),
+                        target="/ssl",
+                        read_only=True,
+                    ),
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_config.path_extern_share.as_posix(),
+                        target="/share",
+                        read_only=False,
+                        propagation=PropagationMode.RSLAVE.value,
+                    ),
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_config.path_extern_media.as_posix(),
+                        target="/media",
+                        read_only=False,
+                        propagation=PropagationMode.RSLAVE.value,
+                    ),
+                    # Configuration audio
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_homeassistant.path_extern_pulse.as_posix(),
+                        target="/etc/pulse/client.conf",
+                        read_only=True,
+                    ),
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
+                        target="/run/audio",
+                        read_only=True,
+                    ),
+                    Mount(
+                        type=MountType.BIND,
+                        source=self.sys_plugins.audio.path_extern_asound.as_posix(),
+                        target="/etc/asound.conf",
+                        read_only=True,
+                    ),
+                ]
+            )
+
         # Machine ID
         if MACHINE_ID.exists():
             mounts.append(MOUNT_MACHINE_ID)

         return mounts
-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_home_assistant_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
             tag=(self.sys_homeassistant.version),
             name=self.name,
             hostname=self.name,
             detach=True,
-            privileged=True,
+            privileged=self.sys_homeassistant.version != LANDINGPAGE,
             init=False,
             security_opt=self.security_opt,
             network_mode="host",
@@ -171,18 +178,19 @@ class DockerHomeAssistant(DockerInterface):
             tmpfs={"/tmp": ""},
             oom_score_adj=-300,
         )
-        self._meta = docker_container.attrs
+
         _LOGGER.info(
             "Starting Home Assistant %s with version %s", self.image, self.version
         )
-    def _execute_command(self, command: str) -> CommandReturn:
-        """Create a temporary container and run command.
-
-        Need run inside executor.
-        """
-        return self.sys_docker.run_command(
+    @Job(
+        name="docker_home_assistant_execute_command",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def execute_command(self, command: str) -> CommandReturn:
+        """Create a temporary container and run command."""
+        return await self.sys_run_in_executor(
+            self.sys_docker.run_command,
             self.image,
             version=self.sys_homeassistant.version,
             command=command,
@@ -194,19 +202,19 @@ class DockerHomeAssistant(DockerInterface):
             stderr=True,
             mounts=[
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target="/config",
                     read_only=False,
                 ),
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target="/ssl",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND.value,
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target="/share",
                     read_only=False,
@@ -217,34 +225,14 @@ class DockerHomeAssistant(DockerInterface):
     def is_initialize(self) -> Awaitable[bool]:
         """Return True if Docker container exists."""
-        return self.sys_run_in_executor(self._is_initialize)
-
-    def _is_initialize(self) -> bool:
-        """Return True if docker container exists.
-
-        Need run inside executor.
-        """
-        try:
-            docker_container = self.sys_docker.containers.get(self.name)
-            docker_image = self.sys_docker.images.get(
-                f"{self.image}:{self.sys_homeassistant.version}"
-            )
-        except docker.errors.NotFound:
-            return False
-        except (docker.errors.DockerException, requests.RequestException):
-            return DockerError()
-
-        # we run on an old image, stop and start it
-        if docker_container.image.id != docker_image.id:
-            return False
-
-        # Check of correct state
-        if docker_container.status not in ("exited", "running", "created"):
-            return False
-
-        return True
-
-    def _validate_trust(
+        return self.sys_run_in_executor(
+            self.sys_docker.container_is_initialized,
+            self.name,
+            self.image,
+            self.sys_homeassistant.version,
+        )
+
+    async def _validate_trust(
         self, image_id: str, image: str, version: AwesomeVersion
     ) -> None:
         """Validate trust of content."""
@@ -254,4 +242,4 @@ class DockerHomeAssistant(DockerInterface):
         except AwesomeVersionCompareException:
             return

-        super()._validate_trust(image_id, image, version)
+        await super()._validate_trust(image_id, image, version)
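The inline `_is_initialize` body moves behind `sys_docker.container_is_initialized`. Reconstructed from the removed code, the helper plausibly looks like the sketch below (signature and location assumed); note the refactor also retires the old bug of returning a `DockerError` instance instead of raising it:

import docker
import requests

def container_is_initialized(client, name: str, image: str, version) -> bool:
    try:
        container = client.containers.get(name)
        wanted = client.images.get(f"{image}:{version}")
    except docker.errors.NotFound:
        return False
    except (docker.errors.DockerException, requests.RequestException):
        return False  # the removed code returned DockerError() here by mistake
    if container.image.id != wanted.id:
        return False  # running on a stale image: needs re-create
    return container.status in ("exited", "running", "created")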


@@ -1,13 +1,14 @@
 """Interface class for Supervisor Docker object."""
 from __future__ import annotations

-import asyncio
+from collections import defaultdict
 from collections.abc import Awaitable
 from contextlib import suppress
 import logging
 import re
 from time import time
 from typing import Any
+from uuid import uuid4

 from awesomeversion import AwesomeVersion
 from awesomeversion.strategy import AwesomeVersionStrategy
@@ -24,18 +25,21 @@ from ..const import (
     BusEvent,
     CpuArch,
 )
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
 from ..exceptions import (
     CodeNotaryError,
     CodeNotaryUntrusted,
     DockerAPIError,
     DockerError,
+    DockerJobError,
     DockerNotFound,
     DockerRequestError,
     DockerTrustError,
 )
+from ..jobs.const import JOB_GROUP_DOCKER_INTERFACE, JobExecutionLimit
+from ..jobs.decorator import Job
+from ..jobs.job_group import JobGroup
 from ..resolution.const import ContextType, IssueType, SuggestionType
-from ..utils import process_lock
 from ..utils.sentry import capture_exception
 from .const import ContainerState, RestartPolicy
 from .manager import CommandReturn
@@ -73,14 +77,20 @@ def _container_state_from_model(docker_container: Container) -> ContainerState:
     return ContainerState.STOPPED

-class DockerInterface(CoreSysAttributes):
+class DockerInterface(JobGroup):
     """Docker Supervisor interface."""

     def __init__(self, coresys: CoreSys):
         """Initialize Docker base wrapper."""
+        super().__init__(
+            coresys,
+            JOB_GROUP_DOCKER_INTERFACE.format_map(
+                defaultdict(str, name=self.name or uuid4().hex)
+            ),
+            self.name,
+        )
         self.coresys: CoreSys = coresys
         self._meta: dict[str, Any] | None = None
-        self.lock: asyncio.Lock = asyncio.Lock()

     @property
     def timeout(self) -> int:
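`format_map` with a `defaultdict(str)` means any placeholder missing from the mapping silently becomes an empty string instead of raising `KeyError`, and the `uuid4().hex` fallback covers subclasses whose `name` is not set yet. The exact `JOB_GROUP_DOCKER_INTERFACE` template is not shown in this diff, so the value below is an assumption:

from collections import defaultdict

JOB_GROUP_DOCKER_INTERFACE = "docker_{name}"  # assumed template

print(JOB_GROUP_DOCKER_INTERFACE.format_map(defaultdict(str, name="addon_ssh")))
# docker_addon_ssh
print("docker_{name}_{slug}".format_map(defaultdict(str, name="cli")))
# docker_cli_   <- the missing "slug" placeholder becomes ""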
@@ -141,7 +151,7 @@ class DockerInterface(CoreSysAttributes):
     @property
     def in_progress(self) -> bool:
         """Return True if a task is in progress."""
-        return self.lock.locked()
+        return self.active_job

     @property
     def restart_policy(self) -> RestartPolicy | None:
@@ -193,7 +203,7 @@ class DockerInterface(CoreSysAttributes):
         return credentials

-    def _docker_login(self, image: str) -> None:
+    async def _docker_login(self, image: str) -> None:
         """Try to log in to the registry if there are credentials available."""
         if not self.sys_docker.config.registries:
             return
@@ -202,30 +212,21 @@ class DockerInterface(CoreSysAttributes):
         if not credentials:
             return

-        self.sys_docker.docker.login(**credentials)
+        await self.sys_run_in_executor(self.sys_docker.docker.login, **credentials)

-    @process_lock
-    def install(
-        self,
-        version: AwesomeVersion,
-        image: str | None = None,
-        latest: bool = False,
-        arch: CpuArch | None = None,
-    ):
-        """Pull docker image."""
-        return self.sys_run_in_executor(self._install, version, image, latest, arch)
-
-    def _install(
+    @Job(
+        name="docker_interface_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def install(
         self,
         version: AwesomeVersion,
         image: str | None = None,
         latest: bool = False,
         arch: CpuArch | None = None,
     ) -> None:
-        """Pull Docker image.
-
-        Need run inside executor.
-        """
+        """Pull docker image."""
         image = image or self.image
         arch = arch or self.sys_arch.supervisor
@@ -233,21 +234,24 @@ class DockerInterface(CoreSysAttributes):
         try:
             if self.sys_docker.config.registries:
                 # Try login if we have defined credentials
-                self._docker_login(image)
+                await self._docker_login(image)

             # Pull new image
-            docker_image = self.sys_docker.images.pull(
+            docker_image = await self.sys_run_in_executor(
+                self.sys_docker.images.pull,
                 f"{image}:{version!s}",
                 platform=MAP_ARCH[arch],
             )

             # Validate content
             try:
-                self._validate_trust(docker_image.id, image, version)
+                await self._validate_trust(docker_image.id, image, version)
             except CodeNotaryError:
                 with suppress(docker.errors.DockerException):
-                    self.sys_docker.images.remove(
-                        image=f"{image}:{version!s}", force=True
+                    await self.sys_run_in_executor(
+                        self.sys_docker.images.remove,
+                        image=f"{image}:{version!s}",
+                        force=True,
                     )
                 raise
@@ -256,7 +260,7 @@ class DockerInterface(CoreSysAttributes):
                 _LOGGER.info(
                     "Tagging image %s with version %s as latest", image, version
                 )
-                docker_image.tag(image, tag="latest")
+                await self.sys_run_in_executor(docker_image.tag, image, tag="latest")
         except docker.errors.APIError as err:
             if err.status_code == 429:
                 self.sys_resolution.create_issue(
@@ -289,34 +293,21 @@ class DockerInterface(CoreSysAttributes):
         self._meta = docker_image.attrs

-    def exists(self) -> Awaitable[bool]:
+    async def exists(self) -> bool:
         """Return True if Docker image exists in local repository."""
-        return self.sys_run_in_executor(self._exists)
-
-    def _exists(self) -> bool:
-        """Return True if Docker image exists in local repository.
-
-        Need run inside executor.
-        """
         with suppress(docker.errors.DockerException, requests.RequestException):
-            self.sys_docker.images.get(f"{self.image}:{self.version!s}")
+            await self.sys_run_in_executor(
+                self.sys_docker.images.get, f"{self.image}:{self.version!s}"
+            )
             return True
         return False

-    def is_running(self) -> Awaitable[bool]:
-        """Return True if Docker is running.
-
-        Return a Future.
-        """
-        return self.sys_run_in_executor(self._is_running)
-
-    def _is_running(self) -> bool:
-        """Return True if Docker is running.
-
-        Need run inside executor.
-        """
+    async def is_running(self) -> bool:
+        """Return True if Docker is running."""
         try:
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.containers.get, self.name
+            )
         except docker.errors.NotFound:
             return False
         except docker.errors.DockerException as err:
@@ -326,20 +317,12 @@ class DockerInterface(CoreSysAttributes):
         return docker_container.status == "running"

-    def current_state(self) -> Awaitable[ContainerState]:
-        """Return current state of container.
-
-        Return a Future.
-        """
-        return self.sys_run_in_executor(self._current_state)
-
-    def _current_state(self) -> ContainerState:
-        """Return current state of container.
-
-        Need run inside executor.
-        """
+    async def current_state(self) -> ContainerState:
+        """Return current state of container."""
         try:
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.containers.get, self.name
+            )
         except docker.errors.NotFound:
             return ContainerState.UNKNOWN
         except docker.errors.DockerException as err:
@@ -349,22 +332,15 @@ class DockerInterface(CoreSysAttributes):
         return _container_state_from_model(docker_container)

-    @process_lock
-    def attach(
-        self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
-    ) -> Awaitable[None]:
-        """Attach to running Docker container."""
-        return self.sys_run_in_executor(self._attach, version, skip_state_event_if_down)
-
-    def _attach(
-        self, version: AwesomeVersion, skip_state_event_if_down: bool = False
-    ) -> None:
-        """Attach to running docker container.
-
-        Need run inside executor.
-        """
+    @Job(name="docker_interface_attach", limit=JobExecutionLimit.GROUP_WAIT)
+    async def attach(
+        self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
+    ) -> None:
+        """Attach to running Docker container."""
         with suppress(docker.errors.DockerException, requests.RequestException):
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.containers.get, self.name
+            )
             self._meta = docker_container.attrs
             self.sys_docker.monitor.watch_container(docker_container)
@@ -374,8 +350,7 @@ class DockerInterface(CoreSysAttributes):
             and state in [ContainerState.STOPPED, ContainerState.FAILED]
         ):
             # Fire event with current state of container
-            self.sys_loop.call_soon_threadsafe(
-                self.sys_bus.fire_event,
+            self.sys_bus.fire_event(
                 BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
                 DockerContainerStateEvent(
                     self.name, state, docker_container.id, int(time())
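With `attach` now a coroutine running on the event loop, the `call_soon_threadsafe` indirection is unnecessary: that API exists to marshal calls from a worker thread back onto the loop. A minimal sketch of the distinction:

import asyncio

def notify_from_thread(loop: asyncio.AbstractEventLoop, fire_event, event) -> None:
    # executor/worker thread: must hand the call over to the loop thread
    loop.call_soon_threadsafe(fire_event, event)

async def notify_from_loop(fire_event, event) -> None:
    # already on the loop thread: just call it directly
    fire_event(event)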
@@ -393,114 +368,85 @@ class DockerInterface(CoreSysAttributes):
             raise DockerError()

         _LOGGER.info("Attaching to %s with version %s", self.image, self.version)

-    @process_lock
-    def run(self) -> Awaitable[None]:
-        """Run Docker image."""
-        return self.sys_run_in_executor(self._run)
-
-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
+    @Job(
+        name="docker_interface_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
         raise NotImplementedError()

-    @process_lock
-    def stop(self, remove_container=True) -> Awaitable[None]:
-        """Stop/remove Docker container."""
-        return self.sys_run_in_executor(self._stop, remove_container)
-
-    def _stop(self, remove_container=True) -> None:
-        """Stop/remove Docker container.
-
-        Need run inside executor.
-        """
-        try:
-            docker_container = self.sys_docker.containers.get(self.name)
-        except docker.errors.NotFound:
+    async def _run(self, **kwargs) -> None:
+        """Run Docker image with retry if necessary."""
+        if await self.is_running():
             return
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError() from err

-        if docker_container.status == "running":
-            _LOGGER.info("Stopping %s application", self.name)
-            with suppress(docker.errors.DockerException, requests.RequestException):
-                docker_container.stop(timeout=self.timeout)
+        # Cleanup
+        await self.stop()

-        if remove_container:
-            with suppress(docker.errors.DockerException, requests.RequestException):
-                _LOGGER.info("Cleaning %s application", self.name)
-                docker_container.remove(force=True)
+        # Create & Run container
+        try:
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.run, self.image, **kwargs
+            )
+        except DockerNotFound as err:
+            # If image is missing, capture the exception as this shouldn't happen
+            capture_exception(err)
+            raise

-    @process_lock
+        # Store metadata
+        self._meta = docker_container.attrs
+
+    @Job(
+        name="docker_interface_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def stop(self, remove_container: bool = True) -> None:
+        """Stop/remove Docker container."""
+        with suppress(DockerNotFound):
+            await self.sys_run_in_executor(
+                self.sys_docker.stop_container,
+                self.name,
+                self.timeout,
+                remove_container,
+            )
+
+    @Job(
+        name="docker_interface_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
     def start(self) -> Awaitable[None]:
         """Start Docker container."""
-        return self.sys_run_in_executor(self._start)
+        return self.sys_run_in_executor(self.sys_docker.start_container, self.name)

-    def _start(self) -> None:
-        """Start docker container.
-
-        Need run inside executor.
-        """
-        try:
-            docker_container = self.sys_docker.containers.get(self.name)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"{self.name} not found for starting up", _LOGGER.error
-            ) from err
-
-        _LOGGER.info("Starting %s", self.name)
-        try:
-            docker_container.start()
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(f"Can't start {self.name}: {err}", _LOGGER.error) from err
-
-    @process_lock
-    def remove(self) -> Awaitable[None]:
+    @Job(
+        name="docker_interface_remove",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def remove(self) -> None:
         """Remove Docker images."""
-        return self.sys_run_in_executor(self._remove)
-
-    def _remove(self) -> None:
-        """Remove docker images.
-
-        Needs run inside executor.
-        """
         # Cleanup container
         with suppress(DockerError):
-            self._stop()
-
-        _LOGGER.info("Removing image %s with latest and %s", self.image, self.version)
-
-        try:
-            with suppress(docker.errors.ImageNotFound):
-                self.sys_docker.images.remove(image=f"{self.image}:latest", force=True)
-
-            with suppress(docker.errors.ImageNotFound):
-                self.sys_docker.images.remove(
-                    image=f"{self.image}:{self.version!s}", force=True
-                )
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Can't remove image {self.image}: {err}", _LOGGER.warning
-            ) from err
+            await self.stop()

+        await self.sys_run_in_executor(
+            self.sys_docker.remove_image, self.image, self.version
+        )
         self._meta = None

-    @process_lock
-    def update(
-        self, version: AwesomeVersion, image: str | None = None, latest: bool = False
-    ) -> Awaitable[None]:
-        """Update a Docker image."""
-        return self.sys_run_in_executor(self._update, version, image, latest)
-
-    def _update(
+    @Job(
+        name="docker_interface_update",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def update(
         self, version: AwesomeVersion, image: str | None = None, latest: bool = False
     ) -> None:
-        """Update a docker image.
-
-        Need run inside executor.
-        """
+        """Update a Docker image."""
         image = image or self.image

         _LOGGER.info(
@@ -508,163 +454,69 @@ class DockerInterface(CoreSysAttributes):
         )

         # Update docker image
-        self._install(version, image=image, latest=latest)
+        await self.install(version, image=image, latest=latest)

         # Stop container & cleanup
         with suppress(DockerError):
-            self._stop()
+            await self.stop()

-    def logs(self) -> Awaitable[bytes]:
-        """Return Docker logs of container.
-
-        Return a Future.
-        """
-        return self.sys_run_in_executor(self._logs)
-
-    def _logs(self) -> bytes:
-        """Return Docker logs of container.
-
-        Need run inside executor.
-        """
-        try:
-            docker_container = self.sys_docker.containers.get(self.name)
-        except (docker.errors.DockerException, requests.RequestException):
-            return b""
-
-        try:
-            return docker_container.logs(tail=100, stdout=True, stderr=True)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            _LOGGER.warning("Can't grep logs from %s: %s", self.image, err)
-
+    async def logs(self) -> bytes:
+        """Return Docker logs of container."""
+        with suppress(DockerError):
+            return await self.sys_run_in_executor(
+                self.sys_docker.container_logs, self.name
+            )
         return b""
-    @process_lock
-    def cleanup(self, old_image: str | None = None) -> Awaitable[None]:
+    @Job(name="docker_interface_cleanup", limit=JobExecutionLimit.GROUP_WAIT)
+    def cleanup(
+        self,
+        old_image: str | None = None,
+        image: str | None = None,
+        version: AwesomeVersion | None = None,
+    ) -> Awaitable[None]:
         """Check if old version exists and cleanup."""
-        return self.sys_run_in_executor(self._cleanup, old_image)
-
-    def _cleanup(self, old_image: str | None = None) -> None:
-        """Check if old version exists and cleanup.
-
-        Need run inside executor.
-        """
-        try:
-            origin = self.sys_docker.images.get(f"{self.image}:{self.version!s}")
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Can't find {self.image} for cleanup", _LOGGER.warning
-            ) from err
-
-        # Cleanup Current
-        try:
-            images_list = self.sys_docker.images.list(name=self.image)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
-            ) from err
-
-        for image in images_list:
-            if origin.id == image.id:
-                continue
-
-            with suppress(docker.errors.DockerException, requests.RequestException):
-                _LOGGER.info("Cleanup images: %s", image.tags)
-                self.sys_docker.images.remove(image.id, force=True)
-
-        # Cleanup Old
-        if not old_image or self.image == old_image:
-            return
-
-        try:
-            images_list = self.sys_docker.images.list(name=old_image)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
-            ) from err
-
-        for image in images_list:
-            if origin.id == image.id:
-                continue
-
-            with suppress(docker.errors.DockerException, requests.RequestException):
-                _LOGGER.info("Cleanup images: %s", image.tags)
-                self.sys_docker.images.remove(image.id, force=True)
+        return self.sys_run_in_executor(
+            self.sys_docker.cleanup_old_images,
+            image or self.image,
+            version or self.version,
+            {old_image} if old_image else None,
+        )

-    @process_lock
+    @Job(
+        name="docker_interface_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
     def restart(self) -> Awaitable[None]:
         """Restart docker container."""
-        return self.sys_loop.run_in_executor(None, self._restart)
+        return self.sys_run_in_executor(
+            self.sys_docker.restart_container, self.name, self.timeout
+        )
-    def _restart(self) -> None:
-        """Restart docker container.
-
-        Need run inside executor.
-        """
-        try:
-            container = self.sys_docker.containers.get(self.name)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError() from err
-
-        _LOGGER.info("Restarting %s", self.image)
-        try:
-            container.restart(timeout=self.timeout)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Can't restart {self.image}: {err}", _LOGGER.warning
-            ) from err
-
-    @process_lock
-    def execute_command(self, command: str) -> Awaitable[CommandReturn]:
+    @Job(
+        name="docker_interface_execute_command",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def execute_command(self, command: str) -> CommandReturn:
         """Create a temporary container and run command."""
-        return self.sys_run_in_executor(self._execute_command, command)
-
-    def _execute_command(self, command: str) -> CommandReturn:
-        """Create a temporary container and run command.
-
-        Need run inside executor.
-        """
         raise NotImplementedError()

-    def stats(self) -> Awaitable[DockerStats]:
+    async def stats(self) -> DockerStats:
         """Read and return stats from container."""
-        return self.sys_run_in_executor(self._stats)
+        stats = await self.sys_run_in_executor(
+            self.sys_docker.container_stats, self.name
+        )
+        return DockerStats(stats)
-    def _stats(self) -> DockerStats:
-        """Create a temporary container and run command.
-
-        Need run inside executor.
-        """
-        try:
-            docker_container = self.sys_docker.containers.get(self.name)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError() from err
-
-        # container is not running
-        if docker_container.status != "running":
-            raise DockerError(f"Container {self.name} is not running", _LOGGER.error)
-
-        try:
-            stats = docker_container.stats(stream=False)
-            return DockerStats(stats)
-        except (docker.errors.DockerException, requests.RequestException) as err:
-            raise DockerError(
-                f"Can't read stats from {self.name}: {err}", _LOGGER.error
-            ) from err
-
-    def is_failed(self) -> Awaitable[bool]:
-        """Return True if Docker is failing state.
-
-        Return a Future.
-        """
-        return self.sys_run_in_executor(self._is_failed)
-
-    def _is_failed(self) -> bool:
-        """Return True if Docker is failing state.
-
-        Need run inside executor.
-        """
+    async def is_failed(self) -> bool:
+        """Return True if Docker is failing state."""
         try:
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.containers.get, self.name
+            )
         except docker.errors.NotFound:
             return False
         except (docker.errors.DockerException, requests.RequestException) as err:
@@ -677,18 +529,13 @@ class DockerInterface(CoreSysAttributes):
# Check return value # Check return value
return int(docker_container.attrs["State"]["ExitCode"]) != 0 return int(docker_container.attrs["State"]["ExitCode"]) != 0
def get_latest_version(self) -> Awaitable[AwesomeVersion]: async def get_latest_version(self) -> AwesomeVersion:
"""Return latest version of local image.""" """Return latest version of local image."""
return self.sys_run_in_executor(self._get_latest_version)
def _get_latest_version(self) -> AwesomeVersion:
"""Return latest version of local image.
Need run inside executor.
"""
available_version: list[AwesomeVersion] = [] available_version: list[AwesomeVersion] = []
try: try:
for image in self.sys_docker.images.list(self.image): for image in await self.sys_run_in_executor(
self.sys_docker.images.list, self.image
):
for tag in image.tags: for tag in image.tags:
version = AwesomeVersion(tag.partition(":")[2]) version = AwesomeVersion(tag.partition(":")[2])
if version.strategy == AwesomeVersionStrategy.UNKNOWN: if version.strategy == AwesomeVersionStrategy.UNKNOWN:
@@ -713,51 +560,36 @@ class DockerInterface(CoreSysAttributes):
available_version.sort(reverse=True) available_version.sort(reverse=True)
return available_version[0] return available_version[0]
@process_lock @Job(
name="docker_interface_run_inside",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
def run_inside(self, command: str) -> Awaitable[CommandReturn]: def run_inside(self, command: str) -> Awaitable[CommandReturn]:
"""Execute a command inside Docker container.""" """Execute a command inside Docker container."""
return self.sys_run_in_executor(self._run_inside, command) return self.sys_run_in_executor(
self.sys_docker.container_run_inside, self.name, command
)
def _run_inside(self, command: str) -> CommandReturn: async def _validate_trust(
"""Execute a command inside Docker container.
Need run inside executor.
"""
try:
docker_container = self.sys_docker.containers.get(self.name)
except docker.errors.NotFound:
raise DockerNotFound() from None
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
# Execute
try:
code, output = docker_container.exec_run(command)
except (docker.errors.DockerException, requests.RequestException) as err:
raise DockerError() from err
return CommandReturn(code, output)
def _validate_trust(
self, image_id: str, image: str, version: AwesomeVersion self, image_id: str, image: str, version: AwesomeVersion
) -> None: ) -> None:
"""Validate trust of content.""" """Validate trust of content."""
checksum = image_id.partition(":")[2] checksum = image_id.partition(":")[2]
job = asyncio.run_coroutine_threadsafe( return await self.sys_security.verify_own_content(checksum)
self.sys_security.verify_own_content(checksum), self.sys_loop
)
job.result()
@process_lock @Job(
def check_trust(self) -> Awaitable[None]: name="docker_interface_check_trust",
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=DockerJobError,
)
async def check_trust(self) -> None:
"""Check trust of exists Docker image.""" """Check trust of exists Docker image."""
return self.sys_run_in_executor(self._check_trust)
def _check_trust(self) -> None:
"""Check trust of current image."""
try: try:
image = self.sys_docker.images.get(f"{self.image}:{self.version!s}") image = await self.sys_run_in_executor(
self.sys_docker.images.get, f"{self.image}:{self.version!s}"
)
except (docker.errors.DockerException, requests.RequestException): except (docker.errors.DockerException, requests.RequestException):
return return
self._validate_trust(image.id, self.image, self.version) await self._validate_trust(image.id, self.image, self.version)
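
The pattern repeated throughout this file: the old `@process_lock` plus private `_method` pair (a sync body shipped wholesale to the executor) becomes a single `@Job(...)`-decorated method that awaits only the blocking docker-py call via `sys_run_in_executor`, with the Docker plumbing moved into `DockerAPI`. A minimal, self-contained sketch of the shape of that refactor — the `job_group_once` decorator here is an `asyncio.Lock` approximation invented for illustration, not Supervisor's actual `Job` implementation:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_executor = ThreadPoolExecutor()

def job_group_once(func):
    """Reject concurrent calls on the same instance, roughly like GROUP_ONCE."""
    lock_attr = f"_{func.__name__}_lock"

    @wraps(func)
    async def wrapper(self, *args, **kwargs):
        lock = getattr(self, lock_attr, None)
        if lock is None:
            lock = asyncio.Lock()
            setattr(self, lock_attr, lock)
        if lock.locked():
            raise RuntimeError(f"{func.__name__} already running")
        async with lock:
            return await func(self, *args, **kwargs)

    return wrapper

class Interface:
    name = "homeassistant"
    timeout = 10

    def _blocking_restart(self, name: str, timeout: int) -> None:
        # Stands in for the docker-py call done in DockerAPI.restart_container
        print(f"docker restart {name} (timeout={timeout}s)")

    @job_group_once
    async def restart(self) -> None:
        # Only the blocking call is offloaded; the method itself stays async
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(
            _executor, self._blocking_restart, self.name, self.timeout
        )

asyncio.run(Interface().restart())
```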

View File

@@ -11,8 +11,9 @@ from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors
from docker.api.client import APIClient
from docker.client import DockerClient
+from docker.errors import DockerException, ImageNotFound, NotFound
from docker.models.containers import Container, ContainerCollection
-from docker.models.images import ImageCollection
+from docker.models.images import Image, ImageCollection
from docker.models.networks import Network
from docker.types.daemon import CancellableStream
import requests
@@ -351,3 +352,224 @@ class DockerAPI:
        with suppress(docker_errors.DockerException, requests.RequestException):
            network.disconnect(data.get("Name", cid), force=True)
def container_is_initialized(
self, name: str, image: str, version: AwesomeVersion
) -> bool:
"""Return True if docker container exists in good state and is built from expected image."""
try:
docker_container = self.containers.get(name)
docker_image = self.images.get(f"{image}:{version}")
except NotFound:
return False
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
# Check the image is correct and state is good
return (
docker_container.image.id == docker_image.id
and docker_container.status in ("exited", "running", "created")
)
def stop_container(
self, name: str, timeout: int, remove_container: bool = True
) -> None:
"""Stop/remove Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
if docker_container.status == "running":
_LOGGER.info("Stopping %s application", name)
with suppress(DockerException, requests.RequestException):
docker_container.stop(timeout=timeout)
if remove_container:
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True)
def start_container(self, name: str) -> None:
"""Start Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound(
f"{name} not found for starting up", _LOGGER.error
) from None
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Could not get {name} for starting up", _LOGGER.error
) from err
_LOGGER.info("Starting %s", name)
try:
docker_container.start()
except (DockerException, requests.RequestException) as err:
raise DockerError(f"Can't start {name}: {err}", _LOGGER.error) from err
def restart_container(self, name: str, timeout: int) -> None:
"""Restart docker container."""
try:
container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
_LOGGER.info("Restarting %s", name)
try:
container.restart(timeout=timeout)
except (DockerException, requests.RequestException) as err:
raise DockerError(f"Can't restart {name}: {err}", _LOGGER.warning) from err
def container_logs(self, name: str, tail: int = 100) -> bytes:
"""Return Docker logs of container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
try:
return docker_container.logs(tail=tail, stdout=True, stderr=True)
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't grep logs from {name}: {err}", _LOGGER.warning
) from err
def container_stats(self, name: str) -> dict[str, Any]:
"""Read and return stats from container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
# container is not running
if docker_container.status != "running":
raise DockerError(f"Container {name} is not running", _LOGGER.error)
try:
return docker_container.stats(stream=False)
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't read stats from {name}: {err}", _LOGGER.error
) from err
def container_run_inside(self, name: str, command: str) -> CommandReturn:
"""Execute a command inside Docker container."""
try:
docker_container: Container = self.containers.get(name)
except NotFound:
raise DockerNotFound() from None
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
# Execute
try:
code, output = docker_container.exec_run(command)
except (DockerException, requests.RequestException) as err:
raise DockerError() from err
return CommandReturn(code, output)
def remove_image(
self, image: str, version: AwesomeVersion, latest: bool = True
) -> None:
"""Remove a Docker image by version and latest."""
try:
if latest:
_LOGGER.info("Removing image %s with latest", image)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:latest", force=True)
_LOGGER.info("Removing image %s with %s", image, version)
with suppress(ImageNotFound):
self.images.remove(image=f"{image}:{version!s}", force=True)
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't remove image {image}: {err}", _LOGGER.warning
) from err
def import_image(self, tar_file: Path) -> Image | None:
"""Import a tar file as image."""
try:
with tar_file.open("rb") as read_tar:
docker_image_list: list[Image] = self.images.load(read_tar)
if len(docker_image_list) != 1:
_LOGGER.warning(
"Unexpected image count %d while importing image from tar",
len(docker_image_list),
)
return None
return docker_image_list[0]
except (DockerException, OSError) as err:
raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error
) from err
def export_image(self, image: str, version: AwesomeVersion, tar_file: Path) -> None:
"""Export current images into a tar file."""
try:
image = self.api.get_image(f"{image}:{version}")
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't fetch image {image}: {err}", _LOGGER.error
) from err
_LOGGER.info("Export image %s to %s", image, tar_file)
try:
with tar_file.open("wb") as write_tar:
for chunk in image:
write_tar.write(chunk)
except (OSError, requests.RequestException) as err:
raise DockerError(
f"Can't write tar file {tar_file}: {err}", _LOGGER.error
) from err
_LOGGER.info("Export image %s done", image)
def cleanup_old_images(
self,
current_image: str,
current_version: AwesomeVersion,
old_images: set[str] | None = None,
) -> None:
"""Clean up old versions of an image."""
try:
current: Image = self.images.get(f"{current_image}:{current_version!s}")
except ImageNotFound:
raise DockerNotFound(
f"{current_image} not found for cleanup", _LOGGER.warning
) from None
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Can't get {current_image} for cleanup", _LOGGER.warning
) from err
# Cleanup old and current
image_names = list(
old_images | {current_image} if old_images else {current_image}
)
try:
images_list = self.images.list(name=image_names)
except (DockerException, requests.RequestException) as err:
raise DockerError(
f"Corrupt docker overlayfs found: {err}", _LOGGER.warning
) from err
for image in images_list:
if current.id == image.id:
continue
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleanup images: %s", image.tags)
self.images.remove(image.id, force=True)
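
The new `cleanup_old_images` keeps every tag that resolves to the current image's id and removes everything else listed under the watched names. A toy, Docker-free simulation of just that selection rule (image ids and names below are invented for illustration):

```python
# Each entry mimics a docker-py Image: an id plus its tags.
images = [
    {"id": "sha256:aaa", "tags": ["example/supervisor:2024.01.0"]},  # current
    {"id": "sha256:bbb", "tags": ["example/supervisor:2023.12.1"]},  # old version
    {"id": "sha256:ccc", "tags": ["example/old-name:2023.11.0"]},    # renamed image
]

def select_removals(images: list[dict], current_id: str) -> list[dict]:
    """Everything not sharing the current image's id gets cleaned up."""
    return [img for img in images if img["id"] != current_id]

for img in select_removals(images, current_id="sha256:aaa"):
    print("Cleanup images:", img["tags"])
```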

View File

@@ -2,6 +2,9 @@
import logging

from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
from .const import ENV_TIME, Capabilities
from .interface import DockerInterface
@@ -24,24 +27,18 @@ class DockerMulticast(DockerInterface, CoreSysAttributes):
        return MULTICAST_DOCKER_NAME

    @property
-    def capabilities(self) -> list[str]:
+    def capabilities(self) -> list[Capabilities]:
        """Generate needed capabilities."""
-        return [Capabilities.NET_ADMIN.value]
+        return [Capabilities.NET_ADMIN]

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_multicast_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
            tag=str(self.sys_plugins.multicast.version),
            init=False,
            name=self.name,
@@ -53,8 +50,6 @@ class DockerMulticast(DockerInterface, CoreSysAttributes):
            extra_hosts={"supervisor": self.sys_docker.network.supervisor},
            environment={ENV_TIME: self.sys_timezone},
        )
-        self._meta = docker_container.attrs
-
        _LOGGER.info(
            "Starting Multicast %s with version %s - Host", self.image, self.version
        )

View File

@@ -21,7 +21,7 @@ class DockerNetwork:
    def __init__(self, docker_client: docker.DockerClient):
        """Initialize internal Supervisor network."""
        self.docker: docker.DockerClient = docker_client
-        self.network: docker.models.networks.Network = self._get_network()
+        self._network: docker.models.networks.Network = self._get_network()

    @property
    def name(self) -> str:
@@ -29,18 +29,14 @@ class DockerNetwork:
        return DOCKER_NETWORK

    @property
-    def containers(self) -> list[docker.models.containers.Container]:
-        """Return of connected containers from network."""
-        containers: list[docker.models.containers.Container] = []
-        for cid, _ in self.network.attrs.get("Containers", {}).items():
-            try:
-                containers.append(self.docker.containers.get(cid))
-            except docker.errors.NotFound:
-                _LOGGER.warning("Docker network is corrupt! %s", cid)
-            except (docker.errors.DockerException, requests.RequestException) as err:
-                _LOGGER.error("Unknown error with container lookup %s", err)
-        return containers
+    def network(self) -> docker.models.networks.Network:
+        """Return docker network."""
+        return self._network
+
+    @property
+    def containers(self) -> list[str]:
+        """Return of connected containers from network."""
+        return list(self.network.attrs.get("Containers", {}).keys())

    @property
    def gateway(self) -> IPv4Address:
View File

@@ -3,6 +3,9 @@ import logging

from ..const import DOCKER_NETWORK_MASK
from ..coresys import CoreSysAttributes
+from ..exceptions import DockerJobError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
from .const import ENV_TIME, ENV_TOKEN, MOUNT_DOCKER, RestartPolicy
from .interface import DockerInterface
@@ -25,20 +28,14 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
        """Return name of Docker container."""
        return OBSERVER_DOCKER_NAME

-    def _run(self) -> None:
-        """Run Docker image.
-
-        Need run inside executor.
-        """
-        if self._is_running():
-            return
-
-        # Cleanup
-        self._stop()
-
-        # Create & Run container
-        docker_container = self.sys_docker.run(
-            self.image,
+    @Job(
+        name="docker_observer_run",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=DockerJobError,
+    )
+    async def run(self) -> None:
+        """Run Docker image."""
+        await self._run(
            tag=str(self.sys_plugins.observer.version),
            init=False,
            ipv4=self.sys_docker.network.observer,
@@ -46,7 +43,7 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
            hostname=self.name.replace("_", "-"),
            detach=True,
            security_opt=self.security_opt,
-            restart_policy={"Name": RestartPolicy.ALWAYS.value},
+            restart_policy={"Name": RestartPolicy.ALWAYS},
            extra_hosts={"supervisor": self.sys_docker.network.supervisor},
            environment={
                ENV_TIME: self.sys_timezone,
@@ -57,8 +54,6 @@ class DockerObserver(DockerInterface, CoreSysAttributes):
            ports={"80/tcp": 4357},
            oom_score_adj=-300,
        )
-        self._meta = docker_container.attrs
-
        _LOGGER.info(
            "Starting Observer %s with version %s - %s",
            self.image,

View File

@@ -8,15 +8,16 @@ from awesomeversion.awesomeversion import AwesomeVersion
import docker
import requests

-from ..coresys import CoreSysAttributes
from ..exceptions import DockerError
+from ..jobs.const import JobExecutionLimit
+from ..jobs.decorator import Job
from .const import PropagationMode
from .interface import DockerInterface

_LOGGER: logging.Logger = logging.getLogger(__name__)

-class DockerSupervisor(DockerInterface, CoreSysAttributes):
+class DockerSupervisor(DockerInterface):
    """Docker Supervisor wrapper for Supervisor."""

    @property
@@ -38,20 +39,20 @@ class DockerSupervisor(DockerInterface, CoreSysAttributes):
    def host_mounts_available(self) -> bool:
        """Return True if container can see mounts on host within its data directory."""
        return self._meta and any(
-            mount.get("Propagation") == PropagationMode.SLAVE.value
+            mount.get("Propagation") == PropagationMode.SLAVE
            for mount in self.meta_mounts
            if mount.get("Destination") == "/data"
        )

-    def _attach(
-        self, version: AwesomeVersion, skip_state_event_if_down: bool = False
+    @Job(name="docker_supervisor_attach", limit=JobExecutionLimit.GROUP_WAIT)
+    async def attach(
+        self, version: AwesomeVersion, *, skip_state_event_if_down: bool = False
    ) -> None:
-        """Attach to running docker container.
-
-        Need run inside executor.
-        """
+        """Attach to running docker container."""
        try:
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = await self.sys_run_in_executor(
+                self.sys_docker.containers.get, self.name
+            )
        except (docker.errors.DockerException, requests.RequestException) as err:
            raise DockerError() from err
@@ -63,17 +64,19 @@ class DockerSupervisor(DockerInterface, CoreSysAttributes):
        )

        # If already attach
-        if docker_container in self.sys_docker.network.containers:
+        if docker_container.id in self.sys_docker.network.containers:
            return

        # Attach to network
        _LOGGER.info("Connecting Supervisor to hassio-network")
-        self.sys_docker.network.attach_container(
+        await self.sys_run_in_executor(
+            self.sys_docker.network.attach_container,
            docker_container,
            alias=["supervisor"],
            ipv4=self.sys_docker.network.supervisor,
        )

+    @Job(name="docker_supervisor_retag", limit=JobExecutionLimit.GROUP_WAIT)
    def retag(self) -> Awaitable[None]:
        """Retag latest image to version."""
        return self.sys_run_in_executor(self._retag)
@@ -93,6 +96,7 @@ class DockerSupervisor(DockerInterface, CoreSysAttributes):
                f"Can't retag Supervisor version: {err}", _LOGGER.error
            ) from err

+    @Job(name="docker_supervisor_update_start_tag", limit=JobExecutionLimit.GROUP_WAIT)
    def update_start_tag(self, image: str, version: AwesomeVersion) -> Awaitable[None]:
        """Update start tag to new version."""
        return self.sys_run_in_executor(self._update_start_tag, image, version)
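
Note the Supervisor wrapper uses `GROUP_WAIT` where the other Docker interfaces above use `GROUP_ONCE`. As I read the jobs package, `GROUP_ONCE` rejects a second concurrent call on the same job group while `GROUP_WAIT` queues it behind the running one; a rough asyncio approximation of that difference (hedged — the real semantics live in `supervisor/jobs`, and this `Group` class is invented for illustration):

```python
import asyncio

class Group:
    """Stand-in for a job group supporting two limit strategies."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()

    async def group_once(self, coro_fn):
        # GROUP_ONCE (approx.): a second concurrent call errors out
        if self._lock.locked():
            raise RuntimeError("job already running in this group")
        async with self._lock:
            return await coro_fn()

    async def group_wait(self, coro_fn):
        # GROUP_WAIT (approx.): a second concurrent call waits its turn
        async with self._lock:
            return await coro_fn()

async def work() -> str:
    await asyncio.sleep(0.1)
    return "done"

async def main() -> None:
    group = Group()
    first, second = await asyncio.gather(
        group.group_once(work),
        group.group_wait(work),  # queues behind the first instead of raising
    )
    print(first, second)

asyncio.run(main())
```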

View File

@@ -36,6 +36,22 @@ class JobConditionException(JobException):
    """Exception happening for job conditions."""

+class JobStartException(JobException):
+    """Exception occurred starting a job on in current asyncio task."""
+
+class JobNotFound(JobException):
+    """Exception for job not found."""
+
+class JobInvalidUpdate(JobException):
+    """Exception for invalid update to a job."""
+
+class JobGroupExecutionLimitExceeded(JobException):
+    """Exception when job group execution limit exceeded."""

# HomeAssistant
@@ -51,6 +67,10 @@ class HomeAssistantCrashError(HomeAssistantError):
    """Error on crash of a Home Assistant startup."""

+class HomeAssistantStartupTimeout(HomeAssistantCrashError):
+    """Timeout waiting for Home Assistant successful startup."""

class HomeAssistantAPIError(HomeAssistantError):
    """Home Assistant API exception."""
@@ -315,6 +335,10 @@ class DBusNotConnectedError(HostNotSupportedError):
    """D-Bus is not connected and call a method."""

+class DBusServiceUnkownError(HassioNotSupportedError):
+    """D-Bus service was not available."""

class DBusInterfaceError(HassioNotSupportedError):
    """D-Bus interface not connected."""
@@ -343,6 +367,10 @@ class DBusTimeoutError(DBusError):
    """D-Bus call timed out."""

+class DBusNoReplyError(DBusError):
+    """D-Bus remote didn't reply/disconnected."""

class DBusFatalError(DBusError):
    """D-Bus call going wrong.
@@ -478,6 +506,10 @@ class DockerNotFound(DockerError):
    """Docker object don't Exists."""

+class DockerJobError(DockerError, JobException):
+    """Error executing docker job."""

# Hardware
@@ -561,6 +593,14 @@ class HomeAssistantBackupError(BackupError, HomeAssistantError):
    """Raise if an error during Home Assistant Core backup is happening."""

+class BackupInvalidError(BackupError):
+    """Raise if backup or password provided is invalid."""
+
+class BackupJobError(BackupError, JobException):
+    """Raise on Backup job error."""

# Security
@@ -593,3 +633,10 @@ class MountNotFound(MountError):
class MountJobError(MountError, JobException):
    """Raise on Mount job error."""

+# Network
+class NetworkInterfaceNotFound(HassioError):
+    """Raise on network interface not found."""

View File

@@ -1,8 +1,8 @@
"""Constants for hardware."""
-from enum import Enum
+from enum import StrEnum

-class UdevSubsystem(str, Enum):
+class UdevSubsystem(StrEnum):
    """Udev subsystem class."""

    SERIAL = "tty"
@@ -24,7 +24,7 @@ class UdevSubsystem(str, Enum):
    RPI_H264MEM = "rpivid-h264mem"

-class PolicyGroup(str, Enum):
+class PolicyGroup(StrEnum):
    """Policy groups backend."""

    UART = "uart"
@@ -35,14 +35,14 @@ class PolicyGroup(str, Enum):
    BLUETOOTH = "bluetooth"

-class HardwareAction(str, Enum):
+class HardwareAction(StrEnum):
    """Hardware device action."""

    ADD = "add"
    REMOVE = "remove"

-class UdevKernelAction(str, Enum):
+class UdevKernelAction(StrEnum):
    """Udev kernel device action."""

    ADD = "add"

View File

@@ -1,5 +1,5 @@
"""Read hardware info from system."""
-from datetime import datetime
+from datetime import UTC, datetime
import logging
from pathlib import Path
import re
@@ -55,7 +55,7 @@ class HwHelper(CoreSysAttributes):
            _LOGGER.error("Can't found last boot time!")
            return None

-        return datetime.utcfromtimestamp(int(found.group(1)))
+        return datetime.fromtimestamp(int(found.group(1)), UTC)

    def hide_virtual_device(self, udev_device: pyudev.Device) -> bool:
        """Small helper to hide not needed Devices."""

View File

@@ -94,7 +94,7 @@ class HardwareManager(CoreSysAttributes):
        udev_device: pyudev.Device = pyudev.Devices.from_sys_path(
            self._udev, str(device.sysfs)
        )
-        return udev_device.find_parent(subsystem.value) is not None
+        return udev_device.find_parent(subsystem) is not None

    def _import_devices(self) -> None:
        """Import fresh from udev database."""

View File

@@ -7,16 +7,19 @@ from typing import Any, AsyncContextManager
import aiohttp
from aiohttp import hdrs
+from awesomeversion import AwesomeVersion

from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import HomeAssistantAPIError, HomeAssistantAuthError
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
-from ..utils import check_port
+from ..utils import check_port, version_is_new_enough
from .const import LANDINGPAGE

_LOGGER: logging.Logger = logging.getLogger(__name__)

+GET_CORE_STATE_MIN_VERSION: AwesomeVersion = AwesomeVersion("2023.8.0.dev20230720")

class HomeAssistantAPI(CoreSysAttributes):
    """Home Assistant core object for handle it."""
@@ -29,7 +32,11 @@ class HomeAssistantAPI(CoreSysAttributes):
        self.access_token: str | None = None
        self._access_token_expires: datetime | None = None

-    @Job(limit=JobExecutionLimit.SINGLE_WAIT)
+    @Job(
+        name="home_assistant_api_ensure_access_token",
+        limit=JobExecutionLimit.SINGLE_WAIT,
+        internal=True,
+    )
    async def ensure_access_token(self) -> None:
        """Ensure there is an access token."""
        if (
@@ -100,42 +107,64 @@ class HomeAssistantAPI(CoreSysAttributes):
                    continue
                yield resp
                return
-            except (asyncio.TimeoutError, aiohttp.ClientError) as err:
+            except (TimeoutError, aiohttp.ClientError) as err:
                _LOGGER.error("Error on call %s: %s", url, err)
                break

        raise HomeAssistantAPIError()

-    async def get_config(self) -> dict[str, Any]:
-        """Return Home Assistant config."""
-        async with self.make_request("get", "api/config") as resp:
+    async def _get_json(self, path: str) -> dict[str, Any]:
+        """Return Home Assistant get API."""
+        async with self.make_request("get", path) as resp:
            if resp.status in (200, 201):
                return await resp.json()
            else:
                _LOGGER.debug("Home Assistant API return: %d", resp.status)
        raise HomeAssistantAPIError()

-    async def check_api_state(self) -> bool:
-        """Return True if Home Assistant up and running."""
+    async def get_config(self) -> dict[str, Any]:
+        """Return Home Assistant config."""
+        return await self._get_json("api/config")
+
+    async def get_core_state(self) -> dict[str, Any]:
+        """Return Home Assistant core state."""
+        return await self._get_json("api/core/state")
+
+    async def get_api_state(self) -> str | None:
+        """Return state of Home Assistant Core or None."""
        # Skip check on landingpage
        if (
            self.sys_homeassistant.version is None
            or self.sys_homeassistant.version == LANDINGPAGE
        ):
-            return False
+            return None

        # Check if port is up
-        if not await self.sys_run_in_executor(
-            check_port,
+        if not await check_port(
            self.sys_homeassistant.ip_address,
            self.sys_homeassistant.api_port,
        ):
-            return False
+            return None

        # Check if API is up
        with suppress(HomeAssistantAPIError):
-            data = await self.get_config()
+            # get_core_state is available since 2023.8.0 and preferred
+            # since it is significantly faster than get_config because
+            # it does not require serializing the entire config
+            if version_is_new_enough(
+                self.sys_homeassistant.version, GET_CORE_STATE_MIN_VERSION
+            ):
+                data = await self.get_core_state()
+            else:
+                data = await self.get_config()
            # Older versions of home assistant does not expose the state
-            if data and data.get("state", "RUNNING") == "RUNNING":
-                return True
+            if data:
+                return data.get("state", "RUNNING")
+
+        return None
+
+    async def check_api_state(self) -> bool:
+        """Return Home Assistant Core state if up."""
+        if state := await self.get_api_state():
+            return state == "RUNNING"

        return False
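
The gist of the new state probe: prefer the cheap `api/core/state` endpoint on Core builds new enough to expose it, and fall back to serializing the full `api/config` payload otherwise. A hedged sketch of just the version gate, using `awesomeversion` (already a Supervisor dependency); `version_is_new_enough` presumably wraps a comparison like this plus error handling:

```python
from awesomeversion import AwesomeVersion

GET_CORE_STATE_MIN_VERSION = AwesomeVersion("2023.8.0.dev20230720")

def state_endpoint(core_version: AwesomeVersion) -> str:
    # Newer cores expose a lightweight state endpoint; older ones only
    # report state inside the much larger /api/config response.
    if core_version >= GET_CORE_STATE_MIN_VERSION:
        return "api/core/state"
    return "api/config"

print(state_endpoint(AwesomeVersion("2023.12.0")))  # api/core/state
print(state_endpoint(AwesomeVersion("2023.7.3")))   # api/config
```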

View File

@@ -1,6 +1,6 @@
"""Constants for homeassistant."""
from datetime import timedelta
-from enum import Enum
+from enum import StrEnum

from awesomeversion import AwesomeVersion
@@ -19,7 +19,7 @@ CLOSING_STATES = [
]

-class WSType(str, Enum):
+class WSType(StrEnum):
    """Websocket types."""

    AUTH = "auth"
@@ -28,12 +28,13 @@ class WSType(str, Enum):
    BACKUP_END = "backup/end"

-class WSEvent(str, Enum):
+class WSEvent(StrEnum):
    """Websocket events."""

    ADDON = "addon"
    HEALTH_CHANGED = "health_changed"
    ISSUE_CHANGED = "issue_changed"
    ISSUE_REMOVED = "issue_removed"
+    JOB = "job"
    SUPERVISOR_UPDATE = "supervisor_update"
    SUPPORTED_CHANGED = "supported_changed"

View File

@@ -2,16 +2,18 @@
import asyncio
from collections.abc import Awaitable
from contextlib import suppress
+from dataclasses import dataclass
+from datetime import datetime, timedelta
import logging
import re
import secrets
import shutil
+from typing import Final

-import attr
from awesomeversion import AwesomeVersion

from ..const import ATTR_HOMEASSISTANT, BusEvent
-from ..coresys import CoreSys, CoreSysAttributes
+from ..coresys import CoreSys
from ..docker.const import ContainerState
from ..docker.homeassistant import DockerHomeAssistant
from ..docker.monitor import DockerContainerStateEvent
@@ -21,12 +23,15 @@ from ..exceptions import (
    HomeAssistantCrashError,
    HomeAssistantError,
    HomeAssistantJobError,
+    HomeAssistantStartupTimeout,
    HomeAssistantUpdateError,
+    JobException,
)
-from ..jobs.const import JobExecutionLimit
+from ..jobs.const import JOB_GROUP_HOME_ASSISTANT_CORE, JobExecutionLimit
from ..jobs.decorator import Job, JobCondition
+from ..jobs.job_group import JobGroup
from ..resolution.const import ContextType, IssueType
-from ..utils import convert_to_ascii, process_lock
+from ..utils import convert_to_ascii
from ..utils.sentry import capture_exception
from .const import (
    LANDINGPAGE,
@@ -38,25 +43,29 @@ from .const import (

_LOGGER: logging.Logger = logging.getLogger(__name__)

+SECONDS_BETWEEN_API_CHECKS: Final[int] = 5
+# Core Stage 1 and some wiggle room
+STARTUP_API_RESPONSE_TIMEOUT: Final[timedelta] = timedelta(minutes=3)
+# All stages plus event start timeout and some wiggle rooom
+STARTUP_API_CHECK_RUNNING_TIMEOUT: Final[timedelta] = timedelta(minutes=15)

RE_YAML_ERROR = re.compile(r"homeassistant\.util\.yaml")

-@attr.s(frozen=True)
+@dataclass
class ConfigResult:
    """Return object from config check."""

-    valid = attr.ib()
-    log = attr.ib()
+    valid: bool
+    log: str

-class HomeAssistantCore(CoreSysAttributes):
+class HomeAssistantCore(JobGroup):
    """Home Assistant core object for handle it."""

    def __init__(self, coresys: CoreSys):
        """Initialize Home Assistant object."""
-        self.coresys: CoreSys = coresys
+        super().__init__(coresys, JOB_GROUP_HOME_ASSISTANT_CORE)
        self.instance: DockerHomeAssistant = DockerHomeAssistant(coresys)
-        self.lock: asyncio.Lock = asyncio.Lock()
        self._error_state: bool = False

    @property
@@ -95,9 +104,13 @@ class HomeAssistantCore(CoreSysAttributes):
            _LOGGER.info("Starting HomeAssistant landingpage")
            if not await self.instance.is_running():
                with suppress(HomeAssistantError):
-                    await self._start()
+                    await self.start()

-    @process_lock
+    @Job(
+        name="home_assistant_core_install_landing_page",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def install_landingpage(self) -> None:
        """Install a landing page."""
        # Try to use a preinstalled landingpage
@@ -116,7 +129,7 @@ class HomeAssistantCore(CoreSysAttributes):
        while True:
            if not self.sys_updater.image_homeassistant:
                _LOGGER.warning(
-                    "Found no information about Home Assistant. Retry in 30sec"
+                    "Found no information about Home Assistant. Retrying in 30sec"
                )
                await asyncio.sleep(30)
                await self.sys_updater.reload()
@@ -127,19 +140,23 @@ class HomeAssistantCore(CoreSysAttributes):
                    LANDINGPAGE, image=self.sys_updater.image_homeassistant
                )
                break
-            except DockerError:
+            except (DockerError, JobException):
                pass
            except Exception as err:  # pylint: disable=broad-except
                capture_exception(err)

-            _LOGGER.warning("Fails install landingpage, retry after 30sec")
+            _LOGGER.warning("Failed to install landingpage, retrying after 30sec")
            await asyncio.sleep(30)

        self.sys_homeassistant.version = LANDINGPAGE
        self.sys_homeassistant.image = self.sys_updater.image_homeassistant
        self.sys_homeassistant.save_data()

-    @process_lock
+    @Job(
+        name="home_assistant_core_install",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def install(self) -> None:
        """Install a landing page."""
        _LOGGER.info("Home Assistant setup")
@@ -155,12 +172,12 @@ class HomeAssistantCore(CoreSysAttributes):
                    image=self.sys_updater.image_homeassistant,
                )
                break
-            except DockerError:
+            except (DockerError, JobException):
                pass
            except Exception as err:  # pylint: disable=broad-except
                capture_exception(err)

-            _LOGGER.warning("Error on Home Assistant installation. Retry in 30sec")
+            _LOGGER.warning("Error on Home Assistant installation. Retrying in 30sec")
            await asyncio.sleep(30)

        _LOGGER.info("Home Assistant docker now installed")
@@ -171,7 +188,7 @@ class HomeAssistantCore(CoreSysAttributes):
        # finishing
        try:
            _LOGGER.info("Starting Home Assistant")
-            await self._start()
+            await self.start()
        except HomeAssistantError:
            _LOGGER.error("Can't start Home Assistant!")
@@ -179,8 +196,8 @@ class HomeAssistantCore(CoreSysAttributes):
        with suppress(DockerError):
            await self.instance.cleanup()

-    @process_lock
    @Job(
+        name="home_assistant_core_update",
        conditions=[
            JobCondition.FREE_SPACE,
            JobCondition.HEALTHY,
@@ -188,6 +205,7 @@ class HomeAssistantCore(CoreSysAttributes):
            JobCondition.PLUGINS_UPDATED,
            JobCondition.SUPERVISOR_UPDATED,
        ],
+        limit=JobExecutionLimit.GROUP_ONCE,
        on_condition=HomeAssistantJobError,
    )
    async def update(
@@ -231,7 +249,7 @@ class HomeAssistantCore(CoreSysAttributes):
            self.sys_homeassistant.image = self.sys_updater.image_homeassistant

            if running:
-                await self._start()
+                await self.start()
            _LOGGER.info("Successfully started Home Assistant %s", to_version)

            # Successfull - last step
@@ -281,23 +299,11 @@ class HomeAssistantCore(CoreSysAttributes):
        self.sys_resolution.create_issue(IssueType.UPDATE_FAILED, ContextType.CORE)
        raise HomeAssistantUpdateError()

-    async def _start(self) -> None:
-        """Start Home Assistant Docker & wait."""
-        # Create new API token
-        self.sys_homeassistant.supervisor_token = secrets.token_hex(56)
-        self.sys_homeassistant.save_data()
-
-        # Write audio settings
-        self.sys_homeassistant.write_pulse()
-
-        try:
-            await self.instance.run()
-        except DockerError as err:
-            raise HomeAssistantError() from err
-
-        await self._block_till_run(self.sys_homeassistant.version)
-
-    @process_lock
+    @Job(
+        name="home_assistant_core_start",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def start(self) -> None:
        """Run Home Assistant docker."""
        if await self.instance.is_running():
@@ -314,20 +320,37 @@ class HomeAssistantCore(CoreSysAttributes):
            await self._block_till_run(self.sys_homeassistant.version)
        # No Instance/Container found, extended start
        else:
-            await self._start()
+            # Create new API token
+            self.sys_homeassistant.supervisor_token = secrets.token_hex(56)
+            self.sys_homeassistant.save_data()

-    @process_lock
+            # Write audio settings
+            self.sys_homeassistant.write_pulse()
+
+            try:
+                await self.instance.run()
+            except DockerError as err:
+                raise HomeAssistantError() from err
+
+            await self._block_till_run(self.sys_homeassistant.version)
+
+    @Job(
+        name="home_assistant_core_stop",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def stop(self) -> None:
-        """Stop Home Assistant Docker.
-
-        Return a coroutine.
-        """
+        """Stop Home Assistant Docker."""
        try:
            return await self.instance.stop(remove_container=False)
        except DockerError as err:
            raise HomeAssistantError() from err

-    @process_lock
+    @Job(
+        name="home_assistant_core_restart",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def restart(self) -> None:
        """Restart Home Assistant Docker."""
        try:
@@ -337,12 +360,16 @@ class HomeAssistantCore(CoreSysAttributes):

        await self._block_till_run(self.sys_homeassistant.version)

-    @process_lock
+    @Job(
+        name="home_assistant_core_rebuild",
+        limit=JobExecutionLimit.GROUP_ONCE,
+        on_condition=HomeAssistantJobError,
+    )
    async def rebuild(self) -> None:
        """Rebuild Home Assistant Docker container."""
        with suppress(DockerError):
            await self.instance.stop()
-        await self._start()
+        await self.start()

    def logs(self) -> Awaitable[bytes]:
        """Get HomeAssistant docker logs.
@@ -359,10 +386,7 @@ class HomeAssistantCore(CoreSysAttributes):
        return self.instance.check_trust()

    async def stats(self) -> DockerStats:
-        """Return stats of Home Assistant.
-
-        Return a coroutine.
-        """
+        """Return stats of Home Assistant."""
        try:
            return await self.instance.stats()
        except DockerError as err:
@@ -385,13 +409,16 @@ class HomeAssistantCore(CoreSysAttributes):
    @property
    def in_progress(self) -> bool:
        """Return True if a task is in progress."""
-        return self.instance.in_progress or self.lock.locked()
+        return self.instance.in_progress or self.active_job

    async def check_config(self) -> ConfigResult:
        """Run Home Assistant config check."""
-        result = await self.instance.execute_command(
-            "python3 -m homeassistant -c /config --script check_config"
-        )
+        try:
+            result = await self.instance.execute_command(
+                "python3 -m homeassistant -c /config --script check_config"
+            )
+        except DockerError as err:
+            raise HomeAssistantError() from err

        # If not valid
        if result.exit_code is None:
@@ -416,28 +443,46 @@ class HomeAssistantCore(CoreSysAttributes):
            return

        _LOGGER.info("Wait until Home Assistant is ready")
-        while True:
-            await asyncio.sleep(5)
+        deadline = datetime.now() + STARTUP_API_RESPONSE_TIMEOUT
+        last_state = None
+        while not (timeout := datetime.now() >= deadline):
+            await asyncio.sleep(SECONDS_BETWEEN_API_CHECKS)

            # 1: Check if Container is is_running
            if not await self.instance.is_running():
                _LOGGER.error("Home Assistant has crashed!")
                break

-            # 2: Check if API response
-            if await self.sys_homeassistant.api.check_api_state():
-                _LOGGER.info("Detect a running Home Assistant instance")
-                self._error_state = False
-                return
+            # 2: Check API response
+            if state := await self.sys_homeassistant.api.get_api_state():
+                if last_state is None:
+                    # API initially available, move deadline up and check API
+                    # state to be running now
+                    deadline = datetime.now() + STARTUP_API_CHECK_RUNNING_TIMEOUT
+                if last_state != state:
+                    _LOGGER.info("Home Assistant Core state changed to %s", state)
+                    last_state = state
+
+                if state == "RUNNING":
+                    _LOGGER.info("Detect a running Home Assistant instance")
+                    self._error_state = False
+                    return

        self._error_state = True
+        if timeout:
+            raise HomeAssistantStartupTimeout(
+                "No Home Assistant Core response, assuming a fatal startup error",
+                _LOGGER.error,
+            )
        raise HomeAssistantCrashError()

    @Job(
+        name="home_assistant_core_repair",
        conditions=[
            JobCondition.FREE_SPACE,
            JobCondition.INTERNET_HOST,
-        ]
+        ],
    )
    async def repair(self):
        """Repair local Home Assistant data."""
@@ -459,6 +504,7 @@ class HomeAssistantCore(CoreSysAttributes):
            await self._restart_after_problem(event.state)

    @Job(
+        name="home_assistant_core_restart_after_problem",
        limit=JobExecutionLimit.THROTTLE_RATE_LIMIT,
        throttle_period=WATCHDOG_THROTTLE_PERIOD,
        throttle_max_calls=WATCHDOG_THROTTLE_MAX_CALLS,
@@ -470,7 +516,7 @@ class HomeAssistantCore(CoreSysAttributes):
        # Don't interrupt a task in progress or if rollback is handling it
        if not (self.in_progress or self.error_state):
            _LOGGER.warning(
-                "Watchdog found Home Assistant %s, restarting...", state.value
+                "Watchdog found Home Assistant %s, restarting...", state
            )
            if state == ContainerState.FAILED and attempts == 0:
                try:
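
The reworked `_block_till_run` replaces the unbounded `while True` poll with two deadlines: a short one for the API to answer at all, then a longer one for the reported state to reach `RUNNING`. A compressed, runnable model of that control flow (timeouts shrunk from minutes to seconds, and the fake state sequence stands in for a booting Core):

```python
import asyncio
from datetime import datetime, timedelta

STARTUP_API_RESPONSE_TIMEOUT = timedelta(seconds=2)
STARTUP_API_CHECK_RUNNING_TIMEOUT = timedelta(seconds=10)
SECONDS_BETWEEN_API_CHECKS = 0.1

async def block_till_run(get_api_state) -> None:
    deadline = datetime.now() + STARTUP_API_RESPONSE_TIMEOUT
    last_state = None
    while not (timeout := datetime.now() >= deadline):
        await asyncio.sleep(SECONDS_BETWEEN_API_CHECKS)
        if state := await get_api_state():
            if last_state is None:
                # First API response: switch to the longer RUNNING deadline
                deadline = datetime.now() + STARTUP_API_CHECK_RUNNING_TIMEOUT
            if last_state != state:
                print("state changed to", state)
                last_state = state
            if state == "RUNNING":
                return
    if timeout:
        raise TimeoutError("no response/RUNNING state before deadline")

# Simulated startup: no API, then NOT_RUNNING, then RUNNING
states = iter([None, None, "NOT_RUNNING", "NOT_RUNNING", "RUNNING"])

async def fake_state():
    return next(states, "RUNNING")

asyncio.run(block_till_run(fake_state))
```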
