Compare commits

...

99 Commits

Author SHA1 Message Date
dependabot[bot]
52d4bc660e Bump codecov/codecov-action from 4.0.2 to 4.1.0 (#4927)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4.0.2 to 4.1.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4.0.2...v4.1.0)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-27 09:59:01 +01:00
dependabot[bot]
8884696a6c Bump actions/download-artifact from 4.1.2 to 4.1.3 (#4926)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.2 to 4.1.3.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4.1.2...v4.1.3)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-27 09:58:21 +01:00
dependabot[bot]
d493ccde28 Bump pytest from 7.4.4 to 8.0.1 (#4901)
* Bump pytest from 7.4.4 to 8.0.1

Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.4.4 to 8.0.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.4.4...8.0.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update pytest-asyncio to 0.23.5

* Set scope to function on fixture

* Unthrottle by patching last call to prevent carryover

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
2024-02-27 09:57:44 +01:00
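
For context on the fixture bullets above: pytest-asyncio 0.23 is stricter about event-loop and fixture scoping, so async fixtures that must not leak state between tests get an explicit function scope. A minimal sketch of the pattern, with an illustrative fixture (not from the Supervisor test suite):

```python
import asyncio

import pytest_asyncio


@pytest_asyncio.fixture(scope="function")  # explicit: state must not carry over
async def job_queue():
    """Function-scoped queue; torn down after every test."""
    queue: asyncio.Queue[int] = asyncio.Queue()
    yield queue
    # Drain on teardown so nothing carries over into the next test.
    while not queue.empty():
        queue.get_nowait()
```
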
J. Nick Koston
1ececaaaa2 Bump securetar to 2024.2.1 (#4925) 2024-02-26 10:47:47 -10:00
dependabot[bot]
91b48ad432 Bump pylint from 3.0.3 to 3.1.0 (#4921)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 3.0.3 to 3.1.0.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v3.0.3...v3.1.0)

---
updated-dependencies:
- dependency-name: pylint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 11:02:23 -05:00
dependabot[bot]
f3fe40a19f Bump codecov/codecov-action from 4.0.1 to 4.0.2 (#4923)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4.0.1 to 4.0.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4.0.1...v4.0.2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:22:43 -05:00
dependabot[bot]
cf4b29c425 Bump orjson from 3.9.14 to 3.9.15 (#4922)
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.14 to 3.9.15.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.9.14...3.9.15)

---
updated-dependencies:
- dependency-name: orjson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:22:17 -05:00
dependabot[bot]
4344e14a9d Bump coverage from 7.4.1 to 7.4.3 (#4920)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.4.1 to 7.4.3.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.4.1...7.4.3)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:18:32 -05:00
dependabot[bot]
df935ec423 Bump typing-extensions from 4.9.0 to 4.10.0 (#4919)
Bumps [typing-extensions](https://github.com/python/typing_extensions) from 4.9.0 to 4.10.0.
- [Release notes](https://github.com/python/typing_extensions/releases)
- [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python/typing_extensions/commits)

---
updated-dependencies:
- dependency-name: typing-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:18:02 -05:00
dependabot[bot]
e7f9f7504e Bump setuptools from 69.1.0 to 69.1.1 (#4918)
Bumps [setuptools](https://github.com/pypa/setuptools) from 69.1.0 to 69.1.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v69.1.0...v69.1.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:15:20 -05:00
dependabot[bot]
5721b2353a Bump cryptography from 42.0.3 to 42.0.5 (#4917)
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.3 to 42.0.5.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/42.0.3...42.0.5)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-26 10:14:50 -05:00
Mike Degatano
c9de846d0e Fix missing apis from addons with manager role (#4908)
* Allow mount control from addons with manager role

* Allow available_updates and refresh_updates too
2024-02-21 11:36:29 -05:00
dependabot[bot]
a598108c26 Bump sentry-sdk from 1.40.4 to 1.40.5 (#4905)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.40.4 to 1.40.5.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.40.4...1.40.5)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 09:19:15 +01:00
dependabot[bot]
5467aa399d Bump ruff from 0.2.1 to 0.2.2 (#4904)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.2.1 to 0.2.2.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.2.1...v0.2.2)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-19 10:37:44 +01:00
dependabot[bot]
da052b074a Bump urllib3 from 2.2.0 to 2.2.1 (#4903) 2024-02-19 08:07:18 +01:00
dependabot[bot]
90c035edd0 Bump pre-commit from 3.6.1 to 3.6.2 (#4902) 2024-02-19 07:52:18 +01:00
dependabot[bot]
fc4eb44a24 Bump cryptography from 42.0.2 to 42.0.3 (#4895)
Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.2 to 42.0.3.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/42.0.2...42.0.3)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: J. Nick Koston <nick@koston.org>
2024-02-16 22:10:43 +01:00
Stefan Agner
a71111b378 Fix autoupdate time compare (#4897)
* Fix autoupdate time compare

Make sure both timestamps are UTC, otherwise Python complains with:
TypeError: can't compare offset-naive and offset-aware datetimes

* Use correct attribute

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2024-02-16 15:40:54 +01:00
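
The TypeError above comes from comparing an offset-naive datetime with an offset-aware one; the fix is to make both sides UTC-aware before comparing. A minimal sketch of the failure and the fix:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # offset-naive
aware = datetime.now(timezone.utc)  # offset-aware

# naive < aware raises: TypeError: can't compare offset-naive and offset-aware datetimes
# Attaching UTC to the naive value makes the comparison legal:
print(naive.replace(tzinfo=timezone.utc) < aware)
```
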
dependabot[bot]
52e0c7e484 Bump gitpython from 3.1.41 to 3.1.42 (#4894)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-16 15:31:31 +01:00
Stefan Agner
e32970f191 Fix new complaint by ruff 0.2.1 (#4898) 2024-02-16 14:35:31 +01:00
dependabot[bot]
897cc36017 Bump sentry-sdk from 1.40.3 to 1.40.4 (#4891)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.40.3 to 1.40.4.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.40.3...1.40.4)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-14 09:24:58 +01:00
dependabot[bot]
d79c575860 Bump orjson from 3.9.13 to 3.9.14 (#4890)
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.13 to 3.9.14.
- [Release notes](https://github.com/ijl/orjson/releases)
- [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ijl/orjson/compare/3.9.13...3.9.14)

---
updated-dependencies:
- dependency-name: orjson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-14 09:24:52 +01:00
J. Nick Koston
1f19f84edd Create backups files without having to copy inner tarballs (#4884)
* Create backups files without having to copy inner tarballs

needs https://github.com/pvizeli/securetar/pull/33

* fix writing json

* fix writing json

* fixes

* fixes

* ensure cleaned up

* need ./

* fix type

* Bump securetar to 2024.2.0

changelog: https://github.com/pvizeli/securetar/compare/2023.12.0...2024.2.0

* backup file is now created sooner

* reorder so comment still makes sense
2024-02-14 09:24:43 +01:00
dependabot[bot]
27c37b8b84 Bump ruff from 0.1.14 to 0.2.1 (#4877)
* Bump ruff from 0.1.14 to 0.2.1

Bumps [ruff](https://github.com/astral-sh/ruff) from 0.1.14 to 0.2.1.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.1.14...v0.2.1)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .pre-commit-config.yaml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2024-02-14 09:11:26 +01:00
J. Nick Koston
06a5dd3153 Bump securetar to 2024.2.0 (#4888) 2024-02-12 19:34:04 +01:00
Mike Degatano
b5bf270d22 Mount status checks look at connection (#4882)
* Mount status checks look at connection

* Fix tests and refactor to fixture

* Fix test
2024-02-12 17:32:54 +01:00
dependabot[bot]
8e71d69a64 Bump setuptools from 69.0.3 to 69.1.0 (#4885) 2024-02-12 08:31:08 +01:00
dependabot[bot]
06edb6f8a8 Bump pre-commit from 3.6.0 to 3.6.1 (#4887) 2024-02-12 08:20:35 +01:00
dependabot[bot]
dca82ec0a1 Bump sentry-sdk from 1.40.2 to 1.40.3 (#4886) 2024-02-12 08:20:00 +01:00
dependabot[bot]
9c82ce4103 Bump debugpy from 1.8.0 to 1.8.1 (#4881)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-09 08:14:07 +01:00
dependabot[bot]
8a23a9eb1b Bump sentry-sdk from 1.40.1 to 1.40.2 (#4880) 2024-02-08 07:25:48 +01:00
dependabot[bot]
e1b7e515df Bump sentry-sdk from 1.40.0 to 1.40.1 (#4879)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-07 08:19:23 +01:00
dependabot[bot]
c8ff335ed7 Bump awesomeversion from 23.11.0 to 24.2.0 (#4878)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-07 07:54:04 +01:00
dependabot[bot]
5736da8ab7 Bump actions/download-artifact from 4.1.1 to 4.1.2 (#4876)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.1 to 4.1.2.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4.1.1...v4.1.2)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 08:59:19 +01:00
dependabot[bot]
060bba4dce Bump actions/upload-artifact from 4.3.0 to 4.3.1 (#4875)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.3.0 to 4.3.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4.3.0...v4.3.1)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 08:59:06 +01:00
Mike Degatano
4c573991d2 Improve error handling when mounts fail (#4872) 2024-02-05 16:24:53 -05:00
Mike Degatano
7fd6dce55f Migrate to Ruff for lint and format (#4852)
* Migrate to Ruff for lint and format

* Fix pylint issues

* DBus property sets into normal awaitable methods

* Fix tests relying on separate tasks in connect

* Fixes from feedback
2024-02-05 11:37:39 -05:00
dependabot[bot]
1861d756e9 Bump voluptuous from 0.14.1 to 0.14.2 (#4874)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-05 08:49:14 +01:00
dependabot[bot]
c36c041f5e Bump orjson from 3.9.12 to 3.9.13 (#4873)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-05 08:45:23 +01:00
dependabot[bot]
c3d877bdd2 Bump cryptography from 42.0.1 to 42.0.2 (#4860)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-02 11:11:50 +01:00
dependabot[bot]
1242030d4a Bump codecov/codecov-action from 3.1.6 to 4.0.1 (#4869)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-02 08:50:53 +01:00
dependabot[bot]
1626e74608 Bump release-drafter/release-drafter from 5.25.0 to 6.0.0 (#4868)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-02 08:50:38 +01:00
dependabot[bot]
b1b913777f Bump sigstore/cosign-installer from 3.3.0 to 3.4.0 (#4864)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-01 08:53:35 +01:00
Stefan Agner
190894010c Reset failed API call counter on successful API call (#4862)
* Reset failed API call counter on successful API call

Make sure to reset the failed API call counter after a successful
API call. While at it also update the log messages a bit to make it
clearer what the problem is exactly.

* Address pytest changes
2024-01-31 11:41:21 -05:00
Stefan Agner
765265723c Explicitly log when API requests timeout (#4861)
Currently a timeout leads to a log entry which simply states:
"Error on call http://172.30.32.1:8123/api/core/state: ". From this,
it is not immediately clear what the problem is. This commit adds
a log entry which explicitly states that the request timed out.
2024-01-31 10:17:24 -05:00
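
A sketch of the pattern described, not the Supervisor's actual handler (URL and names are illustrative): catching the timeout separately lets the log state the reason instead of trailing off.

```python
import asyncio
import logging

import aiohttp

_LOGGER = logging.getLogger(__name__)


async def fetch_core_state(session: aiohttp.ClientSession, url: str) -> str | None:
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            return await resp.text()
    except asyncio.TimeoutError:
        # Previously this surfaced as a bare "Error on call <url>: " with no reason.
        _LOGGER.error("Error on call %s: timeout", url)
    except aiohttp.ClientError as err:
        _LOGGER.error("Error on call %s: %s", url, err)
    return None
```
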
dependabot[bot]
7e20502379 Bump urllib3 from 2.1.0 to 2.2.0 (#4859)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.1.0...2.2.0)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-31 10:13:34 -05:00
dependabot[bot]
366fc30e9d Bump sentry-sdk from 1.39.2 to 1.40.0 (#4858)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.39.2 to 1.40.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.39.2...1.40.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-31 10:11:11 -05:00
dependabot[bot]
aa91788a69 Bump codecov/codecov-action from 3.1.5 to 3.1.6 (#4857)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-31 08:09:52 +01:00
J. Nick Koston
375789b019 Use orjson encoder for websocket messages (#4854)
I missed that these need to have dumps passed
2024-01-30 09:00:27 -05:00
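
The dumps mentioned above is aiohttp's `send_json` hook. orjson produces bytes while aiohttp expects a str-returning encoder, so a small wrapper is needed; a hedged sketch:

```python
import orjson
from aiohttp import web


def json_dumps(obj) -> str:
    # orjson returns bytes; aiohttp's send_json expects a str-producing dumps.
    return orjson.dumps(obj).decode("utf-8")


async def send_event(ws: web.WebSocketResponse) -> None:
    # Without dumps=, send_json silently falls back to the stdlib json encoder.
    await ws.send_json({"type": "event"}, dumps=json_dumps)
```
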
Mike Degatano
140b769a42 Auto updates to new version delay for 24 hours (#4838) 2024-01-30 08:58:28 -05:00
Mike Degatano
88d718271d Fix serialization issue adding error to job (#4853) 2024-01-30 12:21:02 +01:00
dependabot[bot]
6ed26cdd1f Bump aiohttp from 3.9.2 to 3.9.3 (#4855)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 21:36:01 -10:00
J. Nick Koston
d1851fa607 Significantly speed up creating backups with isal via zlib-fast (#4843) 2024-01-29 10:25:43 -10:00
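
zlib-fast works by swapping the stdlib zlib implementation for the much faster isal-based one at startup, so everything that gzips (including tarfile-based backups) speeds up without further code changes. A sketch of the technique, assuming zlib-fast's documented `enable()` entry point, not the Supervisor's exact bootstrap code:

```python
# Must run before modules that bind zlib functions at import time.
import zlib_fast

zlib_fast.enable()

import tarfile  # gzip compression inside tarfile now goes through isal
```
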
dependabot[bot]
e846157c52 Bump aiohttp from 3.9.1 to 3.9.2 (#4848)
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.9.1 to 3.9.2.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.9.1...v3.9.2)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 14:35:54 -05:00
dependabot[bot]
e190bb4c1a Bump colorlog from 6.8.0 to 6.8.2 (#4846)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 08:53:39 +01:00
dependabot[bot]
137fbe7acd Bump coverage from 7.4.0 to 7.4.1 (#4849) 2024-01-29 08:28:43 +01:00
dependabot[bot]
9ccdb2ae3a Bump cryptography from 41.0.7 to 42.0.1 (#4837)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-28 20:54:01 +01:00
dependabot[bot]
f5f7515744 Bump codecov/codecov-action from 3.1.4 to 3.1.5 (#4840)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-28 20:36:45 +01:00
Mike Degatano
ddadbec7e3 Addon devs can block auto update for breaking versions (#4832) 2024-01-26 08:08:03 +01:00
dependabot[bot]
d24543e103 Bump actions/upload-artifact from 4.2.0 to 4.3.0 (#4835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-24 08:29:27 +01:00
Mike Degatano
f80c4c9565 Fix bootstrap log typo (#4833) 2024-01-23 17:34:43 -10:00
Mike Degatano
480b383782 Add background option to backup APIs (#4802)
* Add background option to backup APIs

* Fix decorator tests

* Working error handling, initial test cases

* Change to schedule_job and always return job id

* Add tests

* Reorder call at/later args

* Validation errors return immediately in background

* None is invalid option for background

* Must pop the background option from body
2024-01-22 12:09:15 -05:00
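
A hedged example of calling the new option (endpoint from the Supervisor backup API; the payload and response field names are assumptions based on the bullets above):

```python
import aiohttp


async def start_backup(session: aiohttp.ClientSession, token: str) -> str:
    # With "background": true the call returns a job id immediately instead of
    # blocking until the backup finishes; validation errors still fail fast.
    async with session.post(
        "http://supervisor/backups/new/full",
        headers={"Authorization": f"Bearer {token}"},
        json={"name": "nightly", "background": True},
    ) as resp:
        body = await resp.json()
        return body["data"]["job_id"]  # assumed response shape
```
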
dependabot[bot]
d3efd4c24b Bump orjson from 3.9.10 to 3.9.12 (#4825)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-19 08:44:12 +01:00
dependabot[bot]
67a0acffa2 Bump actions/upload-artifact from 4.1.0 to 4.2.0 (#4824) 2024-01-19 07:38:07 +01:00
dependabot[bot]
41b07da399 Bump dbus-fast from 2.21.0 to 2.21.1 (#4822)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-17 08:20:06 +01:00
dependabot[bot]
a6ce55d5b5 Bump actions/cache from 3.3.3 to 4.0.0 (#4821) 2024-01-17 07:24:40 +01:00
Stefan Agner
98c01fe1b3 Fix add-on rebuild with ingress (#4819) 2024-01-15 07:53:25 -10:00
dependabot[bot]
51df986222 Bump actions/upload-artifact from 4.0.0 to 4.1.0 (#4818)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 09:15:27 +01:00
J. Nick Koston
9c625f93a5 Fix dirhash failing to import pkg_resources (#4817) 2024-01-14 11:20:35 +01:00
dependabot[bot]
7101d47e2e Bump getsentry/action-release from 1.6.0 to 1.7.0 (#4805)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-14 11:13:21 +01:00
J. Nick Koston
eb85be2770 Improve json performance by porting core orjson utils (#4816)
* Improve json performance by porting core orjson utils

* port relevant tests

* pylint

* add test for read_json_file

* add test for read_json_file

* remove workaround for core issue we do not have here

---------

Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2024-01-13 19:19:01 +01:00
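
The ported helpers boil down to moving file JSON I/O onto orjson, which works in bytes. A minimal sketch of the pattern (helper names match the test bullets above; error handling omitted):

```python
from pathlib import Path
from typing import Any

import orjson


def write_json_file(path: Path, data: Any) -> None:
    # orjson serializes straight to bytes, so write in binary mode.
    path.write_bytes(orjson.dumps(data, option=orjson.OPT_INDENT_2))


def read_json_file(path: Path) -> Any:
    return orjson.loads(path.read_bytes())
```
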
Mike Degatano
2da27937a5 Update python to 3.12 (#4815)
* Update python to 3.12

* Fix tests and deprecations

* Fix other references to 3.11

* build.json doesn't exist
2024-01-13 16:35:07 +01:00
dependabot[bot]
2a29b801a4 Bump jinja2 from 3.1.2 to 3.1.3 (#4810)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:52:44 +01:00
dependabot[bot]
57e65714b0 Bump actions/download-artifact from 4.1.0 to 4.1.1 (#4809)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:35:09 +01:00
dependabot[bot]
0ae40cb51c Bump gitpython from 3.1.40 to 3.1.41 (#4808)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:31:31 +01:00
dependabot[bot]
ddd195dfc6 Bump sentry-sdk from 1.39.1 to 1.39.2 (#4811)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:29:34 +01:00
dependabot[bot]
54b9f23ec5 Bump actions/cache from 3.3.2 to 3.3.3 (#4813)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 09:25:51 +01:00
dependabot[bot]
242dd3e626 Bump home-assistant/wheels from 2023.10.5 to 2024.01.0 (#4804) 2024-01-08 08:16:11 +01:00
dependabot[bot]
1b8acb5b60 Bump home-assistant/builder from 2023.12.0 to 2024.01.0 (#4800)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 08:19:40 +01:00
dependabot[bot]
a7ab96ab12 Bump flake8 from 6.1.0 to 7.0.0 (#4799)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 08:18:05 +01:00
dependabot[bot]
06ab11cf87 Bump attrs from 23.1.0 to 23.2.0 (#4793)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-02 16:57:05 +01:00
dependabot[bot]
1410a1b06e Bump pytest from 7.4.3 to 7.4.4 (#4792) 2024-01-01 14:55:37 +01:00
Mike Degatano
5baf19f7a3 Migrate to pyproject.toml where possible (#4770)
* Migrate to pyproject.toml where possible

* Share requirements and fix version import

* Fix issues with timezone in tests
2023-12-29 11:46:01 +01:00
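
The "Share requirements" bullet corresponds to setuptools' dynamic metadata, which lets pyproject.toml keep reading dependencies from the existing requirements.txt. A sketch of the mechanism, assuming recent setuptools (the version attribute path is illustrative):

```toml
[project]
dynamic = ["version", "dependencies"]

[tool.setuptools.dynamic]
dependencies = { file = ["requirements.txt"] }
# Illustrative attr path, not necessarily the Supervisor's:
version = { attr = "supervisor.const.SUPERVISOR_VERSION" }
```
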
Mike Degatano
6c66a7ba17 Improve error handling in backup restore (#4791) 2023-12-29 11:45:50 +01:00
dependabot[bot]
37b6e09475 Bump coverage from 7.3.4 to 7.4.0 (#4790)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.3.4 to 7.4.0.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.3.4...7.4.0)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-28 09:22:22 +01:00
Jeff Oakley
e08c8ca26d Add support for setting target path in map config (#4694)
* Added support for setting addon target path in map config

* Updated addon target path mapping to use dataclass

* Added check before adding string folder maps

* Moved enum to addon/const, updated map_volumes logic, fixed test

* Removed log used for debugging

* Use more readable approach to determine addon_config_used

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use cleaner approach for checking volume config

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use dict syntax and ATTR_TYPE

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use coerce for validating mapping type

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Default read_only to true in schema

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use ATTR_TYPE and ATTR_READ_ONLY instead of static strings

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Use constants instead of in-line strings

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Correct type for path

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Added read_only and path constants

* Fixed small syntax error and added includes for constants

* Simplify logic for handling string and dict entries in map config

* Use ATTR_PATH instead of inline string

Co-authored-by: Mike Degatano <michael.degatano@gmail.com>

* Add missing ATTR_PATH reference

* Moved FolderMapping dataclass to data.py

* Fix edge case where "data" map type is used but optional path is not set

* Move FolderMapping dataclass to configuration.py to prevent circular reference

---------

Co-authored-by: Jeff Oakley <jeff.oakley@LearningCircleSoftware.com>
Co-authored-by: Mike Degatano <michael.degatano@gmail.com>
Co-authored-by: Pascal Vizeli <pvizeli@syshack.ch>
2023-12-27 15:14:23 -05:00
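
For reference, the resulting config shape: map entries may now be dicts with an optional target path instead of plain "folder:rw" strings. A hedged example assembled from the bullets above (field names per the commit; exact semantics per the add-on docs):

```yaml
map:
  - type: share
    read_only: false
    path: /custom/share   # optional explicit target path inside the container
  - type: media           # path omitted: mounted at the default target
```
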
dependabot[bot]
2c09e7929f Bump black from 23.12.0 to 23.12.1 (#4788) 2023-12-26 08:30:35 +01:00
Stefan Agner
3e760f0d85 Always pass explicit architecture of installed add-ons (#4786)
* Pass architecture of installed add-on on update

When using multi-architecture container images, the architecture of the
add-on is not passed to Docker in all cases. This causes the
architecture of the Supervisor container to be used, which potentially
is not supported by the add-on in question.

This commit passes the architecture of the add-on to Docker, so that
the correct image is pulled.

* Call update with architecture

* Also pass architecture on add-on restore

* Fix pytest
2023-12-21 16:52:25 -05:00
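
A sketch of the underlying mechanism with docker-py (image name hypothetical): passing `platform` pins the pull to the add-on's architecture rather than the architecture of the calling container.

```python
import docker

client = docker.from_env()
# Without platform=, Docker resolves the multi-arch manifest against the
# architecture of the container doing the pulling, which may not match
# what the add-on actually supports.
image = client.images.pull(
    "ghcr.io/example/addon-image",  # hypothetical image
    tag="1.0.0",
    platform="linux/arm/v7",
)
```
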
Mike Degatano
3cc6bd19ad Mark system as unhealthy on OSError Bad message errors (#4750)
* Bad message error marks system as unhealthy

* Finish adding test cases for changes

* Rename test file for uniqueness

* bad_message to oserror_bad_message

* Omit some checks and check for network mounts
2023-12-21 18:05:29 +01:00
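
"Bad message" is errno EBADMSG, which typically signals filesystem-level corruption. A minimal sketch of the check (the path is illustrative and the unhealthy-marking step is only hinted at):

```python
import errno


def is_bad_message(err: OSError) -> bool:
    """Return True if the OSError is EBADMSG ("Bad message")."""
    return err.errno == errno.EBADMSG


try:
    data = open("/media/backup/backup.tar", "rb").read()  # illustrative path
except OSError as err:
    if is_bad_message(err):
        print("EBADMSG: suspected corruption, mark system unhealthy")
    raise
```
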
Mike Degatano
b7ddfba71d Set max reanimation attempts on HA watchdog (#4784) 2023-12-21 16:44:39 +01:00
dependabot[bot]
32f21d208f Bump coverage from 7.3.3 to 7.3.4 (#4785)
Bumps [coverage](https://github.com/nedbat/coveragepy) from 7.3.3 to 7.3.4.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.3.3...7.3.4)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 09:01:49 +01:00
Jan Čermák
ed7edd9fe0 Adjust "retry in ..." log messages to avoid confusion (#4783)
As shown in home-assistant/operating-system#3007, error messages printed
to logs when container installation fails can cause some confusion,
because they are sometimes printed to the log on the landing page.
Adjust all wordings of "retry in" to "retrying in" to make it obvious
this happens automatically.
2023-12-20 18:34:42 +01:00
Stefan Agner
fd3c995c7c Fix WiFi WEP configuration (#4781)
It seems that the values for auth-alg and key-mgmt got mixed up.
Trying to save a WEP configuration currently leads to the error:
23-12-19 10:56:37 ERROR (MainThread) [supervisor.host.network] Can't create config and activate wlan0: 802-11-wireless-security.key-mgmt: 'open' is not a valid value for the property
2023-12-19 13:53:51 +01:00
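
In NetworkManager terms the mix-up is: for static WEP, key-mgmt must be "none" while auth-alg is "open"; "open" is not a valid key-mgmt value, hence the error above. A sketch of the corrected 802-11-wireless-security settings as a plain dict (values illustrative, not the Supervisor's D-Bus payload types):

```python
# Corrected WEP security settings: "open" belongs to auth-alg, not key-mgmt.
wireless_security = {
    "auth-alg": "open",        # open-system authentication
    "key-mgmt": "none",        # static WEP uses no key management
    "wep-key-type": 1,         # 1 = hex/ASCII key, 2 = passphrase
    "wep-key0": "0123456789",  # illustrative key
}
```
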
dependabot[bot]
c0d1a2d53b Bump actions/download-artifact from 4.0.0 to 4.1.0 (#4780)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.0.0 to 4.1.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4.0.0...v4.1.0)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:59:43 +01:00
dependabot[bot]
76bc3015a7 Bump deepmerge from 1.1.0 to 1.1.1 (#4779)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:45:48 +01:00
dependabot[bot]
ad2896243b Bump sentry-sdk from 1.39.0 to 1.39.1 (#4774)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-18 15:45:05 +01:00
dependabot[bot]
d0dcded42d Bump actions/download-artifact from 3 to 4 (#4777)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Franck Nijhof <git@frenck.dev>
2023-12-15 08:27:10 +01:00
dependabot[bot]
a0dfa01287 Bump actions/upload-artifact from 3.1.3 to 4.0.0 (#4776)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-15 08:12:31 +01:00
dependabot[bot]
4ec5c90180 Bump coverage from 7.3.2 to 7.3.3 (#4775) 2023-12-15 07:25:10 +01:00
202 changed files with 3927 additions and 1453 deletions

.devcontainer/devcontainer.json

@@ -10,11 +10,13 @@
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.pylint",
"ms-python.vscode-pylance",
"visualstudioexptteam.vscodeintellicode",
"esbenp.prettier-vscode"
"redhat.vscode-yaml",
"esbenp.prettier-vscode",
"GitHub.vscode-pull-request-github"
],
"settings": {
"terminal.integrated.profiles.linux": {
@@ -28,9 +30,9 @@
"editor.formatOnType": true,
"files.trimTrailingWhitespace": true,
"python.pythonPath": "/usr/local/bin/python3",
"python.formatting.provider": "black",
"python.formatting.blackArgs": ["--target-version", "py311"],
"python.formatting.blackPath": "/usr/local/bin/black"
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff"
}
}
}
},

.github/PULL_REQUEST_TEMPLATE.md

@@ -52,7 +52,7 @@
- [ ] Local tests pass. **Your PR cannot be merged unless tests pass**
- [ ] There is no commented out code in this PR.
- [ ] I have followed the [development checklist][dev-checklist]
- [ ] The code has been formatted using Black (`black --fast supervisor tests`)
- [ ] The code has been formatted using Ruff (`ruff format supervisor tests`)
- [ ] Tests have been added to verify that the new code works.
If API endpoints of add-on configuration are added/changed:

.github/workflows/builder.yml

@@ -33,7 +33,7 @@ on:
- setup.py
env:
DEFAULT_PYTHON: "3.11"
DEFAULT_PYTHON: "3.12"
BUILD_NAME: supervisor
BUILD_TYPE: supervisor
@@ -75,7 +75,7 @@ jobs:
- name: Check if requirements files changed
id: requirements
run: |
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.json) ]]; then
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
@@ -106,9 +106,9 @@ jobs:
- name: Build wheels
if: needs.init.outputs.requirements == 'true'
uses: home-assistant/wheels@2023.10.5
uses: home-assistant/wheels@2024.01.0
with:
abi: cp311
abi: cp312
tag: musllinux_1_2
arch: ${{ matrix.arch }}
wheels-key: ${{ secrets.WHEELS_KEY }}
@@ -131,14 +131,14 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@v3.3.0
uses: sigstore/cosign-installer@v3.4.0
with:
cosign-release: "v2.0.2"
- name: Install dirhash and calc hash
if: needs.init.outputs.publish == 'true'
run: |
pip3 install dirhash
pip3 install setuptools dirhash
dir_hash="$(dirhash "${{ github.workspace }}/supervisor" -a sha256 --match "*.py")"
echo "${dir_hash}" > rootfs/supervisor.sha256
@@ -160,7 +160,7 @@ jobs:
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
- name: Build supervisor
uses: home-assistant/builder@2023.12.0
uses: home-assistant/builder@2024.01.0
with:
args: |
$BUILD_ARGS \
@@ -207,7 +207,7 @@ jobs:
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2023.12.0
uses: home-assistant/builder@2024.01.0
with:
args: |
--test \

.github/workflows/ci.yaml

@@ -8,7 +8,7 @@ on:
pull_request: ~
env:
DEFAULT_PYTHON: "3.11"
DEFAULT_PYTHON: "3.12"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
concurrency:
@@ -33,7 +33,7 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -47,7 +47,7 @@ jobs:
pip install -r requirements.txt -r requirements_tests.txt
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: ${{ env.PRE_COMMIT_CACHE }}
lookup-only: true
@@ -61,8 +61,8 @@ jobs:
. venv/bin/activate
pre-commit install-hooks
lint-black:
name: Check black
lint-ruff-format:
name: Check ruff-format
runs-on: ubuntu-latest
needs: prepare
steps:
@@ -75,7 +75,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -85,10 +85,67 @@ jobs:
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Run black
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.0.0
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Run ruff-format
run: |
. venv/bin/activate
black --target-version py311 --check supervisor tests setup.py
pre-commit run --hook-stage manual ruff-format --all-files --show-diff-on-failure
env:
RUFF_OUTPUT_FORMAT: github
lint-ruff:
name: Check ruff
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.1.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.0.0
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v4.0.0
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Run ruff
run: |
. venv/bin/activate
pre-commit run --hook-stage manual ruff --all-files --show-diff-on-failure
env:
RUFF_OUTPUT_FORMAT: github
lint-dockerfile:
name: Check Dockerfile
@@ -119,7 +176,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -131,7 +188,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -149,79 +206,6 @@ jobs:
. venv/bin/activate
pre-commit run --hook-stage manual check-executables-have-shebangs --all-files
lint-flake8:
name: Check flake8
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.1.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Register flake8 problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/flake8.json"
- name: Run flake8
run: |
. venv/bin/activate
flake8 supervisor tests
lint-isort:
name: Check isort
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.1.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v3.3.2
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Run isort
run: |
. venv/bin/activate
pre-commit run --hook-stage manual isort --all-files --show-diff-on-failure
lint-json:
name: Check JSON
runs-on: ubuntu-latest
@@ -236,7 +220,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -248,7 +232,7 @@ jobs:
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
@@ -280,7 +264,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -298,47 +282,6 @@ jobs:
. venv/bin/activate
pylint supervisor tests
lint-pyupgrade:
name: Check pyupgrade
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.1.1
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.0.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
with:
path: venv
key: |
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore pre-commit environment from cache
id: cache-precommit
uses: actions/cache@v3.3.2
with:
path: ${{ env.PRE_COMMIT_CACHE }}
key: |
${{ runner.os }}-pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
- name: Fail job if cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Run pyupgrade
run: |
. venv/bin/activate
pre-commit run --hook-stage manual pyupgrade --all-files --show-diff-on-failure
pytest:
runs-on: ubuntu-latest
needs: prepare
@@ -352,12 +295,12 @@ jobs:
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@v3.3.0
uses: sigstore/cosign-installer@v3.4.0
with:
cosign-release: "v2.0.2"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -392,7 +335,7 @@ jobs:
-o console_output_style=count \
tests
- name: Upload coverage artifact
uses: actions/upload-artifact@v3.1.3
uses: actions/upload-artifact@v4.3.1
with:
name: coverage-${{ matrix.python-version }}
path: .coverage
@@ -411,7 +354,7 @@ jobs:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v3.3.2
uses: actions/cache@v4.0.0
with:
path: venv
key: |
@@ -422,7 +365,7 @@ jobs:
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Download all coverage artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4.1.3
- name: Combine coverage results
run: |
. venv/bin/activate
@@ -430,4 +373,4 @@ jobs:
coverage report
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3.1.4
uses: codecov/codecov-action@v4.1.0

.github/workflows/matchers/flake8.json (deleted)

@@ -1,30 +0,0 @@
{
"problemMatcher": [
{
"owner": "flake8-error",
"severity": "error",
"pattern": [
{
"regexp": "^(.*):(\\d+):(\\d+):\\s(E\\d{3}\\s.*)$",
"file": 1,
"line": 2,
"column": 3,
"message": 4
}
]
},
{
"owner": "flake8-warning",
"severity": "warning",
"pattern": [
{
"regexp": "^(.*):(\\d+):(\\d+):\\s([CDFNW]\\d{3}\\s.*)$",
"file": 1,
"line": 2,
"column": 3,
"message": 4
}
]
}
]
}

.github/workflows/release-drafter.yml

@@ -36,7 +36,7 @@ jobs:
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
- name: Run Release Drafter
uses: release-drafter/release-drafter@v5.25.0
uses: release-drafter/release-drafter@v6.0.0
with:
tag: ${{ steps.version.outputs.version }}
name: ${{ steps.version.outputs.version }}

.github/workflows/sentry.yaml

@@ -12,7 +12,7 @@ jobs:
- name: Check out code from GitHub
uses: actions/checkout@v4.1.1
- name: Sentry Release
uses: getsentry/action-release@v1.6.0
uses: getsentry/action-release@v1.7.0
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}

.pre-commit-config.yaml

@@ -1,34 +1,15 @@
repos:
- repo: https://github.com/psf/black
rev: 23.1.0
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.2.1
hooks:
- id: black
- id: ruff
args:
- --safe
- --quiet
- --target-version
- py311
- --fix
- id: ruff-format
files: ^((supervisor|tests)/.+)?[^/]+\.py$
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8
additional_dependencies:
- flake8-docstrings==1.7.0
- pydocstyle==6.3.0
files: ^(supervisor|script|tests)/.+\.py$
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.5.0
hooks:
- id: check-executables-have-shebangs
stages: [manual]
- id: check-json
- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
rev: v3.15.0
hooks:
- id: pyupgrade
args: [--py311-plus]

.vscode/tasks.json

@@ -58,9 +58,23 @@
"problemMatcher": []
},
{
"label": "Flake8",
"label": "Ruff Check",
"type": "shell",
"command": "flake8 supervisor tests",
"command": "ruff check --fix supervisor tests",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
},
"problemMatcher": []
},
{
"label": "Ruff Format",
"type": "shell",
"command": "ruff format supervisor tests",
"group": {
"kind": "test",
"isDefault": true

build.yaml

@@ -1,10 +1,10 @@
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
build_from:
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.11-alpine3.18
armhf: ghcr.io/home-assistant/armhf-base-python:3.11-alpine3.18
armv7: ghcr.io/home-assistant/armv7-base-python:3.11-alpine3.18
amd64: ghcr.io/home-assistant/amd64-base-python:3.11-alpine3.18
i386: ghcr.io/home-assistant/i386-base-python:3.11-alpine3.18
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.12-alpine3.18
armhf: ghcr.io/home-assistant/armhf-base-python:3.12-alpine3.18
armv7: ghcr.io/home-assistant/armv7-base-python:3.12-alpine3.18
amd64: ghcr.io/home-assistant/amd64-base-python:3.12-alpine3.18
i386: ghcr.io/home-assistant/i386-base-python:3.12-alpine3.18
codenotary:
signer: notary@home-assistant.io
base_image: notary@home-assistant.io

.pylintrc (deleted)

@@ -1,45 +0,0 @@
[MASTER]
reports=no
jobs=2
good-names=id,i,j,k,ex,Run,_,fp,T,os
extension-pkg-whitelist=
ciso8601
# Reasons disabled:
# format - handled by black
# locally-disabled - it spams too much
# duplicate-code - unavoidable
# cyclic-import - doesn't test if both import on load
# abstract-class-not-used - is flaky, should not show up but does
# unused-argument - generic callbacks and setup methods create a lot of warnings
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# abstract-method - with intro of async there are always methods missing
disable=
format,
abstract-method,
cyclic-import,
duplicate-code,
locally-disabled,
no-else-return,
not-context-manager,
too-few-public-methods,
too-many-arguments,
too-many-branches,
too-many-instance-attributes,
too-many-lines,
too-many-locals,
too-many-public-methods,
too-many-return-statements,
too-many-statements,
unused-argument,
consider-using-with
[EXCEPTIONS]
overgeneral-exceptions=builtins.Exception
[TYPECHECK]
ignored-modules = distutils

pyproject.toml (new file)

@@ -0,0 +1,371 @@
[build-system]
requires = ["setuptools~=68.0.0", "wheel~=0.40.0"]
build-backend = "setuptools.build_meta"
[project]
name = "Supervisor"
dynamic = ["version", "dependencies"]
license = { text = "Apache-2.0" }
description = "Open-source private cloud os for Home-Assistant based on HassOS"
readme = "README.md"
authors = [
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
]
keywords = ["docker", "home-assistant", "api"]
requires-python = ">=3.12.0"
[project.urls]
"Homepage" = "https://www.home-assistant.io/"
"Source Code" = "https://github.com/home-assistant/supervisor"
"Bug Reports" = "https://github.com/home-assistant/supervisor/issues"
"Docs: Dev" = "https://developers.home-assistant.io/"
"Discord" = "https://www.home-assistant.io/join-chat/"
"Forum" = "https://community.home-assistant.io/"
[tool.setuptools]
platforms = ["any"]
zip-safe = false
include-package-data = true
[tool.setuptools.packages.find]
include = ["supervisor*"]
[tool.pylint.MAIN]
py-version = "3.11"
# Use a conservative default here; 2 should speed up most setups and not hurt
# any too bad. Override on command line as appropriate.
jobs = 2
persistent = false
extension-pkg-allow-list = ["ciso8601"]
[tool.pylint.BASIC]
class-const-naming-style = "any"
good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]
[tool.pylint."MESSAGES CONTROL"]
# Reasons disabled:
# format - handled by ruff
# abstract-method - with intro of async there are always methods missing
# cyclic-import - doesn't test if both import on load
# duplicate-code - unavoidable
# locally-disabled - it spams too much
# too-many-* - are not enforced for the sake of readability
# too-few-* - same as too-many-*
# unused-argument - generic callbacks and setup methods create a lot of warnings
disable = [
"format",
"abstract-method",
"cyclic-import",
"duplicate-code",
"locally-disabled",
"no-else-return",
"not-context-manager",
"too-few-public-methods",
"too-many-arguments",
"too-many-branches",
"too-many-instance-attributes",
"too-many-lines",
"too-many-locals",
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
"unused-argument",
"consider-using-with",
# Handled by ruff
# Ref: <https://github.com/astral-sh/ruff/issues/970>
"await-outside-async", # PLE1142
"bad-str-strip-call", # PLE1310
"bad-string-format-type", # PLE1307
"bidirectional-unicode", # PLE2502
"continue-in-finally", # PLE0116
"duplicate-bases", # PLE0241
"format-needs-mapping", # F502
"function-redefined", # F811
# Needed because ruff does not understand type of __all__ generated by a function
# "invalid-all-format", # PLE0605
"invalid-all-object", # PLE0604
"invalid-character-backspace", # PLE2510
"invalid-character-esc", # PLE2513
"invalid-character-nul", # PLE2514
"invalid-character-sub", # PLE2512
"invalid-character-zero-width-space", # PLE2515
"logging-too-few-args", # PLE1206
"logging-too-many-args", # PLE1205
"missing-format-string-key", # F524
"mixed-format-string", # F506
"no-method-argument", # N805
"no-self-argument", # N805
"nonexistent-operator", # B002
"nonlocal-without-binding", # PLE0117
"not-in-loop", # F701, F702
"notimplemented-raised", # F901
"return-in-init", # PLE0101
"return-outside-function", # F706
"syntax-error", # E999
"too-few-format-args", # F524
"too-many-format-args", # F522
"too-many-star-expressions", # F622
"truncated-format-string", # F501
"undefined-all-variable", # F822
"undefined-variable", # F821
"used-prior-global-declaration", # PLE0118
"yield-inside-async-function", # PLE1700
"yield-outside-function", # F704
"anomalous-backslash-in-string", # W605
"assert-on-string-literal", # PLW0129
"assert-on-tuple", # F631
"bad-format-string", # W1302, F
"bad-format-string-key", # W1300, F
"bare-except", # E722
"binary-op-exception", # PLW0711
"cell-var-from-loop", # B023
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
"duplicate-except", # B014
"duplicate-key", # F601
"duplicate-string-formatting-argument", # F
"duplicate-value", # F
"eval-used", # PGH001
"exec-used", # S102
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
"f-string-without-interpolation", # F541
"forgotten-debug-statement", # T100
"format-string-without-interpolation", # F
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
"global-variable-not-assigned", # PLW0602
"implicit-str-concat", # ISC001
"import-self", # PLW0406
"inconsistent-quotes", # Q000
"invalid-envvar-default", # PLW1508
"keyword-arg-before-vararg", # B026
"logging-format-interpolation", # G
"logging-fstring-interpolation", # G
"logging-not-lazy", # G
"misplaced-future", # F404
"named-expr-without-context", # PLW0131
"nested-min-max", # PLW3301
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
"raise-missing-from", # TRY200
# "redefined-builtin", # A001, ruff is way more stricter, needs work
"try-except-raise", # TRY302
"unused-argument", # ARG001, we don't use it
"unused-format-string-argument", #F507
"unused-format-string-key", # F504
"unused-import", # F401
"unused-variable", # F841
"useless-else-on-loop", # PLW0120
"wildcard-import", # F403
"bad-classmethod-argument", # N804
"consider-iterating-dictionary", # SIM118
"empty-docstring", # D419
"invalid-name", # N815
"line-too-long", # E501, disabled globally
"missing-class-docstring", # D101
"missing-final-newline", # W292
"missing-function-docstring", # D103
"missing-module-docstring", # D100
"multiple-imports", #E401
"singleton-comparison", # E711, E712
"subprocess-run-check", # PLW1510
"superfluous-parens", # UP034
"ungrouped-imports", # I001
"unidiomatic-typecheck", # E721
"unnecessary-direct-lambda-call", # PLC3002
"unnecessary-lambda-assignment", # PLC3001
"unneeded-not", # SIM208
"useless-import-alias", # PLC0414
"wrong-import-order", # I001
"wrong-import-position", # E402
"comparison-of-constants", # PLR0133
"comparison-with-itself", # PLR0124
# "consider-alternative-union-syntax", # UP007, typing extension
"consider-merging-isinstance", # PLR1701
# "consider-using-alias", # UP006, typing extension
"consider-using-dict-comprehension", # C402
"consider-using-generator", # C417
"consider-using-get", # SIM401
"consider-using-set-comprehension", # C401
"consider-using-sys-exit", # PLR1722
"consider-using-ternary", # SIM108
"literal-comparison", # F632
"property-with-parameters", # PLR0206
"super-with-arguments", # UP008
"too-many-branches", # PLR0912
"too-many-return-statements", # PLR0911
"too-many-statements", # PLR0915
"trailing-comma-tuple", # COM818
"unnecessary-comprehension", # C416
"use-a-generator", # C417
"use-dict-literal", # C406
"use-list-literal", # C405
"useless-object-inheritance", # UP004
"useless-return", # PLR1711
# "no-self-use", # PLR6301 # Optional plugin, not enabled
]
[tool.pylint.REPORTS]
score = false
[tool.pylint.TYPECHECK]
ignored-modules = ["distutils"]
[tool.pylint.FORMAT]
expected-line-ending-format = "LF"
[tool.pylint.EXCEPTIONS]
overgeneral-exceptions = ["builtins.BaseException", "builtins.Exception"]
[tool.pytest.ini_options]
testpaths = ["tests"]
norecursedirs = [".git"]
log_format = "%(asctime)s.%(msecs)03d %(levelname)-8s %(threadName)s %(name)s:%(filename)s:%(lineno)s %(message)s"
log_date_format = "%Y-%m-%d %H:%M:%S"
asyncio_mode = "auto"
filterwarnings = [
"error",
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
"ignore::pytest.PytestUnraisableExceptionWarning",
]
[tool.ruff]
select = [
"B002", # Python does not support the unary prefix increment
"B007", # Loop control variable {name} not used within loop body
"B014", # Exception handler with duplicate exception
"B023", # Function definition does not bind loop variable {name}
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
"C", # complexity
"COM818", # Trailing comma on bare tuple prohibited
"D", # docstrings
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
"E", # pycodestyle
"F", # pyflakes/autoflake
"G", # flake8-logging-format
"I", # isort
"ICN001", # import concentions; {name} should be imported as {asname}
"N804", # First argument of a class method should be named cls
"N805", # First argument of a method should be named self
"N815", # Variable {name} in class scope should not be mixedCase
"PGH001", # No builtin eval() allowed
"PGH004", # Use specific rule codes when using noqa
"PLC0414", # Useless import alias. Import alias does not rename original package.
"PLC", # pylint
"PLE", # pylint
"PLR", # pylint
"PLW", # pylint
"Q000", # Double quotes found but single quotes preferred
"RUF006", # Store a reference to the return value of asyncio.create_task
"S102", # Use of exec detected
"S103", # bad-file-permissions
"S108", # hardcoded-temp-file
"S306", # suspicious-mktemp-usage
"S307", # suspicious-eval-usage
"S313", # suspicious-xmlc-element-tree-usage
"S314", # suspicious-xml-element-tree-usage
"S315", # suspicious-xml-expat-reader-usage
"S316", # suspicious-xml-expat-builder-usage
"S317", # suspicious-xml-sax-usage
"S318", # suspicious-xml-mini-dom-usage
"S319", # suspicious-xml-pull-dom-usage
"S320", # suspicious-xmle-tree-usage
"S601", # paramiko-call
"S602", # subprocess-popen-with-shell-equals-true
"S604", # call-with-shell-equals-true
"S608", # hardcoded-sql-expression
"S609", # unix-command-wildcard-injection
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
"SIM117", # Merge with-statements that use the same scope
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
"SIM201", # Use {left} != {right} instead of not {left} == {right}
"SIM208", # Use {expr} instead of not (not {expr})
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
"SIM401", # Use get from dict with default instead of an if block
"T100", # Trace found: {name} used
"T20", # flake8-print
"TID251", # Banned imports
"TRY004", # Prefer TypeError exception for invalid type
"TRY200", # Use raise from to specify exception cause
"TRY302", # Remove exception handler; error is immediately re-raised
"UP", # pyupgrade
"W", # pycodestyle
]
ignore = [
"D202", # No blank lines allowed after function docstring
"D203", # 1 blank line required before class docstring
"D213", # Multi-line docstring summary should start at the second line
"D406", # Section name should end with a newline
"D407", # Section name underlining
"E501", # line too long
"E731", # do not assign a lambda expression, use a def
# Kept commented out: the rule is back in preview/nursery, which cannot
# be ignored anymore without warnings.
# https://github.com/astral-sh/ruff/issues/7491
# "PLC1901", # Lots of false positives
# False positives https://github.com/astral-sh/ruff/issues/5386
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
"PLR0911", # Too many return statements ({returns} > {max_returns})
"PLR0912", # Too many branches ({branches} > {max_branches})
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
"PLR0915", # Too many statements ({statements} > {max_statements})
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
"UP006", # keep type annotation style as is
"UP007", # keep type annotation style as is
# Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
"UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
"W191",
"E111",
"E114",
"E117",
"D206",
"D300",
"Q000",
"Q001",
"Q002",
"Q003",
"COM812",
"COM819",
"ISC001",
"ISC002",
# Disabled because ruff does not understand type of __all__ generated by a function
"PLE0605",
]
[tool.ruff.flake8-import-conventions.extend-aliases]
voluptuous = "vol"
[tool.ruff.flake8-pytest-style]
fixture-parentheses = false
[tool.ruff.flake8-tidy-imports.banned-api]
"pytz".msg = "use zoneinfo instead"
[tool.ruff.isort]
force-sort-within-sections = true
section-order = [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
]
forced-separate = ["tests"]
known-first-party = ["supervisor", "tests"]
combine-as-imports = true
split-on-trailing-comma = false
[tool.ruff.per-file-ignores]
# DBus Service Mocks must use typing and names understood by dbus-fast
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815"]
[tool.ruff.mccabe]
max-complexity = 25
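For illustration, this is the kind of change the TID251 banned-api entry above ("pytz" -> "use zoneinfo instead") pushes for. A minimal sketch; the timezone name is an invented example, not something from this diff:

from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib replacement for pytz lookups

# pytz style (banned by TID251): tz = pytz.timezone("Europe/Zurich")
tz = ZoneInfo("Europe/Zurich")
print(datetime.now(tz=tz).isoformat())  # aware datetime, no localize() needed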

View File

@@ -1,6 +0,0 @@
[pytest]
asyncio_mode = auto
filterwarnings =
error
ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash
ignore::pytest.PytestUnraisableExceptionWarning

View File

@@ -1,27 +1,29 @@
aiodns==3.1.1
aiohttp==3.9.1
aiohttp==3.9.3
aiohttp-fast-url-dispatcher==0.3.0
async_timeout==4.0.3
atomicwrites-homeassistant==1.4.1
attrs==23.1.0
awesomeversion==23.11.0
attrs==23.2.0
awesomeversion==24.2.0
brotli==1.1.0
ciso8601==2.3.1
colorlog==6.8.0
colorlog==6.8.2
cpe==1.2.1
cryptography==41.0.7
debugpy==1.8.0
deepmerge==1.1.0
cryptography==42.0.5
debugpy==1.8.1
deepmerge==1.1.1
dirhash==0.2.1
docker==7.0.0
faust-cchardet==2.1.19
gitpython==3.1.40
jinja2==3.1.2
gitpython==3.1.42
jinja2==3.1.3
orjson==3.9.15
pulsectl==23.5.2
pyudev==0.24.1
PyYAML==6.0.1
securetar==2023.12.0
sentry-sdk==1.39.0
voluptuous==0.14.1
dbus-fast==2.21.0
typing_extensions==4.9.0
securetar==2024.2.1
sentry-sdk==1.40.5
setuptools==69.1.1
voluptuous==0.14.2
dbus-fast==2.21.1
typing_extensions==4.10.0
zlib-fast==0.2.0

View File

@@ -1,16 +1,12 @@
black==23.12.0
coverage==7.3.2
flake8-docstrings==1.7.0
flake8==6.1.0
pre-commit==3.6.0
pydocstyle==6.3.0
pylint==3.0.3
coverage==7.4.3
pre-commit==3.6.2
pylint==3.1.0
pytest-aiohttp==1.0.5
pytest-asyncio==0.18.3
pytest-asyncio==0.23.5
pytest-cov==4.1.0
pytest-timeout==2.2.0
pytest==7.4.3
pyupgrade==3.15.0
pytest==8.0.1
ruff==0.2.2
time-machine==2.13.0
typing_extensions==4.9.0
urllib3==2.1.0
typing_extensions==4.10.0
urllib3==2.2.1

View File

@@ -1,31 +0,0 @@
[isort]
multi_line_output = 3
include_trailing_comma=True
force_grid_wrap=0
line_length=88
indent = " "
force_sort_within_sections = true
sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
default_section = THIRDPARTY
forced_separate = tests
combine_as_imports = true
use_parentheses = true
known_first_party = supervisor,tests
[flake8]
exclude = .venv,.git,.tox,docs,venv,bin,lib,deps,build
doctests = True
max-line-length = 88
# E501: line too long
# W503: Line break occurred before a binary operator
# E203: Whitespace before ':'
# D202 No blank lines allowed after function docstring
# W504 line break after binary operator
ignore =
E501,
W503,
E203,
D202,
W504
per-file-ignores =
tests/dbus_service_mocks/*.py: F821,F722

View File

@@ -1,48 +1,27 @@
"""Home Assistant Supervisor setup."""
from pathlib import Path
import re
from setuptools import setup
from supervisor.const import SUPERVISOR_VERSION
RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
SUPERVISOR_DIR = Path(__file__).parent
REQUIREMENTS_FILE = SUPERVISOR_DIR / "requirements.txt"
CONST_FILE = SUPERVISOR_DIR / "supervisor/const.py"
REQUIREMENTS = REQUIREMENTS_FILE.read_text(encoding="utf-8")
CONSTANTS = CONST_FILE.read_text(encoding="utf-8")
def _get_supervisor_version():
for line in CONSTANTS.split("\n"):
if match := RE_SUPERVISOR_VERSION.match(line):
return match.group(1)
return "99.9.9dev"
setup(
name="Supervisor",
version=SUPERVISOR_VERSION,
license="BSD License",
author="The Home Assistant Authors",
author_email="hello@home-assistant.io",
url="https://home-assistant.io/",
description=("Open-source private cloud os for Home-Assistant" " based on HassOS"),
long_description=(
"A maintainless private cloud operator system that"
"setup a Home-Assistant instance. Based on HassOS"
),
keywords=["docker", "home-assistant", "api"],
zip_safe=False,
platforms="any",
packages=[
"supervisor.addons",
"supervisor.api",
"supervisor.backups",
"supervisor.dbus.network",
"supervisor.dbus.network.setting",
"supervisor.dbus",
"supervisor.discovery.services",
"supervisor.discovery",
"supervisor.docker",
"supervisor.homeassistant",
"supervisor.host",
"supervisor.jobs",
"supervisor.misc",
"supervisor.plugins",
"supervisor.resolution.checks",
"supervisor.resolution.evaluations",
"supervisor.resolution.fixups",
"supervisor.resolution",
"supervisor.security",
"supervisor.services.modules",
"supervisor.services",
"supervisor.store",
"supervisor.utils",
"supervisor",
],
include_package_data=True,
version=_get_supervisor_version(),
dependencies=REQUIREMENTS.split("\n"),
)
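To make the version discovery above concrete, a small sketch of the regex at work; the const.py line is made up for illustration:

import re

RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")

constants = 'SUPERVISOR_VERSION = "2024.02.0"\n'  # hypothetical const.py content

for line in constants.split("\n"):
    if match := RE_SUPERVISOR_VERSION.match(line):
        print(match.group(1))  # -> "2024.02.0" (quotes included in the capture)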

View File

@@ -5,8 +5,15 @@ import logging
from pathlib import Path
import sys
from supervisor import bootstrap
from supervisor.utils.logging import activate_log_queue_handler
import zlib_fast
# Enable fast zlib before importing supervisor
zlib_fast.enable()
from supervisor import bootstrap # pylint: disable=wrong-import-position # noqa: E402
from supervisor.utils.logging import ( # pylint: disable=wrong-import-position # noqa: E402
activate_log_queue_handler,
)
_LOGGER: logging.Logger = logging.getLogger(__name__)
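The hunk above deliberately moves the supervisor imports below zlib_fast.enable(): the patch only helps code that resolves zlib functions after enable() has run. A minimal sketch of the same pattern, assuming (per the comment in the hunk) that zlib_fast swaps in a faster zlib backend when one is available:

import zlib_fast

# Patch zlib *before* importing anything that binds its functions at import time
zlib_fast.enable()

import zlib  # noqa: E402  # resolved after the patch

print(zlib.compress(b"hello world")[:4])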

View File

@@ -3,6 +3,8 @@ import asyncio
from collections.abc import Awaitable
from contextlib import suppress
from copy import deepcopy
from datetime import datetime
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path, PurePath
@@ -14,11 +16,14 @@ from tempfile import TemporaryDirectory
from typing import Any, Final
import aiohttp
from awesomeversion import AwesomeVersionCompareException
from deepmerge import Merger
from securetar import atomic_contents_add, secure_path
import voluptuous as vol
from voluptuous.humanize import humanize_error
from supervisor.utils.dt import utc_from_timestamp
from ..bus import EventListener
from ..const import (
ATTR_ACCESS_TOKEN,
@@ -45,9 +50,9 @@ from ..const import (
ATTR_USER,
ATTR_UUID,
ATTR_VERSION,
ATTR_VERSION_TIMESTAMP,
ATTR_WATCHDOG,
DNS_SUFFIX,
MAP_ADDON_CONFIG,
AddonBoot,
AddonStartup,
AddonState,
@@ -72,6 +77,7 @@ from ..hardware.data import Device
from ..homeassistant.const import WSEvent, WSType
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..store.addon import AddonStore
from ..utils import check_port
from ..utils.apparmor import adjust_profile
@@ -83,6 +89,7 @@ from .const import (
WATCHDOG_THROTTLE_MAX_CALLS,
WATCHDOG_THROTTLE_PERIOD,
AddonBackupMode,
MappingType,
)
from .model import AddonModel, Data
from .options import AddonOptions
@@ -277,6 +284,28 @@ class Addon(AddonModel):
"""Set auto update."""
self.persist[ATTR_AUTO_UPDATE] = value
@property
def auto_update_available(self) -> bool:
"""Return if it is safe to auto update addon."""
if not self.need_update or not self.auto_update:
return False
for version in self.breaking_versions:
try:
# Auto-update always goes to latest, so if this is true the update crosses a breaking version
if self.version < version:
return False
except AwesomeVersionCompareException:
# If version scheme changed, we may get compare exception
# If latest version >= breaking version then assume update will
# cross it as the version scheme changes
# If both versions have compare exception, ignore as its in the past
with suppress(AwesomeVersionCompareException):
if self.latest_version >= version:
return False
return True
@property
def watchdog(self) -> bool:
"""Return True if watchdog is enable."""
@@ -319,6 +348,11 @@ class Addon(AddonModel):
"""Return version of add-on."""
return self.data_store[ATTR_VERSION]
@property
def latest_version_timestamp(self) -> datetime:
"""Return when latest version was first seen."""
return utc_from_timestamp(self.data_store[ATTR_VERSION_TIMESTAMP])
@property
def protected(self) -> bool:
"""Return if add-on is in protected mode."""
@@ -465,7 +499,7 @@ class Addon(AddonModel):
@property
def addon_config_used(self) -> bool:
"""Add-on is using its public config folder."""
return MAP_ADDON_CONFIG in self.map_volumes
return MappingType.ADDON_CONFIG in self.map_volumes
@property
def path_config(self) -> Path:
@@ -716,7 +750,7 @@ class Addon(AddonModel):
store = self.addon_store.clone()
try:
await self.instance.update(store.version, store.image)
await self.instance.update(store.version, store.image, arch=self.arch)
except DockerError as err:
raise AddonsError() from err
@@ -768,6 +802,7 @@ class Addon(AddonModel):
raise AddonsError() from err
self.sys_addons.data.update(self.addon_store)
await self._check_ingress_port()
_LOGGER.info("Add-on '%s' successfully rebuilt", self.slug)
finally:
@@ -793,6 +828,8 @@ class Addon(AddonModel):
try:
self.path_pulse.write_text(pulse_config, encoding="utf-8")
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error(
"Add-on %s can't write pulse/client.config: %s", self.slug, err
)
@@ -1151,7 +1188,11 @@ class Addon(AddonModel):
def _extract_tarfile():
"""Extract tar backup."""
with tar_file as backup:
backup.extractall(path=Path(temp), members=secure_path(backup))
backup.extractall(
path=Path(temp),
members=secure_path(backup),
filter="fully_trusted",
)
try:
await self.sys_run_in_executor(_extract_tarfile)
@@ -1205,13 +1246,15 @@ class Addon(AddonModel):
await self.instance.import_image(image_file)
else:
with suppress(DockerError):
await self.instance.install(version, restore_image)
await self.instance.install(
version, restore_image, self.arch
)
await self.instance.cleanup()
elif self.instance.version != version or self.legacy:
_LOGGER.info("Restore/Update of image for addon %s", self.slug)
with suppress(DockerError):
await self.instance.update(version, restore_image)
self._check_ingress_port()
await self.instance.update(version, restore_image, self.arch)
await self._check_ingress_port()
# Restore data and config
def _restore_data():
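The auto_update_available property added above leans on awesomeversion comparisons, and the two-level exception handling is the subtle part. A stripped-down sketch of that decision; the version values are invented:

from awesomeversion import AwesomeVersion, AwesomeVersionCompareException

installed = AwesomeVersion("1.2.0")
latest = AwesomeVersion("2.0.0")
breaking_versions = [AwesomeVersion("1.5.0")]  # hypothetical add-on metadata

def crosses_breaking_version() -> bool:
    """Return True if updating installed -> latest crosses a breaking version."""
    for breaking in breaking_versions:
        try:
            # Auto-update always targets latest, so installed < breaking
            # means the update would cross it.
            if installed < breaking:
                return True
        except AwesomeVersionCompareException:
            # Version scheme changed: fall back to comparing latest, and
            # skip the entry if both comparisons fail (it is in the past).
            try:
                if latest >= breaking:
                    return True
            except AwesomeVersionCompareException:
                continue
    return False

print(crosses_breaking_version())  # True: 1.2.0 < 1.5.0 <= 2.0.0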

View File

@@ -0,0 +1,11 @@
"""Confgiuration Objects for Addon Config."""
from dataclasses import dataclass
@dataclass(slots=True)
class FolderMapping:
"""Represent folder mapping configuration."""
path: str | None
read_only: bool

View File

@@ -12,8 +12,26 @@ class AddonBackupMode(StrEnum):
COLD = "cold"
class MappingType(StrEnum):
"""Mapping type of an Add-on Folder."""
DATA = "data"
CONFIG = "config"
SSL = "ssl"
ADDONS = "addons"
BACKUP = "backup"
SHARE = "share"
MEDIA = "media"
HOMEASSISTANT_CONFIG = "homeassistant_config"
ALL_ADDON_CONFIGS = "all_addon_configs"
ADDON_CONFIG = "addon_config"
ATTR_BACKUP = "backup"
ATTR_BREAKING_VERSIONS = "breaking_versions"
ATTR_CODENOTARY = "codenotary"
ATTR_READ_ONLY = "read_only"
ATTR_PATH = "path"
WATCHDOG_RETRY_SECONDS = 10
WATCHDOG_MAX_ATTEMPTS = 5
WATCHDOG_THROTTLE_PERIOD = timedelta(minutes=30)

View File

@@ -3,12 +3,15 @@ from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Callable
from contextlib import suppress
from datetime import datetime
import logging
from pathlib import Path
from typing import Any
from awesomeversion import AwesomeVersion, AwesomeVersionException
from supervisor.utils.dt import utc_from_timestamp
from ..const import (
ATTR_ADVANCED,
ATTR_APPARMOR,
@@ -65,11 +68,13 @@ from ..const import (
ATTR_TIMEOUT,
ATTR_TMPFS,
ATTR_TRANSLATIONS,
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_URL,
ATTR_USB,
ATTR_VERSION,
ATTR_VERSION_TIMESTAMP,
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
@@ -86,9 +91,18 @@ from ..exceptions import AddonsNotSupportedError
from ..jobs.const import JOB_GROUP_ADDON
from ..jobs.job_group import JobGroup
from ..utils import version_is_new_enough
from .const import ATTR_BACKUP, ATTR_CODENOTARY, AddonBackupMode
from .configuration import FolderMapping
from .const import (
ATTR_BACKUP,
ATTR_BREAKING_VERSIONS,
ATTR_CODENOTARY,
ATTR_PATH,
ATTR_READ_ONLY,
AddonBackupMode,
MappingType,
)
from .options import AddonOptions, UiOptions
from .validate import RE_SERVICE, RE_VOLUME
from .validate import RE_SERVICE
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -212,6 +226,11 @@ class AddonModel(JobGroup, ABC):
"""Return latest version of add-on."""
return self.data[ATTR_VERSION]
@property
def latest_version_timestamp(self) -> datetime:
"""Return when latest version was first seen."""
return utc_from_timestamp(self.data[ATTR_VERSION_TIMESTAMP])
@property
def version(self) -> AwesomeVersion:
"""Return version of add-on."""
@@ -538,14 +557,13 @@ class AddonModel(JobGroup, ABC):
return ATTR_IMAGE not in self.data
@property
def map_volumes(self) -> dict[str, bool]:
"""Return a dict of {volume: read-only} from add-on."""
def map_volumes(self) -> dict[MappingType, FolderMapping]:
"""Return a dict of {MappingType: FolderMapping} from add-on."""
volumes = {}
for volume in self.data[ATTR_MAP]:
result = RE_VOLUME.match(volume)
if not result:
continue
volumes[result.group(1)] = result.group(2) != "rw"
volumes[MappingType(volume[ATTR_TYPE])] = FolderMapping(
volume.get(ATTR_PATH), volume[ATTR_READ_ONLY]
)
return volumes
@@ -612,6 +630,11 @@ class AddonModel(JobGroup, ABC):
"""Return Signer email address for CAS."""
return self.data.get(ATTR_CODENOTARY)
@property
def breaking_versions(self) -> list[AwesomeVersion]:
"""Return breaking versions of addon."""
return self.data[ATTR_BREAKING_VERSIONS]
def validate_availability(self) -> None:
"""Validate if addon is available for current system."""
return self._validate_availability(self.data, logger=_LOGGER.error)
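To make the new map_volumes shape concrete, a sketch of what the property now returns for a hypothetical "map" config; MappingType and FolderMapping mirror the diff, the entries are invented:

from dataclasses import dataclass
from enum import Enum

class MappingType(str, Enum):  # stand-in for the StrEnum in the diff
    CONFIG = "config"
    ADDON_CONFIG = "addon_config"

@dataclass(slots=True)
class FolderMapping:
    path: str | None
    read_only: bool

raw_map = [  # hypothetical post-migration "map" entries
    {"type": "addon_config", "read_only": False, "path": "/config"},
    {"type": "config", "read_only": True},
]

volumes = {
    MappingType(entry["type"]): FolderMapping(entry.get("path"), entry["read_only"])
    for entry in raw_map
}
print(volumes[MappingType.ADDON_CONFIG].read_only)  # False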

View File

@@ -81,6 +81,7 @@ from ..const import (
ATTR_TIMEOUT,
ATTR_TMPFS,
ATTR_TRANSLATIONS,
ATTR_TYPE,
ATTR_UART,
ATTR_UDEV,
ATTR_URL,
@@ -91,9 +92,6 @@ from ..const import (
ATTR_VIDEO,
ATTR_WATCHDOG,
ATTR_WEBUI,
MAP_ADDON_CONFIG,
MAP_CONFIG,
MAP_HOMEASSISTANT_CONFIG,
ROLE_ALL,
ROLE_DEFAULT,
AddonBoot,
@@ -112,13 +110,22 @@ from ..validate import (
uuid_match,
version_tag,
)
from .const import ATTR_BACKUP, ATTR_CODENOTARY, RE_SLUG, AddonBackupMode
from .const import (
ATTR_BACKUP,
ATTR_BREAKING_VERSIONS,
ATTR_CODENOTARY,
ATTR_PATH,
ATTR_READ_ONLY,
RE_SLUG,
AddonBackupMode,
MappingType,
)
from .options import RE_SCHEMA_ELEMENT
_LOGGER: logging.Logger = logging.getLogger(__name__)
RE_VOLUME = re.compile(
r"^(config|ssl|addons|backup|share|media|homeassistant_config|all_addon_configs|addon_config)(?::(rw|ro))?$"
r"^(data|config|ssl|addons|backup|share|media|homeassistant_config|all_addon_configs|addon_config)(?::(rw|ro))?$"
)
RE_SERVICE = re.compile(r"^(?P<service>mqtt|mysql):(?P<rights>provide|want|need)$")
@@ -266,26 +273,45 @@ def _migrate_addon_config(protocol=False):
name,
)
# 2023-11 "map" entries can also be dict to allow path configuration
volumes = []
for entry in config.get(ATTR_MAP, []):
if isinstance(entry, dict):
volumes.append(entry)
if isinstance(entry, str):
result = RE_VOLUME.match(entry)
if not result:
continue
volumes.append(
{
ATTR_TYPE: result.group(1),
ATTR_READ_ONLY: result.group(2) != "rw",
}
)
if volumes:
config[ATTR_MAP] = volumes
# 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
volumes = [RE_VOLUME.match(entry) for entry in config.get(ATTR_MAP, [])]
if any(volume and volume.group(1) == MAP_CONFIG for volume in volumes):
if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
if any(
volume
and volume.group(1) in {MAP_ADDON_CONFIG, MAP_HOMEASSISTANT_CONFIG}
and volume[ATTR_TYPE]
in {MappingType.ADDON_CONFIG, MappingType.HOMEASSISTANT_CONFIG}
for volume in volumes
):
_LOGGER.warning(
"Add-on config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
MAP_ADDON_CONFIG,
MAP_HOMEASSISTANT_CONFIG,
MAP_CONFIG,
MappingType.ADDON_CONFIG,
MappingType.HOMEASSISTANT_CONFIG,
MappingType.CONFIG,
name,
)
else:
_LOGGER.debug(
"Add-on config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
MAP_CONFIG,
MAP_HOMEASSISTANT_CONFIG,
MappingType.CONFIG,
MappingType.HOMEASSISTANT_CONFIG,
name,
)
@@ -337,7 +363,15 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Optional(ATTR_DEVICES): [str],
vol.Optional(ATTR_UDEV, default=False): vol.Boolean(),
vol.Optional(ATTR_TMPFS, default=False): vol.Boolean(),
vol.Optional(ATTR_MAP, default=list): [vol.Match(RE_VOLUME)],
vol.Optional(ATTR_MAP, default=list): [
vol.Schema(
{
vol.Required(ATTR_TYPE): vol.Coerce(MappingType),
vol.Optional(ATTR_READ_ONLY, default=True): bool,
vol.Optional(ATTR_PATH): str,
}
)
],
vol.Optional(ATTR_ENVIRONMENT): {vol.Match(r"\w*"): str},
vol.Optional(ATTR_PRIVILEGED): [vol.Coerce(Capabilities)],
vol.Optional(ATTR_APPARMOR, default=True): vol.Boolean(),
@@ -389,6 +423,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
vol.Coerce(int), vol.Range(min=10, max=300)
),
vol.Optional(ATTR_JOURNALD, default=False): vol.Boolean(),
vol.Optional(ATTR_BREAKING_VERSIONS, default=list): [version_tag],
},
extra=vol.REMOVE_EXTRA,
)
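The migration block above rewrites legacy string "map" entries into the new dict form. Roughly, with the volume regex copied from the diff:

import re

RE_VOLUME = re.compile(
    r"^(data|config|ssl|addons|backup|share|media|homeassistant_config"
    r"|all_addon_configs|addon_config)(?::(rw|ro))?$"
)

def migrate_entry(entry):
    """Sketch: legacy 'config:rw' string -> dict entry; dicts pass through."""
    if isinstance(entry, dict):
        return entry
    if result := RE_VOLUME.match(entry):
        # read_only defaults to True unless ':rw' was given explicitly
        return {"type": result.group(1), "read_only": result.group(2) != "rw"}
    return None  # unmatched entries are dropped, as in the diff

print(migrate_entry("config:rw"))  # {'type': 'config', 'read_only': False}
print(migrate_entry("ssl"))        # {'type': 'ssl', 'read_only': True}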

View File

@@ -219,6 +219,8 @@ class RestAPI(CoreSysAttributes):
web.get("/jobs/info", api_jobs.info),
web.post("/jobs/options", api_jobs.options),
web.post("/jobs/reset", api_jobs.reset),
web.get("/jobs/{uuid}", api_jobs.job_info),
web.delete("/jobs/{uuid}", api_jobs.remove_job),
]
)

View File

@@ -11,6 +11,7 @@ from ..addons.addon import Addon
from ..const import ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
from ..coresys import CoreSysAttributes
from ..exceptions import APIForbidden
from ..utils.json import json_loads
from .const import CONTENT_TYPE_JSON, CONTENT_TYPE_URL
from .utils import api_process, api_validate
@@ -67,7 +68,7 @@ class APIAuth(CoreSysAttributes):
# Json
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
data = await request.json()
data = await request.json(loads=json_loads)
return await self._process_dict(request, addon, data)
# URL encoded

View File

@@ -1,5 +1,7 @@
"""Backups RESTful API."""
import asyncio
from collections.abc import Callable
import errno
import logging
from pathlib import Path
import re
@@ -10,6 +12,7 @@ from aiohttp import web
from aiohttp.hdrs import CONTENT_DISPOSITION
import voluptuous as vol
from ..backups.backup import Backup
from ..backups.validate import ALL_FOLDERS, FOLDER_HOMEASSISTANT, days_until_stale
from ..const import (
ATTR_ADDONS,
@@ -32,11 +35,15 @@ from ..const import (
ATTR_TIMEOUT,
ATTR_TYPE,
ATTR_VERSION,
BusEvent,
CoreState,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APIError
from ..jobs import JobSchedulerOptions
from ..mounts.const import MountUsage
from .const import CONTENT_TYPE_TAR
from ..resolution.const import UnhealthyReason
from .const import ATTR_BACKGROUND, ATTR_JOB_ID, CONTENT_TYPE_TAR
from .utils import api_process, api_validate
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -48,17 +55,21 @@ RE_SLUGIFY_NAME = re.compile(r"[^A-Za-z0-9]+")
_ALL_FOLDERS = ALL_FOLDERS + [FOLDER_HOMEASSISTANT]
# pylint: disable=no-value-for-parameter
SCHEMA_RESTORE_PARTIAL = vol.Schema(
SCHEMA_RESTORE_FULL = vol.Schema(
{
vol.Optional(ATTR_PASSWORD): vol.Maybe(str),
vol.Optional(ATTR_BACKGROUND, default=False): vol.Boolean(),
}
)
SCHEMA_RESTORE_PARTIAL = SCHEMA_RESTORE_FULL.extend(
{
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
vol.Optional(ATTR_ADDONS): vol.All([str], vol.Unique()),
vol.Optional(ATTR_FOLDERS): vol.All([vol.In(_ALL_FOLDERS)], vol.Unique()),
}
)
SCHEMA_RESTORE_FULL = vol.Schema({vol.Optional(ATTR_PASSWORD): vol.Maybe(str)})
SCHEMA_BACKUP_FULL = vol.Schema(
{
vol.Optional(ATTR_NAME): str,
@@ -66,6 +77,7 @@ SCHEMA_BACKUP_FULL = vol.Schema(
vol.Optional(ATTR_COMPRESSED): vol.Maybe(vol.Boolean()),
vol.Optional(ATTR_LOCATON): vol.Maybe(str),
vol.Optional(ATTR_HOMEASSISTANT_EXCLUDE_DATABASE): vol.Boolean(),
vol.Optional(ATTR_BACKGROUND, default=False): vol.Boolean(),
}
)
@@ -202,46 +214,109 @@ class APIBackups(CoreSysAttributes):
return body
async def _background_backup_task(
self, backup_method: Callable, *args, **kwargs
) -> tuple[asyncio.Task, str]:
"""Start backup task in background and return task and job ID."""
event = asyncio.Event()
job, backup_task = self.sys_jobs.schedule_job(
backup_method, JobSchedulerOptions(), *args, **kwargs
)
async def release_on_freeze(new_state: CoreState):
if new_state == CoreState.FREEZE:
event.set()
# Wait for system to get into freeze state before returning
# If the backup fails validation it will raise before getting there
listener = self.sys_bus.register_event(
BusEvent.SUPERVISOR_STATE_CHANGE, release_on_freeze
)
try:
await asyncio.wait(
(
backup_task,
self.sys_create_task(event.wait()),
),
return_when=asyncio.FIRST_COMPLETED,
)
return (backup_task, job.uuid)
finally:
self.sys_bus.remove_listener(listener)
@api_process
async def backup_full(self, request):
"""Create full backup."""
body = await api_validate(SCHEMA_BACKUP_FULL, request)
backup = await asyncio.shield(
self.sys_backups.do_backup_full(**self._location_to_mount(body))
background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_full, **self._location_to_mount(body)
)
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
if backup:
return {ATTR_SLUG: backup.slug}
return False
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
f"An error occurred while making backup, check job '{job_id}' or supervisor logs for details",
job_id=job_id,
)
@api_process
async def backup_partial(self, request):
"""Create a partial backup."""
body = await api_validate(SCHEMA_BACKUP_PARTIAL, request)
backup = await asyncio.shield(
self.sys_backups.do_backup_partial(**self._location_to_mount(body))
background = body.pop(ATTR_BACKGROUND)
backup_task, job_id = await self._background_backup_task(
self.sys_backups.do_backup_partial, **self._location_to_mount(body)
)
if background and not backup_task.done():
return {ATTR_JOB_ID: job_id}
backup: Backup = await backup_task
if backup:
return {ATTR_SLUG: backup.slug}
return False
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
raise APIError(
f"An error occurred while making backup, check job '{job_id}' or supervisor logs for details",
job_id=job_id,
)
@api_process
async def restore_full(self, request):
"""Full restore of a backup."""
backup = self._extract_slug(request)
body = await api_validate(SCHEMA_RESTORE_FULL, request)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_full, backup, **body
)
return await asyncio.shield(self.sys_backups.do_restore_full(backup, **body))
if background and not restore_task.done() or await restore_task:
return {ATTR_JOB_ID: job_id}
raise APIError(
f"An error occurred during restore of {backup.slug}, check job '{job_id}' or supervisor logs for details",
job_id=job_id,
)
@api_process
async def restore_partial(self, request):
"""Partial restore a backup."""
backup = self._extract_slug(request)
body = await api_validate(SCHEMA_RESTORE_PARTIAL, request)
background = body.pop(ATTR_BACKGROUND)
restore_task, job_id = await self._background_backup_task(
self.sys_backups.do_restore_partial, backup, **body
)
return await asyncio.shield(self.sys_backups.do_restore_partial(backup, **body))
if background and not restore_task.done() or await restore_task:
return {ATTR_JOB_ID: job_id}
raise APIError(
f"An error occurred during restore of {backup.slug}, check job '{job_id}' or supervisor logs for details",
job_id=job_id,
)
@api_process
async def freeze(self, request):
@@ -288,6 +363,8 @@ class APIBackups(CoreSysAttributes):
backup.write(chunk)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't write new backup file: %s", err)
return False
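The _background_backup_task helper above is the heart of this file's change: schedule the backup as a job, then return as soon as either the backup task finishes (fast validation failure) or the system reaches the FREEZE state (backup genuinely underway). A stripped-down sketch of that race, using plain asyncio stand-ins for the job scheduler and event bus:

import asyncio

async def fake_backup(started: asyncio.Event) -> str:
    started.set()           # stands in for the FREEZE state-change event
    await asyncio.sleep(5)  # long-running backup work
    return "slug123"

async def start_in_background() -> str:
    started = asyncio.Event()
    task = asyncio.create_task(fake_backup(started))
    # Return control once the backup is underway OR it ended early.
    await asyncio.wait(
        (task, asyncio.create_task(started.wait())),
        return_when=asyncio.FIRST_COMPLETED,
    )
    if task.done():           # finished (or failed validation) before freeze
        return task.result()
    return "job-id-1234"      # hypothetical job ID handed back to the caller

print(asyncio.run(start_in_background()))  # -> job-id-1234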

View File

@@ -13,6 +13,7 @@ ATTR_AGENT_VERSION = "agent_version"
ATTR_APPARMOR_VERSION = "apparmor_version"
ATTR_ATTRIBUTES = "attributes"
ATTR_AVAILABLE_UPDATES = "available_updates"
ATTR_BACKGROUND = "background"
ATTR_BOOT_TIMESTAMP = "boot_timestamp"
ATTR_BOOTS = "boots"
ATTR_BROADCAST_LLMNR = "broadcast_llmnr"
@@ -31,6 +32,7 @@ ATTR_EJECTABLE = "ejectable"
ATTR_FALLBACK = "fallback"
ATTR_FILESYSTEMS = "filesystems"
ATTR_IDENTIFIERS = "identifiers"
ATTR_JOB_ID = "job_id"
ATTR_JOBS = "jobs"
ATTR_LLMNR = "llmnr"
ATTR_LLMNR_HOSTNAME = "llmnr_hostname"

View File

@@ -6,6 +6,7 @@ from aiohttp import web
import voluptuous as vol
from ..coresys import CoreSysAttributes
from ..exceptions import APIError
from ..jobs import SupervisorJob
from ..jobs.const import ATTR_IGNORE_CONDITIONS, JobCondition
from .const import ATTR_JOBS
@@ -21,7 +22,7 @@ SCHEMA_OPTIONS = vol.Schema(
class APIJobs(CoreSysAttributes):
"""Handle RESTful API for OS functions."""
def _list_jobs(self) -> list[dict[str, Any]]:
def _list_jobs(self, start: SupervisorJob | None = None) -> list[dict[str, Any]]:
"""Return current job tree."""
jobs_by_parent: dict[str | None, list[SupervisorJob]] = {}
for job in self.sys_jobs.jobs:
@@ -34,9 +35,11 @@ class APIJobs(CoreSysAttributes):
jobs_by_parent[job.parent_id].append(job)
job_list: list[dict[str, Any]] = []
queue: list[tuple[list[dict[str, Any]], SupervisorJob]] = [
(job_list, job) for job in jobs_by_parent.get(None, [])
]
queue: list[tuple[list[dict[str, Any]], SupervisorJob]] = (
[(job_list, start)]
if start
else [(job_list, job) for job in jobs_by_parent.get(None, [])]
)
while queue:
(current_list, current_job) = queue.pop(0)
@@ -78,3 +81,19 @@ class APIJobs(CoreSysAttributes):
async def reset(self, request: web.Request) -> None:
"""Reset options for JobManager."""
self.sys_jobs.reset_data()
@api_process
async def job_info(self, request: web.Request) -> dict[str, Any]:
"""Get details of a job by ID."""
job = self.sys_jobs.get_job(request.match_info.get("uuid"))
return self._list_jobs(job)[0]
@api_process
async def remove_job(self, request: web.Request) -> None:
"""Remove a completed job."""
job = self.sys_jobs.get_job(request.match_info.get("uuid"))
if not job.done:
raise APIError(f"Job {job.uuid} is not done!")
self.sys_jobs.remove_job(job)
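The _list_jobs change above lets one breadth-first walk serve both the full job tree and a single subtree: seed the queue with the requested job instead of all roots. A toy version of the traversal with invented data:

from collections import defaultdict

jobs = [("a", None), ("b", "a"), ("c", "a"), ("d", None)]  # (uuid, parent_id)
children = defaultdict(list)
for uuid, parent in jobs:
    children[parent].append(uuid)

def list_jobs(start=None):
    """Return a nested {'uuid': ..., 'child_jobs': [...]} tree."""
    out = []
    queue = [(out, job) for job in ([start] if start else children[None])]
    while queue:
        bucket, job = queue.pop(0)
        entry = {"uuid": job, "child_jobs": []}
        bucket.append(entry)
        queue.extend((entry["child_jobs"], child) for child in children[job])
    return out

print(list_jobs())     # full tree: a (with b, c) and d
print(list_jobs("a"))  # only the subtree rooted at a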

View File

@@ -103,6 +103,8 @@ ADDONS_ROLE_ACCESS: dict[str, re.Pattern] = {
r"|/addons(?:/" + RE_SLUG + r"/(?!security).+|/reload)?"
r"|/audio/.+"
r"|/auth/cache"
r"|/available_updates"
r"|/backups.*"
r"|/cli/.+"
r"|/core/.+"
r"|/dns/.+"
@@ -112,16 +114,17 @@ ADDONS_ROLE_ACCESS: dict[str, re.Pattern] = {
r"|/hassos/.+"
r"|/homeassistant/.+"
r"|/host/.+"
r"|/mounts.*"
r"|/multicast/.+"
r"|/network/.+"
r"|/observer/.+"
r"|/os/.+"
r"|/refresh_updates"
r"|/resolution/.+"
r"|/backups.*"
r"|/security/.+"
r"|/snapshots.*"
r"|/store.*"
r"|/supervisor/.+"
r"|/security/.+"
r")$"
),
ROLE_ADMIN: re.compile(

View File

@@ -130,13 +130,17 @@ class APIOS(CoreSysAttributes):
body = await api_validate(SCHEMA_GREEN_OPTIONS, request)
if ATTR_ACTIVITY_LED in body:
self.sys_dbus.agent.board.green.activity_led = body[ATTR_ACTIVITY_LED]
await self.sys_dbus.agent.board.green.set_activity_led(
body[ATTR_ACTIVITY_LED]
)
if ATTR_POWER_LED in body:
self.sys_dbus.agent.board.green.power_led = body[ATTR_POWER_LED]
await self.sys_dbus.agent.board.green.set_power_led(body[ATTR_POWER_LED])
if ATTR_SYSTEM_HEALTH_LED in body:
self.sys_dbus.agent.board.green.user_led = body[ATTR_SYSTEM_HEALTH_LED]
await self.sys_dbus.agent.board.green.set_user_led(
body[ATTR_SYSTEM_HEALTH_LED]
)
self.sys_dbus.agent.board.green.save_data()
@@ -155,13 +159,15 @@ class APIOS(CoreSysAttributes):
body = await api_validate(SCHEMA_YELLOW_OPTIONS, request)
if ATTR_DISK_LED in body:
self.sys_dbus.agent.board.yellow.disk_led = body[ATTR_DISK_LED]
await self.sys_dbus.agent.board.yellow.set_disk_led(body[ATTR_DISK_LED])
if ATTR_HEARTBEAT_LED in body:
self.sys_dbus.agent.board.yellow.heartbeat_led = body[ATTR_HEARTBEAT_LED]
await self.sys_dbus.agent.board.yellow.set_heartbeat_led(
body[ATTR_HEARTBEAT_LED]
)
if ATTR_POWER_LED in body:
self.sys_dbus.agent.board.yellow.power_led = body[ATTR_POWER_LED]
await self.sys_dbus.agent.board.yellow.set_power_led(body[ATTR_POWER_LED])
self.sys_dbus.agent.board.yellow.save_data()
self.sys_resolution.create_issue(

View File

@@ -14,6 +14,7 @@ from aiohttp.web_exceptions import HTTPBadGateway, HTTPUnauthorized
from ..coresys import CoreSysAttributes
from ..exceptions import APIError, HomeAssistantAPIError, HomeAssistantAuthError
from ..utils.json import json_dumps
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -145,7 +146,8 @@ class APIProxy(CoreSysAttributes):
{
"type": "auth",
"access_token": self.sys_homeassistant.api.access_token,
}
},
dumps=json_dumps,
)
data = await client.receive_json()
@@ -202,7 +204,8 @@ class APIProxy(CoreSysAttributes):
# handle authentication
try:
await server.send_json(
{"type": "auth_required", "ha_version": self.sys_homeassistant.version}
{"type": "auth_required", "ha_version": self.sys_homeassistant.version},
dumps=json_dumps,
)
# Check API access
@@ -215,14 +218,16 @@ class APIProxy(CoreSysAttributes):
if not addon or not addon.access_homeassistant_api:
_LOGGER.warning("Unauthorized WebSocket access!")
await server.send_json(
{"type": "auth_invalid", "message": "Invalid access"}
{"type": "auth_invalid", "message": "Invalid access"},
dumps=json_dumps,
)
return server
_LOGGER.info("WebSocket access from %s", addon.slug)
await server.send_json(
{"type": "auth_ok", "ha_version": self.sys_homeassistant.version}
{"type": "auth_ok", "ha_version": self.sys_homeassistant.version},
dumps=json_dumps,
)
except (RuntimeError, ValueError) as err:
_LOGGER.error("Can't initialize handshake: %s", err)

View File

@@ -140,7 +140,7 @@ class APISupervisor(CoreSysAttributes):
if ATTR_DIAGNOSTICS in body:
self.sys_config.diagnostics = body[ATTR_DIAGNOSTICS]
self.sys_dbus.agent.diagnostics = body[ATTR_DIAGNOSTICS]
await self.sys_dbus.agent.set_diagnostics(body[ATTR_DIAGNOSTICS])
if body[ATTR_DIAGNOSTICS]:
init_sentry(self.coresys)

View File

@@ -13,6 +13,7 @@ from ..const import (
HEADER_TOKEN,
HEADER_TOKEN_OLD,
JSON_DATA,
JSON_JOB_ID,
JSON_MESSAGE,
JSON_RESULT,
REQUEST_FROM,
@@ -22,7 +23,7 @@ from ..const import (
from ..coresys import CoreSys
from ..exceptions import APIError, APIForbidden, DockerAPIError, HassioError
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils.json import JSONEncoder
from ..utils.json import json_dumps, json_loads as json_loads_util
from ..utils.log_format import format_message
from .const import CONTENT_TYPE_BINARY
@@ -48,7 +49,7 @@ def json_loads(data: Any) -> dict[str, Any]:
if not data:
return {}
try:
return json.loads(data)
return json_loads_util(data)
except json.JSONDecodeError as err:
raise APIError("Invalid json") from err
@@ -124,13 +125,17 @@ def api_return_error(
if check_exception_chain(error, DockerAPIError):
message = format_message(message)
result = {
JSON_RESULT: RESULT_ERROR,
JSON_MESSAGE: message or "Unknown error, see supervisor",
}
if isinstance(error, APIError) and error.job_id:
result[JSON_JOB_ID] = error.job_id
return web.json_response(
{
JSON_RESULT: RESULT_ERROR,
JSON_MESSAGE: message or "Unknown error, see supervisor",
},
result,
status=400,
dumps=lambda x: json.dumps(x, cls=JSONEncoder),
dumps=json_dumps,
)
@@ -138,7 +143,7 @@ def api_return_ok(data: dict[str, Any] | None = None) -> web.Response:
"""Return an API ok answer."""
return web.json_response(
{JSON_RESULT: RESULT_OK, JSON_DATA: data or {}},
dumps=lambda x: json.dumps(x, cls=JSONEncoder),
dumps=json_dumps,
)
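The json_dumps/json_loads helpers this file switches to are defined elsewhere and not shown in this compare view. A plausible minimal shape, assuming they wrap orjson (which requirements.txt pins above); the bodies here are a guess, only the names come from the diff:

from typing import Any

import orjson

def json_dumps(data: Any) -> str:
    """Serialize to a JSON string (aiohttp's dumps= hooks expect str)."""
    return orjson.dumps(data).decode("utf-8")

json_loads = orjson.loads  # bytes/str -> Python objects

print(json_dumps({"result": "ok", "data": {}}))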

View File

@@ -1,14 +1,18 @@
"""Representation of a backup file."""
import asyncio
from base64 import b64decode, b64encode
from collections import defaultdict
from collections.abc import Awaitable
from copy import deepcopy
from datetime import timedelta
from functools import cached_property
import io
import json
import logging
from pathlib import Path
import tarfile
from tempfile import TemporaryDirectory
import time
from typing import Any
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
@@ -42,11 +46,14 @@ from ..const import (
ATTR_VERSION,
CRYPTO_AES128,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import AddonsError, BackupError
from ..coresys import CoreSys
from ..exceptions import AddonsError, BackupError, BackupInvalidError
from ..jobs.const import JOB_GROUP_BACKUP
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..utils import remove_folder
from ..utils.dt import parse_datetime, utcnow
from ..utils.json import write_json_file
from ..utils.json import json_bytes
from .const import BUF_SIZE, BackupType
from .utils import key_to_iv, password_to_key
from .validate import SCHEMA_BACKUP
@@ -54,15 +61,25 @@ from .validate import SCHEMA_BACKUP
_LOGGER: logging.Logger = logging.getLogger(__name__)
class Backup(CoreSysAttributes):
class Backup(JobGroup):
"""A single Supervisor backup."""
def __init__(self, coresys: CoreSys, tar_file: Path):
def __init__(
self,
coresys: CoreSys,
tar_file: Path,
slug: str,
data: dict[str, Any] | None = None,
):
"""Initialize a backup."""
self.coresys: CoreSys = coresys
super().__init__(
coresys, JOB_GROUP_BACKUP.format_map(defaultdict(str, slug=slug)), slug
)
self._tarfile: Path = tar_file
self._data: dict[str, Any] = {}
self._data: dict[str, Any] = data or {ATTR_SLUG: slug}
self._tmp = None
self._outer_secure_tarfile: SecureTarFile | None = None
self._outer_secure_tarfile_tarfile: tarfile.TarFile | None = None
self._key: bytes | None = None
self._aes: Cipher | None = None
@@ -87,7 +104,7 @@ class Backup(CoreSysAttributes):
return self._data[ATTR_NAME]
@property
def date(self):
def date(self) -> str:
"""Return backup date."""
return self._data[ATTR_DATE]
@@ -102,32 +119,32 @@ class Backup(CoreSysAttributes):
return self._data[ATTR_COMPRESSED]
@property
def addons(self):
def addons(self) -> list[dict[str, Any]]:
"""Return backup date."""
return self._data[ATTR_ADDONS]
@property
def addon_list(self):
def addon_list(self) -> list[str]:
"""Return a list of add-ons slugs."""
return [addon_data[ATTR_SLUG] for addon_data in self.addons]
@property
def folders(self):
def folders(self) -> list[str]:
"""Return list of saved folders."""
return self._data[ATTR_FOLDERS]
@property
def repositories(self):
def repositories(self) -> list[str]:
"""Return backup date."""
return self._data[ATTR_REPOSITORIES]
@repositories.setter
def repositories(self, value):
def repositories(self, value: list[str]) -> None:
"""Set backup date."""
self._data[ATTR_REPOSITORIES] = value
@property
def homeassistant_version(self):
def homeassistant_version(self) -> AwesomeVersion:
"""Return backup Home Assistant version."""
if self.homeassistant is None:
return None
@@ -141,7 +158,7 @@ class Backup(CoreSysAttributes):
return self.homeassistant[ATTR_EXCLUDE_DATABASE]
@property
def homeassistant(self):
def homeassistant(self) -> dict[str, Any]:
"""Return backup Home Assistant data."""
return self._data[ATTR_HOMEASSISTANT]
@@ -151,12 +168,12 @@ class Backup(CoreSysAttributes):
return self._data[ATTR_SUPERVISOR_VERSION]
@property
def docker(self):
def docker(self) -> dict[str, Any]:
"""Return backup Docker config data."""
return self._data.get(ATTR_DOCKER, {})
@docker.setter
def docker(self, value):
def docker(self, value: dict[str, Any]) -> None:
"""Set the Docker config data."""
self._data[ATTR_DOCKER] = value
@@ -169,32 +186,36 @@ class Backup(CoreSysAttributes):
return None
@property
def size(self):
def size(self) -> float:
"""Return backup size."""
if not self.tarfile.is_file():
return 0
return round(self.tarfile.stat().st_size / 1048576, 2) # calc mbyte
@property
def is_new(self):
def is_new(self) -> bool:
"""Return True if there is new."""
return not self.tarfile.exists()
@property
def tarfile(self):
def tarfile(self) -> Path:
"""Return path to backup tarfile."""
return self._tarfile
@property
def is_current(self):
def is_current(self) -> bool:
"""Return true if backup is current, false if stale."""
return parse_datetime(self.date) >= utcnow() - timedelta(
days=self.sys_backups.days_until_stale
)
@property
def data(self) -> dict[str, Any]:
"""Returns a copy of the data."""
return deepcopy(self._data)
def new(
self,
slug: str,
name: str,
date: str,
sys_type: BackupType,
@@ -204,7 +225,6 @@ class Backup(CoreSysAttributes):
"""Initialize a new backup."""
# Init metadata
self._data[ATTR_VERSION] = 2
self._data[ATTR_SLUG] = slug
self._data[ATTR_NAME] = name
self._data[ATTR_DATE] = date
self._data[ATTR_TYPE] = sys_type
@@ -305,25 +325,55 @@ class Backup(CoreSysAttributes):
async def __aenter__(self):
"""Async context to open a backup."""
self._tmp = TemporaryDirectory(dir=str(self.tarfile.parent))
# create a backup
if not self.tarfile.is_file():
return self
self._outer_secure_tarfile = SecureTarFile(
self.tarfile,
"w",
gzip=False,
bufsize=BUF_SIZE,
)
self._outer_secure_tarfile_tarfile = self._outer_secure_tarfile.__enter__()
return
# extract an existing backup
self._tmp = TemporaryDirectory(dir=str(self.tarfile.parent))
def _extract_backup():
"""Extract a backup."""
with tarfile.open(self.tarfile, "r:") as tar:
tar.extractall(path=self._tmp.name, members=secure_path(tar))
tar.extractall(
path=self._tmp.name,
members=secure_path(tar),
filter="fully_trusted",
)
await self.sys_run_in_executor(_extract_backup)
async def __aexit__(self, exception_type, exception_value, traceback):
"""Async context to close a backup."""
# exists backup or exception on build
if self.tarfile.is_file() or exception_type is not None:
self._tmp.cleanup()
try:
await self._aexit(exception_type, exception_value, traceback)
finally:
if self._tmp:
self._tmp.cleanup()
if self._outer_secure_tarfile:
self._outer_secure_tarfile.__exit__(
exception_type, exception_value, traceback
)
self._outer_secure_tarfile = None
self._outer_secure_tarfile_tarfile = None
async def _aexit(self, exception_type, exception_value, traceback):
"""Cleanup after backup creation.
This is a separate method to allow it to be called from __aexit__ to ensure
that cleanup is always performed, even if an exception is raised.
"""
# If we're not creating a new backup, or if an exception was raised, we're done
if not self._outer_secure_tarfile or exception_type is not None:
return
# validate data
@@ -336,157 +386,254 @@ class Backup(CoreSysAttributes):
raise ValueError("Invalid config") from None
# new backup, build it
def _create_backup():
def _add_backup_json():
"""Create a new backup."""
with tarfile.open(self.tarfile, "w:") as tar:
tar.add(self._tmp.name, arcname=".")
raw_bytes = json_bytes(self._data)
fileobj = io.BytesIO(raw_bytes)
tar_info = tarfile.TarInfo(name="./backup.json")
tar_info.size = len(raw_bytes)
tar_info.mtime = int(time.time())
self._outer_secure_tarfile_tarfile.addfile(tar_info, fileobj=fileobj)
try:
write_json_file(Path(self._tmp.name, "backup.json"), self._data)
await self.sys_run_in_executor(_create_backup)
await self.sys_run_in_executor(_add_backup_json)
except (OSError, json.JSONDecodeError) as err:
self.sys_jobs.current.capture_error(BackupError("Can't write backup"))
_LOGGER.error("Can't write backup: %s", err)
finally:
self._tmp.cleanup()
@Job(name="backup_addon_save", cleanup=False)
async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
"""Store an add-on into backup."""
self.sys_jobs.current.reference = addon.slug
tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
)
# Take backup
try:
start_task = await addon.backup(addon_file)
except AddonsError as err:
raise BackupError(
f"Can't create backup for {addon.slug}", _LOGGER.error
) from err
# Store to config
self._data[ATTR_ADDONS].append(
{
ATTR_SLUG: addon.slug,
ATTR_NAME: addon.name,
ATTR_VERSION: addon.version,
ATTR_SIZE: addon_file.size,
}
)
return start_task
@Job(name="backup_store_addons", cleanup=False)
async def store_addons(self, addon_list: list[str]) -> list[asyncio.Task]:
"""Add a list of add-ons into backup.
For each addon that needs to be started after backup, returns a Task which
completes when that addon has state 'started' (see addon.start).
"""
async def _addon_save(addon: Addon) -> asyncio.Task | None:
"""Task to store an add-on into backup."""
tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"w",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
)
# Take backup
try:
start_task = await addon.backup(addon_file)
except AddonsError:
_LOGGER.error("Can't create backup for %s", addon.slug)
return
# Store to config
self._data[ATTR_ADDONS].append(
{
ATTR_SLUG: addon.slug,
ATTR_NAME: addon.name,
ATTR_VERSION: addon.version,
ATTR_SIZE: addon_file.size,
}
)
return start_task
# Save Add-ons sequential
# avoid issue on slow IO
# Save Add-ons sequentially to avoid issues on slow IO
start_tasks: list[asyncio.Task] = []
for addon in addon_list:
try:
if start_task := await _addon_save(addon):
if start_task := await self._addon_save(addon):
start_tasks.append(start_task)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't save Add-on %s: %s", addon.slug, err)
return start_tasks
async def restore_addons(self, addon_list: list[str]) -> list[asyncio.Task]:
@Job(name="backup_addon_restore", cleanup=False)
async def _addon_restore(self, addon_slug: str) -> asyncio.Task | None:
"""Restore an add-on from backup."""
self.sys_jobs.current.reference = addon_slug
tar_name = f"{addon_slug}.tar{'.gz' if self.compressed else ''}"
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"r",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
)
# If exists inside backup
if not addon_file.path.exists():
raise BackupError(f"Can't find backup {addon_slug}", _LOGGER.error)
# Perform a restore
try:
return await self.sys_addons.restore(addon_slug, addon_file)
except AddonsError as err:
raise BackupError(
f"Can't restore backup {addon_slug}", _LOGGER.error
) from err
@Job(name="backup_restore_addons", cleanup=False)
async def restore_addons(
self, addon_list: list[str]
) -> tuple[bool, list[asyncio.Task]]:
"""Restore a list add-on from backup."""
async def _addon_restore(addon_slug: str) -> asyncio.Task | None:
"""Task to restore an add-on into backup."""
tar_name = f"{addon_slug}.tar{'.gz' if self.compressed else ''}"
addon_file = SecureTarFile(
Path(self._tmp.name, tar_name),
"r",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
)
# If exists inside backup
if not addon_file.path.exists():
_LOGGER.error("Can't find backup %s", addon_slug)
return
# Perform a restore
try:
return await self.sys_addons.restore(addon_slug, addon_file)
except AddonsError:
_LOGGER.error("Can't restore backup %s", addon_slug)
# Save Add-ons sequential
# avoid issue on slow IO
# Restore Add-ons sequentially to avoid issues on slow IO
start_tasks: list[asyncio.Task] = []
success = True
for slug in addon_list:
try:
if start_task := await _addon_restore(slug):
start_tasks.append(start_task)
start_task = await self._addon_restore(slug)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't restore Add-on %s: %s", slug, err)
success = False
else:
if start_task:
start_tasks.append(start_task)
return start_tasks
return (success, start_tasks)
@Job(name="backup_remove_delta_addons", cleanup=False)
async def remove_delta_addons(self) -> bool:
"""Remove addons which are not in this backup."""
success = True
for addon in self.sys_addons.installed:
if addon.slug in self.addon_list:
continue
# Remove Add-on because it's not a part of the new env
# Do it sequentially to avoid issues on slow IO
try:
await self.sys_addons.uninstall(addon.slug)
except AddonsError as err:
self.sys_jobs.current.capture_error(err)
_LOGGER.warning("Can't uninstall Add-on %s: %s", addon.slug, err)
success = False
return success
@Job(name="backup_folder_save", cleanup=False)
async def _folder_save(self, name: str):
"""Take backup of a folder."""
self.sys_jobs.current.reference = name
slug_name = name.replace("/", "_")
tar_name = f"{slug_name}.tar{'.gz' if self.compressed else ''}"
origin_dir = Path(self.sys_config.path_supervisor, name)
# Check if exists
if not origin_dir.is_dir():
_LOGGER.warning("Can't find backup folder %s", name)
return
def _save() -> None:
# Take backup
_LOGGER.info("Backing up folder %s", name)
with self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
) as tar_file:
atomic_contents_add(
tar_file,
origin_dir,
excludes=[
bound.bind_mount.local_where.as_posix()
for bound in self.sys_mounts.bound_mounts
if bound.bind_mount.local_where
],
arcname=".",
)
_LOGGER.info("Backup folder %s done", name)
try:
await self.sys_run_in_executor(_save)
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't backup folder {name}: {str(err)}", _LOGGER.error
) from err
self._data[ATTR_FOLDERS].append(name)
@Job(name="backup_store_folders", cleanup=False)
async def store_folders(self, folder_list: list[str]):
"""Backup Supervisor data into backup."""
async def _folder_save(name: str):
"""Take backup of a folder."""
slug_name = name.replace("/", "_")
tar_name = Path(
self._tmp.name, f"{slug_name}.tar{'.gz' if self.compressed else ''}"
)
origin_dir = Path(self.sys_config.path_supervisor, name)
# Check if exists
if not origin_dir.is_dir():
_LOGGER.warning("Can't find backup folder %s", name)
return
def _save() -> None:
# Take backup
_LOGGER.info("Backing up folder %s", name)
with SecureTarFile(
tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
) as tar_file:
atomic_contents_add(
tar_file,
origin_dir,
excludes=[
bound.bind_mount.local_where.as_posix()
for bound in self.sys_mounts.bound_mounts
if bound.bind_mount.local_where
],
arcname=".",
)
_LOGGER.info("Backup folder %s done", name)
await self.sys_run_in_executor(_save)
self._data[ATTR_FOLDERS].append(name)
# Save folder sequential
# avoid issue on slow IO
# Save folders sequentially to avoid issues on slow IO
for folder in folder_list:
await self._folder_save(folder)
@Job(name="backup_folder_restore", cleanup=False)
async def _folder_restore(self, name: str) -> None:
"""Restore a folder."""
self.sys_jobs.current.reference = name
slug_name = name.replace("/", "_")
tar_name = Path(
self._tmp.name, f"{slug_name}.tar{'.gz' if self.compressed else ''}"
)
origin_dir = Path(self.sys_config.path_supervisor, name)
# Check if exists inside backup
if not tar_name.exists():
raise BackupInvalidError(
f"Can't find restore folder {name}", _LOGGER.warning
)
# Unmount any mounts within folder
bind_mounts = [
bound.bind_mount
for bound in self.sys_mounts.bound_mounts
if bound.bind_mount.local_where
and bound.bind_mount.local_where.is_relative_to(origin_dir)
]
if bind_mounts:
await asyncio.gather(*[bind_mount.unmount() for bind_mount in bind_mounts])
# Clean old stuff
if origin_dir.is_dir():
await remove_folder(origin_dir, content_only=True)
# Perform a restore
def _restore() -> bool:
try:
await _folder_save(folder)
_LOGGER.info("Restore folder %s", name)
with SecureTarFile(
tar_name,
"r",
key=self._key,
gzip=self.compressed,
bufsize=BUF_SIZE,
) as tar_file:
tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
)
_LOGGER.info("Restore folder %s done", name)
except (tarfile.TarError, OSError) as err:
raise BackupError(
f"Can't backup folder {folder}: {str(err)}", _LOGGER.error
f"Can't restore folder {name}: {err}", _LOGGER.warning
) from err
return True
async def restore_folders(self, folder_list: list[str]):
try:
return await self.sys_run_in_executor(_restore)
finally:
if bind_mounts:
await asyncio.gather(
*[bind_mount.mount() for bind_mount in bind_mounts]
)
@Job(name="backup_restore_folders", cleanup=False)
async def restore_folders(self, folder_list: list[str]) -> bool:
"""Backup Supervisor data into backup."""
success = True
async def _folder_restore(name: str) -> None:
async def _folder_restore(name: str) -> bool:
"""Intenal function to restore a folder."""
slug_name = name.replace("/", "_")
tar_name = Path(
@@ -497,7 +644,7 @@ class Backup(CoreSysAttributes):
# Check if exists inside backup
if not tar_name.exists():
_LOGGER.warning("Can't find restore folder %s", name)
return
return False
# Unmount any mounts within folder
bind_mounts = [
@@ -516,7 +663,7 @@ class Backup(CoreSysAttributes):
await remove_folder(origin_dir, content_only=True)
# Perform a restore
def _restore() -> None:
def _restore() -> bool:
try:
_LOGGER.info("Restore folder %s", name)
with SecureTarFile(
@@ -526,27 +673,33 @@ class Backup(CoreSysAttributes):
gzip=self.compressed,
bufsize=BUF_SIZE,
) as tar_file:
tar_file.extractall(path=origin_dir, members=tar_file)
tar_file.extractall(
path=origin_dir, members=tar_file, filter="fully_trusted"
)
_LOGGER.info("Restore folder %s done", name)
except (tarfile.TarError, OSError) as err:
_LOGGER.warning("Can't restore folder %s: %s", name, err)
return False
return True
try:
await self.sys_run_in_executor(_restore)
return await self.sys_run_in_executor(_restore)
finally:
if bind_mounts:
await asyncio.gather(
*[bind_mount.mount() for bind_mount in bind_mounts]
)
# Restore folder sequential
# avoid issue on slow IO
# Restore folders sequentially to avoid issues on slow IO
for folder in folder_list:
try:
await _folder_restore(folder)
await self._folder_restore(folder)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning("Can't restore folder %s: %s", folder, err)
success = False
return success
@Job(name="backup_store_homeassistant", cleanup=False)
async def store_homeassistant(self, exclude_database: bool = False):
"""Backup Home Assistant Core configuration folder."""
self._data[ATTR_HOMEASSISTANT] = {
@@ -554,12 +707,12 @@ class Backup(CoreSysAttributes):
ATTR_EXCLUDE_DATABASE: exclude_database,
}
tar_name = f"homeassistant.tar{'.gz' if self.compressed else ''}"
# Backup Home Assistant Core config directory
tar_name = Path(
self._tmp.name, f"homeassistant.tar{'.gz' if self.compressed else ''}"
)
homeassistant_file = SecureTarFile(
tar_name, "w", key=self._key, gzip=self.compressed, bufsize=BUF_SIZE
homeassistant_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
gzip=self.compressed,
key=self._key,
)
await self.sys_homeassistant.backup(homeassistant_file, exclude_database)
@@ -567,6 +720,7 @@ class Backup(CoreSysAttributes):
# Store size
self.homeassistant[ATTR_SIZE] = homeassistant_file.size
@Job(name="backup_restore_homeassistant", cleanup=False)
async def restore_homeassistant(self) -> Awaitable[None]:
"""Restore Home Assistant Core configuration folder."""
await self.sys_homeassistant.core.stop()
@@ -600,16 +754,16 @@ class Backup(CoreSysAttributes):
return self.sys_create_task(_core_update())
def store_repositories(self):
def store_repositories(self) -> None:
"""Store repository list into backup."""
self.repositories = self.sys_store.repository_urls
async def restore_repositories(self, replace: bool = False):
def restore_repositories(self, replace: bool = False) -> Awaitable[None]:
"""Restore repositories from backup.
Return a coroutine.
"""
await self.sys_store.update_repositories(
return self.sys_store.update_repositories(
self.repositories, add_with_errors=True, replace=replace
)
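The _add_backup_json change above is worth calling out: instead of writing backup.json into a temp directory and tarring the directory afterwards, the metadata is serialized to bytes and appended to the already-open outer tar as an in-memory member. The core of that technique with the standard library alone, using made-up metadata:

import io
import json
import tarfile
import time

data = {"slug": "demo", "name": "example backup"}  # hypothetical metadata

with tarfile.open("demo.tar", "w:") as tar:
    raw = json.dumps(data).encode("utf-8")
    info = tarfile.TarInfo(name="./backup.json")
    info.size = len(raw)             # size must be set before addfile()
    info.mtime = int(time.time())
    tar.addfile(info, fileobj=io.BytesIO(raw))

with tarfile.open("demo.tar", "r:") as tar:
    print(json.load(tar.extractfile("./backup.json")))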

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Awaitable, Iterable
import errno
import logging
from pathlib import Path
@@ -14,11 +15,17 @@ from ..const import (
CoreState,
)
from ..dbus.const import UnitActiveState
from ..exceptions import AddonsError, BackupError, BackupJobError
from ..exceptions import (
BackupError,
BackupInvalidError,
BackupJobError,
BackupMountDownError,
)
from ..jobs.const import JOB_GROUP_BACKUP_MANAGER, JobCondition, JobExecutionLimit
from ..jobs.decorator import Job
from ..jobs.job_group import JobGroup
from ..mounts.mount import Mount
from ..resolution.const import UnhealthyReason
from ..utils.common import FileConfiguration
from ..utils.dt import utcnow
from ..utils.sentinel import DEFAULT
@@ -31,18 +38,6 @@ from .validate import ALL_FOLDERS, SCHEMA_BACKUPS_CONFIG
_LOGGER: logging.Logger = logging.getLogger(__name__)
def _list_backup_files(path: Path) -> Iterable[Path]:
"""Return iterable of backup files, suppress and log OSError for network mounts."""
try:
# is_dir does a stat syscall which raises if the mount is down
if path.is_dir():
return path.glob("*.tar")
except OSError as err:
_LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
return []
class BackupManager(FileConfiguration, JobGroup):
"""Manage backups."""
@@ -84,11 +79,15 @@ class BackupManager(FileConfiguration, JobGroup):
def _get_base_path(self, location: Mount | type[DEFAULT] | None = DEFAULT) -> Path:
"""Get base path for backup using location or default location."""
if location:
return location.local_where
if location == DEFAULT and self.sys_mounts.default_backup_mount:
return self.sys_mounts.default_backup_mount.local_where
location = self.sys_mounts.default_backup_mount
if location:
if not location.local_where.is_mount():
raise BackupMountDownError(
f"{location.name} is down, cannot back-up to it", _LOGGER.error
)
return location.local_where
return self.sys_config.path_backup
@@ -119,6 +118,19 @@ class BackupManager(FileConfiguration, JobGroup):
)
self.sys_jobs.current.stage = stage
def _list_backup_files(self, path: Path) -> Iterable[Path]:
"""Return iterable of backup files, suppress and log OSError for network mounts."""
try:
# is_dir does a stat syscall which raises if the mount is down
if path.is_dir():
return path.glob("*.tar")
except OSError as err:
if err.errno == errno.EBADMSG and path == self.sys_config.path_backup:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
return []
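
Both the listing and removal paths now treat `errno.EBADMSG` on the core backup directory as a sign of data-disk corruption and flag the system unhealthy. A hedged sketch of that check; `mark_unhealthy` stands in for the resolution-center call:

```python
import errno
import logging
from collections.abc import Callable, Iterable
from pathlib import Path

_LOGGER = logging.getLogger(__name__)


def list_backup_files(
    path: Path, core_backup_dir: Path, mark_unhealthy: Callable[[], None]
) -> Iterable[Path]:
    """List *.tar files, tolerating OSError from a downed network mount."""
    try:
        # is_dir() does a stat syscall which raises if the mount is down
        if path.is_dir():
            return path.glob("*.tar")
    except OSError as err:
        # EBADMSG on local storage hints at corruption, not a network issue
        if err.errno == errno.EBADMSG and path == core_backup_dir:
            mark_unhealthy()
        _LOGGER.error("Could not list backups from %s: %s", path.as_posix(), err)
    return []
```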
def _create_backup(
self,
name: str,
@@ -136,8 +148,8 @@ class BackupManager(FileConfiguration, JobGroup):
tar_file = Path(self._get_base_path(location), f"{slug}.tar")
# init object
backup = Backup(self.coresys, tar_file)
backup.new(slug, name, date_str, sys_type, password, compressed)
backup = Backup(self.coresys, tar_file, slug)
backup.new(name, date_str, sys_type, password, compressed)
# Add backup ID to job
self.sys_jobs.current.reference = backup.slug
@@ -162,14 +174,16 @@ class BackupManager(FileConfiguration, JobGroup):
async def _load_backup(tar_file):
"""Load the backup."""
backup = Backup(self.coresys, tar_file)
backup = Backup(self.coresys, tar_file, "temp")
if await backup.load():
self._backups[backup.slug] = backup
self._backups[backup.slug] = Backup(
self.coresys, tar_file, backup.slug, backup.data
)
tasks = [
self.sys_create_task(_load_backup(tar_file))
for path in self.backup_locations
for tar_file in _list_backup_files(path)
for tar_file in self._list_backup_files(path)
]
_LOGGER.info("Found %d backup files", len(tasks))
@@ -184,6 +198,11 @@ class BackupManager(FileConfiguration, JobGroup):
_LOGGER.info("Removed backup file %s", backup.slug)
except OSError as err:
if (
err.errno == errno.EBADMSG
and backup.tarfile.parent == self.sys_config.path_backup
):
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't remove backup %s: %s", backup.slug, err)
return False
@@ -191,7 +210,7 @@ class BackupManager(FileConfiguration, JobGroup):
async def import_backup(self, tar_file: Path) -> Backup | None:
"""Check backup tarfile and import it."""
backup = Backup(self.coresys, tar_file)
backup = Backup(self.coresys, tar_file, "temp")
# Read meta data
if not await backup.load():
@@ -208,11 +227,13 @@ class BackupManager(FileConfiguration, JobGroup):
backup.tarfile.rename(tar_origin)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't move backup file to storage: %s", err)
return None
# Load new backup
backup = Backup(self.coresys, tar_origin)
backup = Backup(self.coresys, tar_origin, backup.slug, backup.data)
if not await backup.load():
return None
_LOGGER.info("Successfully imported %s", backup.slug)
@@ -259,9 +280,15 @@ class BackupManager(FileConfiguration, JobGroup):
self._change_stage(BackupJobStage.FINISHING_FILE, backup)
except BackupError as err:
self.sys_jobs.current.capture_error(err)
return None
except Exception as err: # pylint: disable=broad-except
_LOGGER.exception("Backup %s error", backup.slug)
capture_exception(err)
self.sys_jobs.current.capture_error(
BackupError(f"Backup {backup.slug} error, see supervisor logs")
)
return None
else:
self._backups[backup.slug] = backup
@@ -280,6 +307,7 @@ class BackupManager(FileConfiguration, JobGroup):
conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError,
cleanup=False,
)
async def do_backup_full(
self,
@@ -316,6 +344,7 @@ class BackupManager(FileConfiguration, JobGroup):
conditions=[JobCondition.RUNNING],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError,
cleanup=False,
)
async def do_backup_partial(
self,
@@ -378,6 +407,7 @@ class BackupManager(FileConfiguration, JobGroup):
Must be called from an existing restore job.
"""
addon_start_tasks: list[Awaitable[None]] | None = None
success = True
try:
task_hass: asyncio.Task | None = None
@@ -389,7 +419,7 @@ class BackupManager(FileConfiguration, JobGroup):
# Process folders
if folder_list:
self._change_stage(RestoreJobStage.FOLDERS, backup)
await backup.restore_folders(folder_list)
success = await backup.restore_folders(folder_list)
# Process Home-Assistant
if homeassistant:
@@ -399,23 +429,17 @@ class BackupManager(FileConfiguration, JobGroup):
# Delete delta add-ons
if replace:
self._change_stage(RestoreJobStage.REMOVE_DELTA_ADDONS, backup)
for addon in self.sys_addons.installed:
if addon.slug in backup.addon_list:
continue
# Remove Add-on because it's not a part of the new env
# Do it sequential avoid issue on slow IO
try:
await self.sys_addons.uninstall(addon.slug)
except AddonsError:
_LOGGER.warning("Can't uninstall Add-on %s", addon.slug)
success = success and await backup.remove_delta_addons()
if addon_list:
self._change_stage(RestoreJobStage.ADDON_REPOSITORIES, backup)
await backup.restore_repositories(replace)
self._change_stage(RestoreJobStage.ADDONS, backup)
addon_start_tasks = await backup.restore_addons(addon_list)
restore_success, addon_start_tasks = await backup.restore_addons(
addon_list
)
success = success and restore_success
# Wait for Home Assistant Core update/downgrade
if task_hass:
@@ -423,18 +447,24 @@ class BackupManager(FileConfiguration, JobGroup):
RestoreJobStage.AWAIT_HOME_ASSISTANT_RESTART, backup
)
await task_hass
except BackupError:
raise
except Exception as err: # pylint: disable=broad-except
_LOGGER.exception("Restore %s error", backup.slug)
capture_exception(err)
return False
raise BackupError(
f"Restore {backup.slug} error, see supervisor logs"
) from err
else:
if addon_start_tasks:
self._change_stage(RestoreJobStage.AWAIT_ADDON_RESTARTS, backup)
# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
await asyncio.gather(*addon_start_tasks, return_exceptions=True)
# Failure to resume addons post restore is still a restore failure
if any(
await asyncio.gather(*addon_start_tasks, return_exceptions=True)
):
return False
return True
return success
finally:
# Leave Home Assistant alone if it wasn't part of the restore
if homeassistant:
@@ -442,12 +472,16 @@ class BackupManager(FileConfiguration, JobGroup):
# Do we need start Home Assistant Core?
if not await self.sys_homeassistant.core.is_running():
await self.sys_homeassistant.core.start()
await self.sys_homeassistant.core.start(
_job_override__cleanup=False
)
# Check If we can access to API / otherwise restart
if not await self.sys_homeassistant.api.check_api_state():
_LOGGER.warning("Need restart HomeAssistant for API")
await self.sys_homeassistant.core.restart()
await self.sys_homeassistant.core.restart(
_job_override__cleanup=False
)
@Job(
name="backup_manager_full_restore",
@@ -460,6 +494,7 @@ class BackupManager(FileConfiguration, JobGroup):
],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError,
cleanup=False,
)
async def do_restore_full(
self, backup: Backup, password: str | None = None
@@ -469,32 +504,34 @@ class BackupManager(FileConfiguration, JobGroup):
self.sys_jobs.current.reference = backup.slug
if backup.sys_type != BackupType.FULL:
_LOGGER.error("%s is only a partial backup!", backup.slug)
return False
raise BackupInvalidError(
f"{backup.slug} is only a partial backup!", _LOGGER.error
)
if backup.protected and not backup.set_password(password):
_LOGGER.error("Invalid password for backup %s", backup.slug)
return False
raise BackupInvalidError(
f"Invalid password for backup {backup.slug}", _LOGGER.error
)
if backup.supervisor_version > self.sys_supervisor.version:
_LOGGER.error(
"Backup was made on supervisor version %s, can't restore on %s. Must update supervisor first.",
backup.supervisor_version,
self.sys_supervisor.version,
raise BackupInvalidError(
f"Backup was made on supervisor version {backup.supervisor_version}, "
f"can't restore on {self.sys_supervisor.version}. Must update supervisor first.",
_LOGGER.error,
)
return False
_LOGGER.info("Full-Restore %s start", backup.slug)
self.sys_core.state = CoreState.FREEZE
# Stop Home-Assistant / Add-ons
await self.sys_core.shutdown()
try:
# Stop Home-Assistant / Add-ons
await self.sys_core.shutdown()
success = await self._do_restore(
backup, backup.addon_list, backup.folders, True, True
)
self.sys_core.state = CoreState.RUNNING
success = await self._do_restore(
backup, backup.addon_list, backup.folders, True, True
)
finally:
self.sys_core.state = CoreState.RUNNING
if success:
_LOGGER.info("Full-Restore %s done", backup.slug)
@@ -511,6 +548,7 @@ class BackupManager(FileConfiguration, JobGroup):
],
limit=JobExecutionLimit.GROUP_ONCE,
on_condition=BackupJobError,
cleanup=False,
)
async def do_restore_partial(
self,
@@ -533,29 +571,31 @@ class BackupManager(FileConfiguration, JobGroup):
homeassistant = True
if backup.protected and not backup.set_password(password):
_LOGGER.error("Invalid password for backup %s", backup.slug)
return False
raise BackupInvalidError(
f"Invalid password for backup {backup.slug}", _LOGGER.error
)
if backup.homeassistant is None and homeassistant:
_LOGGER.error("No Home Assistant Core data inside the backup")
return False
raise BackupInvalidError(
"No Home Assistant Core data inside the backup", _LOGGER.error
)
if backup.supervisor_version > self.sys_supervisor.version:
_LOGGER.error(
"Backup was made on supervisor version %s, can't restore on %s. Must update supervisor first.",
backup.supervisor_version,
self.sys_supervisor.version,
raise BackupInvalidError(
f"Backup was made on supervisor version {backup.supervisor_version}, "
f"can't restore on {self.sys_supervisor.version}. Must update supervisor first.",
_LOGGER.error,
)
return False
_LOGGER.info("Partial-Restore %s start", backup.slug)
self.sys_core.state = CoreState.FREEZE
success = await self._do_restore(
backup, addon_list, folder_list, homeassistant, False
)
self.sys_core.state = CoreState.RUNNING
try:
success = await self._do_restore(
backup, addon_list, folder_list, homeassistant, False
)
finally:
self.sys_core.state = CoreState.RUNNING
if success:
_LOGGER.info("Partial-Restore %s done", backup.slug)

View File

@@ -53,7 +53,7 @@ def unique_addons(addons_list):
def v1_homeassistant(
homeassistant_data: dict[str, Any] | None
homeassistant_data: dict[str, Any] | None,
) -> dict[str, Any] | None:
"""Cleanup homeassistant artefacts from v1."""
if not homeassistant_data:

View File

@@ -115,7 +115,7 @@ async def initialize_coresys() -> CoreSys:
_LOGGER.warning(
"Missing SUPERVISOR_MACHINE environment variable. Fallback to deprecated extraction!"
)
_LOGGER.info("Seting up coresys for machine: %s", coresys.machine)
_LOGGER.info("Setting up coresys for machine: %s", coresys.machine)
return coresys

View File

@@ -1,5 +1,5 @@
"""Bootstrap Supervisor."""
from datetime import datetime
from datetime import UTC, datetime
import logging
import os
from pathlib import Path, PurePath
@@ -50,7 +50,7 @@ MOUNTS_CREDENTIALS = PurePath(".mounts_credentials")
EMERGENCY_DATA = PurePath("emergency")
ADDON_CONFIGS = PurePath("addon_configs")
DEFAULT_BOOT_TIME = datetime.utcfromtimestamp(0).isoformat()
DEFAULT_BOOT_TIME = datetime.fromtimestamp(0, UTC).isoformat()
# We filter out UTC because it's the system default fallback
# Core also does not respect the container timezone and resets timezones
@@ -164,7 +164,7 @@ class CoreConfig(FileConfiguration):
boot_time = parse_datetime(boot_str)
if not boot_time:
return datetime.utcfromtimestamp(1)
return datetime.fromtimestamp(1, UTC)
return boot_time
@last_boot.setter
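
`datetime.utcfromtimestamp` is deprecated since Python 3.12 because it returns a naive datetime; the replacement builds a timezone-aware value:

```python
from datetime import UTC, datetime

# Deprecated since Python 3.12, and naive (tzinfo is None):
#   datetime.utcfromtimestamp(0)
aware = datetime.fromtimestamp(0, UTC)
assert aware.tzinfo is UTC
print(aware.isoformat())  # 1970-01-01T00:00:00+00:00
```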

View File

@@ -68,6 +68,7 @@ META_SUPERVISOR = "supervisor"
JSON_DATA = "data"
JSON_MESSAGE = "message"
JSON_RESULT = "result"
JSON_JOB_ID = "job_id"
RESULT_ERROR = "error"
RESULT_OK = "ok"
@@ -331,6 +332,7 @@ ATTR_UUID = "uuid"
ATTR_VALID = "valid"
ATTR_VALUE = "value"
ATTR_VERSION = "version"
ATTR_VERSION_TIMESTAMP = "version_timestamp"
ATTR_VERSION_LATEST = "version_latest"
ATTR_VIDEO = "video"
ATTR_VLAN = "vlan"
@@ -345,17 +347,6 @@ PROVIDE_SERVICE = "provide"
NEED_SERVICE = "need"
WANT_SERVICE = "want"
MAP_CONFIG = "config"
MAP_SSL = "ssl"
MAP_ADDONS = "addons"
MAP_BACKUP = "backup"
MAP_SHARE = "share"
MAP_MEDIA = "media"
MAP_HOMEASSISTANT_CONFIG = "homeassistant_config"
MAP_ALL_ADDON_CONFIGS = "all_addon_configs"
MAP_ADDON_CONFIG = "addon_config"
ARCH_ARMHF = "armhf"
ARCH_ARMV7 = "armv7"
ARCH_AARCH64 = "aarch64"
@@ -469,9 +460,11 @@ class HostFeature(StrEnum):
class BusEvent(StrEnum):
"""Bus event type."""
DOCKER_CONTAINER_STATE_CHANGE = "docker_container_state_change"
HARDWARE_NEW_DEVICE = "hardware_new_device"
HARDWARE_REMOVE_DEVICE = "hardware_remove_device"
DOCKER_CONTAINER_STATE_CHANGE = "docker_container_state_change"
SUPERVISOR_JOB_END = "supervisor_job_end"
SUPERVISOR_JOB_START = "supervisor_job_start"
SUPERVISOR_STATE_CHANGE = "supervisor_state_change"

View File

@@ -5,8 +5,6 @@ from contextlib import suppress
from datetime import timedelta
import logging
import async_timeout
from .const import (
ATTR_STARTUP,
RUN_SUPERVISOR_STATE,
@@ -179,7 +177,15 @@ class Core(CoreSysAttributes):
and not self.sys_dev
and self.supported
):
self.sys_dbus.agent.diagnostics = self.sys_config.diagnostics
try:
await self.sys_dbus.agent.set_diagnostics(self.sys_config.diagnostics)
except Exception as err: # pylint: disable=broad-except
_LOGGER.warning(
"Could not set diagnostics to %s due to %s",
self.sys_config.diagnostics,
err,
)
capture_exception(err)
# Evaluate the system
await self.sys_resolution.evaluate.evaluate_system()
@@ -298,7 +304,7 @@ class Core(CoreSysAttributes):
# Stage 1
try:
async with async_timeout.timeout(10):
async with asyncio.timeout(10):
await asyncio.wait(
[
self.sys_create_task(coro)
@@ -314,7 +320,7 @@ class Core(CoreSysAttributes):
# Stage 2
try:
async with async_timeout.timeout(10):
async with asyncio.timeout(10):
await asyncio.wait(
[
self.sys_create_task(coro)

View File

@@ -544,13 +544,44 @@ class CoreSys:
return self.loop.run_in_executor(None, funct, *args)
def create_task(self, coroutine: Coroutine) -> asyncio.Task:
"""Create an async task."""
def _create_context(self) -> Context:
"""Create a new context for a task."""
context = copy_context()
for callback in self._set_task_context:
context = callback(context)
return context
return self.loop.create_task(coroutine, context=context)
def create_task(self, coroutine: Coroutine) -> asyncio.Task:
"""Create an async task."""
return self.loop.create_task(coroutine, context=self._create_context())
def call_later(
self,
delay: float,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task after a delay."""
if kwargs:
funct = partial(funct, **kwargs)
return self.loop.call_later(delay, funct, *args, context=self._create_context())
def call_at(
self,
when: datetime,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task at the specified datetime."""
if kwargs:
funct = partial(funct, **kwargs)
return self.loop.call_at(
when.timestamp(), funct, *args, context=self._create_context()
)
class CoreSysAttributes:
@@ -731,3 +762,23 @@ class CoreSysAttributes:
def sys_create_task(self, coroutine: Coroutine) -> asyncio.Task:
"""Create an async task."""
return self.coresys.create_task(coroutine)
def sys_call_later(
self,
delay: float,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task after a delay."""
return self.coresys.call_later(delay, funct, *args, **kwargs)
def sys_call_at(
self,
when: datetime,
funct: Callable[..., Coroutine[Any, Any, T]],
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task at the specified datetime."""
return self.coresys.call_at(when, funct, *args, **kwargs)
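
`call_later` and `call_at` now run callbacks in the same copied context as `create_task`, so context variables such as the current job id survive into scheduled work. A reduced, stdlib-only sketch:

```python
import asyncio
from contextvars import Context, ContextVar, copy_context

current_job: ContextVar[str | None] = ContextVar("current_job", default=None)


def _create_context() -> Context:
    """Snapshot the caller's context so scheduled work sees the same vars."""
    return copy_context()


async def main() -> None:
    loop = asyncio.get_running_loop()
    current_job.set("job-123")

    def callback() -> None:
        # Runs inside the copied context, so the variable is still visible
        print("job in callback:", current_job.get())

    loop.call_later(0, callback, context=_create_context())
    await asyncio.sleep(0.01)


asyncio.run(main())
```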

View File

@@ -1,5 +1,6 @@
"""OS-Agent implementation for DBUS."""
import asyncio
from collections.abc import Awaitable
import logging
from typing import Any
@@ -80,11 +81,9 @@ class OSAgent(DBusInterfaceProxy):
"""Return if diagnostics is enabled on OS-Agent."""
return self.properties[DBUS_ATTR_DIAGNOSTICS]
@diagnostics.setter
@dbus_property
def diagnostics(self, value: bool) -> None:
def set_diagnostics(self, value: bool) -> Awaitable[None]:
"""Enable or disable OS-Agent diagnostics."""
asyncio.create_task(self.dbus.set_diagnostics(value))
return self.dbus.set_diagnostics(value)
@property
def all(self) -> list[DBusInterface]:

View File

@@ -1,6 +1,7 @@
"""Green board management."""
import asyncio
from collections.abc import Awaitable
from dbus_fast.aio.message_bus import MessageBus
@@ -25,11 +26,10 @@ class Green(BoardProxy):
"""Get activity LED enabled."""
return self.properties[DBUS_ATTR_ACTIVITY_LED]
@activity_led.setter
def activity_led(self, enabled: bool) -> None:
def set_activity_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable activity LED."""
self._data[ATTR_ACTIVITY_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_activity_led(enabled))
return self.dbus.Boards.Green.set_activity_led(enabled)
@property
@dbus_property
@@ -37,11 +37,10 @@ class Green(BoardProxy):
"""Get power LED enabled."""
return self.properties[DBUS_ATTR_POWER_LED]
@power_led.setter
def power_led(self, enabled: bool) -> None:
def set_power_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable power LED."""
self._data[ATTR_POWER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_power_led(enabled))
return self.dbus.Boards.Green.set_power_led(enabled)
@property
@dbus_property
@@ -49,17 +48,18 @@ class Green(BoardProxy):
"""Get user LED enabled."""
return self.properties[DBUS_ATTR_USER_LED]
@user_led.setter
def user_led(self, enabled: bool) -> None:
def set_user_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable disk LED."""
self._data[ATTR_USER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Green.set_user_led(enabled))
return self.dbus.Boards.Green.set_user_led(enabled)
async def connect(self, bus: MessageBus) -> None:
"""Connect to D-Bus."""
await super().connect(bus)
# Set LEDs based on settings on connect
self.activity_led = self._data[ATTR_ACTIVITY_LED]
self.power_led = self._data[ATTR_POWER_LED]
self.user_led = self._data[ATTR_USER_LED]
await asyncio.gather(
self.set_activity_led(self._data[ATTR_ACTIVITY_LED]),
self.set_power_led(self._data[ATTR_POWER_LED]),
self.set_user_led(self._data[ATTR_USER_LED]),
)
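
Property setters that fire-and-forget an `asyncio.create_task` hide failures and cannot be awaited; the board classes now expose `set_*` methods returning the D-Bus awaitable so `connect()` can gather them. A sketch of the shape, with `_dbus_set` standing in for the real D-Bus write:

```python
import asyncio
from collections.abc import Awaitable


class Green:
    def __init__(self) -> None:
        self._data = {"activity_led": True, "power_led": False}

    async def _dbus_set(self, name: str, enabled: bool) -> None:
        await asyncio.sleep(0)  # stand-in for the D-Bus property write

    def set_activity_led(self, enabled: bool) -> Awaitable[None]:
        self._data["activity_led"] = enabled
        return self._dbus_set("activity_led", enabled)

    def set_power_led(self, enabled: bool) -> Awaitable[None]:
        self._data["power_led"] = enabled
        return self._dbus_set("power_led", enabled)

    async def connect(self) -> None:
        # Awaitable setters can be gathered; failures propagate to the caller
        await asyncio.gather(
            self.set_activity_led(self._data["activity_led"]),
            self.set_power_led(self._data["power_led"]),
        )


asyncio.run(Green().connect())
```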

View File

@@ -1,6 +1,7 @@
"""Yellow board management."""
import asyncio
from collections.abc import Awaitable
from dbus_fast.aio.message_bus import MessageBus
@@ -25,11 +26,10 @@ class Yellow(BoardProxy):
"""Get heartbeat LED enabled."""
return self.properties[DBUS_ATTR_HEARTBEAT_LED]
@heartbeat_led.setter
def heartbeat_led(self, enabled: bool) -> None:
def set_heartbeat_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable heartbeat LED."""
self._data[ATTR_HEARTBEAT_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_heartbeat_led(enabled))
return self.dbus.Boards.Yellow.set_heartbeat_led(enabled)
@property
@dbus_property
@@ -37,11 +37,10 @@ class Yellow(BoardProxy):
"""Get power LED enabled."""
return self.properties[DBUS_ATTR_POWER_LED]
@power_led.setter
def power_led(self, enabled: bool) -> None:
def set_power_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable power LED."""
self._data[ATTR_POWER_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_power_led(enabled))
return self.dbus.Boards.Yellow.set_power_led(enabled)
@property
@dbus_property
@@ -49,17 +48,18 @@ class Yellow(BoardProxy):
"""Get disk LED enabled."""
return self.properties[DBUS_ATTR_DISK_LED]
@disk_led.setter
def disk_led(self, enabled: bool) -> None:
def set_disk_led(self, enabled: bool) -> Awaitable[None]:
"""Enable/disable disk LED."""
self._data[ATTR_DISK_LED] = enabled
asyncio.create_task(self.dbus.Boards.Yellow.set_disk_led(enabled))
return self.dbus.Boards.Yellow.set_disk_led(enabled)
async def connect(self, bus: MessageBus) -> None:
"""Connect to D-Bus."""
await super().connect(bus)
# Set LEDs based on settings on connect
self.disk_led = self._data[ATTR_DISK_LED]
self.heartbeat_led = self._data[ATTR_HEARTBEAT_LED]
self.power_led = self._data[ATTR_POWER_LED]
await asyncio.gather(
self.set_disk_led(self._data[ATTR_DISK_LED]),
self.set_heartbeat_led(self._data[ATTR_HEARTBEAT_LED]),
self.set_power_led(self._data[ATTR_POWER_LED]),
)

View File

@@ -36,12 +36,14 @@ DBUS_IFACE_RAUC_INSTALLER = "de.pengutronix.rauc.Installer"
DBUS_IFACE_RESOLVED_MANAGER = "org.freedesktop.resolve1.Manager"
DBUS_IFACE_SETTINGS_CONNECTION = "org.freedesktop.NetworkManager.Settings.Connection"
DBUS_IFACE_SYSTEMD_MANAGER = "org.freedesktop.systemd1.Manager"
DBUS_IFACE_SYSTEMD_UNIT = "org.freedesktop.systemd1.Unit"
DBUS_IFACE_TIMEDATE = "org.freedesktop.timedate1"
DBUS_IFACE_UDISKS2_MANAGER = "org.freedesktop.UDisks2.Manager"
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED = (
"org.freedesktop.NetworkManager.Connection.Active.StateChanged"
)
DBUS_SIGNAL_PROPERTIES_CHANGED = "org.freedesktop.DBus.Properties.PropertiesChanged"
DBUS_SIGNAL_RAUC_INSTALLER_COMPLETED = "de.pengutronix.rauc.Installer.Completed"
DBUS_OBJECT_BASE = "/"
@@ -64,6 +66,7 @@ DBUS_OBJECT_UDISKS2 = "/org/freedesktop/UDisks2/Manager"
DBUS_ATTR_ACTIVE_ACCESSPOINT = "ActiveAccessPoint"
DBUS_ATTR_ACTIVE_CONNECTION = "ActiveConnection"
DBUS_ATTR_ACTIVE_CONNECTIONS = "ActiveConnections"
DBUS_ATTR_ACTIVE_STATE = "ActiveState"
DBUS_ATTR_ACTIVITY_LED = "ActivityLED"
DBUS_ATTR_ADDRESS_DATA = "AddressData"
DBUS_ATTR_BITRATE = "Bitrate"

View File

@@ -7,6 +7,8 @@ from uuid import uuid4
from dbus_fast import Variant
from ....host.const import InterfaceMethod, InterfaceType
from .. import NetworkManager
from . import (
ATTR_ASSIGNED_MAC,
CONF_ATTR_802_ETHERNET,
@@ -19,8 +21,6 @@ from . import (
CONF_ATTR_PATH,
CONF_ATTR_VLAN,
)
from .. import NetworkManager
from ....host.const import InterfaceMethod, InterfaceType
if TYPE_CHECKING:
from ....host.configuration import Interface
@@ -148,8 +148,8 @@ def get_connection_from_interface(
wireless["security"] = Variant("s", CONF_ATTR_802_WIRELESS_SECURITY)
wireless_security = {}
if interface.wifi.auth == "wep":
wireless_security["auth-alg"] = Variant("s", "none")
wireless_security["key-mgmt"] = Variant("s", "open")
wireless_security["auth-alg"] = Variant("s", "open")
wireless_security["key-mgmt"] = Variant("s", "none")
elif interface.wifi.auth == "wpa-psk":
wireless_security["auth-alg"] = Variant("s", "open")
wireless_security["key-mgmt"] = Variant("s", "wpa-psk")

View File

@@ -13,6 +13,7 @@ from ..exceptions import (
DBusServiceUnkownError,
DBusSystemdNoSuchUnit,
)
from ..utils.dbus import DBusSignalWrapper
from .const import (
DBUS_ATTR_FINISH_TIMESTAMP,
DBUS_ATTR_FIRMWARE_TIMESTAMP_MONOTONIC,
@@ -23,6 +24,7 @@ from .const import (
DBUS_IFACE_SYSTEMD_MANAGER,
DBUS_NAME_SYSTEMD,
DBUS_OBJECT_SYSTEMD,
DBUS_SIGNAL_PROPERTIES_CHANGED,
StartUnitMode,
StopUnitMode,
UnitActiveState,
@@ -42,9 +44,7 @@ def systemd_errors(func):
return await func(*args, **kwds)
except DBusFatalError as err:
if err.type == DBUS_ERR_SYSTEMD_NO_SUCH_UNIT:
# pylint: disable=raise-missing-from
raise DBusSystemdNoSuchUnit(str(err))
# pylint: enable=raise-missing-from
raise DBusSystemdNoSuchUnit(str(err)) from None
raise err
return wrapper
@@ -66,6 +66,11 @@ class SystemdUnit(DBusInterface):
"""Get active state of the unit."""
return await self.dbus.Unit.get_active_state()
@dbus_connected
def properties_changed(self) -> DBusSignalWrapper:
"""Return signal wrapper for properties changed."""
return self.dbus.signal(DBUS_SIGNAL_PROPERTIES_CHANGED)
class Systemd(DBusInterfaceProxy):
"""Systemd function handler.

View File

@@ -1,6 +1,6 @@
"""Interface to UDisks2 Drive over D-Bus."""
from datetime import datetime, timezone
from datetime import UTC, datetime
from dbus_fast.aio import MessageBus
@@ -95,7 +95,7 @@ class UDisks2Drive(DBusInterfaceProxy):
"""Return time drive first detected."""
return datetime.fromtimestamp(
self.properties[DBUS_ATTR_TIME_DETECTED] * 10**-6
).astimezone(timezone.utc)
).astimezone(UTC)
@property
@dbus_property

View File

@@ -15,18 +15,10 @@ from docker.types import Mount
import requests
from ..addons.build import AddonBuild
from ..addons.const import MappingType
from ..bus import EventListener
from ..const import (
DOCKER_CPU_RUNTIME_ALLOCATION,
MAP_ADDON_CONFIG,
MAP_ADDONS,
MAP_ALL_ADDON_CONFIGS,
MAP_BACKUP,
MAP_CONFIG,
MAP_HOMEASSISTANT_CONFIG,
MAP_MEDIA,
MAP_SHARE,
MAP_SSL,
SECURITY_DISABLE,
SECURITY_PROFILE,
SYSTEMD_JOURNAL_PERSISTENT,
@@ -241,10 +233,10 @@ class DockerAddon(DockerInterface):
tmpfs = {}
if self.addon.with_tmpfs:
tmpfs["/tmp"] = ""
tmpfs["/tmp"] = "" # noqa: S108
if not self.addon.host_ipc:
tmpfs["/dev/shm"] = ""
tmpfs["/dev/shm"] = "" # noqa: S108
# Return None if no tmpfs is present
if tmpfs:
@@ -332,24 +324,28 @@ class DockerAddon(DockerInterface):
"""Return mounts for container."""
addon_mapping = self.addon.map_volumes
target_data_path = ""
if MappingType.DATA in addon_mapping:
target_data_path = addon_mapping[MappingType.DATA].path
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND,
source=self.addon.path_extern_data.as_posix(),
target="/data",
target=target_data_path or "/data",
read_only=False,
),
]
# setup config mappings
if MAP_CONFIG in addon_mapping:
if MappingType.CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config",
read_only=addon_mapping[MAP_CONFIG],
target=addon_mapping[MappingType.CONFIG].path or "/config",
read_only=addon_mapping[MappingType.CONFIG].read_only,
)
)
@@ -360,80 +356,85 @@ class DockerAddon(DockerInterface):
Mount(
type=MountType.BIND,
source=self.addon.path_extern_config.as_posix(),
target="/config",
read_only=addon_mapping[MAP_ADDON_CONFIG],
target=addon_mapping[MappingType.ADDON_CONFIG].path
or "/config",
read_only=addon_mapping[MappingType.ADDON_CONFIG].read_only,
)
)
# Map Home Assistant config in new way
if MAP_HOMEASSISTANT_CONFIG in addon_mapping:
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/homeassistant",
read_only=addon_mapping[MAP_HOMEASSISTANT_CONFIG],
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or "/homeassistant",
read_only=addon_mapping[
MappingType.HOMEASSISTANT_CONFIG
].read_only,
)
)
if MAP_ALL_ADDON_CONFIGS in addon_mapping:
if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_addon_configs.as_posix(),
target="/addon_configs",
read_only=addon_mapping[MAP_ALL_ADDON_CONFIGS],
target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
or "/addon_configs",
read_only=addon_mapping[MappingType.ALL_ADDON_CONFIGS].read_only,
)
)
if MAP_SSL in addon_mapping:
if MappingType.SSL in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl",
read_only=addon_mapping[MAP_SSL],
target=addon_mapping[MappingType.SSL].path or "/ssl",
read_only=addon_mapping[MappingType.SSL].read_only,
)
)
if MAP_ADDONS in addon_mapping:
if MappingType.ADDONS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_addons_local.as_posix(),
target="/addons",
read_only=addon_mapping[MAP_ADDONS],
target=addon_mapping[MappingType.ADDONS].path or "/addons",
read_only=addon_mapping[MappingType.ADDONS].read_only,
)
)
if MAP_BACKUP in addon_mapping:
if MappingType.BACKUP in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_backup.as_posix(),
target="/backup",
read_only=addon_mapping[MAP_BACKUP],
target=addon_mapping[MappingType.BACKUP].path or "/backup",
read_only=addon_mapping[MappingType.BACKUP].read_only,
)
)
if MAP_SHARE in addon_mapping:
if MappingType.SHARE in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target="/share",
read_only=addon_mapping[MAP_SHARE],
target=addon_mapping[MappingType.SHARE].path or "/share",
read_only=addon_mapping[MappingType.SHARE].read_only,
propagation=PropagationMode.RSLAVE,
)
)
if MAP_MEDIA in addon_mapping:
if MappingType.MEDIA in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target="/media",
read_only=addon_mapping[MAP_MEDIA],
target=addon_mapping[MappingType.MEDIA].path or "/media",
read_only=addon_mapping[MappingType.MEDIA].read_only,
propagation=PropagationMode.RSLAVE,
)
)
@@ -602,7 +603,11 @@ class DockerAddon(DockerInterface):
on_condition=DockerJobError,
)
async def update(
self, version: AwesomeVersion, image: str | None = None, latest: bool = False
self,
version: AwesomeVersion,
image: str | None = None,
latest: bool = False,
arch: CpuArch | None = None,
) -> None:
"""Update a docker image."""
image = image or self.image
@@ -613,7 +618,11 @@ class DockerAddon(DockerInterface):
# Update docker image
await self.install(
version, image=image, latest=latest, need_build=self.addon.latest_need_build
version,
image=image,
latest=latest,
arch=arch,
need_build=self.addon.latest_need_build,
)
@Job(
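
The raw `MAP_*` string constants give way to a `MappingType` enum whose entries carry a target path and a read-only flag, so each bind mount can be customized per add-on. A hedged sketch of the data shape (the `FolderMapping` name is a guess; only `.path` and `.read_only` are visible in the diff):

```python
from dataclasses import dataclass
from enum import StrEnum


class MappingType(StrEnum):
    CONFIG = "config"
    SSL = "ssl"
    SHARE = "share"


@dataclass
class FolderMapping:
    """Per-mapping settings: optional custom target path and RO flag."""

    path: str | None
    read_only: bool


addon_mapping = {
    MappingType.SSL: FolderMapping(path=None, read_only=True),
    MappingType.SHARE: FolderMapping(path="/custom/share", read_only=False),
}

for mapping_type, default in ((MappingType.SSL, "/ssl"), (MappingType.SHARE, "/share")):
    if mapping_type in addon_mapping:
        mapping = addon_mapping[mapping_type]
        # Fall back to the legacy default target when no path is configured
        print(mapping_type, mapping.path or default, mapping.read_only)
```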

View File

@@ -175,7 +175,7 @@ class DockerHomeAssistant(DockerInterface):
ENV_TOKEN: self.sys_homeassistant.supervisor_token,
ENV_TOKEN_OLD: self.sys_homeassistant.supervisor_token,
},
tmpfs={"/tmp": ""},
tmpfs={"/tmp": ""}, # noqa: S108
oom_score_adj=-300,
)
_LOGGER.info(

View File

@@ -1,4 +1,6 @@
"""Supervisor docker monitor based on events."""
from contextlib import suppress
from dataclasses import dataclass
import logging
from threading import Thread
@@ -47,10 +49,8 @@ class DockerMonitor(CoreSysAttributes, Thread):
async def unload(self):
"""Stop docker events monitor."""
self._events.close()
try:
with suppress(RuntimeError):
self.join(timeout=5)
except RuntimeError:
pass
_LOGGER.info("Stopped docker events monitor")

View File

@@ -304,6 +304,16 @@ class HostLogError(HostError):
class APIError(HassioError, RuntimeError):
"""API errors."""
def __init__(
self,
message: str | None = None,
logger: Callable[..., None] | None = None,
job_id: str | None = None,
) -> None:
"""Raise & log, optionally with job."""
super().__init__(message, logger)
self.job_id = job_id
class APIForbidden(APIError):
"""API forbidden error."""
@@ -593,6 +603,14 @@ class HomeAssistantBackupError(BackupError, HomeAssistantError):
"""Raise if an error during Home Assistant Core backup is happening."""
class BackupInvalidError(BackupError):
"""Raise if backup or password provided is invalid."""
class BackupMountDownError(BackupError):
"""Raise if mount specified for backup is down."""
class BackupJobError(BackupError, JobException):
"""Raise on Backup job error."""

View File

@@ -1,5 +1,5 @@
"""Read hardware info from system."""
from datetime import datetime
from datetime import UTC, datetime
import logging
from pathlib import Path
import re
@@ -55,7 +55,7 @@ class HwHelper(CoreSysAttributes):
_LOGGER.error("Can't found last boot time!")
return None
return datetime.utcfromtimestamp(int(found.group(1)))
return datetime.fromtimestamp(int(found.group(1)), UTC)
def hide_virtual_device(self, udev_device: pyudev.Device) -> bool:
"""Small helper to hide not needed Devices."""

View File

@@ -1,9 +1,9 @@
"""Home Assistant control object."""
import asyncio
from contextlib import asynccontextmanager, suppress
from datetime import datetime, timedelta
from contextlib import AbstractAsyncContextManager, asynccontextmanager, suppress
from datetime import UTC, datetime, timedelta
import logging
from typing import Any, AsyncContextManager
from typing import Any
import aiohttp
from aiohttp import hdrs
@@ -39,9 +39,8 @@ class HomeAssistantAPI(CoreSysAttributes):
)
async def ensure_access_token(self) -> None:
"""Ensure there is an access token."""
if (
self.access_token is not None
and self._access_token_expires > datetime.utcnow()
if self.access_token is not None and self._access_token_expires > datetime.now(
tz=UTC
):
return
@@ -63,7 +62,7 @@ class HomeAssistantAPI(CoreSysAttributes):
_LOGGER.info("Updated Home Assistant API token")
tokens = await resp.json()
self.access_token = tokens["access_token"]
self._access_token_expires = datetime.utcnow() + timedelta(
self._access_token_expires = datetime.now(tz=UTC) + timedelta(
seconds=tokens["expires_in"]
)
@@ -78,7 +77,7 @@ class HomeAssistantAPI(CoreSysAttributes):
timeout: int = 30,
params: dict[str, str] | None = None,
headers: dict[str, str] | None = None,
) -> AsyncContextManager[aiohttp.ClientResponse]:
) -> AbstractAsyncContextManager[aiohttp.ClientResponse]:
"""Async context manager to make a request with right auth."""
url = f"{self.sys_homeassistant.api_url}/{path}"
headers = headers or {}
@@ -107,7 +106,10 @@ class HomeAssistantAPI(CoreSysAttributes):
continue
yield resp
return
except (TimeoutError, aiohttp.ClientError) as err:
except TimeoutError:
_LOGGER.error("Timeout on call %s.", url)
break
except aiohttp.ClientError as err:
_LOGGER.error("Error on call %s: %s", url, err)
break

View File

@@ -129,7 +129,7 @@ class HomeAssistantCore(JobGroup):
while True:
if not self.sys_updater.image_homeassistant:
_LOGGER.warning(
"Found no information about Home Assistant. Retry in 30sec"
"Found no information about Home Assistant. Retrying in 30sec"
)
await asyncio.sleep(30)
await self.sys_updater.reload()
@@ -145,7 +145,7 @@ class HomeAssistantCore(JobGroup):
except Exception as err: # pylint: disable=broad-except
capture_exception(err)
_LOGGER.warning("Fails install landingpage, retry after 30sec")
_LOGGER.warning("Failed to install landingpage, retrying after 30sec")
await asyncio.sleep(30)
self.sys_homeassistant.version = LANDINGPAGE
@@ -177,7 +177,7 @@ class HomeAssistantCore(JobGroup):
except Exception as err: # pylint: disable=broad-except
capture_exception(err)
_LOGGER.warning("Error on Home Assistant installation. Retry in 30sec")
_LOGGER.warning("Error on Home Assistant installation. Retrying in 30sec")
await asyncio.sleep(30)
_LOGGER.info("Home Assistant docker now installed")

View File

@@ -1,6 +1,7 @@
"""Home Assistant control object."""
import asyncio
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path, PurePath
@@ -42,6 +43,7 @@ from ..exceptions import (
from ..hardware.const import PolicyGroup
from ..hardware.data import Device
from ..jobs.decorator import Job, JobExecutionLimit
from ..resolution.const import UnhealthyReason
from ..utils import remove_folder
from ..utils.common import FileConfiguration
from ..utils.json import read_json_file, write_json_file
@@ -300,6 +302,8 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
try:
self.path_pulse.write_text(pulse_config, encoding="utf-8")
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Home Assistant can't write pulse/client.config: %s", err)
else:
_LOGGER.info("Update pulse/client.config: %s", self.path_pulse)
@@ -407,7 +411,11 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
def _extract_tarfile():
"""Extract tar backup."""
with tar_file as backup:
backup.extractall(path=temp_path, members=secure_path(backup))
backup.extractall(
path=temp_path,
members=secure_path(backup),
filter="fully_trusted",
)
try:
await self.sys_run_in_executor(_extract_tarfile)
@@ -477,7 +485,8 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
ATTR_REFRESH_TOKEN,
ATTR_WATCHDOG,
):
self._data[attr] = data[attr]
if attr in data:
self._data[attr] = data[attr]
@Job(
name="home_assistant_get_users",

View File

@@ -26,6 +26,7 @@ from ..exceptions import (
HomeAssistantWSError,
HomeAssistantWSNotSupported,
)
from ..utils.json import json_dumps
from .const import CLOSING_STATES, WSEvent, WSType
MIN_VERSION = {
@@ -74,7 +75,7 @@ class WSClient:
self._message_id += 1
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message)
await self._client.send_json(message, dumps=json_dumps)
except ConnectionError as err:
raise HomeAssistantWSConnectionError(err) from err
@@ -85,7 +86,7 @@ class WSClient:
self._futures[message["id"]] = self._loop.create_future()
_LOGGER.debug("Sending: %s", message)
try:
await self._client.send_json(message)
await self._client.send_json(message, dumps=json_dumps)
except ConnectionError as err:
raise HomeAssistantWSConnectionError(err) from err
@@ -163,7 +164,9 @@ class WSClient:
hello_message = await client.receive_json()
await client.send_json({ATTR_TYPE: WSType.AUTH, ATTR_ACCESS_TOKEN: token})
await client.send_json(
{ATTR_TYPE: WSType.AUTH, ATTR_ACCESS_TOKEN: token}, dumps=json_dumps
)
auth_ok_message = await client.receive_json()
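
aiohttp's `send_json` accepts a `dumps` callable, letting the websocket client serialize with the supervisor's own encoder. A stand-in using stdlib `json` with compact separators (the real `json_dumps` helper presumably wraps a faster encoder):

```python
import json
from functools import partial

# Stand-in for supervisor's utils.json.json_dumps helper
json_dumps = partial(json.dumps, separators=(",", ":"))

# With an aiohttp ClientWebSocketResponse `client`:
#     await client.send_json({"type": "auth", "access_token": token}, dumps=json_dumps)
print(json_dumps({"type": "auth"}))  # {"type":"auth"}
```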

View File

@@ -1,6 +1,8 @@
"""AppArmor control for host."""
from __future__ import annotations
from contextlib import suppress
import errno
import logging
from pathlib import Path
import shutil
@@ -9,7 +11,7 @@ from awesomeversion import AwesomeVersion
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import DBusError, HostAppArmorError
from ..resolution.const import UnsupportedReason
from ..resolution.const import UnhealthyReason, UnsupportedReason
from ..utils.apparmor import validate_profile
from .const import HostFeature
@@ -61,10 +63,8 @@ class AppArmorControl(CoreSysAttributes):
# Load profiles
if self.available:
for profile_name in self._profiles:
try:
with suppress(HostAppArmorError):
await self._load_profile(profile_name)
except HostAppArmorError:
pass
else:
_LOGGER.warning("AppArmor is not enabled on host")
@@ -80,6 +80,8 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(shutil.copyfile, profile_file, dest_profile)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise HostAppArmorError(
f"Can't copy {profile_file}: {err}", _LOGGER.error
) from err
@@ -103,6 +105,8 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(profile_file.unlink)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise HostAppArmorError(
f"Can't remove profile: {err}", _LOGGER.error
) from err
@@ -117,6 +121,8 @@ class AppArmorControl(CoreSysAttributes):
try:
await self.sys_run_in_executor(shutil.copy, profile_file, backup_file)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise HostAppArmorError(
f"Can't backup profile {profile_name}: {err}", _LOGGER.error
) from err

View File

@@ -69,8 +69,7 @@ class LogsControl(CoreSysAttributes):
)
async def get_boot_id(self, offset: int = 0) -> str:
"""
Get ID of a boot by offset.
"""Get ID of a boot by offset.
Current boot is offset = 0, negative numbers go that many in the past.
Positive numbers count up from the oldest boot.

View File

@@ -155,11 +155,10 @@ class SoundControl(CoreSysAttributes):
stream = pulse.source_output_info(index)
else:
stream = pulse.source_info(index)
elif application:
stream = pulse.sink_input_info(index)
else:
if application:
stream = pulse.sink_input_info(index)
else:
stream = pulse.sink_info(index)
stream = pulse.sink_info(index)
# Set volume
pulse.volume_set_all_chans(stream, volume)
@@ -190,11 +189,10 @@ class SoundControl(CoreSysAttributes):
stream = pulse.source_output_info(index)
else:
stream = pulse.source_info(index)
elif application:
stream = pulse.sink_input_info(index)
else:
if application:
stream = pulse.sink_input_info(index)
else:
stream = pulse.sink_info(index)
stream = pulse.sink_info(index)
# Mute stream
pulse.mute(stream, mute)

View File

@@ -1,7 +1,11 @@
"""Supervisor job manager."""
from collections.abc import Callable
import asyncio
from collections.abc import Awaitable, Callable
from contextlib import contextmanager
from contextvars import Context, ContextVar, Token
from dataclasses import dataclass
from datetime import datetime
import logging
from typing import Any
from uuid import UUID, uuid4
@@ -10,8 +14,9 @@ from attrs import Attribute, define, field
from attrs.setters import convert as attr_convert, frozen, validate as attr_validate
from attrs.validators import ge, le
from ..const import BusEvent
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import JobNotFound, JobStartException
from ..exceptions import HassioError, JobNotFound, JobStartException
from ..homeassistant.const import WSEvent
from ..utils.common import FileConfiguration
from ..utils.sentry import capture_exception
@@ -27,6 +32,14 @@ _CURRENT_JOB: ContextVar[UUID] = ContextVar("current_job")
_LOGGER: logging.Logger = logging.getLogger(__name__)
@dataclass
class JobSchedulerOptions:
"""Options for scheduling a job."""
start_at: datetime | None = None
delayed_start: float = 0 # Ignored if start_at is set
def _remove_current_job(context: Context) -> Context:
"""Remove the current job from the context."""
context.run(_CURRENT_JOB.set, None)
@@ -48,11 +61,29 @@ def _on_change(instance: "SupervisorJob", attribute: Attribute, value: Any) -> A
return value
def _invalid_if_started(instance: "SupervisorJob", *_) -> None:
"""Validate that job has not been started."""
if instance.done is not None:
raise ValueError("Field cannot be updated once job has started")
@define
class SupervisorJobError:
"""Representation of an error occurring during a supervisor job."""
type_: type[HassioError] = HassioError
message: str = "Unknown error, see supervisor logs"
def as_dict(self) -> dict[str, str]:
"""Return dictionary representation."""
return {"type": self.type_.__name__, "message": self.message}
@define
class SupervisorJob:
"""Representation of a job running in supervisor."""
name: str = field(on_setattr=frozen)
name: str | None = field(default=None, validator=[_invalid_if_started])
reference: str | None = field(default=None, on_setattr=_on_change)
progress: float = field(
default=0,
@@ -65,13 +96,17 @@ class SupervisorJob:
)
uuid: UUID = field(init=False, factory=lambda: uuid4().hex, on_setattr=frozen)
parent_id: UUID | None = field(
init=False, factory=lambda: _CURRENT_JOB.get(None), on_setattr=frozen
factory=lambda: _CURRENT_JOB.get(None), on_setattr=frozen
)
done: bool | None = field(init=False, default=None, on_setattr=_on_change)
on_change: Callable[["SupervisorJob", Attribute, Any], None] | None = field(
default=None, on_setattr=frozen
)
internal: bool = field(default=False, on_setattr=frozen)
internal: bool = field(default=False)
errors: list[SupervisorJobError] = field(
init=False, factory=list, on_setattr=_on_change
)
release_event: asyncio.Event | None = None
def as_dict(self) -> dict[str, Any]:
"""Return dictionary representation."""
@@ -83,8 +118,17 @@ class SupervisorJob:
"stage": self.stage,
"done": self.done,
"parent_id": self.parent_id,
"errors": [err.as_dict() for err in self.errors],
}
def capture_error(self, err: HassioError | None = None) -> None:
"""Capture an error or record that an unknown error has occurred."""
if err:
new_error = SupervisorJobError(type(err), str(err))
else:
new_error = SupervisorJobError()
self.errors += [new_error]
@contextmanager
def start(self):
"""Start the job in the current task.
@@ -156,17 +200,27 @@ class JobManager(FileConfiguration, CoreSysAttributes):
def _notify_on_job_change(
self, job: SupervisorJob, attribute: Attribute, value: Any
) -> None:
"""Notify Home Assistant of a change to a job."""
"""Notify Home Assistant of a change to a job and bus on job start/end."""
if attribute.name == "errors":
value = [err.as_dict() for err in value]
self.sys_homeassistant.websocket.supervisor_event(
WSEvent.JOB, job.as_dict() | {attribute.alias: value}
WSEvent.JOB, job.as_dict() | {attribute.name: value}
)
if attribute.name == "done":
if value is False:
self.sys_bus.fire_event(BusEvent.SUPERVISOR_JOB_START, job.uuid)
if value is True:
self.sys_bus.fire_event(BusEvent.SUPERVISOR_JOB_END, job.uuid)
def new_job(
self,
name: str,
name: str | None = None,
reference: str | None = None,
initial_stage: str | None = None,
internal: bool = False,
no_parent: bool = False,
) -> SupervisorJob:
"""Create a new job."""
job = SupervisorJob(
@@ -175,6 +229,7 @@ class JobManager(FileConfiguration, CoreSysAttributes):
stage=initial_stage,
on_change=None if internal else self._notify_on_job_change,
internal=internal,
**({"parent_id": None} if no_parent else {}),
)
self._jobs[job.uuid] = job
return job
@@ -194,3 +249,30 @@ class JobManager(FileConfiguration, CoreSysAttributes):
_LOGGER.warning("Removing incomplete job %s from job manager", job.name)
del self._jobs[job.uuid]
# Clean up any completed sub jobs of this one
for sub_job in self.jobs:
if sub_job.parent_id == job.uuid and job.done:
self.remove_job(sub_job)
def schedule_job(
self,
job_method: Callable[..., Awaitable[Any]],
options: JobSchedulerOptions,
*args,
**kwargs,
) -> tuple[SupervisorJob, asyncio.Task | asyncio.TimerHandle]:
"""Schedule a job to run later and return job and task or timer handle."""
job = self.new_job(no_parent=True)
def _wrap_task() -> asyncio.Task:
return self.sys_create_task(
job_method(*args, _job__use_existing=job, **kwargs)
)
if options.start_at:
return (job, self.sys_call_at(options.start_at, _wrap_task))
if options.delayed_start:
return (job, self.sys_call_later(options.delayed_start, _wrap_task))
return (job, _wrap_task())
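
`schedule_job` pre-creates a parentless job record and hands the wrapped method a `_job__use_existing` hint, so the eventual run fills in that record whether it starts now, after a delay, or at a wall-clock time. A reduced sketch of the dispatch (the wall-clock conversion stands in for `sys_call_at`):

```python
import asyncio
from dataclasses import dataclass
from datetime import datetime


@dataclass
class JobSchedulerOptions:
    start_at: datetime | None = None
    delayed_start: float = 0  # ignored if start_at is set


def schedule(loop: asyncio.AbstractEventLoop, coro_factory, options: JobSchedulerOptions):
    """Return an asyncio.Task or TimerHandle depending on the options."""

    def _wrap_task() -> asyncio.Task:
        return loop.create_task(coro_factory())

    if options.start_at:
        # Convert wall-clock time to a delay; the real code delegates
        # this to a sys_call_at helper
        delay = (options.start_at - datetime.now()).total_seconds()
        return loop.call_later(max(delay, 0), _wrap_task)
    if options.delayed_start:
        return loop.call_later(options.delayed_start, _wrap_task)
    return _wrap_task()


async def main() -> None:
    loop = asyncio.get_running_loop()

    async def work() -> None:
        print("job ran")

    schedule(loop, work, JobSchedulerOptions(delayed_start=0.01))
    await asyncio.sleep(0.05)


asyncio.run(main())
```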

View File

@@ -9,6 +9,7 @@ FILE_CONFIG_JOBS = Path(SUPERVISOR_DATA, "jobs.json")
ATTR_IGNORE_CONDITIONS = "ignore_conditions"
JOB_GROUP_ADDON = "addon_{slug}"
JOB_GROUP_BACKUP = "backup_{slug}"
JOB_GROUP_BACKUP_MANAGER = "backup_manager"
JOB_GROUP_DOCKER_INTERFACE = "container_{name}"
JOB_GROUP_HOME_ASSISTANT_CORE = "home_assistant_core"

View File

@@ -1,6 +1,7 @@
"""Job decorator."""
import asyncio
from collections.abc import Callable
from contextlib import suppress
from datetime import datetime, timedelta
from functools import wraps
import logging
@@ -17,6 +18,7 @@ from ..exceptions import (
from ..host.const import HostFeature
from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
from ..utils.sentry import capture_exception
from . import SupervisorJob
from .const import JobCondition, JobExecutionLimit
from .job_group import JobGroup
@@ -145,10 +147,8 @@ class Job(CoreSysAttributes):
def _post_init(self, obj: JobGroup | CoreSysAttributes) -> JobGroup | None:
"""Runtime init."""
# Coresys
try:
with suppress(AttributeError):
self.coresys = obj.coresys
except AttributeError:
pass
if not self.coresys:
raise RuntimeError(f"Job on {self.name} need to be an coresys object!")
@@ -157,22 +157,23 @@ class Job(CoreSysAttributes):
self._lock = asyncio.Semaphore()
# Job groups
if self.limit in (
try:
is_job_group = obj.acquire and obj.release
except AttributeError:
is_job_group = False
if not is_job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
JobExecutionLimit.GROUP_THROTTLE,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
try:
_ = obj.acquire and obj.release
except AttributeError:
raise RuntimeError(
f"Job on {self.name} need to be a JobGroup to use group based limits!"
) from None
raise RuntimeError(
f"Job on {self.name} need to be a JobGroup to use group based limits!"
) from None
return obj
return None
return obj if is_job_group else None
def _handle_job_condition_exception(self, err: JobConditionException) -> None:
"""Handle a job condition failure."""
@@ -187,7 +188,13 @@ class Job(CoreSysAttributes):
self._method = method
@wraps(method)
async def wrapper(obj: JobGroup | CoreSysAttributes, *args, **kwargs) -> Any:
async def wrapper(
obj: JobGroup | CoreSysAttributes,
*args,
_job__use_existing: SupervisorJob | None = None,
_job_override__cleanup: bool | None = None,
**kwargs,
) -> Any:
"""Wrap the method.
This method must be on an instance of CoreSysAttributes. If a JOB_GROUP limit
@@ -195,11 +202,18 @@ class Job(CoreSysAttributes):
"""
job_group = self._post_init(obj)
group_name: str | None = job_group.group_name if job_group else None
job = self.sys_jobs.new_job(
self.name,
job_group.job_reference if job_group else None,
internal=self._internal,
)
if _job__use_existing:
job = _job__use_existing
job.name = self.name
job.internal = self._internal
if job_group:
job.reference = job_group.job_reference
else:
job = self.sys_jobs.new_job(
self.name,
job_group.job_reference if job_group else None,
internal=self._internal,
)
try:
# Handle condition
@@ -293,9 +307,11 @@ class Job(CoreSysAttributes):
except JobConditionException as err:
return self._handle_job_condition_exception(err)
except HassioError as err:
job.capture_error(err)
raise err
except Exception as err:
_LOGGER.exception("Unhandled exception: %s", err)
job.capture_error()
capture_exception(err)
raise JobException() from err
finally:
@@ -308,7 +324,12 @@ class Job(CoreSysAttributes):
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required
finally:
if job.done is None or self.cleanup:
if (
job.done is None
or _job_override__cleanup
or _job_override__cleanup is None
and self.cleanup
):
self.sys_jobs.remove_job(job)
return wrapper
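
The decorator now peels two private keyword arguments off every wrapped call: `_job__use_existing` to adopt a pre-created job and `_job_override__cleanup` to override the cleanup default per call. The keyword-capture pattern in isolation:

```python
import asyncio
from functools import wraps


def job(cleanup: bool = True):
    def decorator(method):
        @wraps(method)
        async def wrapper(*args, _job_override__cleanup: bool | None = None, **kwargs):
            try:
                return await method(*args, **kwargs)
            finally:
                # Per-call override wins; otherwise use the decorator default
                do_cleanup = (
                    _job_override__cleanup
                    if _job_override__cleanup is not None
                    else cleanup
                )
                print("cleanup:", do_cleanup)

        return wrapper

    return decorator


@job(cleanup=True)
async def restart() -> None:
    pass


# Caller suppresses cleanup for this one invocation, as the restore
# path does with _job_override__cleanup=False
asyncio.run(restart(_job_override__cleanup=False))
```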

View File

@@ -2,9 +2,9 @@
from asyncio import Lock
from . import SupervisorJob
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import JobException, JobGroupExecutionLimitExceeded
from . import SupervisorJob
class JobGroup(CoreSysAttributes):

View File

@@ -5,7 +5,6 @@ from datetime import date, datetime, time, timedelta
import logging
from uuid import UUID, uuid4
import async_timeout
import attr
from ..const import CoreState
@@ -74,7 +73,7 @@ class Scheduler(CoreSysAttributes):
def _schedule_task(self, task: _Task) -> None:
"""Schedule a task on loop."""
if isinstance(task.interval, (int, float)):
task.next = self.sys_loop.call_later(task.interval, self._run_task, task)
task.next = self.sys_call_later(task.interval, self._run_task, task)
elif isinstance(task.interval, time):
today = datetime.combine(date.today(), task.interval)
tomorrow = datetime.combine(date.today() + timedelta(days=1), task.interval)
@@ -85,7 +84,7 @@ class Scheduler(CoreSysAttributes):
else:
calc = tomorrow
task.next = self.sys_loop.call_at(calc.timestamp(), self._run_task, task)
task.next = self.sys_call_at(calc, self._run_task, task)
else:
_LOGGER.critical(
"Unknown interval %s (type: %s) for scheduler %s",
@@ -113,7 +112,7 @@ class Scheduler(CoreSysAttributes):
# Wait until all are shutdown
try:
async with async_timeout.timeout(timeout):
async with asyncio.timeout(timeout):
await asyncio.wait(running)
except TimeoutError:
_LOGGER.error("Timeout while waiting for jobs shutdown")

View File

@@ -1,6 +1,7 @@
"""A collection of tasks."""
import asyncio
from collections.abc import Awaitable
from datetime import timedelta
import logging
from ..addons.const import ADDON_UPDATE_CONDITIONS
@@ -10,11 +11,15 @@ from ..exceptions import AddonsError, HomeAssistantError, ObserverError
from ..homeassistant.const import LANDINGPAGE
from ..jobs.decorator import Job, JobCondition
from ..plugins.const import PLUGIN_UPDATE_CONDITIONS
from ..utils.dt import utcnow
from ..utils.sentry import capture_exception
_LOGGER: logging.Logger = logging.getLogger(__name__)
HASS_WATCHDOG_API = "HASS_WATCHDOG_API"
HASS_WATCHDOG_API_FAILURES = "HASS_WATCHDOG_API_FAILURES"
HASS_WATCHDOG_REANIMATE_FAILURES = "HASS_WATCHDOG_REANIMATE_FAILURES"
HASS_WATCHDOG_MAX_API_ATTEMPTS = 2
HASS_WATCHDOG_MAX_REANIMATE_ATTEMPTS = 5
RUN_UPDATE_SUPERVISOR = 29100
RUN_UPDATE_ADDONS = 57600
@@ -93,6 +98,16 @@ class Tasks(CoreSysAttributes):
# Evaluate available updates
if not addon.need_update:
continue
if not addon.auto_update_available:
_LOGGER.debug(
"Not updating add-on %s from %s to %s as that would cross a known breaking version",
addon.slug,
addon.version,
addon.latest_version,
)
# Delay auto-updates for a day in case of issues
if utcnow() + timedelta(days=1) > addon.latest_version_timestamp:
continue
if not addon.test_update_schema():
_LOGGER.warning(
"Add-on %s will be ignored, schema tests failed", addon.slug
@@ -154,26 +169,46 @@ class Tasks(CoreSysAttributes):
return
if await self.sys_homeassistant.api.check_api_state():
# Home Assistant is running properly
self._cache[HASS_WATCHDOG_REANIMATE_FAILURES] = 0
self._cache[HASS_WATCHDOG_API_FAILURES] = 0
return
# Give up after 5 reanimation failures in a row. Supervisor cannot fix this issue.
reanimate_fails = self._cache.get(HASS_WATCHDOG_REANIMATE_FAILURES, 0)
if reanimate_fails >= HASS_WATCHDOG_MAX_REANIMATE_ATTEMPTS:
if reanimate_fails == HASS_WATCHDOG_MAX_REANIMATE_ATTEMPTS:
_LOGGER.critical(
"Watchdog cannot reanimate Home Assistant Core, failed all %s attempts.",
reanimate_fails,
)
self._cache[HASS_WATCHDOG_REANIMATE_FAILURES] += 1
return
# Init cache data
retry_scan = self._cache.get(HASS_WATCHDOG_API, 0)
api_fails = self._cache.get(HASS_WATCHDOG_API_FAILURES, 0)
# Look like we run into a problem
retry_scan += 1
if retry_scan == 1:
self._cache[HASS_WATCHDOG_API] = retry_scan
_LOGGER.warning("Watchdog miss API response from Home Assistant")
api_fails += 1
if api_fails < HASS_WATCHDOG_MAX_API_ATTEMPTS:
self._cache[HASS_WATCHDOG_API_FAILURES] = api_fails
_LOGGER.warning("Watchdog missed an Home Assistant Core API response.")
return
_LOGGER.error("Watchdog found a problem with Home Assistant API!")
_LOGGER.error(
"Watchdog missed %s Home Assistant Core API responses in a row. Restarting Home Assistant Core API!",
HASS_WATCHDOG_MAX_API_ATTEMPTS,
)
try:
await self.sys_homeassistant.core.restart()
except HomeAssistantError as err:
_LOGGER.error("Home Assistant watchdog reanimation failed!")
capture_exception(err)
if reanimate_fails == 0:
capture_exception(err)
self._cache[HASS_WATCHDOG_REANIMATE_FAILURES] = reanimate_fails + 1
else:
self._cache[HASS_WATCHDOG_REANIMATE_FAILURES] = 0
finally:
self._cache[HASS_WATCHDOG_API] = 0
self._cache[HASS_WATCHDOG_API_FAILURES] = 0
@Job(name="tasks_update_cli", conditions=PLUGIN_AUTO_UPDATE_CONDITIONS)
async def _update_cli(self):
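
The watchdog now keeps two counters: consecutive missed API checks (restart only after `HASS_WATCHDOG_MAX_API_ATTEMPTS`) and consecutive failed reanimations (give up for good after five). A compressed sketch of the counting logic, omitting the restart call itself:

```python
MAX_API_ATTEMPTS = 2
MAX_REANIMATE_ATTEMPTS = 5
cache: dict[str, int] = {}


def on_api_check(healthy: bool) -> str:
    if healthy:
        cache["api_failures"] = 0
        cache["reanimate_failures"] = 0
        return "ok"
    if cache.get("reanimate_failures", 0) >= MAX_REANIMATE_ATTEMPTS:
        return "gave_up"  # supervisor cannot fix this; stop trying
    api_fails = cache.get("api_failures", 0) + 1
    if api_fails < MAX_API_ATTEMPTS:
        cache["api_failures"] = api_fails
        return "warn"  # a single miss may be transient
    cache["api_failures"] = 0
    return "restart"


print([on_api_check(h) for h in (False, False, True)])  # ['warn', 'restart', 'ok']
```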

View File

@@ -8,6 +8,7 @@ FILE_CONFIG_MOUNTS = PurePath("mounts.json")
ATTR_DEFAULT_BACKUP_MOUNT = "default_backup_mount"
ATTR_MOUNTS = "mounts"
ATTR_PATH = "path"
ATTR_READ_ONLY = "read_only"
ATTR_SERVER = "server"
ATTR_SHARE = "share"
ATTR_USAGE = "usage"

View File

@@ -145,16 +145,17 @@ class MountManager(FileConfiguration, CoreSysAttributes):
if not self.mounts:
return
await asyncio.wait(
[self.sys_create_task(mount.update()) for mount in self.mounts]
mounts = self.mounts.copy()
results = await asyncio.gather(
*[mount.update() for mount in mounts], return_exceptions=True
)
# Try to reload any newly failed mounts and report issues if failure persists
new_failures = [
mount
for mount in self.mounts
if mount.state != UnitActiveState.ACTIVE
and mount.failed_issue not in self.sys_resolution.issues
mounts[i]
for i in range(len(mounts))
if results[i] is not True
and mounts[i].failed_issue not in self.sys_resolution.issues
]
await self._mount_errors_to_issues(
new_failures, [mount.reload() for mount in new_failures]
@@ -297,6 +298,7 @@ class MountManager(FileConfiguration, CoreSysAttributes):
name=f"{'emergency' if emergency else 'bind'}_{mount.name}",
path=path,
where=where,
read_only=emergency,
),
emergency=emergency,
)
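
`MountManager.reload` swaps `asyncio.wait` over tasks for `asyncio.gather(..., return_exceptions=True)`, so each mount's outcome (True, False, or an exception) stays aligned with its index and new failures can be picked out positionally:

```python
import asyncio


async def update(name: str) -> bool:
    if name == "bad":
        raise OSError("mount is down")
    return name == "ok"


async def main() -> None:
    mounts = ["ok", "stale", "bad"]
    results = await asyncio.gather(
        *[update(m) for m in mounts], return_exceptions=True
    )
    # Anything that is not exactly True needs a reload attempt
    failures = [mounts[i] for i in range(len(mounts)) if results[i] is not True]
    print(failures)  # ['stale', 'bad']


asyncio.run(main())
```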

View File

@@ -18,10 +18,12 @@ from ..const import (
)
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_ACTIVE_STATE,
DBUS_ATTR_DESCRIPTION,
DBUS_ATTR_OPTIONS,
DBUS_ATTR_TYPE,
DBUS_ATTR_WHAT,
DBUS_IFACE_SYSTEMD_UNIT,
StartUnitMode,
StopUnitMode,
UnitActiveState,
@@ -39,6 +41,7 @@ from ..resolution.data import Issue
from ..utils.sentry import capture_exception
from .const import (
ATTR_PATH,
ATTR_READ_ONLY,
ATTR_SERVER,
ATTR_SHARE,
ATTR_USAGE,
@@ -81,7 +84,9 @@ class Mount(CoreSysAttributes, ABC):
def to_dict(self, *, skip_secrets: bool = True) -> MountData:
"""Return dictionary representation."""
return MountData(name=self.name, type=self.type, usage=self.usage)
return MountData(
name=self.name, type=self.type, usage=self.usage, read_only=self.read_only
)
@property
def name(self) -> str:
@@ -102,6 +107,11 @@ class Mount(CoreSysAttributes, ABC):
else None
)
@property
def read_only(self) -> bool:
"""Is mount read-only."""
return self._data.get(ATTR_READ_ONLY, False)
@property
@abstractmethod
def what(self) -> str:
@@ -113,9 +123,9 @@ class Mount(CoreSysAttributes, ABC):
"""Where to mount (on host)."""
@property
@abstractmethod
def options(self) -> list[str]:
"""List of options to use to mount."""
return ["ro"] if self.read_only else []
@property
def description(self) -> str:
@@ -154,23 +164,28 @@ class Mount(CoreSysAttributes, ABC):
"""Get issue used if this mount has failed."""
return Issue(IssueType.MOUNT_FAILED, ContextType.MOUNT, reference=self.name)
async def is_mounted(self) -> bool:
"""Return true if successfully mounted and available."""
return self.state == UnitActiveState.ACTIVE
def __eq__(self, other):
"""Return true if mounts are the same."""
return isinstance(other, Mount) and self.name == other.name
async def load(self) -> None:
"""Initialize object."""
await self._update_await_activating()
# If there's no mount unit, mount it to make one
if not self.unit:
if not await self._update_unit():
await self.mount()
return
# At this point any state besides active is treated as a failed mount, try to reload it
elif self.state != UnitActiveState.ACTIVE:
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
# If mount is not available, try to reload it
if not await self.is_mounted():
await self.reload()
async def update_state(self) -> None:
async def _update_state(self) -> UnitActiveState | None:
"""Update mount unit state."""
try:
self._state = await self.unit.get_active_state()
@@ -180,56 +195,66 @@ class Mount(CoreSysAttributes, ABC):
f"Could not get active state of mount due to: {err!s}"
) from err
async def update(self) -> None:
"""Update info about mount from dbus."""
async def _update_unit(self) -> SystemdUnit | None:
"""Get systemd unit from dbus."""
try:
self._unit = await self.sys_dbus.systemd.get_unit(self.unit_name)
except DBusSystemdNoSuchUnit:
self._unit = None
self._state = None
return
except DBusError as err:
capture_exception(err)
raise MountError(f"Could not get mount unit due to: {err!s}") from err
return self.unit
await self.update_state()
async def update(self) -> bool:
"""Update info about mount from dbus. Return true if it is mounted and available."""
if not await self._update_unit():
return False
await self._update_state()
# If active, dismiss corresponding failed mount issue if found
if (
self.state == UnitActiveState.ACTIVE
and self.failed_issue in self.sys_resolution.issues
):
mounted := await self.is_mounted()
) and self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
async def _update_state_await(self, expected_states: list[UnitActiveState]) -> None:
"""Update state info about mount from dbus. Wait up to 30 seconds for the state to appear."""
for i in range(5):
await self.update_state()
if self.state in expected_states:
return
await asyncio.sleep(i**2)
return mounted
_LOGGER.warning(
"Mount %s still in state %s after waiting for 30 seconods to complete",
self.name,
str(self.state).lower(),
)
async def _update_state_await(
self,
expected_states: list[UnitActiveState] | None = None,
not_state: UnitActiveState = UnitActiveState.ACTIVATING,
) -> None:
"""Update state info about mount from dbus. Wait for one of expected_states to appear or state to change from not_state."""
if not self.unit:
return
async def _update_await_activating(self):
"""Update info about mount from dbus. If 'activating' wait up to 30 seconds."""
await self.update()
try:
async with asyncio.timeout(30), self.unit.properties_changed() as signal:
await self._update_state()
while (
expected_states
and self.state not in expected_states
or not expected_states
and self.state == not_state
):
prop_change_signal = await signal.wait_for_signal()
if (
prop_change_signal[0] == DBUS_IFACE_SYSTEMD_UNIT
and DBUS_ATTR_ACTIVE_STATE in prop_change_signal[1]
):
self._state = prop_change_signal[1][
DBUS_ATTR_ACTIVE_STATE
].value
# If we're still activating, give it up to 30 seconds to finish
if self.state == UnitActiveState.ACTIVATING:
_LOGGER.info(
"Mount %s still activating, waiting up to 30 seconds to complete",
except TimeoutError:
_LOGGER.warning(
"Mount %s still in state %s after waiting for 30 seconds to complete",
self.name,
str(self.state).lower(),
)
for _ in range(3):
await asyncio.sleep(10)
await self.update()
if self.state != UnitActiveState.ACTIVATING:
break
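The reworked `_update_state_await` swaps fixed-interval polling for a D-Bus `PropertiesChanged` subscription bounded by `asyncio.timeout(30)` (Python 3.11+). The core wait-for-state-change loop, reduced to plain asyncio with a queue standing in for the dbus-fast signal stream:

```python
import asyncio


async def wait_until_not(states: asyncio.Queue, not_state: str) -> str:
    """Consume state-change events until the unit leaves `not_state`."""
    state = not_state
    try:
        async with asyncio.timeout(30):  # bound the whole wait, not each event
            while state == not_state:
                # stand-in for `await signal.wait_for_signal()` on dbus
                state = await states.get()
    except TimeoutError:
        print(f"still in state {state} after waiting 30 seconds")
    return state


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    queue.put_nowait("activating")
    queue.put_nowait("active")
    print(await wait_until_not(queue, "activating"))  # active


asyncio.run(main())
```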
async def mount(self) -> None:
"""Mount using systemd."""
@@ -274,9 +299,10 @@ class Mount(CoreSysAttributes, ABC):
f"Could not mount {self.name} due to: {err!s}", _LOGGER.error
) from err
await self._update_await_activating()
if await self._update_unit():
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
if self.state != UnitActiveState.ACTIVE:
if not await self.is_mounted():
raise MountActivationError(
f"Mounting {self.name} did not succeed. Check host logs for errors from mount or systemd unit {self.unit_name} for details.",
_LOGGER.error,
@@ -284,8 +310,11 @@ class Mount(CoreSysAttributes, ABC):
async def unmount(self) -> None:
"""Unmount using systemd."""
await self.update()
if not await self._update_unit():
_LOGGER.info("Mount %s is not mounted, skipping unmount", self.name)
return
await self._update_state()
try:
if self.state != UnitActiveState.FAILED:
await self.sys_dbus.systemd.stop_unit(self.unit_name, StopUnitMode.FAIL)
@@ -296,8 +325,6 @@ class Mount(CoreSysAttributes, ABC):
if self.state == UnitActiveState.FAILED:
await self.sys_dbus.systemd.reset_failed_unit(self.unit_name)
except DBusSystemdNoSuchUnit:
_LOGGER.info("Mount %s is not mounted, skipping unmount", self.name)
except DBusError as err:
raise MountError(
f"Could not unmount {self.name} due to: {err!s}", _LOGGER.error
@@ -321,9 +348,10 @@ class Mount(CoreSysAttributes, ABC):
f"Could not reload mount {self.name} due to: {err!s}", _LOGGER.error
) from err
await self._update_await_activating()
if await self._update_unit():
await self._update_state_await(not_state=UnitActiveState.ACTIVATING)
if self.state != UnitActiveState.ACTIVE:
if not await self.is_mounted():
raise MountActivationError(
f"Reloading {self.name} did not succeed. Check host logs for errors from mount or systemd unit {self.unit_name} for details.",
_LOGGER.error,
@@ -358,7 +386,16 @@ class NetworkMount(Mount, ABC):
@property
def options(self) -> list[str]:
"""Options to use to mount."""
return [f"port={self.port}"] if self.port else []
options = super().options
if self.port:
options.append(f"port={self.port}")
return options
async def is_mounted(self) -> bool:
"""Return true if successfully mounted and available."""
return self.state == UnitActiveState.ACTIVE and await self.sys_run_in_executor(
self.local_where.is_mount
)
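`NetworkMount.is_mounted` above combines the systemd unit state with a filesystem check; since `Path.is_mount` makes blocking syscalls, it is pushed to an executor. A self-contained sketch of that executor pattern:

```python
import asyncio
from pathlib import Path


async def is_mounted(where: Path) -> bool:
    """Check a mountpoint without blocking the event loop."""
    loop = asyncio.get_running_loop()
    # Path.is_mount does blocking stat() calls, so run it in a worker thread
    return await loop.run_in_executor(None, where.is_mount)


print(asyncio.run(is_mounted(Path("/"))))  # True on most Unix systems
```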
class CIFSMount(NetworkMount):
@@ -484,6 +521,7 @@ class BindMount(Mount):
path: Path,
usage: MountUsage | None = None,
where: PurePath | None = None,
read_only: bool = False,
) -> "BindMount":
"""Create a new bind mount instance."""
return BindMount(
@@ -493,6 +531,7 @@ class BindMount(Mount):
type=MountType.BIND,
path=path.as_posix(),
usage=usage and usage.value,
read_only=read_only,
),
where=where,
)
@@ -523,4 +562,4 @@ class BindMount(Mount):
@property
def options(self) -> list[str]:
"""List of options to use to mount."""
return ["bind"]
return super().options + ["bind"]
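Mount options are now composed across the class hierarchy: the base `Mount.options` contributes `ro` when the mount is read-only, and subclasses extend that list rather than replacing it. A stripped-down sketch of the composition (only the option plumbing, none of the real class's dbus wiring):

```python
class Mount:
    def __init__(self, read_only: bool = False) -> None:
        self.read_only = read_only

    @property
    def options(self) -> list[str]:
        """Base options: only the read-only flag."""
        return ["ro"] if self.read_only else []


class BindMount(Mount):
    @property
    def options(self) -> list[str]:
        """Bind mounts always pass 'bind', plus whatever the base adds."""
        return super().options + ["bind"]


print(BindMount(read_only=True).options)  # ['ro', 'bind']
print(BindMount().options)  # ['bind']
```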

View File

@@ -1,7 +1,7 @@
"""Validation for mount manager."""
import re
from typing import NotRequired, TypedDict
from typing import Any, NotRequired, TypedDict
import voluptuous as vol
@@ -18,6 +18,7 @@ from .const import (
ATTR_DEFAULT_BACKUP_MOUNT,
ATTR_MOUNTS,
ATTR_PATH,
ATTR_READ_ONLY,
ATTR_SERVER,
ATTR_SHARE,
ATTR_USAGE,
@@ -26,6 +27,18 @@ from .const import (
MountUsage,
)
def usage_specific_validation(config: dict[str, Any]) -> dict[str, Any]:
"""Validate config based on usage."""
# Backup mounts cannot be read only
if config[ATTR_USAGE] == MountUsage.BACKUP and config[ATTR_READ_ONLY]:
raise vol.Invalid("Backup mounts cannot be read only")
return config
# pylint: disable=no-value-for-parameter
RE_MOUNT_NAME = re.compile(r"^[A-Za-z0-9_]+$")
RE_PATH_PART = re.compile(r"^[^\\\/]+")
RE_MOUNT_OPTION = re.compile(r"^[^,=]+$")
@@ -41,6 +54,7 @@ _SCHEMA_BASE_MOUNT_CONFIG = vol.Schema(
vol.In([MountType.CIFS.value, MountType.NFS.value]), vol.Coerce(MountType)
),
vol.Required(ATTR_USAGE): vol.Coerce(MountUsage),
vol.Optional(ATTR_READ_ONLY, default=False): vol.Boolean(),
},
extra=vol.REMOVE_EXTRA,
)
@@ -71,7 +85,9 @@ SCHEMA_MOUNT_NFS = _SCHEMA_MOUNT_NETWORK.extend(
}
)
SCHEMA_MOUNT_CONFIG = vol.Any(SCHEMA_MOUNT_CIFS, SCHEMA_MOUNT_NFS)
SCHEMA_MOUNT_CONFIG = vol.All(
vol.Any(SCHEMA_MOUNT_CIFS, SCHEMA_MOUNT_NFS), usage_specific_validation
)
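`usage_specific_validation` shows the voluptuous idiom for cross-field rules: `vol.All` chains the per-type schemas with a plain callable that receives the already-validated dict and may raise `vol.Invalid`. A simplified, runnable version using bare strings instead of the `MountUsage` enum:

```python
from typing import Any

import voluptuous as vol


def usage_specific_validation(config: dict[str, Any]) -> dict[str, Any]:
    """Reject combinations no single-field validator can express."""
    if config["usage"] == "backup" and config["read_only"]:
        raise vol.Invalid("Backup mounts cannot be read only")
    return config


SCHEMA = vol.All(
    vol.Schema(
        {
            vol.Required("usage"): str,
            vol.Optional("read_only", default=False): bool,
        }
    ),
    usage_specific_validation,  # runs after field-level validation
)

print(SCHEMA({"usage": "media", "read_only": True}))  # passes
# SCHEMA({"usage": "backup", "read_only": True})  # raises vol.Invalid
```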
SCHEMA_MOUNTS_CONFIG = vol.Schema(
{
@@ -86,6 +102,7 @@ class MountData(TypedDict):
name: str
type: str
read_only: bool
usage: NotRequired[str]
# CIFS and NFS fields

View File

@@ -1,5 +1,6 @@
"""OS support on supervisor."""
from collections.abc import Awaitable
import errno
import logging
from pathlib import Path
@@ -13,6 +14,7 @@ from ..dbus.rauc import RaucState
from ..exceptions import DBusError, HassOSJobError, HassOSUpdateError
from ..jobs.const import JobCondition, JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from .data_disk import DataDisk
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -120,6 +122,8 @@ class OSManager(CoreSysAttributes):
) from err
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise HassOSUpdateError(
f"Can't write OTA file: {err!s}", _LOGGER.error
) from err
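This hunk is one instance of a pattern repeated throughout the diff: an `OSError` with `errno.EBADMSG` usually means the kernel hit on-disk corruption, so the handler flags the whole system unhealthy before surfacing the error. A minimal sketch, with a list standing in for `sys_resolution.unhealthy`:

```python
import errno

UNHEALTHY: list[str] = []  # stand-in for sys_resolution.unhealthy


def write_ota(path: str, data: bytes) -> None:
    """Write a file, flagging filesystem corruption as unhealthy."""
    try:
        with open(path, "wb") as ota:
            ota.write(data)
    except OSError as err:
        # EBADMSG from the kernel typically indicates on-disk corruption
        if err.errno == errno.EBADMSG:
            UNHEALTHY.append("oserror_bad_message")
        raise
```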

View File

@@ -4,6 +4,7 @@ Code: https://github.com/home-assistant/plugin-audio
"""
import asyncio
from contextlib import suppress
import errno
import logging
from pathlib import Path, PurePath
import shutil
@@ -25,6 +26,7 @@ from ..exceptions import (
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils.json import write_json_file
from ..utils.sentry import capture_exception
from .base import PluginBase
@@ -83,6 +85,9 @@ class PluginAudio(PluginBase):
PULSE_CLIENT_TMPL.read_text(encoding="utf-8")
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't read pulse-client.tmpl: %s", err)
await super().load()
@@ -93,6 +98,8 @@ class PluginAudio(PluginBase):
try:
shutil.copy(ASOUND_TMPL, asound)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't create default asound: %s", err)
async def install(self) -> None:

View File

@@ -102,7 +102,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
async def watchdog_container(self, event: DockerContainerStateEvent) -> None:
"""Process state changes in plugin container and restart if necessary."""
if not (event.name == self.instance.name):
if event.name != self.instance.name:
return
if event.state in {ContainerState.FAILED, ContainerState.UNHEALTHY}:

View File

@@ -66,7 +66,7 @@ class PluginCli(PluginBase):
image=self.sys_updater.image_cli,
)
break
_LOGGER.warning("Error on install cli plugin. Retry in 30sec")
_LOGGER.warning("Error on install cli plugin. Retrying in 30sec")
await asyncio.sleep(30)
_LOGGER.info("CLI plugin is now installed")

View File

@@ -4,6 +4,7 @@ Code: https://github.com/home-assistant/plugin-dns
"""
import asyncio
from contextlib import suppress
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path
@@ -29,7 +30,7 @@ from ..exceptions import (
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.json import write_json_file
from ..utils.sentry import capture_exception
from ..validate import dns_url
@@ -146,12 +147,16 @@ class PluginDns(PluginBase):
RESOLV_TMPL.read_text(encoding="utf-8")
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't read resolve.tmpl: %s", err)
try:
self.hosts_template = jinja2.Template(
HOSTS_TMPL.read_text(encoding="utf-8")
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.error("Can't read hosts.tmpl: %s", err)
await self._init_hosts()
@@ -175,7 +180,7 @@ class PluginDns(PluginBase):
self.latest_version, image=self.sys_updater.image_dns
)
break
_LOGGER.warning("Error on install CoreDNS plugin. Retry in 30sec")
_LOGGER.warning("Error on install CoreDNS plugin. Retrying in 30sec")
await asyncio.sleep(30)
_LOGGER.info("CoreDNS plugin now installed")
@@ -364,6 +369,8 @@ class PluginDns(PluginBase):
self.hosts.write_text, data, encoding="utf-8"
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise CoreDNSError(f"Can't update hosts: {err}", _LOGGER.error) from err
async def add_host(
@@ -436,6 +443,12 @@ class PluginDns(PluginBase):
def _write_resolv(self, resolv_conf: Path) -> None:
"""Update/Write resolv.conf file."""
if not self.resolv_template:
_LOGGER.warning(
"Resolv template is missing, cannot write/update %s", resolv_conf
)
return
nameservers = [str(self.sys_docker.network.dns), "127.0.0.11"]
# Read resolv config
@@ -445,6 +458,8 @@ class PluginDns(PluginBase):
try:
resolv_conf.write_text(data)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
_LOGGER.warning("Can't write/update %s: %s", resolv_conf, err)
return

View File

@@ -62,7 +62,7 @@ class PluginMulticast(PluginBase):
self.latest_version, image=self.sys_updater.image_multicast
)
break
_LOGGER.warning("Error on install Multicast plugin. Retry in 30sec")
_LOGGER.warning("Error on install Multicast plugin. Retrying in 30sec")
await asyncio.sleep(30)
_LOGGER.info("Multicast plugin is now installed")

View File

@@ -70,7 +70,7 @@ class PluginObserver(PluginBase):
self.latest_version, image=self.sys_updater.image_observer
)
break
_LOGGER.warning("Error on install observer plugin. Retry in 30sec")
_LOGGER.warning("Error on install observer plugin. Retrying in 30sec")
await asyncio.sleep(30)
_LOGGER.info("observer plugin now installed")

View File

@@ -25,12 +25,15 @@ class CheckBackups(CheckBase):
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
return 0 == len(
[
backup
for backup in self.sys_backups.list_backups
if backup.sys_type == BackupType.FULL and backup.is_current
]
return (
len(
[
backup
for backup in self.sys_backups.list_backups
if backup.sys_type == BackupType.FULL and backup.is_current
]
)
== 0
)
@property

View File

@@ -26,12 +26,15 @@ class CheckFreeSpace(CheckBase):
return
suggestions: list[SuggestionType] = []
if MINIMUM_FULL_BACKUPS < len(
[
backup
for backup in self.sys_backups.list_backups
if backup.sys_type == BackupType.FULL
]
if (
len(
[
backup
for backup in self.sys_backups.list_backups
if backup.sys_type == BackupType.FULL
]
)
> MINIMUM_FULL_BACKUPS
):
suggestions.append(SuggestionType.CLEAR_FULL_BACKUP)

View File

@@ -59,9 +59,10 @@ class UnhealthyReason(StrEnum):
"""Reasons for unsupported status."""
DOCKER = "docker"
OSERROR_BAD_MESSAGE = "oserror_bad_message"
PRIVILEGED = "privileged"
SUPERVISOR = "supervisor"
SETUP = "setup"
PRIVILEGED = "privileged"
UNTRUSTED = "untrusted"

View File

@@ -28,10 +28,9 @@ class EvaluateBase(ABC, CoreSysAttributes):
self.on_failure,
self.reason,
)
else:
if self.reason in self.sys_resolution.unsupported:
_LOGGER.info("Clearing %s as reason for unsupported", self.reason)
self.sys_resolution.dismiss_unsupported(self.reason)
elif self.reason in self.sys_resolution.unsupported:
_LOGGER.info("Clearing %s as reason for unsupported", self.reason)
self.sys_resolution.dismiss_unsupported(self.reason)
@abstractmethod
async def evaluate(self) -> bool:

View File

@@ -31,10 +31,14 @@ class EvaluateDNSServer(EvaluateBase):
async def evaluate(self) -> None:
"""Run evaluation."""
return not self.sys_plugins.dns.fallback and 0 < len(
[
issue
for issue in self.sys_resolution.issues
if issue.context == ContextType.DNS_SERVER
]
return (
not self.sys_plugins.dns.fallback
and len(
[
issue
for issue in self.sys_resolution.issues
if issue.context == ContextType.DNS_SERVER
]
)
> 0
)

View File

@@ -1,4 +1,5 @@
"""Evaluation class for Content Trust."""
import errno
import logging
from pathlib import Path
@@ -6,7 +7,7 @@ from ...const import CoreState
from ...coresys import CoreSys
from ...exceptions import CodeNotaryError, CodeNotaryUntrusted
from ...utils.codenotary import calc_checksum_path_sourcecode
from ..const import ContextType, IssueType, UnsupportedReason
from ..const import ContextType, IssueType, UnhealthyReason, UnsupportedReason
from .base import EvaluateBase
_SUPERVISOR_SOURCE = Path("/usr/src/supervisor/supervisor")
@@ -48,6 +49,9 @@ class EvaluateSourceMods(EvaluateBase):
calc_checksum_path_sourcecode, _SUPERVISOR_SOURCE
)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
self.sys_resolution.create_issue(
IssueType.CORRUPT_FILESYSTEM, ContextType.SYSTEM
)

View File

@@ -23,7 +23,7 @@ class FixupSystemClearFullBackup(FixupBase):
x for x in self.sys_backups.list_backups if x.sys_type == BackupType.FULL
]
if MINIMUM_FULL_BACKUPS >= len(full_backups):
if len(full_backups) <= MINIMUM_FULL_BACKUPS:
return
_LOGGER.info("Starting removal of old full backups")

View File

@@ -1,5 +1,4 @@
"""
Helper to notify Core about issues.
"""Helper to notify Core about issues.
This helper creates a persistent notification in the Core UI.
In the future we want to remove this in favour of a "center" in the UI.

View File

@@ -207,6 +207,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
await self.data.update()
self._read_addons()
@Job(name="store_manager_update_repositories")
async def update_repositories(
self,
list_repositories: list[str],

View File

@@ -1,5 +1,6 @@
"""Init file for Supervisor add-on data."""
from dataclasses import dataclass
import errno
import logging
from pathlib import Path
from typing import Any
@@ -13,14 +14,17 @@ from ..const import (
ATTR_REPOSITORY,
ATTR_SLUG,
ATTR_TRANSLATIONS,
ATTR_VERSION,
ATTR_VERSION_TIMESTAMP,
FILE_SUFFIX_CONFIGURATION,
REPOSITORY_CORE,
REPOSITORY_LOCAL,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ConfigurationFileError
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.common import find_one_filetype, read_json_or_yaml_file
from ..utils.dt import utcnow
from ..utils.json import read_json_file
from .const import StoreType
from .utils import extract_hash_from_path
@@ -134,6 +138,19 @@ class StoreData(CoreSysAttributes):
repositories[repo.slug] = repo.config
addons.update(await self._read_addons_folder(repo.path, repo.slug))
# Add a timestamp when we first see a new version
for slug, config in addons.items():
old_config = self.addons.get(slug)
if (
not old_config
or ATTR_VERSION_TIMESTAMP not in old_config
or old_config.get(ATTR_VERSION) != config.get(ATTR_VERSION)
):
config[ATTR_VERSION_TIMESTAMP] = utcnow().timestamp()
else:
config[ATTR_VERSION_TIMESTAMP] = old_config[ATTR_VERSION_TIMESTAMP]
self.repositories = repositories
self.addons = addons
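The loop above stamps each add-on config with the time its current version was first seen, carrying the old timestamp forward when the version is unchanged. A sketch of that carry-forward logic with plain dict keys in place of the `ATTR_*` constants:

```python
from datetime import datetime, timezone
from typing import Any


def stamp_versions(
    old: dict[str, dict[str, Any]], new: dict[str, dict[str, Any]]
) -> None:
    """Record when each add-on version was first seen."""
    now = datetime.now(timezone.utc).timestamp()
    for slug, config in new.items():
        previous = old.get(slug)
        if (
            not previous
            or "version_timestamp" not in previous
            or previous.get("version") != config.get("version")
        ):
            config["version_timestamp"] = now  # new add-on or new version
        else:
            config["version_timestamp"] = previous["version_timestamp"]


old = {"ssh": {"version": "1.0", "version_timestamp": 100.0}}
new = {"ssh": {"version": "1.1"}, "mqtt": {"version": "2.0"}}
stamp_versions(old, new)
print(new["ssh"]["version_timestamp"] != 100.0)  # True: version changed
```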
@@ -157,7 +174,9 @@ class StoreData(CoreSysAttributes):
addon_list = await self.sys_run_in_executor(_get_addons_list)
except OSError as err:
suggestion = None
if path.stem != StoreType.LOCAL:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
elif path.stem != StoreType.LOCAL:
suggestion = [SuggestionType.EXECUTE_RESET]
self.sys_resolution.create_issue(
IssueType.CORRUPT_REPOSITORY,

View File

@@ -2,6 +2,7 @@
from collections.abc import Awaitable
from contextlib import suppress
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
from pathlib import Path
@@ -27,7 +28,7 @@ from .exceptions import (
)
from .jobs.const import JobCondition, JobExecutionLimit
from .jobs.decorator import Job
from .resolution.const import ContextType, IssueType
from .resolution.const import ContextType, IssueType, UnhealthyReason
from .utils.codenotary import calc_checksum
from .utils.sentry import capture_exception
@@ -155,6 +156,8 @@ class Supervisor(CoreSysAttributes):
try:
profile_file.write_text(data, encoding="utf-8")
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
raise SupervisorAppArmorError(
f"Can't write temporary profile: {err!s}", _LOGGER.error
) from err

View File

@@ -9,11 +9,10 @@ from pathlib import Path
import shlex
from typing import Final
import async_timeout
from dirhash import dirhash
from . import clean_env
from ..exceptions import CodeNotaryBackendError, CodeNotaryError, CodeNotaryUntrusted
from . import clean_env
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -67,7 +66,7 @@ async def cas_validate(
env=clean_env(),
)
async with async_timeout.timeout(15):
async with asyncio.timeout(15):
data, error = await proc.communicate()
except TimeoutError:
raise CodeNotaryBackendError(

View File

@@ -1,4 +1,5 @@
"""Common utils."""
from contextlib import suppress
import logging
from pathlib import Path
from typing import Any
@@ -101,7 +102,5 @@ class FileConfiguration:
self.read_data()
else:
# write
try:
with suppress(ConfigurationFileError):
write_json_or_yaml_file(self._file, self._data)
except ConfigurationFileError:
pass

View File

@@ -110,7 +110,7 @@ class DBus:
)
return await getattr(proxy_interface, method)(*args)
except DBusFastDBusError as err:
raise DBus.from_dbus_error(err)
raise DBus.from_dbus_error(err) from None
except Exception as err: # pylint: disable=broad-except
capture_exception(err)
raise DBusFatalError(str(err)) from err
@@ -134,7 +134,7 @@ class DBus:
f"Can't parse introspect data: {err}", _LOGGER.error
) from err
except DBusFastDBusError as err:
raise DBus.from_dbus_error(err)
raise DBus.from_dbus_error(err) from None
except (EOFError, TimeoutError):
_LOGGER.warning(
"Busy system at %s - %s", self.bus_name, self.object_path

View File

@@ -1,15 +1,12 @@
"""Tools file for Supervisor."""
from contextlib import suppress
from datetime import datetime, timedelta, timezone, tzinfo
from datetime import UTC, datetime, timedelta, timezone, tzinfo
import re
from typing import Any
import zoneinfo
import ciso8601
UTC = timezone.utc
# Copyright (c) Django Software Foundation and individual contributors.
# All rights reserved.
# https://github.com/django/django/blob/master/LICENSE
@@ -67,7 +64,7 @@ def utcnow() -> datetime:
def utc_from_timestamp(timestamp: float) -> datetime:
"""Return a UTC time from a timestamp."""
return datetime.utcfromtimestamp(timestamp).replace(tzinfo=UTC)
return datetime.fromtimestamp(timestamp, UTC).replace(tzinfo=UTC)
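`datetime.utcfromtimestamp` returns a naive datetime and is deprecated as of Python 3.12; the replacement passes the timezone directly, using the `UTC` alias added to `datetime` in Python 3.11:

```python
from datetime import UTC, datetime

# Deprecated since Python 3.12, returns a naive datetime:
#   datetime.utcfromtimestamp(0)
# Timezone-aware replacement:
print(datetime.fromtimestamp(0, UTC).isoformat())  # 1970-01-01T00:00:00+00:00
```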
def get_time_zone(time_zone_str: str) -> tzinfo | None:

View File

@@ -1,40 +1,63 @@
"""Tools file for Supervisor."""
from datetime import datetime
import json
from functools import partial
import logging
from pathlib import Path
from typing import Any
from typing import TYPE_CHECKING, Any
from atomicwrites import atomic_write
import orjson
from ..exceptions import JsonFileError
_LOGGER: logging.Logger = logging.getLogger(__name__)
class JSONEncoder(json.JSONEncoder):
"""JSONEncoder that supports Supervisor objects."""
def json_dumps(data: Any) -> str:
"""Dump json string."""
return json_bytes(data).decode("utf-8")
def default(self, o: Any) -> Any:
"""Convert Supervisor special objects.
Hand other objects to the original method.
"""
if isinstance(o, datetime):
return o.isoformat()
if isinstance(o, set):
return list(o)
if isinstance(o, Path):
return o.as_posix()
def json_encoder_default(obj: Any) -> Any:
"""Convert Supervisor special objects."""
if isinstance(obj, (set, tuple)):
return list(obj)
if isinstance(obj, float):
return float(obj)
if isinstance(obj, Path):
return obj.as_posix()
raise TypeError
return super().default(o)
if TYPE_CHECKING:
def json_bytes(obj: Any) -> bytes:
"""Dump json bytes."""
else:
json_bytes = partial(
orjson.dumps, # pylint: disable=no-member
option=orjson.OPT_NON_STR_KEYS, # pylint: disable=no-member
default=json_encoder_default,
)
"""Dump json bytes."""
# pylint - https://github.com/ijl/orjson/issues/248
json_loads = orjson.loads # pylint: disable=no-member
def write_json_file(jsonfile: Path, data: Any) -> None:
"""Write a JSON file."""
try:
with atomic_write(jsonfile, overwrite=True) as fp:
fp.write(json.dumps(data, indent=2, cls=JSONEncoder))
fp.write(
orjson.dumps( # pylint: disable=no-member
data,
option=orjson.OPT_INDENT_2 # pylint: disable=no-member
| orjson.OPT_NON_STR_KEYS, # pylint: disable=no-member
default=json_encoder_default,
).decode("utf-8")
)
jsonfile.chmod(0o600)
except (OSError, ValueError, TypeError) as err:
raise JsonFileError(
@@ -45,7 +68,7 @@ def write_json_file(jsonfile: Path, data: Any) -> None:
def read_json_file(jsonfile: Path) -> Any:
"""Read a JSON file and return a dict."""
try:
return json.loads(jsonfile.read_text())
return json_loads(jsonfile.read_bytes())
except (OSError, ValueError, TypeError, UnicodeDecodeError) as err:
raise JsonFileError(
f"Can't read json from {jsonfile!s}: {err!s}", _LOGGER.error

View File

@@ -8,7 +8,7 @@ from yaml import YAMLError, dump, load
try:
from yaml import CDumper as Dumper, CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader, Dumper
from yaml import Dumper, SafeLoader
from ..exceptions import YamlFileError

View File

@@ -2,9 +2,11 @@
import asyncio
from datetime import timedelta
import errno
from pathlib import Path
from unittest.mock import MagicMock, PropertyMock, patch
from awesomeversion import AwesomeVersion
from docker.errors import DockerException, NotFound
import pytest
from securetar import SecureTarFile
@@ -696,3 +698,53 @@ async def test_local_example_ingress_port_set(
await install_addon_example.load()
assert install_addon_example.ingress_port != 0
def test_addon_pulse_error(
coresys: CoreSys,
install_addon_example: Addon,
caplog: pytest.LogCaptureFixture,
tmp_supervisor_data,
):
"""Test error writing pulse config for addon."""
with patch(
"supervisor.addons.addon.Path.write_text", side_effect=(err := OSError())
):
err.errno = errno.EBUSY
install_addon_example.write_pulse()
assert "can't write pulse/client.config" in caplog.text
assert coresys.core.healthy is True
caplog.clear()
err.errno = errno.EBADMSG
install_addon_example.write_pulse()
assert "can't write pulse/client.config" in caplog.text
assert coresys.core.healthy is False
def test_auto_update_available(coresys: CoreSys, install_addon_example: Addon):
"""Test auto update availability based on versions."""
assert install_addon_example.auto_update is False
assert install_addon_example.need_update is False
assert install_addon_example.auto_update_available is False
with patch.object(
Addon, "version", new=PropertyMock(return_value=AwesomeVersion("1.0"))
):
assert install_addon_example.need_update is True
assert install_addon_example.auto_update_available is False
install_addon_example.auto_update = True
assert install_addon_example.auto_update_available is True
with patch.object(
Addon, "version", new=PropertyMock(return_value=AwesomeVersion("0.9"))
):
assert install_addon_example.auto_update_available is False
with patch.object(
Addon, "version", new=PropertyMock(return_value=AwesomeVersion("test"))
):
assert install_addon_example.auto_update_available is False

View File

@@ -68,7 +68,7 @@ async def test_image_added_removed_on_update(
await coresys.addons.update(TEST_ADDON_SLUG)
build.assert_not_called()
install.assert_called_once_with(
AwesomeVersion("10.0.0"), "test/amd64-my-ssh-addon", False, None
AwesomeVersion("10.0.0"), "test/amd64-my-ssh-addon", False, "amd64"
)
assert install_addon_ssh.need_update is False

View File

@@ -110,7 +110,6 @@ async def test_bad_requests(
fail_on_query_string,
api_system,
caplog: pytest.LogCaptureFixture,
event_loop: asyncio.BaseEventLoop,
) -> None:
"""Test request paths that should be filtered."""
@@ -122,7 +121,7 @@ async def test_bad_requests(
man_params = ""
http = urllib3.PoolManager()
resp = await event_loop.run_in_executor(
resp = await asyncio.get_running_loop().run_in_executor(
None,
http.request,
"GET",

View File

@@ -96,6 +96,7 @@ async def test_api_addon_start_healthcheck(
assert install_addon_ssh.state == AddonState.STOPPED
state_changes: list[AddonState] = []
_container_events_task: asyncio.Task | None = None
async def container_events():
nonlocal state_changes
@@ -111,7 +112,8 @@ async def test_api_addon_start_healthcheck(
)
async def container_events_task(*args, **kwargs):
asyncio.create_task(container_events())
nonlocal _container_events_task
_container_events_task = asyncio.create_task(container_events())
with patch.object(DockerAddon, "run", new=container_events_task):
resp = await api_client.post("/addons/local_ssh/start")
@@ -137,6 +139,7 @@ async def test_api_addon_restart_healthcheck(
assert install_addon_ssh.state == AddonState.STOPPED
state_changes: list[AddonState] = []
_container_events_task: asyncio.Task | None = None
async def container_events():
nonlocal state_changes
@@ -152,7 +155,8 @@ async def test_api_addon_restart_healthcheck(
)
async def container_events_task(*args, **kwargs):
asyncio.create_task(container_events())
nonlocal _container_events_task
_container_events_task = asyncio.create_task(container_events())
with patch.object(DockerAddon, "run", new=container_events_task):
resp = await api_client.post("/addons/local_ssh/restart")
@@ -180,6 +184,7 @@ async def test_api_addon_rebuild_healthcheck(
assert install_addon_ssh.state == AddonState.STARTUP
state_changes: list[AddonState] = []
_container_events_task: asyncio.Task | None = None
async def container_events():
nonlocal state_changes
@@ -200,7 +205,8 @@ async def test_api_addon_rebuild_healthcheck(
)
async def container_events_task(*args, **kwargs):
asyncio.create_task(container_events())
nonlocal _container_events_task
_container_events_task = asyncio.create_task(container_events())
with patch.object(
AddonBuild, "is_valid", new=PropertyMock(return_value=True)
@@ -208,9 +214,7 @@ async def test_api_addon_rebuild_healthcheck(
Addon, "need_build", new=PropertyMock(return_value=True)
), patch.object(
CpuArch, "supported", new=PropertyMock(return_value=["amd64"])
), patch.object(
DockerAddon, "run", new=container_events_task
):
), patch.object(DockerAddon, "run", new=container_events_task):
resp = await api_client.post("/addons/local_ssh/rebuild")
assert state_changes == [AddonState.STOPPED, AddonState.STARTUP]

View File

@@ -2,17 +2,22 @@
import asyncio
from pathlib import Path, PurePath
from unittest.mock import ANY, AsyncMock, patch
from typing import Any
from unittest.mock import ANY, AsyncMock, PropertyMock, patch
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
import pytest
from supervisor.addons.addon import Addon
from supervisor.backups.backup import Backup
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import AddonsError, HomeAssistantBackupError
from supervisor.homeassistant.core import HomeAssistantCore
from supervisor.homeassistant.module import HomeAssistant
from supervisor.mounts.mount import Mount
from supervisor.supervisor import Supervisor
async def test_info(api_client, coresys: CoreSys, mock_full_backup: Backup):
@@ -66,6 +71,7 @@ async def test_backup_to_location(
tmp_supervisor_data: Path,
path_extern,
mount_propagation,
mock_is_mount,
):
"""Test making a backup to a specific location with default mount."""
await coresys.mounts.load()
@@ -110,6 +116,7 @@ async def test_backup_to_default(
tmp_supervisor_data,
path_extern,
mount_propagation,
mock_is_mount,
):
"""Test making backup to default mount."""
await coresys.mounts.load()
@@ -199,3 +206,256 @@ async def test_api_backup_exclude_database(
backup.assert_awaited_once_with(ANY, True)
assert resp.status == 200
async def _get_job_info(api_client: TestClient, job_id: str) -> dict[str, Any]:
"""Test background job progress and block until it is done."""
resp = await api_client.get(f"/jobs/{job_id}")
assert resp.status == 200
result = await resp.json()
return result["data"]
@pytest.mark.parametrize(
"backup_type,options",
[
("full", {}),
(
"partial",
{
"homeassistant": True,
"folders": ["addons/local", "media", "share", "ssl"],
},
),
],
)
async def test_api_backup_restore_background(
api_client: TestClient,
coresys: CoreSys,
backup_type: str,
options: dict[str, Any],
tmp_supervisor_data: Path,
path_extern,
):
"""Test background option on backup/restore APIs."""
coresys.core.state = CoreState.RUNNING
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.homeassistant.version = AwesomeVersion("2023.09.0")
(tmp_supervisor_data / "addons/local").mkdir(parents=True)
assert coresys.jobs.jobs == []
resp = await api_client.post(
f"/backups/new/{backup_type}",
json={"background": True, "name": f"{backup_type} backup"} | options,
)
assert resp.status == 200
result = await resp.json()
job_id = result["data"]["job_id"]
assert (await _get_job_info(api_client, job_id))["done"] is False
while not (job := (await _get_job_info(api_client, job_id)))["done"]:
await asyncio.sleep(0)
assert job["name"] == f"backup_manager_{backup_type}_backup"
assert (backup_slug := job["reference"])
assert job["child_jobs"][0]["name"] == "backup_store_homeassistant"
assert job["child_jobs"][0]["reference"] == backup_slug
assert job["child_jobs"][1]["name"] == "backup_store_folders"
assert job["child_jobs"][1]["reference"] == backup_slug
assert {j["reference"] for j in job["child_jobs"][1]["child_jobs"]} == {
"addons/local",
"media",
"share",
"ssl",
}
with patch.object(HomeAssistantCore, "start"):
resp = await api_client.post(
f"/backups/{backup_slug}/restore/{backup_type}",
json={"background": True} | options,
)
assert resp.status == 200
result = await resp.json()
job_id = result["data"]["job_id"]
assert (await _get_job_info(api_client, job_id))["done"] is False
while not (job := (await _get_job_info(api_client, job_id)))["done"]:
await asyncio.sleep(0)
assert job["name"] == f"backup_manager_{backup_type}_restore"
assert job["reference"] == backup_slug
assert job["child_jobs"][0]["name"] == "backup_restore_folders"
assert job["child_jobs"][0]["reference"] == backup_slug
assert {j["reference"] for j in job["child_jobs"][0]["child_jobs"]} == {
"addons/local",
"media",
"share",
"ssl",
}
assert job["child_jobs"][1]["name"] == "backup_restore_homeassistant"
assert job["child_jobs"][1]["reference"] == backup_slug
if backup_type == "full":
assert job["child_jobs"][2]["name"] == "backup_remove_delta_addons"
assert job["child_jobs"][2]["reference"] == backup_slug
@pytest.mark.parametrize(
"backup_type,options",
[
("full", {}),
(
"partial",
{
"homeassistant": True,
"folders": ["addons/local", "media", "share", "ssl"],
"addons": ["local_ssh"],
},
),
],
)
async def test_api_backup_errors(
api_client: TestClient,
coresys: CoreSys,
backup_type: str,
options: dict[str, Any],
tmp_supervisor_data: Path,
install_addon_ssh,
path_extern,
):
"""Test error reporting in backup job."""
coresys.core.state = CoreState.RUNNING
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
coresys.homeassistant.version = AwesomeVersion("2023.09.0")
(tmp_supervisor_data / "addons/local").mkdir(parents=True)
assert coresys.jobs.jobs == []
with patch.object(Addon, "backup", side_effect=AddonsError("Backup error")):
resp = await api_client.post(
f"/backups/new/{backup_type}",
json={"name": f"{backup_type} backup"} | options,
)
assert resp.status == 200
result = await resp.json()
job_id = result["data"]["job_id"]
slug = result["data"]["slug"]
job = await _get_job_info(api_client, job_id)
assert job["name"] == f"backup_manager_{backup_type}_backup"
assert job["done"] is True
assert job["reference"] == slug
assert job["errors"] == []
assert job["child_jobs"][0]["name"] == "backup_store_addons"
assert job["child_jobs"][0]["reference"] == slug
assert job["child_jobs"][0]["child_jobs"][0]["name"] == "backup_addon_save"
assert job["child_jobs"][0]["child_jobs"][0]["reference"] == "local_ssh"
assert job["child_jobs"][0]["child_jobs"][0]["errors"] == [
{"type": "BackupError", "message": "Can't create backup for local_ssh"}
]
assert job["child_jobs"][1]["name"] == "backup_store_homeassistant"
assert job["child_jobs"][1]["reference"] == slug
assert job["child_jobs"][2]["name"] == "backup_store_folders"
assert job["child_jobs"][2]["reference"] == slug
assert {j["reference"] for j in job["child_jobs"][2]["child_jobs"]} == {
"addons/local",
"media",
"share",
"ssl",
}
with patch.object(
HomeAssistant, "backup", side_effect=HomeAssistantBackupError("Backup error")
), patch.object(Addon, "backup"):
resp = await api_client.post(
f"/backups/new/{backup_type}",
json={"name": f"{backup_type} backup"} | options,
)
assert resp.status == 400
result = await resp.json()
job_id = result["job_id"]
job = await _get_job_info(api_client, job_id)
assert job["name"] == f"backup_manager_{backup_type}_backup"
assert job["done"] is True
assert job["errors"] == (
err := [{"type": "HomeAssistantBackupError", "message": "Backup error"}]
)
assert job["child_jobs"][0]["name"] == "backup_store_addons"
assert job["child_jobs"][1]["name"] == "backup_store_homeassistant"
assert job["child_jobs"][1]["errors"] == err
assert len(job["child_jobs"]) == 2
async def test_backup_immediate_errors(api_client: TestClient, coresys: CoreSys):
"""Test backup errors that return immediately even in background mode."""
coresys.core.state = CoreState.FREEZE
resp = await api_client.post(
"/backups/new/full",
json={"name": "Test", "background": True},
)
assert resp.status == 400
assert "freeze" in (await resp.json())["message"]
coresys.core.state = CoreState.RUNNING
coresys.hardware.disk.get_disk_free_space = lambda x: 0.5
resp = await api_client.post(
"/backups/new/partial",
json={"name": "Test", "homeassistant": True, "background": True},
)
assert resp.status == 400
assert "not enough free space" in (await resp.json())["message"]
async def test_restore_immediate_errors(
request: pytest.FixtureRequest,
api_client: TestClient,
coresys: CoreSys,
mock_partial_backup: Backup,
):
"""Test restore errors that return immediately even in background mode."""
coresys.core.state = CoreState.RUNNING
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
resp = await api_client.post(
f"/backups/{mock_partial_backup.slug}/restore/full", json={"background": True}
)
assert resp.status == 400
assert "only a partial backup" in (await resp.json())["message"]
with patch.object(
Backup,
"supervisor_version",
new=PropertyMock(return_value=AwesomeVersion("2024.01.0")),
), patch.object(
Supervisor,
"version",
new=PropertyMock(return_value=AwesomeVersion("2023.12.0")),
):
resp = await api_client.post(
f"/backups/{mock_partial_backup.slug}/restore/partial",
json={"background": True, "homeassistant": True},
)
assert resp.status == 400
assert "Must update supervisor" in (await resp.json())["message"]
with patch.object(
Backup, "protected", new=PropertyMock(return_value=True)
), patch.object(Backup, "set_password", return_value=False):
resp = await api_client.post(
f"/backups/{mock_partial_backup.slug}/restore/partial",
json={"background": True, "homeassistant": True},
)
assert resp.status == 400
assert "Invalid password" in (await resp.json())["message"]
with patch.object(Backup, "homeassistant", new=PropertyMock(return_value=None)):
resp = await api_client.post(
f"/backups/{mock_partial_backup.slug}/restore/partial",
json={"background": True, "homeassistant": True},
)
assert resp.status == 400
assert "No Home Assistant" in (await resp.json())["message"]

View File

@@ -93,8 +93,8 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
await self.event.wait()
test = TestClass(coresys)
asyncio.create_task(test.test_jobs_tree_outer())
asyncio.create_task(test.test_jobs_tree_alt())
outer_task = asyncio.create_task(test.test_jobs_tree_outer())
alt_task = asyncio.create_task(test.test_jobs_tree_alt())
await asyncio.sleep(0)
resp = await api_client.get("/jobs/info")
@@ -107,6 +107,7 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
"progress": 50,
"stage": None,
"done": False,
"errors": [],
"child_jobs": [
{
"name": "test_jobs_tree_inner",
@@ -116,6 +117,7 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
"stage": None,
"done": False,
"child_jobs": [],
"errors": [],
},
],
},
@@ -127,6 +129,7 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
"stage": "init",
"done": False,
"child_jobs": [],
"errors": [],
},
]
@@ -144,5 +147,72 @@ async def test_jobs_tree_representation(api_client: TestClient, coresys: CoreSys
"stage": "end",
"done": True,
"child_jobs": [],
"errors": [],
},
]
await outer_task
await alt_task
async def test_job_manual_cleanup(api_client: TestClient, coresys: CoreSys):
"""Test manually cleaning up a job via API."""
class TestClass:
"""Test class."""
def __init__(self, coresys: CoreSys):
"""Initialize the test class."""
self.coresys = coresys
self.event = asyncio.Event()
self.job_id: str | None = None
@Job(name="test_job_manual_cleanup", cleanup=False)
async def test_job_manual_cleanup(self) -> None:
"""Job that requires manual cleanup."""
self.job_id = coresys.jobs.current.uuid
await self.event.wait()
test = TestClass(coresys)
task = asyncio.create_task(test.test_job_manual_cleanup())
await asyncio.sleep(0)
# Check the job details
resp = await api_client.get(f"/jobs/{test.job_id}")
assert resp.status == 200
result = await resp.json()
assert result["data"] == {
"name": "test_job_manual_cleanup",
"reference": None,
"uuid": test.job_id,
"progress": 0,
"stage": None,
"done": False,
"child_jobs": [],
"errors": [],
}
# Only done jobs can be deleted via API
resp = await api_client.delete(f"/jobs/{test.job_id}")
assert resp.status == 400
result = await resp.json()
assert result["message"] == f"Job {test.job_id} is not done!"
# Let the job finish
test.event.set()
await task
# Check that it is now done
resp = await api_client.get(f"/jobs/{test.job_id}")
assert resp.status == 200
result = await resp.json()
assert result["data"]["done"] is True
# Delete it
resp = await api_client.delete(f"/jobs/{test.job_id}")
assert resp.status == 200
# Confirm it no longer exists
resp = await api_client.get(f"/jobs/{test.job_id}")
assert resp.status == 400
result = await resp.json()
assert result["message"] == f"No job found with id {test.job_id}"

Some files were not shown because too many files have changed in this diff.