Compare commits

...

16 Commits

Author SHA1 Message Date
Mike Degatano
f32c50818d Adjust existing tests and remove new ones for not found errors 2026-02-19 22:34:44 +00:00
Mike Degatano
597c1949ad Fix lint, test and feedback issues 2026-02-19 22:20:44 +00:00
Mike Degatano
ee84354714 Ensure uuid of dismissed suggestion/issue matches an existing one 2026-02-19 21:39:49 +00:00
Mike Degatano
4a1c816b92 Finish dockerpy to aiodocker migration (#6578) 2026-02-18 08:49:15 +01:00
Mike Degatano
b70f44bf1f Bump aiodocker from 0.25.0 to 0.26.0 (#6577) 2026-02-17 14:26:01 -05:00
Stefan Agner
c981b3b4c2 Extend and improve release drafter config (#6576)
* Extend and improve release drafter config

Extend the release drafter config with more types (labels) and order
them by priority. Inspired by conventional commits, in particular
the list (and its ordering) documented at:
https://github.com/pvdlg/conventional-changelog-metahub?tab=readme-ov-file#commit-types

Additionally, we kept the existing "breaking-change" and "dependencies" labels.

* Add revert to the list of labels
2026-02-17 19:32:25 +01:00
Stefan Agner
f2d0ceab33 Add missing WIFI_P2P device type to NetworkManager enum (#6574)
Add the missing WIFI_P2P (30) entry to the DeviceType NetworkManager
enum. Without it, systems with a Wi-Fi P2P interface log a warning:

  Unknown DeviceType value received from D-Bus: 30

Closes #6573

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 13:13:02 -05:00
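
A minimal illustration of the warning behaviour described in the commit above: when an integer reported over D-Bus has no matching enum member, the lookup falls back to a default and logs the quoted warning, and adding WIFI_P2P = 30 makes the lookup succeed. The enum subset and the _missing_ hook below are illustrative only, not the actual Supervisor DBusIntEnum implementation.

import logging
from enum import IntEnum

_LOGGER = logging.getLogger(__name__)

class DeviceType(IntEnum):
    """Subset of NetworkManager device types (illustrative)."""

    UNKNOWN = 0
    ETHERNET = 1
    WIRELESS = 2
    WIREGUARD = 29
    WIFI_P2P = 30  # the entry this commit adds
    LOOPBACK = 32

    @classmethod
    def _missing_(cls, value: object) -> "DeviceType":
        # Values reported by D-Bus that the enum does not know about fall
        # back to UNKNOWN and produce the warning quoted in the commit.
        _LOGGER.warning("Unknown DeviceType value received from D-Bus: %s", value)
        return cls.UNKNOWN

# Before the fix, DeviceType(30) hit _missing_ and logged the warning;
# with WIFI_P2P = 30 present it resolves silently.
print(DeviceType(30))  # DeviceType.WIFI_P2P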
Stefan Agner
3147d080a2 Unify Core user handling with HomeAssistantUser model (#6558)
* Unify Core user listing with HomeAssistantUser model

Replace the ingress-specific IngressSessionDataUser with a general
HomeAssistantUser dataclass that models the Core config/auth/list WS
response. This deduplicates the WS call (previously in both auth.py
and module.py) into a single HomeAssistant.list_users() method.

- Add HomeAssistantUser dataclass with fields matching Core's user API
- Remove get_users() and its unnecessary 5-minute Job throttle
- Auth and ingress consumers both use HomeAssistant.list_users()
- Auth API endpoint uses typed attribute access instead of dict keys
- Migrate session serialization from legacy "displayname" to "name"
- Accept both keys in schema/deserialization for backwards compat
- Add test for loading persisted sessions with legacy displayname key

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Tighten list_users() to trust Core's auth/list contract

Core's config/auth/list WS command always returns a list, never None.
Replace the silent `if not raw: return []` (which also swallowed empty
lists) with an assert, remove the dead AuthListUsersNoneResponseError
exception class, and document the HomeAssistantWSError contract in the
docstring.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove | None from async_send_command return type

The WebSocket result is always set from data["result"] in _receive_json,
never explicitly to None. Remove the misleading | None from the return
type of both WSClient and HomeAssistantWebSocket async_send_command, and
drop the now-unnecessary assert in list_users.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use HomeAssistantWSConnectionError in _ensure_connected

_ensure_connected and connect_with_auth raise on connection-level
failures, so use the more specific HomeAssistantWSConnectionError
instead of the broad HomeAssistantWSError. This allows callers to
distinguish connection errors from Core API errors (e.g. unsuccessful
WebSocket command responses). Also document that _ensure_connected can
propagate HomeAssistantAuthError from ensure_access_token.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove user list cache from _find_user_by_id

Drop the _list_of_users cache to avoid stale auth data in ingress
session creation. The method now fetches users fresh each time and
returns None on any API error instead of serving potentially outdated
cached results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 18:31:08 +01:00
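
A short consumer sketch of the API described in this changeset: callers receive typed HomeAssistantUser objects from list_users() and can distinguish connection failures from other WebSocket errors. Exception and attribute names come from the diffs further down; the find_owner_username helper and its wiring are hypothetical.

import logging

from supervisor.exceptions import (
    HomeAssistantWSConnectionError,  # connection-level failures (catch first)
    HomeAssistantWSError,            # broader WebSocket/Core API failures
)

_LOGGER = logging.getLogger(__name__)

async def find_owner_username(homeassistant) -> str | None:
    """Return the username of the owner user, or None if unavailable."""
    try:
        users = await homeassistant.list_users()
    except HomeAssistantWSConnectionError as err:
        # Core's WebSocket is not reachable (e.g. during startup or shutdown)
        _LOGGER.warning("Could not reach Core WebSocket: %s", err)
        return None
    except HomeAssistantWSError as err:
        # Any other WebSocket/Core API failure while listing users
        _LOGGER.warning("Listing users failed: %s", err)
        return None

    # Typed attribute access on HomeAssistantUser instead of dict keys
    return next((user.username for user in users if user.is_owner), None)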
dependabot[bot]
09a4e9d5a2 Bump actions/stale from 10.1.1 to 10.2.0 (#6571)
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.1 to 10.2.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](997185467f...b5d41d4e1d)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:36:56 +01:00
dependabot[bot]
d93e728918 Bump sentry-sdk from 2.52.0 to 2.53.0 (#6572)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 2.52.0 to 2.53.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/2.52.0...2.53.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-version: 2.53.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-17 10:34:12 +01:00
c0ffeeca7
27c6af4b4b App store strings: rename add-on to app (#6569) 2026-02-16 09:20:53 +01:00
Stefan Agner
00f2578d61 Add missing BRIDGE device type to NetworkManager enum (#6567)
NMDeviceType 13 (NM_DEVICE_TYPE_BRIDGE) was not listed in the
DeviceType enum, causing a warning when NetworkManager reported
a bridge interface.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 10:25:15 -05:00
Stefan Agner
50e6c88237 Add periodic progress logging during initial Core installation (#6562)
* Add periodic progress logging during initial Core installation

Log installation progress every 15 seconds while downloading the
Home Assistant Core image during initial setup (landing page to core
transition). Uses asyncio.Event with wait_for timeout to produce
time-based logs independent of Docker pull events, ensuring visibility
even when the network stalls.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add test coverage

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2026-02-13 14:17:35 +01:00
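
A self-contained sketch of the time-based logging pattern this commit describes, using asyncio.Event plus wait_for with a 15 second timeout. The do_install and get_progress callables are placeholders for the Docker update call and the job progress lookup; the real Supervisor code is in the homeassistant/core.py diff further down.

import asyncio
import logging
from collections.abc import Awaitable, Callable

_LOGGER = logging.getLogger(__name__)

async def _periodic_progress_log(
    stop: asyncio.Event, get_progress: Callable[[], float | None]
) -> None:
    """Log progress every 15 seconds until the stop event is set."""
    while not stop.is_set():
        try:
            # Wake up early when installation finishes, otherwise every 15s
            await asyncio.wait_for(stop.wait(), timeout=15)
        except asyncio.TimeoutError:
            if progress := get_progress():
                _LOGGER.info("Downloading image, %d%%", int(progress))
            else:
                _LOGGER.info("Installation in progress")

async def install_with_progress_log(
    do_install: Callable[[], Awaitable[None]],
    get_progress: Callable[[], float | None],
) -> None:
    """Run the (placeholder) install while logging progress periodically."""
    stop = asyncio.Event()
    logger_task = asyncio.create_task(_periodic_progress_log(stop, get_progress))
    try:
        await do_install()
    finally:
        stop.set()  # wake the logging task so it exits promptly
        await logger_task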
dependabot[bot]
0cce2dad3c Bump ruff from 0.15.0 to 0.15.1 (#6565)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.15.0 to 0.15.1.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.0...0.15.1)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.15.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-13 08:59:59 +01:00
Stefan Agner
8dd42cb7a0 Fix getting Supervisor IP address in testing (#6564)
* Fix getting Supervisor IP address in testing

Newer Docker versions (probably newer than 29.x) no longer expose a global
IPAddress attribute under .NetworkSettings. Instead there is a network-specific
map under Networks; in our case the hassio network holds the relevant
IP address. This network-specific map already existed before, so the new
inspect format works for both old and new Docker versions.

While at it, also adjust the test fixture.

* Actively wait for hassio IPAddress to become valid
2026-02-13 08:12:19 +01:00
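
A small sketch of the inspect-output handling this fix relies on: read the per-network IP from NetworkSettings.Networks (present on both old and new Docker) and only fall back to the legacy top-level IPAddress field. The helper name and the plain-dict input are assumptions for illustration.

from typing import Any

def get_container_ip(inspect_data: dict[str, Any], network: str = "hassio") -> str | None:
    """Extract a container IP address from `docker inspect` output."""
    settings = inspect_data.get("NetworkSettings", {})

    # Preferred location: per-network map, available on old and new Docker
    ip = settings.get("Networks", {}).get(network, {}).get("IPAddress")
    if ip:
        return ip

    # Legacy location: empty or absent on newer Docker versions
    return settings.get("IPAddress") or None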
Mike Degatano
590674ba7c Remove blocking I/O added to import_image (#6557)
* Remove blocking I/O added to import_image

* Add scanned modules to extra blockbuster functions

* Use same cast avoidance approach in export_image

* Remove unnecessary local image_writer variable

* Remove unnecessary local image_tar_stream variable

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
2026-02-12 17:37:15 +01:00
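
The diff below replaces a cast around run_in_executor with a lambda; here is a minimal sketch of why that works. Path.open's static return type depends on the mode string, so calling it with a literal "rb" inside the lambda lets the type checker infer BufferedReader directly, whereas passing the bound method and mode as separate run_in_executor arguments loses that overload information. The function name is illustrative.

import asyncio
from io import BufferedReader
from pathlib import Path

async def open_tar_for_read(tar_file: Path) -> BufferedReader:
    """Open a tar file for binary reading without blocking the event loop."""
    loop = asyncio.get_running_loop()

    # Passing the bound method and mode separately, e.g.
    #   await loop.run_in_executor(None, tar_file.open, "rb")
    # erases the mode-specific overload, so a typing.cast(BufferedReader, ...)
    # would be needed to recover the precise return type.

    # Calling open("rb") inside a lambda keeps the literal mode visible,
    # so BufferedReader is inferred without a cast.
    return await loop.run_in_executor(None, lambda: tar_file.open("rb"))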
35 changed files with 496 additions and 373 deletions

View File

@@ -5,45 +5,53 @@ categories:
- title: ":boom: Breaking Changes"
label: "breaking-change"
- title: ":wrench: Build"
label: "build"
- title: ":boar: Chore"
label: "chore"
- title: ":sparkles: New Features"
label: "new-feature"
- title: ":zap: Performance"
label: "performance"
- title: ":recycle: Refactor"
label: "refactor"
- title: ":green_heart: CI"
label: "ci"
- title: ":bug: Bug Fixes"
label: "bugfix"
- title: ":white_check_mark: Test"
- title: ":gem: Style"
label: "style"
- title: ":package: Refactor"
label: "refactor"
- title: ":rocket: Performance"
label: "performance"
- title: ":rotating_light: Test"
label: "test"
- title: ":hammer_and_wrench: Build"
label: "build"
- title: ":gear: CI"
label: "ci"
- title: ":recycle: Chore"
label: "chore"
- title: ":wastebasket: Revert"
label: "revert"
- title: ":arrow_up: Dependency Updates"
label: "dependencies"
collapse-after: 1
include-labels:
- "breaking-change"
- "build"
- "chore"
- "performance"
- "refactor"
- "new-feature"
- "bugfix"
- "dependencies"
- "style"
- "refactor"
- "performance"
- "test"
- "build"
- "ci"
- "chore"
- "revert"
- "dependencies"
template: |

View File

@@ -296,7 +296,11 @@ jobs:
- &wait_for_supervisor
name: Wait for Supervisor to come up
run: |
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
until SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.Networks.hassio.IPAddress}}' hassio_supervisor 2>/dev/null) && \
[ -n "$SUPERVISOR" ] && [ "$SUPERVISOR" != "<no value>" ]; do
echo "Waiting for network configuration..."
sleep 1
done
echo "Waiting for Supervisor API at http://${SUPERVISOR}/supervisor/ping"
timeout=300
elapsed=0

View File

@@ -9,7 +9,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 30

View File

@@ -1,5 +1,5 @@
aiodns==4.0.0
aiodocker==0.25.0
aiodocker==0.26.0
aiohttp==3.13.3
atomicwrites-homeassistant==1.4.1
attrs==25.4.0
@@ -14,7 +14,6 @@ cryptography==46.0.5
debugpy==1.8.20
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.46
jinja2==3.1.6
@@ -25,7 +24,7 @@ pyudev==0.24.4
PyYAML==6.0.3
requests==2.32.5
securetar==2025.12.0
sentry-sdk==2.52.0
sentry-sdk==2.53.0
setuptools==82.0.0
voluptuous==0.16.0
dbus-fast==4.0.0

View File

@@ -8,9 +8,8 @@ pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==9.0.2
ruff==0.15.0
ruff==0.15.1
time-machine==3.2.0
types-docker==7.1.0.20260109
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20260107
urllib3==2.6.3

View File

@@ -190,11 +190,10 @@ class Addon(AddonModel):
self._startup_event.set()
# Dismiss boot failed issue if present and we started
if (
new_state == AddonState.STARTED
and self.boot_failed_issue in self.sys_resolution.issues
if new_state == AddonState.STARTED and (
issue := self.sys_resolution.get_issue_if_present(self.boot_failed_issue)
):
self.sys_resolution.dismiss_issue(self.boot_failed_issue)
self.sys_resolution.dismiss_issue(issue)
# Dismiss device access missing issue if present and we stopped
if (
@@ -362,11 +361,10 @@ class Addon(AddonModel):
self.persist[ATTR_BOOT] = value
# Dismiss boot failed issue if present and boot at start disabled
if (
value == AddonBoot.MANUAL
and self._boot_failed_issue in self.sys_resolution.issues
if value == AddonBoot.MANUAL and (
issue := self.sys_resolution.get_issue_if_present(self._boot_failed_issue)
):
self.sys_resolution.dismiss_issue(self._boot_failed_issue)
self.sys_resolution.dismiss_issue(issue)
@property
def auto_update(self) -> bool:

View File

@@ -127,14 +127,14 @@ class APIAuth(CoreSysAttributes):
return {
ATTR_USERS: [
{
ATTR_USERNAME: user[ATTR_USERNAME],
ATTR_NAME: user[ATTR_NAME],
ATTR_IS_OWNER: user[ATTR_IS_OWNER],
ATTR_IS_ACTIVE: user[ATTR_IS_ACTIVE],
ATTR_LOCAL_ONLY: user[ATTR_LOCAL_ONLY],
ATTR_GROUP_IDS: user[ATTR_GROUP_IDS],
ATTR_USERNAME: user.username,
ATTR_NAME: user.name,
ATTR_IS_OWNER: user.is_owner,
ATTR_IS_ACTIVE: user.is_active,
ATTR_LOCAL_ONLY: user.local_only,
ATTR_GROUP_IDS: user.group_ids,
}
for user in await self.sys_auth.list_users()
if user[ATTR_USERNAME]
if user.username
]
}

View File

@@ -29,8 +29,8 @@ from ..const import (
HEADER_REMOTE_USER_NAME,
HEADER_TOKEN,
HEADER_TOKEN_OLD,
HomeAssistantUser,
IngressSessionData,
IngressSessionDataUser,
)
from ..coresys import CoreSysAttributes
from ..exceptions import HomeAssistantAPIError
@@ -75,12 +75,6 @@ def status_code_must_be_empty_body(code: int) -> bool:
class APIIngress(CoreSysAttributes):
"""Ingress view to handle add-on webui routing."""
_list_of_users: list[IngressSessionDataUser]
def __init__(self) -> None:
"""Initialize APIIngress."""
self._list_of_users = []
def _extract_addon(self, request: web.Request) -> Addon:
"""Return addon, throw an exception it it doesn't exist."""
token = request.match_info["token"]
@@ -306,20 +300,15 @@ class APIIngress(CoreSysAttributes):
return response
async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
async def _find_user_by_id(self, user_id: str) -> HomeAssistantUser | None:
"""Find user object by the user's ID."""
try:
list_of_users = await self.sys_homeassistant.get_users()
except (HomeAssistantAPIError, TypeError) as err:
_LOGGER.error(
"%s error occurred while requesting list of users: %s", type(err), err
)
users = await self.sys_homeassistant.list_users()
except HomeAssistantAPIError as err:
_LOGGER.warning("Could not fetch list of users: %s", err)
return None
if list_of_users is not None:
self._list_of_users = list_of_users
return next((user for user in self._list_of_users if user.id == user_id), None)
return next((user for user in users if user.id == user_id), None)
def _init_header(
@@ -332,8 +321,8 @@ def _init_header(
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
if session_data.user.username is not None:
headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
if session_data.user.display_name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
if session_data.user.name is not None:
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.name
# filter flags
for name, value in request.headers.items():

View File

@@ -19,7 +19,6 @@ from ..const import (
ATTR_UNSUPPORTED,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APINotFound, ResolutionNotFound
from ..resolution.checks.base import CheckBase
from ..resolution.data import Issue, Suggestion
from .utils import api_process, api_validate
@@ -32,24 +31,17 @@ class APIResoulution(CoreSysAttributes):
def _extract_issue(self, request: web.Request) -> Issue:
"""Extract issue from request or raise."""
try:
return self.sys_resolution.get_issue(request.match_info["issue"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid issue") from None
return self.sys_resolution.get_issue_by_id(request.match_info["issue"])
def _extract_suggestion(self, request: web.Request) -> Suggestion:
"""Extract suggestion from request or raise."""
try:
return self.sys_resolution.get_suggestion(request.match_info["suggestion"])
except ResolutionNotFound:
raise APINotFound("The supplied UUID is not a valid suggestion") from None
return self.sys_resolution.get_suggestion_by_id(
request.match_info["suggestion"]
)
def _extract_check(self, request: web.Request) -> CheckBase:
"""Extract check from request or raise."""
try:
return self.sys_resolution.check.get(request.match_info["check"])
except ResolutionNotFound:
raise APINotFound("The supplied check slug is not available") from None
return self.sys_resolution.check.get(request.match_info["check"])
def _generate_suggestion_information(self, suggestion: Suggestion):
"""Generate suggestion information for response."""

View File

@@ -6,13 +6,12 @@ import logging
from typing import Any, TypedDict, cast
from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .const import ATTR_PASSWORD, ATTR_USERNAME, FILE_HASSIO_AUTH, HomeAssistantUser
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthHomeAssistantAPIValidationError,
AuthInvalidNonStringValueError,
AuthListUsersError,
AuthListUsersNoneResponseError,
AuthPasswordResetError,
HomeAssistantAPIError,
HomeAssistantWSError,
@@ -157,22 +156,14 @@ class Auth(FileConfiguration, CoreSysAttributes):
raise AuthPasswordResetError(user=username)
async def list_users(self) -> list[dict[str, Any]]:
async def list_users(self) -> list[HomeAssistantUser]:
"""List users on the Home Assistant instance."""
try:
users: (
list[dict[str, Any]] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
return await self.sys_homeassistant.list_users()
except HomeAssistantWSError as err:
_LOGGER.error("Can't request listing users on Home Assistant: %s", err)
raise AuthListUsersError() from err
if users is not None:
return users
raise AuthListUsersNoneResponseError(_LOGGER.error)
@staticmethod
def _rehash(value: str, salt2: str = "") -> str:
"""Rehash a value."""

View File

@@ -1,11 +1,12 @@
"""Constants file for Supervisor."""
from collections.abc import Mapping
from dataclasses import dataclass
from enum import StrEnum
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import NotRequired, Self, TypedDict
from typing import Any, NotRequired, Self, TypedDict
from aiohttp import __version__ as aiohttpversion
@@ -536,60 +537,77 @@ class CpuArch(StrEnum):
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
class HomeAssistantUser:
"""A Home Assistant Core user.
Incomplete model — Core's User object has additional fields
(credentials, refresh_tokens, etc.) that are not represented here.
Only fields used by the Supervisor are included.
"""
id: str
display_name: str | None = None
username: str | None = None
def to_dict(self) -> IngressSessionDataUserDict:
"""Get dictionary representation."""
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
name: str | None = None
is_owner: bool = False
is_active: bool = False
local_only: bool = False
system_generated: bool = False
group_ids: list[str] | None = None
@classmethod
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
# "displayname" is a legacy key from old ingress session data
name=data.get("name") or data.get("displayname"),
is_owner=data.get("is_owner", False),
is_active=data.get("is_active", False),
local_only=data.get("local_only", False),
system_generated=data.get("system_generated", False),
group_ids=data.get("group_ids"),
)
class IngressSessionDataUserDict(TypedDict):
"""Serialization format for user data stored in ingress sessions.
Legacy data may contain "displayname" instead of "name".
"""
id: str
username: NotRequired[str | None]
name: NotRequired[str | None]
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
"""Serialization format for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
"""Ingress session data attached to a session token."""
user: IngressSessionDataUser
user: HomeAssistantUser
def to_dict(self) -> IngressSessionDataDict:
"""Get dictionary representation."""
return IngressSessionDataDict(user=self.user.to_dict())
return IngressSessionDataDict(
user=IngressSessionDataUserDict(
id=self.user.id,
name=self.user.name,
username=self.user.username,
)
)
@classmethod
def from_dict(cls, data: IngressSessionDataDict) -> Self:
def from_dict(cls, data: Mapping[str, Any]) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
return cls(user=HomeAssistantUser.from_dict(data["user"]))
STARTING_STATES = [

View File

@@ -306,9 +306,11 @@ class DeviceType(DBusIntEnum):
WIRELESS = 2
BLUETOOTH = 5
VLAN = 11
BRIDGE = 13
TUN = 16
VETH = 20
WIREGUARD = 29
WIFI_P2P = 30
LOOPBACK = 32

View File

@@ -874,11 +874,12 @@ class DockerAddon(DockerInterface):
await super().stop(remove_container)
# If there is a device access issue and the container is removed, clear it
if (
remove_container
and self.addon.device_access_missing_issue in self.sys_resolution.issues
if remove_container and (
issue := self.sys_resolution.get_issue_if_present(
self.addon.device_access_missing_issue
)
):
self.sys_resolution.dismiss_issue(self.addon.device_access_missing_issue)
self.sys_resolution.dismiss_issue(issue)
@Job(
name="docker_addon_hardware_events",

View File

@@ -7,9 +7,8 @@ from collections.abc import Mapping
from contextlib import suppress
from dataclasses import dataclass
import errno
from functools import partial
from http import HTTPStatus
from io import BufferedWriter
from io import BufferedReader, BufferedWriter
from ipaddress import IPv4Address
import json
import logging
@@ -25,8 +24,6 @@ from aiodocker.stream import Stream
from aiodocker.types import JSONObject
from aiohttp import ClientTimeout, UnixConnector
from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
from docker import errors as docker_errors
from docker.client import DockerClient
import requests
from ..const import (
@@ -270,8 +267,6 @@ class DockerAPI(CoreSysAttributes):
def __init__(self, coresys: CoreSys):
"""Initialize Docker base wrapper."""
self.coresys = coresys
# We keep both until we can fully refactor to aiodocker
self._dockerpy: DockerClient | None = None
self.docker: aiodocker.Docker = aiodocker.Docker(
url="unix://localhost", # dummy hostname for URL composition
connector=UnixConnector(SOCKET_DOCKER.as_posix()),
@@ -289,15 +284,6 @@ class DockerAPI(CoreSysAttributes):
async def post_init(self) -> Self:
"""Post init actions that must be done in event loop."""
self._dockerpy = await asyncio.get_running_loop().run_in_executor(
None,
partial(
DockerClient,
base_url=f"unix:/{SOCKET_DOCKER.as_posix()}",
version="auto",
timeout=900,
),
)
self._info = await DockerInfo.new(await self.docker.system.info())
await self.config.read_data()
self._network = await DockerNetwork(self.docker).post_init(
@@ -305,13 +291,6 @@ class DockerAPI(CoreSysAttributes):
)
return self
@property
def dockerpy(self) -> DockerClient:
"""Get docker API client."""
if not self._dockerpy:
raise RuntimeError("Docker API Client not initialized!")
return self._dockerpy
@property
def network(self) -> DockerNetwork:
"""Get Docker network."""
@@ -725,43 +704,40 @@ class DockerAPI(CoreSysAttributes):
async def repair(self) -> None:
"""Repair local docker overlayfs2 issues."""
def repair_docker_blocking():
_LOGGER.info("Prune stale containers")
try:
output = self.dockerpy.api.prune_containers()
_LOGGER.debug("Containers prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale containers")
try:
output = await self.docker.containers.prune()
_LOGGER.debug("Containers prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for containers prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = self.dockerpy.api.prune_images(filters={"dangling": False})
_LOGGER.debug("Images prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale images")
try:
output = await self.images.prune(filters={"dangling": "false"})
_LOGGER.debug("Images prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for images prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = self.dockerpy.api.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale builds")
try:
output = await self.images.prune_builds()
_LOGGER.debug("Builds prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for builds prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = self.dockerpy.api.prune_volumes()
_LOGGER.debug("Volumes prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale volumes")
try:
output = await self.docker.volumes.prune()
_LOGGER.debug("Volumes prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for volumes prune: %s", err)
_LOGGER.info("Prune stale networks")
try:
output = self.dockerpy.api.prune_networks()
_LOGGER.debug("Networks prune: %s", output)
except docker_errors.APIError as err:
_LOGGER.warning("Error for networks prune: %s", err)
await self.sys_run_in_executor(repair_docker_blocking)
_LOGGER.info("Prune stale networks")
try:
output = await self.docker.networks.prune()
_LOGGER.debug("Networks prune: %s", output)
except aiodocker.DockerError as err:
_LOGGER.warning("Error for networks prune: %s", err)
_LOGGER.info("Fix stale container on hassio network")
try:
@@ -1025,13 +1001,30 @@ class DockerAPI(CoreSysAttributes):
async def import_image(self, tar_file: Path) -> dict[str, Any] | None:
"""Import a tar file as image."""
image_tar_stream: BufferedReader | None = None
try:
with tar_file.open("rb") as read_tar:
resp: list[dict[str, Any]] = await self.images.import_image(read_tar)
except (aiodocker.DockerError, OSError) as err:
# Lambda avoids need for a cast here. Since return type of open is based on mode
image_tar_stream = await self.sys_run_in_executor(
lambda: tar_file.open("rb")
)
resp: list[dict[str, Any]] = await self.images.import_image(
image_tar_stream
)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't import image from tar: {err}", _LOGGER.error
) from err
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
raise DockerError(
f"Can't read tar file {tar_file}: {err}", _LOGGER.error
) from err
finally:
if image_tar_stream:
await self.sys_run_in_executor(image_tar_stream.close)
docker_image_list: list[str] = []
for chunk in resp:
@@ -1066,12 +1059,13 @@ class DockerAPI(CoreSysAttributes):
image_tar_stream: BufferedWriter | None = None
try:
image_tar_stream = image_writer = cast(
BufferedWriter, await self.sys_run_in_executor(tar_file.open, "wb")
# Lambda avoids need for a cast here. Since return type of open is based on mode
image_tar_stream = await self.sys_run_in_executor(
lambda: tar_file.open("wb")
)
async with self.images.export_image(f"{image}:{version}") as content:
async for chunk in content.iter_chunked(DEFAULT_CHUNK_SIZE):
await self.sys_run_in_executor(image_writer.write, chunk)
await self.sys_run_in_executor(image_tar_stream.write, chunk)
except aiodocker.DockerError as err:
raise DockerError(
f"Can't fetch image {image}:{version}: {err}", _LOGGER.error

View File

@@ -620,18 +620,6 @@ class AuthListUsersError(AuthError, APIUnknownSupervisorError):
message_template = "Can't request listing users on Home Assistant"
class AuthListUsersNoneResponseError(AuthError, APIInternalServerError):
"""Auth error if listing users returned invalid None response."""
error_key = "auth_list_users_none_response_error"
message_template = "Home Assistant returned invalid response of `{none}` instead of a list of users. Check Home Assistant logs for details (check with `{logs_command}`)"
extra_fields = {"none": "None", "logs_command": "ha core logs"}
def __init__(self, logger: Callable[..., None] | None = None) -> None:
"""Initialize exception."""
super().__init__(None, logger)
class AuthInvalidNonStringValueError(AuthError, APIUnauthorized):
"""Auth error if something besides a string provided as username or password."""
@@ -976,6 +964,44 @@ class ResolutionFixupJobError(ResolutionFixupError, JobException):
"""Raise on job error."""
class ResolutionCheckNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if check does not exist."""
error_key = "resolution_check_not_found_error"
message_template = "Check '{check}' does not exist"
def __init__(
self, logger: Callable[..., None] | None = None, *, check: str
) -> None:
"""Initialize exception."""
self.extra_fields = {"check": check}
super().__init__(None, logger)
class ResolutionIssueNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if issue does not exist."""
error_key = "resolution_issue_not_found_error"
message_template = "Issue {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
class ResolutionSuggestionNotFound(ResolutionNotFound, APINotFound): # pylint: disable=too-many-ancestors
"""Raise if suggestion does not exist."""
error_key = "resolution_suggestion_not_found_error"
message_template = "Suggestion {uuid} does not exist"
def __init__(self, logger: Callable[..., None] | None = None, *, uuid: str) -> None:
"""Initialize exception."""
self.extra_fields = {"uuid": uuid}
super().__init__(None, logger)
# Store

View File

@@ -182,28 +182,53 @@ class HomeAssistantCore(JobGroup):
concurrency=JobConcurrency.GROUP_REJECT,
)
async def install(self) -> None:
"""Install a landing page."""
"""Install Home Assistant Core."""
_LOGGER.info("Home Assistant setup")
while True:
# read homeassistant tag and install it
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
stop_progress_log = asyncio.Event()
if to_version := self.sys_homeassistant.latest_version:
async def _periodic_progress_log() -> None:
"""Log installation progress periodically for user visibility."""
while not stop_progress_log.is_set():
try:
await self.instance.update(
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = self.instance.version or to_version
break
except (DockerError, JobException):
pass
except Exception as err: # pylint: disable=broad-except
await async_capture_exception(err)
await asyncio.wait_for(stop_progress_log.wait(), timeout=15)
except TimeoutError:
if (job := self.instance.active_job) and job.progress:
_LOGGER.info(
"Downloading Home Assistant Core image, %d%%",
int(job.progress),
)
else:
_LOGGER.info("Home Assistant Core installation in progress")
_LOGGER.warning("Error on Home Assistant installation. Retrying in 30sec")
await asyncio.sleep(30)
progress_task = self.sys_create_task(_periodic_progress_log())
try:
while True:
# read homeassistant tag and install it
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
if to_version := self.sys_homeassistant.latest_version:
try:
await self.instance.update(
to_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = (
self.instance.version or to_version
)
break
except (DockerError, JobException):
pass
except Exception as err: # pylint: disable=broad-except
await async_capture_exception(err)
_LOGGER.warning(
"Error on Home Assistant installation. Retrying in 30sec"
)
await asyncio.sleep(30)
finally:
stop_progress_log.set()
await progress_task
_LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)

View File

@@ -1,7 +1,6 @@
"""Home Assistant control object."""
import asyncio
from datetime import timedelta
import errno
from ipaddress import IPv4Address
import logging
@@ -35,8 +34,7 @@ from ..const import (
ATTR_WATCHDOG,
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
HomeAssistantUser,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
@@ -47,7 +45,6 @@ from ..exceptions import (
)
from ..hardware.const import PolicyGroup
from ..hardware.data import Device
from ..jobs.const import JobConcurrency, JobThrottle
from ..jobs.decorator import Job
from ..resolution.const import UnhealthyReason
from ..utils import remove_folder, remove_folder_with_excludes
@@ -570,21 +567,12 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
if attr in data:
self._data[attr] = data[attr]
@Job(
name="home_assistant_get_users",
throttle_period=timedelta(minutes=5),
internal=True,
concurrency=JobConcurrency.QUEUE,
throttle=JobThrottle.THROTTLE,
)
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[IngressSessionDataUserDict] | None
) = await self.sys_homeassistant.websocket.async_send_command(
async def list_users(self) -> list[HomeAssistantUser]:
"""Fetch list of all users from Home Assistant Core via WebSocket.
Raises HomeAssistantWSError on WebSocket connection/communication failure.
"""
raw: list[dict[str, Any]] = await self.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return []
return [HomeAssistantUser.from_dict(data) for data in raw]

View File

@@ -65,7 +65,7 @@ class WSClient:
if not self._client.closed:
await self._client.close()
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a websocket message, and return the response."""
self._message_id += 1
message["id"] = self._message_id
@@ -146,7 +146,7 @@ class WSClient:
try:
client = await session.ws_connect(url, ssl=False)
except aiohttp.client_exceptions.ClientConnectorError:
raise HomeAssistantWSError("Can't connect") from None
raise HomeAssistantWSConnectionError("Can't connect") from None
hello_message = await client.receive_json()
@@ -200,10 +200,11 @@ class HomeAssistantWebSocket(CoreSysAttributes):
async def _ensure_connected(self) -> None:
"""Ensure WebSocket connection is ready.
Raises HomeAssistantWSError if unable to connect.
Raises HomeAssistantWSConnectionError if unable to connect.
Raises HomeAssistantAuthError if authentication with Core fails.
"""
if self.sys_core.state in CLOSING_STATES:
raise HomeAssistantWSError(
raise HomeAssistantWSConnectionError(
"WebSocket not available, system is shutting down"
)
@@ -211,7 +212,7 @@ class HomeAssistantWebSocket(CoreSysAttributes):
# If we are already connected, we can avoid the check_api_state call
# since it makes a new socket connection and we already have one.
if not connected and not await self.sys_homeassistant.api.check_api_state():
raise HomeAssistantWSError(
raise HomeAssistantWSConnectionError(
"Can't connect to Home Assistant Core WebSocket, the API is not reachable"
)
@@ -251,10 +252,10 @@ class HomeAssistantWebSocket(CoreSysAttributes):
await self._client.close()
self._client = None
async def async_send_command(self, message: dict[str, Any]) -> T | None:
async def async_send_command(self, message: dict[str, Any]) -> T:
"""Send a command and return the response.
Raises HomeAssistantWSError if unable to connect to Home Assistant Core.
Raises HomeAssistantWSError on WebSocket connection or communication failure.
"""
await self._ensure_connected()
# _ensure_connected guarantees self._client is set

View File

@@ -215,10 +215,10 @@ class Mount(CoreSysAttributes, ABC):
await self._update_state(unit)
# If active, dismiss corresponding failed mount issue if found
if (
mounted := await self.is_mounted()
) and self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if (mounted := await self.is_mounted()) and (
issue := self.sys_resolution.get_issue_if_present(self.failed_issue)
):
self.sys_resolution.dismiss_issue(issue)
return mounted
@@ -361,8 +361,8 @@ class Mount(CoreSysAttributes, ABC):
await self._restart()
# If it is mounted now, dismiss corresponding issue if present
if self.failed_issue in self.sys_resolution.issues:
self.sys_resolution.dismiss_issue(self.failed_issue)
if issue := self.sys_resolution.get_issue_if_present(self.failed_issue):
self.sys_resolution.dismiss_issue(issue)
async def _restart(self) -> None:
"""Restart mount unit to re-mount."""

View File

@@ -6,7 +6,7 @@ from typing import Any
from ..const import ATTR_CHECKS
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ResolutionNotFound
from ..exceptions import ResolutionCheckNotFound
from ..utils.sentry import async_capture_exception
from .checks.base import CheckBase
from .validate import get_valid_modules
@@ -50,7 +50,7 @@ class ResolutionCheck(CoreSysAttributes):
if slug in self._checks:
return self._checks[slug]
raise ResolutionNotFound(f"Check with slug {slug} not found!")
raise ResolutionCheckNotFound(check=slug)
async def check_system(self) -> None:
"""Check the system."""

View File

@@ -7,7 +7,11 @@ import attr
from ..bus import EventListener
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ResolutionError, ResolutionNotFound
from ..exceptions import (
ResolutionError,
ResolutionIssueNotFound,
ResolutionSuggestionNotFound,
)
from ..homeassistant.const import WSEvent
from ..utils.common import FileConfiguration
from .check import ResolutionCheck
@@ -165,21 +169,37 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
]
}
def get_suggestion(self, uuid: str) -> Suggestion:
def get_suggestion_by_id(self, uuid: str) -> Suggestion:
"""Return suggestion with uuid."""
for suggestion in self._suggestions:
if suggestion.uuid != uuid:
continue
return suggestion
raise ResolutionNotFound()
raise ResolutionSuggestionNotFound(uuid=uuid)
def get_issue(self, uuid: str) -> Issue:
def get_suggestion_if_present(self, suggestion: Suggestion) -> Suggestion | None:
"""Get suggestion matching provided one if it exists in resolution manager."""
for s in self._suggestions:
if s != suggestion:
continue
return s
return None
def get_issue_by_id(self, uuid: str) -> Issue:
"""Return issue with uuid."""
for issue in self._issues:
if issue.uuid != uuid:
continue
return issue
raise ResolutionNotFound()
raise ResolutionIssueNotFound(uuid=uuid)
def get_issue_if_present(self, issue: Issue) -> Issue | None:
"""Get issue matching provided one if it exists in resolution manager."""
for i in self._issues:
if i != issue:
continue
return i
return None
def create_issue(
self,
@@ -234,20 +254,13 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
async def apply_suggestion(self, suggestion: Suggestion) -> None:
"""Apply suggested action."""
if suggestion not in self._suggestions:
raise ResolutionError(
f"Suggestion {suggestion.uuid} is not valid", _LOGGER.warning
)
suggestion = self.get_suggestion_by_id(suggestion.uuid)
await self.fixup.apply_fixup(suggestion)
await self.healthcheck()
def dismiss_suggestion(self, suggestion: Suggestion) -> None:
"""Dismiss suggested action."""
if suggestion not in self._suggestions:
raise ResolutionError(
f"The UUID {suggestion.uuid} is not valid suggestion", _LOGGER.warning
)
suggestion = self.get_suggestion_by_id(suggestion.uuid)
self._suggestions.remove(suggestion)
# Remove event listeners if present
@@ -263,10 +276,7 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
def dismiss_issue(self, issue: Issue) -> None:
"""Dismiss suggested action."""
if issue not in self._issues:
raise ResolutionError(
f"The UUID {issue.uuid} is not a valid issue", _LOGGER.warning
)
issue = self.get_issue_by_id(issue.uuid)
self._issues.remove(issue)
# Event on issue removal

View File

@@ -1,11 +1,11 @@
{
"local": {
"name": "Local add-ons",
"name": "Local apps",
"url": "https://home-assistant.io/hassio",
"maintainer": "you"
},
"core": {
"name": "Official add-ons",
"name": "Official apps",
"url": "https://home-assistant.io/addons",
"maintainer": "Home Assistant"
}

View File

@@ -31,6 +31,7 @@ from .const import (
ATTR_LOGGING,
ATTR_MTU,
ATTR_MULTICAST,
ATTR_NAME,
ATTR_OBSERVER,
ATTR_OTA,
ATTR_PASSWORD,
@@ -206,7 +207,9 @@ SCHEMA_SESSION_DATA = vol.Schema(
{
vol.Required(ATTR_ID): str,
vol.Required(ATTR_USERNAME, default=None): vol.Maybe(str),
vol.Required(ATTR_DISPLAYNAME, default=None): vol.Maybe(str),
vol.Required(ATTR_NAME, default=None): vol.Maybe(str),
# Legacy key, replaced by ATTR_NAME
vol.Optional(ATTR_DISPLAYNAME): vol.Maybe(str),
}
)
}

View File

@@ -1,7 +1,6 @@
"""Test auth API."""
from datetime import UTC, datetime, timedelta
from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch
from aiohttp.hdrs import WWW_AUTHENTICATE
@@ -169,46 +168,25 @@ async def test_list_users(
]
@pytest.mark.parametrize(
("send_command_mock", "error_response", "expected_log"),
[
(
AsyncMock(return_value=None),
{
"result": "error",
"message": "Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
"error_key": "auth_list_users_none_response_error",
"extra_fields": {"none": "None", "logs_command": "ha core logs"},
},
"Home Assistant returned invalid response of `None` instead of a list of users. Check Home Assistant logs for details (check with `ha core logs`)",
),
(
AsyncMock(side_effect=HomeAssistantWSError("fail")),
{
"result": "error",
"message": "Can't request listing users on Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')",
"error_key": "auth_list_users_error",
"extra_fields": {"logs_command": "ha supervisor logs"},
},
"Can't request listing users on Home Assistant: fail",
),
],
)
async def test_list_users_failure(
async def test_list_users_ws_error(
api_client: TestClient,
ha_ws_client: AsyncMock,
caplog: pytest.LogCaptureFixture,
send_command_mock: AsyncMock,
error_response: dict[str, Any],
expected_log: str,
):
"""Test failure listing users via API."""
ha_ws_client.async_send_command = send_command_mock
"""Test WS error when listing users via API."""
ha_ws_client.async_send_command = AsyncMock(
side_effect=HomeAssistantWSError("fail")
)
resp = await api_client.get("/auth/list")
assert resp.status == 500
result = await resp.json()
assert result == error_response
assert expected_log in caplog.text
assert result == {
"result": "error",
"message": "Can't request listing users on Home Assistant. Check supervisor logs for details (check with 'ha supervisor logs')",
"error_key": "auth_list_users_error",
"extra_fields": {"logs_command": "ha supervisor logs"},
}
assert "Can't request listing users on Home Assistant: fail" in caplog.text
@pytest.mark.parametrize(

View File

@@ -99,9 +99,7 @@ async def test_validate_session_with_user_id(
assert session in coresys.ingress.sessions_data
assert coresys.ingress.get_session_data(session).user.id == "some-id"
assert coresys.ingress.get_session_data(session).user.username == "sn"
assert (
coresys.ingress.get_session_data(session).user.display_name == "Some Name"
)
assert coresys.ingress.get_session_data(session).user.name == "Some Name"
async def test_ingress_proxy_no_content_type_for_empty_body_responses(

View File

@@ -1,5 +1,6 @@
"""Test Resolution API."""
from http import HTTPStatus
from unittest.mock import AsyncMock
from aiohttp.test_utils import TestClient
@@ -46,7 +47,7 @@ async def test_api_resolution_base(coresys: CoreSys, api_client: TestClient):
async def test_api_resolution_dismiss_suggestion(
coresys: CoreSys, api_client: TestClient
):
"""Test resolution manager suggestion apply api."""
"""Test resolution manager dismiss suggestion api."""
coresys.resolution.add_suggestion(
clear_backup := Suggestion(SuggestionType.CLEAR_FULL_BACKUP, ContextType.SYSTEM)
)
@@ -189,7 +190,9 @@ async def test_issue_not_found(api_client: TestClient, method: str, url: str):
resp = await api_client.request(method, url)
assert resp.status == 404
body = await resp.json()
assert body["message"] == "The supplied UUID is not a valid issue"
assert body["message"] == "Issue bad does not exist"
assert body["error_key"] == "resolution_issue_not_found_error"
assert body["extra_fields"] == {"uuid": "bad"}
@pytest.mark.parametrize(
@@ -201,7 +204,9 @@ async def test_suggestion_not_found(api_client: TestClient, method: str, url: st
resp = await api_client.request(method, url)
assert resp.status == 404
body = await resp.json()
assert body["message"] == "The supplied UUID is not a valid suggestion"
assert body["message"] == "Suggestion bad does not exist"
assert body["error_key"] == "resolution_suggestion_not_found_error"
assert body["extra_fields"] == {"uuid": "bad"}
@pytest.mark.parametrize(
@@ -211,6 +216,8 @@ async def test_suggestion_not_found(api_client: TestClient, method: str, url: st
async def test_check_not_found(api_client: TestClient, method: str, url: str):
"""Test check not found error."""
resp = await api_client.request(method, url)
assert resp.status == 404
assert resp.status == HTTPStatus.NOT_FOUND
body = await resp.json()
assert body["message"] == "The supplied check slug is not available"
assert body["message"] == "Check 'bad' does not exist"
assert body["error_key"] == "resolution_check_not_found_error"
assert body["extra_fields"] == {"check": "bad"}

View File

@@ -15,10 +15,11 @@ from aiodocker.events import DockerEvents
from aiodocker.execs import Exec
from aiodocker.networks import DockerNetwork, DockerNetworks
from aiodocker.system import DockerSystem
from aiodocker.volumes import DockerVolumes
from aiohttp import ClientSession, web
from aiohttp.test_utils import TestClient
from awesomeversion import AwesomeVersion
from blockbuster import BlockBuster, blockbuster_ctx
from blockbuster import BlockBuster, BlockBusterFunction
from dbus_fast import BusType
from dbus_fast.aio.message_bus import MessageBus
import pytest
@@ -94,9 +95,17 @@ def blockbuster(request: pytest.FixtureRequest) -> BlockBuster | None:
# But it will ignore calls to libraries and such that do blocking I/O directly from tests
# Removing that would be nice but a todo for the future
# pylint: disable-next=contextmanager-generator-missing-cleanup
with blockbuster_ctx(scanned_modules=["supervisor"]) as bb:
yield bb
SCANNED_MODULES = ["supervisor"]
blockbuster = BlockBuster(scanned_modules=SCANNED_MODULES)
blockbuster.functions["pathlib.Path.open"] = BlockBusterFunction(
Path, "open", scanned_modules=SCANNED_MODULES
)
blockbuster.functions["pathlib.Path.close"] = BlockBusterFunction(
Path, "close", scanned_modules=SCANNED_MODULES
)
blockbuster.activate()
yield blockbuster
blockbuster.deactivate()
@pytest.fixture
@@ -152,7 +161,6 @@ async def docker() -> DockerAPI:
}
with (
patch("supervisor.docker.manager.DockerClient", return_value=MagicMock()),
patch(
"supervisor.docker.manager.aiodocker.Docker",
return_value=(
@@ -162,6 +170,7 @@ async def docker() -> DockerAPI:
containers=(docker_containers := MagicMock(spec=DockerContainers)),
events=(docker_events := MagicMock(spec=DockerEvents)),
system=(docker_system := MagicMock(spec=DockerSystem)),
volumes=MagicMock(spec=DockerVolumes),
)
),
),

View File

@@ -9,7 +9,6 @@ import aiodocker
from aiodocker.containers import DockerContainer
from aiodocker.networks import DockerNetwork
from awesomeversion import AwesomeVersion
from docker.errors import APIError
import pytest
from supervisor.const import DNS_SUFFIX, ENV_SUPERVISOR_CPU_RT
@@ -184,14 +183,6 @@ async def test_run_command_custom_stdout_stderr(
async def test_run_command_with_mounts(docker: DockerAPI):
"""Test command execution with mounts are correctly converted."""
# Mock container and its methods
mock_container = MagicMock()
mock_container.wait.return_value = {"StatusCode": 0}
mock_container.logs.return_value = ["output"]
# Mock docker containers.run to return our mock container
docker.dockerpy.containers.run.return_value = mock_container
# Create test mounts
mounts = [
DockerMount(
@@ -456,13 +447,13 @@ async def test_repair(
await coresys.docker.repair()
coresys.docker.dockerpy.api.prune_containers.assert_called_once()
coresys.docker.dockerpy.api.prune_images.assert_called_once_with(
filters={"dangling": False}
coresys.docker.docker.containers.prune.assert_called_once()
coresys.docker.docker.images.prune.assert_called_once_with(
filters={"dangling": "false"}
)
coresys.docker.dockerpy.api.prune_builds.assert_called_once()
coresys.docker.dockerpy.api.prune_volumes.assert_called_once()
coresys.docker.dockerpy.api.prune_networks.assert_called_once()
coresys.docker.docker.images.prune_builds.assert_called_once()
coresys.docker.docker.volumes.prune.assert_called_once()
coresys.docker.docker.networks.prune.assert_called_once()
hassio.disconnect.assert_called_once_with({"Container": "corrupt", "Force": True})
host.disconnect.assert_not_called()
assert "Docker fatal error on container fail on hassio" in caplog.text
@@ -470,24 +461,27 @@ async def test_repair(
async def test_repair_failures(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
"""Test repair proceeds best it can through failures."""
coresys.docker.dockerpy.api.prune_containers.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_images.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_builds.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_volumes.side_effect = APIError("fail")
coresys.docker.dockerpy.api.prune_networks.side_effect = APIError("fail")
coresys.docker.docker.networks.get.side_effect = err = aiodocker.DockerError(
HTTPStatus.NOT_FOUND, {"message": "missing"}
fail_err = aiodocker.DockerError(
HTTPStatus.INTERNAL_SERVER_ERROR, {"message": "fail"}
)
coresys.docker.docker.containers.prune.side_effect = fail_err
coresys.docker.docker.images.prune.side_effect = fail_err
coresys.docker.docker.images.prune_builds.side_effect = fail_err
coresys.docker.docker.volumes.prune.side_effect = fail_err
coresys.docker.docker.networks.prune.side_effect = fail_err
coresys.docker.docker.networks.get.side_effect = missing_err = (
aiodocker.DockerError(HTTPStatus.NOT_FOUND, {"message": "missing"})
)
await coresys.docker.repair()
assert "Error for containers prune: fail" in caplog.text
assert "Error for images prune: fail" in caplog.text
assert "Error for builds prune: fail" in caplog.text
assert "Error for volumes prune: fail" in caplog.text
assert "Error for networks prune: fail" in caplog.text
assert f"Error for networks hassio prune: {err!s}" in caplog.text
assert f"Error for networks host prune: {err!s}" in caplog.text
assert f"Error for containers prune: {fail_err!s}" in caplog.text
assert f"Error for images prune: {fail_err!s}" in caplog.text
assert f"Error for builds prune: {fail_err!s}" in caplog.text
assert f"Error for volumes prune: {fail_err!s}" in caplog.text
assert f"Error for networks prune: {fail_err!s}" in caplog.text
assert f"Error for networks hassio prune: {missing_err!s}" in caplog.text
assert f"Error for networks host prune: {missing_err!s}" in caplog.text
@pytest.mark.parametrize("log_starter", [("Loaded image ID"), ("Loaded image")])

View File

@@ -210,28 +210,14 @@
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "067cd11a63f96d227dcc0f01d3e4f5053c368021becd0b4b2da4f301cfda3d29",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SandboxKey": "/var/run/docker/netns/067cd11a63f9",
"Ports": {
"1883/tcp": [
{ "HostIp": "0.0.0.0", "HostPort": "1883" },
{ "HostIp": "::", "HostPort": "1883" }
]
},
"SandboxKey": "/var/run/docker/netns/067cd11a63f9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"hassio": {
"IPAMConfig": null,

View File

@@ -1,5 +1,6 @@
"""Test Home Assistant core."""
import asyncio
from datetime import datetime, timedelta
from http import HTTPStatus
from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch
@@ -206,6 +207,58 @@ async def test_install_other_error(
assert "Unhandled exception:" not in caplog.text
@pytest.mark.parametrize(
("active_job", "expected_log"),
[
(None, "Home Assistant Core installation in progress"),
(MagicMock(progress=45.0), "Downloading Home Assistant Core image, 45%"),
],
)
async def test_install_logs_progress_periodically(
coresys: CoreSys,
caplog: pytest.LogCaptureFixture,
active_job: MagicMock | None,
expected_log: str,
):
"""Test install logs progress periodically during image pull."""
coresys.security.force = True
coresys.docker.images.pull.return_value = AsyncIterator([{}])
original_wait_for = asyncio.wait_for
async def mock_wait_for(coro, *, timeout=None):
"""Immediately timeout for the progress log wait, pass through others."""
if timeout == 15:
coro.close()
await asyncio.sleep(0)
raise TimeoutError
return await original_wait_for(coro, timeout=timeout)
with (
patch.object(HomeAssistantCore, "start"),
patch.object(DockerHomeAssistant, "cleanup"),
patch.object(
Updater,
"image_homeassistant",
new=PropertyMock(return_value="homeassistant"),
),
patch.object(
Updater, "version_homeassistant", new=PropertyMock(return_value="2022.7.3")
),
patch.object(
DockerInterface, "arch", new=PropertyMock(return_value=CpuArch.AMD64)
),
patch("supervisor.homeassistant.core.asyncio.wait_for", new=mock_wait_for),
patch.object(
DockerHomeAssistant,
"active_job",
new=PropertyMock(return_value=active_job),
),
):
await coresys.homeassistant.core.install()
assert expected_log in caplog.text
@pytest.mark.parametrize(
("container_exc", "image_exc", "delete_calls"),
[

View File

@@ -58,12 +58,11 @@ async def test_load(
assert ha_ws_client.async_send_command.call_args_list[0][0][0] == {"lorem": "ipsum"}
async def test_get_users_none(coresys: CoreSys, ha_ws_client: AsyncMock):
"""Test get users returning none does not fail."""
async def test_list_users_none(coresys: CoreSys, ha_ws_client: AsyncMock):
"""Test list users raises on unexpected None response from Core."""
ha_ws_client.async_send_command.return_value = None
assert (
await coresys.homeassistant.get_users.__wrapped__(coresys.homeassistant) == []
)
with pytest.raises(TypeError):
await coresys.homeassistant.list_users()
async def test_write_pulse_error(coresys: CoreSys, caplog: pytest.LogCaptureFixture):

View File

@@ -8,7 +8,7 @@ import pytest
from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.exceptions import HomeAssistantWSError
from supervisor.exceptions import HomeAssistantWSConnectionError
from supervisor.homeassistant.const import WSEvent, WSType
@@ -81,7 +81,7 @@ async def test_send_command_core_not_reachable(
ha_ws_client.connected = False
with (
patch.object(coresys.homeassistant.api, "check_api_state", return_value=False),
pytest.raises(HomeAssistantWSError, match="not reachable"),
pytest.raises(HomeAssistantWSConnectionError, match="not reachable"),
):
await coresys.homeassistant.websocket.async_send_command({"type": "test"})
@@ -102,7 +102,7 @@ async def test_fire_and_forget_core_not_reachable(
async def test_send_command_during_shutdown(coresys: CoreSys, ha_ws_client: AsyncMock):
"""Test async_send_command raises during shutdown."""
await coresys.core.set_state(CoreState.SHUTDOWN)
with pytest.raises(HomeAssistantWSError, match="shutting down"):
with pytest.raises(HomeAssistantWSConnectionError, match="shutting down"):
await coresys.homeassistant.websocket.async_send_command({"type": "test"})
ha_ws_client.async_send_command.assert_not_called()

View File

@@ -43,7 +43,9 @@ async def test_reading_addon_files_error(coresys: CoreSys):
assert reset_repo in coresys.resolution.suggestions
assert coresys.core.healthy is True
coresys.resolution.dismiss_issue(corrupt_repo)
coresys.resolution.dismiss_issue(
coresys.resolution.get_issue_if_present(corrupt_repo)
)
err.errno = errno.EBADMSG
assert (await coresys.store.data._find_addon_configs(Path("test"), {})) is None
assert corrupt_repo in coresys.resolution.issues

View File

@@ -1,10 +1,11 @@
"""Test ingress."""
from datetime import timedelta
import json
from pathlib import Path
from unittest.mock import ANY, patch
from supervisor.const import IngressSessionData, IngressSessionDataUser
from supervisor.const import HomeAssistantUser, IngressSessionData
from supervisor.coresys import CoreSys
from supervisor.ingress import Ingress
from supervisor.utils.dt import utc_from_timestamp
@@ -34,7 +35,7 @@ def test_session_handling(coresys: CoreSys):
def test_session_handling_with_session_data(coresys: CoreSys):
"""Create and test session."""
session = coresys.ingress.create_session(
IngressSessionData(IngressSessionDataUser("some-id"))
IngressSessionData(HomeAssistantUser("some-id"))
)
assert session
@@ -76,7 +77,7 @@ async def test_ingress_save_data(coresys: CoreSys, tmp_supervisor_data: Path):
with patch("supervisor.ingress.FILE_HASSIO_INGRESS", new=config_file):
ingress = await Ingress(coresys).load_config()
session = ingress.create_session(
IngressSessionData(IngressSessionDataUser("123", "Test", "test"))
IngressSessionData(HomeAssistantUser("123", name="Test", username="test"))
)
await ingress.save_data()
@@ -87,12 +88,47 @@ async def test_ingress_save_data(coresys: CoreSys, tmp_supervisor_data: Path):
assert await coresys.run_in_executor(get_config) == {
"session": {session: ANY},
"session_data": {
session: {"user": {"id": "123", "displayname": "Test", "username": "test"}}
session: {"user": {"id": "123", "name": "Test", "username": "test"}}
},
"ports": {},
}
async def test_ingress_load_legacy_displayname(
coresys: CoreSys, tmp_supervisor_data: Path
):
"""Test loading session data with legacy 'displayname' key."""
config_file = tmp_supervisor_data / "ingress.json"
session_token = "a" * 128
config_file.write_text(
json.dumps(
{
"session": {session_token: 9999999999.0},
"session_data": {
session_token: {
"user": {
"id": "456",
"displayname": "Legacy Name",
"username": "legacy",
}
}
},
"ports": {},
}
)
)
with patch("supervisor.ingress.FILE_HASSIO_INGRESS", new=config_file):
ingress = await Ingress(coresys).load_config()
session_data = ingress.get_session_data(session_token)
assert session_data is not None
assert session_data.user.id == "456"
assert session_data.user.name == "Legacy Name"
assert session_data.user.username == "legacy"
async def test_ingress_reload_ignore_none_data(coresys: CoreSys):
"""Test reloading ingress does not add None for session data and create errors."""
session = coresys.ingress.create_session()

View File

@@ -1,5 +1,6 @@
"""Tests for apparmor utility."""
import asyncio
from pathlib import Path
import pytest
@@ -31,13 +32,20 @@ profile test flags=(attach_disconnected,mediate_deleted) {
async def test_valid_apparmor_file():
"""Test a valid apparmor file."""
assert validate_profile("example", get_fixture_path("apparmor_valid.txt"))
assert await asyncio.get_running_loop().run_in_executor(
None, validate_profile, "example", get_fixture_path("apparmor_valid.txt")
)
async def test_apparmor_missing_profile(caplog: pytest.LogCaptureFixture):
"""Test apparmor file missing profile."""
with pytest.raises(AppArmorInvalidError):
validate_profile("example", get_fixture_path("apparmor_no_profile.txt"))
await asyncio.get_running_loop().run_in_executor(
None,
validate_profile,
"example",
get_fixture_path("apparmor_no_profile.txt"),
)
assert (
"Missing AppArmor profile inside file: apparmor_no_profile.txt" in caplog.text
@@ -47,7 +55,12 @@ async def test_apparmor_missing_profile(caplog: pytest.LogCaptureFixture):
async def test_apparmor_multiple_profiles(caplog: pytest.LogCaptureFixture):
"""Test apparmor file with too many profiles."""
with pytest.raises(AppArmorInvalidError):
validate_profile("example", get_fixture_path("apparmor_multiple_profiles.txt"))
await asyncio.get_running_loop().run_in_executor(
None,
validate_profile,
"example",
get_fixture_path("apparmor_multiple_profiles.txt"),
)
assert (
"Too many AppArmor profiles inside file: apparmor_multiple_profiles.txt"