Compare commits


18 Commits

Author SHA1 Message Date
Jan Čermák
dc44e117a9 Merge branch 'main' into container-create-to-aiodocker 2025-12-11 23:45:39 +01:00
dependabot[bot]
4df0db9df4 Bump aiodns from 3.6.0 to 3.6.1 (#6423)
Bumps [aiodns](https://github.com/saghul/aiodns) from 3.6.0 to 3.6.1.
- [Release notes](https://github.com/saghul/aiodns/releases)
- [Changelog](https://github.com/aio-libs/aiodns/blob/master/ChangeLog)
- [Commits](https://github.com/saghul/aiodns/compare/v3.6.0...v3.6.1)

---
updated-dependencies:
- dependency-name: aiodns
  dependency-version: 3.6.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-11 23:42:40 +01:00
Mike Degatano
ed2275a8cf Fixes from feedback 2025-12-11 20:10:12 +00:00
Mike Degatano
c29a82c47d Fix tests 2025-12-11 20:04:55 +00:00
Mike Degatano
0599238217 Env not Environment 2025-12-11 20:04:54 +00:00
Mike Degatano
b30be21df4 Fix extra hosts transformation 2025-12-11 20:04:54 +00:00
Mike Degatano
7d2bfe8fa6 Migrate create container to aiodocker 2025-12-11 20:04:54 +00:00
dependabot[bot]
27c53048f6 Bump codecov/codecov-action from 5.5.1 to 5.5.2 (#6416) 2025-12-10 09:06:35 +01:00
dependabot[bot]
88ab5e9196 Bump peter-evans/create-pull-request from 7.0.11 to 8.0.0 (#6417) 2025-12-10 07:46:19 +01:00
dependabot[bot]
b7a7475d47 Bump coverage from 7.12.0 to 7.13.0 (#6414) 2025-12-09 07:24:07 +01:00
dependabot[bot]
5fe6b934e2 Bump urllib3 from 2.6.0 to 2.6.1 (#6413) 2025-12-09 07:14:39 +01:00
Hendrik Bergunde
a2d301ed27 Increase timeout waiting for Core API to work around 2025.12.x issues (#6404)
* Fix too short timeouts for Synology NAS 

With Home Assistant Core 2025.12.x updates available, the STARTUP_API_RESPONSE_TIMEOUT that the Supervisor waits (before assuming a startup failure and rolling back the entire Core update) turned out to be too short on less powerful hosts. The problem has been seen on Synology NAS machines running Home Assistant on the side (as in my case). I doubled the timeout from 3 to 6 minutes, and the upgrade to Core 2025.12.1 now works on my Synology DS723+. My update took 4 min 56 s, so the timeout increase was demonstrably necessary.

* Fix tests for increased API Timeout

* Increase the timeout to 10 minutes

* Increase the timeout in tests

---------

Co-authored-by: Jan Čermák <sairon@users.noreply.github.com>
2025-12-08 11:05:57 -05:00
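The wait-then-rollback behavior this commit tunes can be sketched as follows. The constant name mirrors STARTUP_API_RESPONSE_TIMEOUT from the message above (final value 10 minutes per the follow-up commits); `check_api` and the 5-second polling interval are hypothetical stand-ins, not the Supervisor's actual implementation:

```python
import asyncio

# Mirrors the constant discussed above; raised so slow hosts (e.g. a Synology
# NAS running large Core migrations) can finish before rollback triggers.
STARTUP_API_RESPONSE_TIMEOUT = 10 * 60  # seconds

async def wait_for_core_api(check_api, timeout=STARTUP_API_RESPONSE_TIMEOUT):
    """Poll `check_api` until it reports healthy or the timeout elapses.

    `check_api` is an assumed async callable returning True once the Core
    API answers; on timeout the caller would take the rollback path.
    """
    async def _poll():
        while not await check_api():
            await asyncio.sleep(5)  # hypothetical polling interval

    try:
        await asyncio.wait_for(_poll(), timeout=timeout)
        return True
    except asyncio.TimeoutError:
        return False  # caller rolls back the Core update
```

The fix in this PR is purely the size of the timeout constant; the polling structure itself is unchanged.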
Jan Čermák
cdef1831ba Add option to Core settings to enable duplicated logs (#6400)
Introduce a new option `duplicate_log_file` in the HA Core configuration that, when
enabled, sets the environment variable `HA_DUPLICATE_LOG_FILE=1` on the Core container.
This serves as a flag for Core to keep writing the legacy log file alongside the
standard logging handled by the systemd journal.
2025-12-08 16:35:56 +01:00
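On the consuming side, Core would only need to test the flag the Supervisor sets. A minimal sketch (the helper name and Core's reaction to the flag are assumptions for illustration; only the variable name and value come from the commit above):

```python
import os

ENV_DUPLICATE_LOG_FILE = "HA_DUPLICATE_LOG_FILE"  # set by the Supervisor

def duplicate_log_enabled(environ=os.environ) -> bool:
    """Return True when the Supervisor asked Core to keep the legacy log file.

    The Supervisor sets HA_DUPLICATE_LOG_FILE=1 on the Core container when
    the `duplicate_log_file` option is enabled.
    """
    return environ.get(ENV_DUPLICATE_LOG_FILE) == "1"
```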
dependabot[bot]
b79130816b Bump aiodns from 3.5.0 to 3.6.0 (#6408) 2025-12-08 08:24:12 +01:00
dependabot[bot]
923bc2ba87 Bump backports-zstd from 1.1.0 to 1.2.0 (#6410)
Bumps [backports-zstd](https://github.com/rogdham/backports.zstd) from 1.1.0 to 1.2.0.
- [Changelog](https://github.com/Rogdham/backports.zstd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rogdham/backports.zstd/compare/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: backports-zstd
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:22:07 +01:00
dependabot[bot]
0f6b211151 Bump pytest from 9.0.1 to 9.0.2 (#6409)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 9.0.1 to 9.0.2.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/9.0.1...9.0.2)

---
updated-dependencies:
- dependency-name: pytest
  dependency-version: 9.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:21:53 +01:00
dependabot[bot]
054c6d0365 Bump peter-evans/create-pull-request from 7.0.9 to 7.0.11 (#6406)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.9 to 7.0.11.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](84ae59a2cd...22a9089034)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 08:21:33 +01:00
dependabot[bot]
d920bde7e4 Bump orjson from 3.11.4 to 3.11.5 (#6407) 2025-12-08 07:24:53 +01:00
42 changed files with 754 additions and 442 deletions


@@ -428,4 +428,4 @@ jobs:
         coverage report
         coverage xml
     - name: Upload coverage to Codecov
-      uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
+      uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2


@@ -68,7 +68,7 @@ jobs:
        run: |
          rm -f supervisor/api/panel/home_assistant_frontend_supervisor-*.tar.gz
      - name: Create PR
-       uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
+       uses: peter-evans/create-pull-request@98357b18bf14b5342f975ff684046ec3b2a07725 # v8.0.0
        with:
          commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
          branch: autoupdate-frontend


@@ -1,10 +1,10 @@
-aiodns==3.5.0
+aiodns==3.6.1
 aiodocker==0.24.0
 aiohttp==3.13.2
 atomicwrites-homeassistant==1.4.1
 attrs==25.4.0
 awesomeversion==25.8.0
-backports.zstd==1.1.0
+backports.zstd==1.2.0
 blockbuster==1.5.26
 brotli==1.2.0
 ciso8601==2.3.3
@@ -19,7 +19,7 @@ faust-cchardet==2.1.19
 gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.11.4
+orjson==3.11.5
 pulsectl==24.12.0
 pyudev==0.24.4
 PyYAML==6.0.3


@@ -1,5 +1,5 @@
 astroid==4.0.2
-coverage==7.12.0
+coverage==7.13.0
 mypy==1.19.0
 pre-commit==4.5.0
 pylint==4.0.4
@@ -7,10 +7,10 @@ pytest-aiohttp==1.1.0
 pytest-asyncio==1.3.0
 pytest-cov==7.0.0
 pytest-timeout==2.4.0
-pytest==9.0.1
+pytest==9.0.2
 ruff==0.14.8
 time-machine==3.1.0
 types-docker==7.1.0.20251202
 types-pyyaml==6.0.12.20250915
 types-requests==2.32.4.20250913
-urllib3==2.6.0
+urllib3==2.6.1


@@ -18,6 +18,7 @@ from ..const import (
     ATTR_BLK_WRITE,
     ATTR_BOOT,
     ATTR_CPU_PERCENT,
+    ATTR_DUPLICATE_LOG_FILE,
     ATTR_IMAGE,
     ATTR_IP_ADDRESS,
     ATTR_JOB_ID,
@@ -55,6 +56,7 @@ SCHEMA_OPTIONS = vol.Schema(
         vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
         vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
         vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE): vol.Boolean(),
+        vol.Optional(ATTR_DUPLICATE_LOG_FILE): vol.Boolean(),
     }
 )
@@ -112,6 +114,7 @@ class APIHomeAssistant(CoreSysAttributes):
             ATTR_AUDIO_INPUT: self.sys_homeassistant.audio_input,
             ATTR_AUDIO_OUTPUT: self.sys_homeassistant.audio_output,
             ATTR_BACKUPS_EXCLUDE_DATABASE: self.sys_homeassistant.backups_exclude_database,
+            ATTR_DUPLICATE_LOG_FILE: self.sys_homeassistant.duplicate_log_file,
         }

     @api_process
@@ -151,6 +154,9 @@ class APIHomeAssistant(CoreSysAttributes):
                 ATTR_BACKUPS_EXCLUDE_DATABASE
             ]
+        if ATTR_DUPLICATE_LOG_FILE in body:
+            self.sys_homeassistant.duplicate_log_file = body[ATTR_DUPLICATE_LOG_FILE]

         await self.sys_homeassistant.save_data()

     @api_process


@@ -179,6 +179,7 @@ ATTR_DOCKER = "docker"
 ATTR_DOCKER_API = "docker_api"
 ATTR_DOCUMENTATION = "documentation"
 ATTR_DOMAINS = "domains"
+ATTR_DUPLICATE_LOG_FILE = "duplicate_log_file"
 ATTR_ENABLE = "enable"
 ATTR_ENABLE_IPV6 = "enable_ipv6"
 ATTR_ENABLED = "enabled"


@@ -10,14 +10,13 @@ import os
 from pathlib import Path
 from socket import SocketIO
 import tempfile
-from typing import TYPE_CHECKING, cast
+from typing import TYPE_CHECKING, Literal, cast

 import aiodocker
 from attr import evolve
 from awesomeversion import AwesomeVersion
 import docker
 import docker.errors
-from docker.types import Mount
 import requests

 from ..addons.build import AddonBuild
@@ -68,8 +67,11 @@ from .const import (
     PATH_SHARE,
     PATH_SSL,
     Capabilities,
+    DockerMount,
+    MountBindOptions,
     MountType,
     PropagationMode,
+    Ulimit,
 )
 from .interface import DockerInterface
@@ -272,7 +274,7 @@ class DockerAddon(DockerInterface):
         }

     @property
-    def network_mode(self) -> str | None:
+    def network_mode(self) -> Literal["host"] | None:
         """Return network mode for add-on."""
         if self.addon.host_network:
             return "host"
@@ -311,28 +313,28 @@ class DockerAddon(DockerInterface):
         return None

     @property
-    def ulimits(self) -> list[docker.types.Ulimit] | None:
+    def ulimits(self) -> list[Ulimit] | None:
         """Generate ulimits for add-on."""
-        limits: list[docker.types.Ulimit] = []
+        limits: list[Ulimit] = []

         # Need schedule functions
         if self.addon.with_realtime:
-            limits.append(docker.types.Ulimit(name="rtprio", soft=90, hard=99))
+            limits.append(Ulimit(name="rtprio", soft=90, hard=99))

             # Set available memory for memlock to 128MB
             mem = 128 * 1024 * 1024
-            limits.append(docker.types.Ulimit(name="memlock", soft=mem, hard=mem))
+            limits.append(Ulimit(name="memlock", soft=mem, hard=mem))

         # Add configurable ulimits from add-on config
         for name, config in self.addon.ulimits.items():
             if isinstance(config, int):
                 # Simple format: both soft and hard limits are the same
-                limits.append(docker.types.Ulimit(name=name, soft=config, hard=config))
+                limits.append(Ulimit(name=name, soft=config, hard=config))
             elif isinstance(config, dict):
                 # Detailed format: both soft and hard limits are mandatory
                 soft = config["soft"]
                 hard = config["hard"]
-                limits.append(docker.types.Ulimit(name=name, soft=soft, hard=hard))
+                limits.append(Ulimit(name=name, soft=soft, hard=hard))

         # Return None if no ulimits are present
         if limits:
@@ -351,7 +353,7 @@ class DockerAddon(DockerInterface):
         return None

     @property
-    def mounts(self) -> list[Mount]:
+    def mounts(self) -> list[DockerMount]:
         """Return mounts for container."""
         addon_mapping = self.addon.map_volumes
@@ -361,8 +363,8 @@ class DockerAddon(DockerInterface):
         mounts = [
             MOUNT_DEV,
-            Mount(
-                type=MountType.BIND.value,
+            DockerMount(
+                type=MountType.BIND,
                 source=self.addon.path_extern_data.as_posix(),
                 target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
                 read_only=False,
@@ -372,8 +374,8 @@ class DockerAddon(DockerInterface):
         # setup config mappings
         if MappingType.CONFIG in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target=addon_mapping[MappingType.CONFIG].path
                     or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
@@ -385,8 +387,8 @@ class DockerAddon(DockerInterface):
             # Map addon's public config folder if not using deprecated config option
             if self.addon.addon_config_used:
                 mounts.append(
-                    Mount(
-                        type=MountType.BIND.value,
+                    DockerMount(
+                        type=MountType.BIND,
                         source=self.addon.path_extern_config.as_posix(),
                         target=addon_mapping[MappingType.ADDON_CONFIG].path
                         or PATH_PUBLIC_CONFIG.as_posix(),
@@ -397,8 +399,8 @@ class DockerAddon(DockerInterface):
             # Map Home Assistant config in new way
             if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
                 mounts.append(
-                    Mount(
-                        type=MountType.BIND.value,
+                    DockerMount(
+                        type=MountType.BIND,
                         source=self.sys_config.path_extern_homeassistant.as_posix(),
                         target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
                         or PATH_HOMEASSISTANT_CONFIG.as_posix(),
@@ -410,8 +412,8 @@ class DockerAddon(DockerInterface):
         if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_addon_configs.as_posix(),
                     target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
                     or PATH_ALL_ADDON_CONFIGS.as_posix(),
@@ -421,8 +423,8 @@ class DockerAddon(DockerInterface):
         if MappingType.SSL in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
                     read_only=addon_mapping[MappingType.SSL].read_only,
@@ -431,8 +433,8 @@ class DockerAddon(DockerInterface):
         if MappingType.ADDONS in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_addons_local.as_posix(),
                     target=addon_mapping[MappingType.ADDONS].path
                     or PATH_LOCAL_ADDONS.as_posix(),
@@ -442,8 +444,8 @@ class DockerAddon(DockerInterface):
         if MappingType.BACKUP in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_backup.as_posix(),
                     target=addon_mapping[MappingType.BACKUP].path
                     or PATH_BACKUP.as_posix(),
@@ -453,25 +455,25 @@ class DockerAddon(DockerInterface):
         if MappingType.SHARE in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target=addon_mapping[MappingType.SHARE].path
                     or PATH_SHARE.as_posix(),
                     read_only=addon_mapping[MappingType.SHARE].read_only,
-                    propagation=PropagationMode.RSLAVE,
+                    bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
                 )
             )

         if MappingType.MEDIA in addon_mapping:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_media.as_posix(),
                     target=addon_mapping[MappingType.MEDIA].path
                     or PATH_MEDIA.as_posix(),
                     read_only=addon_mapping[MappingType.MEDIA].read_only,
-                    propagation=PropagationMode.RSLAVE,
+                    bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
                 )
             )
@@ -483,8 +485,8 @@ class DockerAddon(DockerInterface):
             if not Path(gpio_path).exists():
                 continue
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=gpio_path,
                     target=gpio_path,
                     read_only=False,
@@ -494,8 +496,8 @@ class DockerAddon(DockerInterface):
         # DeviceTree support
         if self.addon.with_devicetree:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source="/sys/firmware/devicetree/base",
                     target="/device-tree",
                     read_only=True,
@@ -509,8 +511,8 @@ class DockerAddon(DockerInterface):
         # Kernel Modules support
         if self.addon.with_kernel_modules:
             mounts.append(
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source="/lib/modules",
                     target="/lib/modules",
                     read_only=True,
@@ -528,20 +530,20 @@ class DockerAddon(DockerInterface):
         # Configuration Audio
         if self.addon.with_audio:
             mounts += [
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.addon.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,
@@ -551,14 +553,14 @@ class DockerAddon(DockerInterface):
         # System Journal access
         if self.addon.with_journald:
             mounts += [
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     read_only=True,
@@ -706,7 +708,9 @@ class DockerAddon(DockerInterface):
         # Remove dangling builder container if it exists by any chance
         # E.g. because of an abrupt host shutdown/reboot during a build
         with suppress(docker.errors.NotFound):
-            self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
+            self.sys_docker.containers_legacy.get(builder_name).remove(
+                force=True, v=True
+            )

         # Generate Docker config with registry credentials for base image if needed
         docker_config_path: Path | None = None
@@ -833,7 +837,7 @@ class DockerAddon(DockerInterface):
         """
         try:
             # Load needed docker objects
-            container = self.sys_docker.containers.get(self.name)
+            container = self.sys_docker.containers_legacy.get(self.name)

             # attach_socket returns SocketIO for local Docker connections (Unix socket)
             socket = cast(
                 SocketIO, container.attach_socket(params={"stdin": 1, "stream": 1})
@@ -896,7 +900,7 @@ class DockerAddon(DockerInterface):
         try:
             docker_container = await self.sys_run_in_executor(
-                self.sys_docker.containers.get, self.name
+                self.sys_docker.containers_legacy.get, self.name
             )
         except docker.errors.NotFound:
             if self._hw_listener:


@@ -2,9 +2,6 @@

 import logging

-import docker
-from docker.types import Mount
-
 from ..const import DOCKER_CPU_RUNTIME_ALLOCATION
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
@@ -19,7 +16,9 @@ from .const import (
     MOUNT_UDEV,
     PATH_PRIVATE_DATA,
     Capabilities,
+    DockerMount,
     MountType,
+    Ulimit,
 )
 from .interface import DockerInterface
@@ -42,12 +41,12 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         return AUDIO_DOCKER_NAME

     @property
-    def mounts(self) -> list[Mount]:
+    def mounts(self) -> list[DockerMount]:
         """Return mounts for container."""
         mounts = [
             MOUNT_DEV,
-            Mount(
-                type=MountType.BIND.value,
+            DockerMount(
+                type=MountType.BIND,
                 source=self.sys_config.path_extern_audio.as_posix(),
                 target=PATH_PRIVATE_DATA.as_posix(),
                 read_only=False,
@@ -75,10 +74,10 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         return [Capabilities.SYS_NICE, Capabilities.SYS_RESOURCE]

     @property
-    def ulimits(self) -> list[docker.types.Ulimit]:
+    def ulimits(self) -> list[Ulimit]:
         """Generate ulimits for audio."""
         # Pulseaudio by default tries to use real-time scheduling with priority of 5.
-        return [docker.types.Ulimit(name="rtprio", soft=10, hard=10)]
+        return [Ulimit(name="rtprio", soft=10, hard=10)]

     @property
     def cpu_rt_runtime(self) -> int | None:


@@ -3,13 +3,12 @@
 from __future__ import annotations

 from contextlib import suppress
+from dataclasses import dataclass
 from enum import Enum, StrEnum
 from functools import total_ordering
 from pathlib import PurePath
 import re
-from typing import cast
-
-from docker.types import Mount
+from typing import Any, cast

 from ..const import MACHINE_ID
@@ -133,33 +132,94 @@ class PullImageLayerStage(Enum):
         return None


+@dataclass(slots=True, frozen=True)
+class MountBindOptions:
+    """Bind options for docker mount."""
+
+    propagation: PropagationMode | None = None
+    read_only_non_recursive: bool | None = None
+
+    def to_dict(self) -> dict[str, Any]:
+        """To dictionary representation."""
+        out: dict[str, Any] = {}
+        if self.propagation:
+            out["Propagation"] = self.propagation.value
+        if self.read_only_non_recursive is not None:
+            out["ReadOnlyNonRecursive"] = self.read_only_non_recursive
+        return out
+
+
+@dataclass(slots=True, frozen=True)
+class DockerMount:
+    """A docker mount."""
+
+    type: MountType
+    source: str
+    target: str
+    read_only: bool
+    bind_options: MountBindOptions | None = None
+
+    def to_dict(self) -> dict[str, Any]:
+        """To dictionary representation."""
+        out: dict[str, Any] = {
+            "Type": self.type.value,
+            "Source": self.source,
+            "Target": self.target,
+            "ReadOnly": self.read_only,
+        }
+        if self.bind_options:
+            out["BindOptions"] = self.bind_options.to_dict()
+        return out
+
+
+@dataclass(slots=True, frozen=True)
+class Ulimit:
+    """A linux user limit."""
+
+    name: str
+    soft: int
+    hard: int
+
+    def to_dict(self) -> dict[str, str | int]:
+        """To dictionary representation."""
+        return {
+            "Name": self.name,
+            "Soft": self.soft,
+            "Hard": self.hard,
+        }
+
+
+ENV_DUPLICATE_LOG_FILE = "HA_DUPLICATE_LOG_FILE"
 ENV_TIME = "TZ"
 ENV_TOKEN = "SUPERVISOR_TOKEN"
 ENV_TOKEN_OLD = "HASSIO_TOKEN"

 LABEL_MANAGED = "supervisor_managed"

-MOUNT_DBUS = Mount(
-    type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
+MOUNT_DBUS = DockerMount(
+    type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
 )
-MOUNT_DEV = Mount(
-    type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
+MOUNT_DEV = DockerMount(
+    type=MountType.BIND,
+    source="/dev",
+    target="/dev",
+    read_only=True,
+    bind_options=MountBindOptions(read_only_non_recursive=True),
 )
-MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
-MOUNT_DOCKER = Mount(
-    type=MountType.BIND.value,
+MOUNT_DOCKER = DockerMount(
+    type=MountType.BIND,
     source="/run/docker.sock",
     target="/run/docker.sock",
     read_only=True,
 )
-MOUNT_MACHINE_ID = Mount(
-    type=MountType.BIND.value,
+MOUNT_MACHINE_ID = DockerMount(
+    type=MountType.BIND,
     source=MACHINE_ID.as_posix(),
     target=MACHINE_ID.as_posix(),
     read_only=True,
 )
-MOUNT_UDEV = Mount(
-    type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
+MOUNT_UDEV = DockerMount(
+    type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
 )

 PATH_PRIVATE_DATA = PurePath("/data")


@@ -2,13 +2,11 @@

 import logging

-from docker.types import Mount
-
 from ..coresys import CoreSysAttributes
 from ..exceptions import DockerJobError
 from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
-from .const import ENV_TIME, MOUNT_DBUS, MountType
+from .const import ENV_TIME, MOUNT_DBUS, DockerMount, MountType
 from .interface import DockerInterface

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -47,8 +45,8 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
             security_opt=self.security_opt,
             environment={ENV_TIME: self.sys_timezone},
             mounts=[
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_dns.as_posix(),
                     target="/config",
                     read_only=False,


@@ -5,7 +5,6 @@ import logging
 import re

 from awesomeversion import AwesomeVersion
-from docker.types import Mount

 from ..const import LABEL_MACHINE
 from ..exceptions import DockerJobError
@@ -14,6 +13,7 @@ from ..homeassistant.const import LANDINGPAGE
 from ..jobs.const import JobConcurrency
 from ..jobs.decorator import Job
 from .const import (
+    ENV_DUPLICATE_LOG_FILE,
     ENV_TIME,
     ENV_TOKEN,
     ENV_TOKEN_OLD,
@@ -25,6 +25,8 @@ from .const import (
     PATH_PUBLIC_CONFIG,
     PATH_SHARE,
     PATH_SSL,
+    DockerMount,
+    MountBindOptions,
     MountType,
     PropagationMode,
 )
@@ -90,15 +92,15 @@ class DockerHomeAssistant(DockerInterface):
         )

     @property
-    def mounts(self) -> list[Mount]:
+    def mounts(self) -> list[DockerMount]:
         """Return mounts for container."""
         mounts = [
             MOUNT_DEV,
             MOUNT_DBUS,
             MOUNT_UDEV,
             # HA config folder
-            Mount(
-                type=MountType.BIND.value,
+            DockerMount(
+                type=MountType.BIND,
                 source=self.sys_config.path_extern_homeassistant.as_posix(),
                 target=PATH_PUBLIC_CONFIG.as_posix(),
                 read_only=False,
@@ -110,41 +112,45 @@ class DockerHomeAssistant(DockerInterface):
         mounts.extend(
             [
                 # All other folders
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target=PATH_SSL.as_posix(),
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target=PATH_SHARE.as_posix(),
                     read_only=False,
-                    propagation=PropagationMode.RSLAVE.value,
+                    bind_options=MountBindOptions(
+                        propagation=PropagationMode.RSLAVE
+                    ),
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_config.path_extern_media.as_posix(),
                     target=PATH_MEDIA.as_posix(),
                     read_only=False,
-                    propagation=PropagationMode.RSLAVE.value,
+                    bind_options=MountBindOptions(
+                        propagation=PropagationMode.RSLAVE
+                    ),
                 ),
                 # Configuration audio
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_homeassistant.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
-                Mount(
-                    type=MountType.BIND.value,
+                DockerMount(
+                    type=MountType.BIND,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,
@@ -174,6 +180,8 @@ class DockerHomeAssistant(DockerInterface):
         }
         if restore_job_id:
             environment[ENV_RESTORE_JOB_ID] = restore_job_id
+        if self.sys_homeassistant.duplicate_log_file:
+            environment[ENV_DUPLICATE_LOG_FILE] = "1"

         await self._run(
             tag=(self.sys_homeassistant.version),
             name=self.name,
@@ -213,20 +221,20 @@ class DockerHomeAssistant(DockerInterface):
             init=True,
             entrypoint=[],
             mounts=[
-                Mount(
+                DockerMount(
type=MountType.BIND.value, type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(), source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config", target="/config",
read_only=False, read_only=False,
), ),
Mount( DockerMount(
type=MountType.BIND.value, type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(), source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl", target="/ssl",
read_only=True, read_only=True,
), ),
Mount( DockerMount(
type=MountType.BIND.value, type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(), source=self.sys_config.path_extern_share.as_posix(),
target="/share", target="/share",
read_only=False, read_only=False,

View File

@@ -461,7 +461,7 @@ class DockerInterface(JobGroup, ABC):
         """Get docker container, returns None if not found."""
         try:
             return await self.sys_run_in_executor(
-                self.sys_docker.containers.get, self.name
+                self.sys_docker.containers_legacy.get, self.name
             )
         except docker.errors.NotFound:
             return None
@@ -493,7 +493,7 @@ class DockerInterface(JobGroup, ABC):
         """Attach to running Docker container."""
         with suppress(docker.errors.DockerException, requests.RequestException):
             docker_container = await self.sys_run_in_executor(
                 self.sys_docker.containers_legacy.get, self.name
             )
             self._meta = docker_container.attrs
             self.sys_docker.monitor.watch_container(docker_container)
@@ -533,8 +533,11 @@ class DockerInterface(JobGroup, ABC):
         """Run Docker image."""
         raise NotImplementedError()

-    async def _run(self, **kwargs) -> None:
-        """Run Docker image with retry inf necessary."""
+    async def _run(self, *, name: str, **kwargs) -> None:
+        """Run Docker image with retry if necessary."""
+        if not (image := self.image):
+            raise ValueError(f"Cannot determine image to use to run {self.name}!")
+
         if await self.is_running():
             return
@@ -543,16 +546,14 @@ class DockerInterface(JobGroup, ABC):

         # Create & Run container
         try:
-            docker_container = await self.sys_run_in_executor(
-                self.sys_docker.run, self.image, **kwargs
-            )
+            container_metadata = await self.sys_docker.run(image, name=name, **kwargs)
         except DockerNotFound as err:
             # If image is missing, capture the exception as this shouldn't happen
             await async_capture_exception(err)
             raise

         # Store metadata
-        self._meta = docker_container.attrs
+        self._meta = container_metadata

     @Job(
         name="docker_interface_stop",

View File

@@ -13,10 +13,12 @@ import logging
 import os
 from pathlib import Path
 import re
-from typing import Any, Final, Self, cast
+from typing import Any, Final, Literal, Self, cast

 import aiodocker
+from aiodocker.containers import DockerContainers
 from aiodocker.images import DockerImages
+from aiodocker.types import JSONObject
 from aiohttp import ClientSession, ClientTimeout, UnixConnector
 import attr
 from awesomeversion import AwesomeVersion, AwesomeVersionCompareException
@@ -49,7 +51,16 @@ from ..exceptions import (
 )
 from ..utils.common import FileConfiguration
 from ..validate import SCHEMA_DOCKER_CONFIG
-from .const import DOCKER_HUB, DOCKER_HUB_LEGACY, LABEL_MANAGED
+from .const import (
+    DOCKER_HUB,
+    DOCKER_HUB_LEGACY,
+    LABEL_MANAGED,
+    Capabilities,
+    DockerMount,
+    MountType,
+    RestartPolicy,
+    Ulimit,
+)
 from .monitor import DockerMonitor
 from .network import DockerNetwork
 from .utils import get_registry_from_image
@@ -297,8 +308,13 @@ class DockerAPI(CoreSysAttributes):
         return self.docker.images

     @property
-    def containers(self) -> ContainerCollection:
+    def containers(self) -> DockerContainers:
         """Return API containers."""
+        return self.docker.containers
+
+    @property
+    def containers_legacy(self) -> ContainerCollection:
+        """Return API containers from Dockerpy."""
         return self.dockerpy.containers

     @property
@@ -331,50 +347,137 @@ class DockerAPI(CoreSysAttributes):
         """Stop docker events monitor."""
         await self.monitor.unload()

-    def run(
+    def _create_container_config(
         self,
         image: str,
+        *,
         tag: str = "latest",
         dns: bool = True,
-        ipv4: IPv4Address | None = None,
-        **kwargs: Any,
-    ) -> Container:
-        """Create a Docker container and run it.
-
-        Need run inside executor.
+        init: bool = False,
+        hostname: str | None = None,
+        detach: bool = True,
+        security_opt: list[str] | None = None,
+        restart_policy: dict[str, RestartPolicy] | None = None,
+        extra_hosts: dict[str, IPv4Address] | None = None,
+        environment: dict[str, str | None] | None = None,
+        mounts: list[DockerMount] | None = None,
+        ports: dict[str, str | int | None] | None = None,
+        oom_score_adj: int | None = None,
+        network_mode: Literal["host"] | None = None,
+        privileged: bool = False,
+        device_cgroup_rules: list[str] | None = None,
+        tmpfs: dict[str, str] | None = None,
+        entrypoint: list[str] | None = None,
+        cap_add: list[Capabilities] | None = None,
+        ulimits: list[Ulimit] | None = None,
+        cpu_rt_runtime: int | None = None,
+        stdin_open: bool = False,
+        pid_mode: str | None = None,
+        uts_mode: str | None = None,
+    ) -> JSONObject:
+        """Map kwargs to create container config.
+
+        This only covers the docker options we currently use. It is not intended
+        to be exhaustive as its dockerpy equivalent was. We'll add to it as we
+        make use of new features.
         """
-        name: str | None = kwargs.get("name")
-        network_mode: str | None = kwargs.get("network_mode")
-        hostname: str | None = kwargs.get("hostname")
-
-        if "labels" not in kwargs:
-            kwargs["labels"] = {}
-        elif isinstance(kwargs["labels"], list):
-            kwargs["labels"] = dict.fromkeys(kwargs["labels"], "")
-
-        kwargs["labels"][LABEL_MANAGED] = ""
-
-        # Setup DNS
+        # Set up host dependent config for container
+        host_config: dict[str, Any] = {
+            "NetworkMode": network_mode if network_mode else "default",
+            "Init": init,
+            "Privileged": privileged,
+        }
+        if security_opt:
+            host_config["SecurityOpt"] = security_opt
+        if restart_policy:
+            host_config["RestartPolicy"] = restart_policy
+        if extra_hosts:
+            host_config["ExtraHosts"] = [f"{k}:{v}" for k, v in extra_hosts.items()]
+        if mounts:
+            host_config["Mounts"] = [mount.to_dict() for mount in mounts]
+        if oom_score_adj is not None:
+            host_config["OomScoreAdj"] = oom_score_adj
+        if device_cgroup_rules:
+            host_config["DeviceCgroupRules"] = device_cgroup_rules
+        if tmpfs:
+            host_config["Tmpfs"] = tmpfs
+        if cap_add:
+            host_config["CapAdd"] = cap_add
+        if cpu_rt_runtime is not None:
+            host_config["CPURealtimeRuntime"] = cpu_rt_runtime
+        if pid_mode:
+            host_config["PidMode"] = pid_mode
+        if uts_mode:
+            host_config["UtsMode"] = uts_mode
+        if ulimits:
+            host_config["Ulimits"] = [limit.to_dict() for limit in ulimits]
+
+        # Full container config
+        config: dict[str, Any] = {
+            "Image": f"{image}:{tag}",
+            "Labels": {LABEL_MANAGED: ""},
+            "OpenStdin": stdin_open,
+            "StdinOnce": not detach and stdin_open,
+            "AttachStdin": not detach and stdin_open,
+            "AttachStdout": not detach,
+            "AttachStderr": not detach,
+            "HostConfig": host_config,
+        }
+        if hostname:
+            config["Hostname"] = hostname
+        if environment:
+            config["Env"] = [
+                env if val is None else f"{env}={val}"
+                for env, val in environment.items()
+            ]
+        if entrypoint:
+            config["Entrypoint"] = entrypoint
+
+        # Set up networking
         if dns:
-            kwargs["dns"] = [str(self.network.dns)]
-            kwargs["dns_search"] = [DNS_SUFFIX]
+            host_config["Dns"] = [str(self.network.dns)]
+            host_config["DnsSearch"] = [DNS_SUFFIX]
             # CoreDNS forward plug-in fails in ~6s, then fallback triggers.
             # However, the default timeout of glibc and musl is 5s. Increase
             # default timeout to make sure CoreDNS fallback is working
             # on first query.
-            kwargs["dns_opt"] = ["timeout:10"]
+            host_config["DnsOptions"] = ["timeout:10"]
             if hostname:
-                kwargs["domainname"] = DNS_SUFFIX
-
-        # Setup network
-        if not network_mode:
-            kwargs["network"] = None
+                config["Domainname"] = DNS_SUFFIX
+
+        # Setup ports
+        if ports:
+            port_bindings = {
+                port if "/" in port else f"{port}/tcp": [
+                    {"HostIp": "", "HostPort": str(host_port) if host_port else ""}
+                ]
+                for port, host_port in ports.items()
+            }
+            config["ExposedPorts"] = {port: {} for port in port_bindings}
+            host_config["PortBindings"] = port_bindings
+
+        return config
+    async def run(
+        self,
+        image: str,
+        *,
+        name: str,
+        tag: str = "latest",
+        hostname: str | None = None,
+        mounts: list[DockerMount] | None = None,
+        network_mode: Literal["host"] | None = None,
+        ipv4: IPv4Address | None = None,
+        **kwargs,
+    ) -> dict[str, Any]:
+        """Create a Docker container and run it."""
+        if not image or not name:
+            raise ValueError("image, name and tag cannot be an empty string!")

         # Setup cidfile and bind mount it
-        cidfile_path = None
-        if name:
-            cidfile_path = self.coresys.config.path_cid_files / f"{name}.cid"
+        cidfile_path = self.coresys.config.path_cid_files / f"{name}.cid"
+
+        def create_cidfile() -> None:
             # Remove the file/directory if it exists e.g. as a leftover from unclean shutdown
             # Note: Can be a directory if Docker auto-started container with restart policy
             # before Supervisor could write the CID file
@@ -388,31 +491,37 @@ class DockerAPI(CoreSysAttributes):
             # from creating it as a directory if container auto-starts
             cidfile_path.touch()

-            extern_cidfile_path = (
-                self.coresys.config.path_extern_cid_files / f"{name}.cid"
-            )
-
-            # Bind mount to /run/cid in container
-            if "volumes" not in kwargs:
-                kwargs["volumes"] = {}
-            kwargs["volumes"][str(extern_cidfile_path)] = {
-                "bind": "/run/cid",
-                "mode": "ro",
-            }
+        await self.sys_run_in_executor(create_cidfile)
+
+        # Bind mount to /run/cid in container
+        extern_cidfile_path = self.coresys.config.path_extern_cid_files / f"{name}.cid"
+        cid_mount = DockerMount(
+            type=MountType.BIND,
+            source=extern_cidfile_path.as_posix(),
+            target="/run/cid",
+            read_only=True,
+        )
+        if mounts is None:
+            mounts = [cid_mount]
+        else:
+            mounts = [*mounts, cid_mount]

         # Create container
+        config = self._create_container_config(
+            image,
+            tag=tag,
+            hostname=hostname,
+            mounts=mounts,
+            network_mode=network_mode,
+            **kwargs,
+        )
         try:
-            container = self.containers.create(
-                f"{image}:{tag}", use_config_proxy=False, **kwargs
-            )
-            if cidfile_path:
-                with cidfile_path.open("w", encoding="ascii") as cidfile:
-                    cidfile.write(str(container.id))
-        except docker_errors.NotFound as err:
-            raise DockerNotFound(
-                f"Image {image}:{tag} does not exist for {name}", _LOGGER.error
-            ) from err
-        except docker_errors.DockerException as err:
+            container = await self.containers.create(config, name=name)
+        except aiodocker.DockerError as err:
+            if err.status == HTTPStatus.NOT_FOUND:
+                raise DockerNotFound(
+                    f"Image {image}:{tag} does not exist for {name}", _LOGGER.error
+                ) from err
             raise DockerAPIError(
                 f"Can't create container from {name}: {err}", _LOGGER.error
             ) from err
@@ -421,43 +530,62 @@ class DockerAPI(CoreSysAttributes):
                 f"Dockerd connection issue for {name}: {err}", _LOGGER.error
             ) from err

-        # Attach network
-        if not network_mode:
-            alias = [hostname] if hostname else None
-            try:
-                self.network.attach_container(container, alias=alias, ipv4=ipv4)
-            except DockerError:
-                _LOGGER.warning("Can't attach %s to hassio-network!", name)
-            else:
-                with suppress(DockerError):
-                    self.network.detach_default_bridge(container)
-        else:
-            host_network: Network = self.dockerpy.networks.get(DOCKER_NETWORK_HOST)
-            # Check if container is register on host
-            # https://github.com/moby/moby/issues/23302
-            if name and name in (
-                val.get("Name")
-                for val in host_network.attrs.get("Containers", {}).values()
-            ):
-                with suppress(docker_errors.NotFound):
-                    host_network.disconnect(name, force=True)
+        # Get container metadata
+        try:
+            container_attrs = await container.show()
+        except aiodocker.DockerError as err:
+            raise DockerAPIError(
+                f"Can't inspect new container {name}: {err}", _LOGGER.error
+            ) from err
+        except requests.RequestException as err:
+            raise DockerRequestError(
+                f"Dockerd connection issue for {name}: {err}", _LOGGER.error
+            ) from err
+
+        # Setup network and store container id in cidfile
+        def setup_network_and_cidfile() -> None:
+            # Write cidfile
+            with cidfile_path.open("w", encoding="ascii") as cidfile:
+                cidfile.write(str(container.id))
+
+            # Attach network
+            if not network_mode:
+                alias = [hostname] if hostname else None
+                try:
+                    self.network.attach_container(
+                        container.id, name, alias=alias, ipv4=ipv4
+                    )
+                except DockerError:
+                    _LOGGER.warning("Can't attach %s to hassio-network!", name)
+                else:
+                    with suppress(DockerError):
+                        self.network.detach_default_bridge(container.id, name)
+            else:
+                host_network: Network = self.dockerpy.networks.get(DOCKER_NETWORK_HOST)
+                # Check if container is register on host
+                # https://github.com/moby/moby/issues/23302
+                if name and name in (
+                    val.get("Name")
+                    for val in host_network.attrs.get("Containers", {}).values()
+                ):
+                    with suppress(docker_errors.NotFound):
+                        host_network.disconnect(name, force=True)
+
+        await self.sys_run_in_executor(setup_network_and_cidfile)

         # Run container
         try:
-            container.start()
-        except docker_errors.DockerException as err:
+            await container.start()
+        except aiodocker.DockerError as err:
             raise DockerAPIError(f"Can't start {name}: {err}", _LOGGER.error) from err
         except requests.RequestException as err:
             raise DockerRequestError(
                 f"Dockerd connection issue for {name}: {err}", _LOGGER.error
             ) from err

-        # Update metadata
-        with suppress(docker_errors.DockerException, requests.RequestException):
-            container.reload()
-
-        return container
+        # Return metadata
+        return container_attrs
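The rewritten `run()` splits cidfile handling into two blocking steps pushed to the executor: recreate an empty file before the container exists, then write the id once `create` returns. A self-contained sketch of that logic, with illustrative paths:

```python
# Minimal sketch of the two-phase cidfile handling in run(). The real code
# runs both phases via sys_run_in_executor; paths here are illustrative.
from pathlib import Path
import shutil
import tempfile


def create_cidfile(path: Path) -> None:
    # Leftover from an unclean shutdown may be a file or, if Docker
    # auto-started the container under a restart policy first, a directory
    if path.is_dir():
        shutil.rmtree(path, ignore_errors=True)
    else:
        path.unlink(missing_ok=True)
    # Pre-create an empty file so a bind mount can't turn it into a directory
    path.touch()


with tempfile.TemporaryDirectory() as tmp:
    cidfile = Path(tmp) / "homeassistant.cid"
    cidfile.mkdir()  # simulate Docker racing us and leaving a directory
    create_cidfile(cidfile)
    cidfile.write_text("abc123", encoding="ascii")
    print(cidfile.read_text(encoding="ascii"))  # abc123
```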
     async def pull_image(
         self,
@@ -610,7 +738,9 @@ class DockerAPI(CoreSysAttributes):
     ) -> bool:
         """Return True if docker container exists in good state and is built from expected image."""
         try:
-            docker_container = await self.sys_run_in_executor(self.containers.get, name)
+            docker_container = await self.sys_run_in_executor(
+                self.containers_legacy.get, name
+            )
             docker_image = await self.images.inspect(f"{image}:{version}")
         except docker_errors.NotFound:
             return False
@@ -639,7 +769,7 @@ class DockerAPI(CoreSysAttributes):
     ) -> None:
         """Stop/remove Docker container."""
         try:
-            docker_container: Container = self.containers.get(name)
+            docker_container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             # Generally suppressed so we don't log this
             raise DockerNotFound() from None
@@ -666,7 +796,7 @@ class DockerAPI(CoreSysAttributes):
     def start_container(self, name: str) -> None:
         """Start Docker container."""
         try:
-            docker_container: Container = self.containers.get(name)
+            docker_container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             raise DockerNotFound(
                 f"{name} not found for starting up", _LOGGER.error
@@ -685,7 +815,7 @@ class DockerAPI(CoreSysAttributes):
     def restart_container(self, name: str, timeout: int) -> None:
         """Restart docker container."""
         try:
-            container: Container = self.containers.get(name)
+            container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             raise DockerNotFound(
                 f"Container {name} not found for restarting", _LOGGER.warning
@@ -704,7 +834,7 @@ class DockerAPI(CoreSysAttributes):
     def container_logs(self, name: str, tail: int = 100) -> bytes:
         """Return Docker logs of container."""
         try:
-            docker_container: Container = self.containers.get(name)
+            docker_container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             raise DockerNotFound(
                 f"Container {name} not found for logs", _LOGGER.warning
@@ -724,7 +854,7 @@ class DockerAPI(CoreSysAttributes):
     def container_stats(self, name: str) -> dict[str, Any]:
         """Read and return stats from container."""
         try:
-            docker_container: Container = self.containers.get(name)
+            docker_container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             raise DockerNotFound(
                 f"Container {name} not found for stats", _LOGGER.warning
@@ -749,7 +879,7 @@ class DockerAPI(CoreSysAttributes):
     def container_run_inside(self, name: str, command: str) -> CommandReturn:
         """Execute a command inside Docker container."""
         try:
-            docker_container: Container = self.containers.get(name)
+            docker_container: Container = self.containers_legacy.get(name)
         except docker_errors.NotFound:
             raise DockerNotFound(
                 f"Container {name} not found for running command", _LOGGER.warning

View File

@@ -7,7 +7,6 @@ import logging
 from typing import Self, cast

 import docker
-from docker.models.containers import Container
 from docker.models.networks import Network
 import requests
@@ -220,7 +219,8 @@ class DockerNetwork:
     def attach_container(
         self,
-        container: Container,
+        container_id: str,
+        name: str,
         alias: list[str] | None = None,
         ipv4: IPv4Address | None = None,
     ) -> None:
@@ -233,15 +233,15 @@ class DockerNetwork:
         self.network.reload()

         # Check stale Network
-        if container.name and container.name in (
+        if name in (
             val.get("Name") for val in self.network.attrs.get("Containers", {}).values()
         ):
-            self.stale_cleanup(container.name)
+            self.stale_cleanup(name)

         # Attach Network
         try:
             self.network.connect(
-                container, aliases=alias, ipv4_address=str(ipv4) if ipv4 else None
+                container_id, aliases=alias, ipv4_address=str(ipv4) if ipv4 else None
             )
         except (
             docker.errors.NotFound,
@@ -250,7 +250,7 @@ class DockerNetwork:
             requests.RequestException,
         ) as err:
             raise DockerError(
-                f"Can't connect {container.name} to Supervisor network: {err}",
+                f"Can't connect {name} to Supervisor network: {err}",
                 _LOGGER.error,
             ) from err
@@ -274,17 +274,20 @@ class DockerNetwork:
         ) as err:
             raise DockerError(f"Can't find {name}: {err}", _LOGGER.error) from err

-        if container.id not in self.containers:
-            self.attach_container(container, alias, ipv4)
+        if not (container_id := container.id):
+            raise DockerError(f"Received invalid metadata from docker for {name}")
+
+        if container_id not in self.containers:
+            self.attach_container(container_id, name, alias, ipv4)

-    def detach_default_bridge(self, container: Container) -> None:
+    def detach_default_bridge(self, container_id: str, name: str) -> None:
         """Detach default Docker bridge.

         Need run inside executor.
         """
         try:
             default_network = self.docker.networks.get(DOCKER_NETWORK_DRIVER)
-            default_network.disconnect(container)
+            default_network.disconnect(container_id)
         except docker.errors.NotFound:
             pass
         except (
@@ -293,7 +296,7 @@ class DockerNetwork:
             requests.RequestException,
         ) as err:
             raise DockerError(
-                f"Can't disconnect {container.name} from default network: {err}",
+                f"Can't disconnect {name} from default network: {err}",
                 _LOGGER.warning,
             ) from err
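`attach_container` now receives only `container_id` and `name`, so the stale-attachment check scans the network's inspect data by name instead of reading a `Container` object. A small sketch of that check against the Engine API network-inspect `Containers` shape (the helper name and sample data are illustrative):

```python
# Sketch of the stale-attachment check: the "Containers" map in a network
# inspect response keys container ids to details including "Name".
def is_attached(network_attrs: dict, name: str) -> bool:
    return name in (
        val.get("Name") for val in network_attrs.get("Containers", {}).values()
    )


attrs = {"Containers": {"abc123": {"Name": "homeassistant"}}}
print(is_attached(attrs, "homeassistant"))  # True
print(is_attached(attrs, "addon_core_ssh"))  # False
```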

View File

@@ -54,7 +54,7 @@ class DockerSupervisor(DockerInterface):
         """Attach to running docker container."""
         try:
             docker_container = await self.sys_run_in_executor(
-                self.sys_docker.containers.get, self.name
+                self.sys_docker.containers_legacy.get, self.name
             )
         except (docker.errors.DockerException, requests.RequestException) as err:
             raise DockerError() from err
@@ -74,7 +74,8 @@ class DockerSupervisor(DockerInterface):
             _LOGGER.info("Connecting Supervisor to hassio-network")
             await self.sys_run_in_executor(
                 self.sys_docker.network.attach_container,
-                docker_container,
+                docker_container.id,
+                self.name,
                 alias=["supervisor"],
                 ipv4=self.sys_docker.network.supervisor,
             )
@@ -90,7 +91,7 @@ class DockerSupervisor(DockerInterface):
         Need run inside executor.
         """
         try:
-            docker_container = self.sys_docker.containers.get(self.name)
+            docker_container = self.sys_docker.containers_legacy.get(self.name)
         except (docker.errors.DockerException, requests.RequestException) as err:
             raise DockerError(
                 f"Could not get Supervisor container for retag: {err}", _LOGGER.error
@@ -118,7 +119,7 @@ class DockerSupervisor(DockerInterface):
         """Update start tag to new version."""
         try:
             docker_container = await self.sys_run_in_executor(
-                self.sys_docker.containers.get, self.name
+                self.sys_docker.containers_legacy.get, self.name
             )
             docker_image = await self.sys_docker.images.inspect(f"{image}:{version!s}")
         except (

View File

@@ -48,7 +48,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 SECONDS_BETWEEN_API_CHECKS: Final[int] = 5
 # Core Stage 1 and some wiggle room
-STARTUP_API_RESPONSE_TIMEOUT: Final[timedelta] = timedelta(minutes=3)
+STARTUP_API_RESPONSE_TIMEOUT: Final[timedelta] = timedelta(minutes=10)
 # All stages plus event start timeout and some wiggle room
 STARTUP_API_CHECK_RUNNING_TIMEOUT: Final[timedelta] = timedelta(minutes=15)
 # While database migration is running, the timeout will be extended

View File

@@ -23,6 +23,7 @@ from ..const import (
     ATTR_AUDIO_OUTPUT,
     ATTR_BACKUPS_EXCLUDE_DATABASE,
     ATTR_BOOT,
+    ATTR_DUPLICATE_LOG_FILE,
     ATTR_IMAGE,
     ATTR_MESSAGE,
     ATTR_PORT,
@@ -299,6 +300,16 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
         """Set whether backups should exclude database by default."""
         self._data[ATTR_BACKUPS_EXCLUDE_DATABASE] = value

+    @property
+    def duplicate_log_file(self) -> bool:
+        """Return True if Home Assistant should duplicate logs to file."""
+        return self._data[ATTR_DUPLICATE_LOG_FILE]
+
+    @duplicate_log_file.setter
+    def duplicate_log_file(self, value: bool) -> None:
+        """Set whether Home Assistant should duplicate logs to file."""
+        self._data[ATTR_DUPLICATE_LOG_FILE] = value
+
     async def load(self) -> None:
         """Prepare Home Assistant object."""
         await asyncio.wait(

View File

@@ -10,6 +10,7 @@ from ..const import (
     ATTR_AUDIO_OUTPUT,
     ATTR_BACKUPS_EXCLUDE_DATABASE,
     ATTR_BOOT,
+    ATTR_DUPLICATE_LOG_FILE,
     ATTR_IMAGE,
     ATTR_PORT,
     ATTR_REFRESH_TOKEN,
@@ -36,6 +37,7 @@ SCHEMA_HASS_CONFIG = vol.Schema(
         vol.Optional(ATTR_AUDIO_OUTPUT, default=None): vol.Maybe(str),
         vol.Optional(ATTR_AUDIO_INPUT, default=None): vol.Maybe(str),
         vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE, default=False): vol.Boolean(),
+        vol.Optional(ATTR_DUPLICATE_LOG_FILE, default=False): vol.Boolean(),
         vol.Optional(ATTR_OVERRIDE_IMAGE, default=False): vol.Boolean(),
     },
     extra=vol.REMOVE_EXTRA,

View File

@@ -74,7 +74,9 @@ class EvaluateContainer(EvaluateBase):
         self._images.clear()

         try:
-            containers = await self.sys_run_in_executor(self.sys_docker.containers.list)
+            containers = await self.sys_run_in_executor(
+                self.sys_docker.containers_legacy.list
+            )
         except (DockerException, RequestException) as err:
             _LOGGER.error("Corrupt docker overlayfs detect: %s", err)
             self.sys_resolution.create_issue(

View File

@@ -227,7 +227,7 @@ async def test_listener_attached_on_install(
     container_collection.get.side_effect = DockerException()
     with (
         patch(
-            "supervisor.docker.manager.DockerAPI.containers",
+            "supervisor.docker.manager.DockerAPI.containers_legacy",
             new=PropertyMock(return_value=container_collection),
         ),
         patch("pathlib.Path.is_dir", return_value=True),
@@ -527,7 +527,7 @@ async def test_backup_with_pre_command_error(
     exc_type_raised: type[HassioError],
 ) -> None:
     """Test backing up an addon with error running pre command."""
-    coresys.docker.containers.get.side_effect = container_get_side_effect
+    coresys.docker.containers_legacy.get.side_effect = container_get_side_effect
     container.exec_run.side_effect = exec_run_side_effect
     install_addon_ssh.path_data.mkdir()

View File

@@ -679,7 +679,7 @@ async def test_addon_write_stdin_not_supported_error(api_client: TestClient):
 async def test_addon_rebuild_fails_error(api_client: TestClient, coresys: CoreSys):
     """Test error when build fails during rebuild for addon."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
-    coresys.docker.containers.run.side_effect = DockerException("fail")
+    coresys.docker.containers_legacy.run.side_effect = DockerException("fail")

     with (
         patch.object(

View File

@@ -1201,10 +1201,8 @@ async def test_restore_homeassistant_adds_env(
     assert docker.containers.create.call_args.kwargs["name"] == "homeassistant"
     assert (
-        docker.containers.create.call_args.kwargs["environment"][
-            "SUPERVISOR_RESTORE_JOB_ID"
-        ]
-        == job.uuid
+        f"SUPERVISOR_RESTORE_JOB_ID={job.uuid}"
+        in docker.containers.create.call_args.args[0]["Env"]
     )
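Editor's aside on the hunk above: docker-py's high-level API accepts an `environment` dict, while the raw Docker Engine create payload that aiodocker forwards carries an `Env` list of `KEY=value` strings — hence the changed assertion. A minimal sketch of that conversion (the helper name is mine, not Supervisor's):

```python
def to_env_list(environment: dict[str, str]) -> list[str]:
    """Flatten an environment dict into Docker's ["KEY=value", ...] form."""
    return [f"{key}={value}" for key, value in environment.items()]

# Membership checks then work on the flattened strings, as in the test above.
env = to_env_list({"SUPERVISOR_RESTORE_JOB_ID": "abc123", "TZ": "UTC"})
assert "SUPERVISOR_RESTORE_JOB_ID=abc123" in env
```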

View File

@@ -35,9 +35,9 @@ async def test_api_core_logs(
 async def test_api_stats(api_client: TestClient, coresys: CoreSys):
     """Test stats."""
-    coresys.docker.containers.get.return_value.status = "running"
-    coresys.docker.containers.get.return_value.stats.return_value = load_json_fixture(
-        "container_stats.json"
+    coresys.docker.containers_legacy.get.return_value.status = "running"
+    coresys.docker.containers_legacy.get.return_value.stats.return_value = (
+        load_json_fixture("container_stats.json")
     )
     resp = await api_client.get("/homeassistant/stats")
@@ -138,14 +138,14 @@ async def test_api_rebuild(
         await api_client.post("/homeassistant/rebuild")
     assert container.remove.call_count == 2
-    container.start.assert_called_once()
+    coresys.docker.containers.create.return_value.start.assert_called_once()
     assert not safe_mode_marker.exists()
     with patch.object(HomeAssistantCore, "_block_till_run"):
         await api_client.post("/homeassistant/rebuild", json={"safe_mode": True})
     assert container.remove.call_count == 4
-    assert container.start.call_count == 2
+    assert coresys.docker.containers.create.return_value.start.call_count == 2
     assert safe_mode_marker.exists()

View File

@@ -412,9 +412,9 @@ async def test_api_progress_updates_supervisor_update(
 async def test_api_supervisor_stats(api_client: TestClient, coresys: CoreSys):
     """Test supervisor stats."""
-    coresys.docker.containers.get.return_value.status = "running"
-    coresys.docker.containers.get.return_value.stats.return_value = load_json_fixture(
-        "container_stats.json"
+    coresys.docker.containers_legacy.get.return_value.status = "running"
+    coresys.docker.containers_legacy.get.return_value.stats.return_value = (
+        load_json_fixture("container_stats.json")
     )
     resp = await api_client.get("/supervisor/stats")
@@ -430,7 +430,7 @@ async def test_supervisor_api_stats_failure(
     api_client: TestClient, coresys: CoreSys, caplog: pytest.LogCaptureFixture
 ):
     """Test supervisor stats failure."""
-    coresys.docker.containers.get.side_effect = DockerException("fail")
+    coresys.docker.containers_legacy.get.side_effect = DockerException("fail")
     resp = await api_client.get("/supervisor/stats")
     assert resp.status == 500

View File

@@ -9,6 +9,7 @@ import subprocess
 from unittest.mock import AsyncMock, MagicMock, Mock, PropertyMock, patch
 from uuid import uuid4

+from aiodocker.containers import DockerContainer, DockerContainers
 from aiodocker.docker import DockerImages
 from aiohttp import ClientSession, web
 from aiohttp.test_utils import TestClient
@@ -120,11 +121,13 @@ async def docker() -> DockerAPI:
         "Id": "test123",
         "RepoTags": ["ghcr.io/home-assistant/amd64-hassio-supervisor:latest"],
     }
+    container_inspect = image_inspect | {"State": {"ExitCode": 0}}
     with (
         patch("supervisor.docker.manager.DockerClient", return_value=MagicMock()),
         patch(
-            "supervisor.docker.manager.DockerAPI.containers", return_value=MagicMock()
+            "supervisor.docker.manager.DockerAPI.containers_legacy",
+            return_value=MagicMock(),
         ),
         patch("supervisor.docker.manager.DockerAPI.api", return_value=MagicMock()),
         patch("supervisor.docker.manager.DockerAPI.info", return_value=MagicMock()),
@@ -136,6 +139,12 @@ async def docker() -> DockerAPI:
                 return_value=(docker_images := MagicMock(spec=DockerImages))
             ),
         ),
+        patch(
+            "supervisor.docker.manager.DockerAPI.containers",
+            new=PropertyMock(
+                return_value=(docker_containers := MagicMock(spec=DockerContainers))
+            ),
+        ),
     ):
         docker_obj = await DockerAPI(MagicMock()).post_init()
         docker_obj.config._data = {"registries": {}}
@@ -147,9 +156,15 @@ async def docker() -> DockerAPI:
         docker_images.import_image = AsyncMock(
             return_value=[{"stream": "Loaded image: test:latest\n"}]
         )
         docker_images.pull.return_value = AsyncIterator([{}])
+        docker_containers.get.return_value = docker_container = MagicMock(
+            spec=DockerContainer
+        )
+        docker_containers.list.return_value = [docker_container]
+        docker_containers.create.return_value = docker_container
+        docker_container.show.return_value = container_inspect
         docker_obj.info.logging = "journald"
         docker_obj.info.storage = "overlay2"
         docker_obj.info.version = AwesomeVersion("1.0.0")
@@ -790,7 +805,7 @@ async def docker_logs(docker: DockerAPI, supervisor_name) -> MagicMock:
     """Mock log output for a container from docker."""
     container_mock = MagicMock()
     container_mock.logs.return_value = load_binary_fixture("logs_docker_container.txt")
-    docker.containers.get.return_value = container_mock
+    docker.containers_legacy.get.return_value = container_mock
     yield container_mock.logs
@@ -824,7 +839,7 @@ async def os_available(request: pytest.FixtureRequest) -> None:
 @pytest.fixture
 async def mount_propagation(docker: DockerAPI, coresys: CoreSys) -> None:
     """Mock supervisor connected to container with propagation set."""
-    docker.containers.get.return_value = supervisor = MagicMock()
+    docker.containers_legacy.get.return_value = supervisor = MagicMock()
     supervisor.attrs = {
         "Mounts": [
             {
@@ -844,10 +859,11 @@ async def mount_propagation(docker: DockerAPI, coresys: CoreSys) -> None:
 @pytest.fixture
 async def container(docker: DockerAPI) -> MagicMock:
     """Mock attrs and status for container on attach."""
-    docker.containers.get.return_value = addon = MagicMock()
-    docker.containers.create.return_value = addon
-    addon.status = "stopped"
-    addon.attrs = {"State": {"ExitCode": 0}}
+    attrs = {"State": {"ExitCode": 0}}
+    docker.containers_legacy.get.return_value = addon = MagicMock(
+        status="stopped", attrs=attrs
+    )
+    docker.containers.create.return_value.show.return_value = attrs
     yield addon
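Editor's aside: the conftest changes above lean on `spec=` mocks so that typos or removed APIs fail loudly. A runnable sketch of the pattern with a stand-in class (the real fixture specs against `aiodocker.containers.DockerContainer`):

```python
from unittest.mock import MagicMock

# Stand-in for aiodocker's DockerContainer; real code would pass the real class.
class DockerContainerLike:
    def show(self) -> dict: ...
    def start(self) -> None: ...

container = MagicMock(spec=DockerContainerLike)
container.show.return_value = {"State": {"ExitCode": 0}}

# Spec'd attributes behave like normal mocks:
assert container.show() == {"State": {"ExitCode": 0}}

# Anything outside the spec raises AttributeError instead of silently mocking,
# which is what catches leftover docker-py calls like exec_run during migration.
assert not hasattr(container, "exec_run")
```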

View File

@@ -1,7 +1,12 @@
 """Docker tests."""

-from docker.types import Mount
+from supervisor.docker.const import DockerMount, MountBindOptions, MountType

 # dev mount with equivalent of bind-recursive=writable specified via dict value
-DEV_MOUNT = Mount(type="bind", source="/dev", target="/dev", read_only=True)
-DEV_MOUNT["BindOptions"] = {"ReadOnlyNonRecursive": True}
+DEV_MOUNT = DockerMount(
+    type=MountType.BIND,
+    source="/dev",
+    target="/dev",
+    read_only=True,
+    bind_options=MountBindOptions(read_only_non_recursive=True),
+)
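Editor's aside: the hunk above swaps docker-py's dict-like `Mount` (note the old `DEV_MOUNT["BindOptions"] = ...` mutation) for a typed mount. A self-contained sketch of why that pays off in tests — field names follow the diff, but `MountSketch`/`BindOptions` here are mimics, not the real `supervisor.docker.const` types:

```python
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum

class MountType(str, Enum):
    BIND = "bind"

@dataclass(frozen=True)
class BindOptions:
    read_only_non_recursive: bool = False

@dataclass(frozen=True)
class MountSketch:
    type: MountType
    source: str
    target: str
    read_only: bool = False
    bind_options: BindOptions | None = None

dev = MountSketch(
    type=MountType.BIND,
    source="/dev",
    target="/dev",
    read_only=True,
    bind_options=BindOptions(read_only_non_recursive=True),
)

# Dataclass equality makes the `mount in docker_addon.mounts` style of
# assertion in the tests reliable, with no post-construction dict mutation.
assert dev == MountSketch(
    type=MountType.BIND,
    source="/dev",
    target="/dev",
    read_only=True,
    bind_options=BindOptions(read_only_non_recursive=True),
)
```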

View File

@@ -1,13 +1,13 @@
 """Test docker addon setup."""

 import asyncio
+from http import HTTPStatus
 from ipaddress import IPv4Address
 from pathlib import Path
 from typing import Any
 from unittest.mock import MagicMock, Mock, PropertyMock, patch

-from docker.errors import NotFound
-from docker.types import Mount
+import aiodocker
 import pytest

 from supervisor.addons import validate as vd
@@ -18,6 +18,12 @@ from supervisor.const import BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.dbus.agent.cgroup import CGroup
 from supervisor.docker.addon import DockerAddon
+from supervisor.docker.const import (
+    DockerMount,
+    MountBindOptions,
+    MountType,
+    PropagationMode,
+)
 from supervisor.docker.manager import DockerAPI
 from supervisor.exceptions import CoreDNSError, DockerNotFound
 from supervisor.hardware.data import Device
@@ -80,8 +86,8 @@ def test_base_volumes_included(
     # Data added as rw
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=docker_addon.addon.path_extern_data.as_posix(),
             target="/data",
             read_only=False,
@@ -99,8 +105,8 @@ def test_addon_map_folder_defaults(
     )
     # Config added and is marked rw
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_homeassistant.as_posix(),
             target="/config",
             read_only=False,
@@ -110,8 +116,8 @@ def test_addon_map_folder_defaults(
     # SSL added and defaults to ro
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_ssl.as_posix(),
             target="/ssl",
             read_only=True,
@@ -121,30 +127,30 @@ def test_addon_map_folder_defaults(
     # Media added and propagation set
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_media.as_posix(),
             target="/media",
             read_only=True,
-            propagation="rslave",
+            bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
         )
         in docker_addon.mounts
     )
     # Share added and propagation set
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_share.as_posix(),
             target="/share",
             read_only=True,
-            propagation="rslave",
+            bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
         )
         in docker_addon.mounts
     )
     # Backup not added
-    assert "/backup" not in [mount["Target"] for mount in docker_addon.mounts]
+    assert "/backup" not in [mount.target for mount in docker_addon.mounts]


 def test_addon_map_homeassistant_folder(
@@ -157,8 +163,8 @@ def test_addon_map_homeassistant_folder(
     # Home Assistant config folder mounted to /homeassistant, not /config
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_homeassistant.as_posix(),
             target="/homeassistant",
             read_only=True,
@@ -177,8 +183,8 @@ def test_addon_map_addon_configs_folder(
     # Addon configs folder included
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_addon_configs.as_posix(),
             target="/addon_configs",
             read_only=True,
@@ -197,8 +203,8 @@ def test_addon_map_addon_config_folder(
     # Addon config folder included
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=docker_addon.addon.path_extern_config.as_posix(),
             target="/config",
             read_only=True,
@@ -220,8 +226,8 @@ def test_addon_map_addon_config_folder_with_custom_target(
     # Addon config folder included
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=docker_addon.addon.path_extern_config.as_posix(),
             target="/custom/target/path",
             read_only=False,
@@ -240,8 +246,8 @@ def test_addon_map_data_folder_with_custom_target(
     # Addon config folder included
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=docker_addon.addon.path_extern_data.as_posix(),
             target="/custom/data/path",
             read_only=False,
@@ -260,8 +266,8 @@ def test_addon_ignore_on_config_map(
     # Config added and is marked rw
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_homeassistant.as_posix(),
             target="/config",
             read_only=False,
@@ -271,11 +277,10 @@ def test_addon_ignore_on_config_map(
     # Mount for addon's specific config folder omitted since config in map field
     assert (
-        len([mount for mount in docker_addon.mounts if mount["Target"] == "/config"])
-        == 1
+        len([mount for mount in docker_addon.mounts if mount.target == "/config"]) == 1
     )
     # Home Assistant mount omitted since config in map field
-    assert "/homeassistant" not in [mount["Target"] for mount in docker_addon.mounts]
+    assert "/homeassistant" not in [mount.target for mount in docker_addon.mounts]


 def test_journald_addon(
@@ -287,8 +292,8 @@ def test_journald_addon(
     )
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source="/var/log/journal",
             target="/var/log/journal",
             read_only=True,
@@ -296,8 +301,8 @@ def test_journald_addon(
         in docker_addon.mounts
     )
     assert (
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source="/run/log/journal",
             target="/run/log/journal",
             read_only=True,
@@ -314,7 +319,7 @@ def test_not_journald_addon(
         coresys, addonsdata_system, "basic-addon-config.json"
     )
-    assert "/var/log/journal" not in [mount["Target"] for mount in docker_addon.mounts]
+    assert "/var/log/journal" not in [mount.target for mount in docker_addon.mounts]


 async def test_addon_run_docker_error(
@@ -325,7 +330,9 @@ async def test_addon_run_docker_error(
 ):
     """Test docker error when addon is run."""
     await coresys.dbus.timedate.connect(coresys.dbus.bus)
-    coresys.docker.containers.create.side_effect = NotFound("Missing")
+    coresys.docker.containers.create.side_effect = aiodocker.DockerError(
+        HTTPStatus.NOT_FOUND, {"message": "missing"}
+    )
     docker_addon = get_docker_addon(
         coresys, addonsdata_system, "basic-addon-config.json"
     )
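Editor's aside: per the hunk above, the aiodocker-style error is constructed from an HTTP status plus a payload dict carrying a `message`, unlike docker-py's `NotFound("Missing")`. A stand-in sketch of that shape (a mimic class, not the real `aiodocker.DockerError`):

```python
from http import HTTPStatus

class DockerErrorSketch(Exception):
    """Mimic of an aiodocker-style error: status code + payload dict."""

    def __init__(self, status: int, data: dict) -> None:
        super().__init__(data.get("message", ""))
        self.status = status
        self.message = data.get("message", "")

# Callers can branch on the HTTP status instead of the exception subclass.
err = DockerErrorSketch(HTTPStatus.NOT_FOUND, {"message": "missing"})
assert err.status == HTTPStatus.NOT_FOUND
assert err.message == "missing"
```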

View File

@@ -2,22 +2,24 @@
 from ipaddress import IPv4Address
 from pathlib import Path
-from unittest.mock import patch
+from unittest.mock import MagicMock, patch

-from docker.types import Mount
+import pytest

 from supervisor.coresys import CoreSys
+from supervisor.docker.const import DockerMount, MountType, Ulimit
 from supervisor.docker.manager import DockerAPI

 from . import DEV_MOUNT


-async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, path_extern):
+@pytest.mark.usefixtures("path_extern")
+async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, container: MagicMock):
     """Test starting audio plugin."""
     config_file = tmp_supervisor_data / "audio" / "pulse_audio.json"
     assert not config_file.exists()

-    with patch.object(DockerAPI, "run") as run:
+    with patch.object(DockerAPI, "run", return_value=container.attrs) as run:
         await coresys.plugins.audio.start()

     run.assert_called_once()
@@ -26,21 +28,31 @@ async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, path_extern):
     assert run.call_args.kwargs["hostname"] == "hassio-audio"
     assert run.call_args.kwargs["cap_add"] == ["SYS_NICE", "SYS_RESOURCE"]
     assert run.call_args.kwargs["ulimits"] == [
-        {"Name": "rtprio", "Soft": 10, "Hard": 10}
+        Ulimit(name="rtprio", soft=10, hard=10)
     ]
     assert run.call_args.kwargs["mounts"] == [
         DEV_MOUNT,
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_audio.as_posix(),
             target="/data",
             read_only=False,
         ),
-        Mount(type="bind", source="/run/dbus", target="/run/dbus", read_only=True),
-        Mount(type="bind", source="/run/udev", target="/run/udev", read_only=True),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/dbus",
+            target="/run/dbus",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/udev",
+            target="/run/udev",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
             source="/etc/machine-id",
             target="/etc/machine-id",
             read_only=True,

View File

@@ -2,20 +2,22 @@
 from ipaddress import IPv4Address
 from pathlib import Path
-from unittest.mock import patch
+from unittest.mock import MagicMock, patch

-from docker.types import Mount
+import pytest

 from supervisor.coresys import CoreSys
+from supervisor.docker.const import DockerMount, MountType
 from supervisor.docker.manager import DockerAPI


-async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, path_extern):
+@pytest.mark.usefixtures("path_extern")
+async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, container: MagicMock):
     """Test starting dns plugin."""
     config_file = tmp_supervisor_data / "dns" / "coredns.json"
     assert not config_file.exists()

-    with patch.object(DockerAPI, "run") as run:
+    with patch.object(DockerAPI, "run", return_value=container.attrs) as run:
         await coresys.plugins.dns.start()

     run.assert_called_once()
@@ -25,13 +27,18 @@ async def test_start(coresys: CoreSys, tmp_supervisor_data: Path, path_extern):
     assert run.call_args.kwargs["dns"] is False
     assert run.call_args.kwargs["oom_score_adj"] == -300
     assert run.call_args.kwargs["mounts"] == [
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_dns.as_posix(),
             target="/config",
             read_only=False,
         ),
-        Mount(type="bind", source="/run/dbus", target="/run/dbus", read_only=True),
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/dbus",
+            target="/run/dbus",
+            read_only=True,
+        ),
     ]
     assert "volumes" not in run.call_args.kwargs

View File

@@ -1,13 +1,18 @@
 """Test Home Assistant container."""

 from ipaddress import IPv4Address
-from pathlib import Path
 from unittest.mock import ANY, MagicMock, patch

 from awesomeversion import AwesomeVersion
-from docker.types import Mount
+import pytest

 from supervisor.coresys import CoreSys
+from supervisor.docker.const import (
+    DockerMount,
+    MountBindOptions,
+    MountType,
+    PropagationMode,
+)
 from supervisor.docker.homeassistant import DockerHomeAssistant
 from supervisor.docker.manager import DockerAPI
 from supervisor.homeassistant.const import LANDINGPAGE
@@ -15,14 +20,13 @@ from supervisor.homeassistant.const import LANDINGPAGE
 from . import DEV_MOUNT


-async def test_homeassistant_start(
-    coresys: CoreSys, tmp_supervisor_data: Path, path_extern
-):
+@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
+async def test_homeassistant_start(coresys: CoreSys, container: MagicMock):
     """Test starting homeassistant."""
     coresys.homeassistant.version = AwesomeVersion("2023.8.1")

     with (
-        patch.object(DockerAPI, "run") as run,
+        patch.object(DockerAPI, "run", return_value=container.attrs) as run,
         patch.object(
             DockerHomeAssistant, "is_running", side_effect=[False, False, True]
         ),
@@ -46,57 +50,68 @@ async def test_homeassistant_start(
         "TZ": ANY,
         "SUPERVISOR_TOKEN": ANY,
         "HASSIO_TOKEN": ANY,
+        # no "HA_DUPLICATE_LOG_FILE"
     }
     assert run.call_args.kwargs["mounts"] == [
         DEV_MOUNT,
-        Mount(type="bind", source="/run/dbus", target="/run/dbus", read_only=True),
-        Mount(type="bind", source="/run/udev", target="/run/udev", read_only=True),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/dbus",
+            target="/run/dbus",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/udev",
+            target="/run/udev",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_homeassistant.as_posix(),
             target="/config",
             read_only=False,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_ssl.as_posix(),
             target="/ssl",
             read_only=True,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_share.as_posix(),
             target="/share",
             read_only=False,
-            propagation="rslave",
+            bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_media.as_posix(),
             target="/media",
             read_only=False,
-            propagation="rslave",
+            bind_options=MountBindOptions(propagation=PropagationMode.RSLAVE),
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.homeassistant.path_extern_pulse.as_posix(),
             target="/etc/pulse/client.conf",
             read_only=True,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.plugins.audio.path_extern_pulse.as_posix(),
             target="/run/audio",
             read_only=True,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.plugins.audio.path_extern_asound.as_posix(),
             target="/etc/asound.conf",
             read_only=True,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source="/etc/machine-id",
             target="/etc/machine-id",
             read_only=True,
@@ -105,14 +120,36 @@ async def test_homeassistant_start(
     assert "volumes" not in run.call_args.kwargs


-async def test_landingpage_start(
-    coresys: CoreSys, tmp_supervisor_data: Path, path_extern
+@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
+async def test_homeassistant_start_with_duplicate_log_file(
+    coresys: CoreSys, container: MagicMock
 ):
+    """Test starting homeassistant with duplicate_log_file enabled."""
+    coresys.homeassistant.version = AwesomeVersion("2025.12.0")
+    coresys.homeassistant.duplicate_log_file = True
+
+    with (
+        patch.object(DockerAPI, "run", return_value=container.attrs) as run,
+        patch.object(
+            DockerHomeAssistant, "is_running", side_effect=[False, False, True]
+        ),
+        patch("supervisor.homeassistant.core.asyncio.sleep"),
+    ):
+        await coresys.homeassistant.core.start()
+
+    run.assert_called_once()
+    env = run.call_args.kwargs["environment"]
+    assert "HA_DUPLICATE_LOG_FILE" in env
+    assert env["HA_DUPLICATE_LOG_FILE"] == "1"
+
+
+@pytest.mark.usefixtures("tmp_supervisor_data", "path_extern")
+async def test_landingpage_start(coresys: CoreSys, container: MagicMock):
     """Test starting landingpage."""
     coresys.homeassistant.version = LANDINGPAGE

     with (
-        patch.object(DockerAPI, "run") as run,
+        patch.object(DockerAPI, "run", return_value=container.attrs) as run,
         patch.object(DockerHomeAssistant, "is_running", return_value=False),
     ):
         await coresys.homeassistant.core.start()
@@ -133,19 +170,30 @@ async def test_landingpage_start(
         "TZ": ANY,
         "SUPERVISOR_TOKEN": ANY,
         "HASSIO_TOKEN": ANY,
+        # no "HA_DUPLICATE_LOG_FILE"
     }
     assert run.call_args.kwargs["mounts"] == [
         DEV_MOUNT,
-        Mount(type="bind", source="/run/dbus", target="/run/dbus", read_only=True),
-        Mount(type="bind", source="/run/udev", target="/run/udev", read_only=True),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/dbus",
+            target="/run/dbus",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
+            source="/run/udev",
+            target="/run/udev",
+            read_only=True,
+        ),
+        DockerMount(
+            type=MountType.BIND,
             source=coresys.config.path_extern_homeassistant.as_posix(),
             target="/config",
             read_only=False,
         ),
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source="/etc/machine-id",
             target="/etc/machine-id",
             read_only=True,

View File

@@ -1,6 +1,7 @@
 """Test Docker interface."""

 import asyncio
+from http import HTTPStatus
 from pathlib import Path
 from typing import Any
 from unittest.mock import ANY, AsyncMock, MagicMock, Mock, PropertyMock, call, patch
@@ -148,7 +149,7 @@ async def test_current_state(
     container_collection = MagicMock()
     container_collection.get.return_value = Container(attrs)
     with patch(
-        "supervisor.docker.manager.DockerAPI.containers",
+        "supervisor.docker.manager.DockerAPI.containers_legacy",
         new=PropertyMock(return_value=container_collection),
     ):
         assert await coresys.homeassistant.core.instance.current_state() == expected
@@ -158,7 +159,7 @@ async def test_current_state_failures(coresys: CoreSys):
     """Test failure states for current state."""
     container_collection = MagicMock()
     with patch(
-        "supervisor.docker.manager.DockerAPI.containers",
+        "supervisor.docker.manager.DockerAPI.containers_legacy",
         new=PropertyMock(return_value=container_collection),
     ):
         container_collection.get.side_effect = NotFound("dne")
@@ -211,7 +212,7 @@ async def test_attach_existing_container(
     container_collection.get.return_value = Container(attrs)
     with (
         patch(
-            "supervisor.docker.manager.DockerAPI.containers",
+            "supervisor.docker.manager.DockerAPI.containers_legacy",
             new=PropertyMock(return_value=container_collection),
         ),
         patch.object(type(coresys.bus), "fire_event") as fire_event,
@@ -253,7 +254,7 @@ async def test_attach_existing_container(
 async def test_attach_container_failure(coresys: CoreSys):
     """Test attach fails to find container but finds image."""
-    coresys.docker.containers.get.side_effect = DockerException()
+    coresys.docker.containers_legacy.get.side_effect = DockerException()
     coresys.docker.images.inspect.return_value.setdefault("Config", {})["Image"] = (
         "sha256:abc123"
     )
@@ -271,7 +272,7 @@ async def test_attach_container_failure(coresys: CoreSys):
 async def test_attach_total_failure(coresys: CoreSys):
     """Test attach fails to find container or image."""
-    coresys.docker.containers.get.side_effect = DockerException
+    coresys.docker.containers_legacy.get.side_effect = DockerException
     coresys.docker.images.inspect.side_effect = aiodocker.DockerError(
         400, {"message": ""}
     )
@@ -304,8 +305,10 @@ async def test_run_missing_image(
tmp_supervisor_data: Path, tmp_supervisor_data: Path,
): ):
"""Test run captures the exception when image is missing.""" """Test run captures the exception when image is missing."""
coresys.docker.containers.create.side_effect = [NotFound("missing"), MagicMock()] coresys.docker.containers.create.side_effect = [
container.status = "stopped" aiodocker.DockerError(HTTPStatus.NOT_FOUND, {"message": "missing"}),
MagicMock(),
]
install_addon_ssh.data["image"] = "test_image" install_addon_ssh.data["image"] = "test_image"
with pytest.raises(DockerNotFound): with pytest.raises(DockerNotFound):


@@ -4,6 +4,7 @@ import asyncio
 from pathlib import Path
 from unittest.mock import AsyncMock, MagicMock, patch

+from aiodocker.containers import DockerContainer
 from docker.errors import APIError, DockerException, NotFound
 import pytest
 from requests import RequestException
@@ -139,40 +140,38 @@ async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
     assert result.output == b"output"

-async def test_run_container_with_cidfile(
-    coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
-):
+@pytest.mark.usefixtures("path_extern", "tmp_supervisor_data")
+async def test_run_container_with_cidfile(coresys: CoreSys, docker: DockerAPI):
     """Test container creation with cidfile and bind mount."""
     # Mock container
-    mock_container = MagicMock()
-    mock_container.id = "test_container_id_12345"
+    mock_container = MagicMock(spec=DockerContainer, id="test_container_id_12345")
+    mock_container.show.return_value = mock_metadata = {"Id": mock_container.id}

     container_name = "test_container"
     cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
     extern_cidfile_path = coresys.config.path_extern_cid_files / f"{container_name}.cid"

-    docker.dockerpy.containers.run.return_value = mock_container
+    docker.containers.create.return_value = mock_container

     # Mock container creation
     with patch.object(
         docker.containers, "create", return_value=mock_container
     ) as create_mock:
         # Execute run with a container name
-        loop = asyncio.get_event_loop()
-        result = await loop.run_in_executor(
-            None,
-            lambda kwrgs: docker.run(**kwrgs),
-            {"image": "test_image", "tag": "latest", "name": container_name},
-        )
+        result = await docker.run("test_image", tag="latest", name=container_name)

     # Check the container creation parameters
     create_mock.assert_called_once()
-    kwargs = create_mock.call_args[1]
-    assert "volumes" in kwargs
-    assert str(extern_cidfile_path) in kwargs["volumes"]
-    assert kwargs["volumes"][str(extern_cidfile_path)]["bind"] == "/run/cid"
-    assert kwargs["volumes"][str(extern_cidfile_path)]["mode"] == "ro"
+    create_config = create_mock.call_args.args[0]
+    assert "HostConfig" in create_config
+    assert "Mounts" in create_config["HostConfig"]
+    assert {
+        "Type": "bind",
+        "Source": str(extern_cidfile_path),
+        "Target": "/run/cid",
+        "ReadOnly": True,
+    } in create_config["HostConfig"]["Mounts"]

     # Verify container start was called
     mock_container.start.assert_called_once()
@@ -181,16 +180,15 @@ async def test_run_container_with_cidfile(
     assert cidfile_path.exists()
     assert cidfile_path.read_text() == mock_container.id

-    assert result == mock_container
+    assert result == mock_metadata

-async def test_run_container_with_leftover_cidfile(
-    coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
-):
+@pytest.mark.usefixtures("path_extern", "tmp_supervisor_data")
+async def test_run_container_with_leftover_cidfile(coresys: CoreSys, docker: DockerAPI):
     """Test container creation removes leftover cidfile before creating new one."""
     # Mock container
-    mock_container = MagicMock()
-    mock_container.id = "test_container_id_new"
+    mock_container = MagicMock(spec=DockerContainer, id="test_container_id_new")
+    mock_container.show.return_value = mock_metadata = {"Id": mock_container.id}

     container_name = "test_container"
     cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
@@ -203,12 +201,7 @@ async def test_run_container_with_leftover_cidfile(
         docker.containers, "create", return_value=mock_container
     ) as create_mock:
         # Execute run with a container name
-        loop = asyncio.get_event_loop()
-        result = await loop.run_in_executor(
-            None,
-            lambda kwrgs: docker.run(**kwrgs),
-            {"image": "test_image", "tag": "latest", "name": container_name},
-        )
+        result = await docker.run("test_image", tag="latest", name=container_name)

     # Verify container was created
     create_mock.assert_called_once()
@@ -217,7 +210,7 @@ async def test_run_container_with_leftover_cidfile(
     assert cidfile_path.exists()
     assert cidfile_path.read_text() == mock_container.id

-    assert result == mock_container
+    assert result == mock_metadata

 async def test_stop_container_with_cidfile_cleanup(
@@ -236,7 +229,7 @@ async def test_stop_container_with_cidfile_cleanup(
     # Mock the containers.get method and cidfile cleanup
     with (
-        patch.object(docker.containers, "get", return_value=mock_container),
+        patch.object(docker.containers_legacy, "get", return_value=mock_container),
     ):
         # Call stop_container with remove_container=True
         loop = asyncio.get_event_loop()
@@ -263,7 +256,7 @@ async def test_stop_container_without_removal_no_cidfile_cleanup(docker: DockerAPI):
     # Mock the containers.get method and cidfile cleanup
     with (
-        patch.object(docker.containers, "get", return_value=mock_container),
+        patch.object(docker.containers_legacy, "get", return_value=mock_container),
         patch("pathlib.Path.unlink") as mock_unlink,
     ):
         # Call stop_container with remove_container=False
@@ -277,9 +270,8 @@ async def test_stop_container_without_removal_no_cidfile_cleanup(docker: DockerAPI):
     mock_unlink.assert_not_called()

-async def test_cidfile_cleanup_handles_oserror(
-    coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
-):
+@pytest.mark.usefixtures("path_extern", "tmp_supervisor_data")
+async def test_cidfile_cleanup_handles_oserror(coresys: CoreSys, docker: DockerAPI):
     """Test that cidfile cleanup handles OSError gracefully."""
     # Mock container
     mock_container = MagicMock()
@@ -293,7 +285,7 @@ async def test_cidfile_cleanup_handles_oserror(
     # Mock the containers.get method and cidfile cleanup to raise OSError
     with (
-        patch.object(docker.containers, "get", return_value=mock_container),
+        patch.object(docker.containers_legacy, "get", return_value=mock_container),
         patch("pathlib.Path.is_dir", return_value=False),
         patch("pathlib.Path.is_file", return_value=True),
         patch(
@@ -311,8 +303,9 @@ async def test_cidfile_cleanup_handles_oserror(
     mock_unlink.assert_called_once_with(missing_ok=True)

+@pytest.mark.usefixtures("path_extern", "tmp_supervisor_data")
 async def test_run_container_with_leftover_cidfile_directory(
-    coresys: CoreSys, docker: DockerAPI, path_extern, tmp_supervisor_data
+    coresys: CoreSys, docker: DockerAPI
 ):
     """Test container creation removes leftover cidfile directory before creating new one.
@@ -321,8 +314,8 @@ async def test_run_container_with_leftover_cidfile_directory(
     the bind mount source as a directory.
     """
     # Mock container
-    mock_container = MagicMock()
-    mock_container.id = "test_container_id_new"
+    mock_container = MagicMock(spec=DockerContainer, id="test_container_id_new")
+    mock_container.show.return_value = mock_metadata = {"Id": mock_container.id}

     container_name = "test_container"
     cidfile_path = coresys.config.path_cid_files / f"{container_name}.cid"
@@ -336,12 +329,7 @@ async def test_run_container_with_leftover_cidfile_directory(
         docker.containers, "create", return_value=mock_container
     ) as create_mock:
         # Execute run with a container name
-        loop = asyncio.get_event_loop()
-        result = await loop.run_in_executor(
-            None,
-            lambda kwrgs: docker.run(**kwrgs),
-            {"image": "test_image", "tag": "latest", "name": container_name},
-        )
+        result = await docker.run("test_image", tag="latest", name=container_name)

     # Verify container was created
     create_mock.assert_called_once()
@@ -351,7 +339,7 @@ async def test_run_container_with_leftover_cidfile_directory(
     assert cidfile_path.is_file()
     assert cidfile_path.read_text() == mock_container.id

-    assert result == mock_container
+    assert result == mock_metadata

 async def test_repair(coresys: CoreSys, caplog: pytest.LogCaptureFixture):
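(Editor's note: the rewritten assertions above no longer inspect docker-py keyword arguments such as `volumes` but the raw Docker Engine container-create payload, where bind mounts live under `HostConfig.Mounts`. A minimal sketch of that translation; `volumes_to_mounts` is a hypothetical helper, not Supervisor's actual code:)

```python
def volumes_to_mounts(volumes: dict[str, dict[str, str]]) -> list[dict]:
    """Translate a docker-py style volumes mapping into the Docker Engine
    create-payload shape asserted in the tests above."""
    return [
        {
            "Type": "bind",
            "Source": source,
            "Target": opts["bind"],
            "ReadOnly": opts.get("mode") == "ro",
        }
        for source, opts in volumes.items()
    ]

# Build the kind of config dict aiodocker's containers.create() accepts.
create_config = {
    "Image": "test_image:latest",
    "HostConfig": {
        "Mounts": volumes_to_mounts(
            {"/tmp/test_container.cid": {"bind": "/run/cid", "mode": "ro"}}
        )
    },
}

assert {
    "Type": "bind",
    "Source": "/tmp/test_container.cid",
    "Target": "/run/cid",
    "ReadOnly": True,
} in create_config["HostConfig"]["Mounts"]
```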


@@ -120,7 +120,7 @@ async def test_unlabeled_container(coresys: CoreSys):
         }
     )
     with patch(
-        "supervisor.docker.manager.DockerAPI.containers",
+        "supervisor.docker.manager.DockerAPI.containers_legacy",
         new=PropertyMock(return_value=container_collection),
     ):
         await coresys.homeassistant.core.instance.attach(AwesomeVersion("2022.7.3"))


@@ -1,17 +1,16 @@
 """Test Observer plugin container."""

 from ipaddress import IPv4Address, ip_network
-from unittest.mock import patch
+from unittest.mock import MagicMock, patch

-from docker.types import Mount
 from supervisor.coresys import CoreSys
+from supervisor.docker.const import DockerMount, MountType
 from supervisor.docker.manager import DockerAPI

-async def test_start(coresys: CoreSys):
+async def test_start(coresys: CoreSys, container: MagicMock):
     """Test starting observer plugin."""
-    with patch.object(DockerAPI, "run") as run:
+    with patch.object(DockerAPI, "run", return_value=container.attrs) as run:
         await coresys.plugins.observer.start()

     run.assert_called_once()
@@ -28,8 +27,8 @@ async def test_start(coresys: CoreSys):
     )
     assert run.call_args.kwargs["ports"] == {"80/tcp": 4357}
     assert run.call_args.kwargs["mounts"] == [
-        Mount(
-            type="bind",
+        DockerMount(
+            type=MountType.BIND,
             source="/run/docker.sock",
             target="/run/docker.sock",
             read_only=True,


@@ -238,6 +238,7 @@ async def test_install_other_error(
 @pytest.mark.usefixtures("path_extern")
 async def test_start(
     coresys: CoreSys,
+    container: MagicMock,
     container_exc: DockerException | None,
     image_exc: aiodocker.DockerError | None,
     remove_calls: list[call],
@@ -245,8 +246,8 @@ async def test_start(
     """Test starting Home Assistant."""
     coresys.docker.images.inspect.return_value = {"Id": "123"}
     coresys.docker.images.inspect.side_effect = image_exc
-    coresys.docker.containers.get.return_value.id = "123"
-    coresys.docker.containers.get.side_effect = container_exc
+    coresys.docker.containers_legacy.get.return_value.id = "123"
+    coresys.docker.containers_legacy.get.side_effect = container_exc

     with (
         patch.object(
@@ -254,7 +255,7 @@ async def test_start(
             "version",
             new=PropertyMock(return_value=AwesomeVersion("2023.7.0")),
         ),
-        patch.object(DockerAPI, "run") as run,
+        patch.object(DockerAPI, "run", return_value=container.attrs) as run,
         patch.object(HomeAssistantCore, "_block_till_run") as block_till_run,
     ):
         await coresys.homeassistant.core.start()
@@ -268,17 +269,18 @@ async def test_start(
     assert run.call_args.kwargs["name"] == "homeassistant"
     assert run.call_args.kwargs["hostname"] == "homeassistant"

-    coresys.docker.containers.get.return_value.stop.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.stop.assert_not_called()
     assert (
-        coresys.docker.containers.get.return_value.remove.call_args_list == remove_calls
+        coresys.docker.containers_legacy.get.return_value.remove.call_args_list
+        == remove_calls
     )

 async def test_start_existing_container(coresys: CoreSys, path_extern):
     """Test starting Home Assistant when container exists and is viable."""
     coresys.docker.images.inspect.return_value = {"Id": "123"}
-    coresys.docker.containers.get.return_value.image.id = "123"
-    coresys.docker.containers.get.return_value.status = "exited"
+    coresys.docker.containers_legacy.get.return_value.image.id = "123"
+    coresys.docker.containers_legacy.get.return_value.status = "exited"

     with (
         patch.object(
@@ -291,29 +293,29 @@ async def test_start_existing_container(coresys: CoreSys, path_extern):
         await coresys.homeassistant.core.start()

     block_till_run.assert_called_once()
-    coresys.docker.containers.get.return_value.start.assert_called_once()
-    coresys.docker.containers.get.return_value.stop.assert_not_called()
-    coresys.docker.containers.get.return_value.remove.assert_not_called()
-    coresys.docker.containers.get.return_value.run.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.start.assert_called_once()
+    coresys.docker.containers_legacy.get.return_value.stop.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.remove.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.run.assert_not_called()

 @pytest.mark.parametrize("exists", [True, False])
 async def test_stop(coresys: CoreSys, exists: bool):
     """Test stopping Home Assistant."""
     if exists:
-        coresys.docker.containers.get.return_value.status = "running"
+        coresys.docker.containers_legacy.get.return_value.status = "running"
     else:
-        coresys.docker.containers.get.side_effect = NotFound("missing")
+        coresys.docker.containers_legacy.get.side_effect = NotFound("missing")

     await coresys.homeassistant.core.stop()

-    coresys.docker.containers.get.return_value.remove.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.remove.assert_not_called()
     if exists:
-        coresys.docker.containers.get.return_value.stop.assert_called_once_with(
+        coresys.docker.containers_legacy.get.return_value.stop.assert_called_once_with(
             timeout=260
         )
     else:
-        coresys.docker.containers.get.return_value.stop.assert_not_called()
+        coresys.docker.containers_legacy.get.return_value.stop.assert_not_called()

 async def test_restart(coresys: CoreSys):
@@ -322,18 +324,20 @@ async def test_restart(coresys: CoreSys):
         await coresys.homeassistant.core.restart()

     block_till_run.assert_called_once()
-    coresys.docker.containers.get.return_value.restart.assert_called_once_with(
+    coresys.docker.containers_legacy.get.return_value.restart.assert_called_once_with(
         timeout=260
     )
-    coresys.docker.containers.get.return_value.stop.assert_not_called()
+    coresys.docker.containers_legacy.get.return_value.stop.assert_not_called()

 @pytest.mark.parametrize("get_error", [NotFound("missing"), DockerException(), None])
 async def test_restart_failures(coresys: CoreSys, get_error: DockerException | None):
     """Test restart fails when container missing or can't be restarted."""
-    coresys.docker.containers.get.return_value.restart.side_effect = DockerException()
+    coresys.docker.containers_legacy.get.return_value.restart.side_effect = (
+        DockerException()
+    )
     if get_error:
-        coresys.docker.containers.get.side_effect = get_error
+        coresys.docker.containers_legacy.get.side_effect = get_error

     with pytest.raises(HomeAssistantError):
         await coresys.homeassistant.core.restart()
@@ -352,10 +356,12 @@ async def test_stats_failures(
     coresys: CoreSys, get_error: DockerException | None, status: str
 ):
     """Test errors when getting stats."""
-    coresys.docker.containers.get.return_value.status = status
-    coresys.docker.containers.get.return_value.stats.side_effect = DockerException()
+    coresys.docker.containers_legacy.get.return_value.status = status
+    coresys.docker.containers_legacy.get.return_value.stats.side_effect = (
+        DockerException()
+    )
     if get_error:
-        coresys.docker.containers.get.side_effect = get_error
+        coresys.docker.containers_legacy.get.side_effect = get_error

     with pytest.raises(HomeAssistantError):
         await coresys.homeassistant.core.stats()
@@ -387,7 +393,7 @@ async def test_api_check_timeout(
     ):
         await coresys.homeassistant.core.start()

-    assert coresys.homeassistant.api.get_api_state.call_count == 3
+    assert coresys.homeassistant.api.get_api_state.call_count == 10
     assert (
         "No Home Assistant Core response, assuming a fatal startup error" in caplog.text
     )


@@ -1,7 +1,6 @@
 """Test base plugin functionality."""

 import asyncio
-from pathlib import Path
 from unittest.mock import ANY, MagicMock, Mock, PropertyMock, call, patch

 from awesomeversion import AwesomeVersion
@@ -159,15 +158,13 @@ async def test_plugin_watchdog(coresys: CoreSys, plugin: PluginBase) -> None:
     ],
     indirect=["plugin"],
 )
+@pytest.mark.usefixtures("coresys", "tmp_supervisor_data", "path_extern")
 async def test_plugin_watchdog_max_failed_attempts(
-    coresys: CoreSys,
     capture_exception: Mock,
     plugin: PluginBase,
     error: PluginError,
     container: MagicMock,
     caplog: pytest.LogCaptureFixture,
-    tmp_supervisor_data: Path,
-    path_extern,
 ) -> None:
     """Test plugin watchdog gives up after max failed attempts."""
     with patch.object(type(plugin.instance), "attach"):


@@ -76,7 +76,7 @@ async def test_check(
     docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon, folder: str
 ):
     """Test check reports issue when containers have incorrect config."""
-    docker.containers.get = _make_mock_container_get(
+    docker.containers_legacy.get = _make_mock_container_get(
         ["homeassistant", "hassio_audio", "addon_local_ssh"], folder
     )
     # Use state used in setup()
@@ -132,7 +132,7 @@ async def test_check(
     assert await docker_config.approve_check()

     # If the config issue is resolved, all issues are removed except the main one, which is removed if the check isn't approved
-    docker.containers.get = _make_mock_container_get([])
+    docker.containers_legacy.get = _make_mock_container_get([])
     with patch.object(DockerInterface, "is_running", return_value=True):
         await coresys.plugins.load()
         await coresys.homeassistant.load()
@@ -159,7 +159,7 @@ async def test_addon_volume_mount_not_flagged(
     ]  # No media/share

     # Mock container that has VOLUME mount to media/share with wrong propagation
-    docker.containers.get = _make_mock_container_get_with_volume_mount(
+    docker.containers_legacy.get = _make_mock_container_get_with_volume_mount(
         ["addon_local_ssh"], folder
     )
@@ -221,7 +221,7 @@ async def test_addon_configured_mount_still_flagged(
             out.attrs["Mounts"].append(mount)
         return out

-    docker.containers.get = mock_container_get
+    docker.containers_legacy.get = mock_container_get

     await coresys.core.set_state(CoreState.SETUP)
     with patch.object(DockerInterface, "is_running", return_value=True):
@@ -275,7 +275,7 @@ async def test_addon_custom_target_path_flagged(
             out.attrs["Mounts"].append(mount)
         return out

-    docker.containers.get = mock_container_get
+    docker.containers_legacy.get = mock_container_get

     await coresys.core.set_state(CoreState.SETUP)
     with patch.object(DockerInterface, "is_running", return_value=True):


@@ -30,7 +30,7 @@ async def test_evaluation(coresys: CoreSys):
     assert container.reason not in coresys.resolution.unsupported
     assert UnhealthyReason.DOCKER not in coresys.resolution.unhealthy

-    coresys.docker.containers.list.return_value = [
+    coresys.docker.containers_legacy.list.return_value = [
         _make_image_attr("armhfbuild/watchtower:latest"),
         _make_image_attr("concerco/watchtowerv6:10.0.2"),
         _make_image_attr("containrrr/watchtower:1.1"),
@@ -47,7 +47,7 @@ async def test_evaluation(coresys: CoreSys):
         "pyouroboros/ouroboros:1.4.3",
     }

-    coresys.docker.containers.list.return_value = []
+    coresys.docker.containers_legacy.list.return_value = []
     await container()
     assert container.reason not in coresys.resolution.unsupported
@@ -62,7 +62,7 @@ async def test_corrupt_docker(coresys: CoreSys):
     corrupt_docker = Issue(IssueType.CORRUPT_DOCKER, ContextType.SYSTEM)
     assert corrupt_docker not in coresys.resolution.issues

-    coresys.docker.containers.list.side_effect = DockerException
+    coresys.docker.containers_legacy.list.side_effect = DockerException
     await container()
     assert corrupt_docker in coresys.resolution.issues


@@ -33,7 +33,7 @@ async def test_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
         meta.attrs = observer_attrs if name == "hassio_observer" else addon_attrs
         return meta

-    coresys.docker.containers.get = get_container
+    coresys.docker.containers_legacy.get = get_container
     await coresys.plugins.observer.instance.attach(TEST_VERSION)
     await install_addon_ssh.instance.attach(TEST_VERSION)


@@ -31,7 +31,7 @@ async def _mock_wait_for_container() -> None:
 async def test_fixup(docker: DockerAPI, coresys: CoreSys, install_addon_ssh: Addon):
     """Test fixup rebuilds addon's container."""
-    docker.containers.get = make_mock_container_get("running")
+    docker.containers_legacy.get = make_mock_container_get("running")

     addon_execute_rebuild = FixupAddonExecuteRebuild(coresys)
@@ -61,7 +61,7 @@ async def test_fixup_stopped_core(
 ):
     """Test fixup just removes addon's container when it is stopped."""
     caplog.clear()
-    docker.containers.get = make_mock_container_get("stopped")
+    docker.containers_legacy.get = make_mock_container_get("stopped")

     addon_execute_rebuild = FixupAddonExecuteRebuild(coresys)
     coresys.resolution.create_issue(
@@ -76,7 +76,7 @@ async def test_fixup_stopped_core(
     assert not coresys.resolution.issues
     assert not coresys.resolution.suggestions
-    docker.containers.get("addon_local_ssh").remove.assert_called_once_with(
+    docker.containers_legacy.get("addon_local_ssh").remove.assert_called_once_with(
         force=True, v=True
     )
     assert "Addon local_ssh is stopped" in caplog.text
@@ -90,7 +90,7 @@ async def test_fixup_unknown_core(
 ):
     """Test fixup does nothing if addon's container has already been removed."""
     caplog.clear()
-    docker.containers.get.side_effect = NotFound("")
+    docker.containers_legacy.get.side_effect = NotFound("")

     addon_execute_rebuild = FixupAddonExecuteRebuild(coresys)
     coresys.resolution.create_issue(


@@ -27,7 +27,7 @@ def make_mock_container_get(status: str):
 async def test_fixup(docker: DockerAPI, coresys: CoreSys):
     """Test fixup rebuilds core's container."""
-    docker.containers.get = make_mock_container_get("running")
+    docker.containers_legacy.get = make_mock_container_get("running")

     core_execute_rebuild = FixupCoreExecuteRebuild(coresys)
@@ -51,7 +51,7 @@ async def test_fixup_stopped_core(
 ):
     """Test fixup just removes HA's container when it is stopped."""
     caplog.clear()
-    docker.containers.get = make_mock_container_get("stopped")
+    docker.containers_legacy.get = make_mock_container_get("stopped")

     core_execute_rebuild = FixupCoreExecuteRebuild(coresys)
     coresys.resolution.create_issue(
@@ -65,7 +65,7 @@ async def test_fixup_stopped_core(
     assert not coresys.resolution.issues
     assert not coresys.resolution.suggestions
-    docker.containers.get("homeassistant").remove.assert_called_once_with(
+    docker.containers_legacy.get("homeassistant").remove.assert_called_once_with(
         force=True, v=True
     )
     assert "Home Assistant is stopped" in caplog.text
@@ -76,7 +76,7 @@ async def test_fixup_unknown_core(
 ):
     """Test fixup does nothing if core's container has already been removed."""
     caplog.clear()
-    docker.containers.get.side_effect = NotFound("")
+    docker.containers_legacy.get.side_effect = NotFound("")

     core_execute_rebuild = FixupCoreExecuteRebuild(coresys)
     coresys.resolution.create_issue(


@@ -28,7 +28,7 @@ def make_mock_container_get(status: str):
 @pytest.mark.parametrize("status", ["running", "stopped"])
 async def test_fixup(docker: DockerAPI, coresys: CoreSys, status: str):
     """Test fixup rebuilds plugin's container regardless of current state."""
-    docker.containers.get = make_mock_container_get(status)
+    docker.containers_legacy.get = make_mock_container_get(status)

     plugin_execute_rebuild = FixupPluginExecuteRebuild(coresys)
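(Editor's note: the common thread across all the files in this compare view is that docker-py's blocking container collection has been renamed to `containers_legacy`, while `containers` now refers to the aiodocker accessor. A minimal sketch of what the tests now assume, using a hypothetical `DockerAPIStub` rather than Supervisor's real `DockerAPI`:)

```python
from unittest.mock import MagicMock

class DockerAPIStub:
    """Hypothetical stand-in for Supervisor's DockerAPI after the migration:
    `containers` is the async aiodocker manager (create() returns objects whose
    show() yields inspect-style dicts); `containers_legacy` keeps the blocking
    docker-py ContainerCollection used by code not yet migrated."""

    def __init__(self) -> None:
        self.containers = MagicMock(name="aiodocker.containers")
        self.containers_legacy = MagicMock(name="dockerpy.containers")

docker = DockerAPIStub()

# Tests that previously patched `DockerAPI.containers` must now target
# `DockerAPI.containers_legacy` to keep exercising the docker-py code path.
docker.containers_legacy.get.return_value.status = "running"
assert docker.containers_legacy.get("addon_local_ssh").status == "running"
```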