Compare commits


6 Commits

Author SHA1 Message Date
Stefan Agner
6e6d0d3700 Skip system-checks fan-out in test_events_on_issue_changes
The test asserts that apply_suggestion fires an ISSUE_REMOVED event.
ISSUE_REMOVED is fired by dismiss_issue inside FixupBase.__call__, before
apply_suggestion calls healthcheck. The healthcheck call afterwards is
incidental to this test's intent, but it fans out into check_system(),
which runs CheckDNSServer (A and AAAA): real aiodns query_dns() probes
against the NetworkManager mock's stub nameserver 192.168.30.1, each
hitting the default ~10 s aiodns timeout. The file took ~21 s to run.

The slowness has been latent since #3818 (Aug 2022), which added the
apply_suggestion step at the end of test_events_on_issue_changes two
days after the DNS check landed in its current form (#3811). The default
24 h JobThrottle on CheckDNSServer.run_check tends to mask the cost in
full-suite runs once any earlier test has tripped the throttle, which is
likely why this slipped through.

Mock coresys.resolution.healthcheck for just this one apply_suggestion
call rather than introducing a file-wide DNS mock. The patch is local to
the slow call site and the test's assertion is unaffected. The file
drops from ~21 s to ~2.5 s.
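
The shape of the fix can be sketched against a toy stand-in for the
resolution manager (Resolution, apply_suggestion, and the event list
below are illustrative, not Supervisor's real API):

```python
import asyncio
from unittest.mock import AsyncMock, patch


class Resolution:
    """Toy stand-in for coresys.resolution."""

    async def healthcheck(self):
        # stands in for the slow check_system()/CheckDNSServer fan-out
        await asyncio.sleep(10)


async def apply_suggestion(resolution):
    # the event fires before healthcheck runs, so mocking healthcheck
    # cannot affect the event assertion
    events = ["issue_removed"]
    await resolution.healthcheck()
    return events


async def main():
    resolution = Resolution()
    # patch only this call site, not a file-wide DNS mock
    with patch.object(resolution, "healthcheck", new_callable=AsyncMock) as mock_hc:
        events = await apply_suggestion(resolution)
    mock_hc.assert_awaited_once()
    return events


print(asyncio.run(main()))  # ['issue_removed']
```

With the AsyncMock in place the 10 s sleep is never awaited, mirroring
how the test drops from ~21 s to ~2.5 s.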

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 19:47:43 +02:00
dependabot[bot]
287aee22e6 Bump cryptography from 46.0.7 to 47.0.0 (#6774)
Bumps [cryptography](https://github.com/pyca/cryptography) from 46.0.7 to 47.0.0.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/46.0.7...47.0.0)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-version: 47.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-27 14:25:24 +02:00
dependabot[bot]
71c2200c59 Bump ruff from 0.15.11 to 0.15.12 (#6773)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-27 12:47:46 +02:00
Stefan Agner
61ca2524b2 Return proper API errors for mqtt/mysql service conflicts (#6767)
* Return proper API errors for mqtt/mysql service conflicts

After #6739 added unexpected-error logging and Sentry capture to the
api_process wrappers, SUPERVISOR-1JTQ and SUPERVISOR-1JWM surfaced as
user-triggered service conflicts that were being treated as unexpected
errors:

- POST /services/{mqtt,mysql} when another app already provides the
  service.
- DELETE /services/{mqtt,mysql} when no app currently provides it.

Both paths raised a generic ServicesError, which the API layer turned
into an opaque HTTP 400 without a translation key, and which #6739 now
also logs and captures via Sentry.

Introduce ServiceAlreadyProvidedError (409 Conflict) and
ServiceNotProvidedError (404 Not Found) as new-style API exceptions with
translation keys and extra_fields, plus a shared APIConflict base class
for future 409 responses. The mqtt and mysql service modules now raise
these instead, so the API returns structured, translatable responses and
these expected user conflicts stop being captured as bugs.
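
The resulting response shape can be sketched from the
message_template/extra_fields pair; the render() helper below is
hypothetical glue, not Supervisor's API layer, and the field values come
from this PR's tests:

```python
# template, error_key, and 409 status as introduced by this commit;
# render() is a hypothetical illustration of how the API layer combines them
MESSAGE_TEMPLATE = "The {service} service is already provided by {app}"
ERROR_KEY = "service_already_provided_error"
STATUS = 409  # from the new APIConflict base class


def render(template, error_key, status, **fields):
    return {
        "result": "error",
        "error_key": error_key,
        "extra_fields": fields,
        "message": template.format(**fields),
        "status": status,
    }


body = render(MESSAGE_TEMPLATE, ERROR_KEY, STATUS, service="mqtt", app="core_mosquitto")
assert body["message"] == "The mqtt service is already provided by core_mosquitto"
```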

Fixes SUPERVISOR-1JTQ
Fixes SUPERVISOR-1JWM

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Don't log handled errors verbosely

Missing or already-provided service data are well-handled errors with
clear API responses. The client is expected to handle them, so there is
no need to log verbosely.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 21:56:12 +02:00
Stefan Agner
4938fb215d Improve Docker port-in-use detection and handling (#6766)
Triaging SUPERVISOR-1JWK turned up a missed port conflict:
RE_PORT_CONFLICT_ERROR only matched one of the Docker daemon's
port-in-use message shapes. The two variants produced by current moby
— "Bind for <ip>:<port> failed: port is already allocated" from
portallocator and "failed to bind host port <ip>:<port>/<proto>:
address already in use" from osallocator — fell through to
DockerAPIError, got re-raised as AppUnknownError, and the watchdog
shipped them to Sentry as unknown errors.

Widen the regex to match all known shapes (including the older form
embedding the container endpoint, still observed from older daemons
and wrappers), anchored on the "failed to set up container networking"
prefix and one of the "address already in use" or "port is already
allocated" suffixes. Log the raw Docker message at debug level before
converting, so curious users can still see the exact upstream text
(host IP, container endpoint, protocol) when investigating which
process is holding the port.
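
The widened pattern and the three message shapes it must cover (regex
and messages reproduced from this PR's diff and tests) can be exercised
directly:

```python
import re

# widened RE_PORT_CONFLICT_ERROR exactly as added by this commit
RE_PORT_CONFLICT_ERROR = re.compile(
    r"^failed to set up container networking: .*"
    r"0\.0\.0\.0:(\d+).*"
    r"(?:address already in use|port is already allocated)$"
)

PREFIX = (
    "failed to set up container networking: driver failed programming "
    "external connectivity on endpoint addon_local_ssh "
    "(ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): "
)
MESSAGES = [
    # older form embedding the container endpoint
    PREFIX + "failed to bind host port for 0.0.0.0:2222:172.30.33.4:22/tcp: address already in use",
    # moby portallocator
    PREFIX + "Bind for 0.0.0.0:2222 failed: port is already allocated",
    # moby osallocator
    PREFIX + "failed to bind host port 0.0.0.0:2222/tcp: address already in use",
]

for message in MESSAGES:
    match = RE_PORT_CONFLICT_ERROR.match(message)
    # all three shapes match, and the host port is always group(1)
    assert match is not None and int(match.group(1)) == 2222
```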

The watchdog's _restart_after_problem now catches AppPortConflict
explicitly ahead of the generic AppsError handler: log a warning,
break the retry loop, do not call async_capture_exception. A port
conflict is an environment condition — another process grabbed the
port while the add-on was down — so retrying cannot make it succeed
and reporting to Sentry is noise.
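
The ordering matters because Python takes the first matching except
clause. A minimal sketch of the retry loop, assuming AppPortConflict
subclasses AppsError as the handler layout implies (class names and loop
body are illustrative):

```python
class AppsError(Exception):
    """Assumed generic base for app errors."""


class AppPortConflict(AppsError):
    """Assumed typed port-conflict subclass."""


def restart_after_problem(start, max_attempts=3):
    """Toy analogue of the watchdog retry loop."""
    attempts = 0
    while attempts < max_attempts:
        try:
            start()
            return attempts  # started successfully
        except AppPortConflict:
            # environment condition: retrying cannot succeed, and it is
            # not a bug, so no async_capture_exception either
            break
        except AppsError:
            attempts += 1  # transient failure: keep retrying
    return attempts


def port_in_use():
    raise AppPortConflict("port 2222 already in use")


def flaky():
    raise AppsError("transient")


assert restart_after_problem(port_in_use) == 0  # gave up immediately
assert restart_after_problem(flaky) == 3        # exhausted retries
```

Listing the generic AppsError handler first would swallow the subclass,
which is exactly the ordering bug the commit avoids.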

With port conflicts now raised as typed APIError subclasses at the
detection site, the DockerAPIError → format_message() rewrite fallback
in api_return_error has no work left. Drop the fallback and delete
supervisor/utils/log_format.py along with its tests; the module only
ever handled port-conflict prose.

Fixes SUPERVISOR-1JWK

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 21:55:18 +02:00
Stefan Agner
2011633946 Reuse check_exception_chain in Sentry filter, tighten its types (#6757)
Switch the Sentry noise filter in filter_data to call the existing
check_exception_chain helper instead of an inline loop. One shared
utility for "does the chain contain this type" matches what the
reviewer suggested and removes a bit of duplication.

While touching check_exception_chain:

- Walk __cause__ instead of __context__. __cause__ is what Python sets
  when code uses `raise B() from a`, which is the explicit "caused by"
  signal we actually want to match. __context__ can also include
  unrelated in-flight exceptions from surrounding except blocks.
  Every existing call site in Supervisor uses `raise X from err`, which
  sets both attributes, so switching is behaviour-preserving for all
  current callers.
- Replace the `Any` type of object_type with
  `type[BaseException] | tuple[type[BaseException], ...]`, which is
  what isinstance/issubclass actually accept and lets mypy catch
  misuse at the call site.
- Replace `issubclass(type(err), object_type)` with `isinstance`, which
  is the idiomatic form and honours virtual subclasses.

Review feedback from #6732.
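
The post-change behaviour can be demonstrated on a wrapped raise. The
helper body below is reproduced from this PR's diff; the two exception
classes are illustrative stand-ins:

```python
def check_exception_chain(err, object_type):
    """check_exception_chain as it reads after this change."""
    if isinstance(err, object_type):
        return True
    if err.__cause__ is None:
        return False
    return check_exception_chain(err.__cause__, object_type)


class DockerHubRateLimitExceeded(Exception):
    """Illustrative inner error."""


class SupervisorUpdateError(Exception):
    """Illustrative wrapper."""


try:
    try:
        raise DockerHubRateLimitExceeded()
    except DockerHubRateLimitExceeded as err:
        # `raise ... from err` sets __cause__, which the helper now walks
        raise SupervisorUpdateError() from err
except SupervisorUpdateError as wrapped:
    wrapped_exc = wrapped

assert check_exception_chain(wrapped_exc, DockerHubRateLimitExceeded)
# tuples work too, since isinstance accepts them
assert check_exception_chain(wrapped_exc, (OSError, DockerHubRateLimitExceeded))
assert not check_exception_chain(wrapped_exc, OSError)
```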

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 21:42:41 +02:00
16 changed files with 228 additions and 94 deletions

View File

@@ -9,7 +9,7 @@ brotli==1.2.0
ciso8601==2.3.3
colorlog==6.10.1
cpe==1.3.1
cryptography==46.0.7
cryptography==47.0.0
debugpy==1.8.20
deepmerge==2.0
dirhash==0.5.0

View File

@@ -8,7 +8,7 @@ pytest-asyncio==1.3.0
pytest-cov==7.1.0
pytest-timeout==2.4.0
pytest==9.0.3
ruff==0.15.11
ruff==0.15.12
time-machine==3.2.0
types-pyyaml==6.0.12.20260408
urllib3==2.6.3

View File

@@ -728,7 +728,7 @@ class App(AppModel):
) as req:
if req.status < 300:
return True
except (TimeoutError, aiohttp.ClientError):
except (TimeoutError, aiohttp.ClientError):
pass
return False
@@ -1621,6 +1621,11 @@ class App(AppModel):
await (await self.start())
else:
await (await self.restart())
except AppPortConflict as err:
_LOGGER.warning(
"Watchdog cannot restart app %s: %s", self.name, err
)
break
except AppsError as err:
attempts = attempts + 1
_LOGGER.error("Watchdog restart of app %s failed!", self.name)

View File

@@ -27,11 +27,10 @@ from ..const import (
RESULT_OK,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import APIError, DockerAPIError, HassioError
from ..exceptions import APIError, HassioError
from ..jobs import JobSchedulerOptions, SupervisorJob
from ..utils import check_exception_chain, get_message_from_exception_chain
from ..utils import get_message_from_exception_chain
from ..utils.json import json_dumps, json_loads as json_loads_util
from ..utils.log_format import format_message
from ..utils.sentry import async_capture_exception
from . import const
@@ -153,8 +152,6 @@ def api_return_error(
"""Return an API error message."""
if error and not message:
message = get_message_from_exception_chain(error)
if check_exception_chain(error, DockerAPIError):
message = format_message(message)
if not message:
message = "Unknown error, see Supervisor logs"

View File

@@ -68,7 +68,9 @@ MIN_SUPPORTED_DOCKER: Final = AwesomeVersion("24.0.0")
DOCKER_NETWORK_HOST: Final = "host"
RE_IMPORT_IMAGE_STREAM = re.compile(r"(^Loaded image ID: |^Loaded image: )(.+)$")
RE_PORT_CONFLICT_ERROR = re.compile(
r"^failed to set up container networking: .* failed to bind host port for 0.0.0.0:(\d+):\d+(?:\.\d+){3}:\d+\/\w+: address already in use$"
r"^failed to set up container networking: .*"
r"0\.0\.0\.0:(\d+).*"
r"(?:address already in use|port is already allocated)$"
)
@@ -588,6 +590,11 @@ class DockerAPI(CoreSysAttributes):
if err.status == HTTPStatus.INTERNAL_SERVER_ERROR and (
match := RE_PORT_CONFLICT_ERROR.match(err.message)
):
_LOGGER.debug(
"Docker port conflict starting %s: %s",
name or container.id,
err.message,
)
raise DockerContainerPortConflict(
_LOGGER.error, name=name or container.id, port=int(match.group(1))
) from err

View File

@@ -85,6 +85,12 @@ class APIGone(APIError):
status = 410
class APIConflict(APIError):
"""API conflict error."""
status = 409
class APITooManyRequests(APIError):
"""API too many requests error."""
@@ -669,6 +675,41 @@ class ServicesError(HassioError):
"""Services Errors."""
class ServiceAlreadyProvidedError(ServicesError, APIConflict):
"""Raise when a service is already provided by another app."""
error_key = "service_already_provided_error"
message_template = "The {service} service is already provided by {app}"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
service: str,
app: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"service": service, "app": app}
super().__init__(None, logger)
class ServiceNotProvidedError(ServicesError, APINotFound):
"""Raise when a service is not currently provided by any app."""
error_key = "service_not_provided_error"
message_template = "The {service} service is not currently provided by any app"
def __init__(
self,
logger: Callable[..., None] | None = None,
*,
service: str,
) -> None:
"""Initialize exception."""
self.extra_fields = {"service": service}
super().__init__(None, logger)
# utils/dbus

View File

@@ -13,6 +13,7 @@ from sentry_sdk.types import Event, Hint
from ..const import DOCKER_IPV4_NETWORK_MASK, HEADER_TOKEN, HEADER_TOKEN_OLD, CoreState
from ..coresys import CoreSys
from ..exceptions import APITooManyRequests, AppConfigurationError
from ..utils import check_exception_chain
_LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -46,21 +47,16 @@ def sanitize_url(url: str) -> str:
def filter_data(coresys: CoreSys, event: Event, hint: Hint) -> Event | None:
"""Filter event data before sending to sentry."""
# Ignore some exceptions. Walk the __cause__ chain because rate-limit
# errors are often wrapped (e.g. DockerHubRateLimitExceeded wrapped in
# SupervisorUpdateError by supervisor.update()).
# Ignore some exceptions. check_exception_chain walks __cause__ so
# wrapped rate limits (e.g. DockerHubRateLimitExceeded wrapped in
# SupervisorUpdateError via `raise X from err`) are also dropped.
if "exc_info" in hint:
_, exc_value, _ = hint["exc_info"]
err: BaseException | None = exc_value
while err is not None:
if isinstance(err, (AppConfigurationError, APITooManyRequests)):
_LOGGER.debug(
"Skipping Sentry event for %s: %s",
type(err).__name__,
exc_value,
)
return None
err = err.__cause__
if exc_value is not None and check_exception_chain(
exc_value, (AppConfigurationError, APITooManyRequests)
):
_LOGGER.debug("Skipping Sentry event for %s", type(exc_value).__name__)
return None
# Ignore issue if system is not supported or diagnostics is disabled
if not coresys.config.diagnostics or not coresys.core.supported or coresys.dev:

View File

@@ -6,7 +6,7 @@ from typing import Any
import voluptuous as vol
from ...addons.addon import App
from ...exceptions import ServicesError
from ...exceptions import ServiceAlreadyProvidedError, ServiceNotProvidedError
from ...validate import network_port
from ..const import (
ATTR_APP,
@@ -68,9 +68,10 @@ class MQTTService(ServiceInterface):
async def set_service_data(self, app: App, data: dict[str, Any]) -> None:
"""Write the data into service object."""
if self.enabled:
raise ServicesError(
f"There is already a MQTT service in use from {self._data[ATTR_APP]}",
_LOGGER.error,
raise ServiceAlreadyProvidedError(
_LOGGER.debug,
service=SERVICE_MQTT,
app=self._data[ATTR_APP],
)
self._data.update(data)
@@ -82,9 +83,7 @@ class MQTTService(ServiceInterface):
async def del_service_data(self, app: App) -> None:
"""Remove the data from service object."""
if not self.enabled:
raise ServicesError(
"Can't remove nonexistent service data", _LOGGER.warning
)
raise ServiceNotProvidedError(_LOGGER.debug, service=SERVICE_MQTT)
self._data.clear()
await self.save()

View File

@@ -6,7 +6,7 @@ from typing import Any
import voluptuous as vol
from ...addons.addon import App
from ...exceptions import ServicesError
from ...exceptions import ServiceAlreadyProvidedError, ServiceNotProvidedError
from ...validate import network_port
from ..const import (
ATTR_APP,
@@ -62,9 +62,10 @@ class MySQLService(ServiceInterface):
async def set_service_data(self, app: App, data: dict[str, Any]) -> None:
"""Write the data into service object."""
if self.enabled:
raise ServicesError(
f"There is already a MySQL service in use from {self._data[ATTR_APP]}",
_LOGGER.error,
raise ServiceAlreadyProvidedError(
_LOGGER.debug,
service=SERVICE_MYSQL,
app=self._data[ATTR_APP],
)
self._data.update(data)
@@ -76,7 +77,7 @@ class MySQLService(ServiceInterface):
async def del_service_data(self, app: App) -> None:
"""Remove the data from service object."""
if not self.enabled:
raise ServicesError("Can't remove not exists services", _LOGGER.warning)
raise ServiceNotProvidedError(_LOGGER.debug, service=SERVICE_MYSQL)
self._data.clear()
await self.save()

View File

@@ -11,7 +11,6 @@ import re
import socket
import subprocess
from tempfile import TemporaryDirectory
from typing import Any
from awesomeversion import AwesomeVersion
@@ -57,18 +56,27 @@ async def check_port(address: IPv4Address, port: int) -> bool:
return True
def check_exception_chain(err: BaseException, object_type: Any) -> bool:
"""Check if exception chain include sub exception.
def check_exception_chain(
err: BaseException,
object_type: type[BaseException] | tuple[type[BaseException], ...],
) -> bool:
"""Check if exception chain contains the target type.
It's not full recursive because we need mostly only access to the latest.
Walks the __cause__ chain, which Python sets when code explicitly chains
exceptions via `raise B() from a`. Our codebase consistently uses that
pattern for re-raises, so __cause__ reliably reflects the "caused by"
relationship we want to match. __context__ is not used because it can
include unrelated in-flight exceptions from surrounding except blocks.
Not fully recursive because we mostly only need access to the latest.
"""
if issubclass(type(err), object_type):
if isinstance(err, object_type):
return True
if not err.__context__:
if err.__cause__ is None:
return False
return check_exception_chain(err.__context__, object_type)
return check_exception_chain(err.__cause__, object_type)
def get_message_from_exception_chain(err: BaseException) -> str:

View File

@@ -1,21 +0,0 @@
"""Custom log messages."""
import logging
import re
_LOGGER: logging.Logger = logging.getLogger(__name__)
RE_BIND_FAILED = re.compile(
r".*[listen tcp|Bind for].*(?:[0-9]{1,3}\.){3}[0-9]{1,3}:(\d*).*[bind|failed]:[address already in use|port is already allocated].*"
)
def format_message(message: str) -> str:
"""Return a formatted message if it's known."""
match = RE_BIND_FAILED.match(message)
if match:
return (
f"Port '{match.group(1)}' is already in use by something else on the host."
)
return message

View File

@@ -189,6 +189,35 @@ async def test_app_watchdog(coresys: CoreSys, install_app_ssh: App) -> None:
start.assert_not_called()
async def test_watchdog_port_conflict_does_not_retry(
coresys: CoreSys,
install_app_ssh: App,
caplog: pytest.LogCaptureFixture,
) -> None:
"""Watchdog must not retry or capture when start fails with a port conflict."""
with patch.object(DockerApp, "attach"):
await install_app_ssh.load()
install_app_ssh.watchdog = True
install_app_ssh._manual_stop = False # pylint: disable=protected-access
with (
patch.object(
App, "start", side_effect=AppPortConflict(name=TEST_ADDON_SLUG, port=2222)
) as start,
patch.object(DockerApp, "current_state", return_value=ContainerState.FAILED),
patch.object(DockerApp, "stop"),
patch("supervisor.addons.addon.async_capture_exception") as capture_exception,
):
caplog.clear()
_fire_test_event(coresys, f"addon_{TEST_ADDON_SLUG}", ContainerState.FAILED)
await asyncio.sleep(0)
start.assert_called_once()
capture_exception.assert_not_called()
assert f"Watchdog cannot restart app {install_app_ssh.name}" in caplog.text
async def test_watchdog_on_stop(coresys: CoreSys, install_app_ssh: App) -> None:
"""Test app watchdog restarts app on stop if not manual."""
with patch.object(DockerApp, "attach"):
@@ -1111,6 +1140,23 @@ async def test_app_disable_boot_dismisses_boot_fail(
assert coresys.resolution.suggestions == []
@pytest.mark.parametrize(
("docker_message", "port"),
[
(
"failed to set up container networking: driver failed programming external connectivity on endpoint addon_local_ssh (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port for 0.0.0.0:2222:172.30.33.4:22/tcp: address already in use",
2222,
),
(
"failed to set up container networking: driver failed programming external connectivity on endpoint addon_local_ssh (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): Bind for 0.0.0.0:2222 failed: port is already allocated",
2222,
),
(
"failed to set up container networking: driver failed programming external connectivity on endpoint addon_local_ssh (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port 0.0.0.0:2222/tcp: address already in use",
2222,
),
],
)
@pytest.mark.usefixtures(
"container", "mock_amd64_arch_supported", "path_extern", "tmp_supervisor_data"
)
@@ -1118,12 +1164,13 @@ async def test_app_start_port_conflict_error(
coresys: CoreSys,
install_app_ssh: App,
caplog: pytest.LogCaptureFixture,
docker_message: str,
port: int,
):
"""Test port conflict error when trying to start app."""
install_app_ssh.data["image"] = "test/amd64-addon-ssh"
coresys.docker.containers.create.return_value.start.side_effect = aiodocker.DockerError(
HTTPStatus.INTERNAL_SERVER_ERROR,
"failed to set up container networking: driver failed programming external connectivity on endpoint addon_local_ssh (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port for 0.0.0.0:2222:172.30.33.4:22/tcp: address already in use",
coresys.docker.containers.create.return_value.start.side_effect = (
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, docker_message)
)
await install_app_ssh.load()
@@ -1132,12 +1179,12 @@ async def test_app_start_port_conflict_error(
patch.object(App, "write_options"),
pytest.raises(
AppPortConflict,
check=lambda exc: exc.extra_fields == {"name": "local_ssh", "port": 2222},
check=lambda exc: exc.extra_fields == {"name": "local_ssh", "port": port},
),
):
await install_app_ssh.start()
assert (
"Cannot start container addon_local_ssh because port 2222 is already in use"
f"Cannot start container addon_local_ssh because port {port} is already in use"
in caplog.text
)

View File

@@ -3,6 +3,12 @@
from aiohttp.test_utils import TestClient
import pytest
from supervisor.addons.addon import App
from supervisor.const import ATTR_SERVICES
from supervisor.coresys import CoreSys
from tests.const import TEST_ADDON_SLUG
@pytest.mark.parametrize(
("method", "url"),
@@ -14,3 +20,59 @@ async def test_service_not_found(api_client: TestClient, method: str, url: str):
assert resp.status == 404
body = await resp.json()
assert body["message"] == "Service does not exist"
@pytest.mark.parametrize("service", ["mqtt", "mysql"])
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
async def test_set_service_already_provided(
api_client: TestClient,
coresys: CoreSys,
install_app_ssh: App,
service: str,
):
"""Test setting service data when another app already provides it returns 409."""
install_app_ssh.data[ATTR_SERVICES] = [f"{service}:provide"]
await coresys.services.load()
coresys.services.data._data[service].update( # pylint: disable=protected-access
{"host": "existing", "port": 1883, "addon": "core_mosquitto"}
)
resp = await api_client.post(
f"/services/{service}",
json={"host": "new.example.com", "port": 1883},
)
assert resp.status == 409
body = await resp.json()
assert body["result"] == "error"
assert body["error_key"] == "service_already_provided_error"
assert body["extra_fields"] == {"service": service, "app": "core_mosquitto"}
assert (
body["message"]
== f"The {service} service is already provided by core_mosquitto"
)
@pytest.mark.parametrize("service", ["mqtt", "mysql"])
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
async def test_del_service_not_provided(
api_client: TestClient,
coresys: CoreSys,
install_app_ssh: App,
service: str,
):
"""Test deleting service data when no app provides it returns 404."""
install_app_ssh.data[ATTR_SERVICES] = [f"{service}:provide"]
await coresys.services.load()
coresys.services.data._data[service].clear() # pylint: disable=protected-access
resp = await api_client.delete(f"/services/{service}")
assert resp.status == 404
body = await resp.json()
assert body["result"] == "error"
assert body["error_key"] == "service_not_provided_error"
assert body["extra_fields"] == {"service": service}
assert (
body["message"] == f"The {service} service is not currently provided by any app"
)

View File

@@ -9,14 +9,21 @@ from supervisor.coresys import CoreSys
from supervisor.exceptions import ObserverPortConflict
@pytest.mark.parametrize(
"docker_message",
[
"failed to set up container networking: driver failed programming external connectivity on endpoint hassio_observer (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port for 0.0.0.0:4357:172.30.33.4:80/tcp: address already in use",
"failed to set up container networking: driver failed programming external connectivity on endpoint hassio_observer (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): Bind for 0.0.0.0:4357 failed: port is already allocated",
"failed to set up container networking: driver failed programming external connectivity on endpoint hassio_observer (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port 0.0.0.0:4357/tcp: address already in use",
],
)
@pytest.mark.usefixtures("container", "tmp_supervisor_data", "path_extern")
async def test_observer_start_port_conflict(
coresys: CoreSys, caplog: pytest.LogCaptureFixture
coresys: CoreSys, caplog: pytest.LogCaptureFixture, docker_message: str
):
"""Test port conflict error when trying to start observer."""
coresys.docker.containers.create.return_value.start.side_effect = aiodocker.DockerError(
HTTPStatus.INTERNAL_SERVER_ERROR,
"failed to set up container networking: driver failed programming external connectivity on endpoint hassio_observer (ea4d0fdaa72cf86f2c9199a04208e3eaf0c5a0d6fd34b3c7f4fab2daadb1f3a9): failed to bind host port for 0.0.0.0:4357:172.30.33.4:80/tcp: address already in use",
coresys.docker.containers.create.return_value.start.side_effect = (
aiodocker.DockerError(HTTPStatus.INTERNAL_SERVER_ERROR, docker_message)
)
await coresys.plugins.observer.load()

View File

@@ -273,9 +273,15 @@ async def test_events_on_issue_changes(
"issue_changed", issue_expected | {"suggestions": [suggestion_expected]}
) in [call.args[0] for call in ha_ws_client.async_send_command.call_args_list]
# Applying a suggestion should only fire an issue removed event
# Applying a suggestion should only fire an issue removed event.
# Mock healthcheck to avoid running the system-checks fan-out, which is
# not relevant to this assertion (ISSUE_REMOVED is fired by dismiss_issue
# inside the fixup, before apply_suggestion calls healthcheck).
ha_ws_client.async_send_command.reset_mock()
with patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))):
with (
patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))),
patch.object(coresys.resolution, "healthcheck", new_callable=AsyncMock),
):
await coresys.resolution.apply_suggestion(suggestion)
await asyncio.sleep(0)

View File

@@ -1,21 +0,0 @@
"""Tests for message formater."""
from supervisor.utils.log_format import format_message
def test_format_message_port():
"""Tests for message formater."""
message = '500 Server Error: Internal Server Error: Bind for 0.0.0.0:80 failed: port is already allocated")'
assert (
format_message(message)
== "Port '80' is already in use by something else on the host."
)
def test_format_message_port_alternative():
"""Tests for message formater."""
message = 'Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use")'
assert (
format_message(message)
== "Port '80' is already in use by something else on the host."
)