Compare commits

...

6 Commits

Author SHA1 Message Date
dependabot[bot]
4a49a64c76 Bump types-docker from 7.1.0.20251127 to 7.1.0.20251129
Bumps [types-docker](https://github.com/typeshed-internal/stub_uploader) from 7.1.0.20251127 to 7.1.0.20251129.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-docker
  dependency-version: 7.1.0.20251129
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-01 07:42:42 +00:00
dependabot[bot]
72159a0ae2 Bump pylint from 4.0.3 to 4.0.4 (#6368)
Bumps [pylint](https://github.com/pylint-dev/pylint) from 4.0.3 to 4.0.4.
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](https://github.com/pylint-dev/pylint/compare/v4.0.3...v4.0.4)

---
updated-dependencies:
- dependency-name: pylint
  dependency-version: 4.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:31 +01:00
dependabot[bot]
0a7b26187d Bump ruff from 0.14.6 to 0.14.7 (#6367)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.14.6 to 0.14.7.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.14.6...0.14.7)

---
updated-dependencies:
- dependency-name: ruff
  dependency-version: 0.14.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:41:14 +01:00
dependabot[bot]
2dc1f9224e Bump home-assistant/builder from 2025.09.0 to 2025.11.0 (#6363)
Bumps [home-assistant/builder](https://github.com/home-assistant/builder) from 2025.09.0 to 2025.11.0.
- [Release notes](https://github.com/home-assistant/builder/releases)
- [Commits](https://github.com/home-assistant/builder/compare/2025.09.0...2025.11.0)

---
updated-dependencies:
- dependency-name: home-assistant/builder
  dependency-version: 2025.11.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 08:40:38 +01:00
Mike Degatano
6302c7d394 Fix progress when using containerd snapshotter (#6357)
* Fix progress when using containerd snapshotter

* Add test for tiny image download under containerd-snapshotter

* Fix API tests after progress allocation change

* Fix test for auth changes

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-27 16:26:22 +01:00
Jan Čermák
f55fd891e9 Add API endpoint for migrating Docker storage driver (#6361)
Implement the Supervisor API for home-assistant/os-agent#238, adding the possibility to
schedule a migration either to the containerd overlayfs driver or back to the overlay2
graph driver, applied the next time the device reboots. While the method technically
lives in the DBus OS interface, in Supervisor's abstraction it makes more sense to
expose it under the `/docker` endpoints.
2025-11-27 16:02:39 +01:00
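As a usage sketch for the new route: the endpoint takes a JSON body with the storage_driver key and returns 400 for an unknown driver or 404 when not running on Home Assistant OS 17.0 or newer. A minimal illustration in Python, assuming a reachable Supervisor at the hypothetical host `supervisor` and a SUPERVISOR_TOKEN environment variable (not part of this changeset):

import asyncio
import os

import aiohttp


async def request_storage_driver_migration(driver: str) -> None:
    """Ask Supervisor to schedule a Docker storage driver migration (sketch)."""
    headers = {"Authorization": f"Bearer {os.environ['SUPERVISOR_TOKEN']}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        # "overlayfs" schedules a move to the containerd snapshotter,
        # "overlay2" schedules a move back to the graph driver
        async with session.post(
            "http://supervisor/docker/migrate-storage-driver",
            json={"storage_driver": driver},
        ) as resp:
            resp.raise_for_status()  # 400 on invalid driver, 404 without HAOS >= 17.0


asyncio.run(request_storage_driver_migration("overlayfs"))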
15 changed files with 486 additions and 51 deletions

View File

@@ -165,7 +165,7 @@ jobs:
# home-assistant/builder doesn't support sha pinning
- name: Build supervisor
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
args: |
$BUILD_ARGS \
@@ -211,7 +211,7 @@ jobs:
# home-assistant/builder doesn't support sha pinning
- name: Build the Supervisor
if: needs.init.outputs.publish != 'true'
uses: home-assistant/builder@2025.09.0
uses: home-assistant/builder@2025.11.0
with:
args: |
--test \

View File

@@ -2,15 +2,15 @@ astroid==4.0.2
coverage==7.12.0
mypy==1.18.2
pre-commit==4.5.0
pylint==4.0.3
pylint==4.0.4
pytest-aiohttp==1.1.0
pytest-asyncio==1.3.0
pytest-cov==7.0.0
pytest-timeout==2.4.0
pytest==9.0.1
ruff==0.14.6
ruff==0.14.7
time-machine==3.1.0
types-docker==7.1.0.20251127
types-docker==7.1.0.20251129
types-pyyaml==6.0.12.20250915
types-requests==2.32.4.20250913
urllib3==2.5.0

View File

@@ -813,6 +813,10 @@ class RestAPI(CoreSysAttributes):
self.webapp.add_routes(
[
web.get("/docker/info", api_docker.info),
web.post(
"/docker/migrate-storage-driver",
api_docker.migrate_docker_storage_driver,
),
web.post("/docker/options", api_docker.options),
web.get("/docker/registries", api_docker.registries),
web.post("/docker/registries", api_docker.create_registry),

View File

@@ -4,6 +4,7 @@ import logging
from typing import Any
from aiohttp import web
from awesomeversion import AwesomeVersion
import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
@@ -16,6 +17,7 @@ from ..const import (
ATTR_PASSWORD,
ATTR_REGISTRIES,
ATTR_STORAGE,
ATTR_STORAGE_DRIVER,
ATTR_USERNAME,
ATTR_VERSION,
)
@@ -42,6 +44,12 @@ SCHEMA_OPTIONS = vol.Schema(
}
)
SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
{
vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs", "overlay2"]),
}
)
class APIDocker(CoreSysAttributes):
"""Handle RESTful API for Docker configuration."""
@@ -123,3 +131,27 @@ class APIDocker(CoreSysAttributes):
del self.sys_docker.config.registries[hostname]
await self.sys_docker.config.save_data()
@api_process
async def migrate_docker_storage_driver(self, request: web.Request) -> None:
"""Migrate Docker storage driver."""
if (
not self.coresys.os.available
or not self.coresys.os.version
or self.coresys.os.version < AwesomeVersion("17.0.dev0")
):
raise APINotFound(
"Home Assistant OS 17.0 or newer required for Docker storage driver migration"
)
body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
await self.sys_dbus.agent.system.migrate_docker_storage_driver(
body[ATTR_STORAGE_DRIVER]
)
_LOGGER.info("Host system reboot required to apply Docker storage migration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)

View File

@@ -328,6 +328,7 @@ ATTR_STATE = "state"
ATTR_STATIC = "static"
ATTR_STDIN = "stdin"
ATTR_STORAGE = "storage"
ATTR_STORAGE_DRIVER = "storage_driver"
ATTR_SUGGESTIONS = "suggestions"
ATTR_SUPERVISOR = "supervisor"
ATTR_SUPERVISOR_INTERNET = "supervisor_internet"

View File

@@ -15,3 +15,8 @@ class System(DBusInterface):
async def schedule_wipe_device(self) -> bool:
"""Schedule a factory reset on next system boot."""
return await self.connected_dbus.System.call("schedule_wipe_device")
@dbus_connected
async def migrate_docker_storage_driver(self, backend: str) -> None:
"""Migrate Docker storage driver."""
await self.connected_dbus.System.call("migrate_docker_storage_driver", backend)
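For context, this thin wrapper forwards to the OS Agent's io.hass.os.System interface over the system bus. A minimal standalone sketch of the same call with dbus_fast, assuming the OS Agent's usual bus name io.hass.os and object path /io/hass/os/System (illustrative only, not how Supervisor wires its connection):

import asyncio

from dbus_fast import BusType
from dbus_fast.aio import MessageBus


async def migrate_docker_storage_driver(backend: str) -> None:
    """Invoke MigrateDockerStorageDriver on the OS Agent directly (sketch)."""
    bus = await MessageBus(bus_type=BusType.SYSTEM).connect()
    introspection = await bus.introspect("io.hass.os", "/io/hass/os/System")
    proxy = bus.get_proxy_object("io.hass.os", "/io/hass/os/System", introspection)
    system = proxy.get_interface("io.hass.os.System")
    # dbus_fast exposes the MigrateDockerStorageDriver member as a call_* coroutine
    await system.call_migrate_docker_storage_driver(backend)


asyncio.run(migrate_docker_storage_driver("overlayfs"))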

View File

@@ -226,28 +226,16 @@ class DockerInterface(JobGroup, ABC):
job = j
break
# This likely only occurs if the logs came in out of sync and we got progress before the Pulling FS Layer one
# There should no longer be any real risk of logs arriving out of order.
# However, tests with very small images have shown that Docker sometimes
# skips stages in the log, so keep this one as a safety check for a null job
if not job:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for image id {reference.id} and parent job {install_job_id} but could not find a matching job, skipping",
_LOGGER.debug,
)
# Hopefully these come in order but if they sometimes get out of sync, avoid accidentally going backwards
# If it happens a lot though we may need to reconsider the value of this feature
if job.done:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was done, skipping",
_LOGGER.debug,
)
if job.stage and stage < PullImageLayerStage.from_status(job.stage):
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} but job was already on stage {job.stage}, skipping",
_LOGGER.debug,
)
# For progress calcuation we assume downloading and extracting are each 50% of the time and others stages negligible
# For progress calculation we assume downloading is 70% of the time, extracting is 30%, and other stages are negligible
progress = job.progress
match stage:
case PullImageLayerStage.DOWNLOADING | PullImageLayerStage.EXTRACTING:
@@ -256,22 +244,26 @@ class DockerInterface(JobGroup, ABC):
and reference.progress_detail.current
and reference.progress_detail.total
):
progress = 50 * (
progress = (
reference.progress_detail.current
/ reference.progress_detail.total
)
if stage == PullImageLayerStage.EXTRACTING:
progress += 50
if stage == PullImageLayerStage.DOWNLOADING:
progress = 70 * progress
else:
progress = 70 + 30 * progress
case (
PullImageLayerStage.VERIFYING_CHECKSUM
| PullImageLayerStage.DOWNLOAD_COMPLETE
):
progress = 50
progress = 70
case PullImageLayerStage.PULL_COMPLETE:
progress = 100
case PullImageLayerStage.RETRYING_DOWNLOAD:
progress = 0
# No real risk of getting things out of order in the current implementation,
# but keep this one in case another change to these trips us up.
if stage != PullImageLayerStage.RETRYING_DOWNLOAD and progress < job.progress:
raise DockerLogOutOfOrder(
f"Received pull image log with status {reference.status} for job {job.uuid} that implied progress was {progress} but current progress is {job.progress}, skipping",
@@ -335,7 +327,7 @@ class DockerInterface(JobGroup, ABC):
progress = 0.0
stage = PullImageLayerStage.PULL_COMPLETE
for job in layer_jobs:
if not job.extra:
if not job.extra or not job.extra.get("total"):
return
progress += job.progress * (job.extra["total"] / total)
job_stage = PullImageLayerStage.from_status(cast(str, job.stage))
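In isolation, the reworked weighting amounts to: per layer, downloading maps linearly onto 0-70, extracting onto 70-100, and image-level progress is the size-weighted mean over layers, deferred until every layer has reported a download total (as the guard above does). A simplified sketch of that arithmetic with hypothetical helper names, not the Supervisor implementation:

def layer_progress(stage: str, current: int | None, total: int | None) -> float:
    """Map one layer's pull stage onto 0-100 using the 70/30 split (sketch)."""
    ratio = current / total if current and total else 0.0
    if stage == "Downloading":
        return 70 * ratio
    if stage in ("Verifying Checksum", "Download complete"):
        return 70.0
    if stage == "Extracting":
        # Without a byte total (containerd snapshotter) ratio stays 0, holding at 70
        return 70 + 30 * ratio
    if stage == "Pull complete":
        return 100.0
    return 0.0  # Pulling fs layer / Waiting / Retrying download


def image_progress(layers: list[tuple[float, int | None]]) -> float | None:
    """Size-weighted mean over (progress, download_total) pairs per layer.

    Mirrors the guard above: bail out until every layer has reported a total.
    """
    if any(not total for _, total in layers):
        return None
    grand_total = sum(total for _, total in layers)
    return sum(progress * (total / grand_total) for progress, total in layers)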

View File

@@ -111,10 +111,15 @@ class PullProgressDetail:
"""Progress detail information for pull.
Documentation is lacking, but both of these seem to be in bytes when populated.
Containerd snapshotter update: when that feature is in use, this information
becomes useless to us while extracting, as it simply reports elapsed time via
current and units.
"""
current: int | None = None
total: int | None = None
units: str | None = None
@classmethod
def from_pull_log_dict(cls, value: dict[str, int]) -> PullProgressDetail:
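The two progressDetail shapes described above appear in this change's own test data: under the classic pull path current/total are byte counts, while the containerd snapshotter's extract phase only reports elapsed time. Illustrative values taken from the hunks below:

classic = {"current": 1378, "total": 1486}   # bytes: usable for percent-complete
snapshotter = {"current": 1, "units": "s"}   # elapsed seconds while extracting, no total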

View File

@@ -4,6 +4,11 @@ from aiohttp.test_utils import TestClient
import pytest
from supervisor.coresys import CoreSys
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from supervisor.resolution.data import Issue, Suggestion
from tests.dbus_service_mocks.agent_system import System as SystemService
from tests.dbus_service_mocks.base import DBusServiceMock
@pytest.mark.asyncio
@@ -84,3 +89,79 @@ async def test_registry_not_found(api_client: TestClient):
assert resp.status == 404
body = await resp.json()
assert body["message"] == "Hostname bad does not exist in registries"
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver(
api_client: TestClient,
coresys: CoreSys,
os_agent_services: dict[str, DBusServiceMock],
os_available,
):
"""Test Docker storage driver migration."""
system_service: SystemService = os_agent_services["agent_system"]
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlayfs",)]
assert (
Issue(IssueType.REBOOT_REQUIRED, ContextType.SYSTEM)
in coresys.resolution.issues
)
assert (
Suggestion(SuggestionType.EXECUTE_REBOOT, ContextType.SYSTEM)
in coresys.resolution.suggestions
)
# Test migration back to overlay2 (graph driver)
system_service.MigrateDockerStorageDriver.calls.clear()
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlay2"},
)
assert resp.status == 200
assert system_service.MigrateDockerStorageDriver.calls == [("overlay2",)]
@pytest.mark.parametrize("os_available", ["17.0.rc1"], indirect=True)
async def test_api_migrate_docker_storage_driver_invalid_backend(
api_client: TestClient,
os_available,
):
"""Test 400 is returned for invalid storage driver."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "invalid"},
)
assert resp.status == 400
async def test_api_migrate_docker_storage_driver_not_os(
api_client: TestClient,
coresys: CoreSys,
):
"""Test 404 is returned if not running on HAOS."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404
@pytest.mark.parametrize("os_available", ["16.2"], indirect=True)
async def test_api_migrate_docker_storage_driver_old_os(
api_client: TestClient,
coresys: CoreSys,
os_available,
):
"""Test 404 is returned if OS is older than 17.0."""
resp = await api_client.post(
"/docker/migrate-storage-driver",
json={"storage_driver": "overlayfs"},
)
assert resp.status == 404

View File

@@ -323,29 +323,29 @@ async def test_api_progress_updates_home_assistant_update(
},
{
"stage": None,
"progress": 1.2,
"progress": 1.7,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 4.0,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 98.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 98.3,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 99.3,
"done": False,
},
{

View File

@@ -773,29 +773,29 @@ async def test_api_progress_updates_addon_install_update(
},
{
"stage": None,
"progress": 1.2,
"progress": 1.7,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 4.0,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 98.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 98.3,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 99.3,
"done": False,
},
{

View File

@@ -371,29 +371,29 @@ async def test_api_progress_updates_supervisor_update(
},
{
"stage": None,
"progress": 1.2,
"progress": 1.7,
"done": False,
},
{
"stage": None,
"progress": 2.8,
"progress": 4.0,
"done": False,
},
]
assert events[-5:] == [
{
"stage": None,
"progress": 97.2,
"progress": 98.2,
"done": False,
},
{
"stage": None,
"progress": 98.4,
"progress": 98.3,
"done": False,
},
{
"stage": None,
"progress": 99.4,
"progress": 99.3,
"done": False,
},
{

View File

@@ -1,6 +1,6 @@
"""Mock of OS Agent System dbus service."""
from dbus_fast import DBusError
from dbus_fast import DBusError, ErrorType
from .base import DBusServiceMock, dbus_method
@@ -21,6 +21,7 @@ class System(DBusServiceMock):
object_path = "/io/hass/os/System"
interface = "io.hass.os.System"
response_schedule_wipe_device: bool | DBusError = True
response_migrate_docker_storage_driver: None | DBusError = None
@dbus_method()
def ScheduleWipeDevice(self) -> "b":
@@ -28,3 +29,14 @@ class System(DBusServiceMock):
if isinstance(self.response_schedule_wipe_device, DBusError):
raise self.response_schedule_wipe_device # pylint: disable=raising-bad-type
return self.response_schedule_wipe_device
@dbus_method()
def MigrateDockerStorageDriver(self, backend: "s") -> None:
"""Migrate Docker storage driver."""
if isinstance(self.response_migrate_docker_storage_driver, DBusError):
raise self.response_migrate_docker_storage_driver # pylint: disable=raising-bad-type
if backend not in ("overlayfs", "overlay2"):
raise DBusError(
ErrorType.FAILED,
f"unsupported driver: {backend} (only 'overlayfs' and 'overlay2' are supported)",
)

View File

@@ -26,7 +26,10 @@ from supervisor.exceptions import (
DockerNotFound,
DockerRequestError,
)
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from supervisor.homeassistant.const import WSEvent, WSType
from supervisor.jobs import ChildJobSyncFilter, JobSchedulerOptions, SupervisorJob
from supervisor.jobs.decorator import Job
from supervisor.supervisor import Supervisor
from tests.common import AsyncIterator, load_json_fixture
@@ -314,7 +317,7 @@ async def test_install_fires_progress_events(
},
{"status": "Already exists", "progressDetail": {}, "id": "6e771e15690e"},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
{"status": "Waiting", "progressDetail": {}, "id": "2488d0e401e1"},
{"status": "Waiting", "progressDetail": {}, "id": "1578b14a573c"},
{
"status": "Downloading",
"progressDetail": {"current": 1378, "total": 1486},
@@ -384,7 +387,7 @@ async def test_install_fires_progress_events(
job_id=ANY,
status="Waiting",
progress_detail=PullProgressDetail(),
id="2488d0e401e1",
id="1578b14a573c",
),
PullLogEntry(
job_id=ANY,
@@ -538,6 +541,7 @@ async def test_install_raises_on_pull_error(
"status": "Pulling from home-assistant/odroid-n2-homeassistant",
"id": "2025.7.2",
},
{"status": "Pulling fs layer", "progressDetail": {}, "id": "1578b14a573c"},
{
"status": "Downloading",
"progressDetail": {"current": 1378, "total": 1486},
@@ -592,16 +596,39 @@ async def test_install_progress_handles_download_restart(
capture_exception.assert_not_called()
@pytest.mark.parametrize(
"extract_log",
[
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
{
"status": "Extracting",
"progressDetail": {"current": 1, "units": "s"},
"progress": "1 s",
"id": "02a6e69d8d00",
},
],
ids=["normal_extract_log", "containerd_snapshot_extract_log"],
)
async def test_install_progress_handles_layers_skipping_download(
coresys: CoreSys,
test_docker_interface: DockerInterface,
capture_exception: Mock,
extract_log: dict[str, Any],
):
"""Test install handles small layers that skip downloading phase and go directly to download complete.
Reproduces the real-world scenario from Supervisor issue #6286:
- Small layer (02a6e69d8d00) completes Download complete at 10:14:08 without ever Downloading
- Normal layer (3f4a84073184) starts Downloading at 10:14:09 with progress updates
Under the containerd snapshotter this presumably can still occur, and Supervisor will have even less info
since extract logs don't include a total. Supervisor should generally just ignore these and derive progress
from the larger layers that take all the time.
"""
coresys.core.set_state(CoreState.RUNNING)
@@ -645,12 +672,7 @@ async def test_install_progress_handles_layers_skipping_download(
},
{"status": "Pull complete", "progressDetail": {}, "id": "3f4a84073184"},
# Small layer finally extracts (10:14:58 in logs)
{
"status": "Extracting",
"progressDetail": {"current": 96, "total": 96},
"progress": "[==================================================>] 96B/96B",
"id": "02a6e69d8d00",
},
extract_log,
{"status": "Pull complete", "progressDetail": {}, "id": "02a6e69d8d00"},
{"status": "Digest: sha256:test"},
{"status": "Status: Downloaded newer image for test/image:latest"},
@@ -758,3 +780,88 @@ async def test_missing_total_handled_gracefully(
await event.wait()
capture_exception.assert_not_called()
async def test_install_progress_containerd_snapshot(
coresys: CoreSys, ha_ws_client: AsyncMock
):
"""Test install handles docker progress events using containerd snapshotter."""
coresys.core.set_state(CoreState.RUNNING)
class TestDockerInterface(DockerInterface):
"""Test interface for events."""
@property
def name(self) -> str:
"""Name of test interface."""
return "test_interface"
@Job(
name="mock_docker_interface_install",
child_job_syncs=[
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
],
)
async def mock_install(self) -> None:
"""Mock install."""
await super().install(
AwesomeVersion("1.2.3"), image="test", arch=CpuArch.I386
)
# Fixture emulates the log as received when using the containerd snapshotter
# Should not error but progress gets choppier once extraction starts
logs = load_json_fixture("docker_pull_image_log_containerd_snapshot.json")
coresys.docker.images.pull.return_value = AsyncIterator(logs)
test_docker_interface = TestDockerInterface(coresys)
with patch.object(Supervisor, "arch", PropertyMock(return_value="i386")):
await test_docker_interface.mock_install()
coresys.docker.images.pull.assert_called_once_with(
"test", tag="1.2.3", platform="linux/386", auth=None, stream=True
)
coresys.docker.images.inspect.assert_called_once_with("test:1.2.3")
await asyncio.sleep(1)
def job_event(progress: float, done: bool = False):
return {
"type": WSType.SUPERVISOR_EVENT,
"data": {
"event": WSEvent.JOB,
"data": {
"name": "mock_docker_interface_install",
"reference": "test_interface",
"uuid": ANY,
"progress": progress,
"stage": None,
"done": done,
"parent_id": None,
"errors": [],
"created": ANY,
"extra": None,
},
},
}
assert [c.args[0] for c in ha_ws_client.async_send_command.call_args_list] == [
# During downloading we get continuous progress updates from download status
job_event(0),
job_event(3.4),
job_event(8.5),
job_event(10.2),
job_event(15.3),
job_event(18.8),
job_event(29.0),
job_event(35.8),
job_event(42.6),
job_event(49.5),
job_event(56.0),
job_event(62.8),
# The downloading phase is considered 70% of the total. After that we only get
# one update per layer, once its extraction is finished. It uses the total size
# received during downloading to determine the percent complete.
job_event(70.0),
job_event(84.8),
job_event(100),
job_event(100, True),
]
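The 84.8 step can be checked by hand from the fixture's two layers (21863319 and 21179924 bytes): once layer 333270549f95 hits Pull complete (100) while eafecc6b43cc is still extracting (held at 70), the size-weighted mean lands on the asserted value. A quick arithmetic check under the same weighting:

total = 21863319 + 21179924
weighted = 100 * (21179924 / total) + 70 * (21863319 / total)
assert round(weighted, 1) == 84.8  # matches job_event(84.8) above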

View File

@@ -0,0 +1,196 @@
[
{
"status": "Pulling from home-assistant/home-assistant",
"id": "2025.12.0.dev202511080235"
},
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "eafecc6b43cc" },
{ "status": "Pulling fs layer", "progressDetail": {}, "id": "333270549f95" },
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21863319 },
"progress": "[==\u003e ] 1.049MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 1048576, "total": 21179924 },
"progress": "[==\u003e ] 1.049MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21863319 },
"progress": "[=========\u003e ] 4.194MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 2097152, "total": 21179924 },
"progress": "[====\u003e ] 2.097MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 7340032, "total": 21863319 },
"progress": "[================\u003e ] 7.34MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 4194304, "total": 21179924 },
"progress": "[=========\u003e ] 4.194MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 13631488, "total": 21863319 },
"progress": "[===============================\u003e ] 13.63MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 8388608, "total": 21179924 },
"progress": "[===================\u003e ] 8.389MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 17825792, "total": 21863319 },
"progress": "[========================================\u003e ] 17.83MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 12582912, "total": 21179924 },
"progress": "[=============================\u003e ] 12.58MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21863319, "total": 21863319 },
"progress": "[==================================================\u003e] 21.86MB/21.86MB",
"id": "eafecc6b43cc"
},
{
"status": "Downloading",
"progressDetail": { "current": 16777216, "total": 21179924 },
"progress": "[=======================================\u003e ] 16.78MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Downloading",
"progressDetail": { "current": 21179924, "total": 21179924 },
"progress": "[==================================================\u003e] 21.18MB/21.18MB",
"id": "333270549f95"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Download complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "333270549f95"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "333270549f95"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 1, "units": "s" },
"progress": "1 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 2, "units": "s" },
"progress": "2 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 3, "units": "s" },
"progress": "3 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 4, "units": "s" },
"progress": "4 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 5, "units": "s" },
"progress": "5 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Extracting",
"progressDetail": { "current": 6, "units": "s" },
"progress": "6 s",
"id": "eafecc6b43cc"
},
{
"status": "Pull complete",
"progressDetail": { "hidecounts": true },
"id": "eafecc6b43cc"
},
{
"status": "Digest: sha256:bfc9efc13552c0c228f3d9d35987331cce68b43c9bc79c80a57eeadadd44cccf"
},
{
"status": "Status: Downloaded newer image for ghcr.io/home-assistant/home-assistant:2025.12.0.dev202511080235"
}
]