Compare commits


No commits in common. "main" and "2025.06.0" have entirely different histories.

114 changed files with 1013 additions and 3333 deletions

.github/ISSUE_TEMPLATE.md

@ -0,0 +1,69 @@
---
name: Report a bug with the Supervisor on a supported system
about: Report an issue related to the Home Assistant Supervisor.
labels: bug
---
<!-- READ THIS FIRST:
- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/
- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks. Do not delete any text from this template!
- If you have a problem with an add-on, make an issue in its repository.
-->
<!--
Important: You can only file a bug report for a supported system! If you run an unsupported installation, this report will be closed without comment.
-->
### Describe the issue
<!-- Provide as many details as possible. -->
### Steps to reproduce
<!-- What steps do you take to encounter the issue? -->
1. ...
2. ...
3. ...
### Environment details
<!-- You can find these details in the system tab of the supervisor panel, or by using the `ha` CLI. -->
- **Operating System**: xxx
- **Supervisor version**: xxx
- **Home Assistant version**: xxx
### Supervisor logs
<details>
<summary>Supervisor logs</summary>
<!--
- Frontend -> Supervisor -> System
- Or use this command: ha supervisor logs
- Logs are more than just errors; even if you don't think they're important, they are.
-->
```
Paste supervisor logs here
```
</details>
### System Information
<details>
<summary>System Information</summary>
<!--
- Use this command: ha info
-->
```
Paste system info here
```
</details>


@ -1,5 +1,6 @@
name: Report an issue with Home Assistant Supervisor
name: Bug Report Form
description: Report an issue related to the Home Assistant Supervisor.
labels: bug
body:
- type: markdown
attributes:
@ -8,7 +9,7 @@ body:
If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].
[fr]: https://github.com/orgs/home-assistant/discussions
[fr]: https://community.home-assistant.io/c/feature-requests
- type: textarea
validations:
required: true
@ -75,7 +76,7 @@ body:
description: >
The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
Click the copy button at the bottom of the pop-up and paste it here.
[![Open your Home Assistant instance and show health information about your system.](https://my.home-assistant.io/badges/system_health.svg)](https://my.home-assistant.io/redirect/system_health/)
- type: textarea
attributes:
@ -85,7 +86,7 @@ body:
Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
and select 'Download diagnostics'.
**Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
- type: textarea
attributes:


@ -13,7 +13,7 @@ contact_links:
about: Our documentation has its own issue tracker. Please report issues with the website there.
- name: Request a feature for the Supervisor
url: https://github.com/orgs/home-assistant/discussions
url: https://community.home-assistant.io/c/feature-requests
about: Request a new feature for the Supervisor.
- name: I have a question or need support


@ -1,53 +0,0 @@
name: Task
description: For staff only - Create a task
type: Task
body:
- type: markdown
attributes:
value: |
## ⚠️ RESTRICTED ACCESS
**This form is restricted to Open Home Foundation staff and authorized contributors only.**
If you are a community member wanting to contribute, please:
- For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
- For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)
---
### For authorized contributors
Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
- type: textarea
id: description
attributes:
label: Description
description: |
Provide a clear and detailed description of the task that needs to be accomplished.
Be specific about what needs to be done, why it's important, and any constraints or requirements.
placeholder: |
Describe the task, including:
- What needs to be done
- Why this task is needed
- Expected outcome
- Any constraints or requirements
validations:
required: true
- type: textarea
id: additional_context
attributes:
label: Additional context
description: |
Any additional information, links, research, or context that would be helpful.
Include links to related issues, research, prototypes, roadmap opportunities etc.
placeholder: |
- Roadmap opportunity: [link]
- Epic: [link]
- Feature request: [link]
- Technical design documents: [link]
- Prototype/mockup: [link]
- Dependencies: [links]
validations:
required: false


@ -1,288 +0,0 @@
# GitHub Copilot & Claude Code Instructions
This repository contains the Home Assistant Supervisor, a Python 3-based container
orchestration and management system for Home Assistant.
## Supervisor Capabilities & Features
### Architecture Overview
Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.
**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
connectivity
- **Storage & Backup**: Data persistence and backup management across all containers
**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System
**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.
**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development
**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.
**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System
**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.
**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.
**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
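For illustration only, fetching the channel file could look like the following minimal sketch. This is not the actual `Updater` implementation: the JSON key names are assumptions, and the real code also validates a signature before trusting the data.
```python
import asyncio

import aiohttp

VERSION_URL = "https://version.home-assistant.io/{channel}.json"


async def fetch_versions(channel: str = "stable") -> dict:
    """Fetch the central version file for the given update channel."""
    async with aiohttp.ClientSession() as session:
        async with session.get(VERSION_URL.format(channel=channel)) as resp:
            resp.raise_for_status()
            # content_type=None accepts whatever MIME type the CDN reports
            return await resp.json(content_type=None)


async def main() -> None:
    data = await fetch_versions("stable")
    # "supervisor" and "homeassistant" key names are illustrative assumptions
    print("Supervisor:", data.get("supervisor"))
    print("Core:", data.get("homeassistant"))


if __name__ == "__main__":
    asyncio.run(main())
```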
### Backup & Recovery System
**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations
**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues
---
## Supervisor Development
### Python Requirements
- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
- Type hints with `typing` module
- f-strings (preferred over `%` or `.format()`)
- Dataclasses and enum classes
- Async/await patterns
- Pattern matching where appropriate
### Code Quality Standards
- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization
**Core Structure**:
```
supervisor/
├── __init__.py # Package initialization
├── const.py # Constants and enums
├── coresys.py # Core system management
├── bootstrap.py # System initialization
├── exceptions.py # Custom exception classes
├── api/ # REST API endpoints
├── addons/ # Add-on management
├── backups/ # Backup system
├── docker/ # Docker integration
├── host/ # Host system interface
├── homeassistant/ # Home Assistant Core management
├── dbus/ # D-Bus system integration
├── hardware/ # Hardware detection and management
├── plugins/ # Plugin system
├── resolution/ # Issue detection and resolution
├── security/ # Security management
├── services/ # Service discovery and management
├── store/ # Add-on store management
└── utils/ # Utility functions
```
**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
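A hypothetical excerpt following that convention (`ATTR_SLUG` and `ATTR_VERSION` exist in the codebase; `ATTR_VERSION_LATEST` is shown only as an example of adding a sibling constant):
```python
# supervisor/const.py (illustrative excerpt)

# Attribute-name constants, grouped together and sorted alphabetically
ATTR_SLUG = "slug"
ATTR_VERSION = "version"
# A new, related constant lives next to its siblings instead of being
# hardcoded as a string literal at each call site
ATTR_VERSION_LATEST = "version_latest"
```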
### Supervisor Architecture Patterns
**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.
```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
"""Manage my functionality."""
def __init__(self, coresys: CoreSys):
"""Initialize manager."""
self.coresys: CoreSys = coresys
self._component: MyComponent = MyComponent(coresys)
@property
def component(self) -> MyComponent:
"""Return component handler."""
return self._component
# Access system components via inherited properties
async def do_something(self):
await self.sys_docker.containers.get("my_container")
self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```
**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface
**Load Pattern**: Many components implement a `load()` method which effectively
initializes the component from external sources (containers, files, D-Bus services).
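A minimal sketch of the pattern, continuing the `MyManager` example above (the config-reading helper is hypothetical):
```python
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    async def load(self) -> None:
        """Initialize the manager from external sources."""
        # Blocking file I/O runs in the executor pool
        self._config_data = await self.sys_run_in_executor(self._read_config_file)

        # State can also come from a managed container (docker-py is blocking)
        self._container = await self.sys_run_in_executor(
            self.sys_docker.containers.get, "my_container"
        )

    def _read_config_file(self) -> dict:
        """Blocking helper executed in the executor (hypothetical)."""
        return {}
```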
### API Development
**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
`{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`
**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.
```python
from ..api.utils import api_process, api_validate
@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
"""Create full backup."""
body = await api_validate(SCHEMA_BACKUP_FULL, request)
job = await self.sys_backups.do_backup_full(**body)
return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration
- **Container Management**: Use Supervisor's Docker manager instead of direct
Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
isolation
- **Health Checks**: Implement health monitoring for all managed containers
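For example, a container lookup goes through `self.sys_docker` inside a `CoreSysAttributes` subclass rather than the `docker` SDK directly; the container name below is illustrative:
```python
import docker.errors


async def check_addon_container(self) -> bool:
    """Return True if the add-on container exists (sketch)."""

    def _get_container():
        # docker-py calls are blocking, so they run in the executor pool
        return self.sys_docker.containers.get("addon_example")

    try:
        await self.sys_run_in_executor(_get_container)
    except docker.errors.NotFound:
        return False
    return True
```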
### D-Bus Integration
- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions
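A sketch of the exception-wrapping guideline. `DBusError` is dbus-fast's base error; the `HostError` class and the `set_static_hostname` call are assumptions about the Supervisor's internals:
```python
from dbus_fast.errors import DBusError

from ..exceptions import HostError  # assumed Supervisor-specific exception


async def set_hostname(self, hostname: str) -> None:
    """Set the static hostname, wrapping D-Bus failures (sketch)."""
    try:
        await self.sys_dbus.hostname.set_static_hostname(hostname)
    except DBusError as err:
        # Chain with `from` so the original D-Bus traceback is preserved
        raise HostError(f"Can't set hostname {hostname}: {err}") from err
```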
### Async Programming
- **All I/O operations must be async**: File operations, network calls, subprocess
execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
initialization
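A minimal sketch of the two-phase pattern from the last bullet (the options helper is hypothetical):
```python
class MyComponent(CoreSysAttributes):
    """Component using two-phase initialization."""

    def __init__(self, coresys: CoreSys) -> None:
        """Synchronous setup only: store references, perform no I/O."""
        self.coresys: CoreSys = coresys
        self._options: dict | None = None

    async def post_init(self) -> None:
        """Async phase: safe to await and perform I/O here."""
        self._options = await self.sys_run_in_executor(self._read_options)

    def _read_options(self) -> dict:
        """Blocking read, executed in the executor pool (hypothetical)."""
        return {}
```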
### Testing
- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
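A skeletal test following those conventions. The `coresys` fixture is assumed to come from `tests/conftest.py`, and the attribute checked is illustrative:
```python
import pytest

from supervisor.coresys import CoreSys


@pytest.mark.asyncio
async def test_core_state(coresys: CoreSys):
    """Verify the mocked CoreSys fixture exposes a core state."""
    assert coresys.core.state is not None
```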
### Error Handling
- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
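Putting those three bullets together, a hedged sketch (the endpoint and method body are illustrative; `APIError` and `DockerError` live in `exceptions.py`):
```python
from aiohttp import web

from ..api.utils import api_process
from ..exceptions import APIError, DockerError


@api_process
async def addon_restart(self, request: web.Request) -> None:
    """Restart an add-on, translating low-level errors (sketch)."""
    addon = self.get_addon_for_request(request)
    try:
        await addon.restart()
    except DockerError as err:
        # `from err` preserves the original traceback for debugging
        raise APIError(f"Can't restart {addon.slug}") from err
```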
### Security Considerations
- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
input validation
### Development Commands
```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/
# Linting and formatting
ruff check supervisor/
ruff format supervisor/
# Type checking
mypy --ignore-missing-imports supervisor/
# Pre-commit hooks
pre-commit run --all-files
```
Always run the pre-commit hooks at the end of code editing.
### Common Patterns to Follow
**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.


@ -131,7 +131,7 @@ jobs:
- name: Install Cosign
if: needs.init.outputs.publish == 'true'
uses: sigstore/cosign-installer@v3.9.2
uses: sigstore/cosign-installer@v3.8.2
with:
cosign-release: "v2.4.3"


@ -10,7 +10,6 @@ on:
env:
DEFAULT_PYTHON: "3.13"
PRE_COMMIT_CACHE: ~/.cache/pre-commit
MYPY_CACHE_VERSION: 1
concurrency:
group: "${{ github.workflow }}-${{ github.ref }}"
@ -287,52 +286,6 @@ jobs:
. venv/bin/activate
pylint supervisor tests
mypy:
name: Check mypy
runs-on: ubuntu-latest
needs: prepare
steps:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
uses: actions/setup-python@v5.6.0
id: python
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Generate partial mypy restore key
id: generate-mypy-key
run: |
mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
echo "version=$mypy_version" >> $GITHUB_OUTPUT
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@v4.2.3
with:
path: venv
key: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
- name: Fail job if Python cache restore failed
if: steps.cache-venv.outputs.cache-hit != 'true'
run: |
echo "Failed to restore Python virtual environment from cache"
exit 1
- name: Restore mypy cache
uses: actions/cache@v4.2.3
with:
path: .mypy_cache
key: >-
${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
restore-keys: >-
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
- name: Register mypy problem matcher
run: |
echo "::add-matcher::.github/workflows/matchers/mypy.json"
- name: Run mypy
run: |
. venv/bin/activate
mypy --ignore-missing-imports supervisor
pytest:
runs-on: ubuntu-latest
needs: prepare
@ -346,7 +299,7 @@ jobs:
with:
python-version: ${{ needs.prepare.outputs.python-version }}
- name: Install Cosign
uses: sigstore/cosign-installer@v3.9.2
uses: sigstore/cosign-installer@v3.8.2
with:
cosign-release: "v2.4.3"
- name: Restore Python virtual environment


@ -1,16 +0,0 @@
{
"problemMatcher": [
{
"owner": "mypy",
"pattern": [
{
"regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
}
]
}
]
}


@ -1,58 +0,0 @@
name: Restrict task creation
# yamllint disable-line rule:truthy
on:
issues:
types: [opened]
jobs:
check-authorization:
runs-on: ubuntu-latest
# Only run if this is a Task issue type (from the issue form)
if: github.event.issue.issue_type == 'Task'
steps:
- name: Check if user is authorized
uses: actions/github-script@v7
with:
script: |
const issueAuthor = context.payload.issue.user.login;
// Check if user is an organization member
try {
await github.rest.orgs.checkMembershipForUser({
org: 'home-assistant',
username: issueAuthor
});
console.log(`✅ ${issueAuthor} is an organization member`);
return; // Authorized
} catch (error) {
console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
}
// Close the issue with a comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
`Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
`If you would like to:\n` +
`- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
`- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
`If you believe you should have access to create Task issues, please contact the maintainers.`
});
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
state: 'closed'
});
// Add a label to indicate this was auto-closed
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: ['auto-closed']
});


@ -12,7 +12,7 @@ jobs:
- name: Check out code from GitHub
uses: actions/checkout@v4.2.2
- name: Sentry Release
uses: getsentry/action-release@v3.2.0
uses: getsentry/action-release@v3.1.1
env:
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}


@ -13,15 +13,3 @@ repos:
- id: check-executables-have-shebangs
stages: [manual]
- id: check-json
- repo: local
hooks:
# Run mypy through our wrapper script in order to get the possible
# pyenv and/or virtualenv activated; it may not have been e.g. if
# committing from a GUI tool that was not launched from an activated
# shell.
- id: mypy
name: mypy
entry: script/run-in-env.sh mypy --ignore-missing-imports
language: script
types_or: [python, pyi]
files: ^supervisor/.+\.(py|pyi)$


@ -1 +0,0 @@
.github/copilot-instructions.md


@ -1,30 +1,30 @@
aiodns==3.5.0
aiohttp==3.12.15
aiohttp==3.12.13
atomicwrites-homeassistant==1.4.1
attrs==25.3.0
awesomeversion==25.5.0
blockbuster==1.5.25
blockbuster==1.5.24
brotli==1.1.0
ciso8601==2.3.2
colorlog==6.9.0
cpe==1.3.1
cryptography==45.0.5
debugpy==1.8.15
cryptography==45.0.4
debugpy==1.8.14
deepmerge==2.0
dirhash==0.5.0
docker==7.1.0
faust-cchardet==2.1.19
gitpython==3.1.45
gitpython==3.1.44
jinja2==3.1.6
log-rate-limit==1.4.2
orjson==3.11.1
orjson==3.10.18
pulsectl==24.12.0
pyudev==0.24.3
PyYAML==6.0.2
requests==2.32.4
securetar==2025.2.1
sentry-sdk==2.34.1
sentry-sdk==2.30.0
setuptools==80.9.0
voluptuous==0.15.2
dbus-fast==2.44.2
dbus-fast==2.44.1
zlib-fast==0.2.1


@ -1,16 +1,12 @@
astroid==3.3.11
coverage==7.10.1
mypy==1.17.0
astroid==3.3.10
coverage==7.9.1
pre-commit==4.2.0
pylint==3.3.7
pytest-aiohttp==1.1.0
pytest-asyncio==0.25.2
pytest-cov==6.2.1
pytest-timeout==2.4.0
pytest==8.4.1
ruff==0.12.7
pytest==8.4.0
ruff==0.11.13
time-machine==2.16.0
types-docker==7.1.0.20250705
types-pyyaml==6.0.12.20250516
types-requests==2.32.4.20250611
urllib3==2.5.0
urllib3==2.4.0


@ -1,30 +0,0 @@
#!/usr/bin/env sh
set -eu
# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"
# Activate pyenv and virtualenv if present, then run the specified command
# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
PYENV_VERSION=$(head -n 1 .python-version)
export PYENV_VERSION
fi
if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
. "${VIRTUAL_ENV}/bin/activate"
else
# other common virtualenvs
my_path=$(git rev-parse --show-toplevel)
for venv in venv .venv .; do
if [ -f "${my_path}/${venv}/bin/activate" ]; then
. "${my_path}/${venv}/bin/activate"
break
fi
done
fi
exec "$@"


@ -360,7 +360,7 @@ class Addon(AddonModel):
@property
def auto_update(self) -> bool:
"""Return if auto update is enable."""
return self.persist.get(ATTR_AUTO_UPDATE, False)
return self.persist.get(ATTR_AUTO_UPDATE, super().auto_update)
@auto_update.setter
def auto_update(self, value: bool) -> None:


@ -15,7 +15,6 @@ from ..const import (
ATTR_SQUASH,
FILE_SUFFIX_CONFIGURATION,
META_ADDON,
SOCKET_DOCKER,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..docker.interface import MAP_ARCH
@ -122,64 +121,39 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
except HassioArchNotFound:
return False
def get_docker_args(
self, version: AwesomeVersion, image_tag: str
) -> dict[str, Any]:
"""Create a dict with Docker run args."""
dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
"""Create a dict with Docker build arguments.
build_cmd = [
"docker",
"buildx",
"build",
".",
"--tag",
image_tag,
"--file",
str(dockerfile_path),
"--platform",
MAP_ARCH[self.arch],
"--pull",
]
labels = {
"io.hass.version": version,
"io.hass.arch": self.arch,
"io.hass.type": META_ADDON,
"io.hass.name": self._fix_label("name"),
"io.hass.description": self._fix_label("description"),
**self.additional_labels,
Must be run in executor.
"""
args: dict[str, Any] = {
"path": str(self.addon.path_location),
"tag": f"{image or self.addon.image}:{version!s}",
"dockerfile": str(self.get_dockerfile()),
"pull": True,
"forcerm": not self.sys_dev,
"squash": self.squash,
"platform": MAP_ARCH[self.arch],
"labels": {
"io.hass.version": version,
"io.hass.arch": self.arch,
"io.hass.type": META_ADDON,
"io.hass.name": self._fix_label("name"),
"io.hass.description": self._fix_label("description"),
**self.additional_labels,
},
"buildargs": {
"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default,
**self.additional_args,
},
}
if self.addon.url:
labels["io.hass.url"] = self.addon.url
args["labels"]["io.hass.url"] = self.addon.url
for key, value in labels.items():
build_cmd.extend(["--label", f"{key}={value}"])
build_args = {
"BUILD_FROM": self.base_image,
"BUILD_VERSION": version,
"BUILD_ARCH": self.sys_arch.default,
**self.additional_args,
}
for key, value in build_args.items():
build_cmd.extend(["--build-arg", f"{key}={value}"])
# The addon path will be mounted from the host system
addon_extern_path = self.sys_config.local_to_extern_path(
self.addon.path_location
)
return {
"command": build_cmd,
"volumes": {
SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
addon_extern_path: {"bind": "/addon", "mode": "ro"},
},
"working_dir": "/addon",
}
return args
def _fix_label(self, label_name: str) -> str:
"""Remove characters they are not supported."""


@ -266,7 +266,7 @@ class AddonManager(CoreSysAttributes):
],
on_condition=AddonsJobError,
)
async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
async def rebuild(self, slug: str) -> asyncio.Task | None:
"""Perform a rebuild of local build add-on.
Returns a Task that completes when addon has state 'started' (see addon.start)
@ -289,7 +289,7 @@ class AddonManager(CoreSysAttributes):
raise AddonsError(
"Version changed, use Update instead Rebuild", _LOGGER.error
)
if not force and not addon.need_build:
if not addon.need_build:
raise AddonsNotSupportedError(
"Can't rebuild a image based add-on", _LOGGER.error
)


@ -664,16 +664,12 @@ class AddonModel(JobGroup, ABC):
"""Validate if addon is available for current system."""
return self._validate_availability(self.data, logger=_LOGGER.error)
def __eq__(self, other: Any) -> bool:
"""Compare add-on objects."""
def __eq__(self, other):
"""Compaired add-on objects."""
if not isinstance(other, AddonModel):
return False
return self.slug == other.slug
def __hash__(self) -> int:
"""Hash for add-on objects."""
return hash(self.slug)
def _validate_availability(
self, config, *, logger: Callable[..., None] | None = None
) -> None:


@ -36,7 +36,6 @@ from ..const import (
ATTR_DNS,
ATTR_DOCKER_API,
ATTR_DOCUMENTATION,
ATTR_FORCE,
ATTR_FULL_ACCESS,
ATTR_GPIO,
ATTR_HASSIO_API,
@ -140,8 +139,6 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
SCHEMA_UNINSTALL = vol.Schema(
{vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
)
SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
# pylint: enable=no-value-for-parameter
@ -464,11 +461,7 @@ class APIAddons(CoreSysAttributes):
async def rebuild(self, request: web.Request) -> None:
"""Rebuild local build add-on."""
addon = self.get_addon_for_request(request)
body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
if start_task := await asyncio.shield(
self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
):
if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
await start_task
@api_process


@ -3,13 +3,11 @@
import asyncio
from collections.abc import Awaitable
import logging
from typing import Any, cast
from typing import Any
from aiohttp import BasicAuth, web
from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
from aiohttp.web import FileField
from aiohttp.web_exceptions import HTTPUnauthorized
from multidict import MultiDictProxy
import voluptuous as vol
from ..addons.addon import Addon
@ -53,10 +51,7 @@ class APIAuth(CoreSysAttributes):
return self.sys_auth.check_login(addon, auth.login, auth.password)
def _process_dict(
self,
request: web.Request,
addon: Addon,
data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
self, request: web.Request, addon: Addon, data: dict[str, str]
) -> Awaitable[bool]:
"""Process login with dict data.
@ -65,15 +60,7 @@ class APIAuth(CoreSysAttributes):
username = data.get("username") or data.get("user")
password = data.get("password")
# Test that we did receive strings and not something else, raise if so
try:
_ = username.encode and password.encode # type: ignore
except AttributeError:
raise HTTPUnauthorized(headers=REALM_HEADER) from None
return self.sys_auth.check_login(
addon, cast(str, username), cast(str, password)
)
return self.sys_auth.check_login(addon, username, password)
@api_process
async def auth(self, request: web.Request) -> bool:
@ -92,18 +79,13 @@ class APIAuth(CoreSysAttributes):
# Json
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
data = await request.json(loads=json_loads)
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
return await self._process_dict(request, addon, data)
# URL encoded
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
data = await request.post()
if not await self._process_dict(request, addon, data):
raise HTTPUnauthorized()
return True
return await self._process_dict(request, addon, data)
# Advertise Basic authentication by default
raise HTTPUnauthorized(headers=REALM_HEADER)
@api_process


@ -87,4 +87,4 @@ class DetectBlockingIO(StrEnum):
OFF = "off"
ON = "on"
ON_AT_STARTUP = "on-at-startup"
ON_AT_STARTUP = "on_at_startup"


@ -6,8 +6,6 @@ from typing import Any
from aiohttp import web
import voluptuous as vol
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from ..const import (
ATTR_ENABLE_IPV6,
ATTR_HOSTNAME,
@ -34,7 +32,7 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
)
# pylint: disable=no-value-for-parameter
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})
class APIDocker(CoreSysAttributes):
@ -61,17 +59,8 @@ class APIDocker(CoreSysAttributes):
"""Set docker options."""
body = await api_validate(SCHEMA_OPTIONS, request)
if (
ATTR_ENABLE_IPV6 in body
and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
):
if ATTR_ENABLE_IPV6 in body:
self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
_LOGGER.info("Host system reboot required to apply new IPv6 configuration")
self.sys_resolution.create_issue(
IssueType.REBOOT_REQUIRED,
ContextType.SYSTEM,
suggestions=[SuggestionType.EXECUTE_REBOOT],
)
await self.sys_docker.config.save_data()


@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):
def _init_header(
request: web.Request, addon: Addon, session_data: IngressSessionData | None
) -> CIMultiDict[str]:
) -> CIMultiDict | dict[str, str]:
"""Create initial header."""
headers = CIMultiDict[str]()
headers = {}
if session_data is not None:
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@ -337,7 +337,7 @@ def _init_header(
istr(HEADER_REMOTE_USER_DISPLAY_NAME),
):
continue
headers.add(name, value)
headers[name] = value
# Update X-Forwarded-For
if request.transport:
@ -348,9 +348,9 @@ def _init_header(
return headers
def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
"""Create response header."""
headers = CIMultiDict[str]()
headers = {}
for name, value in response.headers.items():
if name in (
@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
hdrs.CONTENT_ENCODING,
):
continue
headers.add(name, value)
headers[name] = value
return headers


@ -40,7 +40,7 @@ class CpuArch(CoreSysAttributes):
@property
def supervisor(self) -> str:
"""Return supervisor arch."""
return self.sys_supervisor.arch or self._default_arch
return self.sys_supervisor.arch
@property
def supported(self) -> list[str]:
@ -91,14 +91,4 @@ class CpuArch(CoreSysAttributes):
for check, value in MAP_CPU.items():
if cpu.startswith(check):
return value
if self.sys_supervisor.arch:
_LOGGER.warning(
"Unknown CPU architecture %s, falling back to Supervisor architecture.",
cpu,
)
return self.sys_supervisor.arch
_LOGGER.warning(
"Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
cpu,
)
return cpu
return self.sys_supervisor.arch


@ -3,10 +3,10 @@
import asyncio
import hashlib
import logging
from typing import Any, TypedDict, cast
from typing import Any
from .addons.addon import Addon
from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .const import ATTR_ADDON, ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
from .coresys import CoreSys, CoreSysAttributes
from .exceptions import (
AuthError,
@ -21,17 +21,6 @@ from .validate import SCHEMA_AUTH_CONFIG
_LOGGER: logging.Logger = logging.getLogger(__name__)
class BackendAuthRequest(TypedDict):
"""Model for a backend auth request.
https://github.com/home-assistant/core/blob/ed9503324d9d255e6fb077f1614fb6d55800f389/homeassistant/components/hassio/auth.py#L66-L73
"""
username: str
password: str
addon: str
class Auth(FileConfiguration, CoreSysAttributes):
"""Manage SSO for Add-ons with Home Assistant user."""
@ -85,9 +74,6 @@ class Auth(FileConfiguration, CoreSysAttributes):
"""Check username login."""
if password is None:
raise AuthError("None as password is not supported!", _LOGGER.error)
if username is None:
raise AuthError("None as username is not supported!", _LOGGER.error)
_LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)
# Get from cache
@ -117,12 +103,11 @@ class Auth(FileConfiguration, CoreSysAttributes):
async with self.sys_homeassistant.api.make_request(
"post",
"api/hassio_auth",
json=cast(
dict[str, Any],
BackendAuthRequest(
username=username, password=password, addon=addon.slug
),
),
json={
ATTR_USERNAME: username,
ATTR_PASSWORD: password,
ATTR_ADDON: addon.slug,
},
) as req:
if req.status == 200:
_LOGGER.info("Successful login for '%s'", username)


@ -63,8 +63,6 @@ from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
from .utils import password_to_key
from .validate import SCHEMA_BACKUP
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -267,7 +265,7 @@ class Backup(JobGroup):
# Compare all fields except ones about protection. Current encryption status does not affect equality
keys = self._data.keys() | other._data.keys()
for k in keys - IGNORED_COMPARISON_FIELDS:
for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}:
if (
k not in self._data
or k not in other._data
@ -579,21 +577,13 @@ class Backup(JobGroup):
@Job(name="backup_addon_save", cleanup=False)
async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
"""Store an add-on into backup."""
self.sys_jobs.current.reference = slug = addon.slug
self.sys_jobs.current.reference = addon.slug
if not self._outer_secure_tarfile:
raise RuntimeError(
"Cannot backup components without initializing backup tar"
)
# Ensure it is still installed and get current data before proceeding
if not (curr_addon := self.sys_addons.get_local_only(slug)):
_LOGGER.warning(
"Skipping backup of add-on %s because it has been uninstalled",
slug,
)
return None
tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
addon_file = self._outer_secure_tarfile.create_inner_tar(
f"./{tar_name}",
@ -602,16 +592,16 @@ class Backup(JobGroup):
)
# Take backup
try:
start_task = await curr_addon.backup(addon_file)
start_task = await addon.backup(addon_file)
except AddonsError as err:
raise BackupError(str(err)) from err
# Store to config
self._data[ATTR_ADDONS].append(
{
ATTR_SLUG: slug,
ATTR_NAME: curr_addon.name,
ATTR_VERSION: curr_addon.version,
ATTR_SLUG: addon.slug,
ATTR_NAME: addon.name,
ATTR_VERSION: addon.version,
# Bug - addon_file.size used to give us this information
# It always returns 0 in current securetar. Skipping until fixed
ATTR_SIZE: 0,
@ -931,5 +921,5 @@ class Backup(JobGroup):
Return a coroutine.
"""
return self.sys_store.update_repositories(
set(self.repositories), issue_on_error=True, replace=replace
self.repositories, add_with_errors=True, replace=replace
)


@ -285,7 +285,7 @@ def check_environment() -> None:
_LOGGER.critical("Can't find Docker socket!")
def register_signal_handlers(loop: asyncio.AbstractEventLoop, coresys: CoreSys) -> None:
def register_signal_handlers(loop: asyncio.BaseEventLoop, coresys: CoreSys) -> None:
"""Register SIGTERM, SIGHUP and SIGKILL to stop the Supervisor."""
try:
loop.add_signal_handler(


@ -2,7 +2,7 @@
from __future__ import annotations
from collections.abc import Callable, Coroutine
from collections.abc import Awaitable, Callable
import logging
from typing import Any
@ -19,7 +19,7 @@ class EventListener:
"""Event listener."""
event_type: BusEvent = attr.ib()
callback: Callable[[Any], Coroutine[Any, Any, None]] = attr.ib()
callback: Callable[[Any], Awaitable[None]] = attr.ib()
class Bus(CoreSysAttributes):
@ -31,7 +31,7 @@ class Bus(CoreSysAttributes):
self._listeners: dict[BusEvent, list[EventListener]] = {}
def register_event(
self, event: BusEvent, callback: Callable[[Any], Coroutine[Any, Any, None]]
self, event: BusEvent, callback: Callable[[Any], Awaitable[None]]
) -> EventListener:
"""Register callback for an event."""
listener = EventListener(event, callback)


@ -66,7 +66,7 @@ _UTC = "UTC"
class CoreConfig(FileConfiguration):
"""Hold all core config data."""
def __init__(self) -> None:
def __init__(self):
"""Initialize config object."""
super().__init__(FILE_HASSIO_CONFIG, SCHEMA_SUPERVISOR_CONFIG)
self._timezone_tzinfo: tzinfo | None = None


@ -5,7 +5,7 @@ from enum import StrEnum
from ipaddress import IPv4Network, IPv6Network
from pathlib import Path
from sys import version_info as systemversion
from typing import NotRequired, Self, TypedDict
from typing import Self
from aiohttp import __version__ as aiohttpversion
@ -188,7 +188,6 @@ ATTR_FEATURES = "features"
ATTR_FILENAME = "filename"
ATTR_FLAGS = "flags"
ATTR_FOLDERS = "folders"
ATTR_FORCE = "force"
ATTR_FORCE_SECURITY = "force_security"
ATTR_FREQUENCY = "frequency"
ATTR_FULL_ACCESS = "full_access"
@ -416,12 +415,10 @@ class AddonBoot(StrEnum):
MANUAL = "manual"
@classmethod
def _missing_(cls, value: object) -> Self | None:
def _missing_(cls, value: str) -> Self | None:
"""Convert 'forced' config values to their counterpart."""
if value == AddonBootConfig.MANUAL_ONLY:
for member in cls:
if member == AddonBoot.MANUAL:
return member
return AddonBoot.MANUAL
return None
@ -518,16 +515,6 @@ class CpuArch(StrEnum):
AMD64 = "amd64"
class IngressSessionDataUserDict(TypedDict):
"""Response object for ingress session user."""
id: str
username: NotRequired[str | None]
# Name is an alias for displayname, only one should be used
displayname: NotRequired[str | None]
name: NotRequired[str | None]
@dataclass
class IngressSessionDataUser:
"""Format of an IngressSessionDataUser object."""
@ -536,42 +523,38 @@ class IngressSessionDataUser:
display_name: str | None = None
username: str | None = None
def to_dict(self) -> IngressSessionDataUserDict:
def to_dict(self) -> dict[str, str | None]:
"""Get dictionary representation."""
return IngressSessionDataUserDict(
id=self.id, displayname=self.display_name, username=self.username
)
return {
ATTR_ID: self.id,
ATTR_DISPLAYNAME: self.display_name,
ATTR_USERNAME: self.username,
}
@classmethod
def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
def from_dict(cls, data: dict[str, str | None]) -> Self:
"""Return object from dictionary representation."""
return cls(
id=data["id"],
display_name=data.get("displayname") or data.get("name"),
username=data.get("username"),
id=data[ATTR_ID],
display_name=data.get(ATTR_DISPLAYNAME),
username=data.get(ATTR_USERNAME),
)
class IngressSessionDataDict(TypedDict):
"""Response object for ingress session data."""
user: IngressSessionDataUserDict
@dataclass
class IngressSessionData:
"""Format of an IngressSessionData object."""
user: IngressSessionDataUser
def to_dict(self) -> IngressSessionDataDict:
def to_dict(self) -> dict[str, dict[str, str | None]]:
"""Get dictionary representation."""
return IngressSessionDataDict(user=self.user.to_dict())
return {ATTR_USER: self.user.to_dict()}
@classmethod
def from_dict(cls, data: IngressSessionDataDict) -> Self:
def from_dict(cls, data: dict[str, dict[str, str | None]]) -> Self:
"""Return object from dictionary representation."""
return cls(user=IngressSessionDataUser.from_dict(data["user"]))
return cls(user=IngressSessionDataUser.from_dict(data[ATTR_USER]))
STARTING_STATES = [


@ -28,7 +28,7 @@ from .homeassistant.core import LANDINGPAGE
from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from .utils.dt import utcnow
from .utils.sentry import async_capture_exception
from .utils.whoami import retrieve_whoami
from .utils.whoami import WhoamiData, retrieve_whoami
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -36,7 +36,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class Core(CoreSysAttributes):
"""Main object of Supervisor."""
def __init__(self, coresys: CoreSys) -> None:
def __init__(self, coresys: CoreSys):
"""Initialize Supervisor object."""
self.coresys: CoreSys = coresys
self._state: CoreState = CoreState.INITIALIZE
@ -91,7 +91,7 @@ class Core(CoreSysAttributes):
"info", {"state": self._state}
)
async def connect(self) -> None:
async def connect(self):
"""Connect Supervisor container."""
# Load information from container
await self.sys_supervisor.load()
@ -120,7 +120,7 @@ class Core(CoreSysAttributes):
self.sys_config.version = self.sys_supervisor.version
await self.sys_config.save_data()
async def setup(self) -> None:
async def setup(self):
"""Start setting up supervisor orchestration."""
await self.set_state(CoreState.SETUP)
@ -216,7 +216,7 @@ class Core(CoreSysAttributes):
# Evaluate the system
await self.sys_resolution.evaluate.evaluate_system()
async def start(self) -> None:
async def start(self):
"""Start Supervisor orchestration."""
await self.set_state(CoreState.STARTUP)
@ -310,7 +310,7 @@ class Core(CoreSysAttributes):
)
_LOGGER.info("Supervisor is up and running")
async def stop(self) -> None:
async def stop(self):
"""Stop a running orchestration."""
# store new last boot / prevent time adjustments
if self.state in (CoreState.RUNNING, CoreState.SHUTDOWN):
@ -358,7 +358,7 @@ class Core(CoreSysAttributes):
_LOGGER.info("Supervisor is down - %d", self.exit_code)
self.sys_loop.stop()
async def shutdown(self, *, remove_homeassistant_container: bool = False) -> None:
async def shutdown(self, *, remove_homeassistant_container: bool = False):
"""Shutdown all running containers in correct order."""
# don't process scheduler anymore
if self.state == CoreState.RUNNING:
@ -382,15 +382,19 @@ class Core(CoreSysAttributes):
if self.state in (CoreState.STOPPING, CoreState.SHUTDOWN):
await self.sys_plugins.shutdown()
async def _update_last_boot(self) -> None:
async def _update_last_boot(self):
"""Update last boot time."""
if not (last_boot := await self.sys_hardware.helper.last_boot()):
_LOGGER.error("Could not update last boot information!")
return
self.sys_config.last_boot = last_boot
self.sys_config.last_boot = await self.sys_hardware.helper.last_boot()
await self.sys_config.save_data()
async def _adjust_system_datetime(self) -> None:
async def _retrieve_whoami(self, with_ssl: bool) -> WhoamiData | None:
try:
return await retrieve_whoami(self.sys_websession, with_ssl)
except WhoamiSSLError:
_LOGGER.info("Whoami service SSL error")
return None
async def _adjust_system_datetime(self):
"""Adjust system time/date on startup."""
# If no timezone is detect or set
# If we are not connected or time sync
@ -402,13 +406,11 @@ class Core(CoreSysAttributes):
# Get Timezone data
try:
try:
data = await retrieve_whoami(self.sys_websession, True)
except WhoamiSSLError:
# SSL Date Issue & possible time drift
_LOGGER.info("Whoami service SSL error")
data = await retrieve_whoami(self.sys_websession, False)
data = await self._retrieve_whoami(True)
# SSL Date Issue & possible time drift
if not data:
data = await self._retrieve_whoami(False)
except WhoamiError as err:
_LOGGER.warning("Can't adjust Time/Date settings: %s", err)
return
@ -424,7 +426,7 @@ class Core(CoreSysAttributes):
await self.sys_host.control.set_datetime(data.dt_utc)
await self.sys_supervisor.check_connectivity()
async def repair(self) -> None:
async def repair(self):
"""Repair system integrity."""
_LOGGER.info("Starting repair of Supervisor Environment")
await self.sys_run_in_executor(self.sys_docker.repair)


@ -62,17 +62,17 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class CoreSys:
"""Class that handle all shared data."""
def __init__(self) -> None:
def __init__(self):
"""Initialize coresys."""
# Static attributes protected
self._machine_id: str | None = None
self._machine: str | None = None
# External objects
self._loop = asyncio.get_running_loop()
self._loop: asyncio.BaseEventLoop = asyncio.get_running_loop()
# Global objects
self._config = CoreConfig()
self._config: CoreConfig = CoreConfig()
# Internal objects pointers
self._docker: DockerAPI | None = None
@ -122,12 +122,8 @@ class CoreSys:
if self._websession:
await self._websession.close()
resolver: aiohttp.abc.AbstractResolver
try:
# Use "unused" kwargs to force dedicated resolver instance. Otherwise
# aiodns won't reload /etc/resolv.conf which we need to make our connection
# check work in all cases.
resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
resolver = aiohttp.AsyncResolver(loop=self.loop)
# pylint: disable=protected-access
_LOGGER.debug(
"Initializing ClientSession with AsyncResolver. Using nameservers %s",
@ -148,7 +144,7 @@ class CoreSys:
self._websession = session
async def init_machine(self) -> None:
async def init_machine(self):
"""Initialize machine information."""
def _load_machine_id() -> str | None:
@ -192,7 +188,7 @@ class CoreSys:
return UTC
@property
def loop(self) -> asyncio.AbstractEventLoop:
def loop(self) -> asyncio.BaseEventLoop:
"""Return loop object."""
return self._loop
@ -590,7 +586,7 @@ class CoreSys:
return self._machine_id
@machine_id.setter
def machine_id(self, value: str | None) -> None:
def machine_id(self, value: str) -> None:
"""Set a machine-id type string."""
if self._machine_id:
raise RuntimeError("Machine-ID type already set!")
@ -612,8 +608,8 @@ class CoreSys:
self._set_task_context.append(callback)
def run_in_executor(
self, funct: Callable[..., T], *args, **kwargs
) -> asyncio.Future[T]:
self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
) -> Coroutine[Any, Any, T]:
"""Add an job to the executor pool."""
if kwargs:
funct = partial(funct, **kwargs)
@ -635,8 +631,8 @@ class CoreSys:
self,
delay: float,
funct: Callable[..., Any],
*args,
**kwargs,
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task after a delay."""
if kwargs:
@ -648,8 +644,8 @@ class CoreSys:
self,
when: datetime,
funct: Callable[..., Any],
*args,
**kwargs,
*args: tuple[Any],
**kwargs: dict[str, Any],
) -> asyncio.TimerHandle:
"""Start a task at the specified datetime."""
if kwargs:
@ -686,7 +682,7 @@ class CoreSysAttributes:
return self.coresys.dev
@property
def sys_loop(self) -> asyncio.AbstractEventLoop:
def sys_loop(self) -> asyncio.BaseEventLoop:
"""Return loop object."""
return self.coresys.loop
@ -836,7 +832,7 @@ class CoreSysAttributes:
def sys_run_in_executor(
self, funct: Callable[..., T], *args, **kwargs
) -> asyncio.Future[T]:
) -> Coroutine[Any, Any, T]:
"""Add a job to the executor pool."""
return self.coresys.run_in_executor(funct, *args, **kwargs)


@ -117,7 +117,7 @@ class DBusInterfaceProxy(DBusInterface, ABC):
"""Initialize object with already connected dbus object."""
await super().initialize(connected_dbus)
if not self.connected_dbus.supports_properties:
if not self.connected_dbus.properties:
self.disconnect()
raise DBusInterfaceError(
f"D-Bus object {self.object_path} is not usable, introspection is missing required properties interface"


@ -259,7 +259,7 @@ class NetworkManager(DBusInterfaceProxy):
else:
interface.primary = False
interfaces[interface.interface_name] = interface
interfaces[interface.name] = interface
interfaces[interface.hw_address] = interface
# Disconnect removed devices


@ -49,7 +49,7 @@ class NetworkInterface(DBusInterfaceProxy):
@property
@dbus_property
def interface_name(self) -> str:
def name(self) -> str:
"""Return interface name."""
return self.properties[DBUS_ATTR_DEVICE_INTERFACE]


@ -28,8 +28,6 @@ class DeviceSpecificationDataType(TypedDict, total=False):
path: str
label: str
uuid: str
partuuid: str
partlabel: str
@dataclass(slots=True)
@ -42,8 +40,6 @@ class DeviceSpecification:
path: Path | None = None
label: str | None = None
uuid: str | None = None
partuuid: str | None = None
partlabel: str | None = None
@staticmethod
def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
@ -52,8 +48,6 @@ class DeviceSpecification:
path=Path(data["path"]) if "path" in data else None,
label=data.get("label"),
uuid=data.get("uuid"),
partuuid=data.get("partuuid"),
partlabel=data.get("partlabel"),
)
def to_dict(self) -> dict[str, Variant]:
@ -62,8 +56,6 @@ class DeviceSpecification:
"path": Variant("s", self.path.as_posix()) if self.path else None,
"label": _optional_variant("s", self.label),
"uuid": _optional_variant("s", self.uuid),
"partuuid": _optional_variant("s", self.partuuid),
"partlabel": _optional_variant("s", self.partlabel),
}
return {k: v for k, v in data.items() if v}


@ -12,7 +12,6 @@ from typing import TYPE_CHECKING, cast
from attr import evolve
from awesomeversion import AwesomeVersion
import docker
import docker.errors
from docker.types import Mount
import requests
@ -44,7 +43,6 @@ from ..jobs.decorator import Job
from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
from ..utils.sentry import async_capture_exception
from .const import (
ADDON_BUILDER_IMAGE,
ENV_TIME,
ENV_TOKEN,
ENV_TOKEN_OLD,
@ -346,7 +344,7 @@ class DockerAddon(DockerInterface):
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.addon.path_extern_data.as_posix(),
target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
read_only=False,
@ -357,7 +355,7 @@ class DockerAddon(DockerInterface):
if MappingType.CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.CONFIG].path
or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),
@ -370,7 +368,7 @@ class DockerAddon(DockerInterface):
if self.addon.addon_config_used:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.addon.path_extern_config.as_posix(),
target=addon_mapping[MappingType.ADDON_CONFIG].path
or PATH_PUBLIC_CONFIG.as_posix(),
@ -382,7 +380,7 @@ class DockerAddon(DockerInterface):
if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
or PATH_HOMEASSISTANT_CONFIG.as_posix(),
@ -395,7 +393,7 @@ class DockerAddon(DockerInterface):
if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_addon_configs.as_posix(),
target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
or PATH_ALL_ADDON_CONFIGS.as_posix(),
@ -406,7 +404,7 @@ class DockerAddon(DockerInterface):
if MappingType.SSL in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
read_only=addon_mapping[MappingType.SSL].read_only,
@ -416,7 +414,7 @@ class DockerAddon(DockerInterface):
if MappingType.ADDONS in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_addons_local.as_posix(),
target=addon_mapping[MappingType.ADDONS].path
or PATH_LOCAL_ADDONS.as_posix(),
@ -427,7 +425,7 @@ class DockerAddon(DockerInterface):
if MappingType.BACKUP in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_backup.as_posix(),
target=addon_mapping[MappingType.BACKUP].path
or PATH_BACKUP.as_posix(),
@ -438,7 +436,7 @@ class DockerAddon(DockerInterface):
if MappingType.SHARE in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target=addon_mapping[MappingType.SHARE].path
or PATH_SHARE.as_posix(),
@ -450,7 +448,7 @@ class DockerAddon(DockerInterface):
if MappingType.MEDIA in addon_mapping:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target=addon_mapping[MappingType.MEDIA].path
or PATH_MEDIA.as_posix(),
@ -468,7 +466,7 @@ class DockerAddon(DockerInterface):
continue
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=gpio_path,
target=gpio_path,
read_only=False,
@ -479,7 +477,7 @@ class DockerAddon(DockerInterface):
if self.addon.with_devicetree:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source="/sys/firmware/devicetree/base",
target="/device-tree",
read_only=True,
@ -494,7 +492,7 @@ class DockerAddon(DockerInterface):
if self.addon.with_kernel_modules:
mounts.append(
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source="/lib/modules",
target="/lib/modules",
read_only=True,
@ -513,19 +511,19 @@ class DockerAddon(DockerInterface):
if self.addon.with_audio:
mounts += [
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.addon.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@ -536,13 +534,13 @@ class DockerAddon(DockerInterface):
if self.addon.with_journald:
mounts += [
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
read_only=True,
@ -675,41 +673,10 @@ class DockerAddon(DockerInterface):
_LOGGER.info("Starting build for %s:%s", self.image, version)
def build_image():
if build_env.squash:
_LOGGER.warning(
"Ignoring squash build option for %s as Docker BuildKit does not support it.",
self.addon.slug,
)
addon_image_tag = f"{image or self.addon.image}:{version!s}"
docker_version = self.sys_docker.info.version
builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
builder_name = f"addon_builder_{self.addon.slug}"
# Remove dangling builder container if it exists by any chance
# E.g. because of an abrupt host shutdown/reboot during a build
with suppress(docker.errors.NotFound):
self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
result = self.sys_docker.run_command(
ADDON_BUILDER_IMAGE,
version=builder_version_tag,
name=builder_name,
**build_env.get_docker_args(version, addon_image_tag),
return self.sys_docker.images.build(
use_config_proxy=False, **build_env.get_docker_args(version, image)
)
logs = result.output.decode("utf-8")
if result.exit_code != 0:
error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
raise docker.errors.DockerException(error_message)
addon_image = self.sys_docker.images.get(addon_image_tag)
return addon_image, logs
try:
docker_image, log = await self.sys_run_in_executor(build_image)
@ -720,6 +687,15 @@ class DockerAddon(DockerInterface):
except (docker.errors.DockerException, requests.RequestException) as err:
_LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
if hasattr(err, "build_log"):
log = "\n".join(
[
x["stream"]
for x in err.build_log # pylint: disable=no-member
if isinstance(x, dict) and "stream" in x
]
)
_LOGGER.error("Build log: \n%s", log)
raise DockerError() from err
_LOGGER.info("Build %s:%s done", self.image, version)

View File

@ -47,7 +47,7 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
mounts = [
MOUNT_DEV,
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_audio.as_posix(),
target=PATH_PRIVATE_DATA.as_posix(),
read_only=False,

View File

@ -74,26 +74,24 @@ ENV_TOKEN_OLD = "HASSIO_TOKEN"
LABEL_MANAGED = "supervisor_managed"
MOUNT_DBUS = Mount(
type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
)
MOUNT_DEV = Mount(
type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
)
MOUNT_DEV = Mount(type=MountType.BIND, source="/dev", target="/dev", read_only=True)
MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
MOUNT_DOCKER = Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source="/run/docker.sock",
target="/run/docker.sock",
read_only=True,
)
MOUNT_MACHINE_ID = Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=MACHINE_ID.as_posix(),
target=MACHINE_ID.as_posix(),
read_only=True,
)
MOUNT_UDEV = Mount(
type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
)
PATH_PRIVATE_DATA = PurePath("/data")
@ -107,6 +105,3 @@ PATH_BACKUP = PurePath("/backup")
PATH_SHARE = PurePath("/share")
PATH_MEDIA = PurePath("/media")
PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
# https://hub.docker.com/_/docker
ADDON_BUILDER_IMAGE = "docker.io/library/docker"
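Both spellings in the mount churn above produce identical payloads when `MountType` is a `StrEnum`, because its members are `str` subclasses that docker-py accepts directly. A quick sketch under that assumption:

```
from enum import StrEnum

from docker.types import Mount


class MountType(StrEnum):
    BIND = "bind"


# Mount is a dict subclass; the enum member and its .value serialize the same.
m1 = Mount(target="/dev", source="/dev", type=MountType.BIND, read_only=True)
m2 = Mount(target="/dev", source="/dev", type=MountType.BIND.value, read_only=True)
assert m1 == m2
```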

View File

@ -48,7 +48,7 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
environment={ENV_TIME: self.sys_timezone},
mounts=[
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_dns.as_posix(),
target="/config",
read_only=False,

View File

@ -99,7 +99,7 @@ class DockerHomeAssistant(DockerInterface):
MOUNT_UDEV,
# HA config folder
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target=PATH_PUBLIC_CONFIG.as_posix(),
read_only=False,
@ -112,20 +112,20 @@ class DockerHomeAssistant(DockerInterface):
[
# All other folders
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target=PATH_SSL.as_posix(),
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target=PATH_SHARE.as_posix(),
read_only=False,
propagation=PropagationMode.RSLAVE.value,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_media.as_posix(),
target=PATH_MEDIA.as_posix(),
read_only=False,
@ -133,19 +133,19 @@ class DockerHomeAssistant(DockerInterface):
),
# Configuration audio
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_homeassistant.path_extern_pulse.as_posix(),
target="/etc/pulse/client.conf",
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
target="/run/audio",
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_plugins.audio.path_extern_asound.as_posix(),
target="/etc/asound.conf",
read_only=True,
@ -213,21 +213,24 @@ class DockerHomeAssistant(DockerInterface):
privileged=True,
init=True,
entrypoint=[],
detach=True,
stdout=True,
stderr=True,
mounts=[
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_homeassistant.as_posix(),
target="/config",
read_only=False,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_ssl.as_posix(),
target="/ssl",
read_only=True,
),
Mount(
type=MountType.BIND.value,
type=MountType.BIND,
source=self.sys_config.path_extern_share.as_posix(),
target="/share",
read_only=False,

View File

@ -95,12 +95,12 @@ class DockerConfig(FileConfiguration):
super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)
@property
def enable_ipv6(self) -> bool | None:
def enable_ipv6(self) -> bool:
"""Return IPv6 configuration for docker network."""
return self._data.get(ATTR_ENABLE_IPV6, None)
return self._data.get(ATTR_ENABLE_IPV6, False)
@enable_ipv6.setter
def enable_ipv6(self, value: bool | None) -> None:
def enable_ipv6(self, value: bool) -> None:
"""Set IPv6 configuration for docker network."""
self._data[ATTR_ENABLE_IPV6] = value
@ -294,8 +294,8 @@ class DockerAPI:
def run_command(
self,
image: str,
version: str = "latest",
command: str | list[str] | None = None,
tag: str = "latest",
command: str | None = None,
**kwargs: Any,
) -> CommandReturn:
"""Create a temporary container and run command.
@ -305,15 +305,12 @@ class DockerAPI:
stdout = kwargs.get("stdout", True)
stderr = kwargs.get("stderr", True)
image_with_tag = f"{image}:{version}"
_LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
_LOGGER.info("Runing command '%s' on %s", command, image)
container = None
try:
container = self.docker.containers.run(
image_with_tag,
f"{image}:{tag}",
command=command,
detach=True,
network=self.network.name,
use_config_proxy=False,
**kwargs,
@ -330,9 +327,9 @@ class DockerAPI:
# cleanup container
if container:
with suppress(docker_errors.DockerException, requests.RequestException):
container.remove(force=True, v=True)
container.remove(force=True)
return CommandReturn(result["StatusCode"], output)
return CommandReturn(result.get("StatusCode"), output)
def repair(self) -> None:
"""Repair local docker overlayfs2 issues."""
@ -445,7 +442,7 @@ class DockerAPI:
if remove_container:
with suppress(DockerException, requests.RequestException):
_LOGGER.info("Cleaning %s application", name)
docker_container.remove(force=True, v=True)
docker_container.remove(force=True)
def start_container(self, name: str) -> None:
"""Start Docker container."""

View File

@ -47,8 +47,6 @@ DOCKER_NETWORK_PARAMS = {
"options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
}
DOCKER_ENABLE_IPV6_DEFAULT = True
class DockerNetwork:
"""Internal Supervisor Network.
@ -59,9 +57,9 @@ class DockerNetwork:
def __init__(self, docker_client: docker.DockerClient):
"""Initialize internal Supervisor network."""
self.docker: docker.DockerClient = docker_client
self._network: docker.models.networks.Network
self._network: docker.models.networks.Network | None = None
async def post_init(self, enable_ipv6: bool | None = None) -> Self:
async def post_init(self, enable_ipv6: bool = False) -> Self:
"""Post init actions that must be done in event loop."""
self._network = await asyncio.get_running_loop().run_in_executor(
None, self._get_network, enable_ipv6
@ -113,24 +111,16 @@ class DockerNetwork:
"""Return observer of the network."""
return DOCKER_IPV4_NETWORK_MASK[6]
def _get_network(
self, enable_ipv6: bool | None = None
) -> docker.models.networks.Network:
def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network:
"""Get supervisor network."""
try:
if network := self.docker.networks.get(DOCKER_NETWORK):
current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
# If the network exists and we don't have an explicit setting,
# simply stick with what we have.
if enable_ipv6 is None or current_ipv6 == enable_ipv6:
if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6:
return network
# We have an explicit setting which differs from the current state.
_LOGGER.info(
"Migrating Supervisor network to %s",
"IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
)
if (containers := network.containers) and (
containers_all := all(
container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
@ -144,7 +134,6 @@ class DockerNetwork:
requests.RequestException,
):
network.disconnect(container, force=True)
if not containers or containers_all:
try:
network.remove()
@ -162,12 +151,10 @@ class DockerNetwork:
_LOGGER.info("Can't find Supervisor network, creating a new network")
network_params = DOCKER_NETWORK_PARAMS.copy()
network_params[ATTR_ENABLE_IPV6] = (
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
)
network_params[ATTR_ENABLE_IPV6] = enable_ipv6
try:
self._network = self.docker.networks.create(**network_params) # type: ignore
self._network = self.docker.networks.create(**network_params)
except docker.errors.APIError as err:
raise DockerError(
f"Can't create Supervisor network: {err}", _LOGGER.error

View File

@ -87,19 +87,19 @@ class HomeAssistantCore(JobGroup):
try:
# Evaluate Version if we lost this information
if self.sys_homeassistant.version:
version = self.sys_homeassistant.version
else:
if not self.sys_homeassistant.version:
self.sys_homeassistant.version = (
version
) = await self.instance.get_latest_version()
await self.instance.get_latest_version()
)
await self.instance.attach(version=version, skip_state_event_if_down=True)
await self.instance.attach(
version=self.sys_homeassistant.version, skip_state_event_if_down=True
)
# Ensure we are using correct image for this system (unless user has overridden it)
if not self.sys_homeassistant.override_image:
await self.instance.check_image(
version, self.sys_homeassistant.default_image
self.sys_homeassistant.version, self.sys_homeassistant.default_image
)
self.sys_homeassistant.set_image(self.sys_homeassistant.default_image)
except DockerError:
@ -108,7 +108,7 @@ class HomeAssistantCore(JobGroup):
)
await self.install_landingpage()
else:
self.sys_homeassistant.version = self.instance.version or version
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.set_image(self.instance.image)
await self.sys_homeassistant.save_data()
@ -182,13 +182,12 @@ class HomeAssistantCore(JobGroup):
if not self.sys_homeassistant.latest_version:
await self.sys_updater.reload()
if to_version := self.sys_homeassistant.latest_version:
if self.sys_homeassistant.latest_version:
try:
await self.instance.update(
to_version,
self.sys_homeassistant.latest_version,
image=self.sys_updater.image_homeassistant,
)
self.sys_homeassistant.version = self.instance.version or to_version
break
except (DockerError, JobException):
pass
@ -199,6 +198,7 @@ class HomeAssistantCore(JobGroup):
await asyncio.sleep(30)
_LOGGER.info("Home Assistant docker now installed")
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
await self.sys_homeassistant.save_data()
@ -231,8 +231,8 @@ class HomeAssistantCore(JobGroup):
backup: bool | None = False,
) -> None:
"""Update HomeAssistant version."""
to_version = version or self.sys_homeassistant.latest_version
if not to_version:
version = version or self.sys_homeassistant.latest_version
if not version:
raise HomeAssistantUpdateError(
"Cannot determine latest version of Home Assistant for update",
_LOGGER.error,
@ -243,9 +243,9 @@ class HomeAssistantCore(JobGroup):
running = await self.instance.is_running()
exists = await self.instance.exists()
if exists and to_version == self.instance.version:
if exists and version == self.instance.version:
raise HomeAssistantUpdateError(
f"Version {to_version!s} is already installed", _LOGGER.warning
f"Version {version!s} is already installed", _LOGGER.warning
)
if backup:
@ -268,7 +268,7 @@ class HomeAssistantCore(JobGroup):
"Updating Home Assistant image failed", _LOGGER.warning
) from err
self.sys_homeassistant.version = self.instance.version or to_version
self.sys_homeassistant.version = self.instance.version
self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
if running:
@ -282,7 +282,7 @@ class HomeAssistantCore(JobGroup):
# Update Home Assistant
with suppress(HomeAssistantError):
await _update(to_version)
await _update(version)
if not self.error_state and rollback:
try:

View File

@ -35,7 +35,6 @@ from ..const import (
FILE_HASSIO_HOMEASSISTANT,
BusEvent,
IngressSessionDataUser,
IngressSessionDataUserDict,
)
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
@ -558,11 +557,18 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
async def get_users(self) -> list[IngressSessionDataUser]:
"""Get list of all configured users."""
list_of_users: (
list[IngressSessionDataUserDict] | None
list[dict[str, Any]] | None
) = await self.sys_homeassistant.websocket.async_send_command(
{ATTR_TYPE: "config/auth/list"}
)
if list_of_users:
return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
return [
IngressSessionDataUser(
id=data["id"],
username=data.get("username"),
display_name=data.get("name"),
)
for data in list_of_users
]
return []
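`IngressSessionDataUser.from_dict` on the main side is not shown in this diff; a plausible shape, inferred from the explicit field mapping on the 2025.06.0 side (`id`, `username`, and `name` → `display_name`):

```
from dataclasses import dataclass
from typing import Any


@dataclass(slots=True, frozen=True)
class IngressSessionDataUser:
    """Ingress session data user (assumed shape)."""

    id: str
    username: str | None = None
    display_name: str | None = None

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "IngressSessionDataUser":
        return cls(
            id=data["id"],
            username=data.get("username"),
            display_name=data.get("name"),  # Core reports the label as "name"
        )
```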

View File

@ -175,7 +175,7 @@ class Interface:
)
return Interface(
name=inet.interface_name,
name=inet.name,
mac=inet.hw_address,
path=inet.path,
enabled=inet.settings is not None,
@ -286,7 +286,7 @@ class Interface:
_LOGGER.warning(
"Auth method %s for network interface %s unsupported, skipping",
inet.settings.wireless_security.key_mgmt,
inet.interface_name,
inet.name,
)
return None

View File

@ -8,11 +8,11 @@ from typing import Any
from ..const import ATTR_HOST_INTERNET
from ..coresys import CoreSys, CoreSysAttributes
from ..dbus.const import (
DBUS_ATTR_CONFIGURATION,
DBUS_ATTR_CONNECTION_ENABLED,
DBUS_ATTR_CONNECTIVITY,
DBUS_IFACE_DNS,
DBUS_ATTR_PRIMARY_CONNECTION,
DBUS_IFACE_NM,
DBUS_OBJECT_BASE,
DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
ConnectionStateType,
ConnectivityState,
@ -46,8 +46,6 @@ class NetworkManager(CoreSysAttributes):
"""Initialize system center handling."""
self.coresys: CoreSys = coresys
self._connectivity: bool | None = None
# No event needed on initial change (NetworkManager initializes with an empty list)
self._dns_configuration: list = []
@property
def connectivity(self) -> bool | None:
@ -140,12 +138,8 @@ class NetworkManager(CoreSysAttributes):
]
)
self.sys_dbus.network.dbus.properties.on(
"properties_changed", self._check_connectivity_changed
)
self.sys_dbus.network.dns.dbus.properties.on(
"properties_changed", self._check_dns_changed
self.sys_dbus.network.dbus.properties.on_properties_changed(
self._check_connectivity_changed
)
async def _check_connectivity_changed(
@ -158,6 +152,16 @@ class NetworkManager(CoreSysAttributes):
connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)
# This potentially updated the DNS configuration. Make sure the DNS plug-in
# picks up the latest settings.
if (
DBUS_ATTR_PRIMARY_CONNECTION in changed
and changed[DBUS_ATTR_PRIMARY_CONNECTION]
and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
and await self.sys_plugins.dns.is_running()
):
await self.sys_plugins.dns.restart()
if (
connectivity_check is True
or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@ -171,20 +175,6 @@ class NetworkManager(CoreSysAttributes):
elif connectivity is not None:
self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL
async def _check_dns_changed(
self, interface: str, changed: dict[str, Any], invalidated: list[str]
):
"""Check if DNS properties have changed."""
if interface != DBUS_IFACE_DNS:
return
if (
DBUS_ATTR_CONFIGURATION in changed
and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
):
self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
self.sys_plugins.dns.notify_locals_changed()
async def update(self, *, force_connectivity_check: bool = False):
"""Update properties over dbus."""
_LOGGER.info("Updating local network information")

View File

@ -12,7 +12,6 @@ from .const import (
ATTR_SESSION_DATA,
FILE_HASSIO_INGRESS,
IngressSessionData,
IngressSessionDataDict,
)
from .coresys import CoreSys, CoreSysAttributes
from .utils import check_port
@ -50,7 +49,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
return self._data[ATTR_SESSION]
@property
def sessions_data(self) -> dict[str, IngressSessionDataDict]:
def sessions_data(self) -> dict[str, dict[str, str | None]]:
"""Return sessions_data."""
return self._data[ATTR_SESSION_DATA]
@ -90,7 +89,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
now = utcnow()
sessions = {}
sessions_data: dict[str, IngressSessionDataDict] = {}
sessions_data: dict[str, dict[str, str | None]] = {}
for session, valid in self.sessions.items():
# check if timestamp valid, to avoid crash on malformed timestamp
try:
@ -119,8 +118,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
# Read all ingress token and build a map
for addon in self.addons:
if addon.ingress_token:
self.tokens[addon.ingress_token] = addon.slug
self.tokens[addon.ingress_token] = addon.slug
def create_session(self, data: IngressSessionData | None = None) -> str:
"""Create new session."""
@ -143,7 +141,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
try:
valid_until = utc_from_timestamp(self.sessions[session])
except OverflowError:
self.sessions[session] = (utcnow() + timedelta(minutes=15)).timestamp()
self.sessions[session] = utcnow() + timedelta(minutes=15)
return True
# Is still valid?

View File

@ -34,60 +34,8 @@ class JobCondition(StrEnum):
SUPERVISOR_UPDATED = "supervisor_updated"
class JobConcurrency(StrEnum):
"""Job concurrency control.
Controls how many instances of a job can run simultaneously.
Individual Concurrency (applies to each method separately):
- REJECT: Fail immediately if another instance is already running
- QUEUE: Wait for the current instance to finish, then run
Group Concurrency (applies across all methods on a JobGroup):
- GROUP_REJECT: Fail if ANY job is running on the JobGroup
- GROUP_QUEUE: Wait for ANY running job on the JobGroup to finish
JobGroup Behavior:
- All methods on the same JobGroup instance share a single lock
- Methods can call other methods on the same group without deadlock
- Uses the JobGroup.group_name for coordination
- Requires the class to inherit from JobGroup
"""
REJECT = "reject" # Fail if already running (was ONCE)
QUEUE = "queue" # Wait if already running (was SINGLE_WAIT)
GROUP_REJECT = "group_reject" # Was GROUP_ONCE
GROUP_QUEUE = "group_queue" # Was GROUP_WAIT
class JobThrottle(StrEnum):
"""Job throttling control.
Controls how frequently jobs can be executed.
Individual Throttling (each method has its own throttle state):
- THROTTLE: Skip execution if called within throttle_period
- RATE_LIMIT: Allow up to throttle_max_calls within throttle_period, then fail
Group Throttling (all methods on a JobGroup share throttle state):
- GROUP_THROTTLE: Skip if ANY method was called within throttle_period
- GROUP_RATE_LIMIT: Allow up to throttle_max_calls total across ALL methods
JobGroup Behavior:
- All methods on the same JobGroup instance share throttle counters/timers
- Uses the JobGroup.group_name as the key for tracking state
- If one method is throttled, other methods may also be throttled
- Requires the class to inherit from JobGroup
"""
THROTTLE = "throttle" # Skip if called too frequently
RATE_LIMIT = "rate_limit" # Rate limiting with max calls per period
GROUP_THROTTLE = "group_throttle" # Group version of THROTTLE
GROUP_RATE_LIMIT = "group_rate_limit" # Group version of RATE_LIMIT
class JobExecutionLimit(StrEnum):
"""Job Execution limits - DEPRECATED: Use JobConcurrency and JobThrottle instead."""
"""Job Execution limits."""
ONCE = "once"
SINGLE_WAIT = "single_wait"
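A hypothetical usage sketch of the two new knobs documented above, assuming the imports from this module; the class and job names are illustrative, and the two policies are kept on separate methods because the validation later in this file forbids combining throttling with group concurrency:

```
from datetime import timedelta


class DataDiskManager(JobGroup):
    """Illustrative JobGroup subclass; constructor wiring omitted."""

    @Job(
        name="data_disk_migrate",
        concurrency=JobConcurrency.GROUP_REJECT,  # fail if any group job runs
    )
    async def migrate(self) -> None:
        ...

    @Job(
        name="data_disk_poll_status",
        throttle=JobThrottle.RATE_LIMIT,  # at most 3 calls per 5 minutes
        throttle_period=timedelta(minutes=5),
        throttle_max_calls=3,
    )
    async def poll_status(self) -> None:
        ...
```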

View File

@ -20,7 +20,7 @@ from ..host.const import HostFeature
from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
from ..utils.sentry import async_capture_exception
from . import SupervisorJob
from .const import JobConcurrency, JobCondition, JobExecutionLimit, JobThrottle
from .const import JobCondition, JobExecutionLimit
from .job_group import JobGroup
_LOGGER: logging.Logger = logging.getLogger(__package__)
@ -36,34 +36,14 @@ class Job(CoreSysAttributes):
conditions: list[JobCondition] | None = None,
cleanup: bool = True,
on_condition: type[JobException] | None = None,
concurrency: JobConcurrency | None = None,
throttle: JobThrottle | None = None,
limit: JobExecutionLimit | None = None,
throttle_period: timedelta
| Callable[[CoreSys, datetime, list[datetime] | None], timedelta]
| None = None,
throttle_max_calls: int | None = None,
internal: bool = False,
# Backward compatibility - DEPRECATED
limit: JobExecutionLimit | None = None,
): # pylint: disable=too-many-positional-arguments
"""Initialize the Job decorator.
Args:
name (str): Unique name for the job. Must not be duplicated.
conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
concurrency (JobConcurrency | None): Concurrency control policy (e.g., reject, queue, group-based).
throttle (JobThrottle | None): Throttling policy (e.g., throttle, rate_limit, group-based).
throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for throttled jobs).
throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
limit (JobExecutionLimit | None): DEPRECATED - Use concurrency and throttle instead.
Raises:
RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected throttle policy.
"""
):
"""Initialize the Job class."""
if name in _JOB_NAMES:
raise RuntimeError(f"A job already exists with name {name}!")
@ -72,6 +52,7 @@ class Job(CoreSysAttributes):
self.conditions = conditions
self.cleanup = cleanup
self.on_condition = on_condition
self.limit = limit
self._throttle_period = throttle_period
self._throttle_max_calls = throttle_max_calls
self._lock: asyncio.Semaphore | None = None
@ -79,90 +60,33 @@ class Job(CoreSysAttributes):
self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
self._internal = internal
# Handle backward compatibility with limit parameter
if limit is not None:
if concurrency is not None or throttle is not None:
raise RuntimeError(
f"Job {name} cannot specify both 'limit' (deprecated) and 'concurrency'/'throttle' parameters!"
)
# Map old limit values to new parameters
concurrency, throttle = self._map_limit_to_new_params(limit)
self.concurrency = concurrency
self.throttle = throttle
# Validate Options
self._validate_parameters()
def _map_limit_to_new_params(
self, limit: JobExecutionLimit
) -> tuple[JobConcurrency | None, JobThrottle | None]:
"""Map old limit parameter to new concurrency and throttle parameters."""
mapping = {
JobExecutionLimit.ONCE: (JobConcurrency.REJECT, None),
JobExecutionLimit.SINGLE_WAIT: (JobConcurrency.QUEUE, None),
JobExecutionLimit.THROTTLE: (None, JobThrottle.THROTTLE),
JobExecutionLimit.THROTTLE_WAIT: (
JobConcurrency.QUEUE,
JobThrottle.THROTTLE,
),
JobExecutionLimit.THROTTLE_RATE_LIMIT: (None, JobThrottle.RATE_LIMIT),
JobExecutionLimit.GROUP_ONCE: (JobConcurrency.GROUP_REJECT, None),
JobExecutionLimit.GROUP_WAIT: (JobConcurrency.GROUP_QUEUE, None),
JobExecutionLimit.GROUP_THROTTLE: (None, JobThrottle.GROUP_THROTTLE),
JobExecutionLimit.GROUP_THROTTLE_WAIT: (
# Seems a bit counterintuitive, but GROUP_QUEUE deadlocks
# tests/jobs/test_job_decorator.py::test_execution_limit_group_throttle_wait
# It deadlocks because with GROUP_QUEUE, once the throttle limit is hit,
# the group lock would have to be released outside of the job context.
# The current implementation doesn't allow unlocking the group lock
# when the job is not running.
JobConcurrency.QUEUE,
JobThrottle.GROUP_THROTTLE,
),
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT: (
None,
JobThrottle.GROUP_RATE_LIMIT,
),
}
return mapping.get(limit, (None, None))
def _validate_parameters(self) -> None:
"""Validate job parameters."""
# Validate throttle parameters
if (
self.throttle
self.limit
in (
JobThrottle.THROTTLE,
JobThrottle.GROUP_THROTTLE,
JobThrottle.RATE_LIMIT,
JobThrottle.GROUP_RATE_LIMIT,
JobExecutionLimit.THROTTLE,
JobExecutionLimit.THROTTLE_WAIT,
JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
)
and self._throttle_period is None
):
raise RuntimeError(
f"Job {self.name} is using throttle {self.throttle} without a throttle period!"
f"Job {name} is using execution limit {limit} without a throttle period!"
)
if self.throttle in (
JobThrottle.RATE_LIMIT,
JobThrottle.GROUP_RATE_LIMIT,
if self.limit in (
JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
if self._throttle_max_calls is None:
raise RuntimeError(
f"Job {self.name} is using throttle {self.throttle} without throttle max calls!"
f"Job {name} is using execution limit {limit} without throttle max calls!"
)
self._rate_limited_calls = {}
if self.throttle is not None and self.concurrency in (
JobConcurrency.GROUP_REJECT,
JobConcurrency.GROUP_QUEUE,
):
# We cannot release group locks when Job is not running (e.g. throttled)
# which makes these combinations impossible to use currently.
raise RuntimeError(
f"Job {self.name} is using throttling ({self.throttle}) with group concurrency ({self.concurrency}), which is not allowed!"
)
self._rate_limited_calls = {}
@property
def throttle_max_calls(self) -> int:
@ -192,7 +116,7 @@ class Job(CoreSysAttributes):
"""Return rate limited calls if used."""
if self._rate_limited_calls is None:
raise RuntimeError(
"Rate limited calls not available for this throttle type"
f"Rate limited calls not available for limit type {self.limit}"
)
return self._rate_limited_calls.get(group_name, [])
@ -203,7 +127,7 @@ class Job(CoreSysAttributes):
"""Add a rate limited call to list if used."""
if self._rate_limited_calls is None:
raise RuntimeError(
"Rate limited calls not available for this throttle type"
f"Rate limited calls not available for limit type {self.limit}"
)
if group_name in self._rate_limited_calls:
@ -217,7 +141,7 @@ class Job(CoreSysAttributes):
"""Set rate limited calls if used."""
if self._rate_limited_calls is None:
raise RuntimeError(
"Rate limited calls not available for this throttle type"
f"Rate limited calls not available for limit type {self.limit}"
)
self._rate_limited_calls[group_name] = value
@ -254,24 +178,16 @@ class Job(CoreSysAttributes):
if obj.acquire and obj.release: # type: ignore
job_group = cast(JobGroup, obj)
# Check for group-based parameters
if not job_group:
if self.concurrency in (
JobConcurrency.GROUP_REJECT,
JobConcurrency.GROUP_QUEUE,
):
raise RuntimeError(
f"Job {self.name} uses group concurrency ({self.concurrency}) but is not on a JobGroup! "
f"The class must inherit from JobGroup to use GROUP_REJECT or GROUP_QUEUE."
) from None
if self.throttle in (
JobThrottle.GROUP_THROTTLE,
JobThrottle.GROUP_RATE_LIMIT,
):
raise RuntimeError(
f"Job {self.name} uses group throttling ({self.throttle}) but is not on a JobGroup! "
f"The class must inherit from JobGroup to use GROUP_THROTTLE or GROUP_RATE_LIMIT."
) from None
if not job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
JobExecutionLimit.GROUP_THROTTLE,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
raise RuntimeError(
f"Job on {self.name} need to be a JobGroup to use group based limits!"
) from None
return job_group
@ -324,15 +240,71 @@ class Job(CoreSysAttributes):
except JobConditionException as err:
return self._handle_job_condition_exception(err)
# Handle execution limits
await self._handle_concurrency_control(job_group, job)
try:
if not await self._handle_throttling(group_name):
self._release_concurrency_control(job_group)
return # Job was throttled, exit early
except Exception:
self._release_concurrency_control(job_group)
raise
# Handle execution limits
if self.limit in (
JobExecutionLimit.SINGLE_WAIT,
JobExecutionLimit.ONCE,
):
await self._acquire_exection_limit()
elif self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
):
try:
await cast(JobGroup, job_group).acquire(
job, self.limit == JobExecutionLimit.GROUP_WAIT
)
except JobGroupExecutionLimitExceeded as err:
if self.on_condition:
raise self.on_condition(str(err)) from err
raise err
elif self.limit in (
JobExecutionLimit.THROTTLE,
JobExecutionLimit.GROUP_THROTTLE,
):
time_since_last_call = datetime.now() - self.last_call(group_name)
if time_since_last_call < self.throttle_period(group_name):
return
elif self.limit in (
JobExecutionLimit.THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
):
await self._acquire_exection_limit()
time_since_last_call = datetime.now() - self.last_call(group_name)
if time_since_last_call < self.throttle_period(group_name):
self._release_exception_limits()
return
elif self.limit in (
JobExecutionLimit.THROTTLE_RATE_LIMIT,
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
):
# Only reprocess array when necessary (at limit)
if (
len(self.rate_limited_calls(group_name))
>= self.throttle_max_calls
):
self.set_rate_limited_calls(
[
call
for call in self.rate_limited_calls(group_name)
if call
> datetime.now() - self.throttle_period(group_name)
],
group_name,
)
if (
len(self.rate_limited_calls(group_name))
>= self.throttle_max_calls
):
on_condition = (
JobException
if self.on_condition is None
else self.on_condition
)
raise on_condition(
f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
)
# Execute Job
with job.start():
@ -358,7 +330,12 @@ class Job(CoreSysAttributes):
await async_capture_exception(err)
raise JobException() from err
finally:
self._release_concurrency_control(job_group)
self._release_exception_limits()
if job_group and self.limit in (
JobExecutionLimit.GROUP_ONCE,
JobExecutionLimit.GROUP_WAIT,
):
job_group.release()
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required
finally:
@ -500,75 +477,31 @@ class Job(CoreSysAttributes):
f"'{method_name}' blocked from execution, mounting not supported on system"
)
def _release_concurrency_control(self, job_group: JobGroup | None) -> None:
"""Release concurrency control locks."""
if self.concurrency == JobConcurrency.REJECT:
if self.lock.locked():
self.lock.release()
elif self.concurrency == JobConcurrency.QUEUE:
if self.lock.locked():
self.lock.release()
elif self.concurrency in (
JobConcurrency.GROUP_REJECT,
JobConcurrency.GROUP_QUEUE,
async def _acquire_exection_limit(self) -> None:
"""Process exection limits."""
if self.limit not in (
JobExecutionLimit.SINGLE_WAIT,
JobExecutionLimit.ONCE,
JobExecutionLimit.THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
):
if job_group and job_group.has_lock:
job_group.release()
return
async def _handle_concurrency_control(
self, job_group: JobGroup | None, job: SupervisorJob
) -> None:
"""Handle concurrency control limits."""
if self.concurrency == JobConcurrency.REJECT:
if self.lock.locked():
on_condition = (
JobException if self.on_condition is None else self.on_condition
)
raise on_condition("Another job is running")
await self.lock.acquire()
elif self.concurrency == JobConcurrency.QUEUE:
await self.lock.acquire()
elif self.concurrency == JobConcurrency.GROUP_REJECT:
try:
await cast(JobGroup, job_group).acquire(job, wait=False)
except JobGroupExecutionLimitExceeded as err:
if self.on_condition:
raise self.on_condition(str(err)) from err
raise err
elif self.concurrency == JobConcurrency.GROUP_QUEUE:
try:
await cast(JobGroup, job_group).acquire(job, wait=True)
except JobGroupExecutionLimitExceeded as err:
if self.on_condition:
raise self.on_condition(str(err)) from err
raise err
if self.limit == JobExecutionLimit.ONCE and self.lock.locked():
on_condition = (
JobException if self.on_condition is None else self.on_condition
)
raise on_condition("Another job is running")
async def _handle_throttling(self, group_name: str | None) -> bool:
"""Handle throttling limits. Returns True if job should continue, False if throttled."""
if self.throttle in (JobThrottle.THROTTLE, JobThrottle.GROUP_THROTTLE):
time_since_last_call = datetime.now() - self.last_call(group_name)
throttle_period = self.throttle_period(group_name)
if time_since_last_call < throttle_period:
# Always return False when throttled (skip execution)
return False
elif self.throttle in (JobThrottle.RATE_LIMIT, JobThrottle.GROUP_RATE_LIMIT):
# Only reprocess array when necessary (at limit)
if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
self.set_rate_limited_calls(
[
call
for call in self.rate_limited_calls(group_name)
if call > datetime.now() - self.throttle_period(group_name)
],
group_name,
)
await self.lock.acquire()
if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
on_condition = (
JobException if self.on_condition is None else self.on_condition
)
raise on_condition(
f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
)
return True
def _release_exception_limits(self) -> None:
"""Release possible exception limits."""
if self.limit not in (
JobExecutionLimit.SINGLE_WAIT,
JobExecutionLimit.ONCE,
JobExecutionLimit.THROTTLE_WAIT,
JobExecutionLimit.GROUP_THROTTLE_WAIT,
):
return
self.lock.release()

View File

@ -272,7 +272,6 @@ class OSManager(CoreSysAttributes):
name="os_manager_update",
conditions=[
JobCondition.HAOS,
JobCondition.HEALTHY,
JobCondition.INTERNET_SYSTEM,
JobCondition.RUNNING,
JobCondition.SUPERVISOR_UPDATED,

View File

@ -22,7 +22,6 @@ from ..exceptions import (
AudioUpdateError,
ConfigurationFileError,
DockerError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -128,7 +127,7 @@ class PluginAudio(PluginBase):
"""Update Audio plugin."""
try:
await super().update(version)
except (DockerError, PluginError) as err:
except DockerError as err:
raise AudioUpdateError("Audio update failed", _LOGGER.error) from err
async def restart(self) -> None:

View File

@ -168,14 +168,14 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
# Check plugin state
try:
# Evaluate Version if we lost this information
if self.version:
version = self.version
else:
self.version = version = await self.instance.get_latest_version()
if not self.version:
self.version = await self.instance.get_latest_version()
await self.instance.attach(version=version, skip_state_event_if_down=True)
await self.instance.attach(
version=self.version, skip_state_event_if_down=True
)
await self.instance.check_image(version, self.default_image)
await self.instance.check_image(self.version, self.default_image)
except DockerError:
_LOGGER.info(
"No %s plugin Docker image %s found.", self.slug, self.instance.image
@ -185,7 +185,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
with suppress(PluginError):
await self.install()
else:
self.version = self.instance.version or version
self.version = self.instance.version
self.image = self.default_image
await self.save_data()
@ -202,10 +202,11 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
if not self.latest_version:
await self.sys_updater.reload()
if to_version := self.latest_version:
if self.latest_version:
with suppress(DockerError):
await self.instance.install(to_version, image=self.default_image)
self.version = self.instance.version or to_version
await self.instance.install(
self.latest_version, image=self.default_image
)
break
_LOGGER.warning(
"Error on installing %s plugin, retrying in 30sec", self.slug
@ -213,28 +214,23 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
await asyncio.sleep(30)
_LOGGER.info("%s plugin now installed", self.slug)
self.version = self.instance.version
self.image = self.default_image
await self.save_data()
async def update(self, version: str | None = None) -> None:
"""Update system plugin."""
to_version = AwesomeVersion(version) if version else self.latest_version
if not to_version:
raise PluginError(
f"Cannot determine latest version of plugin {self.slug} for update",
_LOGGER.error,
)
version = version or self.latest_version
old_image = self.image
if to_version == self.version:
if version == self.version:
_LOGGER.warning(
"Version %s is already installed for %s", to_version, self.slug
"Version %s is already installed for %s", version, self.slug
)
return
await self.instance.update(to_version, image=self.default_image)
self.version = self.instance.version or to_version
await self.instance.update(version, image=self.default_image)
self.version = self.instance.version
self.image = self.default_image
await self.save_data()

View File

@ -6,6 +6,7 @@ Code: https://github.com/home-assistant/plugin-cli
from collections.abc import Awaitable
import logging
import secrets
from typing import cast
from awesomeversion import AwesomeVersion
@ -14,7 +15,7 @@ from ..coresys import CoreSys
from ..docker.cli import DockerCli
from ..docker.const import ContainerState
from ..docker.stats import DockerStats
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
from ..utils.sentry import async_capture_exception
@ -53,9 +54,9 @@ class PluginCli(PluginBase):
return self.sys_updater.version_cli
@property
def supervisor_token(self) -> str | None:
def supervisor_token(self) -> str:
"""Return an access token for the Supervisor API."""
return self._data.get(ATTR_ACCESS_TOKEN)
return cast(str, self._data[ATTR_ACCESS_TOKEN])
@Job(
name="plugin_cli_update",
@ -66,7 +67,7 @@ class PluginCli(PluginBase):
"""Update local HA cli."""
try:
await super().update(version)
except (DockerError, PluginError) as err:
except DockerError as err:
raise CliUpdateError("CLI update failed", _LOGGER.error) from err
async def start(self) -> None:

View File

@ -15,8 +15,7 @@ from awesomeversion import AwesomeVersion
import jinja2
import voluptuous as vol
from ..bus import EventListener
from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel
from ..coresys import CoreSys
from ..dbus.const import MulticastProtocolEnabled
from ..docker.const import ContainerState
@ -29,7 +28,6 @@ from ..exceptions import (
CoreDNSJobError,
CoreDNSUpdateError,
DockerError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -78,12 +76,6 @@ class PluginDns(PluginBase):
self._hosts: list[HostEntry] = []
self._loop: bool = False
self._cached_locals: list[str] | None = None
# Debouncing system for rapid local changes
self._locals_changed_handle: asyncio.TimerHandle | None = None
self._restart_after_locals_change_handle: asyncio.Task | None = None
self._connectivity_check_listener: EventListener | None = None
@property
def hosts(self) -> Path:
@ -98,12 +90,6 @@ class PluginDns(PluginBase):
@property
def locals(self) -> list[str]:
"""Return list of local system DNS servers."""
if self._cached_locals is None:
self._cached_locals = self._compute_locals()
return self._cached_locals
def _compute_locals(self) -> list[str]:
"""Compute list of local system DNS servers."""
servers: list[str] = []
for server in [
f"dns://{server!s}" for server in self.sys_host.network.dns_servers
@ -113,52 +99,6 @@ class PluginDns(PluginBase):
return servers
async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
"""Handle DNS container state change to running and trigger connectivity check."""
if event.name == self.instance.name and event.state == ContainerState.RUNNING:
# Wait for CoreDNS to actually become available
await asyncio.sleep(5)
_LOGGER.debug("CoreDNS started, checking connectivity")
await self.sys_supervisor.check_connectivity()
async def _restart_dns_after_locals_change(self) -> None:
"""Restart DNS after a debounced delay for local changes."""
old_locals = self._cached_locals
new_locals = self._compute_locals()
if old_locals == new_locals:
return
_LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
self._cached_locals = new_locals
if not await self.instance.is_running():
return
await self.restart()
self._restart_after_locals_change_handle = None
def _trigger_restart_dns_after_locals_change(self) -> None:
"""Trigger a restart of DNS after local changes."""
# Cancel existing restart task if any
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = self.sys_create_task(
self._restart_dns_after_locals_change()
)
self._locals_changed_handle = None
def notify_locals_changed(self) -> None:
"""Schedule a debounced DNS restart for local changes."""
# Cancel existing timer if any
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
# Schedule new timer with 1 second delay
self._locals_changed_handle = self.sys_call_later(
1.0, self._trigger_restart_dns_after_locals_change
)
@property
def servers(self) -> list[str]:
"""Return list of DNS servers."""
@ -247,13 +187,6 @@ class PluginDns(PluginBase):
_LOGGER.error("Can't read hosts.tmpl: %s", err)
await self._init_hosts()
# Register Docker event listener for connectivity checks
if not self._connectivity_check_listener:
self._connectivity_check_listener = self.sys_bus.register_event(
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
)
await super().load()
# Update supervisor
@ -284,7 +217,7 @@ class PluginDns(PluginBase):
"""Update CoreDNS plugin."""
try:
await super().update(version)
except (DockerError, PluginError) as err:
except DockerError as err:
raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err
async def restart(self) -> None:
@ -309,16 +242,6 @@ class PluginDns(PluginBase):
async def stop(self) -> None:
"""Stop CoreDNS."""
# Cancel any pending locals change timer
if self._locals_changed_handle:
self._locals_changed_handle.cancel()
self._locals_changed_handle = None
# Wait for any pending restart before stopping
if self._restart_after_locals_change_handle:
self._restart_after_locals_change_handle.cancel()
self._restart_after_locals_change_handle = None
_LOGGER.info("Stopping CoreDNS plugin")
try:
await self.instance.stop()

View File

@ -16,7 +16,6 @@ from ..exceptions import (
MulticastError,
MulticastJobError,
MulticastUpdateError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -64,7 +63,7 @@ class PluginMulticast(PluginBase):
"""Update Multicast plugin."""
try:
await super().update(version)
except (DockerError, PluginError) as err:
except DockerError as err:
raise MulticastUpdateError(
"Multicast update failed", _LOGGER.error
) from err

View File

@ -5,6 +5,7 @@ Code: https://github.com/home-assistant/plugin-observer
import logging
import secrets
from typing import cast
import aiohttp
from awesomeversion import AwesomeVersion
@ -19,7 +20,6 @@ from ..exceptions import (
ObserverError,
ObserverJobError,
ObserverUpdateError,
PluginError,
)
from ..jobs.const import JobExecutionLimit
from ..jobs.decorator import Job
@ -59,9 +59,9 @@ class PluginObserver(PluginBase):
return self.sys_updater.version_observer
@property
def supervisor_token(self) -> str | None:
def supervisor_token(self) -> str:
"""Return an access token for the Observer API."""
return self._data.get(ATTR_ACCESS_TOKEN)
return cast(str, self._data[ATTR_ACCESS_TOKEN])
@Job(
name="plugin_observer_update",
@ -72,7 +72,7 @@ class PluginObserver(PluginBase):
"""Update local HA observer."""
try:
await super().update(version)
except (DockerError, PluginError) as err:
except DockerError as err:
raise ObserverUpdateError(
"HA observer update failed", _LOGGER.error
) from err

View File

@ -21,8 +21,17 @@ async def check_server(
) -> None:
"""Check a DNS server and report issues."""
ip_addr = server[6:] if server.startswith("dns://") else server
async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
resolver = DNSResolver(loop=loop, nameservers=[ip_addr])
try:
await resolver.query(DNS_CHECK_HOST, qtype)
finally:
def _delete_resolver():
"""Close resolver to avoid memory leaks."""
nonlocal resolver
del resolver
loop.call_later(1, _delete_resolver)
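Both variants above issue a query and then tear the resolver down, either via the async context manager (main) or a deferred delete (2025.06.0). A standalone probe along the same lines, assuming aiodns' `DNSResolver.query()` and `cancel()`; the host and server address are illustrative:

```
import asyncio

from aiodns import DNSResolver


async def probe(server_ip: str) -> bool:
    """Return True if the server answers an A query."""
    resolver = DNSResolver(nameservers=[server_ip])
    try:
        await resolver.query("example.com", "A")
        return True
    except Exception:  # aiodns raises aiodns.error.DNSError on failure
        return False
    finally:
        resolver.cancel()  # drop in-flight queries before the resolver goes away


print(asyncio.run(probe("1.1.1.1")))
```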
def setup(coresys: CoreSys) -> CheckBase:

View File

@ -1,108 +0,0 @@
"""Helpers to check for duplicate OS installations."""
import logging
from ...const import CoreState
from ...coresys import CoreSys
from ...dbus.udisks2.data import DeviceSpecification
from ..const import ContextType, IssueType, UnhealthyReason
from .base import CheckBase
_LOGGER: logging.Logger = logging.getLogger(__name__)
# Partition labels to check for duplicates (GPT-based installations)
HAOS_PARTITIONS = [
"hassos-boot",
"hassos-kernel0",
"hassos-kernel1",
"hassos-system0",
"hassos-system1",
]
# Partition UUIDs to check for duplicates (MBR-based installations)
HAOS_PARTITION_UUIDS = [
"48617373-01", # hassos-boot
"48617373-05", # hassos-kernel0
"48617373-06", # hassos-system0
"48617373-07", # hassos-kernel1
"48617373-08", # hassos-system1
]
def _get_device_specifications():
"""Generate DeviceSpecification objects for both GPT and MBR partitions."""
# GPT-based installations (partition labels)
for partition_label in HAOS_PARTITIONS:
yield (
DeviceSpecification(partlabel=partition_label),
"partition",
partition_label,
)
# MBR-based installations (partition UUIDs)
for partition_uuid in HAOS_PARTITION_UUIDS:
yield (
DeviceSpecification(partuuid=partition_uuid),
"partition UUID",
partition_uuid,
)
def setup(coresys: CoreSys) -> CheckBase:
"""Check setup function."""
return CheckDuplicateOSInstallation(coresys)
class CheckDuplicateOSInstallation(CheckBase):
"""CheckDuplicateOSInstallation class for check."""
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_os.available:
_LOGGER.debug(
"Skipping duplicate OS installation check, OS is not available"
)
return
for device_spec, spec_type, identifier in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
_LOGGER.warning(
"Found duplicate OS installation: %s %s exists on %d devices (%s)",
identifier,
spec_type,
len(resolved),
", ".join(str(device.device) for device in resolved),
)
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.DUPLICATE_OS_INSTALLATION
)
self.sys_resolution.create_issue(
IssueType.DUPLICATE_OS_INSTALLATION,
ContextType.SYSTEM,
)
return
async def approve_check(self, reference: str | None = None) -> bool:
"""Approve check if it is affected by issue."""
# Check all partitions for duplicates since issue is created without reference
for device_spec, _, _ in _get_device_specifications():
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
if resolved and len(resolved) > 1:
return True
return False
@property
def issue(self) -> IssueType:
"""Return a IssueType enum."""
return IssueType.DUPLICATE_OS_INSTALLATION
@property
def context(self) -> ContextType:
"""Return a ContextType enum."""
return ContextType.SYSTEM
@property
def states(self) -> list[CoreState]:
"""Return a list of valid states when this check can run."""
return [CoreState.SETUP]

View File

@ -21,9 +21,6 @@ class CheckMultipleDataDisks(CheckBase):
async def run_check(self) -> None:
"""Run check if not affected by issue."""
if not self.sys_os.available:
return
for block_device in self.sys_dbus.udisks2.block_devices:
if self._block_device_has_name_issue(block_device):
self.sys_resolution.create_issue(

View File

@ -19,12 +19,12 @@ class CheckNetworkInterfaceIPV4(CheckBase):
async def run_check(self) -> None:
"""Run check if not affected by issue."""
for inet in self.sys_dbus.network.interfaces:
if CheckNetworkInterfaceIPV4.check_interface(inet):
for interface in self.sys_dbus.network.interfaces:
if CheckNetworkInterfaceIPV4.check_interface(interface):
self.sys_resolution.create_issue(
IssueType.IPV4_CONNECTION_PROBLEM,
ContextType.SYSTEM,
inet.interface_name,
interface.name,
)
async def approve_check(self, reference: str | None = None) -> bool:

View File

@ -64,11 +64,10 @@ class UnhealthyReason(StrEnum):
"""Reasons for unsupported status."""
DOCKER = "docker"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
OSERROR_BAD_MESSAGE = "oserror_bad_message"
PRIVILEGED = "privileged"
SETUP = "setup"
SUPERVISOR = "supervisor"
SETUP = "setup"
UNTRUSTED = "untrusted"
@ -84,7 +83,6 @@ class IssueType(StrEnum):
DEVICE_ACCESS_MISSING = "device_access_missing"
DISABLED_DATA_DISK = "disabled_data_disk"
DNS_LOOP = "dns_loop"
DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
DNS_SERVER_FAILED = "dns_server_failed"
DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error"
DOCKER_CONFIG = "docker_config"

View File

@ -5,8 +5,6 @@ import logging
from docker.errors import DockerException
from requests import RequestException
from supervisor.docker.const import ADDON_BUILDER_IMAGE
from ...const import CoreState
from ...coresys import CoreSys
from ..const import (
@ -62,10 +60,9 @@ class EvaluateContainer(EvaluateBase):
"""Return a set of all known images."""
return {
self.sys_homeassistant.image,
self.sys_supervisor.image or self.sys_supervisor.default_image,
self.sys_supervisor.image,
*(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image),
*(addon.image for addon in self.sys_addons.installed if addon.image),
ADDON_BUILDER_IMAGE,
}
async def evaluate(self) -> bool:

View File

@ -3,7 +3,6 @@
from abc import ABC, abstractmethod
import logging
from ...const import BusEvent
from ...coresys import CoreSys, CoreSysAttributes
from ...exceptions import ResolutionFixupError
from ..const import ContextType, IssueType, SuggestionType
@ -67,11 +66,6 @@ class FixupBase(ABC, CoreSysAttributes):
"""Return if a fixup can be apply as auto fix."""
return False
@property
def bus_event(self) -> BusEvent | None:
"""Return the BusEvent that triggers this fixup, or None if not event-based."""
return None
@property
def all_suggestions(self) -> list[Suggestion]:
"""List of all suggestions which when applied run this fixup."""

View File

@ -2,7 +2,6 @@
import logging
from ...const import BusEvent
from ...coresys import CoreSys
from ...exceptions import (
ResolutionFixupError,
@ -69,8 +68,3 @@ class FixupStoreExecuteReload(FixupBase):
def auto(self) -> bool:
"""Return if a fixup can be apply as auto fix."""
return True
@property
def bus_event(self) -> BusEvent | None:
"""Return the BusEvent that triggers this fixup, or None if not event-based."""
return BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE

View File

@ -1,5 +1,6 @@
"""Helpers to check and fix issues with free space."""
from functools import partial
import logging
from ...coresys import CoreSys
@ -11,6 +12,7 @@ from ...exceptions import (
)
from ...jobs.const import JobCondition
from ...jobs.decorator import Job
from ...utils import remove_folder
from ..const import ContextType, IssueType, SuggestionType
from .base import FixupBase
@ -42,8 +44,15 @@ class FixupStoreExecuteReset(FixupBase):
_LOGGER.warning("Can't find store %s for fixup", reference)
return
# Local add-ons are not a git repo, can't remove and re-pull
if repository.git:
await self.sys_run_in_executor(
partial(remove_folder, folder=repository.git.path, content_only=True)
)
# Load data again
try:
await repository.reset()
await repository.load()
except StoreError:
raise ResolutionFixupError() from None

View File

@ -5,7 +5,6 @@ from typing import Any
import attr
from ..bus import EventListener
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ResolutionError, ResolutionNotFound
from ..homeassistant.const import WSEvent
@ -47,9 +46,6 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
self._unsupported: list[UnsupportedReason] = []
self._unhealthy: list[UnhealthyReason] = []
# Map suggestion UUID to event listeners (list)
self._suggestion_listeners: dict[str, list[EventListener]] = {}
async def load_modules(self):
"""Load resolution evaluation, check and fixup modules."""
@ -109,19 +105,6 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
)
self._suggestions.append(suggestion)
# Register event listeners if fixups have a bus_event
listeners: list[EventListener] = []
for fixup in self.fixup.fixes_for_suggestion(suggestion):
if fixup.auto and fixup.bus_event:
def event_callback(reference, fixup=fixup):
return fixup(suggestion)
listener = self.sys_bus.register_event(fixup.bus_event, event_callback)
listeners.append(listener)
if listeners:
self._suggestion_listeners[suggestion.uuid] = listeners
# Event on suggestion added to issue
for issue in self.issues_for_suggestion(suggestion):
self.sys_homeassistant.websocket.supervisor_event(
@ -250,11 +233,6 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
)
self._suggestions.remove(suggestion)
# Remove event listeners if present
listeners = self._suggestion_listeners.pop(suggestion.uuid, [])
for listener in listeners:
self.sys_bus.remove_listener(listener)
# Event on suggestion removed from issues
for issue in self.issues_for_suggestion(suggestion):
self.sys_homeassistant.websocket.supervisor_event(

View File

@ -4,7 +4,7 @@ import asyncio
from collections.abc import Awaitable
import logging
from ..const import ATTR_REPOSITORIES, REPOSITORY_CORE, URL_HASSIO_ADDONS
from ..const import ATTR_REPOSITORIES, URL_HASSIO_ADDONS
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import (
StoreError,
@ -18,10 +18,14 @@ from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils.common import FileConfiguration
from .addon import AddonStore
from .const import FILE_HASSIO_STORE, BuiltinRepository
from .const import FILE_HASSIO_STORE, StoreType
from .data import StoreData
from .repository import Repository
from .validate import DEFAULT_REPOSITORIES, SCHEMA_STORE_FILE
from .validate import (
BUILTIN_REPOSITORIES,
SCHEMA_STORE_FILE,
ensure_builtin_repositories,
)
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -52,8 +56,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
return [
repository.source
for repository in self.all
if repository.slug
not in {BuiltinRepository.LOCAL.value, BuiltinRepository.CORE.value}
if repository.type == StoreType.GIT
]
def get(self, slug: str) -> Repository:
@ -62,15 +65,20 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
raise StoreNotFound()
return self.repositories[slug]
def get_from_url(self, url: str) -> Repository:
"""Return Repository with slug."""
for repository in self.all:
if repository.source != url:
continue
return repository
raise StoreNotFound()
async def load(self) -> None:
"""Start up add-on store management."""
# Make sure the built-in repositories are all present
# This is especially important when adding new built-in repositories
# to make sure existing installations have them.
all_repositories: set[str] = (
set(self._data.get(ATTR_REPOSITORIES, [])) | DEFAULT_REPOSITORIES
"""Start up add-on management."""
# Init custom repositories and load add-ons
await self.update_repositories(
self._data[ATTR_REPOSITORIES], add_with_errors=True
)
await self.update_repositories(all_repositories, issue_on_error=True)
@Job(
name="store_manager_reload",
@ -81,7 +89,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
"""Update add-ons from repository and reload list."""
# Make a copy to prevent race with other tasks
repositories = [repository] if repository else self.all.copy()
results: list[bool | BaseException] = await asyncio.gather(
results: list[bool | Exception] = await asyncio.gather(
*[repo.update() for repo in repositories], return_exceptions=True
)
@ -118,16 +126,16 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
)
async def add_repository(self, url: str, *, persist: bool = True) -> None:
"""Add a repository."""
await self._add_repository(url, persist=persist, issue_on_error=False)
await self._add_repository(url, persist=persist, add_with_errors=False)
async def _add_repository(
self, url: str, *, persist: bool = True, issue_on_error: bool = False
self, url: str, *, persist: bool = True, add_with_errors: bool = False
) -> None:
"""Add a repository."""
if url == URL_HASSIO_ADDONS:
url = REPOSITORY_CORE
url = StoreType.CORE
repository = Repository.create(self.coresys, url)
repository = Repository(self.coresys, url)
if repository.slug in self.repositories:
raise StoreError(f"Can't add {url}, already in the store", _LOGGER.error)
@ -137,7 +145,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
await repository.load()
except StoreGitCloneError as err:
_LOGGER.error("Can't retrieve data from %s due to %s", url, err)
if issue_on_error:
if add_with_errors:
self.sys_resolution.create_issue(
IssueType.FATAL_ERROR,
ContextType.STORE,
@ -150,7 +158,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
except StoreGitError as err:
_LOGGER.error("Can't load data from repository %s due to %s", url, err)
if issue_on_error:
if add_with_errors:
self.sys_resolution.create_issue(
IssueType.FATAL_ERROR,
ContextType.STORE,
@ -163,7 +171,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
except StoreJobError as err:
_LOGGER.error("Can't add repository %s due to %s", url, err)
if issue_on_error:
if add_with_errors:
self.sys_resolution.create_issue(
IssueType.FATAL_ERROR,
ContextType.STORE,
@ -175,8 +183,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
raise err
else:
if not await repository.validate():
if issue_on_error:
if not await self.sys_run_in_executor(repository.validate):
if add_with_errors:
_LOGGER.error("%s is not a valid add-on repository", url)
self.sys_resolution.create_issue(
IssueType.CORRUPT_REPOSITORY,
@ -205,7 +213,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
async def remove_repository(self, repository: Repository, *, persist: bool = True):
"""Remove a repository."""
if repository.is_builtin:
if repository.source in BUILTIN_REPOSITORIES:
raise StoreInvalidAddonRepo(
"Can't remove built-in repositories!", logger=_LOGGER.error
)
@ -226,50 +234,40 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
@Job(name="store_manager_update_repositories")
async def update_repositories(
self,
list_repositories: set[str],
list_repositories: list[str],
*,
issue_on_error: bool = False,
add_with_errors: bool = False,
replace: bool = True,
):
"""Update repositories by adding new ones and removing stale ones."""
current_repositories = {repository.source for repository in self.all}
# Determine repositories to add
repositories_to_add = list_repositories - current_repositories
"""Add a new custom repository."""
new_rep = set(
ensure_builtin_repositories(list_repositories)
if replace
else list_repositories + self.repository_urls
)
old_rep = {repository.source for repository in self.all}
# Add new repositories
add_errors = await asyncio.gather(
*[
# Use _add_repository to avoid JobCondition.SUPERVISOR_UPDATED
# from blocking proper loading of repositories on startup.
self._add_repository(url, persist=False, issue_on_error=True)
if issue_on_error
self._add_repository(url, persist=False, add_with_errors=True)
if add_with_errors
else self.add_repository(url, persist=False)
for url in repositories_to_add
for url in new_rep - old_rep
],
return_exceptions=True,
)
remove_errors: list[BaseException | None] = []
if replace:
# Determine repositories to remove
repositories_to_remove: list[Repository] = [
repository
for repository in self.all
if repository.source not in list_repositories
and not repository.is_builtin
]
# Delete stale repositories
remove_errors = await asyncio.gather(
*[
self.remove_repository(self.get_from_url(url), persist=False)
for url in old_rep - new_rep - BUILTIN_REPOSITORIES
],
return_exceptions=True,
)
# Remove repositories
remove_errors = await asyncio.gather(
*[
self.remove_repository(repository, persist=False)
for repository in repositories_to_remove
],
return_exceptions=True,
)
# Always update data, even if there are errors, some changes may have succeeded
# Always update data, even there are errors, some changes may have succeeded
await self.data.update()
await self._read_addons()
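
The reconciliation above fans out adds and removals concurrently and collects failures instead of aborting on the first error. A minimal standalone sketch of that `asyncio.gather(return_exceptions=True)` pattern, with illustrative names rather than the Supervisor API:

```python
import asyncio

async def add_repo(url: str) -> bool:
    """Pretend to add a repository; fail on 'bad' URLs for demonstration."""
    if "bad" in url:
        raise RuntimeError(f"cannot clone {url}")
    return True

async def main() -> None:
    urls = ["https://example.com/good", "https://example.com/bad"]
    # return_exceptions=True keeps one failing repository from aborting the
    # rest, which is why the results are typed list[bool | BaseException].
    results = await asyncio.gather(
        *(add_repo(url) for url in urls), return_exceptions=True
    )
    for url, result in zip(urls, results):
        status = f"failed: {result}" if isinstance(result, BaseException) else "ok"
        print(url, status)

asyncio.run(main())
```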

View File

@ -3,35 +3,14 @@
from enum import StrEnum
from pathlib import Path
from ..const import (
REPOSITORY_CORE,
REPOSITORY_LOCAL,
SUPERVISOR_DATA,
URL_HASSIO_ADDONS,
)
from ..const import SUPERVISOR_DATA
FILE_HASSIO_STORE = Path(SUPERVISOR_DATA, "store.json")
"""Repository type definitions for the store."""
class BuiltinRepository(StrEnum):
"""All built-in repositories that come pre-configured."""
class StoreType(StrEnum):
"""Store Types."""
# Local repository (non-git, special handling)
LOCAL = REPOSITORY_LOCAL
# Git-based built-in repositories
CORE = REPOSITORY_CORE
COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
ESPHOME = "https://github.com/esphome/home-assistant-addon"
MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
@property
def git_url(self) -> str:
"""Return the git URL for this repository."""
if self == BuiltinRepository.LOCAL:
raise RuntimeError("Local repository does not have a git URL")
if self == BuiltinRepository.CORE:
return URL_HASSIO_ADDONS
else:
return self.value # For URL-based repos, value is the URL
CORE = "core"
LOCAL = "local"
GIT = "git"

View File

@ -25,6 +25,7 @@ from ..exceptions import ConfigurationFileError
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..utils.common import find_one_filetype, read_json_or_yaml_file
from ..utils.json import read_json_file
from .const import StoreType
from .utils import extract_hash_from_path
from .validate import SCHEMA_REPOSITORY_CONFIG
@ -46,7 +47,7 @@ def _read_addon_translations(addon_path: Path) -> dict:
Should be run in the executor.
"""
translations_dir = addon_path / "translations"
translations: dict[str, Any] = {}
translations = {}
if not translations_dir.exists():
return translations
@ -143,7 +144,7 @@ class StoreData(CoreSysAttributes):
self.addons = addons
async def _find_addon_configs(
self, path: Path, repository: str
self, path: Path, repository: dict
) -> list[Path] | None:
"""Find add-ons in the path."""
@ -168,7 +169,7 @@ class StoreData(CoreSysAttributes):
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
elif repository != REPOSITORY_LOCAL:
elif path.stem != StoreType.LOCAL:
suggestion = [SuggestionType.EXECUTE_RESET]
self.sys_resolution.create_issue(
IssueType.CORRUPT_REPOSITORY,

View File

@ -1,20 +1,19 @@
"""Init file for Supervisor add-on Git."""
import asyncio
import errno
import functools as ft
import logging
from pathlib import Path
from tempfile import TemporaryDirectory
import git
from ..const import ATTR_BRANCH, ATTR_URL
from ..const import ATTR_BRANCH, ATTR_URL, URL_HASSIO_ADDONS
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import StoreGitCloneError, StoreGitError, StoreJobError
from ..jobs.decorator import Job, JobCondition
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..utils import remove_folder
from .utils import get_hash_from_repository
from .validate import RE_REPOSITORY
_LOGGER: logging.Logger = logging.getLogger(__name__)
@ -23,6 +22,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class GitRepo(CoreSysAttributes):
"""Manage Add-on Git repository."""
builtin: bool
def __init__(self, coresys: CoreSys, path: Path, url: str):
"""Initialize Git base wrapper."""
self.coresys: CoreSys = coresys
@ -30,9 +31,7 @@ class GitRepo(CoreSysAttributes):
self.path: Path = path
self.lock: asyncio.Lock = asyncio.Lock()
if not (repository := RE_REPOSITORY.match(url)):
raise ValueError(f"Invalid url provided for repository GitRepo: {url}")
self.data: dict[str, str] = repository.groupdict()
self.data: dict[str, str] = RE_REPOSITORY.match(url).groupdict()
def __repr__(self) -> str:
"""Return internal representation."""
@ -86,77 +85,35 @@ class GitRepo(CoreSysAttributes):
async def clone(self) -> None:
"""Clone git add-on repository."""
async with self.lock:
await self._clone()
@Job(
name="git_repo_reset",
conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
on_condition=StoreJobError,
)
async def reset(self) -> None:
"""Reset repository to fix issue with local copy."""
# Clone into temporary folder
temp_dir = await self.sys_run_in_executor(
TemporaryDirectory, dir=self.sys_config.path_tmp
)
temp_path = Path(temp_dir.name)
try:
await self._clone(temp_path)
# Remove corrupted repo and move temp clone to its place
def move_clone():
remove_folder(folder=self.path)
temp_path.rename(self.path)
async with self.lock:
try:
await self.sys_run_in_executor(move_clone)
except OSError as err:
if err.errno == errno.EBADMSG:
self.sys_resolution.add_unhealthy_reason(
UnhealthyReason.OSERROR_BAD_MESSAGE
)
raise StoreGitCloneError(
f"Can't move clone due to: {err!s}", _LOGGER.error
) from err
finally:
# Clean up temporary directory in case of error
# If the folder was moved this will do nothing
await self.sys_run_in_executor(temp_dir.cleanup)
async def _clone(self, path: Path | None = None) -> None:
"""Clone git add-on repository to location."""
path = path or self.path
git_args = {
attribute: value
for attribute, value in (
("recursive", True),
("branch", self.branch),
("depth", 1),
("shallow-submodules", True),
)
if value is not None
}
try:
_LOGGER.info("Cloning add-on %s repository from %s", path, self.url)
self.repo = await self.sys_run_in_executor(
ft.partial(
git.Repo.clone_from,
self.url,
str(path),
**git_args, # type: ignore
git_args = {
attribute: value
for attribute, value in (
("recursive", True),
("branch", self.branch),
("depth", 1),
("shallow-submodules", True),
)
)
if value is not None
}
except (
git.InvalidGitRepositoryError,
git.NoSuchPathError,
git.CommandError,
UnicodeDecodeError,
) as err:
_LOGGER.error("Can't clone %s repository: %s.", self.url, err)
raise StoreGitCloneError() from err
try:
_LOGGER.info(
"Cloning add-on %s repository from %s", self.path, self.url
)
self.repo = await self.sys_run_in_executor(
ft.partial(
git.Repo.clone_from, self.url, str(self.path), **git_args
)
)
except (
git.InvalidGitRepositoryError,
git.NoSuchPathError,
git.CommandError,
UnicodeDecodeError,
) as err:
_LOGGER.error("Can't clone %s repository: %s.", self.url, err)
raise StoreGitCloneError() from err
@Job(
name="git_repo_pull",
@ -167,10 +124,10 @@ class GitRepo(CoreSysAttributes):
"""Pull Git add-on repo."""
if self.lock.locked():
_LOGGER.warning("There is already a task in progress")
return False
return
if self.repo is None:
_LOGGER.warning("No valid repository for %s", self.url)
return False
return
async with self.lock:
_LOGGER.info("Update add-on %s repository from %s", self.path, self.url)
@ -189,7 +146,7 @@ class GitRepo(CoreSysAttributes):
await self.sys_run_in_executor(
ft.partial(
self.repo.remotes.origin.fetch,
**{"update-shallow": True, "depth": 1}, # type: ignore
**{"update-shallow": True, "depth": 1},
)
)
@ -235,17 +192,12 @@ class GitRepo(CoreSysAttributes):
)
raise StoreGitError() from err
async def remove(self) -> None:
async def _remove(self):
"""Remove a repository."""
if self.lock.locked():
_LOGGER.warning(
"Cannot remove add-on repository %s, there is already a task in progress",
self.url,
)
_LOGGER.warning("There is already a task in progress")
return
_LOGGER.info("Removing custom add-on repository %s", self.url)
def _remove_git_dir(path: Path) -> None:
if not path.is_dir():
return
@ -253,3 +205,30 @@ class GitRepo(CoreSysAttributes):
async with self.lock:
await self.sys_run_in_executor(_remove_git_dir, self.path)
class GitRepoHassIO(GitRepo):
"""Supervisor add-ons repository."""
builtin: bool = False
def __init__(self, coresys):
"""Initialize Git Supervisor add-on repository."""
super().__init__(coresys, coresys.config.path_addons_core, URL_HASSIO_ADDONS)
class GitRepoCustom(GitRepo):
"""Custom add-ons repository."""
builtin: bool = False
def __init__(self, coresys, url):
"""Initialize custom Git Supervisor addo-n repository."""
path = Path(coresys.config.path_addons_git, get_hash_from_repository(url))
super().__init__(coresys, path, url)
async def remove(self):
"""Remove a custom repository."""
_LOGGER.info("Removing custom add-on repository %s", self.url)
await self._remove()
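
The main-side `reset()` recovers a corrupted checkout by cloning into a temporary directory next to the target and swapping it into place. A minimal sketch of that clone-to-temp-then-rename idea using plain directories instead of GitPython:

```python
from pathlib import Path
import shutil
from tempfile import TemporaryDirectory, mkdtemp

def rebuild_in_place(target: Path, populate) -> None:
    """Rebuild target by filling a sibling temp dir, then swapping it in.

    Keeping the temp dir on the same filesystem (dir=target.parent) keeps
    the final rename cheap; cleanup of the temp dir is harmless after the
    move, mirroring the try/finally in GitRepo.reset above.
    """
    with TemporaryDirectory(dir=target.parent) as tmp:
        fresh = Path(tmp) / "clone"
        fresh.mkdir()
        populate(fresh)            # the real code runs `git clone` here
        shutil.rmtree(target)      # drop the corrupted checkout
        fresh.rename(target)       # move the fresh clone into place

base = Path(mkdtemp())
target = base / "repo"
target.mkdir()
(target / "stale").write_text("corrupt data")
rebuild_in_place(target, lambda p: (p / "repository.json").write_text("{}"))
assert (target / "repository.json").exists() and not (target / "stale").exists()
shutil.rmtree(base)
```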

View File

@ -1,8 +1,5 @@
"""Represent a Supervisor repository."""
from __future__ import annotations
from abc import ABC, abstractmethod
import logging
from pathlib import Path
@ -10,19 +7,12 @@ import voluptuous as vol
from supervisor.utils import get_latest_mtime
from ..const import (
ATTR_MAINTAINER,
ATTR_NAME,
ATTR_URL,
FILE_SUFFIX_CONFIGURATION,
REPOSITORY_CORE,
REPOSITORY_LOCAL,
)
from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_URL, FILE_SUFFIX_CONFIGURATION
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import ConfigurationFileError, StoreError
from ..utils.common import read_json_or_yaml_file
from .const import BuiltinRepository
from .git import GitRepo
from .const import StoreType
from .git import GitRepo, GitRepoCustom, GitRepoHassIO
from .utils import get_hash_from_repository
from .validate import SCHEMA_REPOSITORY_CONFIG
@ -30,48 +20,27 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
UNKNOWN = "unknown"
class Repository(CoreSysAttributes, ABC):
class Repository(CoreSysAttributes):
"""Add-on store repository in Supervisor."""
def __init__(self, coresys: CoreSys, repository: str, local_path: Path, slug: str):
def __init__(self, coresys: CoreSys, repository: str):
"""Initialize add-on store repository object."""
self._slug: str = slug
self._local_path: Path = local_path
self.coresys: CoreSys = coresys
self.git: GitRepo | None = None
self.source: str = repository
@staticmethod
def create(coresys: CoreSys, repository: str) -> Repository:
"""Create a repository instance."""
if repository in BuiltinRepository:
return Repository._create_builtin(coresys, BuiltinRepository(repository))
if repository == StoreType.LOCAL:
self._slug = repository
self._type = StoreType.LOCAL
self._latest_mtime: float | None = None
elif repository == StoreType.CORE:
self.git = GitRepoHassIO(coresys)
self._slug = repository
self._type = StoreType.CORE
else:
return Repository._create_custom(coresys, repository)
@staticmethod
def _create_builtin(coresys: CoreSys, builtin: BuiltinRepository) -> Repository:
"""Create builtin repository."""
if builtin == BuiltinRepository.LOCAL:
slug = REPOSITORY_LOCAL
local_path = coresys.config.path_addons_local
return RepositoryLocal(coresys, local_path, slug)
elif builtin == BuiltinRepository.CORE:
slug = REPOSITORY_CORE
local_path = coresys.config.path_addons_core
else:
# For other builtin repositories (URL-based)
slug = get_hash_from_repository(builtin.value)
local_path = coresys.config.path_addons_git / slug
return RepositoryGitBuiltin(
coresys, builtin.value, local_path, slug, builtin.git_url
)
@staticmethod
def _create_custom(coresys: CoreSys, repository: str) -> RepositoryCustom:
"""Create custom repository."""
slug = get_hash_from_repository(repository)
local_path = coresys.config.path_addons_git / slug
return RepositoryCustom(coresys, repository, local_path, slug)
self.git = GitRepoCustom(coresys, repository)
self._slug = get_hash_from_repository(repository)
self._type = StoreType.GIT
def __repr__(self) -> str:
"""Return internal representation."""
@ -83,9 +52,9 @@ class Repository(CoreSysAttributes, ABC):
return self._slug
@property
def local_path(self) -> Path:
"""Return local path to repository."""
return self._local_path
def type(self) -> StoreType:
"""Return type of the store."""
return self._type
@property
def data(self) -> dict:
@ -107,123 +76,55 @@ class Repository(CoreSysAttributes, ABC):
"""Return url of repository."""
return self.data.get(ATTR_MAINTAINER, UNKNOWN)
@property
@abstractmethod
def is_builtin(self) -> bool:
"""Return True if this is a built-in repository."""
def validate(self) -> bool:
"""Check if store is valid.
@abstractmethod
async def validate(self) -> bool:
"""Check if store is valid."""
@abstractmethod
async def load(self) -> None:
"""Load addon repository."""
@abstractmethod
async def update(self) -> bool:
"""Update add-on repository.
Returns True if the repository was updated.
Must be run in executor.
"""
@abstractmethod
async def remove(self) -> None:
"""Remove add-on repository."""
@abstractmethod
async def reset(self) -> None:
"""Reset add-on repository to fix corruption issue with files."""
class RepositoryBuiltin(Repository, ABC):
"""A built-in add-on repository."""
@property
def is_builtin(self) -> bool:
"""Return True if this is a built-in repository."""
return True
async def validate(self) -> bool:
"""Assume built-in repositories are always valid."""
return True
async def remove(self) -> None:
"""Raise. Not supported for built-in repositories."""
raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
class RepositoryGit(Repository, ABC):
"""A git based add-on repository."""
_git: GitRepo
async def load(self) -> None:
"""Load addon repository."""
await self._git.load()
async def update(self) -> bool:
"""Update add-on repository.
Returns True if the repository was updated.
"""
if not await self.validate():
return False
return await self._git.pull()
async def validate(self) -> bool:
"""Check if store is valid."""
def validate_file() -> bool:
# If exists?
for filetype in FILE_SUFFIX_CONFIGURATION:
repository_file = Path(self._git.path / f"repository{filetype}")
if repository_file.exists():
break
if not repository_file.exists():
return False
# If valid?
try:
SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
except (ConfigurationFileError, vol.Invalid) as err:
_LOGGER.warning("Could not validate repository configuration %s", err)
return False
if self.type != StoreType.GIT:
return True
return await self.sys_run_in_executor(validate_file)
# If exists?
for filetype in FILE_SUFFIX_CONFIGURATION:
repository_file = Path(self.git.path / f"repository{filetype}")
if repository_file.exists():
break
async def reset(self) -> None:
"""Reset add-on repository to fix corruption issue with files."""
await self._git.reset()
await self.load()
if not repository_file.exists():
return False
# If valid?
try:
SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
except (ConfigurationFileError, vol.Invalid) as err:
_LOGGER.warning("Could not validate repository configuration %s", err)
return False
class RepositoryLocal(RepositoryBuiltin):
"""A local add-on repository."""
def __init__(self, coresys: CoreSys, local_path: Path, slug: str) -> None:
"""Initialize object."""
super().__init__(coresys, BuiltinRepository.LOCAL.value, local_path, slug)
self._latest_mtime: float | None = None
return True
async def load(self) -> None:
"""Load addon repository."""
self._latest_mtime, _ = await self.sys_run_in_executor(
get_latest_mtime, self.local_path
)
if not self.git:
self._latest_mtime, _ = await self.sys_run_in_executor(
get_latest_mtime, self.sys_config.path_addons_local
)
return
await self.git.load()
async def update(self) -> bool:
"""Update add-on repository.
Returns True if the repository was updated.
"""
if not await self.sys_run_in_executor(self.validate):
return False
if self.type != StoreType.LOCAL:
return await self.git.pull()
# Check local modifications
latest_mtime, modified_path = await self.sys_run_in_executor(
get_latest_mtime, self.local_path
get_latest_mtime, self.sys_config.path_addons_local
)
if self._latest_mtime != latest_mtime:
_LOGGER.debug(
@ -236,37 +137,9 @@ class RepositoryLocal(RepositoryBuiltin):
return False
async def reset(self) -> None:
"""Raise. Not supported for local repository."""
raise StoreError(
"Can't reset local repository as it is not git based!", _LOGGER.error
)
class RepositoryGitBuiltin(RepositoryBuiltin, RepositoryGit):
"""A built-in add-on repository based on git."""
def __init__(
self, coresys: CoreSys, repository: str, local_path: Path, slug: str, url: str
) -> None:
"""Initialize object."""
super().__init__(coresys, repository, local_path, slug)
self._git = GitRepo(coresys, local_path, url)
class RepositoryCustom(RepositoryGit):
"""A custom add-on repository."""
def __init__(self, coresys: CoreSys, url: str, local_path: Path, slug: str) -> None:
"""Initialize object."""
super().__init__(coresys, url, local_path, slug)
self._git = GitRepo(coresys, local_path, url)
@property
def is_builtin(self) -> bool:
"""Return True if this is a built-in repository."""
return False
async def remove(self) -> None:
"""Remove add-on repository."""
await self._git.remove()
if self.type != StoreType.GIT:
raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
await self.git.remove()
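
On the main side `Repository` becomes an abstract base whose `create()` factory picks the concrete subclass. A stripped-down sketch of that dispatch, with the class bodies reduced to the `is_builtin` contract:

```python
from __future__ import annotations

from abc import ABC, abstractmethod

class Repository(ABC):
    """Add-on store repository (sketch of the main-side hierarchy)."""

    def __init__(self, source: str) -> None:
        self.source = source

    @staticmethod
    def create(source: str) -> Repository:
        """Dispatch to the right concrete class based on the source string."""
        if source == "local":
            return RepositoryLocal(source)
        if source == "core":
            return RepositoryGitBuiltin(source)
        return RepositoryCustom(source)

    @property
    @abstractmethod
    def is_builtin(self) -> bool:
        """Return True if this is a built-in repository."""

class RepositoryLocal(Repository):
    @property
    def is_builtin(self) -> bool:
        return True

class RepositoryGitBuiltin(Repository):
    @property
    def is_builtin(self) -> bool:
        return True

class RepositoryCustom(Repository):
    @property
    def is_builtin(self) -> bool:
        return False

assert Repository.create("core").is_builtin
assert not Repository.create("https://example.com/repo").is_builtin
```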

View File

@ -4,7 +4,18 @@ import voluptuous as vol
from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_REPOSITORIES, ATTR_URL
from ..validate import RE_REPOSITORY
from .const import BuiltinRepository
from .const import StoreType
URL_COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
URL_ESPHOME = "https://github.com/esphome/home-assistant-addon"
URL_MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
BUILTIN_REPOSITORIES = {
StoreType.CORE,
StoreType.LOCAL,
URL_COMMUNITY_ADDONS,
URL_ESPHOME,
URL_MUSIC_ASSISTANT,
}
# pylint: disable=no-value-for-parameter
SCHEMA_REPOSITORY_CONFIG = vol.Schema(
@ -17,9 +28,18 @@ SCHEMA_REPOSITORY_CONFIG = vol.Schema(
)
def ensure_builtin_repositories(addon_repositories: list[str]) -> list[str]:
"""Ensure builtin repositories are in list.
Note: This should not be used in validation as the resulting list is not
stable. This can have side effects when comparing data later on.
"""
return list(set(addon_repositories) | BUILTIN_REPOSITORIES)
def validate_repository(repository: str) -> str:
"""Validate a valid repository."""
if repository in BuiltinRepository:
if repository in [StoreType.CORE, StoreType.LOCAL]:
return repository
data = RE_REPOSITORY.match(repository)
@ -35,12 +55,10 @@ def validate_repository(repository: str) -> str:
repositories = vol.All([validate_repository], vol.Unique())
DEFAULT_REPOSITORIES = {repo.value for repo in BuiltinRepository}
SCHEMA_STORE_FILE = vol.Schema(
{
vol.Optional(
ATTR_REPOSITORIES, default=lambda: list(DEFAULT_REPOSITORIES)
ATTR_REPOSITORIES, default=list(BUILTIN_REPOSITORIES)
): repositories,
},
extra=vol.REMOVE_EXTRA,
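
The docstring above warns that `ensure_builtin_repositories` must not be used in validation because its output ordering is unstable. A short illustration of why, with placeholder repository names:

```python
# Set union discards order, so two calls with the same input can produce
# differently ordered lists, which breaks naive equality checks on stored
# data. Illustrative values only.
BUILTIN_REPOSITORIES = {"core", "local"}

def ensure_builtin_repositories(addon_repositories: list[str]) -> list[str]:
    return list(set(addon_repositories) | BUILTIN_REPOSITORIES)

result = ensure_builtin_repositories(["https://example.com/repo"])
assert set(result) == {"core", "local", "https://example.com/repo"}
# ...but the list order is an implementation detail of the hash seed:
print(result)
```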

View File

@ -46,7 +46,7 @@ def _check_connectivity_throttle_period(coresys: CoreSys, *_) -> timedelta:
if coresys.supervisor.connectivity:
return timedelta(minutes=10)
return timedelta(seconds=5)
return timedelta(seconds=30)
class Supervisor(CoreSysAttributes):
@ -106,22 +106,17 @@ class Supervisor(CoreSysAttributes):
return AwesomeVersion(SUPERVISOR_VERSION)
@property
def latest_version(self) -> AwesomeVersion | None:
"""Return last available version of ."""
def latest_version(self) -> AwesomeVersion:
"""Return last available version of Home Assistant."""
return self.sys_updater.version_supervisor
@property
def default_image(self) -> str:
"""Return the default image for this system."""
return f"ghcr.io/home-assistant/{self.sys_arch.supervisor}-hassio-supervisor"
@property
def image(self) -> str | None:
"""Return image name of Supervisor container."""
def image(self) -> str:
"""Return image name of Home Assistant container."""
return self.instance.image
@property
def arch(self) -> str | None:
def arch(self) -> str:
"""Return arch of the Supervisor container."""
return self.instance.arch
@ -197,19 +192,13 @@ class Supervisor(CoreSysAttributes):
async def update(self, version: AwesomeVersion | None = None) -> None:
"""Update Supervisor version."""
version = version or self.latest_version or self.version
version = version or self.latest_version
if version == self.version:
if version == self.sys_supervisor.version:
raise SupervisorUpdateError(
f"Version {version!s} is already installed", _LOGGER.warning
)
image = self.sys_updater.image_supervisor or self.instance.image
if not image:
raise SupervisorUpdateError(
"Cannot determine image to use for supervisor update!", _LOGGER.error
)
# First update own AppArmor
try:
await self.update_apparmor()
@ -222,8 +211,12 @@ class Supervisor(CoreSysAttributes):
# Update container
_LOGGER.info("Update Supervisor to version %s", version)
try:
await self.instance.install(version, image=image)
await self.instance.update_start_tag(image, version)
await self.instance.install(
version, image=self.sys_updater.image_supervisor
)
await self.instance.update_start_tag(
self.sys_updater.image_supervisor, version
)
except DockerError as err:
self.sys_resolution.create_issue(
IssueType.UPDATE_FAILED, ContextType.SUPERVISOR
@ -234,7 +227,7 @@ class Supervisor(CoreSysAttributes):
) from err
self.sys_config.version = version
self.sys_config.image = image
self.sys_config.image = self.sys_updater.image_supervisor
await self.sys_config.save_data()
self.sys_create_task(self.sys_core.stop())
@ -291,16 +284,14 @@ class Supervisor(CoreSysAttributes):
limit=JobExecutionLimit.THROTTLE,
throttle_period=_check_connectivity_throttle_period,
)
async def check_connectivity(self) -> None:
"""Check the Internet connectivity from Supervisor's point of view."""
async def check_connectivity(self):
"""Check the connection."""
timeout = aiohttp.ClientTimeout(total=10)
try:
await self.sys_websession.head(
"https://checkonline.home-assistant.io/online.txt", timeout=timeout
)
except (ClientError, TimeoutError) as err:
_LOGGER.debug("Supervisor Connectivity check failed: %s", err)
except (ClientError, TimeoutError):
self.connectivity = False
else:
_LOGGER.debug("Supervisor Connectivity check succeeded")
self.connectivity = True
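
The throttle helper above shortens the recheck interval while connectivity is down. A tiny standalone version of that state-dependent timing decision, without the Supervisor Job machinery:

```python
from datetime import timedelta

class ConnectivityState:
    """State-dependent throttle, standalone sketch of the helper above."""

    def __init__(self) -> None:
        self.connectivity = True

    def throttle_period(self) -> timedelta:
        # Healthy: back off to 10 minutes between checks.
        # Unhealthy: retry every 5 seconds (30 seconds on the 2025.06.0 side).
        if self.connectivity:
            return timedelta(minutes=10)
        return timedelta(seconds=5)

state = ConnectivityState()
assert state.throttle_period() == timedelta(minutes=10)
state.connectivity = False
assert state.throttle_period() == timedelta(seconds=5)
```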

View File

@ -27,7 +27,7 @@ from .const import (
BusEvent,
UpdateChannel,
)
from .coresys import CoreSys, CoreSysAttributes
from .coresys import CoreSysAttributes
from .exceptions import (
CodeNotaryError,
CodeNotaryUntrusted,
@ -45,7 +45,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
class Updater(FileConfiguration, CoreSysAttributes):
"""Fetch last versions from version.json."""
def __init__(self, coresys: CoreSys) -> None:
def __init__(self, coresys):
"""Initialize updater."""
super().__init__(FILE_HASSIO_UPDATER, SCHEMA_UPDATER_CONFIG)
self.coresys = coresys

View File

@ -56,7 +56,7 @@ async def check_port(address: IPv4Address, port: int) -> bool:
return True
def check_exception_chain(err: BaseException, object_type: Any) -> bool:
def check_exception_chain(err: Exception, object_type: Any) -> bool:
"""Check if exception chain include sub exception.
It's not full recursive because we need mostly only access to the latest.
@ -70,7 +70,7 @@ def check_exception_chain(err: BaseException, object_type: Any) -> bool:
return check_exception_chain(err.__context__, object_type)
def get_message_from_exception_chain(err: BaseException) -> str:
def get_message_from_exception_chain(err: Exception) -> str:
"""Get the first message from the exception chain."""
if str(err):
return str(err)
@ -119,8 +119,8 @@ def remove_folder_with_excludes(
Must be run in executor.
"""
with TemporaryDirectory(dir=tmp_dir) as temp_path_str:
temp_path = Path(temp_path_str)
with TemporaryDirectory(dir=tmp_dir) as temp_path:
temp_path = Path(temp_path)
moved_files: list[Path] = []
for item in folder.iterdir():
if any(item.match(exclude) for exclude in excludes):
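
`check_exception_chain` above walks the implicit `__context__` chain rather than the full cause tree. A self-contained sketch of the same walk (the real helper types `object_type` as `Any`):

```python
def check_exception_chain(err: BaseException, object_type: type) -> bool:
    """Return True if err, or anything in its implicit chain, is object_type."""
    if issubclass(type(err), object_type):
        return True
    if not err.__context__:
        return False
    return check_exception_chain(err.__context__, object_type)

try:
    try:
        raise KeyError("inner")
    except KeyError as inner:
        raise RuntimeError("outer") from inner
except RuntimeError as outer:
    # __context__ is set implicitly when raising inside an except block,
    # so the inner KeyError is found through the outer RuntimeError.
    assert check_exception_chain(outer, KeyError)
```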

View File

@ -87,15 +87,13 @@ class FileConfiguration:
if not self._file:
raise RuntimeError("Path to config file must be set!")
def _read_data(file: Path) -> dict[str, Any]:
if file.is_file():
def _read_data() -> dict[str, Any]:
if self._file.is_file():
with suppress(ConfigurationFileError):
return read_json_or_yaml_file(file)
return read_json_or_yaml_file(self._file)
return _DEFAULT
self._data = await asyncio.get_running_loop().run_in_executor(
None, _read_data, self._file
)
self._data = await asyncio.get_running_loop().run_in_executor(None, _read_data)
# Validate
try:
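
The change above passes the target file to `run_in_executor` as an explicit argument instead of closing over `self`. A minimal sketch of forwarding positional arguments to an executor callable:

```python
import asyncio
from pathlib import Path
import tempfile

def _read_data(file: Path) -> str:
    """Blocking read; safe because it runs in a worker thread."""
    return file.read_text() if file.is_file() else ""

async def main() -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as handle:
        handle.write('{"demo": true}')
    target = Path(handle.name)
    # Positional args after the callable are forwarded by run_in_executor,
    # which avoids the closure over self._file shown in the hunk above.
    data = await asyncio.get_running_loop().run_in_executor(None, _read_data, target)
    assert data == '{"demo": true}'
    target.unlink()

asyncio.run(main())
```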

View File

@ -3,9 +3,9 @@
from __future__ import annotations
import asyncio
from collections.abc import Awaitable, Callable
from collections.abc import Awaitable, Callable, Coroutine
import logging
from typing import Any, Protocol, cast
from typing import Any, cast
from dbus_fast import (
ErrorType,
@ -46,20 +46,6 @@ DBUS_INTERFACE_PROPERTIES: str = "org.freedesktop.DBus.Properties"
DBUS_METHOD_GETALL: str = "org.freedesktop.DBus.Properties.GetAll"
class GetWithUnpack(Protocol):
"""Protocol class for dbus get signature."""
def __call__(self, *, unpack_variants: bool = True) -> Awaitable[Any]:
"""Signature for dbus get unpack kwarg."""
class UpdatePropertiesCallback(Protocol):
"""Protocol class for update properties callback."""
def __call__(self, changed: dict[str, Any] | None = None) -> Awaitable[None]:
"""Signature for an update properties callback function."""
class DBus:
"""DBus handler."""
@ -230,17 +216,10 @@ class DBus:
return self._proxy_obj is not None
@property
def supports_properties(self) -> bool:
"""Return true if properties interface supported by DBus object."""
return DBUS_INTERFACE_PROPERTIES in self._proxies
@property
def properties(self) -> DBusCallWrapper:
def properties(self) -> DBusCallWrapper | None:
"""Get properties proxy interface."""
if not self.supports_properties:
raise DBusInterfaceError(
f"DBus Object does not have interface {DBUS_INTERFACE_PROPERTIES}"
)
if DBUS_INTERFACE_PROPERTIES not in self._proxies:
return None
return DBusCallWrapper(self, DBUS_INTERFACE_PROPERTIES)
@property
@ -252,12 +231,16 @@ class DBus:
async def get_properties(self, interface: str) -> dict[str, Any]:
"""Read all properties from interface."""
return await self.properties.call("get_all", interface)
if not self.properties:
raise DBusInterfaceError(
f"DBus Object does not have interface {DBUS_INTERFACE_PROPERTIES}"
)
return await self.properties.call_get_all(interface)
def sync_property_changes(
self,
interface: str,
update: UpdatePropertiesCallback,
update: Callable[[dict[str, Any]], Coroutine[None]],
) -> Callable:
"""Sync property changes for interface with cache.
@ -266,7 +249,7 @@ class DBus:
async def sync_property_change(
prop_interface: str, changed: dict[str, Variant], invalidated: list[str]
) -> None:
):
"""Sync property changes to cache."""
if interface != prop_interface:
return
@ -284,12 +267,12 @@ class DBus:
else:
await update(changed)
self.properties.on("properties_changed", sync_property_change)
self.properties.on_properties_changed(sync_property_change)
return sync_property_change
def stop_sync_property_changes(self, sync_property_change: Callable):
"""Stop syncing property changes with cache."""
self.properties.off("properties_changed", sync_property_change)
self.properties.off_properties_changed(sync_property_change)
def disconnect(self):
"""Remove all active signal listeners."""
@ -373,11 +356,10 @@ class DBusCallWrapper:
if not self._proxy:
return DBusCallWrapper(self.dbus, f"{self.interface}.{name}")
dbus_proxy = self._proxy
dbus_parts = name.split("_", 1)
dbus_type = dbus_parts[0]
if not hasattr(dbus_proxy, name):
if not hasattr(self._proxy, name):
message = f"{name} does not exist in D-Bus interface {self.interface}!"
if dbus_type == "call":
raise DBusInterfaceMethodError(message, _LOGGER.error)
@ -401,7 +383,7 @@ class DBusCallWrapper:
if dbus_type == "on":
def _on_signal(callback: Callable):
getattr(dbus_proxy, name)(callback, unpack_variants=True)
getattr(self._proxy, name)(callback, unpack_variants=True)
# pylint: disable=protected-access
self.dbus._add_signal_monitor(self.interface, dbus_name, callback)
@ -410,7 +392,7 @@ class DBusCallWrapper:
return _on_signal
def _off_signal(callback: Callable):
getattr(dbus_proxy, name)(callback, unpack_variants=True)
getattr(self._proxy, name)(callback, unpack_variants=True)
# pylint: disable=protected-access
if (
@ -439,7 +421,7 @@ class DBusCallWrapper:
def _method_wrapper(*args, unpack_variants: bool = True) -> Awaitable:
return DBus.call_dbus(
dbus_proxy, name, *args, unpack_variants=unpack_variants
self._proxy, name, *args, unpack_variants=unpack_variants
)
return _method_wrapper
@ -447,7 +429,7 @@ class DBusCallWrapper:
elif dbus_type == "set":
def _set_wrapper(*args) -> Awaitable:
return DBus.call_dbus(dbus_proxy, name, *args, unpack_variants=False)
return DBus.call_dbus(self._proxy, name, *args, unpack_variants=False)
return _set_wrapper
@ -466,7 +448,7 @@ class DBusCallWrapper:
def get(self, name: str, *, unpack_variants: bool = True) -> Awaitable[Any]:
"""Get a dbus property value."""
return cast(GetWithUnpack, self._dbus_action(f"get_{name}"))(
return cast(Callable[[bool], Awaitable[Any]], self._dbus_action(f"get_{name}"))(
unpack_variants=unpack_variants
)
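
The main side adds `typing.Protocol` classes because a plain `Callable[...]` cannot describe keyword-only parameters like `unpack_variants`. A small runnable sketch of that technique:

```python
import asyncio
from collections.abc import Awaitable
from typing import Any, Protocol

class GetWithUnpack(Protocol):
    """Callable whose only parameter is the keyword-only unpack_variants flag."""

    def __call__(self, *, unpack_variants: bool = True) -> Awaitable[Any]:
        """Signature for a dbus property getter."""

async def fake_get(*, unpack_variants: bool = True) -> dict[str, Any]:
    """Stand-in getter that reports whether variants would be unpacked."""
    return {"unpacked": unpack_variants}

getter: GetWithUnpack = fake_get  # accepted: the signature matches the Protocol
assert asyncio.run(getter(unpack_variants=False)) == {"unpacked": False}
```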

View File

@ -3,6 +3,7 @@
import asyncio
from functools import partial
import logging
from typing import Any
from aiohttp.web_exceptions import HTTPBadGateway, HTTPServiceUnavailable
import sentry_sdk
@ -12,7 +13,6 @@ from sentry_sdk.integrations.dedupe import DedupeIntegration
from sentry_sdk.integrations.excepthook import ExcepthookIntegration
from sentry_sdk.integrations.logging import LoggingIntegration
from sentry_sdk.integrations.threading import ThreadingIntegration
from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber
from ..const import SUPERVISOR_VERSION
from ..coresys import CoreSys
@ -27,7 +27,6 @@ def init_sentry(coresys: CoreSys) -> None:
"""Initialize sentry client."""
if not sentry_sdk.is_initialized():
_LOGGER.info("Initializing Supervisor Sentry")
denylist = DEFAULT_DENYLIST + ["psk", "ssid"]
# Don't use AsyncioIntegration(). We commonly handle task exceptions
# outside of tasks. This would cause exception we gracefully handle to
# be captured by sentry.
@ -36,7 +35,6 @@ def init_sentry(coresys: CoreSys) -> None:
before_send=partial(filter_data, coresys),
auto_enabling_integrations=False,
default_integrations=False,
event_scrubber=EventScrubber(denylist=denylist),
integrations=[
AioHttpIntegration(
failed_request_status_codes=frozenset(range(500, 600))
@ -58,6 +56,28 @@ def init_sentry(coresys: CoreSys) -> None:
)
def capture_event(event: dict[str, Any], only_once: str | None = None):
"""Capture an event and send to sentry.
Must be called in executor.
"""
if sentry_sdk.is_initialized():
if only_once and only_once not in only_once_events:
only_once_events.add(only_once)
sentry_sdk.capture_event(event)
async def async_capture_event(event: dict[str, Any], only_once: str | None = None):
"""Capture an event and send to sentry.
Safe to call from event loop.
"""
if sentry_sdk.is_initialized():
await asyncio.get_running_loop().run_in_executor(
None, capture_event, event, only_once
)
def capture_exception(err: BaseException) -> None:
"""Capture an exception and send to sentry.

View File

@ -107,17 +107,17 @@ async def journal_logs_reader(
# followed by a newline as separator to the next field.
if not data.endswith(b"\n"):
raise MalformedBinaryEntryError(
f"Failed parsing binary entry {data.decode('utf-8', errors='replace')}"
f"Failed parsing binary entry {data}"
)
field_name = name.decode("utf-8")
if field_name not in formatter_.required_fields:
name = name.decode("utf-8")
if name not in formatter_.required_fields:
# we must read to the end of the entry in the stream, so we can
# only continue the loop here
continue
# strip \n for simple fields before decoding
entries[field_name] = data[:-1].decode("utf-8")
entries[name] = data[:-1].decode("utf-8")
def _parse_boot_json(boot_json_bytes: bytes) -> tuple[int, str]:

View File

@ -9,7 +9,7 @@ from yaml import YAMLError, dump, load
try:
from yaml import CDumper as Dumper, CSafeLoader as SafeLoader
except ImportError:
from yaml import Dumper, SafeLoader # type: ignore
from yaml import Dumper, SafeLoader
from ..exceptions import YamlFileError

View File

@ -182,7 +182,7 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
}
}
),
vol.Optional(ATTR_ENABLE_IPV6, default=None): vol.Maybe(vol.Boolean()),
vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean(),
}
)
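
The main side makes `enable_ipv6` a nullable boolean with an explicit `None` default, so "never configured" is distinguishable from "disabled". A tiny voluptuous illustration of `vol.Maybe` with a `None` default:

```python
import voluptuous as vol

schema = vol.Schema(
    {vol.Optional("enable_ipv6", default=None): vol.Maybe(vol.Boolean())}
)

assert schema({}) == {"enable_ipv6": None}                     # unset stays None
assert schema({"enable_ipv6": True}) == {"enable_ipv6": True}  # booleans pass
assert schema({"enable_ipv6": None}) == {"enable_ipv6": None}  # explicit null ok
```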

View File

@ -18,7 +18,6 @@ from supervisor.const import AddonBoot, AddonState, BusEvent
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.manager import CommandReturn
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
from supervisor.hardware.helper import HwHelper
@ -28,7 +27,7 @@ from supervisor.utils.dt import utcnow
from .test_manager import BOOT_FAIL_ISSUE, BOOT_FAIL_SUGGESTIONS
from tests.common import get_fixture_path, is_in_list
from tests.common import get_fixture_path
from tests.const import TEST_ADDON_SLUG
@ -209,7 +208,7 @@ async def test_watchdog_on_stop(coresys: CoreSys, install_addon_ssh: Addon) -> N
async def test_listener_attached_on_install(
coresys: CoreSys, mock_amd64_arch_supported: None, test_repository
coresys: CoreSys, mock_amd64_arch_supported: None, repository
):
"""Test events listener attached on addon install."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
@ -242,7 +241,7 @@ async def test_listener_attached_on_install(
)
async def test_watchdog_during_attach(
coresys: CoreSys,
test_repository: Repository,
repository: Repository,
boot_timedelta: timedelta,
restart_count: int,
):
@ -710,7 +709,7 @@ async def test_local_example_install(
coresys: CoreSys,
container: MagicMock,
tmp_supervisor_data: Path,
test_repository,
repository,
mock_aarch64_arch_supported: None,
):
"""Test install of an addon."""
@ -820,7 +819,7 @@ async def test_paths_cache(coresys: CoreSys, install_addon_ssh: Addon):
with (
patch("supervisor.addons.addon.Path.exists", return_value=True),
patch("supervisor.store.repository.RepositoryLocal.update", return_value=True),
patch("supervisor.store.repository.Repository.update", return_value=True),
):
await coresys.store.reload(coresys.store.get("local"))
@ -841,25 +840,10 @@ async def test_addon_loads_wrong_image(
install_addon_ssh.persist["image"] = "local/aarch64-addon-ssh"
assert install_addon_ssh.image == "local/aarch64-addon-ssh"
with (
patch("pathlib.Path.is_file", return_value=True),
patch.object(
coresys.docker,
"run_command",
new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
) as mock_run_command,
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
with patch("pathlib.Path.is_file", return_value=True):
await install_addon_ssh.load()
container.remove.assert_called_with(force=True, v=True)
# one for removing the addon, one for removing the addon builder
assert coresys.docker.images.remove.call_count == 2
container.remove.assert_called_once_with(force=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "local/aarch64-addon-ssh:latest",
"force": True,
@ -868,18 +852,12 @@ async def test_addon_loads_wrong_image(
"image": "local/aarch64-addon-ssh:9.2.1",
"force": True,
}
mock_run_command.assert_called_once()
assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
command = mock_run_command.call_args.kwargs["command"]
assert is_in_list(
["--platform", "linux/amd64"],
command,
)
assert is_in_list(
["--tag", "local/amd64-addon-ssh:9.2.1"],
command,
coresys.docker.images.build.assert_called_once()
assert (
coresys.docker.images.build.call_args.kwargs["tag"]
== "local/amd64-addon-ssh:9.2.1"
)
assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
assert install_addon_ssh.image == "local/amd64-addon-ssh"
coresys.addons.data.save_data.assert_called_once()
@ -893,33 +871,15 @@ async def test_addon_loads_missing_image(
"""Test addon corrects a missing image on load."""
coresys.docker.images.get.side_effect = ImageNotFound("missing")
with (
patch("pathlib.Path.is_file", return_value=True),
patch.object(
coresys.docker,
"run_command",
new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
) as mock_run_command,
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
with patch("pathlib.Path.is_file", return_value=True):
await install_addon_ssh.load()
mock_run_command.assert_called_once()
assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
command = mock_run_command.call_args.kwargs["command"]
assert is_in_list(
["--platform", "linux/amd64"],
command,
)
assert is_in_list(
["--tag", "local/amd64-addon-ssh:9.2.1"],
command,
coresys.docker.images.build.assert_called_once()
assert (
coresys.docker.images.build.call_args.kwargs["tag"]
== "local/amd64-addon-ssh:9.2.1"
)
assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
assert install_addon_ssh.image == "local/amd64-addon-ssh"
@ -940,14 +900,7 @@ async def test_addon_load_succeeds_with_docker_errors(
# Image build failure
coresys.docker.images.build.side_effect = DockerException()
caplog.clear()
with (
patch("pathlib.Path.is_file", return_value=True),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
with patch("pathlib.Path.is_file", return_value=True):
await install_addon_ssh.load()
assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text

View File

@ -8,13 +8,10 @@ from supervisor.addons.addon import Addon
from supervisor.addons.build import AddonBuild
from supervisor.coresys import CoreSys
from tests.common import is_in_list
async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
"""Test platform set in container build args."""
"""Test platform set in docker args."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@ -22,23 +19,17 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest")
)
assert is_in_list(["--platform", "linux/amd64"], args["command"])
assert args["platform"] == "linux/amd64"
async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
"""Test dockerfile path in container build args."""
"""Test platform set in docker args."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@ -46,17 +37,12 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest")
)
assert is_in_list(["--file", "Dockerfile"], args["command"])
assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile")
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
"fixtures/addons/local/ssh/Dockerfile"
)
@ -64,9 +50,8 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: Addon):
"""Test dockerfile arch evaluation in container build args."""
"""Test platform set in docker args."""
build = await AddonBuild(coresys, install_addon_ssh).load_config()
with (
patch.object(
type(coresys.arch), "supported", new=PropertyMock(return_value=["aarch64"])
@ -74,17 +59,12 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
patch.object(
type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
args = await coresys.run_in_executor(
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
build.get_docker_args, AwesomeVersion("latest")
)
assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile.aarch64")
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
"fixtures/addons/local/ssh/Dockerfile.aarch64"
)

View File

@ -29,7 +29,7 @@ from supervisor.plugins.dns import PluginDns
from supervisor.resolution.const import ContextType, IssueType, SuggestionType
from supervisor.resolution.data import Issue, Suggestion
from supervisor.store.addon import AddonStore
from supervisor.store.repository import RepositoryLocal
from supervisor.store.repository import Repository
from supervisor.utils import check_exception_chain
from supervisor.utils.common import write_json_file
@ -67,7 +67,7 @@ async def fixture_remove_wait_boot(coresys: CoreSys) -> AsyncGenerator[None]:
@pytest.fixture(name="install_addon_example_image")
async def fixture_install_addon_example_image(
coresys: CoreSys, test_repository
coresys: CoreSys, repository
) -> Generator[Addon]:
"""Install local_example add-on with image."""
store = coresys.addons.store["local_example_image"]
@ -442,7 +442,7 @@ async def test_store_data_changes_during_update(
update_task = coresys.create_task(simulate_update())
await asyncio.sleep(0)
with patch.object(RepositoryLocal, "update", return_value=True):
with patch.object(Repository, "update", return_value=True):
await coresys.store.reload()
assert "image" not in coresys.store.data.addons["local_ssh"]

View File

@ -14,7 +14,6 @@ from supervisor.const import AddonState
from supervisor.coresys import CoreSys
from supervisor.docker.addon import DockerAddon
from supervisor.docker.const import ContainerState
from supervisor.docker.manager import CommandReturn
from supervisor.docker.monitor import DockerContainerStateEvent
from supervisor.exceptions import HassioError
from supervisor.store.repository import Repository
@ -54,7 +53,7 @@ async def test_addons_info(
# DEPRECATED - Remove with legacy routing logic on 1/2023
async def test_addons_info_not_installed(
api_client: TestClient, coresys: CoreSys, test_repository: Repository
api_client: TestClient, coresys: CoreSys, repository: Repository
):
"""Test getting addon info for not installed addon."""
resp = await api_client.get(f"/addons/{TEST_ADDON_SLUG}/info")
@ -240,19 +239,6 @@ async def test_api_addon_rebuild_healthcheck(
patch.object(Addon, "need_build", new=PropertyMock(return_value=True)),
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(DockerAddon, "run", new=container_events_task),
patch.object(
coresys.docker,
"run_command",
new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
),
patch.object(
DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
resp = await api_client.post("/addons/local_ssh/rebuild")
@ -261,98 +247,6 @@ async def test_api_addon_rebuild_healthcheck(
assert resp.status == 200
async def test_api_addon_rebuild_force(
api_client: TestClient,
coresys: CoreSys,
install_addon_ssh: Addon,
container: MagicMock,
tmp_supervisor_data,
path_extern,
):
"""Test rebuilding an image-based addon with force parameter."""
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
container.status = "running"
install_addon_ssh.path_data.mkdir()
container.attrs["Config"] = {"Healthcheck": "exists"}
await install_addon_ssh.load()
await asyncio.sleep(0)
assert install_addon_ssh.state == AddonState.STARTUP
state_changes: list[AddonState] = []
_container_events_task: asyncio.Task | None = None
async def container_events():
nonlocal state_changes
await install_addon_ssh.container_state_changed(
_create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.STOPPED)
)
state_changes.append(install_addon_ssh.state)
await install_addon_ssh.container_state_changed(
_create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.RUNNING)
)
state_changes.append(install_addon_ssh.state)
await asyncio.sleep(0)
await install_addon_ssh.container_state_changed(
_create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.HEALTHY)
)
async def container_events_task(*args, **kwargs):
nonlocal _container_events_task
_container_events_task = asyncio.create_task(container_events())
# Test 1: Without force, image-based addon should fail
with (
patch.object(AddonBuild, "is_valid", return_value=True),
patch.object(DockerAddon, "is_running", return_value=False),
patch.object(
Addon, "need_build", new=PropertyMock(return_value=False)
), # Image-based
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
):
resp = await api_client.post("/addons/local_ssh/rebuild")
assert resp.status == 400
result = await resp.json()
assert "Can't rebuild a image based add-on" in result["message"]
# Reset state for next test
state_changes.clear()
# Test 2: With force=True, image-based addon should succeed
with (
patch.object(AddonBuild, "is_valid", return_value=True),
patch.object(DockerAddon, "is_running", return_value=False),
patch.object(
Addon, "need_build", new=PropertyMock(return_value=False)
), # Image-based
patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
patch.object(DockerAddon, "run", new=container_events_task),
patch.object(
coresys.docker,
"run_command",
new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
),
patch.object(
DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
),
patch.object(
type(coresys.config),
"local_to_extern_path",
return_value="/addon/path/on/host",
),
):
resp = await api_client.post("/addons/local_ssh/rebuild", json={"force": True})
assert state_changes == [AddonState.STOPPED, AddonState.STARTUP]
assert install_addon_ssh.state == AddonState.STARTED
assert resp.status == 200
await _container_events_task
async def test_api_addon_uninstall(
api_client: TestClient,
coresys: CoreSys,
@ -533,7 +427,7 @@ async def test_addon_not_found(
("get", "/addons/local_ssh/logs/boots/1/follow", False),
],
)
@pytest.mark.usefixtures("test_repository")
@pytest.mark.usefixtures("repository")
async def test_addon_not_installed(
api_client: TestClient, method: str, url: str, json_expected: bool
):

View File

@ -3,7 +3,6 @@
from datetime import UTC, datetime, timedelta
from unittest.mock import AsyncMock, MagicMock, patch
from aiohttp.hdrs import WWW_AUTHENTICATE
from aiohttp.test_utils import TestClient
import pytest
@ -120,44 +119,16 @@ async def test_list_users(
]
@pytest.mark.parametrize(
("field", "api_client"),
[("username", TEST_ADDON_SLUG), ("user", TEST_ADDON_SLUG)],
indirect=["api_client"],
)
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
async def test_auth_json_success(
api_client: TestClient,
mock_check_login: AsyncMock,
install_addon_ssh: Addon,
field: str,
api_client: TestClient, mock_check_login: AsyncMock, install_addon_ssh: Addon
):
"""Test successful JSON auth."""
mock_check_login.return_value = True
resp = await api_client.post("/auth", json={field: "test", "password": "pass"})
resp = await api_client.post("/auth", json={"username": "test", "password": "pass"})
assert resp.status == 200
@pytest.mark.parametrize(
("user", "password", "api_client"),
[
(None, "password", TEST_ADDON_SLUG),
("user", None, TEST_ADDON_SLUG),
],
indirect=["api_client"],
)
async def test_auth_json_failure_none(
api_client: TestClient,
mock_check_login: AsyncMock,
install_addon_ssh: Addon,
user: str | None,
password: str | None,
):
"""Test failed JSON auth with none user or password."""
mock_check_login.return_value = True
resp = await api_client.post("/auth", json={"username": user, "password": password})
assert resp.status == 401
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
async def test_auth_json_invalid_credentials(
api_client: TestClient, mock_check_login: AsyncMock, install_addon_ssh: Addon
@ -167,8 +138,8 @@ async def test_auth_json_invalid_credentials(
resp = await api_client.post(
"/auth", json={"username": "test", "password": "wrong"}
)
assert WWW_AUTHENTICATE not in resp.headers
assert resp.status == 401
# Do we really want the API to return 400 here?
assert resp.status == 400
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@ -177,7 +148,7 @@ async def test_auth_json_empty_body(api_client: TestClient, install_addon_ssh: A
resp = await api_client.post(
"/auth", data="", headers={"Content-Type": "application/json"}
)
assert resp.status == 401
assert resp.status == 400
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@ -214,8 +185,8 @@ async def test_auth_urlencoded_failure(
data="username=test&password=fail",
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
assert WWW_AUTHENTICATE not in resp.headers
assert resp.status == 401
# Do we really want the API to return 400 here?
assert resp.status == 400
@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@ -226,7 +197,7 @@ async def test_auth_unsupported_content_type(
resp = await api_client.post(
"/auth", data="something", headers={"Content-Type": "text/plain"}
)
assert "Basic realm" in resp.headers[WWW_AUTHENTICATE]
# This probably should be 400 here for better consistency
assert resp.status == 401

View File

@ -19,7 +19,7 @@ async def test_api_docker_info(api_client: TestClient):
async def test_api_network_enable_ipv6(coresys: CoreSys, api_client: TestClient):
"""Test setting docker network for enabled IPv6."""
assert coresys.docker.config.enable_ipv6 is None
assert coresys.docker.config.enable_ipv6 is False
resp = await api_client.post("/docker/options", json={"enable_ipv6": True})
assert resp.status == 200

View File

@ -30,7 +30,7 @@ REPO_URL = "https://github.com/awesome-developer/awesome-repo"
async def test_api_store(
api_client: TestClient,
store_addon: AddonStore,
test_repository: Repository,
repository: Repository,
caplog: pytest.LogCaptureFixture,
):
"""Test /store REST API."""
@ -38,7 +38,7 @@ async def test_api_store(
result = await resp.json()
assert result["data"]["addons"][-1]["slug"] == store_addon.slug
assert result["data"]["repositories"][-1]["slug"] == test_repository.slug
assert result["data"]["repositories"][-1]["slug"] == repository.slug
assert (
f"Add-on {store_addon.slug} not supported on this platform" not in caplog.text
@ -73,25 +73,23 @@ async def test_api_store_addons_addon_version(
@pytest.mark.asyncio
async def test_api_store_repositories(
api_client: TestClient, test_repository: Repository
):
async def test_api_store_repositories(api_client: TestClient, repository: Repository):
"""Test /store/repositories REST API."""
resp = await api_client.get("/store/repositories")
result = await resp.json()
assert result["data"][-1]["slug"] == test_repository.slug
assert result["data"][-1]["slug"] == repository.slug
@pytest.mark.asyncio
async def test_api_store_repositories_repository(
api_client: TestClient, test_repository: Repository
api_client: TestClient, repository: Repository
):
"""Test /store/repositories/{repository} REST API."""
resp = await api_client.get(f"/store/repositories/{test_repository.slug}")
resp = await api_client.get(f"/store/repositories/{repository.slug}")
result = await resp.json()
assert result["data"]["slug"] == test_repository.slug
assert result["data"]["slug"] == repository.slug
async def test_api_store_add_repository(
@ -99,8 +97,8 @@ async def test_api_store_add_repository(
) -> None:
"""Test POST /store/repositories REST API."""
with (
patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
patch("supervisor.store.repository.Repository.load", return_value=None),
patch("supervisor.store.repository.Repository.validate", return_value=True),
):
response = await api_client.post(
"/store/repositories", json={"repository": REPO_URL}
@ -108,17 +106,18 @@ async def test_api_store_add_repository(
assert response.status == 200
assert REPO_URL in coresys.store.repository_urls
assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)
async def test_api_store_remove_repository(
api_client: TestClient, coresys: CoreSys, test_repository: Repository
api_client: TestClient, coresys: CoreSys, repository: Repository
):
"""Test DELETE /store/repositories/{repository} REST API."""
response = await api_client.delete(f"/store/repositories/{test_repository.slug}")
response = await api_client.delete(f"/store/repositories/{repository.slug}")
assert response.status == 200
assert test_repository.source not in coresys.store.repository_urls
assert test_repository.slug not in coresys.store.repositories
assert repository.source not in coresys.store.repository_urls
assert repository.slug not in coresys.store.repositories
async def test_api_store_update_healthcheck(
@ -330,7 +329,7 @@ async def test_store_addon_not_found(
("post", "/addons/local_ssh/update"),
],
)
@pytest.mark.usefixtures("test_repository")
@pytest.mark.usefixtures("repository")
async def test_store_addon_not_installed(api_client: TestClient, method: str, url: str):
"""Test store addon not installed error."""
resp = await api_client.request(method, url)

View File

@ -9,7 +9,12 @@ from blockbuster import BlockingError
import pytest
from supervisor.coresys import CoreSys
from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
from supervisor.exceptions import (
HassioError,
HostNotSupportedError,
StoreGitError,
StoreNotFound,
)
from supervisor.store.repository import Repository
from tests.api import common_test_api_advanced_logs
@ -33,10 +38,12 @@ async def test_api_supervisor_options_add_repository(
):
"""Test add a repository via POST /supervisor/options REST API."""
assert REPO_URL not in coresys.store.repository_urls
with pytest.raises(StoreNotFound):
coresys.store.get_from_url(REPO_URL)
with (
patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
patch("supervisor.store.repository.Repository.load", return_value=None),
patch("supervisor.store.repository.Repository.validate", return_value=True),
):
response = await api_client.post(
"/supervisor/options", json={"addons_repositories": [REPO_URL]}
@ -44,22 +51,23 @@ async def test_api_supervisor_options_add_repository(
assert response.status == 200
assert REPO_URL in coresys.store.repository_urls
assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)
async def test_api_supervisor_options_remove_repository(
api_client: TestClient, coresys: CoreSys, test_repository: Repository
api_client: TestClient, coresys: CoreSys, repository: Repository
):
"""Test remove a repository via POST /supervisor/options REST API."""
assert test_repository.source in coresys.store.repository_urls
assert test_repository.slug in coresys.store.repositories
assert repository.source in coresys.store.repository_urls
assert repository.slug in coresys.store.repositories
response = await api_client.post(
"/supervisor/options", json={"addons_repositories": []}
)
assert response.status == 200
assert test_repository.source not in coresys.store.repository_urls
assert test_repository.slug not in coresys.store.repositories
assert repository.source not in coresys.store.repository_urls
assert repository.slug not in coresys.store.repositories
@pytest.mark.parametrize("git_error", [None, StoreGitError()])
@ -68,9 +76,9 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
):
"""Test repositories skipped on error via POST /supervisor/options REST API."""
with (
patch("supervisor.store.repository.RepositoryGit.load", side_effect=git_error),
patch("supervisor.store.repository.RepositoryGit.validate", return_value=False),
patch("supervisor.store.repository.RepositoryCustom.remove"),
patch("supervisor.store.repository.Repository.load", side_effect=git_error),
patch("supervisor.store.repository.Repository.validate", return_value=False),
patch("supervisor.store.repository.Repository.remove"),
):
response = await api_client.post(
"/supervisor/options", json={"addons_repositories": [REPO_URL]}
@ -79,6 +87,8 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
assert response.status == 400
assert len(coresys.resolution.suggestions) == 0
assert REPO_URL not in coresys.store.repository_urls
with pytest.raises(StoreNotFound):
coresys.store.get_from_url(REPO_URL)
async def test_api_supervisor_options_repo_error_with_config_change(
@ -88,7 +98,7 @@ async def test_api_supervisor_options_repo_error_with_config_change(
assert not coresys.config.debug
with patch(
"supervisor.store.repository.RepositoryGit.load", side_effect=StoreGitError()
"supervisor.store.repository.Repository.load", side_effect=StoreGitError()
):
response = await api_client.post(
"/supervisor/options",
@ -261,7 +271,7 @@ async def test_api_supervisor_options_country(api_client: TestClient, coresys: C
@pytest.mark.parametrize(
("blockbuster", "option_value", "config_value"),
[("no_blockbuster", "on", False), ("no_blockbuster", "on-at-startup", True)],
[("no_blockbuster", "on", False), ("no_blockbuster", "on_at_startup", True)],
indirect=["blockbuster"],
)
async def test_api_supervisor_options_blocking_io(

View File

@ -2244,33 +2244,3 @@ async def test_get_upload_path_for_mount_location(
result = await manager.get_upload_path_for_location(mount)
assert result == mount.local_where
@pytest.mark.usefixtures(
"supervisor_internet", "tmp_supervisor_data", "path_extern", "install_addon_example"
)
async def test_backup_addon_skips_uninstalled(
coresys: CoreSys, caplog: pytest.LogCaptureFixture
):
"""Test restore installing new addon."""
await coresys.core.set_state(CoreState.RUNNING)
coresys.hardware.disk.get_disk_free_space = lambda x: 5000
assert "local_example" in coresys.addons.local
orig_store_addons = Backup.store_addons
async def mock_store_addons(*args, **kwargs):
# Mock an uninstall during the backup process
await coresys.addons.uninstall("local_example")
await orig_store_addons(*args, **kwargs)
with patch.object(Backup, "store_addons", new=mock_store_addons):
backup: Backup = await coresys.backups.do_backup_partial(
addons=["local_example"], folders=["ssl"]
)
assert "local_example" not in coresys.addons.local
assert not backup.addons
assert (
"Skipping backup of add-on local_example because it has been uninstalled"
in caplog.text
)

View File

@ -105,20 +105,6 @@ def reset_last_call(func, group: str | None = None) -> None:
get_job_decorator(func).set_last_call(datetime.min, group)
def is_in_list(a: list, b: list):
"""Check if all elements in list a are in list b in order.
Taken from https://stackoverflow.com/a/69175987/12156188.
"""
for c in a:
if c in b:
b = b[b.index(c) :]
else:
return False
return True
class MockResponse:
"""Mock response for aiohttp requests."""

View File

@ -66,7 +66,6 @@ from .dbus_service_mocks.base import DBusServiceMock
from .dbus_service_mocks.network_connection_settings import (
ConnectionSettings as ConnectionSettingsService,
)
from .dbus_service_mocks.network_dns_manager import DnsManager as DnsManagerService
from .dbus_service_mocks.network_manager import NetworkManager as NetworkManagerService
# pylint: disable=redefined-outer-name, protected-access
@ -132,7 +131,7 @@ async def docker() -> DockerAPI:
docker_obj.info.logging = "journald"
docker_obj.info.storage = "overlay2"
docker_obj.info.version = AwesomeVersion("1.0.0")
docker_obj.info.version = "1.0.0"
yield docker_obj
@ -221,14 +220,6 @@ async def network_manager_service(
yield network_manager_services["network_manager"]
@pytest.fixture
async def dns_manager_service(
network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
) -> AsyncGenerator[DnsManagerService]:
"""Return DNS Manager service mock."""
yield network_manager_services["network_dns_manager"]
@pytest.fixture(name="connection_settings_service")
async def fixture_connection_settings_service(
network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
@ -418,7 +409,7 @@ async def coresys(
coresys_obj.init_websession = AsyncMock()
# Don't remove files/folders related to addons and stores
with patch("supervisor.store.git.GitRepo.remove"):
with patch("supervisor.store.git.GitRepo._remove"):
yield coresys_obj
await coresys_obj.dbus.unload()
@ -591,7 +582,7 @@ def run_supervisor_state(request: pytest.FixtureRequest) -> Generator[MagicMock]
@pytest.fixture
def store_addon(coresys: CoreSys, tmp_path, test_repository):
def store_addon(coresys: CoreSys, tmp_path, repository):
"""Store add-on fixture."""
addon_obj = AddonStore(coresys, "test_store_addon")
@ -604,16 +595,23 @@ def store_addon(coresys: CoreSys, tmp_path, test_repository):
@pytest.fixture
async def test_repository(coresys: CoreSys):
"""Test add-on store repository fixture."""
async def repository(coresys: CoreSys):
"""Repository fixture."""
coresys.store._data[ATTR_REPOSITORIES].remove(
"https://github.com/hassio-addons/repository"
)
coresys.store._data[ATTR_REPOSITORIES].remove(
"https://github.com/esphome/home-assistant-addon"
)
coresys.config._data[ATTR_ADDONS_CUSTOM_LIST] = []
with (
patch("supervisor.store.validate.BUILTIN_REPOSITORIES", {"local", "core"}),
patch("supervisor.store.git.GitRepo.load", return_value=None),
):
await coresys.store.load()
repository_obj = Repository.create(
repository_obj = Repository(
coresys, "https://github.com/awesome-developer/awesome-repo"
)
@ -626,7 +624,7 @@ async def test_repository(coresys: CoreSys):
@pytest.fixture
async def install_addon_ssh(coresys: CoreSys, test_repository):
async def install_addon_ssh(coresys: CoreSys, repository):
"""Install local_ssh add-on."""
store = coresys.addons.store[TEST_ADDON_SLUG]
await coresys.addons.data.install(store)
@ -638,7 +636,7 @@ async def install_addon_ssh(coresys: CoreSys, test_repository):
@pytest.fixture
async def install_addon_example(coresys: CoreSys, test_repository):
async def install_addon_example(coresys: CoreSys, repository):
"""Install local_example add-on."""
store = coresys.addons.store["local_example"]
await coresys.addons.data.install(store)
@ -764,6 +762,16 @@ async def capture_exception() -> Mock:
yield capture_exception
@pytest.fixture
async def capture_event() -> Mock:
"""Mock capture event for testing."""
with (
patch("supervisor.utils.sentry.sentry_sdk.is_initialized", return_value=True),
patch("supervisor.utils.sentry.sentry_sdk.capture_event") as capture_event,
):
yield capture_event
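# Usage sketch (hypothetical test name; the triggering action is whatever is
# under test): the fixture yields the patched sentry_sdk.capture_event mock,
# so tests can assert against it directly.
#
# async def test_event_captured(coresys: CoreSys, capture_event: Mock):
#     ...  # perform an action that reports an event to Sentry
#     capture_event.assert_called_once()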
@pytest.fixture
async def os_available(request: pytest.FixtureRequest) -> None:
"""Mock os as available."""

View File

@ -55,13 +55,13 @@ async def test_network_interface_ethernet(
interface = NetworkInterface("/org/freedesktop/NetworkManager/Devices/1")
assert interface.sync_properties is False
assert interface.interface_name is None
assert interface.name is None
assert interface.type is None
await interface.connect(dbus_session_bus)
assert interface.sync_properties is True
assert interface.interface_name == TEST_INTERFACE_ETH_NAME
assert interface.name == TEST_INTERFACE_ETH_NAME
assert interface.type == DeviceType.ETHERNET
assert interface.managed is True
assert interface.wireless is None
@ -108,7 +108,7 @@ async def test_network_interface_wlan(
await interface.connect(dbus_session_bus)
assert interface.sync_properties is True
assert interface.interface_name == TEST_INTERFACE_WLAN_NAME
assert interface.name == TEST_INTERFACE_WLAN_NAME
assert interface.type == DeviceType.WIRELESS
assert interface.wireless is not None
assert interface.wireless.bitrate == 0

View File

@ -1,136 +0,0 @@
"""Test Docker manager."""
from unittest.mock import MagicMock
from docker.errors import DockerException
import pytest
from requests import RequestException
from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.exceptions import DockerError
async def test_run_command_success(docker: DockerAPI):
"""Test successful command execution."""
# Mock container and its methods
mock_container = MagicMock()
mock_container.wait.return_value = {"StatusCode": 0}
mock_container.logs.return_value = b"command output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
# Execute the command
result = docker.run_command(
image="alpine", version="3.18", command="echo hello", stdout=True, stderr=True
)
# Verify the result
assert isinstance(result, CommandReturn)
assert result.exit_code == 0
assert result.output == b"command output"
# Verify docker.containers.run was called correctly
docker.docker.containers.run.assert_called_once_with(
"alpine:3.18",
command="echo hello",
detach=True,
network=docker.network.name,
use_config_proxy=False,
stdout=True,
stderr=True,
)
# Verify container cleanup
mock_container.remove.assert_called_once_with(force=True, v=True)
async def test_run_command_with_defaults(docker: DockerAPI):
"""Test command execution with default parameters."""
# Mock container and its methods
mock_container = MagicMock()
mock_container.wait.return_value = {"StatusCode": 1}
mock_container.logs.return_value = b"error output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
# Execute the command with minimal parameters
result = docker.run_command(image="ubuntu")
# Verify the result
assert isinstance(result, CommandReturn)
assert result.exit_code == 1
assert result.output == b"error output"
# Verify docker.containers.run was called with defaults
docker.docker.containers.run.assert_called_once_with(
"ubuntu:latest", # default tag
command=None, # default command
detach=True,
network=docker.network.name,
use_config_proxy=False,
)
# Verify container.logs was called with default stdout/stderr
mock_container.logs.assert_called_once_with(stdout=True, stderr=True)
async def test_run_command_docker_exception(docker: DockerAPI):
"""Test command execution when Docker raises an exception."""
# Mock docker containers.run to raise DockerException
docker.docker.containers.run.side_effect = DockerException("Docker error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Docker error"):
docker.run_command(image="alpine", command="test")
async def test_run_command_request_exception(docker: DockerAPI):
"""Test command execution when requests raises an exception."""
# Mock docker containers.run to raise RequestException
docker.docker.containers.run.side_effect = RequestException("Connection error")
# Execute the command and expect DockerError
with pytest.raises(DockerError, match="Can't execute command: Connection error"):
docker.run_command(image="alpine", command="test")
async def test_run_command_cleanup_on_exception(docker: DockerAPI):
"""Test that container cleanup happens even when an exception occurs."""
# Mock container
mock_container = MagicMock()
# Mock docker.containers.run to return container, but container.wait to raise exception
docker.docker.containers.run.return_value = mock_container
mock_container.wait.side_effect = DockerException("Wait failed")
# Execute the command and expect DockerError
with pytest.raises(DockerError):
docker.run_command(image="alpine", command="test")
# Verify container cleanup still happened
mock_container.remove.assert_called_once_with(force=True, v=True)
async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
"""Test command execution with custom stdout/stderr settings."""
# Mock container and its methods
mock_container = MagicMock()
mock_container.wait.return_value = {"StatusCode": 0}
mock_container.logs.return_value = b"output"
# Mock docker containers.run to return our mock container
docker.docker.containers.run.return_value = mock_container
# Execute the command with custom stdout/stderr
result = docker.run_command(
image="alpine", command="test", stdout=False, stderr=True
)
# Verify container.logs was called with the correct parameters
mock_container.logs.assert_called_once_with(stdout=False, stderr=True)
# Verify the result
assert result.exit_code == 0
assert result.output == b"output"
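# The assertions above pin down the shape of CommandReturn; a minimal sketch,
# assuming it is a plain exit-code/output pair (illustrative only, not the
# actual supervisor.docker.manager definition):
#
# from dataclasses import dataclass
#
# @dataclass(frozen=True)
# class CommandReturn:
#     exit_code: int
#     output: bytes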

View File

@ -111,39 +111,3 @@ async def test_network_recreation(
network_params[ATTR_ENABLE_IPV6] = new_enable_ipv6
mock_create.assert_called_with(**network_params)
async def test_network_default_ipv6_for_new_installations():
"""Test that IPv6 is enabled by default when no user setting is provided (None)."""
with (
patch(
"supervisor.docker.network.DockerNetwork.docker",
new_callable=PropertyMock,
return_value=MagicMock(),
create=True,
),
patch(
"supervisor.docker.network.DockerNetwork.docker.networks",
new_callable=PropertyMock,
return_value=MagicMock(),
create=True,
),
patch(
"supervisor.docker.network.DockerNetwork.docker.networks.get",
side_effect=docker.errors.NotFound("Network not found"),
),
patch(
"supervisor.docker.network.DockerNetwork.docker.networks.create",
return_value=MockNetwork(False, None, True),
) as mock_create,
):
# Pass None as enable_ipv6 to simulate no user setting
network = (await DockerNetwork(MagicMock()).post_init(None)).network
assert network is not None
assert network.attrs.get(DOCKER_ENABLEIPV6) is True
# Verify that create was called with IPv6 enabled by default
expected_params = DOCKER_NETWORK_PARAMS.copy()
expected_params[ATTR_ENABLE_IPV6] = True
mock_create.assert_called_with(**expected_params)

View File

@ -200,8 +200,7 @@ async def test_start(
coresys.docker.containers.get.return_value.stop.assert_not_called()
if container_exists:
coresys.docker.containers.get.return_value.remove.assert_called_once_with(
force=True,
v=True,
force=True
)
else:
coresys.docker.containers.get.return_value.remove.assert_not_called()
@ -398,7 +397,7 @@ async def test_core_loads_wrong_image_for_machine(
await coresys.homeassistant.core.load()
container.remove.assert_called_once_with(force=True, v=True)
container.remove.assert_called_once_with(force=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
"force": True,
@ -445,7 +444,7 @@ async def test_core_loads_wrong_image_for_architecture(
await coresys.homeassistant.core.load()
container.remove.assert_called_once_with(force=True, v=True)
container.remove.assert_called_once_with(force=True)
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
"force": True,

View File

@ -2,9 +2,8 @@
# pylint: disable=protected-access
import asyncio
from unittest.mock import PropertyMock, patch
from unittest.mock import AsyncMock, PropertyMock, patch
from dbus_fast import Variant
import pytest
from supervisor.coresys import CoreSys
@ -88,47 +87,23 @@ async def test_connectivity_events(coresys: CoreSys, force: bool):
)
async def test_dns_configuration_change_triggers_notify_locals_changed(
coresys: CoreSys, dns_manager_service
async def test_dns_restart_on_connection_change(
coresys: CoreSys, network_manager_service: NetworkManagerService
):
"""Test that DNS configuration changes trigger notify_locals_changed."""
"""Test dns plugin is restarted when primary connection changes."""
await coresys.host.network.load()
with (
patch.object(PluginDns, "restart") as restart,
patch.object(
PluginDns, "is_running", new_callable=AsyncMock, return_value=True
),
):
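# Losing the primary connection ("/") should not restart the DNS plugin.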
network_manager_service.emit_properties_changed({"PrimaryConnection": "/"})
await network_manager_service.ping()
restart.assert_not_called()
with patch.object(PluginDns, "notify_locals_changed") as notify_locals_changed:
# Test that non-Configuration changes don't trigger notify_locals_changed
dns_manager_service.emit_properties_changed({"Mode": "default"})
await dns_manager_service.ping()
notify_locals_changed.assert_not_called()
# Test that Configuration changes trigger notify_locals_changed
configuration = [
{
"nameservers": Variant("as", ["192.168.2.2"]),
"domains": Variant("as", ["lan"]),
"interface": Variant("s", "eth0"),
"priority": Variant("i", 100),
"vpn": Variant("b", False),
}
]
dns_manager_service.emit_properties_changed({"Configuration": configuration})
await dns_manager_service.ping()
notify_locals_changed.assert_called_once()
notify_locals_changed.reset_mock()
# Test that subsequent Configuration changes also trigger notify_locals_changed
different_configuration = [
{
"nameservers": Variant("as", ["8.8.8.8"]),
"domains": Variant("as", ["example.com"]),
"interface": Variant("s", "wlan0"),
"priority": Variant("i", 200),
"vpn": Variant("b", True),
}
]
dns_manager_service.emit_properties_changed(
{"Configuration": different_configuration}
network_manager_service.emit_properties_changed(
{"PrimaryConnection": "/org/freedesktop/NetworkManager/ActiveConnection/2"}
)
await dns_manager_service.ping()
notify_locals_changed.assert_called_once()
await network_manager_service.ping()
restart.assert_called_once()

View File

@ -20,7 +20,7 @@ from supervisor.exceptions import (
from supervisor.host.const import HostFeature
from supervisor.host.manager import HostManager
from supervisor.jobs import JobSchedulerOptions, SupervisorJob
from supervisor.jobs.const import JobConcurrency, JobExecutionLimit, JobThrottle
from supervisor.jobs.const import JobExecutionLimit
from supervisor.jobs.decorator import Job, JobCondition
from supervisor.jobs.job_group import JobGroup
from supervisor.os.manager import OSManager
@ -1212,93 +1212,3 @@ async def test_job_scheduled_at(coresys: CoreSys):
assert job.name == "test_job_scheduled_at_job_task"
assert job.stage == "work"
assert job.parent_id is None
async def test_concurency_reject_and_throttle(coresys: CoreSys):
"""Test the concurrency rejct and throttle job execution limit."""
class TestClass:
"""Test class."""
def __init__(self, coresys: CoreSys):
"""Initialize the test class."""
self.coresys = coresys
self.run = asyncio.Lock()
self.call = 0
@Job(
name="test_concurency_reject_and_throttle_execute",
concurrency=JobConcurrency.REJECT,
throttle=JobThrottle.THROTTLE,
throttle_period=timedelta(hours=1),
)
async def execute(self, sleep: float):
"""Execute the class method."""
assert not self.run.locked()
async with self.run:
await asyncio.sleep(sleep)
self.call += 1
test = TestClass(coresys)
results = await asyncio.gather(
*[test.execute(0.1), test.execute(0.1), test.execute(0.1)],
return_exceptions=True,
)
assert results[0] is None
assert isinstance(results[1], JobException)
assert isinstance(results[2], JobException)
assert test.call == 1
await asyncio.gather(*[test.execute(0.1)])
assert test.call == 1
@pytest.mark.parametrize("error", [None, PluginJobError])
async def test_concurency_reject_and_rate_limit(
coresys: CoreSys, error: JobException | None
):
"""Test the concurrency rejct and rate limit job execution limit."""
class TestClass:
"""Test class."""
def __init__(self, coresys: CoreSys):
"""Initialize the test class."""
self.coresys = coresys
self.run = asyncio.Lock()
self.call = 0
@Job(
name=f"test_concurency_reject_and_rate_limit_execute_{uuid4().hex}",
concurrency=JobConcurrency.REJECT,
throttle=JobThrottle.RATE_LIMIT,
throttle_period=timedelta(hours=1),
throttle_max_calls=1,
on_condition=error,
)
async def execute(self, sleep: float = 0):
"""Execute the class method."""
async with self.run:
await asyncio.sleep(sleep)
self.call += 1
test = TestClass(coresys)
results = await asyncio.gather(
*[test.execute(0.1), test.execute(), test.execute()], return_exceptions=True
)
assert results[0] is None
assert isinstance(results[1], JobException)
assert isinstance(results[2], JobException)
assert test.call == 1
with pytest.raises(JobException if error is None else error):
await test.execute()
assert test.call == 1
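# Once the one-hour throttle period elapses, calls are accepted again.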
with time_machine.travel(utcnow() + timedelta(hours=1)):
await test.execute()
assert test.call == 2

Some files were not shown because too many files have changed in this diff.