Mirror of https://github.com/home-assistant/supervisor.git, synced 2025-07-29 20:16:31 +00:00.
Compare commits (49 commits):

deac85bddb, 7dcf5ba631, a004830131, a8cc6c416d, 74b26642b0, 5e26ab5f4a, a841cb8282, 3b1b03c8a7, 680428f304, f34128c37e, 2ed0682b34, fbb0915ef8, 780ae1e15c, c617358855, b679c4f4d8, c946c421f2, aeabf7ea25, 365b838abf, 99c040520e, eefe2f2e06, a366e36b37, 27a2fde9e1, 9a0f530a2f, baf9695cf7, 7873c457d5, cbc48c381f, 11e37011bd, cfda559a90, 806bd9f52c, 953f7d01d7, 381e719a0e, 296071067d, 8336537f51, 5c90a00263, 1f2bf77784, 9aa4f381b8, ae036ceffe, f0ea0d4a44, abc44946bb, 3e20a0937d, 6cebf52249, bc57deb474, 38750d74a8, d1c1a2d418, cf32f036c0, b8852872fe, 779f47e25d, be8b36b560, 8378d434d4
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 6 changes)

```diff
@@ -8,7 +8,7 @@ body:
         If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].

-        [fr]: https://community.home-assistant.io/c/feature-requests
+        [fr]: https://github.com/orgs/home-assistant/discussions
   - type: textarea
     validations:
       required: true
@@ -75,7 +75,7 @@ body:
       description: >
         The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
         Click the copy button at the bottom of the pop-up and paste it here.

         [](https://my.home-assistant.io/redirect/system_health/)
   - type: textarea
     attributes:
@@ -85,7 +85,7 @@ body:
         Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
         Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
         and select 'Download diagnostics'.

         **Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
   - type: textarea
     attributes:
```
.github/ISSUE_TEMPLATE/task.yml (vendored, new file, 53 lines)

@@ -0,0 +1,53 @@
```yaml
name: Task
description: For staff only - Create a task
type: Task
body:
  - type: markdown
    attributes:
      value: |
        ## ⚠️ RESTRICTED ACCESS

        **This form is restricted to Open Home Foundation staff and authorized contributors only.**

        If you are a community member wanting to contribute, please:
        - For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
        - For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)

        ---

        ### For authorized contributors

        Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        Provide a clear and detailed description of the task that needs to be accomplished.

        Be specific about what needs to be done, why it's important, and any constraints or requirements.
      placeholder: |
        Describe the task, including:
        - What needs to be done
        - Why this task is needed
        - Expected outcome
        - Any constraints or requirements
    validations:
      required: true
  - type: textarea
    id: additional_context
    attributes:
      label: Additional context
      description: |
        Any additional information, links, research, or context that would be helpful.

        Include links to related issues, research, prototypes, roadmap opportunities etc.
      placeholder: |
        - Roadmap opportunity: [link]
        - Epic: [link]
        - Feature request: [link]
        - Technical design documents: [link]
        - Prototype/mockup: [link]
        - Dependencies: [links]
    validations:
      required: false
```
.github/copilot-instructions.md (vendored, new file, 288 lines)

@@ -0,0 +1,288 @@

# GitHub Copilot & Claude Code Instructions

This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.

## Supervisor Capabilities & Features

### Architecture Overview

Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.

**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
  container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
  container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
  connectivity
- **Storage & Backup**: Data persistence and backup management across all containers

**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management
### Add-on System

**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user-configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. Otherwise, Supervisor builds the container
image from a Dockerfile.

**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development

**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.

**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
  metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management
### Update System

**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.

**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.

**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.
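The channel-to-URL mapping described above is simple enough to sketch. A minimal illustration, assuming nothing beyond the URL pattern quoted in the text; the sample payload keys below are invented for the example, and the real `Updater` additionally validates a signature before trusting the fetched data:

```python
import json

def version_url(channel: str) -> str:
    """Map an update channel to its version-metadata URL."""
    if channel not in ("stable", "beta", "dev"):
        raise ValueError(f"Unknown channel: {channel}")
    return f"https://version.home-assistant.io/{channel}.json"

# Illustrative payload shape only; the real file carries more fields.
sample = json.loads('{"channel": "stable", "supervisor": "2025.07.1"}')

print(version_url("beta"))
print(sample["supervisor"])
```

Switching channels therefore changes only which JSON document is fetched; the parsing and version-comparison logic stays the same.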
### Backup & Recovery System

**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
  configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
  add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations

**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues

---
## Supervisor Development

### Python Requirements

- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
  - Type hints with `typing` module
  - f-strings (preferred over `%` or `.format()`)
  - Dataclasses and enum classes
  - Async/await patterns
  - Pattern matching where appropriate

### Code Quality Standards

- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation
### Code Organization

**Core Structure**:
```
supervisor/
├── __init__.py       # Package initialization
├── const.py          # Constants and enums
├── coresys.py        # Core system management
├── bootstrap.py      # System initialization
├── exceptions.py     # Custom exception classes
├── api/              # REST API endpoints
├── addons/           # Add-on management
├── backups/          # Backup system
├── docker/           # Docker integration
├── host/             # Host system interface
├── homeassistant/    # Home Assistant Core management
├── dbus/             # D-Bus system integration
├── hardware/         # Hardware detection and management
├── plugins/          # Plugin system
├── resolution/       # Issue detection and resolution
├── security/         # Security management
├── services/         # Service discovery and management
├── store/            # Add-on store management
└── utils/            # Utility functions
```

**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.
### Supervisor Architecture Patterns

**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.

```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    def __init__(self, coresys: CoreSys):
        """Initialize manager."""
        self.coresys: CoreSys = coresys
        self._component: MyComponent = MyComponent(coresys)

    @property
    def component(self) -> MyComponent:
        """Return component handler."""
        return self._component

    # Access system components via inherited properties
    async def do_something(self):
        await self.sys_docker.containers.get("my_container")
        self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```

**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface

**Load Pattern**: Many components implement a `load()` method that initializes the
component from external sources (containers, files, D-Bus services).
### API Development

**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
  `{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`

**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.

```python
from ..api.utils import api_process, api_validate

@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
    """Create full backup."""
    body = await api_validate(SCHEMA_BACKUP_FULL, request)
    job = await self.sys_backups.do_backup_full(**body)
    return {ATTR_JOB_ID: job.uuid}
```
### Docker Integration

- **Container Management**: Use Supervisor's Docker manager instead of direct
  Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
  ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
  isolation
- **Health Checks**: Implement health monitoring for all managed containers

### D-Bus Integration

- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions

### Async Programming

- **All I/O operations must be async**: File operations, network calls, subprocess
  execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
  initialization
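The executor and `asyncio.gather()` rules above can be sketched together with only the standard library. The names here are invented for illustration; inside Supervisor the equivalent helper is `self.sys_run_in_executor()`:

```python
import asyncio
import hashlib

def checksum(payload: bytes) -> str:
    """Blocking/CPU-bound work: would stall the event loop if run inline."""
    return hashlib.sha256(payload).hexdigest()

async def process(payloads: list[bytes]) -> list[str]:
    loop = asyncio.get_running_loop()
    # One executor job per payload, gathered concurrently rather than
    # awaited one after another.
    return list(await asyncio.gather(
        *(loop.run_in_executor(None, checksum, p) for p in payloads)
    ))

digests = asyncio.run(process([b"a", b"b"]))
print(digests)
```

The same shape applies to file reads or subprocess calls: push the blocking call into an executor job and gather the resulting awaitables.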
### Testing

- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code

### Error Handling

- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
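The exception-chaining convention can be sketched as follows; the class names mirror the Supervisor convention, but the snippet itself is illustrative, not the real `exceptions.py`:

```python
class HassioError(Exception):
    """Root exception (name follows the Supervisor convention)."""

class BackupError(HassioError):
    """Raised when a backup operation fails."""

def read_backup(path: str) -> bytes:
    """Re-raise low-level errors via `from` so the cause is preserved."""
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except OSError as err:
        raise BackupError(f"Can't read backup {path}") from err

try:
    read_backup("/nonexistent/backup.tar")
except BackupError as err:
    # __cause__ carries the original OSError for logs and debugging
    print(type(err.__cause__).__name__)
```

Chaining with `from` keeps the original traceback attached to the domain-specific exception instead of discarding it.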
### Security Considerations

- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
  capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
  input validation

### Development Commands

```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/

# Linting and formatting
ruff check supervisor/
ruff format supervisor/

# Type checking
mypy --ignore-missing-imports supervisor/

# Pre-commit hooks
pre-commit run --all-files
```

Always run the pre-commit hooks at the end of code editing.

### Common Patterns to Follow

**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)

**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it

This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.
.github/workflows/builder.yml (vendored, 2 changes)

```diff
@@ -131,7 +131,7 @@ jobs:

       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.9.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
```
.github/workflows/ci.yaml (vendored, 2 changes)

```diff
@@ -346,7 +346,7 @@ jobs:
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.9.1
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
       - name: Restore Python virtual environment
```
.github/workflows/restrict-task-creation.yml (vendored, new file, 58 lines)

@@ -0,0 +1,58 @@
```yaml
name: Restrict task creation

# yamllint disable-line rule:truthy
on:
  issues:
    types: [opened]

jobs:
  check-authorization:
    runs-on: ubuntu-latest
    # Only run if this is a Task issue type (from the issue form)
    if: github.event.issue.issue_type == 'Task'
    steps:
      - name: Check if user is authorized
        uses: actions/github-script@v7
        with:
          script: |
            const issueAuthor = context.payload.issue.user.login;

            // Check if user is an organization member
            try {
              await github.rest.orgs.checkMembershipForUser({
                org: 'home-assistant',
                username: issueAuthor
              });
              console.log(`✅ ${issueAuthor} is an organization member`);
              return; // Authorized
            } catch (error) {
              console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
            }

            // Close the issue with a comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
                `Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
                `If you would like to:\n` +
                `- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
                `- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
                `If you believe you should have access to create Task issues, please contact the maintainers.`
            });

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              state: 'closed'
            });

            // Add a label to indicate this was auto-closed
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['auto-closed']
            });
```
```diff
@@ -1,30 +1,30 @@
 aiodns==3.5.0
-aiohttp==3.12.13
+aiohttp==3.12.14
 atomicwrites-homeassistant==1.4.1
 attrs==25.3.0
 awesomeversion==25.5.0
-blockbuster==1.5.24
+blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.2
 colorlog==6.9.0
 cpe==1.3.1
-cryptography==45.0.4
-debugpy==1.8.14
+cryptography==45.0.5
+debugpy==1.8.15
 deepmerge==2.0
 dirhash==0.5.0
 docker==7.1.0
 faust-cchardet==2.1.19
-gitpython==3.1.44
+gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.10.18
+orjson==3.11.1
 pulsectl==24.12.0
 pyudev==0.24.3
 PyYAML==6.0.2
 requests==2.32.4
 securetar==2025.2.1
-sentry-sdk==2.30.0
+sentry-sdk==2.33.2
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.1
+dbus-fast==2.44.2
 zlib-fast==0.2.1
```
```diff
@@ -1,6 +1,6 @@
-astroid==3.3.10
-coverage==7.9.1
-mypy==1.16.1
+astroid==3.3.11
+coverage==7.10.1
+mypy==1.17.0
 pre-commit==4.2.0
 pylint==3.3.7
 pytest-aiohttp==1.1.0
@@ -8,9 +8,9 @@ pytest-asyncio==0.25.2
 pytest-cov==6.2.1
 pytest-timeout==2.4.0
 pytest==8.4.1
-ruff==0.12.0
+ruff==0.12.5
 time-machine==2.16.0
-types-docker==7.1.0.20250523
+types-docker==7.1.0.20250705
 types-pyyaml==6.0.12.20250516
 types-requests==2.32.4.20250611
 urllib3==2.5.0
```
```diff
@@ -15,6 +15,7 @@ from ..const import (
     ATTR_SQUASH,
     FILE_SUFFIX_CONFIGURATION,
     META_ADDON,
+    SOCKET_DOCKER,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..docker.interface import MAP_ARCH
@@ -121,39 +122,64 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
         except HassioArchNotFound:
             return False

-    def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
-        """Create a dict with Docker build arguments.
+    def get_docker_args(
+        self, version: AwesomeVersion, image_tag: str
+    ) -> dict[str, Any]:
+        """Create a dict with Docker run args."""
+        dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)

-        Must be run in executor.
-        """
-        args: dict[str, Any] = {
-            "path": str(self.addon.path_location),
-            "tag": f"{image or self.addon.image}:{version!s}",
-            "dockerfile": str(self.get_dockerfile()),
-            "pull": True,
-            "forcerm": not self.sys_dev,
-            "squash": self.squash,
-            "platform": MAP_ARCH[self.arch],
-            "labels": {
-                "io.hass.version": version,
-                "io.hass.arch": self.arch,
-                "io.hass.type": META_ADDON,
-                "io.hass.name": self._fix_label("name"),
-                "io.hass.description": self._fix_label("description"),
-                **self.additional_labels,
-            },
-            "buildargs": {
-                "BUILD_FROM": self.base_image,
-                "BUILD_VERSION": version,
-                "BUILD_ARCH": self.sys_arch.default,
-                **self.additional_args,
-            },
+        build_cmd = [
+            "docker",
+            "buildx",
+            "build",
+            ".",
+            "--tag",
+            image_tag,
+            "--file",
+            str(dockerfile_path),
+            "--platform",
+            MAP_ARCH[self.arch],
+            "--pull",
+        ]
+
+        labels = {
+            "io.hass.version": version,
+            "io.hass.arch": self.arch,
+            "io.hass.type": META_ADDON,
+            "io.hass.name": self._fix_label("name"),
+            "io.hass.description": self._fix_label("description"),
+            **self.additional_labels,
         }

         if self.addon.url:
-            args["labels"]["io.hass.url"] = self.addon.url
+            labels["io.hass.url"] = self.addon.url

-        return args
+        for key, value in labels.items():
+            build_cmd.extend(["--label", f"{key}={value}"])
+
+        build_args = {
+            "BUILD_FROM": self.base_image,
+            "BUILD_VERSION": version,
+            "BUILD_ARCH": self.sys_arch.default,
+            **self.additional_args,
+        }
+
+        for key, value in build_args.items():
+            build_cmd.extend(["--build-arg", f"{key}={value}"])
+
+        # The addon path will be mounted from the host system
+        addon_extern_path = self.sys_config.local_to_extern_path(
+            self.addon.path_location
+        )
+
+        return {
+            "command": build_cmd,
+            "volumes": {
+                SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
+                addon_extern_path: {"bind": "/addon", "mode": "ro"},
+            },
+            "working_dir": "/addon",
+        }

     def _fix_label(self, label_name: str) -> str:
         """Remove characters they are not supported."""
```
```diff
@@ -266,7 +266,7 @@ class AddonManager(CoreSysAttributes):
         ],
         on_condition=AddonsJobError,
     )
-    async def rebuild(self, slug: str) -> asyncio.Task | None:
+    async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
         """Perform a rebuild of local build add-on.

         Returns a Task that completes when addon has state 'started' (see addon.start)
@@ -289,7 +289,7 @@ class AddonManager(CoreSysAttributes):
             raise AddonsError(
                 "Version changed, use Update instead Rebuild", _LOGGER.error
             )
-        if not addon.need_build:
+        if not force and not addon.need_build:
             raise AddonsNotSupportedError(
                 "Can't rebuild a image based add-on", _LOGGER.error
             )
```
```diff
@@ -36,6 +36,7 @@ from ..const import (
     ATTR_DNS,
     ATTR_DOCKER_API,
     ATTR_DOCUMENTATION,
+    ATTR_FORCE,
     ATTR_FULL_ACCESS,
     ATTR_GPIO,
     ATTR_HASSIO_API,
@@ -139,6 +140,8 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
 SCHEMA_UNINSTALL = vol.Schema(
     {vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
 )
+
+SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
 # pylint: enable=no-value-for-parameter


@@ -461,7 +464,11 @@ class APIAddons(CoreSysAttributes):
     async def rebuild(self, request: web.Request) -> None:
         """Rebuild local build add-on."""
         addon = self.get_addon_for_request(request)
-        if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
+        body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
+
+        if start_task := await asyncio.shield(
+            self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
+        ):
             await start_task

     @api_process
```
```diff
@@ -92,13 +92,18 @@ class APIAuth(CoreSysAttributes):
         # Json
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
             data = await request.json(loads=json_loads)
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

         # URL encoded
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
             data = await request.post()
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

         # Advertise Basic authentication by default
         raise HTTPUnauthorized(headers=REALM_HEADER)

     @api_process
```
```diff
@@ -6,6 +6,8 @@ from typing import Any
 from aiohttp import web
 import voluptuous as vol

+from supervisor.resolution.const import ContextType, IssueType, SuggestionType
+
 from ..const import (
     ATTR_ENABLE_IPV6,
     ATTR_HOSTNAME,
@@ -32,7 +34,7 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
 )

 # pylint: disable=no-value-for-parameter
-SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})
+SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})


 class APIDocker(CoreSysAttributes):
@@ -59,8 +61,17 @@ class APIDocker(CoreSysAttributes):
         """Set docker options."""
         body = await api_validate(SCHEMA_OPTIONS, request)

-        if ATTR_ENABLE_IPV6 in body:
+        if (
+            ATTR_ENABLE_IPV6 in body
+            and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
+        ):
             self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
+            _LOGGER.info("Host system reboot required to apply new IPv6 configuration")
+            self.sys_resolution.create_issue(
+                IssueType.REBOOT_REQUIRED,
+                ContextType.SYSTEM,
+                suggestions=[SuggestionType.EXECUTE_REBOOT],
+            )

         await self.sys_docker.config.save_data()
```
```diff
@@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):

 def _init_header(
     request: web.Request, addon: Addon, session_data: IngressSessionData | None
-) -> CIMultiDict | dict[str, str]:
+) -> CIMultiDict[str]:
     """Create initial header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     if session_data is not None:
         headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@@ -337,7 +337,7 @@ def _init_header(
             istr(HEADER_REMOTE_USER_DISPLAY_NAME),
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     # Update X-Forwarded-For
     if request.transport:
@@ -348,9 +348,9 @@ def _init_header(
     return headers


-def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
+def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
     """Create response header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     for name, value in response.headers.items():
         if name in (
@@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
             hdrs.CONTENT_ENCODING,
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     return headers
```
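The switch from item assignment to `.add()` in the hunk above matters because HTTP permits repeated headers (e.g. `Set-Cookie`), and dict-style assignment keeps only one value per name. A tiny stand-in class, an assumption for illustration only (the real code uses `CIMultiDict` from the `multidict` package), shows the difference:

```python
class TinyMultiDict:
    """Minimal, hypothetical stand-in for multidict's CIMultiDict."""

    def __init__(self):
        self._items: list[tuple[str, str]] = []

    def __setitem__(self, key: str, value: str) -> None:
        # Like CIMultiDict assignment: replace all existing values for key
        self._items = [(k, v) for k, v in self._items if k.lower() != key.lower()]
        self._items.append((key, value))

    def add(self, key: str, value: str) -> None:
        # Like CIMultiDict.add(): keep existing values, append another entry
        self._items.append((key, value))

    def getall(self, key: str) -> list[str]:
        return [v for k, v in self._items if k.lower() == key.lower()]

h = TinyMultiDict()
h["Set-Cookie"] = "a=1"
h["Set-Cookie"] = "b=2"      # assignment: only the last value survives
h.add("Set-Cookie", "c=3")   # add: previous value is preserved too
print(h.getall("set-cookie"))
```

With a plain `dict` (or multidict assignment) a proxied response carrying two `Set-Cookie` headers would silently lose one; `.add()` forwards both.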
```diff
@@ -63,6 +63,8 @@ from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
 from .utils import password_to_key
 from .validate import SCHEMA_BACKUP

+IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
+
 _LOGGER: logging.Logger = logging.getLogger(__name__)


@@ -265,7 +267,7 @@ class Backup(JobGroup):

         # Compare all fields except ones about protection. Current encryption status does not affect equality
         keys = self._data.keys() | other._data.keys()
-        for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}:
+        for k in keys - IGNORED_COMPARISON_FIELDS:
             if (
                 k not in self._data
                 or k not in other._data
@@ -577,13 +579,21 @@ class Backup(JobGroup):
     @Job(name="backup_addon_save", cleanup=False)
     async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
         """Store an add-on into backup."""
-        self.sys_jobs.current.reference = addon.slug
+        self.sys_jobs.current.reference = slug = addon.slug
         if not self._outer_secure_tarfile:
             raise RuntimeError(
                 "Cannot backup components without initializing backup tar"
             )

-        tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
+        # Ensure it is still installed and get current data before proceeding
+        if not (curr_addon := self.sys_addons.get_local_only(slug)):
+            _LOGGER.warning(
+                "Skipping backup of add-on %s because it has been uninstalled",
+                slug,
+            )
+            return None
+
+        tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"

         addon_file = self._outer_secure_tarfile.create_inner_tar(
             f"./{tar_name}",
@@ -592,16 +602,16 @@ class Backup(JobGroup):
         )
         # Take backup
         try:
-            start_task = await addon.backup(addon_file)
+            start_task = await curr_addon.backup(addon_file)
         except AddonsError as err:
             raise BackupError(str(err)) from err

         # Store to config
         self._data[ATTR_ADDONS].append(
             {
-                ATTR_SLUG: addon.slug,
-                ATTR_NAME: addon.name,
-                ATTR_VERSION: addon.version,
+                ATTR_SLUG: slug,
+                ATTR_NAME: curr_addon.name,
+                ATTR_VERSION: curr_addon.version,
                 # Bug - addon_file.size used to give us this information
                 # It always returns 0 in current securetar. Skipping until fixed
                 ATTR_SIZE: 0,
@@ -921,5 +931,5 @@ class Backup(JobGroup):
         Return a coroutine.
         """
         return self.sys_store.update_repositories(
-            self.repositories, add_with_errors=True, replace=replace
+            set(self.repositories), issue_on_error=True, replace=replace
         )
```
|
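The extracted `IGNORED_COMPARISON_FIELDS` constant drives backup equality: two backups compare equal when all metadata matches except the encryption-related fields. A simplified sketch of that rule (field names shortened, plain dicts instead of backup objects):

```python
# Illustrative simplification of the equality rule in Backup.__eq__
IGNORED_COMPARISON_FIELDS = {"protected", "crypto", "docker"}


def backups_equal(a: dict, b: dict) -> bool:
    """Compare two backup metadata dicts, ignoring encryption-related fields."""
    keys = a.keys() | b.keys()
    for k in keys - IGNORED_COMPARISON_FIELDS:
        # A key missing on either side, or a differing value, breaks equality
        if k not in a or k not in b or a[k] != b[k]:
            return False
    return True


plain = {"slug": "abc", "protected": False, "date": "2025-01-01"}
encrypted = {"slug": "abc", "protected": True, "date": "2025-01-01"}
assert backups_equal(plain, encrypted)  # differ only in an ignored field
assert not backups_equal(plain, {**plain, "date": "2025-02-02"})
```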
@@ -188,6 +188,7 @@ ATTR_FEATURES = "features"
 ATTR_FILENAME = "filename"
 ATTR_FLAGS = "flags"
 ATTR_FOLDERS = "folders"
 ATTR_FORCE = "force"
 ATTR_FORCE_SECURITY = "force_security"
+ATTR_FREQUENCY = "frequency"
 ATTR_FULL_ACCESS = "full_access"
@@ -124,7 +124,10 @@ class CoreSys:

         resolver: aiohttp.abc.AbstractResolver
         try:
-            resolver = aiohttp.AsyncResolver(loop=self.loop)
+            # Use "unused" kwargs to force dedicated resolver instance. Otherwise
+            # aiodns won't reload /etc/resolv.conf which we need to make our connection
+            # check work in all cases.
+            resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
             # pylint: disable=protected-access
             _LOGGER.debug(
                 "Initializing ClientSession with AsyncResolver. Using nameservers %s",
@@ -28,6 +28,8 @@ class DeviceSpecificationDataType(TypedDict, total=False):
     path: str
     label: str
     uuid: str
+    partuuid: str
+    partlabel: str


 @dataclass(slots=True)
@@ -40,6 +42,8 @@ class DeviceSpecification:
     path: Path | None = None
     label: str | None = None
     uuid: str | None = None
+    partuuid: str | None = None
+    partlabel: str | None = None

     @staticmethod
     def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":
@@ -48,6 +52,8 @@ class DeviceSpecification:
             path=Path(data["path"]) if "path" in data else None,
             label=data.get("label"),
             uuid=data.get("uuid"),
+            partuuid=data.get("partuuid"),
+            partlabel=data.get("partlabel"),
         )

     def to_dict(self) -> dict[str, Variant]:
@@ -56,6 +62,8 @@ class DeviceSpecification:
             "path": Variant("s", self.path.as_posix()) if self.path else None,
             "label": _optional_variant("s", self.label),
             "uuid": _optional_variant("s", self.uuid),
+            "partuuid": _optional_variant("s", self.partuuid),
+            "partlabel": _optional_variant("s", self.partlabel),
         }
         return {k: v for k, v in data.items() if v}
@@ -12,6 +12,7 @@ from typing import TYPE_CHECKING, cast
 from attr import evolve
 from awesomeversion import AwesomeVersion
 import docker
+import docker.errors
 from docker.types import Mount
 import requests

@@ -43,6 +44,7 @@ from ..jobs.decorator import Job
 from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
 from ..utils.sentry import async_capture_exception
 from .const import (
+    ADDON_BUILDER_IMAGE,
     ENV_TIME,
     ENV_TOKEN,
     ENV_TOKEN_OLD,
@@ -673,10 +675,41 @@ class DockerAddon(DockerInterface):
         _LOGGER.info("Starting build for %s:%s", self.image, version)

         def build_image():
-            return self.sys_docker.images.build(
-                use_config_proxy=False, **build_env.get_docker_args(version, image)
-            )
+            if build_env.squash:
+                _LOGGER.warning(
+                    "Ignoring squash build option for %s as Docker BuildKit does not support it.",
+                    self.addon.slug,
+                )
+
+            addon_image_tag = f"{image or self.addon.image}:{version!s}"
+
+            docker_version = self.sys_docker.info.version
+            builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
+
+            builder_name = f"addon_builder_{self.addon.slug}"
+
+            # Remove dangling builder container if it exists by any chance
+            # E.g. because of an abrupt host shutdown/reboot during a build
+            with suppress(docker.errors.NotFound):
+                self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
+
+            result = self.sys_docker.run_command(
+                ADDON_BUILDER_IMAGE,
+                version=builder_version_tag,
+                name=builder_name,
+                **build_env.get_docker_args(version, addon_image_tag),
+            )
+
+            logs = result.output.decode("utf-8")
+
+            if result.exit_code != 0:
+                error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
+                raise docker.errors.DockerException(error_message)
+
+            addon_image = self.sys_docker.images.get(addon_image_tag)
+
+            return addon_image, logs

         try:
             docker_image, log = await self.sys_run_in_executor(build_image)

@@ -687,15 +720,6 @@ class DockerAddon(DockerInterface):

         except (docker.errors.DockerException, requests.RequestException) as err:
             _LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
-            if hasattr(err, "build_log"):
-                log = "\n".join(
-                    [
-                        x["stream"]
-                        for x in err.build_log  # pylint: disable=no-member
-                        if isinstance(x, dict) and "stream" in x
-                    ]
-                )
-                _LOGGER.error("Build log: \n%s", log)
             raise DockerError() from err

         _LOGGER.info("Build %s:%s done", self.image, version)
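The new build path runs add-on builds inside a `docker` CLI container via `run_command` and raises with the full build log attached when the exit code is non-zero, instead of relying on `images.build()`. The same capture-and-raise pattern, sketched with `subprocess` standing in for the Docker API (the `run_build` helper is illustrative):

```python
import subprocess


def run_build(cmd: list[str]) -> str:
    """Run a build command and return its combined output.

    Mirrors the diff's pattern: capture the output, and when the exit code
    is non-zero, raise an error that carries the full log for diagnosis.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    logs = result.stdout + result.stderr
    if result.returncode != 0:
        raise RuntimeError(
            f"Build failed (exit code {result.returncode}). Build output:\n{logs}"
        )
    return logs


print(run_build(["sh", "-c", "echo building"]))
```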
@@ -107,3 +107,6 @@ PATH_BACKUP = PurePath("/backup")
 PATH_SHARE = PurePath("/share")
 PATH_MEDIA = PurePath("/media")
 PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
+
+# https://hub.docker.com/_/docker
+ADDON_BUILDER_IMAGE = "docker.io/library/docker"
@@ -213,9 +213,6 @@ class DockerHomeAssistant(DockerInterface):
             privileged=True,
             init=True,
             entrypoint=[],
-            detach=True,
-            stdout=True,
-            stderr=True,
             mounts=[
                 Mount(
                     type=MountType.BIND.value,
@@ -95,12 +95,12 @@ class DockerConfig(FileConfiguration):
         super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)

     @property
-    def enable_ipv6(self) -> bool:
+    def enable_ipv6(self) -> bool | None:
         """Return IPv6 configuration for docker network."""
-        return self._data.get(ATTR_ENABLE_IPV6, False)
+        return self._data.get(ATTR_ENABLE_IPV6, None)

     @enable_ipv6.setter
-    def enable_ipv6(self, value: bool) -> None:
+    def enable_ipv6(self, value: bool | None) -> None:
         """Set IPv6 configuration for docker network."""
         self._data[ATTR_ENABLE_IPV6] = value

@@ -294,8 +294,8 @@ class DockerAPI:
     def run_command(
         self,
         image: str,
-        tag: str = "latest",
-        command: str | None = None,
+        version: str = "latest",
+        command: str | list[str] | None = None,
         **kwargs: Any,
     ) -> CommandReturn:
         """Create a temporary container and run command.
@@ -305,12 +305,15 @@ class DockerAPI:
         stdout = kwargs.get("stdout", True)
         stderr = kwargs.get("stderr", True)

-        _LOGGER.info("Runing command '%s' on %s", command, image)
+        image_with_tag = f"{image}:{version}"
+
+        _LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
         container = None
         try:
             container = self.docker.containers.run(
-                f"{image}:{tag}",
+                image_with_tag,
                 command=command,
                 detach=True,
                 network=self.network.name,
                 use_config_proxy=False,
                 **kwargs,
@@ -327,9 +330,9 @@ class DockerAPI:
         # cleanup container
         if container:
             with suppress(docker_errors.DockerException, requests.RequestException):
-                container.remove(force=True)
+                container.remove(force=True, v=True)

-        return CommandReturn(result.get("StatusCode"), output)
+        return CommandReturn(result["StatusCode"], output)

     def repair(self) -> None:
         """Repair local docker overlayfs2 issues."""
@@ -442,7 +445,7 @@ class DockerAPI:
         if remove_container:
             with suppress(DockerException, requests.RequestException):
                 _LOGGER.info("Cleaning %s application", name)
-                docker_container.remove(force=True)
+                docker_container.remove(force=True, v=True)

     def start_container(self, name: str) -> None:
         """Start Docker container."""
@@ -47,6 +47,8 @@ DOCKER_NETWORK_PARAMS = {
     "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
 }

+DOCKER_ENABLE_IPV6_DEFAULT = True
+

 class DockerNetwork:
     """Internal Supervisor Network.
@@ -59,7 +61,7 @@ class DockerNetwork:
         self.docker: docker.DockerClient = docker_client
         self._network: docker.models.networks.Network

-    async def post_init(self, enable_ipv6: bool = False) -> Self:
+    async def post_init(self, enable_ipv6: bool | None = None) -> Self:
         """Post init actions that must be done in event loop."""
         self._network = await asyncio.get_running_loop().run_in_executor(
             None, self._get_network, enable_ipv6
@@ -111,16 +113,24 @@ class DockerNetwork:
         """Return observer of the network."""
         return DOCKER_IPV4_NETWORK_MASK[6]

-    def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network:
+    def _get_network(
+        self, enable_ipv6: bool | None = None
+    ) -> docker.models.networks.Network:
         """Get supervisor network."""
         try:
             if network := self.docker.networks.get(DOCKER_NETWORK):
-                if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6:
+                current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
+                # If the network exists and we don't have an explicit setting,
+                # simply stick with what we have.
+                if enable_ipv6 is None or current_ipv6 == enable_ipv6:
                     return network

+                # We have an explicit setting which differs from the current state.
                 _LOGGER.info(
                     "Migrating Supervisor network to %s",
                     "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
                 )

                 if (containers := network.containers) and (
                     containers_all := all(
                         container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
@@ -134,6 +144,7 @@ class DockerNetwork:
                         requests.RequestException,
                     ):
                         network.disconnect(container, force=True)

                 if not containers or containers_all:
                     try:
                         network.remove()
@@ -151,7 +162,9 @@ class DockerNetwork:
             _LOGGER.info("Can't find Supervisor network, creating a new network")

         network_params = DOCKER_NETWORK_PARAMS.copy()
-        network_params[ATTR_ENABLE_IPV6] = enable_ipv6
+        network_params[ATTR_ENABLE_IPV6] = (
+            DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
+        )

         try:
             self._network = self.docker.networks.create(**network_params)  # type: ignore
@@ -8,11 +8,11 @@ from typing import Any
 from ..const import ATTR_HOST_INTERNET
 from ..coresys import CoreSys, CoreSysAttributes
 from ..dbus.const import (
+    DBUS_ATTR_CONFIGURATION,
     DBUS_ATTR_CONNECTION_ENABLED,
     DBUS_ATTR_CONNECTIVITY,
-    DBUS_ATTR_PRIMARY_CONNECTION,
+    DBUS_IFACE_DNS,
     DBUS_IFACE_NM,
-    DBUS_OBJECT_BASE,
     DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
     ConnectionStateType,
     ConnectivityState,
@@ -46,6 +46,8 @@ class NetworkManager(CoreSysAttributes):
         """Initialize system center handling."""
         self.coresys: CoreSys = coresys
         self._connectivity: bool | None = None
+        # No event need on initial change (NetworkManager initializes with empty list)
+        self._dns_configuration: list = []

     @property
     def connectivity(self) -> bool | None:
@@ -142,6 +144,10 @@ class NetworkManager(CoreSysAttributes):
             "properties_changed", self._check_connectivity_changed
         )

+        self.sys_dbus.network.dns.dbus.properties.on(
+            "properties_changed", self._check_dns_changed
+        )
+
     async def _check_connectivity_changed(
         self, interface: str, changed: dict[str, Any], invalidated: list[str]
     ):
@@ -152,16 +158,6 @@ class NetworkManager(CoreSysAttributes):
         connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
         connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)

-        # This potentially updated the DNS configuration. Make sure the DNS plug-in
-        # picks up the latest settings.
-        if (
-            DBUS_ATTR_PRIMARY_CONNECTION in changed
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION]
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
-            and await self.sys_plugins.dns.is_running()
-        ):
-            await self.sys_plugins.dns.restart()
-
         if (
             connectivity_check is True
             or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@@ -175,6 +171,20 @@ class NetworkManager(CoreSysAttributes):
         elif connectivity is not None:
             self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL

+    async def _check_dns_changed(
+        self, interface: str, changed: dict[str, Any], invalidated: list[str]
+    ):
+        """Check if DNS properties have changed."""
+        if interface != DBUS_IFACE_DNS:
+            return
+
+        if (
+            DBUS_ATTR_CONFIGURATION in changed
+            and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
+        ):
+            self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
+            self.sys_plugins.dns.notify_locals_changed()
+
     async def update(self, *, force_connectivity_check: bool = False):
         """Update properties over dbus."""
         _LOGGER.info("Updating local network information")
@@ -15,7 +15,8 @@ from awesomeversion import AwesomeVersion
 import jinja2
 import voluptuous as vol

-from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel
+from ..bus import EventListener
+from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
 from ..coresys import CoreSys
 from ..dbus.const import MulticastProtocolEnabled
 from ..docker.const import ContainerState
@@ -77,6 +78,12 @@ class PluginDns(PluginBase):

         self._hosts: list[HostEntry] = []
         self._loop: bool = False
+        self._cached_locals: list[str] | None = None
+
+        # Debouncing system for rapid local changes
+        self._locals_changed_handle: asyncio.TimerHandle | None = None
+        self._restart_after_locals_change_handle: asyncio.Task | None = None
+        self._connectivity_check_listener: EventListener | None = None

     @property
     def hosts(self) -> Path:
@@ -91,6 +98,12 @@ class PluginDns(PluginBase):
     @property
     def locals(self) -> list[str]:
         """Return list of local system DNS servers."""
+        if self._cached_locals is None:
+            self._cached_locals = self._compute_locals()
+        return self._cached_locals
+
+    def _compute_locals(self) -> list[str]:
+        """Compute list of local system DNS servers."""
         servers: list[str] = []
         for server in [
             f"dns://{server!s}" for server in self.sys_host.network.dns_servers
@@ -100,6 +113,52 @@ class PluginDns(PluginBase):

         return servers

+    async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
+        """Handle DNS container state change to running and trigger connectivity check."""
+        if event.name == self.instance.name and event.state == ContainerState.RUNNING:
+            # Wait before CoreDNS actually becomes available
+            await asyncio.sleep(5)
+
+            _LOGGER.debug("CoreDNS started, checking connectivity")
+            await self.sys_supervisor.check_connectivity()
+
+    async def _restart_dns_after_locals_change(self) -> None:
+        """Restart DNS after a debounced delay for local changes."""
+        old_locals = self._cached_locals
+        new_locals = self._compute_locals()
+        if old_locals == new_locals:
+            return
+
+        _LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
+        self._cached_locals = new_locals
+        if not await self.instance.is_running():
+            return
+
+        await self.restart()
+        self._restart_after_locals_change_handle = None
+
+    def _trigger_restart_dns_after_locals_change(self) -> None:
+        """Trigger a restart of DNS after local changes."""
+        # Cancel existing restart task if any
+        if self._restart_after_locals_change_handle:
+            self._restart_after_locals_change_handle.cancel()
+
+        self._restart_after_locals_change_handle = self.sys_create_task(
+            self._restart_dns_after_locals_change()
+        )
+        self._locals_changed_handle = None
+
+    def notify_locals_changed(self) -> None:
+        """Schedule a debounced DNS restart for local changes."""
+        # Cancel existing timer if any
+        if self._locals_changed_handle:
+            self._locals_changed_handle.cancel()
+
+        # Schedule new timer with 1 second delay
+        self._locals_changed_handle = self.sys_call_later(
+            1.0, self._trigger_restart_dns_after_locals_change
+        )
+
     @property
     def servers(self) -> list[str]:
         """Return list of DNS servers."""
@@ -188,6 +247,13 @@ class PluginDns(PluginBase):
             _LOGGER.error("Can't read hosts.tmpl: %s", err)

         await self._init_hosts()
+
+        # Register Docker event listener for connectivity checks
+        if not self._connectivity_check_listener:
+            self._connectivity_check_listener = self.sys_bus.register_event(
+                BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
+            )
+
         await super().load()

         # Update supervisor
@@ -243,6 +309,16 @@ class PluginDns(PluginBase):

     async def stop(self) -> None:
         """Stop CoreDNS."""
+        # Cancel any pending locals change timer
+        if self._locals_changed_handle:
+            self._locals_changed_handle.cancel()
+            self._locals_changed_handle = None
+
+        # Wait for any pending restart before stopping
+        if self._restart_after_locals_change_handle:
+            self._restart_after_locals_change_handle.cancel()
+            self._restart_after_locals_change_handle = None
+
         _LOGGER.info("Stopping CoreDNS plugin")
         try:
             await self.instance.stop()
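`notify_locals_changed` implements a classic debounce: each call cancels the pending `TimerHandle` and re-arms it, so a burst of DNS server changes collapses into a single restart after one quiet second. The core mechanism as a self-contained sketch (a generic `Debouncer`, not the actual plug-in code; the delay is shortened for the demo):

```python
import asyncio


class Debouncer:
    """Rapid notify() calls collapse into one action() run after a quiet period."""

    def __init__(self, delay: float, action) -> None:
        self._delay = delay
        self._action = action
        self._handle: asyncio.TimerHandle | None = None

    def notify(self) -> None:
        if self._handle:  # cancel the pending timer...
            self._handle.cancel()
        loop = asyncio.get_running_loop()
        self._handle = loop.call_later(self._delay, self._fire)  # ...and re-arm it

    def _fire(self) -> None:
        self._handle = None
        # call_later callbacks are sync, so schedule the async action as a task
        asyncio.get_running_loop().create_task(self._action())


async def main() -> int:
    runs = 0

    async def restart() -> None:
        nonlocal runs
        runs += 1

    debouncer = Debouncer(0.05, restart)
    for _ in range(10):  # ten rapid notifications...
        debouncer.notify()
    await asyncio.sleep(0.2)
    return runs  # ...cause a single restart


print(asyncio.run(main()))
```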
supervisor/resolution/checks/duplicate_os_installation.py (new file, 108 lines)
@@ -0,0 +1,108 @@
+"""Helpers to check for duplicate OS installations."""
+
+import logging
+
+from ...const import CoreState
+from ...coresys import CoreSys
+from ...dbus.udisks2.data import DeviceSpecification
+from ..const import ContextType, IssueType, UnhealthyReason
+from .base import CheckBase
+
+_LOGGER: logging.Logger = logging.getLogger(__name__)
+
+# Partition labels to check for duplicates (GPT-based installations)
+HAOS_PARTITIONS = [
+    "hassos-boot",
+    "hassos-kernel0",
+    "hassos-kernel1",
+    "hassos-system0",
+    "hassos-system1",
+]
+
+# Partition UUIDs to check for duplicates (MBR-based installations)
+HAOS_PARTITION_UUIDS = [
+    "48617373-01",  # hassos-boot
+    "48617373-05",  # hassos-kernel0
+    "48617373-06",  # hassos-system0
+    "48617373-07",  # hassos-kernel1
+    "48617373-08",  # hassos-system1
+]
+
+
+def _get_device_specifications():
+    """Generate DeviceSpecification objects for both GPT and MBR partitions."""
+    # GPT-based installations (partition labels)
+    for partition_label in HAOS_PARTITIONS:
+        yield (
+            DeviceSpecification(partlabel=partition_label),
+            "partition",
+            partition_label,
+        )
+
+    # MBR-based installations (partition UUIDs)
+    for partition_uuid in HAOS_PARTITION_UUIDS:
+        yield (
+            DeviceSpecification(partuuid=partition_uuid),
+            "partition UUID",
+            partition_uuid,
+        )
+
+
+def setup(coresys: CoreSys) -> CheckBase:
+    """Check setup function."""
+    return CheckDuplicateOSInstallation(coresys)
+
+
+class CheckDuplicateOSInstallation(CheckBase):
+    """CheckDuplicateOSInstallation class for check."""
+
+    async def run_check(self) -> None:
+        """Run check if not affected by issue."""
+        if not self.sys_os.available:
+            _LOGGER.debug(
+                "Skipping duplicate OS installation check, OS is not available"
+            )
+            return
+
+        for device_spec, spec_type, identifier in _get_device_specifications():
+            resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
+            if resolved and len(resolved) > 1:
+                _LOGGER.warning(
+                    "Found duplicate OS installation: %s %s exists on %d devices (%s)",
+                    identifier,
+                    spec_type,
+                    len(resolved),
+                    ", ".join(str(device.device) for device in resolved),
+                )
+                self.sys_resolution.add_unhealthy_reason(
+                    UnhealthyReason.DUPLICATE_OS_INSTALLATION
+                )
+                self.sys_resolution.create_issue(
+                    IssueType.DUPLICATE_OS_INSTALLATION,
+                    ContextType.SYSTEM,
+                )
+                return
+
+    async def approve_check(self, reference: str | None = None) -> bool:
+        """Approve check if it is affected by issue."""
+        # Check all partitions for duplicates since issue is created without reference
+        for device_spec, _, _ in _get_device_specifications():
+            resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
+            if resolved and len(resolved) > 1:
+                return True
+        return False
+
+    @property
+    def issue(self) -> IssueType:
+        """Return a IssueType enum."""
+        return IssueType.DUPLICATE_OS_INSTALLATION
+
+    @property
+    def context(self) -> ContextType:
+        """Return a ContextType enum."""
+        return ContextType.SYSTEM
+
+    @property
+    def states(self) -> list[CoreState]:
+        """Return a list of valid states when this check can run."""
+        return [CoreState.SETUP]
@@ -21,6 +21,9 @@ class CheckMultipleDataDisks(CheckBase):

     async def run_check(self) -> None:
         """Run check if not affected by issue."""
+        if not self.sys_os.available:
+            return
+
         for block_device in self.sys_dbus.udisks2.block_devices:
             if self._block_device_has_name_issue(block_device):
                 self.sys_resolution.create_issue(
@@ -64,10 +64,11 @@ class UnhealthyReason(StrEnum):
     """Reasons for unsupported status."""

     DOCKER = "docker"
+    DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
     OSERROR_BAD_MESSAGE = "oserror_bad_message"
     PRIVILEGED = "privileged"
-    SUPERVISOR = "supervisor"
     SETUP = "setup"
+    SUPERVISOR = "supervisor"
     UNTRUSTED = "untrusted"


@@ -83,6 +84,7 @@ class IssueType(StrEnum):
     DEVICE_ACCESS_MISSING = "device_access_missing"
     DISABLED_DATA_DISK = "disabled_data_disk"
     DNS_LOOP = "dns_loop"
+    DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
     DNS_SERVER_FAILED = "dns_server_failed"
     DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error"
     DOCKER_CONFIG = "docker_config"
@@ -5,6 +5,8 @@ import logging
 from docker.errors import DockerException
 from requests import RequestException

+from supervisor.docker.const import ADDON_BUILDER_IMAGE
+
 from ...const import CoreState
 from ...coresys import CoreSys
 from ..const import (
@@ -63,6 +65,7 @@ class EvaluateContainer(EvaluateBase):
             self.sys_supervisor.image or self.sys_supervisor.default_image,
             *(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image),
             *(addon.image for addon in self.sys_addons.installed if addon.image),
+            ADDON_BUILDER_IMAGE,
         }

     async def evaluate(self) -> bool:
@@ -3,6 +3,7 @@
 from abc import ABC, abstractmethod
 import logging

+from ...const import BusEvent
 from ...coresys import CoreSys, CoreSysAttributes
 from ...exceptions import ResolutionFixupError
 from ..const import ContextType, IssueType, SuggestionType
@@ -66,6 +67,11 @@ class FixupBase(ABC, CoreSysAttributes):
         """Return if a fixup can be apply as auto fix."""
         return False

+    @property
+    def bus_event(self) -> BusEvent | None:
+        """Return the BusEvent that triggers this fixup, or None if not event-based."""
+        return None
+
     @property
     def all_suggestions(self) -> list[Suggestion]:
         """List of all suggestions which when applied run this fixup."""
@@ -2,6 +2,7 @@

 import logging

+from ...const import BusEvent
 from ...coresys import CoreSys
 from ...exceptions import (
     ResolutionFixupError,
@@ -68,3 +69,8 @@ class FixupStoreExecuteReload(FixupBase):
     def auto(self) -> bool:
         """Return if a fixup can be apply as auto fix."""
         return True
+
+    @property
+    def bus_event(self) -> BusEvent | None:
+        """Return the BusEvent that triggers this fixup, or None if not event-based."""
+        return BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE
@@ -1,6 +1,5 @@
 """Helpers to check and fix issues with free space."""

-from functools import partial
 import logging

 from ...coresys import CoreSys
@@ -12,7 +11,6 @@ from ...exceptions import (
 )
 from ...jobs.const import JobCondition
 from ...jobs.decorator import Job
-from ...utils import remove_folder
 from ..const import ContextType, IssueType, SuggestionType
 from .base import FixupBase

@@ -44,15 +42,8 @@ class FixupStoreExecuteReset(FixupBase):
             _LOGGER.warning("Can't find store %s for fixup", reference)
             return

-        # Local add-ons are not a git repo, can't remove and re-pull
-        if repository.git:
-            await self.sys_run_in_executor(
-                partial(remove_folder, folder=repository.git.path, content_only=True)
-            )
-
         # Load data again
         try:
-            await repository.load()
+            await repository.reset()
         except StoreError:
             raise ResolutionFixupError() from None
@@ -5,6 +5,7 @@ from typing import Any

 import attr

+from ..bus import EventListener
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import ResolutionError, ResolutionNotFound
 from ..homeassistant.const import WSEvent
@@ -46,6 +47,9 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
         self._unsupported: list[UnsupportedReason] = []
         self._unhealthy: list[UnhealthyReason] = []

+        # Map suggestion UUID to event listeners (list)
+        self._suggestion_listeners: dict[str, list[EventListener]] = {}
+
     async def load_modules(self):
         """Load resolution evaluation, check and fixup modules."""

@@ -105,6 +109,19 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
         )
         self._suggestions.append(suggestion)

+        # Register event listeners if fixups have a bus_event
+        listeners: list[EventListener] = []
+        for fixup in self.fixup.fixes_for_suggestion(suggestion):
+            if fixup.auto and fixup.bus_event:
+
+                def event_callback(reference, fixup=fixup):
+                    return fixup(suggestion)
+
+                listener = self.sys_bus.register_event(fixup.bus_event, event_callback)
+                listeners.append(listener)
+        if listeners:
+            self._suggestion_listeners[suggestion.uuid] = listeners
+
         # Event on suggestion added to issue
         for issue in self.issues_for_suggestion(suggestion):
             self.sys_homeassistant.websocket.supervisor_event(
@@ -233,6 +250,11 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
         )
         self._suggestions.remove(suggestion)

+        # Remove event listeners if present
+        listeners = self._suggestion_listeners.pop(suggestion.uuid, [])
+        for listener in listeners:
+            self.sys_bus.remove_listener(listener)
+
         # Event on suggestion removed from issues
         for issue in self.issues_for_suggestion(suggestion):
             self.sys_homeassistant.websocket.supervisor_event(
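Worth noting in `add_suggestion` is the `fixup=fixup` default argument on `event_callback`. Python closures capture variables late, so without it every callback registered in the loop would invoke the last fixup. A minimal demonstration of the two binding behaviors:

```python
# Closures capture variables by reference: without a default argument,
# every callback created in a loop sees the *last* loop value.
fixups = ["fixup_a", "fixup_b", "fixup_c"]

late_bound = [lambda ref: fixup for fixup in fixups]
assert [cb(None) for cb in late_bound] == ["fixup_c", "fixup_c", "fixup_c"]

# The default argument evaluates at definition time, freezing each value:
early_bound = [lambda ref, fixup=fixup: fixup for fixup in fixups]
assert [cb(None) for cb in early_bound] == ["fixup_a", "fixup_b", "fixup_c"]
```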
@@ -4,7 +4,7 @@ import asyncio
 from collections.abc import Awaitable
 import logging
 
-from ..const import ATTR_REPOSITORIES, URL_HASSIO_ADDONS
+from ..const import ATTR_REPOSITORIES, REPOSITORY_CORE, URL_HASSIO_ADDONS
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
     StoreError,
@@ -18,14 +18,10 @@ from ..jobs.decorator import Job, JobCondition
 from ..resolution.const import ContextType, IssueType, SuggestionType
 from ..utils.common import FileConfiguration
 from .addon import AddonStore
-from .const import FILE_HASSIO_STORE, StoreType
+from .const import FILE_HASSIO_STORE, BuiltinRepository
 from .data import StoreData
 from .repository import Repository
-from .validate import (
-    BUILTIN_REPOSITORIES,
-    SCHEMA_STORE_FILE,
-    ensure_builtin_repositories,
-)
+from .validate import DEFAULT_REPOSITORIES, SCHEMA_STORE_FILE
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
 
@@ -56,7 +52,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
         return [
             repository.source
             for repository in self.all
-            if repository.type == StoreType.GIT
+            if repository.slug
+            not in {BuiltinRepository.LOCAL.value, BuiltinRepository.CORE.value}
         ]
 
     def get(self, slug: str) -> Repository:
@@ -65,20 +62,15 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             raise StoreNotFound()
         return self.repositories[slug]
 
     def get_from_url(self, url: str) -> Repository:
         """Return Repository with slug."""
         for repository in self.all:
             if repository.source != url:
                 continue
             return repository
         raise StoreNotFound()
 
     async def load(self) -> None:
-        """Start up add-on management."""
-        # Init custom repositories and load add-ons
-        await self.update_repositories(
-            self._data[ATTR_REPOSITORIES], add_with_errors=True
-        )
+        """Start up add-on store management."""
+        # Make sure the built-in repositories are all present
+        # This is especially important when adding new built-in repositories
+        # to make sure existing installations have them.
+        all_repositories: set[str] = (
+            set(self._data.get(ATTR_REPOSITORIES, [])) | DEFAULT_REPOSITORIES
+        )
+        await self.update_repositories(all_repositories, issue_on_error=True)
 
     @Job(
         name="store_manager_reload",
@@ -126,16 +118,16 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
     )
     async def add_repository(self, url: str, *, persist: bool = True) -> None:
         """Add a repository."""
-        await self._add_repository(url, persist=persist, add_with_errors=False)
+        await self._add_repository(url, persist=persist, issue_on_error=False)
 
     async def _add_repository(
-        self, url: str, *, persist: bool = True, add_with_errors: bool = False
+        self, url: str, *, persist: bool = True, issue_on_error: bool = False
     ) -> None:
         """Add a repository."""
         if url == URL_HASSIO_ADDONS:
-            url = StoreType.CORE
+            url = REPOSITORY_CORE
 
-        repository = Repository(self.coresys, url)
+        repository = Repository.create(self.coresys, url)
 
         if repository.slug in self.repositories:
             raise StoreError(f"Can't add {url}, already in the store", _LOGGER.error)
@@ -145,7 +137,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             await repository.load()
         except StoreGitCloneError as err:
             _LOGGER.error("Can't retrieve data from %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -158,7 +150,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
 
         except StoreGitError as err:
             _LOGGER.error("Can't load data from repository %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -171,7 +163,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
 
         except StoreJobError as err:
             _LOGGER.error("Can't add repository %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -183,8 +175,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
                 raise err
 
         else:
-            if not await self.sys_run_in_executor(repository.validate):
-                if add_with_errors:
+            if not await repository.validate():
+                if issue_on_error:
                     _LOGGER.error("%s is not a valid add-on repository", url)
                     self.sys_resolution.create_issue(
                         IssueType.CORRUPT_REPOSITORY,
@@ -213,7 +205,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
 
     async def remove_repository(self, repository: Repository, *, persist: bool = True):
         """Remove a repository."""
-        if repository.source in BUILTIN_REPOSITORIES:
+        if repository.is_builtin:
             raise StoreInvalidAddonRepo(
                 "Can't remove built-in repositories!", logger=_LOGGER.error
             )
@@ -234,40 +226,50 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
     @Job(name="store_manager_update_repositories")
     async def update_repositories(
         self,
-        list_repositories: list[str],
+        list_repositories: set[str],
         *,
-        add_with_errors: bool = False,
+        issue_on_error: bool = False,
         replace: bool = True,
     ):
-        """Add a new custom repository."""
-        new_rep = set(
-            ensure_builtin_repositories(list_repositories)
-            if replace
-            else list_repositories + self.repository_urls
-        )
-        old_rep = {repository.source for repository in self.all}
+        """Update repositories by adding new ones and removing stale ones."""
+        current_repositories = {repository.source for repository in self.all}
+
+        # Determine repositories to add
+        repositories_to_add = list_repositories - current_repositories
 
         # Add new repositories
         add_errors = await asyncio.gather(
             *[
-                self._add_repository(url, persist=False, add_with_errors=True)
-                if add_with_errors
+                # Use _add_repository to avoid JobCondition.SUPERVISOR_UPDATED
+                # to prevent proper loading of repositories on startup.
+                self._add_repository(url, persist=False, issue_on_error=True)
+                if issue_on_error
                 else self.add_repository(url, persist=False)
-                for url in new_rep - old_rep
+                for url in repositories_to_add
             ],
             return_exceptions=True,
         )
 
-        # Delete stale repositories
-        remove_errors = await asyncio.gather(
-            *[
-                self.remove_repository(self.get_from_url(url), persist=False)
-                for url in old_rep - new_rep - BUILTIN_REPOSITORIES
-            ],
-            return_exceptions=True,
-        )
+        remove_errors: list[BaseException | None] = []
+        if replace:
+            # Determine repositories to remove
+            repositories_to_remove: list[Repository] = [
+                repository
+                for repository in self.all
+                if repository.source not in list_repositories
+                and not repository.is_builtin
+            ]
 
-        # Always update data, even there are errors, some changes may have succeeded
+            # Remove repositories
+            remove_errors = await asyncio.gather(
+                *[
+                    self.remove_repository(repository, persist=False)
+                    for repository in repositories_to_remove
+                ],
+                return_exceptions=True,
+            )
+
+        # Always update data, even if there are errors, some changes may have succeeded
        await self.data.update()
        await self._read_addons()
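An aside on the new `update_repositories` flow: the add/remove decision reduces to plain set arithmetic over repository URLs. A minimal standalone sketch of that reconciliation (the function name and example URLs are illustrative, not the Supervisor API):

```python
# Sketch of the reconciliation performed by update_repositories:
# add what is requested but missing; when replace=True, drop what is
# configured but neither requested nor built-in.
DEFAULT_REPOSITORIES = {"core", "local"}  # stand-in for the built-in set


def plan_update(
    requested: set[str], current: set[str], replace: bool = True
) -> tuple[set[str], set[str]]:
    """Return (to_add, to_remove) for a repository reconciliation."""
    to_add = requested - current
    to_remove: set[str] = set()
    if replace:
        to_remove = {
            url
            for url in current
            if url not in requested and url not in DEFAULT_REPOSITORIES
        }
    return to_add, to_remove


to_add, to_remove = plan_update(
    requested={"core", "local", "https://example.com/custom-a"},
    current={"core", "local", "https://example.com/custom-b"},
)
print(sorted(to_add))  # ['https://example.com/custom-a']
print(sorted(to_remove))  # ['https://example.com/custom-b']
```

With `replace=False` the removal set stays empty, matching the `remove_errors: list[BaseException | None] = []` default in the method above.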
@@ -3,14 +3,35 @@
 from enum import StrEnum
 from pathlib import Path
 
-from ..const import SUPERVISOR_DATA
+from ..const import (
+    REPOSITORY_CORE,
+    REPOSITORY_LOCAL,
+    SUPERVISOR_DATA,
+    URL_HASSIO_ADDONS,
+)
 
 FILE_HASSIO_STORE = Path(SUPERVISOR_DATA, "store.json")
+"""Repository type definitions for the store."""
 
 
-class StoreType(StrEnum):
-    """Store Types."""
+class BuiltinRepository(StrEnum):
+    """All built-in repositories that come pre-configured."""
 
-    CORE = "core"
-    LOCAL = "local"
-    GIT = "git"
+    # Local repository (non-git, special handling)
+    LOCAL = REPOSITORY_LOCAL
+
+    # Git-based built-in repositories
+    CORE = REPOSITORY_CORE
+    COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
+    ESPHOME = "https://github.com/esphome/home-assistant-addon"
+    MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
+
+    @property
+    def git_url(self) -> str:
+        """Return the git URL for this repository."""
+        if self == BuiltinRepository.LOCAL:
+            raise RuntimeError("Local repository does not have a git URL")
+        if self == BuiltinRepository.CORE:
+            return URL_HASSIO_ADDONS
+        else:
+            return self.value  # For URL-based repos, value is the URL
@@ -25,7 +25,6 @@ from ..exceptions import ConfigurationFileError
 from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from ..utils.common import find_one_filetype, read_json_or_yaml_file
 from ..utils.json import read_json_file
-from .const import StoreType
 from .utils import extract_hash_from_path
 from .validate import SCHEMA_REPOSITORY_CONFIG
 
@@ -169,7 +168,7 @@ class StoreData(CoreSysAttributes):
                 self.sys_resolution.add_unhealthy_reason(
                     UnhealthyReason.OSERROR_BAD_MESSAGE
                 )
-            elif path.stem != StoreType.LOCAL:
+            elif repository != REPOSITORY_LOCAL:
                 suggestion = [SuggestionType.EXECUTE_RESET]
                 self.sys_resolution.create_issue(
                     IssueType.CORRUPT_REPOSITORY,
@@ -1,19 +1,20 @@
 """Init file for Supervisor add-on Git."""
 
 import asyncio
+import errno
 import functools as ft
 import logging
 from pathlib import Path
+from tempfile import TemporaryDirectory
 
 import git
 
-from ..const import ATTR_BRANCH, ATTR_URL, URL_HASSIO_ADDONS
+from ..const import ATTR_BRANCH, ATTR_URL
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import StoreGitCloneError, StoreGitError, StoreJobError
 from ..jobs.decorator import Job, JobCondition
-from ..resolution.const import ContextType, IssueType, SuggestionType
+from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from ..utils import remove_folder
-from .utils import get_hash_from_repository
 from .validate import RE_REPOSITORY
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -22,8 +23,6 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class GitRepo(CoreSysAttributes):
     """Manage Add-on Git repository."""
 
-    builtin: bool
-
     def __init__(self, coresys: CoreSys, path: Path, url: str):
         """Initialize Git base wrapper."""
         self.coresys: CoreSys = coresys
@@ -87,38 +86,77 @@ class GitRepo(CoreSysAttributes):
     async def clone(self) -> None:
         """Clone git add-on repository."""
         async with self.lock:
-            git_args = {
-                attribute: value
-                for attribute, value in (
-                    ("recursive", True),
-                    ("branch", self.branch),
-                    ("depth", 1),
-                    ("shallow-submodules", True),
-                )
-                if value is not None
-            }
-
-            try:
-                _LOGGER.info(
-                    "Cloning add-on %s repository from %s", self.path, self.url
-                )
-                self.repo = await self.sys_run_in_executor(
-                    ft.partial(
-                        git.Repo.clone_from,
-                        self.url,
-                        str(self.path),
-                        **git_args,  # type: ignore
-                    )
-                )
-
-            except (
-                git.InvalidGitRepositoryError,
-                git.NoSuchPathError,
-                git.CommandError,
-                UnicodeDecodeError,
-            ) as err:
-                _LOGGER.error("Can't clone %s repository: %s.", self.url, err)
-                raise StoreGitCloneError() from err
+            await self._clone()
+
+    @Job(
+        name="git_repo_reset",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
+        on_condition=StoreJobError,
+    )
+    async def reset(self) -> None:
+        """Reset repository to fix issue with local copy."""
+        # Clone into temporary folder
+        temp_dir = await self.sys_run_in_executor(
+            TemporaryDirectory, dir=self.sys_config.path_tmp
+        )
+        temp_path = Path(temp_dir.name)
+        try:
+            await self._clone(temp_path)
+
+            # Remove corrupted repo and move temp clone to its place
+            def move_clone():
+                remove_folder(folder=self.path)
+                temp_path.rename(self.path)
+
+            async with self.lock:
+                try:
+                    await self.sys_run_in_executor(move_clone)
+                except OSError as err:
+                    if err.errno == errno.EBADMSG:
+                        self.sys_resolution.add_unhealthy_reason(
+                            UnhealthyReason.OSERROR_BAD_MESSAGE
+                        )
+                    raise StoreGitCloneError(
+                        f"Can't move clone due to: {err!s}", _LOGGER.error
+                    ) from err
+        finally:
+            # Clean up temporary directory in case of error
+            # If the folder was moved this will do nothing
+            await self.sys_run_in_executor(temp_dir.cleanup)
+
+    async def _clone(self, path: Path | None = None) -> None:
+        """Clone git add-on repository to location."""
+        path = path or self.path
+        git_args = {
+            attribute: value
+            for attribute, value in (
+                ("recursive", True),
+                ("branch", self.branch),
+                ("depth", 1),
+                ("shallow-submodules", True),
+            )
+            if value is not None
+        }
+
+        try:
+            _LOGGER.info("Cloning add-on %s repository from %s", path, self.url)
+            self.repo = await self.sys_run_in_executor(
+                ft.partial(
+                    git.Repo.clone_from,
+                    self.url,
+                    str(path),
+                    **git_args,  # type: ignore
+                )
+            )
+
+        except (
+            git.InvalidGitRepositoryError,
+            git.NoSuchPathError,
+            git.CommandError,
+            UnicodeDecodeError,
+        ) as err:
+            _LOGGER.error("Can't clone %s repository: %s.", self.url, err)
+            raise StoreGitCloneError() from err
 
 @Job(
     name="git_repo_pull",
@@ -197,12 +235,17 @@ class GitRepo(CoreSysAttributes):
             )
             raise StoreGitError() from err
 
-    async def _remove(self):
+    async def remove(self) -> None:
         """Remove a repository."""
         if self.lock.locked():
-            _LOGGER.warning("There is already a task in progress")
+            _LOGGER.warning(
+                "Cannot remove add-on repository %s, there is already a task in progress",
+                self.url,
+            )
             return
 
+        _LOGGER.info("Removing custom add-on repository %s", self.url)
+
         def _remove_git_dir(path: Path) -> None:
             if not path.is_dir():
                 return
@@ -210,30 +253,3 @@ class GitRepo(CoreSysAttributes):
 
         async with self.lock:
             await self.sys_run_in_executor(_remove_git_dir, self.path)
-
-
-class GitRepoHassIO(GitRepo):
-    """Supervisor add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys):
-        """Initialize Git Supervisor add-on repository."""
-        super().__init__(coresys, coresys.config.path_addons_core, URL_HASSIO_ADDONS)
-
-
-class GitRepoCustom(GitRepo):
-    """Custom add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys, url):
-        """Initialize custom Git Supervisor add-on repository."""
-        path = Path(coresys.config.path_addons_git, get_hash_from_repository(url))
-
-        super().__init__(coresys, path, url)
-
-    async def remove(self):
-        """Remove a custom repository."""
-        _LOGGER.info("Removing custom add-on repository %s", self.url)
-        await self._remove()
@@ -1,19 +1,28 @@
 """Represent a Supervisor repository."""
 
+from __future__ import annotations
+
+from abc import ABC, abstractmethod
 import logging
 from pathlib import Path
-from typing import cast
 
 import voluptuous as vol
 
 from supervisor.utils import get_latest_mtime
 
-from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_URL, FILE_SUFFIX_CONFIGURATION
+from ..const import (
+    ATTR_MAINTAINER,
+    ATTR_NAME,
+    ATTR_URL,
+    FILE_SUFFIX_CONFIGURATION,
+    REPOSITORY_CORE,
+    REPOSITORY_LOCAL,
+)
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import ConfigurationFileError, StoreError
 from ..utils.common import read_json_or_yaml_file
-from .const import StoreType
-from .git import GitRepo, GitRepoCustom, GitRepoHassIO
+from .const import BuiltinRepository
+from .git import GitRepo
 from .utils import get_hash_from_repository
 from .validate import SCHEMA_REPOSITORY_CONFIG
 
@@ -21,27 +30,48 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 UNKNOWN = "unknown"
 
 
-class Repository(CoreSysAttributes):
+class Repository(CoreSysAttributes, ABC):
     """Add-on store repository in Supervisor."""
 
-    def __init__(self, coresys: CoreSys, repository: str):
+    def __init__(self, coresys: CoreSys, repository: str, local_path: Path, slug: str):
         """Initialize add-on store repository object."""
+        self._slug: str = slug
+        self._local_path: Path = local_path
         self.coresys: CoreSys = coresys
-        self.git: GitRepo | None = None
-
         self.source: str = repository
-        if repository == StoreType.LOCAL:
-            self._slug = repository
-            self._type = StoreType.LOCAL
-            self._latest_mtime: float | None = None
-        elif repository == StoreType.CORE:
-            self.git = GitRepoHassIO(coresys)
-            self._slug = repository
-            self._type = StoreType.CORE
-        else:
-            self.git = GitRepoCustom(coresys, repository)
-            self._slug = get_hash_from_repository(repository)
-            self._type = StoreType.GIT
+
+    @staticmethod
+    def create(coresys: CoreSys, repository: str) -> Repository:
+        """Create a repository instance."""
+        if repository in BuiltinRepository:
+            return Repository._create_builtin(coresys, BuiltinRepository(repository))
+        else:
+            return Repository._create_custom(coresys, repository)
+
+    @staticmethod
+    def _create_builtin(coresys: CoreSys, builtin: BuiltinRepository) -> Repository:
+        """Create builtin repository."""
+        if builtin == BuiltinRepository.LOCAL:
+            slug = REPOSITORY_LOCAL
+            local_path = coresys.config.path_addons_local
+            return RepositoryLocal(coresys, local_path, slug)
+        elif builtin == BuiltinRepository.CORE:
+            slug = REPOSITORY_CORE
+            local_path = coresys.config.path_addons_core
+        else:
+            # For other builtin repositories (URL-based)
+            slug = get_hash_from_repository(builtin.value)
+            local_path = coresys.config.path_addons_git / slug
+        return RepositoryGitBuiltin(
+            coresys, builtin.value, local_path, slug, builtin.git_url
+        )
+
+    @staticmethod
+    def _create_custom(coresys: CoreSys, repository: str) -> RepositoryCustom:
+        """Create custom repository."""
+        slug = get_hash_from_repository(repository)
+        local_path = coresys.config.path_addons_git / slug
+        return RepositoryCustom(coresys, repository, local_path, slug)
 
     def __repr__(self) -> str:
         """Return internal representation."""
@@ -53,9 +83,9 @@ class Repository(CoreSysAttributes):
         return self._slug
 
     @property
-    def type(self) -> StoreType:
-        """Return type of the store."""
-        return self._type
+    def local_path(self) -> Path:
+        """Return local path to repository."""
+        return self._local_path
 
     @property
     def data(self) -> dict:
@@ -77,55 +107,123 @@ class Repository(CoreSysAttributes):
         """Return url of repository."""
         return self.data.get(ATTR_MAINTAINER, UNKNOWN)
 
-    def validate(self) -> bool:
-        """Check if store is valid.
-
-        Must be run in executor.
-        """
-        if not self.git or self.type == StoreType.CORE:
-            return True
-
-        # If exists?
-        for filetype in FILE_SUFFIX_CONFIGURATION:
-            repository_file = Path(self.git.path / f"repository{filetype}")
-            if repository_file.exists():
-                break
-
-        if not repository_file.exists():
-            return False
-
-        # If valid?
-        try:
-            SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
-        except (ConfigurationFileError, vol.Invalid) as err:
-            _LOGGER.warning("Could not validate repository configuration %s", err)
-            return False
-
-        return True
-
-    async def load(self) -> None:
-        """Load addon repository."""
-        if not self.git:
-            self._latest_mtime, _ = await self.sys_run_in_executor(
-                get_latest_mtime, self.sys_config.path_addons_local
-            )
-            return
-        await self.git.load()
-
-    async def update(self) -> bool:
-        """Update add-on repository.
-
-        Returns True if the repository was updated.
-        """
-        if not await self.sys_run_in_executor(self.validate):
-            return False
-
-        if self.git:
-            return await self.git.pull()
+    @property
+    @abstractmethod
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+
+    @abstractmethod
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+    @abstractmethod
+    async def load(self) -> None:
+        """Load addon repository."""
+
+    @abstractmethod
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+
+    @abstractmethod
+    async def remove(self) -> None:
+        """Remove add-on repository."""
+
+    @abstractmethod
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+
+
+class RepositoryBuiltin(Repository, ABC):
+    """A built-in add-on repository."""
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return True
+
+    async def validate(self) -> bool:
+        """Assume built-in repositories are always valid."""
+        return True
+
+    async def remove(self) -> None:
+        """Raise. Not supported for built-in repositories."""
+        raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
+
+
+class RepositoryGit(Repository, ABC):
+    """A git based add-on repository."""
+
+    _git: GitRepo
+
+    async def load(self) -> None:
+        """Load addon repository."""
+        await self._git.load()
+
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+        if not await self.validate():
+            return False
+
+        return await self._git.pull()
+
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+        def validate_file() -> bool:
+            # If exists?
+            for filetype in FILE_SUFFIX_CONFIGURATION:
+                repository_file = Path(self._git.path / f"repository{filetype}")
+                if repository_file.exists():
+                    break
+
+            if not repository_file.exists():
+                return False
+
+            # If valid?
+            try:
+                SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
+            except (ConfigurationFileError, vol.Invalid) as err:
+                _LOGGER.warning("Could not validate repository configuration %s", err)
+                return False
+
+            return True
+
+        return await self.sys_run_in_executor(validate_file)
+
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+        await self._git.reset()
+        await self.load()
+
+
+class RepositoryLocal(RepositoryBuiltin):
+    """A local add-on repository."""
+
+    def __init__(self, coresys: CoreSys, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, BuiltinRepository.LOCAL.value, local_path, slug)
+        self._latest_mtime: float | None = None
+
+    async def load(self) -> None:
+        """Load addon repository."""
+        self._latest_mtime, _ = await self.sys_run_in_executor(
+            get_latest_mtime, self.local_path
+        )
+
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
         # Check local modifications
         latest_mtime, modified_path = await self.sys_run_in_executor(
-            get_latest_mtime, self.sys_config.path_addons_local
+            get_latest_mtime, self.local_path
         )
         if self._latest_mtime != latest_mtime:
             _LOGGER.debug(
@@ -138,9 +236,37 @@ class Repository(CoreSysAttributes):
 
         return False
 
+    async def reset(self) -> None:
+        """Raise. Not supported for local repository."""
+        raise StoreError(
+            "Can't reset local repository as it is not git based!", _LOGGER.error
+        )
+
+
+class RepositoryGitBuiltin(RepositoryBuiltin, RepositoryGit):
+    """A built-in add-on repository based on git."""
+
+    def __init__(
+        self, coresys: CoreSys, repository: str, local_path: Path, slug: str, url: str
+    ) -> None:
+        """Initialize object."""
+        super().__init__(coresys, repository, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+
+class RepositoryCustom(RepositoryGit):
+    """A custom add-on repository."""
+
+    def __init__(self, coresys: CoreSys, url: str, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, url, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return False
+
     async def remove(self) -> None:
         """Remove add-on repository."""
-        if not self.git or self.type == StoreType.CORE:
-            raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
-
-        await cast(GitRepoCustom, self.git).remove()
+        await self._git.remove()
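`Repository._create_custom` above turns a repository URL into a stable slug via `get_hash_from_repository`. A hypothetical sketch of that kind of URL-to-slug derivation (the real helper's exact algorithm and digest length may differ):

```python
import hashlib


def slug_from_url(url: str) -> str:
    """Derive a short, deterministic slug for a custom repository URL."""
    # Lowercasing first makes the slug insensitive to URL casing.
    return hashlib.sha1(url.lower().encode()).hexdigest()[:8]


a = slug_from_url("https://example.com/custom-repo")
b = slug_from_url("HTTPS://EXAMPLE.COM/CUSTOM-REPO")
print(a == b)  # True: deterministic and case-insensitive
print(len(a))  # 8
```

The key property is determinism: the same URL always maps to the same slug, so the slug can double as the on-disk directory name under `path_addons_git`.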
@@ -4,18 +4,7 @@ import voluptuous as vol
 
 from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_REPOSITORIES, ATTR_URL
 from ..validate import RE_REPOSITORY
-from .const import StoreType
-
-URL_COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
-URL_ESPHOME = "https://github.com/esphome/home-assistant-addon"
-URL_MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
-BUILTIN_REPOSITORIES = {
-    StoreType.CORE,
-    StoreType.LOCAL,
-    URL_COMMUNITY_ADDONS,
-    URL_ESPHOME,
-    URL_MUSIC_ASSISTANT,
-}
+from .const import BuiltinRepository
 
 # pylint: disable=no-value-for-parameter
 SCHEMA_REPOSITORY_CONFIG = vol.Schema(
@@ -28,18 +17,9 @@ SCHEMA_REPOSITORY_CONFIG = vol.Schema(
 )
 
 
-def ensure_builtin_repositories(addon_repositories: list[str]) -> list[str]:
-    """Ensure builtin repositories are in list.
-
-    Note: This should not be used in validation as the resulting list is not
-    stable. This can have side effects when comparing data later on.
-    """
-    return list(set(addon_repositories) | BUILTIN_REPOSITORIES)
-
-
 def validate_repository(repository: str) -> str:
     """Validate a valid repository."""
-    if repository in [StoreType.CORE, StoreType.LOCAL]:
+    if repository in BuiltinRepository:
         return repository
 
     data = RE_REPOSITORY.match(repository)
@@ -55,10 +35,12 @@ def validate_repository(repository: str) -> str:
 
 repositories = vol.All([validate_repository], vol.Unique())
 
+DEFAULT_REPOSITORIES = {repo.value for repo in BuiltinRepository}
+
 SCHEMA_STORE_FILE = vol.Schema(
     {
         vol.Optional(
-            ATTR_REPOSITORIES, default=list(BUILTIN_REPOSITORIES)
+            ATTR_REPOSITORIES, default=lambda: list(DEFAULT_REPOSITORIES)
         ): repositories,
     },
     extra=vol.REMOVE_EXTRA,
@@ -46,7 +46,7 @@ def _check_connectivity_throttle_period(coresys: CoreSys, *_) -> timedelta:
     if coresys.supervisor.connectivity:
         return timedelta(minutes=10)
 
-    return timedelta(seconds=30)
+    return timedelta(seconds=5)
 
 
 class Supervisor(CoreSysAttributes):
@@ -291,14 +291,16 @@ class Supervisor(CoreSysAttributes):
         limit=JobExecutionLimit.THROTTLE,
         throttle_period=_check_connectivity_throttle_period,
     )
-    async def check_connectivity(self):
-        """Check the connection."""
+    async def check_connectivity(self) -> None:
+        """Check the Internet connectivity from Supervisor's point of view."""
         timeout = aiohttp.ClientTimeout(total=10)
         try:
             await self.sys_websession.head(
                 "https://checkonline.home-assistant.io/online.txt", timeout=timeout
             )
-        except (ClientError, TimeoutError):
+        except (ClientError, TimeoutError) as err:
+            _LOGGER.debug("Supervisor Connectivity check failed: %s", err)
             self.connectivity = False
         else:
+            _LOGGER.debug("Supervisor Connectivity check succeeded")
             self.connectivity = True
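The throttle change above in plain form: while connectivity is good the check backs off to ten minutes, and while offline it now retries every five seconds instead of thirty. A standalone sketch of the branch logic:

```python
from datetime import timedelta


def throttle_period(connectivity: bool) -> timedelta:
    """Mirror the branch logic of _check_connectivity_throttle_period."""
    return timedelta(minutes=10) if connectivity else timedelta(seconds=5)


print(throttle_period(True).total_seconds())  # 600.0
print(throttle_period(False).total_seconds())  # 5.0
```

The shorter offline interval means a recovered link is noticed within seconds rather than half a minute, at the cost of a few extra HEAD requests while disconnected.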
@@ -12,6 +12,7 @@ from sentry_sdk.integrations.dedupe import DedupeIntegration
 from sentry_sdk.integrations.excepthook import ExcepthookIntegration
 from sentry_sdk.integrations.logging import LoggingIntegration
+from sentry_sdk.integrations.threading import ThreadingIntegration
 from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber
 
 from ..const import SUPERVISOR_VERSION
 from ..coresys import CoreSys
@@ -26,6 +27,7 @@ def init_sentry(coresys: CoreSys) -> None:
     """Initialize sentry client."""
     if not sentry_sdk.is_initialized():
         _LOGGER.info("Initializing Supervisor Sentry")
+        denylist = DEFAULT_DENYLIST + ["psk", "ssid"]
         # Don't use AsyncioIntegration(). We commonly handle task exceptions
         # outside of tasks. This would cause exception we gracefully handle to
         # be captured by sentry.
@@ -34,6 +36,7 @@ def init_sentry(coresys: CoreSys) -> None:
             before_send=partial(filter_data, coresys),
             auto_enabling_integrations=False,
             default_integrations=False,
+            event_scrubber=EventScrubber(denylist=denylist),
             integrations=[
                 AioHttpIntegration(
                     failed_request_status_codes=frozenset(range(500, 600))
@@ -182,7 +182,7 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
             }
         }
     ),
-    vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean(),
+    vol.Optional(ATTR_ENABLE_IPV6, default=None): vol.Maybe(vol.Boolean()),
 }
 )
@@ -18,6 +18,7 @@ from supervisor.const import AddonBoot, AddonState, BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
 from supervisor.hardware.helper import HwHelper
@@ -27,7 +28,7 @@ from supervisor.utils.dt import utcnow
 
 from .test_manager import BOOT_FAIL_ISSUE, BOOT_FAIL_SUGGESTIONS
 
-from tests.common import get_fixture_path
+from tests.common import get_fixture_path, is_in_list
 from tests.const import TEST_ADDON_SLUG
 
 
@@ -208,7 +209,7 @@ async def test_watchdog_on_stop(coresys: CoreSys, install_addon_ssh: Addon) -> N
 
 
 async def test_listener_attached_on_install(
-    coresys: CoreSys, mock_amd64_arch_supported: None, repository
+    coresys: CoreSys, mock_amd64_arch_supported: None, test_repository
 ):
     """Test events listener attached on addon install."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
@@ -241,7 +242,7 @@ async def test_listener_attached_on_install(
 )
 async def test_watchdog_during_attach(
     coresys: CoreSys,
-    repository: Repository,
+    test_repository: Repository,
     boot_timedelta: timedelta,
     restart_count: int,
 ):
@@ -709,7 +710,7 @@ async def test_local_example_install(
     coresys: CoreSys,
     container: MagicMock,
     tmp_supervisor_data: Path,
-    repository,
+    test_repository,
     mock_aarch64_arch_supported: None,
 ):
     """Test install of an addon."""
@@ -819,7 +820,7 @@ async def test_paths_cache(coresys: CoreSys, install_addon_ssh: Addon):
 
     with (
         patch("supervisor.addons.addon.Path.exists", return_value=True),
-        patch("supervisor.store.repository.Repository.update", return_value=True),
+        patch("supervisor.store.repository.RepositoryLocal.update", return_value=True),
     ):
         await coresys.store.reload(coresys.store.get("local"))
 
@@ -840,10 +841,25 @@ async def test_addon_loads_wrong_image(
     install_addon_ssh.persist["image"] = "local/aarch64-addon-ssh"
     assert install_addon_ssh.image == "local/aarch64-addon-ssh"
 
-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ) as mock_run_command,
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()
 
-    container.remove.assert_called_once_with(force=True)
+    container.remove.assert_called_with(force=True, v=True)
+    # one for removing the addon, one for removing the addon builder
+    assert coresys.docker.images.remove.call_count == 2
 
     assert coresys.docker.images.remove.call_args_list[0].kwargs == {
         "image": "local/aarch64-addon-ssh:latest",
         "force": True,
@@ -852,12 +868,18 @@ async def test_addon_loads_wrong_image(
         "image": "local/aarch64-addon-ssh:9.2.1",
         "force": True,
     }
-    coresys.docker.images.build.assert_called_once()
-    assert (
-        coresys.docker.images.build.call_args.kwargs["tag"]
-        == "local/amd64-addon-ssh:9.2.1"
-    )
+    mock_run_command.assert_called_once()
+    assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
+    assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
+    command = mock_run_command.call_args.kwargs["command"]
+    assert is_in_list(
+        ["--platform", "linux/amd64"],
|
||||
command,
|
||||
)
|
||||
assert is_in_list(
|
||||
["--tag", "local/amd64-addon-ssh:9.2.1"],
|
||||
command,
|
||||
)
|
||||
assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
|
||||
assert install_addon_ssh.image == "local/amd64-addon-ssh"
|
||||
coresys.addons.data.save_data.assert_called_once()
|
||||
|
||||
@ -871,15 +893,33 @@ async def test_addon_loads_missing_image(
|
||||
"""Test addon corrects a missing image on load."""
|
||||
coresys.docker.images.get.side_effect = ImageNotFound("missing")
|
||||
|
||||
with patch("pathlib.Path.is_file", return_value=True):
|
||||
with (
|
||||
patch("pathlib.Path.is_file", return_value=True),
|
||||
patch.object(
|
||||
coresys.docker,
|
||||
"run_command",
|
||||
new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
|
||||
) as mock_run_command,
|
||||
patch.object(
|
||||
type(coresys.config),
|
||||
"local_to_extern_path",
|
||||
return_value="/addon/path/on/host",
|
||||
),
|
||||
):
|
||||
await install_addon_ssh.load()
|
||||
|
||||
coresys.docker.images.build.assert_called_once()
|
||||
assert (
|
||||
coresys.docker.images.build.call_args.kwargs["tag"]
|
||||
== "local/amd64-addon-ssh:9.2.1"
|
||||
mock_run_command.assert_called_once()
|
||||
assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
|
||||
assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
|
||||
command = mock_run_command.call_args.kwargs["command"]
|
||||
assert is_in_list(
|
||||
["--platform", "linux/amd64"],
|
||||
command,
|
||||
)
|
||||
assert is_in_list(
|
||||
["--tag", "local/amd64-addon-ssh:9.2.1"],
|
||||
command,
|
||||
)
|
||||
assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
|
||||
assert install_addon_ssh.image == "local/amd64-addon-ssh"
|
||||
|
||||
|
||||
@ -900,7 +940,14 @@ async def test_addon_load_succeeds_with_docker_errors(
|
||||
# Image build failure
|
||||
coresys.docker.images.build.side_effect = DockerException()
|
||||
caplog.clear()
|
||||
with patch("pathlib.Path.is_file", return_value=True):
|
||||
with (
|
||||
patch("pathlib.Path.is_file", return_value=True),
|
||||
patch.object(
|
||||
type(coresys.config),
|
||||
"local_to_extern_path",
|
||||
return_value="/addon/path/on/host",
|
||||
),
|
||||
):
|
||||
await install_addon_ssh.load()
|
||||
assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text
|
||||
|
||||
|
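The hunks above stub `coresys.docker.run_command` with `patch.object(..., new=PropertyMock(...))` and then assert on `call_args`. A minimal standalone sketch of why that works (the `Docker` class below is a hypothetical stand-in, not Supervisor's real `DockerAPI`): `new=` replaces the *instance* attribute, and a `PropertyMock` is callable like any `MagicMock`, so the patched call returns the canned value and records its arguments.

```python
from unittest.mock import PropertyMock, patch


class Docker:
    """Hypothetical stand-in for a Docker API wrapper."""

    def run_command(self, image, **kwargs):
        raise RuntimeError("would talk to the real daemon")


docker = Docker()

# `new=` puts the PropertyMock into the instance __dict__, shadowing the
# class method; calling it returns return_value and records the call.
with patch.object(
    docker, "run_command", new=PropertyMock(return_value=(0, b"ok"))
) as mock_run:
    result = docker.run_command("alpine", version="3.18")

assert result == (0, b"ok")
assert mock_run.call_args.args[0] == "alpine"
assert mock_run.call_args.kwargs["version"] == "3.18"
```

With `new=` given, the `as` target of `patch.object` is the replacement object itself, which is why the tests can inspect `mock_run_command.call_args` afterwards.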
@@ -8,10 +8,13 @@ from supervisor.addons.addon import Addon
 from supervisor.addons.build import AddonBuild
 from supervisor.coresys import CoreSys
 
+from tests.common import is_in_list
+
 
 async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test platform set in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
 
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@@ -19,17 +22,23 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
         )
 
-    assert args["platform"] == "linux/amd64"
+    assert is_in_list(["--platform", "linux/amd64"], args["command"])
 
 
 async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test dockerfile path in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
 
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
@@ -37,12 +46,17 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
         )
 
-    assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile")
+    assert is_in_list(["--file", "Dockerfile"], args["command"])
+    assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
+        "fixtures/addons/local/ssh/Dockerfile"
+    )
@@ -50,8 +64,9 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
 
 
 async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: Addon):
-    """Test platform set in docker args."""
+    """Test dockerfile arch evaluation in container build args."""
     build = await AddonBuild(coresys, install_addon_ssh).load_config()
 
     with (
         patch.object(
             type(coresys.arch), "supported", new=PropertyMock(return_value=["aarch64"])
@@ -59,12 +74,17 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
         patch.object(
             type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         args = await coresys.run_in_executor(
-            build.get_docker_args, AwesomeVersion("latest")
+            build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
        )
 
-    assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile.aarch64")
+    assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
+    assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
+        "fixtures/addons/local/ssh/Dockerfile.aarch64"
+    )
@@ -29,7 +29,7 @@ from supervisor.plugins.dns import PluginDns
 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
 from supervisor.resolution.data import Issue, Suggestion
 from supervisor.store.addon import AddonStore
-from supervisor.store.repository import Repository
+from supervisor.store.repository import RepositoryLocal
 from supervisor.utils import check_exception_chain
 from supervisor.utils.common import write_json_file
 
@@ -67,7 +67,7 @@ async def fixture_remove_wait_boot(coresys: CoreSys) -> AsyncGenerator[None]:
 
 @pytest.fixture(name="install_addon_example_image")
 async def fixture_install_addon_example_image(
-    coresys: CoreSys, repository
+    coresys: CoreSys, test_repository
 ) -> Generator[Addon]:
     """Install local_example add-on with image."""
     store = coresys.addons.store["local_example_image"]
@@ -442,7 +442,7 @@ async def test_store_data_changes_during_update(
     update_task = coresys.create_task(simulate_update())
     await asyncio.sleep(0)
 
-    with patch.object(Repository, "update", return_value=True):
+    with patch.object(RepositoryLocal, "update", return_value=True):
         await coresys.store.reload()
 
     assert "image" not in coresys.store.data.addons["local_ssh"]
@@ -14,6 +14,7 @@ from supervisor.const import AddonState
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import HassioError
 from supervisor.store.repository import Repository
@@ -53,7 +54,7 @@ async def test_addons_info(
 
 # DEPRECATED - Remove with legacy routing logic on 1/2023
 async def test_addons_info_not_installed(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test getting addon info for not installed addon."""
     resp = await api_client.get(f"/addons/{TEST_ADDON_SLUG}/info")
@@ -239,6 +240,19 @@ async def test_api_addon_rebuild_healthcheck(
         patch.object(Addon, "need_build", new=PropertyMock(return_value=True)),
         patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
         patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
         patch.object(
             DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
         ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         resp = await api_client.post("/addons/local_ssh/rebuild")
 
@@ -247,6 +261,98 @@ async def test_api_addon_rebuild_healthcheck(
     assert resp.status == 200
 
 
+async def test_api_addon_rebuild_force(
+    api_client: TestClient,
+    coresys: CoreSys,
+    install_addon_ssh: Addon,
+    container: MagicMock,
+    tmp_supervisor_data,
+    path_extern,
+):
+    """Test rebuilding an image-based addon with force parameter."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    container.status = "running"
+    install_addon_ssh.path_data.mkdir()
+    container.attrs["Config"] = {"Healthcheck": "exists"}
+    await install_addon_ssh.load()
+    await asyncio.sleep(0)
+    assert install_addon_ssh.state == AddonState.STARTUP
+
+    state_changes: list[AddonState] = []
+    _container_events_task: asyncio.Task | None = None
+
+    async def container_events():
+        nonlocal state_changes
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.STOPPED)
+        )
+        state_changes.append(install_addon_ssh.state)
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.RUNNING)
+        )
+        state_changes.append(install_addon_ssh.state)
+        await asyncio.sleep(0)
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.HEALTHY)
+        )
+
+    async def container_events_task(*args, **kwargs):
+        nonlocal _container_events_task
+        _container_events_task = asyncio.create_task(container_events())
+
+    # Test 1: Without force, image-based addon should fail
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild")
+
+    assert resp.status == 400
+    result = await resp.json()
+    assert "Can't rebuild a image based add-on" in result["message"]
+
+    # Reset state for next test
+    state_changes.clear()
+
+    # Test 2: With force=True, image-based addon should succeed
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+        patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
+        patch.object(
+            DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
+        ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild", json={"force": True})
+
+    assert state_changes == [AddonState.STOPPED, AddonState.STARTUP]
+    assert install_addon_ssh.state == AddonState.STARTED
+    assert resp.status == 200
+
+    await _container_events_task
+
+
 async def test_api_addon_uninstall(
     api_client: TestClient,
     coresys: CoreSys,
@@ -427,7 +533,7 @@ async def test_addon_not_found(
         ("get", "/addons/local_ssh/logs/boots/1/follow", False),
     ],
 )
-@pytest.mark.usefixtures("repository")
+@pytest.mark.usefixtures("test_repository")
 async def test_addon_not_installed(
     api_client: TestClient, method: str, url: str, json_expected: bool
 ):
@@ -3,6 +3,7 @@
 from datetime import UTC, datetime, timedelta
 from unittest.mock import AsyncMock, MagicMock, patch
 
+from aiohttp.hdrs import WWW_AUTHENTICATE
 from aiohttp.test_utils import TestClient
 import pytest
 
@@ -166,8 +167,8 @@ async def test_auth_json_invalid_credentials(
     resp = await api_client.post(
         "/auth", json={"username": "test", "password": "wrong"}
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401
 
 
 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -213,8 +214,8 @@ async def test_auth_urlencoded_failure(
         data="username=test&password=fail",
         headers={"Content-Type": "application/x-www-form-urlencoded"},
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401
 
 
 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -225,7 +226,7 @@ async def test_auth_unsupported_content_type(
     resp = await api_client.post(
         "/auth", data="something", headers={"Content-Type": "text/plain"}
     )
-    # This probably should be 400 here for better consistency
+    assert "Basic realm" in resp.headers[WWW_AUTHENTICATE]
     assert resp.status == 401
 
 
@@ -19,7 +19,7 @@ async def test_api_docker_info(api_client: TestClient):
 
 async def test_api_network_enable_ipv6(coresys: CoreSys, api_client: TestClient):
     """Test setting docker network for enabled IPv6."""
-    assert coresys.docker.config.enable_ipv6 is False
+    assert coresys.docker.config.enable_ipv6 is None
 
     resp = await api_client.post("/docker/options", json={"enable_ipv6": True})
     assert resp.status == 200
@@ -30,7 +30,7 @@ REPO_URL = "https://github.com/awesome-developer/awesome-repo"
 async def test_api_store(
     api_client: TestClient,
     store_addon: AddonStore,
-    repository: Repository,
+    test_repository: Repository,
     caplog: pytest.LogCaptureFixture,
 ):
     """Test /store REST API."""
@@ -38,7 +38,7 @@ async def test_api_store(
     result = await resp.json()
 
     assert result["data"]["addons"][-1]["slug"] == store_addon.slug
-    assert result["data"]["repositories"][-1]["slug"] == repository.slug
+    assert result["data"]["repositories"][-1]["slug"] == test_repository.slug
 
     assert (
         f"Add-on {store_addon.slug} not supported on this platform" not in caplog.text
@@ -73,23 +73,25 @@ async def test_api_store_addons_addon_version(
 
 
 @pytest.mark.asyncio
-async def test_api_store_repositories(api_client: TestClient, repository: Repository):
+async def test_api_store_repositories(
+    api_client: TestClient, test_repository: Repository
+):
     """Test /store/repositories REST API."""
     resp = await api_client.get("/store/repositories")
     result = await resp.json()
 
-    assert result["data"][-1]["slug"] == repository.slug
+    assert result["data"][-1]["slug"] == test_repository.slug
 
 
 @pytest.mark.asyncio
 async def test_api_store_repositories_repository(
-    api_client: TestClient, repository: Repository
+    api_client: TestClient, test_repository: Repository
 ):
     """Test /store/repositories/{repository} REST API."""
-    resp = await api_client.get(f"/store/repositories/{repository.slug}")
+    resp = await api_client.get(f"/store/repositories/{test_repository.slug}")
     result = await resp.json()
 
-    assert result["data"]["slug"] == repository.slug
+    assert result["data"]["slug"] == test_repository.slug
 
 
 async def test_api_store_add_repository(
@@ -97,8 +99,8 @@ async def test_api_store_add_repository(
 ) -> None:
     """Test POST /store/repositories REST API."""
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
     ):
         response = await api_client.post(
             "/store/repositories", json={"repository": REPO_URL}
@@ -106,18 +108,17 @@ async def test_api_store_add_repository(
 
     assert response.status == 200
     assert REPO_URL in coresys.store.repository_urls
-    assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)
 
 
 async def test_api_store_remove_repository(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test DELETE /store/repositories/{repository} REST API."""
-    response = await api_client.delete(f"/store/repositories/{repository.slug}")
+    response = await api_client.delete(f"/store/repositories/{test_repository.slug}")
 
     assert response.status == 200
-    assert repository.source not in coresys.store.repository_urls
-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.source not in coresys.store.repository_urls
+    assert test_repository.slug not in coresys.store.repositories
 
 
 async def test_api_store_update_healthcheck(
@@ -329,7 +330,7 @@ async def test_store_addon_not_found(
         ("post", "/addons/local_ssh/update"),
     ],
 )
-@pytest.mark.usefixtures("repository")
+@pytest.mark.usefixtures("test_repository")
 async def test_store_addon_not_installed(api_client: TestClient, method: str, url: str):
     """Test store addon not installed error."""
     resp = await api_client.request(method, url)
@@ -9,12 +9,7 @@ from blockbuster import BlockingError
 import pytest
 
 from supervisor.coresys import CoreSys
-from supervisor.exceptions import (
-    HassioError,
-    HostNotSupportedError,
-    StoreGitError,
-    StoreNotFound,
-)
+from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
 from supervisor.store.repository import Repository
 
 from tests.api import common_test_api_advanced_logs
@@ -38,12 +33,10 @@ async def test_api_supervisor_options_add_repository(
 ):
     """Test add a repository via POST /supervisor/options REST API."""
     assert REPO_URL not in coresys.store.repository_urls
-    with pytest.raises(StoreNotFound):
-        coresys.store.get_from_url(REPO_URL)
 
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
     ):
         response = await api_client.post(
             "/supervisor/options", json={"addons_repositories": [REPO_URL]}
@@ -51,23 +44,22 @@ async def test_api_supervisor_options_add_repository(
 
     assert response.status == 200
     assert REPO_URL in coresys.store.repository_urls
-    assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)
 
 
 async def test_api_supervisor_options_remove_repository(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test remove a repository via POST /supervisor/options REST API."""
-    assert repository.source in coresys.store.repository_urls
-    assert repository.slug in coresys.store.repositories
+    assert test_repository.source in coresys.store.repository_urls
+    assert test_repository.slug in coresys.store.repositories
 
     response = await api_client.post(
         "/supervisor/options", json={"addons_repositories": []}
     )
 
     assert response.status == 200
-    assert repository.source not in coresys.store.repository_urls
-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.source not in coresys.store.repository_urls
+    assert test_repository.slug not in coresys.store.repositories
 
 
 @pytest.mark.parametrize("git_error", [None, StoreGitError()])
@@ -76,9 +68,9 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
 ):
     """Test repositories skipped on error via POST /supervisor/options REST API."""
     with (
-        patch("supervisor.store.repository.Repository.load", side_effect=git_error),
-        patch("supervisor.store.repository.Repository.validate", return_value=False),
-        patch("supervisor.store.repository.Repository.remove"),
+        patch("supervisor.store.repository.RepositoryGit.load", side_effect=git_error),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=False),
+        patch("supervisor.store.repository.RepositoryCustom.remove"),
     ):
         response = await api_client.post(
             "/supervisor/options", json={"addons_repositories": [REPO_URL]}
@@ -87,8 +79,6 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
     assert response.status == 400
     assert len(coresys.resolution.suggestions) == 0
     assert REPO_URL not in coresys.store.repository_urls
-    with pytest.raises(StoreNotFound):
-        coresys.store.get_from_url(REPO_URL)
 
 
 async def test_api_supervisor_options_repo_error_with_config_change(
@@ -98,7 +88,7 @@ async def test_api_supervisor_options_repo_error_with_config_change(
     assert not coresys.config.debug
 
     with patch(
-        "supervisor.store.repository.Repository.load", side_effect=StoreGitError()
+        "supervisor.store.repository.RepositoryGit.load", side_effect=StoreGitError()
    ):
         response = await api_client.post(
             "/supervisor/options",
@@ -2244,3 +2244,33 @@ async def test_get_upload_path_for_mount_location(
     result = await manager.get_upload_path_for_location(mount)
 
     assert result == mount.local_where
+
+
+@pytest.mark.usefixtures(
+    "supervisor_internet", "tmp_supervisor_data", "path_extern", "install_addon_example"
+)
+async def test_backup_addon_skips_uninstalled(
+    coresys: CoreSys, caplog: pytest.LogCaptureFixture
+):
+    """Test restore installing new addon."""
+    await coresys.core.set_state(CoreState.RUNNING)
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    assert "local_example" in coresys.addons.local
+    orig_store_addons = Backup.store_addons
+
+    async def mock_store_addons(*args, **kwargs):
+        # Mock an uninstall during the backup process
+        await coresys.addons.uninstall("local_example")
+        await orig_store_addons(*args, **kwargs)
+
+    with patch.object(Backup, "store_addons", new=mock_store_addons):
+        backup: Backup = await coresys.backups.do_backup_partial(
+            addons=["local_example"], folders=["ssl"]
+        )
+
+    assert "local_example" not in coresys.addons.local
+    assert not backup.addons
+    assert (
+        "Skipping backup of add-on local_example because it has been uninstalled"
+        in caplog.text
+    )
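The `mock_store_addons` wrapper in the hunk above uses a common test pattern: save the original method, patch in a replacement that injects a side effect mid-operation, then delegate to the saved original. A minimal synchronous sketch under assumed names (this `Backup` class is illustrative, not Supervisor's):

```python
from unittest.mock import patch


class Backup:
    """Illustrative stand-in, not Supervisor's Backup class."""

    def __init__(self, installed: list[str]) -> None:
        self.installed = set(installed)

    def store_addons(self, addons: list[str]) -> list[str]:
        # Only add-ons still installed make it into the backup
        return [a for a in addons if a in self.installed]


orig_store_addons = Backup.store_addons


def mock_store_addons(self: Backup, addons: list[str]) -> list[str]:
    # Simulate an uninstall happening while the backup is running,
    # then delegate to the saved original method
    self.installed.discard("local_example")
    return orig_store_addons(self, addons)


with patch.object(Backup, "store_addons", new=mock_store_addons):
    backup = Backup(["local_example", "local_ssh"])
    stored = backup.store_addons(["local_example", "local_ssh"])

assert stored == ["local_ssh"]  # the uninstalled add-on was skipped
```

Saving the original *before* the `patch.object` context is what lets the wrapper call the real logic after its side effect fires.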
@@ -105,6 +105,20 @@ def reset_last_call(func, group: str | None = None) -> None:
     get_job_decorator(func).set_last_call(datetime.min, group)
 
 
+def is_in_list(a: list, b: list):
+    """Check if all elements in list a are in list b in order.
+
+    Taken from https://stackoverflow.com/a/69175987/12156188.
+    """
+
+    for c in a:
+        if c in b:
+            b = b[b.index(c) :]
+        else:
+            return False
+    return True
+
+
 class MockResponse:
     """Mock response for aiohttp requests."""
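The `is_in_list` helper added above implements an ordered-sublist check: every element of `a` must appear in `b` in the same relative order, with gaps allowed. A few worked cases, reproducing the helper so the snippet stands alone (the `command` list here is an illustrative build invocation, not real test data):

```python
def is_in_list(a: list, b: list) -> bool:
    """Check if all elements in list a are in list b in order."""
    for c in a:
        if c in b:
            b = b[b.index(c):]  # keep only the tail from the match onward
        else:
            return False
    return True


command = ["docker", "buildx", "build", "--platform", "linux/amd64", "--tag", "x:1"]

assert is_in_list(["--platform", "linux/amd64"], command)  # adjacent pair, in order
assert is_in_list(["build", "--tag"], command)  # gaps between matches are allowed
assert not is_in_list(["--tag", "build"], command)  # present, but wrong relative order
```

This is why the build tests can assert on flag/value pairs without caring where in the command line they appear.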
@@ -66,6 +66,7 @@ from .dbus_service_mocks.base import DBusServiceMock
 from .dbus_service_mocks.network_connection_settings import (
     ConnectionSettings as ConnectionSettingsService,
 )
+from .dbus_service_mocks.network_dns_manager import DnsManager as DnsManagerService
 from .dbus_service_mocks.network_manager import NetworkManager as NetworkManagerService
 
 # pylint: disable=redefined-outer-name, protected-access
@@ -131,7 +132,7 @@ async def docker() -> DockerAPI:
 
     docker_obj.info.logging = "journald"
     docker_obj.info.storage = "overlay2"
-    docker_obj.info.version = "1.0.0"
+    docker_obj.info.version = AwesomeVersion("1.0.0")
 
     yield docker_obj
 
@@ -220,6 +221,14 @@ async def network_manager_service(
     yield network_manager_services["network_manager"]
 
 
+@pytest.fixture
+async def dns_manager_service(
+    network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
+) -> AsyncGenerator[DnsManagerService]:
+    """Return DNS Manager service mock."""
+    yield network_manager_services["network_dns_manager"]
+
+
 @pytest.fixture(name="connection_settings_service")
 async def fixture_connection_settings_service(
     network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
@@ -409,7 +418,7 @@ async def coresys(
     coresys_obj.init_websession = AsyncMock()
 
     # Don't remove files/folders related to addons and stores
-    with patch("supervisor.store.git.GitRepo._remove"):
+    with patch("supervisor.store.git.GitRepo.remove"):
         yield coresys_obj
 
     await coresys_obj.dbus.unload()
@@ -582,7 +591,7 @@ def run_supervisor_state(request: pytest.FixtureRequest) -> Generator[MagicMock]
 
 
 @pytest.fixture
-def store_addon(coresys: CoreSys, tmp_path, repository):
+def store_addon(coresys: CoreSys, tmp_path, test_repository):
     """Store add-on fixture."""
     addon_obj = AddonStore(coresys, "test_store_addon")
 
@@ -595,23 +604,16 @@ def store_addon(coresys: CoreSys, tmp_path, repository):
 
 
 @pytest.fixture
-async def repository(coresys: CoreSys):
-    """Repository fixture."""
-    coresys.store._data[ATTR_REPOSITORIES].remove(
-        "https://github.com/hassio-addons/repository"
-    )
-    coresys.store._data[ATTR_REPOSITORIES].remove(
-        "https://github.com/esphome/home-assistant-addon"
-    )
+async def test_repository(coresys: CoreSys):
+    """Test add-on store repository fixture."""
     coresys.config._data[ATTR_ADDONS_CUSTOM_LIST] = []
 
     with (
         patch("supervisor.store.validate.BUILTIN_REPOSITORIES", {"local", "core"}),
         patch("supervisor.store.git.GitRepo.load", return_value=None),
     ):
         await coresys.store.load()
 
-    repository_obj = Repository(
+    repository_obj = Repository.create(
         coresys, "https://github.com/awesome-developer/awesome-repo"
     )
 
@@ -624,7 +626,7 @@ async def repository(coresys: CoreSys):
 
 
 @pytest.fixture
-async def install_addon_ssh(coresys: CoreSys, repository):
+async def install_addon_ssh(coresys: CoreSys, test_repository):
     """Install local_ssh add-on."""
     store = coresys.addons.store[TEST_ADDON_SLUG]
     await coresys.addons.data.install(store)
@@ -636,7 +638,7 @@ async def install_addon_ssh(coresys: CoreSys, repository):
 
 
 @pytest.fixture
-async def install_addon_example(coresys: CoreSys, repository):
+async def install_addon_example(coresys: CoreSys, test_repository):
     """Install local_example add-on."""
     store = coresys.addons.store["local_example"]
     await coresys.addons.data.install(store)
136
tests/docker/test_manager.py
Normal file
136
tests/docker/test_manager.py
Normal file
@@ -0,0 +1,136 @@
"""Test Docker manager."""

from unittest.mock import MagicMock

from docker.errors import DockerException
import pytest
from requests import RequestException

from supervisor.docker.manager import CommandReturn, DockerAPI
from supervisor.exceptions import DockerError


async def test_run_command_success(docker: DockerAPI):
    """Test successful command execution."""
    # Mock container and its methods
    mock_container = MagicMock()
    mock_container.wait.return_value = {"StatusCode": 0}
    mock_container.logs.return_value = b"command output"

    # Mock docker containers.run to return our mock container
    docker.docker.containers.run.return_value = mock_container

    # Execute the command
    result = docker.run_command(
        image="alpine", version="3.18", command="echo hello", stdout=True, stderr=True
    )

    # Verify the result
    assert isinstance(result, CommandReturn)
    assert result.exit_code == 0
    assert result.output == b"command output"

    # Verify docker.containers.run was called correctly
    docker.docker.containers.run.assert_called_once_with(
        "alpine:3.18",
        command="echo hello",
        detach=True,
        network=docker.network.name,
        use_config_proxy=False,
        stdout=True,
        stderr=True,
    )

    # Verify container cleanup
    mock_container.remove.assert_called_once_with(force=True, v=True)


async def test_run_command_with_defaults(docker: DockerAPI):
    """Test command execution with default parameters."""
    # Mock container and its methods
    mock_container = MagicMock()
    mock_container.wait.return_value = {"StatusCode": 1}
    mock_container.logs.return_value = b"error output"

    # Mock docker containers.run to return our mock container
    docker.docker.containers.run.return_value = mock_container

    # Execute the command with minimal parameters
    result = docker.run_command(image="ubuntu")

    # Verify the result
    assert isinstance(result, CommandReturn)
    assert result.exit_code == 1
    assert result.output == b"error output"

    # Verify docker.containers.run was called with defaults
    docker.docker.containers.run.assert_called_once_with(
        "ubuntu:latest",  # default tag
        command=None,  # default command
        detach=True,
        network=docker.network.name,
        use_config_proxy=False,
    )

    # Verify container.logs was called with default stdout/stderr
    mock_container.logs.assert_called_once_with(stdout=True, stderr=True)


async def test_run_command_docker_exception(docker: DockerAPI):
    """Test command execution when Docker raises an exception."""
    # Mock docker containers.run to raise DockerException
    docker.docker.containers.run.side_effect = DockerException("Docker error")

    # Execute the command and expect DockerError
    with pytest.raises(DockerError, match="Can't execute command: Docker error"):
        docker.run_command(image="alpine", command="test")


async def test_run_command_request_exception(docker: DockerAPI):
    """Test command execution when requests raises an exception."""
    # Mock docker containers.run to raise RequestException
    docker.docker.containers.run.side_effect = RequestException("Connection error")

    # Execute the command and expect DockerError
    with pytest.raises(DockerError, match="Can't execute command: Connection error"):
        docker.run_command(image="alpine", command="test")


async def test_run_command_cleanup_on_exception(docker: DockerAPI):
    """Test that container cleanup happens even when an exception occurs."""
    # Mock container
    mock_container = MagicMock()

    # Mock docker.containers.run to return container, but container.wait to raise exception
    docker.docker.containers.run.return_value = mock_container
    mock_container.wait.side_effect = DockerException("Wait failed")

    # Execute the command and expect DockerError
    with pytest.raises(DockerError):
        docker.run_command(image="alpine", command="test")

    # Verify container cleanup still happened
    mock_container.remove.assert_called_once_with(force=True, v=True)


async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
    """Test command execution with custom stdout/stderr settings."""
    # Mock container and its methods
    mock_container = MagicMock()
    mock_container.wait.return_value = {"StatusCode": 0}
    mock_container.logs.return_value = b"output"

    # Mock docker containers.run to return our mock container
    docker.docker.containers.run.return_value = mock_container

    # Execute the command with custom stdout/stderr
    result = docker.run_command(
        image="alpine", command="test", stdout=False, stderr=True
    )

    # Verify container.logs was called with the correct parameters
    mock_container.logs.assert_called_once_with(stdout=False, stderr=True)

    # Verify the result
    assert result.exit_code == 0
    assert result.output == b"output"
@@ -111,3 +111,39 @@ async def test_network_recreation(
    network_params[ATTR_ENABLE_IPV6] = new_enable_ipv6

    mock_create.assert_called_with(**network_params)


async def test_network_default_ipv6_for_new_installations():
    """Test that IPv6 is enabled by default when no user setting is provided (None)."""
    with (
        patch(
            "supervisor.docker.network.DockerNetwork.docker",
            new_callable=PropertyMock,
            return_value=MagicMock(),
            create=True,
        ),
        patch(
            "supervisor.docker.network.DockerNetwork.docker.networks",
            new_callable=PropertyMock,
            return_value=MagicMock(),
            create=True,
        ),
        patch(
            "supervisor.docker.network.DockerNetwork.docker.networks.get",
            side_effect=docker.errors.NotFound("Network not found"),
        ),
        patch(
            "supervisor.docker.network.DockerNetwork.docker.networks.create",
            return_value=MockNetwork(False, None, True),
        ) as mock_create,
    ):
        # Pass None as enable_ipv6 to simulate no user setting
        network = (await DockerNetwork(MagicMock()).post_init(None)).network

        assert network is not None
        assert network.attrs.get(DOCKER_ENABLEIPV6) is True

        # Verify that create was called with IPv6 enabled by default
        expected_params = DOCKER_NETWORK_PARAMS.copy()
        expected_params[ATTR_ENABLE_IPV6] = True
        mock_create.assert_called_with(**expected_params)
@@ -200,7 +200,8 @@ async def test_start(
    coresys.docker.containers.get.return_value.stop.assert_not_called()
    if container_exists:
        coresys.docker.containers.get.return_value.remove.assert_called_once_with(
            force=True
            force=True,
            v=True,
        )
    else:
        coresys.docker.containers.get.return_value.remove.assert_not_called()
@@ -397,7 +398,7 @@ async def test_core_loads_wrong_image_for_machine(

    await coresys.homeassistant.core.load()

    container.remove.assert_called_once_with(force=True)
    container.remove.assert_called_once_with(force=True, v=True)
    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
        "image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
        "force": True,
@@ -444,7 +445,7 @@ async def test_core_loads_wrong_image_for_architecture(

    await coresys.homeassistant.core.load()

    container.remove.assert_called_once_with(force=True)
    container.remove.assert_called_once_with(force=True, v=True)
    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
        "image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
        "force": True,
@@ -2,8 +2,9 @@

# pylint: disable=protected-access
import asyncio
from unittest.mock import AsyncMock, PropertyMock, patch
from unittest.mock import PropertyMock, patch

from dbus_fast import Variant
import pytest

from supervisor.coresys import CoreSys
@@ -87,23 +88,47 @@ async def test_connectivity_events(coresys: CoreSys, force: bool):
    )


async def test_dns_restart_on_connection_change(
    coresys: CoreSys, network_manager_service: NetworkManagerService
async def test_dns_configuration_change_triggers_notify_locals_changed(
    coresys: CoreSys, dns_manager_service
):
    """Test dns plugin is restarted when primary connection changes."""
    """Test that DNS configuration changes trigger notify_locals_changed."""
    await coresys.host.network.load()
    with (
        patch.object(PluginDns, "restart") as restart,
        patch.object(
            PluginDns, "is_running", new_callable=AsyncMock, return_value=True
        ),
    ):
        network_manager_service.emit_properties_changed({"PrimaryConnection": "/"})
        await network_manager_service.ping()
        restart.assert_not_called()

        network_manager_service.emit_properties_changed(
            {"PrimaryConnection": "/org/freedesktop/NetworkManager/ActiveConnection/2"}
    with patch.object(PluginDns, "notify_locals_changed") as notify_locals_changed:
        # Test that non-Configuration changes don't trigger notify_locals_changed
        dns_manager_service.emit_properties_changed({"Mode": "default"})
        await dns_manager_service.ping()
        notify_locals_changed.assert_not_called()

        # Test that Configuration changes trigger notify_locals_changed
        configuration = [
            {
                "nameservers": Variant("as", ["192.168.2.2"]),
                "domains": Variant("as", ["lan"]),
                "interface": Variant("s", "eth0"),
                "priority": Variant("i", 100),
                "vpn": Variant("b", False),
            }
        ]

        dns_manager_service.emit_properties_changed({"Configuration": configuration})
        await dns_manager_service.ping()
        notify_locals_changed.assert_called_once()

        notify_locals_changed.reset_mock()
        # Test that subsequent Configuration changes also trigger notify_locals_changed
        different_configuration = [
            {
                "nameservers": Variant("as", ["8.8.8.8"]),
                "domains": Variant("as", ["example.com"]),
                "interface": Variant("s", "wlan0"),
                "priority": Variant("i", 200),
                "vpn": Variant("b", True),
            }
        ]

        dns_manager_service.emit_properties_changed(
            {"Configuration": different_configuration}
        )
        await network_manager_service.ping()
        restart.assert_called_once()
        await dns_manager_service.ping()
        notify_locals_changed.assert_called_once()
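The rewritten D-Bus test asserts a simple dispatch rule: only `Configuration` property updates reach `notify_locals_changed`, while unrelated updates such as `{"Mode": "default"}` are ignored. That filtering can be sketched generically (an illustrative helper, not the Supervisor API; the handler shape is an assumption):

```python
from typing import Any, Callable


def make_properties_handler(
    on_config_change: Callable[[Any], None],
) -> Callable[[dict], None]:
    """Build a PropertiesChanged handler that reacts only to 'Configuration'."""

    def handler(changed: dict) -> None:
        # Unrelated property updates such as {"Mode": "default"} are ignored
        if "Configuration" not in changed:
            return
        on_config_change(changed["Configuration"])

    return handler
```

Each `Configuration` payload, including repeated ones with different contents, triggers the callback exactly once, which is what the test's `assert_called_once()` / `reset_mock()` sequence verifies.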
@@ -35,6 +35,17 @@ async def fixture_write_json() -> Mock:
    yield write_json_file


@pytest.fixture(name="mock_call_later")
def fixture_mock_call_later(coresys: CoreSys):
    """Mock sys_call_later with zero delay for testing."""

    def mock_call_later(_delay, *args, **kwargs) -> asyncio.TimerHandle:
        """Mock to remove delay."""
        return coresys.call_later(0, *args, **kwargs)

    return mock_call_later


async def test_config_write(
    coresys: CoreSys,
    docker_interface: tuple[AsyncMock, AsyncMock],
@@ -98,6 +109,7 @@ async def test_reset(coresys: CoreSys):
    unlink.assert_called_once()
    write_hosts.assert_called_once()

    # Verify the hosts data structure is properly initialized
    # pylint: disable=protected-access
    assert coresys.plugins.dns._hosts == [
        HostEntry(
@@ -239,3 +251,233 @@ async def test_load_error_writing_resolv(

    assert "Can't write/update /etc/resolv.conf" in caplog.text
    assert coresys.core.healthy is False


async def test_notify_locals_changed_end_to_end_with_changes_and_running(
    coresys: CoreSys, mock_call_later
):
    """Test notify_locals_changed end-to-end: local DNS changes detected and plugin restarted."""
    dns_plugin = coresys.plugins.dns

    # Set cached locals to something different from current network state
    current_locals = dns_plugin._compute_locals()
    dns_plugin._cached_locals = (
        ["dns://192.168.1.1"]
        if current_locals != ["dns://192.168.1.1"]
        else ["dns://192.168.1.2"]
    )

    with (
        patch.object(dns_plugin, "restart") as mock_restart,
        patch.object(dns_plugin.instance, "is_running", return_value=True),
        patch.object(dns_plugin, "sys_call_later", new=mock_call_later),
    ):
        # Call notify_locals_changed
        dns_plugin.notify_locals_changed()

        # Wait for the async task to complete
        await asyncio.sleep(0.1)

        # Verify restart was called and cached locals were updated
        mock_restart.assert_called_once()
        assert dns_plugin._cached_locals == current_locals


async def test_notify_locals_changed_end_to_end_with_changes_but_not_running(
    coresys: CoreSys, mock_call_later
):
    """Test notify_locals_changed end-to-end: local DNS changes detected but plugin not running."""
    dns_plugin = coresys.plugins.dns

    # Set cached locals to something different from current network state
    current_locals = dns_plugin._compute_locals()
    dns_plugin._cached_locals = (
        ["dns://192.168.1.1"]
        if current_locals != ["dns://192.168.1.1"]
        else ["dns://192.168.1.2"]
    )

    with (
        patch.object(dns_plugin, "restart") as mock_restart,
        patch.object(dns_plugin.instance, "is_running", return_value=False),
        patch.object(dns_plugin, "sys_call_later", new=mock_call_later),
    ):
        # Call notify_locals_changed
        dns_plugin.notify_locals_changed()

        # Wait for the async task to complete
        await asyncio.sleep(0.1)

        # Verify restart was NOT called but cached locals were still updated
        mock_restart.assert_not_called()
        assert dns_plugin._cached_locals == current_locals


async def test_notify_locals_changed_end_to_end_no_changes(
    coresys: CoreSys, mock_call_later
):
    """Test notify_locals_changed end-to-end: no local DNS changes detected."""
    dns_plugin = coresys.plugins.dns

    # Set cached locals to match current network state
    current_locals = dns_plugin._compute_locals()
    dns_plugin._cached_locals = current_locals

    with (
        patch.object(dns_plugin, "restart") as mock_restart,
        patch.object(dns_plugin, "sys_call_later", new=mock_call_later),
    ):
        # Call notify_locals_changed
        dns_plugin.notify_locals_changed()

        # Wait for the async task to complete
        await asyncio.sleep(0.1)

        # Verify restart was NOT called since no changes
        mock_restart.assert_not_called()
        assert dns_plugin._cached_locals == current_locals


async def test_notify_locals_changed_debouncing_cancels_previous_timer(
    coresys: CoreSys,
):
    """Test notify_locals_changed debouncing cancels previous timer before creating new one."""
    dns_plugin = coresys.plugins.dns

    # Set cached locals to trigger change detection
    current_locals = dns_plugin._compute_locals()
    dns_plugin._cached_locals = (
        ["dns://192.168.1.1"]
        if current_locals != ["dns://192.168.1.1"]
        else ["dns://192.168.1.2"]
    )

    call_count = 0
    handles = []

    def mock_call_later_with_tracking(_delay, *args, **kwargs) -> asyncio.TimerHandle:
        """Mock to remove delay and track calls."""
        nonlocal call_count
        call_count += 1
        handle = coresys.call_later(0, *args, **kwargs)
        handles.append(handle)
        return handle

    with (
        patch.object(dns_plugin, "restart") as mock_restart,
        patch.object(dns_plugin.instance, "is_running", return_value=True),
        patch.object(dns_plugin, "sys_call_later", new=mock_call_later_with_tracking),
    ):
        # First call sets up timer
        dns_plugin.notify_locals_changed()
        assert call_count == 1
        first_handle = dns_plugin._locals_changed_handle
        assert first_handle is not None

        # Second call should cancel first timer and create new one
        dns_plugin.notify_locals_changed()
        assert call_count == 2
        second_handle = dns_plugin._locals_changed_handle
        assert second_handle is not None
        assert first_handle != second_handle

        # Wait for the async task to complete
        await asyncio.sleep(0.1)

        # Verify restart was called once for the final timer
        mock_restart.assert_called_once()
        assert dns_plugin._cached_locals == current_locals


async def test_stop_cancels_pending_timers_and_tasks(coresys: CoreSys):
    """Test stop cancels pending locals change timers and restart tasks to prevent resource leaks."""
    dns_plugin = coresys.plugins.dns

    mock_timer_handle = Mock()
    mock_task_handle = Mock()
    dns_plugin._locals_changed_handle = mock_timer_handle
    dns_plugin._restart_after_locals_change_handle = mock_task_handle

    with patch.object(dns_plugin.instance, "stop"):
        await dns_plugin.stop()

    # Should cancel pending timer and task, then clean up
    mock_timer_handle.cancel.assert_called_once()
    mock_task_handle.cancel.assert_called_once()
    assert dns_plugin._locals_changed_handle is None
    assert dns_plugin._restart_after_locals_change_handle is None


async def test_dns_restart_triggers_connectivity_check(coresys: CoreSys):
    """Test end-to-end that DNS container restart triggers connectivity check."""
    dns_plugin = coresys.plugins.dns

    # Load the plugin to register the event listener
    with (
        patch.object(type(dns_plugin.instance), "attach"),
        patch.object(type(dns_plugin.instance), "is_running", return_value=True),
    ):
        await dns_plugin.load()

    # Verify listener was registered (connectivity check listener should be stored)
    assert dns_plugin._connectivity_check_listener is not None

    # Create event to signal when connectivity check is called
    connectivity_check_event = asyncio.Event()

    # Mock connectivity check to set the event when called
    async def mock_check_connectivity():
        connectivity_check_event.set()

    with (
        patch.object(
            coresys.supervisor,
            "check_connectivity",
            side_effect=mock_check_connectivity,
        ),
        patch("supervisor.plugins.dns.asyncio.sleep") as mock_sleep,
    ):
        # Fire the DNS container state change event through bus system
        coresys.bus.fire_event(
            BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
            DockerContainerStateEvent(
                name="hassio_dns",
                state=ContainerState.RUNNING,
                id="test_id",
                time=1234567890,
            ),
        )

        # Wait for connectivity check to be called
        await asyncio.wait_for(connectivity_check_event.wait(), timeout=1.0)

        # Verify sleep was called with correct delay
        mock_sleep.assert_called_once_with(5)

        # Reset and test that other containers don't trigger check
        connectivity_check_event.clear()
        mock_sleep.reset_mock()

        # Fire event for different container
        coresys.bus.fire_event(
            BusEvent.DOCKER_CONTAINER_STATE_CHANGE,
            DockerContainerStateEvent(
                name="hassio_homeassistant",
                state=ContainerState.RUNNING,
                id="test_id",
                time=1234567890,
            ),
        )

        # Wait a bit and verify connectivity check was NOT triggered
        try:
            await asyncio.wait_for(connectivity_check_event.wait(), timeout=0.1)
            assert False, (
                "Connectivity check should not have been called for other containers"
            )
        except TimeoutError:
            # This is expected - connectivity check should not be called
            pass

        # Verify sleep was not called for other containers
        mock_sleep.assert_not_called()
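The debouncing behaviour these tests exercise — each `notify_locals_changed` call cancels the pending timer and schedules a fresh one, and `stop` cancels whatever is outstanding — reduces to a small reusable pattern around `loop.call_later`. This is a generic sketch of that pattern, not the `PluginDns` code:

```python
from __future__ import annotations

import asyncio
from typing import Callable


class Debouncer:
    """Coalesce rapid change notifications into one delayed callback.

    Each notify() cancels the pending timer and schedules a new one, so
    only the last notification in a burst actually fires the callback.
    """

    def __init__(
        self,
        loop: asyncio.AbstractEventLoop,
        delay: float,
        callback: Callable[[], None],
    ) -> None:
        self._loop = loop
        self._delay = delay
        self._callback = callback
        self._handle: asyncio.TimerHandle | None = None

    def notify(self) -> None:
        """Register a change; reschedule the pending callback."""
        if self._handle is not None:
            self._handle.cancel()
        self._handle = self._loop.call_later(self._delay, self._fire)

    def _fire(self) -> None:
        self._handle = None
        self._callback()

    def stop(self) -> None:
        """Cancel any pending callback, mirroring the cleanup in stop()."""
        if self._handle is not None:
            self._handle.cancel()
            self._handle = None
```

Three rapid `notify()` calls result in a single callback invocation, which is exactly what `test_notify_locals_changed_debouncing_cancels_previous_timer` asserts via its tracked handles.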
@@ -363,7 +363,7 @@ async def test_load_with_incorrect_image(

    await plugin.load()

    container.remove.assert_called_once_with(force=True)
    container.remove.assert_called_once_with(force=True, v=True)
    assert coresys.docker.images.remove.call_args_list[0].kwargs == {
        "image": f"{old_image}:latest",
        "force": True,
201
tests/resolution/check/test_check_duplicate_os_installation.py
Normal file
@@ -0,0 +1,201 @@
"""Test check for duplicate OS installation."""

from types import SimpleNamespace
from unittest.mock import AsyncMock, patch

import pytest

from supervisor.const import CoreState
from supervisor.coresys import CoreSys
from supervisor.dbus.udisks2.data import DeviceSpecification
from supervisor.resolution.checks.duplicate_os_installation import (
    CheckDuplicateOSInstallation,
)
from supervisor.resolution.const import ContextType, IssueType, UnhealthyReason


async def test_base(coresys: CoreSys):
    """Test check basics."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    assert duplicate_os_installation.slug == "duplicate_os_installation"
    assert duplicate_os_installation.enabled


@pytest.mark.usefixtures("os_available")
async def test_check_no_duplicates(coresys: CoreSys):
    """Test check when no duplicate OS installations exist."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    with patch.object(
        coresys.dbus.udisks2, "resolve_device", return_value=[], new_callable=AsyncMock
    ) as mock_resolve:
        await duplicate_os_installation.run_check()
        assert len(coresys.resolution.issues) == 0
        assert (
            mock_resolve.call_count == 10
        )  # 5 partition labels + 5 partition UUIDs checked


@pytest.mark.usefixtures("os_available")
async def test_check_with_duplicates(coresys: CoreSys):
    """Test check when duplicate OS installations exist."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    mock_devices = [
        SimpleNamespace(device="/dev/mmcblk0p1"),
        SimpleNamespace(device="/dev/nvme0n1p1"),
    ]  # Two devices found

    # Mock resolve_device to return duplicates for first partition, empty for others
    async def mock_resolve_device(spec):
        if spec.partlabel == "hassos-boot":  # First partition in the list
            return mock_devices
        return []

    with patch.object(
        coresys.dbus.udisks2, "resolve_device", side_effect=mock_resolve_device
    ) as mock_resolve:
        await duplicate_os_installation.run_check()

        # Should find issue for first partition with duplicates
        assert len(coresys.resolution.issues) == 1
        assert coresys.resolution.issues[0].type == IssueType.DUPLICATE_OS_INSTALLATION
        assert coresys.resolution.issues[0].context == ContextType.SYSTEM
        assert coresys.resolution.issues[0].reference is None

        # Should mark system as unhealthy
        assert UnhealthyReason.DUPLICATE_OS_INSTALLATION in coresys.resolution.unhealthy

        # Should only check first partition (returns early)
        mock_resolve.assert_called_once_with(
            DeviceSpecification(partlabel="hassos-boot")
        )


@pytest.mark.usefixtures("os_available")
async def test_check_with_mbr_duplicates(coresys: CoreSys):
    """Test check when duplicate MBR OS installations exist."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    mock_devices = [
        SimpleNamespace(device="/dev/mmcblk0p1"),
        SimpleNamespace(device="/dev/nvme0n1p1"),
    ]  # Two devices found

    # Mock resolve_device to return duplicates for first MBR partition UUID, empty for others
    async def mock_resolve_device(spec):
        if spec.partuuid == "48617373-01":  # hassos-boot MBR UUID
            return mock_devices
        return []

    with patch.object(
        coresys.dbus.udisks2, "resolve_device", side_effect=mock_resolve_device
    ) as mock_resolve:
        await duplicate_os_installation.run_check()

        # Should find issue for first MBR partition with duplicates
        assert len(coresys.resolution.issues) == 1
        assert coresys.resolution.issues[0].type == IssueType.DUPLICATE_OS_INSTALLATION
        assert coresys.resolution.issues[0].context == ContextType.SYSTEM
        assert coresys.resolution.issues[0].reference is None

        # Should mark system as unhealthy
        assert UnhealthyReason.DUPLICATE_OS_INSTALLATION in coresys.resolution.unhealthy

        # Should check all partition labels first (5 calls), then MBR UUIDs until duplicate found (1 call)
        assert mock_resolve.call_count == 6


@pytest.mark.usefixtures("os_available")
async def test_check_with_single_device(coresys: CoreSys):
    """Test check when single device found for each partition."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    mock_device = [SimpleNamespace(device="/dev/mmcblk0p1")]

    with patch.object(
        coresys.dbus.udisks2,
        "resolve_device",
        return_value=mock_device,
        new_callable=AsyncMock,
    ) as mock_resolve:
        await duplicate_os_installation.run_check()

        # Should not create any issues
        assert len(coresys.resolution.issues) == 0
        assert (
            mock_resolve.call_count == 10
        )  # All 5 partition labels + 5 partition UUIDs checked


@pytest.mark.usefixtures("os_available")
async def test_approve_with_duplicates(coresys: CoreSys):
    """Test approve when duplicates exist."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)

    # Test the logic directly - since D-Bus mocking has issues, we'll verify the method exists
    # and follows the correct pattern for approve_check without reference
    assert duplicate_os_installation.approve_check.__name__ == "approve_check"
    assert duplicate_os_installation.issue == IssueType.DUPLICATE_OS_INSTALLATION
    assert duplicate_os_installation.context == ContextType.SYSTEM


@pytest.mark.usefixtures("os_available")
async def test_approve_without_duplicates(coresys: CoreSys):
    """Test approve when no duplicates exist."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)

    mock_device = [SimpleNamespace(device="/dev/mmcblk0p1")]

    with patch.object(
        coresys.dbus.udisks2,
        "resolve_device",
        return_value=mock_device,
        new_callable=AsyncMock,
    ):
        result = await duplicate_os_installation.approve_check()
        assert result is False


async def test_did_run(coresys: CoreSys):
    """Test that the check ran as expected."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    should_run = duplicate_os_installation.states
    should_not_run = [state for state in CoreState if state not in should_run]
    assert len(should_run) != 0
    assert len(should_not_run) != 0

    with patch(
        "supervisor.resolution.checks.duplicate_os_installation.CheckDuplicateOSInstallation.run_check",
        return_value=None,
    ) as check:
        for state in should_run:
            await coresys.core.set_state(state)
            await duplicate_os_installation()
            check.assert_called_once()
            check.reset_mock()

        for state in should_not_run:
            await coresys.core.set_state(state)
            await duplicate_os_installation()
            check.assert_not_called()
            check.reset_mock()


async def test_check_no_devices_resolved_on_os_unavailable(coresys: CoreSys):
    """Test check when OS is unavailable."""
    duplicate_os_installation = CheckDuplicateOSInstallation(coresys)
    await coresys.core.set_state(CoreState.SETUP)

    with patch.object(
        coresys.dbus.udisks2, "resolve_device", return_value=[], new_callable=AsyncMock
    ) as mock_resolve:
        await duplicate_os_installation.run_check()
        assert len(coresys.resolution.issues) == 0
        assert (
            mock_resolve.call_count == 0
        )  # No devices resolved since OS is unavailable
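The duplicate-installation tests pin down an early-return scan: the check resolves one partition spec at a time and stops at the first spec that maps to more than one device (hence `assert_called_once_with` in the duplicate case versus 10 calls in the clean case). A generic sketch of that loop follows; "hassos-boot" appears in the tests above, while the remaining labels here are illustrative placeholders for the five partition labels the check walks, not the actual list:

```python
from __future__ import annotations

import asyncio
from typing import Awaitable, Callable

# "hassos-boot" is taken from the tests above; the other entries are
# hypothetical stand-ins for the remaining partition labels.
PARTITION_LABELS = ["hassos-boot", "label-2", "label-3", "label-4", "label-5"]


async def first_duplicate(
    resolve: Callable[[str], Awaitable[list]],
) -> str | None:
    """Return the first partition label resolving to more than one device.

    Mirrors the early-return behaviour the tests assert: once a duplicate
    is found, no further specs are resolved.
    """
    for label in PARTITION_LABELS:
        devices = await resolve(label)
        if len(devices) > 1:
            return label
    return None
```

With a resolver that reports two devices for "hassos-boot", the scan stops after a single call; with single-device results throughout, it checks every label and returns `None`.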
@@ -33,6 +33,7 @@ async def test_base(coresys: CoreSys):
    assert multiple_data_disks.enabled


@pytest.mark.usefixtures("os_available")
async def test_check(coresys: CoreSys, sda1_block_service: BlockService):
    """Test check."""
    multiple_data_disks = CheckMultipleDataDisks(coresys)
@@ -76,7 +76,9 @@ async def test_fixup_stopped_core(

    assert not coresys.resolution.issues
    assert not coresys.resolution.suggestions
    docker.containers.get("addon_local_ssh").remove.assert_called_once_with(force=True)
    docker.containers.get("addon_local_ssh").remove.assert_called_once_with(
        force=True, v=True
    )
    assert "Addon local_ssh is stopped" in caplog.text


@@ -65,7 +65,9 @@ async def test_fixup_stopped_core(

    assert not coresys.resolution.issues
    assert not coresys.resolution.suggestions
    docker.containers.get("homeassistant").remove.assert_called_once_with(force=True)
    docker.containers.get("homeassistant").remove.assert_called_once_with(
        force=True, v=True
    )
    assert "Home Assistant is stopped" in caplog.text

@@ -1,9 +1,14 @@
 """Test evaluation base."""

 # pylint: disable=import-error,protected-access
+import asyncio
 from unittest.mock import AsyncMock, patch

+import pytest
+
+from supervisor.const import BusEvent
 from supervisor.coresys import CoreSys
+from supervisor.exceptions import ResolutionFixupError
 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
 from supervisor.resolution.data import Issue, Suggestion
 from supervisor.resolution.fixups.store_execute_reload import FixupStoreExecuteReload
@@ -32,3 +37,94 @@ async def test_fixup(coresys: CoreSys, supervisor_internet):
     assert mock_repositorie.update.called
     assert len(coresys.resolution.suggestions) == 0
     assert len(coresys.resolution.issues) == 0
+
+
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_store_execute_reload_runs_on_connectivity_true(coresys: CoreSys):
+    """Test fixup runs when connectivity goes from false to true."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.supervisor.connectivity = False
+    await asyncio.sleep(0)
+
+    mock_repository = AsyncMock()
+    coresys.store.repositories["test_store"] = mock_repository
+    coresys.resolution.add_issue(
+        Issue(
+            IssueType.FATAL_ERROR,
+            ContextType.STORE,
+            reference="test_store",
+        ),
+        suggestions=[SuggestionType.EXECUTE_RELOAD],
+    )
+
+    with patch.object(coresys.store, "reload") as mock_reload:
+        # Fire event with connectivity True
+        coresys.supervisor.connectivity = True
+        await asyncio.sleep(0.1)
+
+        mock_repository.load.assert_called_once()
+        mock_reload.assert_awaited_once_with(mock_repository)
+
+
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_store_execute_reload_does_not_run_on_connectivity_false(
+    coresys: CoreSys,
+):
+    """Test fixup does not run when connectivity goes from true to false."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.supervisor.connectivity = True
+    await asyncio.sleep(0)
+
+    mock_repository = AsyncMock()
+    coresys.store.repositories["test_store"] = mock_repository
+    coresys.resolution.add_issue(
+        Issue(
+            IssueType.FATAL_ERROR,
+            ContextType.STORE,
+            reference="test_store",
+        ),
+        suggestions=[SuggestionType.EXECUTE_RELOAD],
+    )
+
+    # Fire event with connectivity False
+    coresys.supervisor.connectivity = False
+    await asyncio.sleep(0.1)
+
+    mock_repository.load.assert_not_called()
+
+
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_store_execute_reload_dismiss_suggestion_removes_listener(
+    coresys: CoreSys,
+):
+    """Test fixup does not run on event if suggestion has been dismissed."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    coresys.supervisor.connectivity = True
+    await asyncio.sleep(0)
+
+    mock_repository = AsyncMock()
+    coresys.store.repositories["test_store"] = mock_repository
+    coresys.resolution.add_issue(
+        issue := Issue(
+            IssueType.FATAL_ERROR,
+            ContextType.STORE,
+            reference="test_store",
+        ),
+        suggestions=[SuggestionType.EXECUTE_RELOAD],
+    )
+
+    with patch.object(
+        FixupStoreExecuteReload, "process_fixup", side_effect=ResolutionFixupError
+    ) as mock_fixup:
+        # Fire event with issue there to trigger fixup
+        coresys.bus.fire_event(BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE, True)
+        await asyncio.sleep(0.1)
+        mock_fixup.assert_called_once()
+
+        # Remove issue and suggestion and re-fire to see listener is gone
+        mock_fixup.reset_mock()
+        coresys.resolution.dismiss_issue(issue)
+
+        coresys.bus.fire_event(BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE, True)
+        await asyncio.sleep(0.1)
+        mock_fixup.assert_not_called()

@@ -10,7 +10,7 @@ from supervisor.resolution.fixups.store_execute_remove import FixupStoreExecuteR
 from supervisor.store.repository import Repository


-async def test_fixup(coresys: CoreSys, repository: Repository):
+async def test_fixup(coresys: CoreSys, test_repository: Repository):
     """Test fixup."""
     store_execute_remove = FixupStoreExecuteRemove(coresys)

@@ -18,16 +18,20 @@ async def test_fixup(coresys: CoreSys, repository: Repository):

     coresys.resolution.add_suggestion(
         Suggestion(
-            SuggestionType.EXECUTE_REMOVE, ContextType.STORE, reference=repository.slug
+            SuggestionType.EXECUTE_REMOVE,
+            ContextType.STORE,
+            reference=test_repository.slug,
         )
     )
     coresys.resolution.add_issue(
         Issue(
-            IssueType.CORRUPT_REPOSITORY, ContextType.STORE, reference=repository.slug
+            IssueType.CORRUPT_REPOSITORY,
+            ContextType.STORE,
+            reference=test_repository.slug,
         )
     )

-    with patch.object(type(repository), "remove") as remove_repo:
+    with patch.object(type(test_repository), "remove") as remove_repo:
         await store_execute_remove()

     assert remove_repo.called
@@ -36,4 +40,4 @@ async def test_fixup(coresys: CoreSys, repository: Repository):
     assert len(coresys.resolution.suggestions) == 0
     assert len(coresys.resolution.issues) == 0

-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.slug not in coresys.store.repositories

@@ -1,43 +1,149 @@
 """Test evaluation base."""

 # pylint: disable=import-error,protected-access
+import errno
 from os import listdir
 from pathlib import Path
-from unittest.mock import AsyncMock, patch
+from unittest.mock import PropertyMock, patch

 import pytest

+from supervisor.config import CoreConfig
 from supervisor.coresys import CoreSys
-from supervisor.resolution.const import ContextType, IssueType, SuggestionType
+from supervisor.exceptions import StoreGitCloneError
+from supervisor.resolution.const import (
+    ContextType,
+    IssueType,
+    SuggestionType,
+    UnhealthyReason,
+)
 from supervisor.resolution.data import Issue, Suggestion
 from supervisor.resolution.fixups.store_execute_reset import FixupStoreExecuteReset
+from supervisor.store.git import GitRepo
+from supervisor.store.repository import Repository


-async def test_fixup(coresys: CoreSys, supervisor_internet, tmp_path):
+@pytest.fixture(name="mock_addons_git", autouse=True)
+async def fixture_mock_addons_git(tmp_supervisor_data: Path) -> None:
+    """Mock addons git path."""
+    with patch.object(
+        CoreConfig,
+        "path_addons_git",
+        new=PropertyMock(return_value=tmp_supervisor_data / "addons" / "git"),
+    ):
+        yield
+
+
+def add_store_reset_suggestion(coresys: CoreSys) -> None:
+    """Add suggestion for tests."""
+    coresys.resolution.add_suggestion(
+        Suggestion(
+            SuggestionType.EXECUTE_RESET, ContextType.STORE, reference="94cfad5a"
+        )
+    )
+    coresys.resolution.add_issue(
+        Issue(IssueType.CORRUPT_REPOSITORY, ContextType.STORE, reference="94cfad5a")
+    )
+
+
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_fixup(coresys: CoreSys):
     """Test fixup."""
     store_execute_reset = FixupStoreExecuteReset(coresys)
-    test_repo = Path(tmp_path, "test_repo")
+    test_repo = coresys.config.path_addons_git / "94cfad5a"

     assert store_execute_reset.auto

-    coresys.resolution.add_suggestion(
-        Suggestion(SuggestionType.EXECUTE_RESET, ContextType.STORE, reference="test")
-    )
-    coresys.resolution.add_issue(
-        Issue(IssueType.CORRUPT_REPOSITORY, ContextType.STORE, reference="test")
-    )
-
-    test_repo.mkdir()
-    (test_repo / ".git").mkdir()
+    add_store_reset_suggestion(coresys)
+    test_repo.mkdir(parents=True)
+    good_marker = test_repo / ".git"
+    (corrupt_marker := (test_repo / "corrupt")).touch()
+    assert test_repo.exists()
+    assert not good_marker.exists()
+    assert corrupt_marker.exists()

-    mock_repositorie = AsyncMock()
-    mock_repositorie.git.path = test_repo
-    coresys.store.repositories["test"] = mock_repositorie
-    assert len(listdir(test_repo)) > 0
+    async def mock_clone(obj: GitRepo, path: Path | None = None):
+        """Mock of clone method."""
+        path = path or obj.path
+        await coresys.run_in_executor((path / ".git").mkdir)

-    with patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))):
+    coresys.store.repositories["94cfad5a"] = Repository.create(
+        coresys, "https://github.com/home-assistant/addons-example"
+    )
+    with (
+        patch.object(GitRepo, "load"),
+        patch.object(GitRepo, "_clone", new=mock_clone),
+        patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))),
+    ):
         await store_execute_reset()

-    assert len(listdir(test_repo)) == 0
-    assert mock_repositorie.load.called
+    assert test_repo.exists()
+    assert good_marker.exists()
+    assert not corrupt_marker.exists()
     assert len(coresys.resolution.suggestions) == 0
     assert len(coresys.resolution.issues) == 0
+    assert len(listdir(coresys.config.path_tmp)) == 0
+
+
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_fixup_clone_fail(coresys: CoreSys):
+    """Test fixup does not delete cache when clone fails."""
+    store_execute_reset = FixupStoreExecuteReset(coresys)
+    test_repo = coresys.config.path_addons_git / "94cfad5a"
+
+    add_store_reset_suggestion(coresys)
+    test_repo.mkdir(parents=True)
+    (corrupt_marker := (test_repo / "corrupt")).touch()
+    assert test_repo.exists()
+    assert corrupt_marker.exists()
+
+    coresys.store.repositories["94cfad5a"] = Repository.create(
+        coresys, "https://github.com/home-assistant/addons-example"
+    )
+    with (
+        patch.object(GitRepo, "load"),
+        patch.object(GitRepo, "_clone", side_effect=StoreGitCloneError),
+        patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))),
+    ):
+        await store_execute_reset()
+
+    assert test_repo.exists()
+    assert corrupt_marker.exists()
+    assert len(coresys.resolution.suggestions) == 1
+    assert len(coresys.resolution.issues) == 1
+    assert len(listdir(coresys.config.path_tmp)) == 0
+
+
+@pytest.mark.parametrize(
+    ("error_num", "unhealthy"), [(errno.EBUSY, False), (errno.EBADMSG, True)]
+)
+@pytest.mark.usefixtures("supervisor_internet")
+async def test_fixup_move_fail(coresys: CoreSys, error_num: int, unhealthy: bool):
+    """Test fixup cleans up clone on move fail.
+
+    This scenario shouldn't really happen unless something is pretty wrong with the system.
+    It will leave the user in a bind without the git cache but at least we try to clean up tmp.
+    """
+    store_execute_reset = FixupStoreExecuteReset(coresys)
+    test_repo = coresys.config.path_addons_git / "94cfad5a"
+
+    add_store_reset_suggestion(coresys)
+    test_repo.mkdir(parents=True)
+    coresys.store.repositories["94cfad5a"] = Repository.create(
+        coresys, "https://github.com/home-assistant/addons-example"
+    )
+    with (
+        patch.object(GitRepo, "load"),
+        patch.object(GitRepo, "_clone"),
+        patch("supervisor.store.git.Path.rename", side_effect=(err := OSError())),
+        patch("shutil.disk_usage", return_value=(42, 42, 2 * (1024.0**3))),
+    ):
+        err.errno = error_num
+        await store_execute_reset()
+
+    assert len(coresys.resolution.suggestions) == 1
+    assert len(coresys.resolution.issues) == 1
+    assert len(listdir(coresys.config.path_tmp)) == 0
+    assert (
+        UnhealthyReason.OSERROR_BAD_MESSAGE in coresys.resolution.unhealthy
+    ) is unhealthy

@@ -3,14 +3,14 @@
 from supervisor.coresys import CoreSys


-def test_local_store(coresys: CoreSys, repository) -> None:
+def test_local_store(coresys: CoreSys, test_repository) -> None:
     """Test loading from local store."""
     assert coresys.store.get("local")

     assert "local_ssh" in coresys.addons.store


-def test_core_store(coresys: CoreSys, repository) -> None:
+def test_core_store(coresys: CoreSys, test_repository) -> None:
     """Test loading from core store."""
     assert coresys.store.get("core")

@@ -15,11 +15,20 @@ from supervisor.exceptions import (
     StoreNotFound,
 )
 from supervisor.resolution.const import SuggestionType
-from supervisor.store import BUILTIN_REPOSITORIES, StoreManager
+from supervisor.store import StoreManager
 from supervisor.store.addon import AddonStore
+from supervisor.store.const import BuiltinRepository
 from supervisor.store.repository import Repository


+def get_repository_by_url(store_manager: StoreManager, url: str) -> Repository:
+    """Test helper to get repository by URL."""
+    for repository in store_manager.all:
+        if repository.source == url:
+            return repository
+    raise StoreNotFound()
+
+
 @pytest.fixture(autouse=True)
 def _auto_supervisor_internet(supervisor_internet):
     # Use the supervisor_internet fixture to ensure that all tests has internet access
@@ -33,7 +42,7 @@ async def test_add_valid_repository(
     """Test add custom repository."""
     current = coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch(
             "supervisor.utils.common.read_yaml_file",
             return_value={"name": "Awesome repository"},
@@ -41,11 +50,13 @@ async def test_add_valid_repository(
         patch("pathlib.Path.exists", return_value=True),
     ):
         if use_update:
-            await store_manager.update_repositories(current + ["http://example.com"])
+            await store_manager.update_repositories(
+                set(current) | {"http://example.com"}
+            )
         else:
             await store_manager.add_repository("http://example.com")

-    assert store_manager.get_from_url("http://example.com").validate()
+    assert get_repository_by_url(store_manager, "http://example.com").validate()

     assert "http://example.com" in coresys.store.repository_urls

@@ -54,19 +65,19 @@ async def test_add_invalid_repository(coresys: CoreSys, store_manager: StoreMana
     """Test add invalid custom repository."""
     current = coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch(
             "pathlib.Path.read_text",
             return_value="",
         ),
     ):
         await store_manager.update_repositories(
-            current + ["http://example.com"], add_with_errors=True
+            set(current) | {"http://example.com"}, issue_on_error=True
         )

-    assert not await coresys.run_in_executor(
-        store_manager.get_from_url("http://example.com").validate
-    )
+    assert not await get_repository_by_url(
+        store_manager, "http://example.com"
+    ).validate()

     assert "http://example.com" in coresys.store.repository_urls
     assert coresys.resolution.suggestions[-1].type == SuggestionType.EXECUTE_REMOVE
@@ -79,7 +90,7 @@ async def test_error_on_invalid_repository(
     """Test invalid repository not added."""
     current = coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch(
             "pathlib.Path.read_text",
             return_value="",
@@ -87,14 +98,14 @@ async def test_error_on_invalid_repository(
         pytest.raises(StoreError),
     ):
         if use_update:
-            await store_manager.update_repositories(current + ["http://example.com"])
+            await store_manager.update_repositories(
+                set(current) | {"http://example.com"}
+            )
         else:
             await store_manager.add_repository("http://example.com")

     assert "http://example.com" not in coresys.store.repository_urls
     assert len(coresys.resolution.suggestions) == 0
     with pytest.raises(StoreNotFound):
         store_manager.get_from_url("http://example.com")


 async def test_add_invalid_repository_file(
@@ -103,7 +114,7 @@ async def test_add_invalid_repository_file(
     """Test add invalid custom repository file."""
     current = coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch(
             "pathlib.Path.read_text",
             return_value=json.dumps({"name": "Awesome repository"}),
@@ -111,10 +122,12 @@ async def test_add_invalid_repository_file(
         patch("pathlib.Path.exists", return_value=False),
     ):
         await store_manager.update_repositories(
-            current + ["http://example.com"], add_with_errors=True
+            set(current) | {"http://example.com"}, issue_on_error=True
         )

-    assert not store_manager.get_from_url("http://example.com").validate()
+    assert not await get_repository_by_url(
+        store_manager, "http://example.com"
+    ).validate()

     assert "http://example.com" in coresys.store.repository_urls
     assert coresys.resolution.suggestions[-1].type == SuggestionType.EXECUTE_REMOVE
@@ -135,14 +148,13 @@ async def test_add_repository_with_git_error(
 ):
     """Test repo added with issue on git error."""
     current = coresys.store.repository_urls
-    with patch("supervisor.store.repository.Repository.load", side_effect=git_error):
+    with patch("supervisor.store.git.GitRepo.load", side_effect=git_error):
         await store_manager.update_repositories(
-            current + ["http://example.com"], add_with_errors=True
+            set(current) | {"http://example.com"}, issue_on_error=True
         )

     assert "http://example.com" in coresys.store.repository_urls
     assert coresys.resolution.suggestions[-1].type == suggestion_type
-    assert isinstance(store_manager.get_from_url("http://example.com"), Repository)


 @pytest.mark.parametrize(
@@ -163,18 +175,18 @@ async def test_error_on_repository_with_git_error(
     """Test repo not added on git error."""
     current = coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.load", side_effect=git_error),
+        patch("supervisor.store.git.GitRepo.load", side_effect=git_error),
         pytest.raises(StoreError),
     ):
         if use_update:
-            await store_manager.update_repositories(current + ["http://example.com"])
+            await store_manager.update_repositories(
+                set(current) | {"http://example.com"}
+            )
         else:
             await store_manager.add_repository("http://example.com")

     assert "http://example.com" not in coresys.store.repository_urls
     assert len(coresys.resolution.suggestions) == 0
     with pytest.raises(StoreNotFound):
         store_manager.get_from_url("http://example.com")


 @pytest.mark.asyncio
@@ -182,8 +194,10 @@ async def test_preinstall_valid_repository(
     coresys: CoreSys, store_manager: StoreManager
 ):
     """Test add core repository valid."""
-    with patch("supervisor.store.repository.Repository.load", return_value=None):
-        await store_manager.update_repositories(BUILTIN_REPOSITORIES)
+    with patch("supervisor.store.git.GitRepo.load", return_value=None):
+        await store_manager.update_repositories(
+            {repo.value for repo in BuiltinRepository}
+        )

     def validate():
         assert store_manager.get("core").validate()
@@ -199,21 +213,21 @@ async def test_preinstall_valid_repository(
 async def test_remove_repository(
     coresys: CoreSys,
     store_manager: StoreManager,
-    repository: Repository,
+    test_repository: Repository,
     use_update: bool,
 ):
     """Test removing a custom repository."""
-    assert repository.source in coresys.store.repository_urls
-    assert repository.slug in coresys.store.repositories
+    assert test_repository.source in coresys.store.repository_urls
+    assert test_repository.slug in coresys.store.repositories

     if use_update:
-        await store_manager.update_repositories([])
+        await store_manager.update_repositories(set())
     else:
-        await store_manager.remove_repository(repository)
+        await store_manager.remove_repository(test_repository)

-    assert repository.source not in coresys.store.repository_urls
-    assert repository.slug not in coresys.addons.store
-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.source not in coresys.store.repository_urls
+    assert test_repository.slug not in coresys.addons.store
+    assert test_repository.slug not in coresys.store.repositories


 @pytest.mark.parametrize("use_update", [True, False])
@@ -235,7 +249,7 @@ async def test_remove_used_repository(
         match="Can't remove 'https://github.com/awesome-developer/awesome-repo'. It's used by installed add-ons",
     ):
         if use_update:
-            await store_manager.update_repositories([])
+            await store_manager.update_repositories(set())
         else:
             await store_manager.remove_repository(
                 coresys.store.repositories[store_addon.repository]
@@ -244,9 +258,9 @@ async def test_remove_used_repository(

 async def test_update_partial_error(coresys: CoreSys, store_manager: StoreManager):
     """Test partial error on update does partial save and errors."""
-    with patch("supervisor.store.repository.Repository.validate", return_value=True):
-        with patch("supervisor.store.repository.Repository.load", return_value=None):
-            await store_manager.update_repositories([])
+    with patch("supervisor.store.repository.RepositoryGit.validate", return_value=True):
+        with patch("supervisor.store.git.GitRepo.load", return_value=None):
+            await store_manager.update_repositories(set())

     store_manager.data.update.assert_called_once()
     store_manager.data.update.reset_mock()
@@ -256,13 +270,13 @@ async def test_update_partial_error(coresys: CoreSys, store_manage

     with (
         patch(
-            "supervisor.store.repository.Repository.load",
+            "supervisor.store.git.GitRepo.load",
             side_effect=[None, StoreGitError()],
         ),
         pytest.raises(StoreError),
     ):
         await store_manager.update_repositories(
-            current + ["http://example.com", "http://example2.com"]
+            set(current) | {"http://example.com", "http://example2.com"}
         )

     assert len(coresys.store.repository_urls) == initial + 1
@@ -270,36 +284,36 @@ async def test_update_partial_error(coresys: CoreSys, store_manage


 async def test_error_adding_duplicate(
-    coresys: CoreSys, store_manager: StoreManager, repository: Repository
+    coresys: CoreSys, store_manager: StoreManager, test_repository: Repository
 ):
     """Test adding a duplicate repository causes an error."""
-    assert repository.source in coresys.store.repository_urls
+    assert test_repository.source in coresys.store.repository_urls
     with (
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         pytest.raises(StoreError),
     ):
-        await store_manager.add_repository(repository.source)
+        await store_manager.add_repository(test_repository.source)


 async def test_add_with_update_repositories(
-    coresys: CoreSys, store_manager: StoreManager, repository: Repository
+    coresys: CoreSys, store_manager: StoreManager, test_repository: Repository
 ):
     """Test adding repositories to existing ones using update."""
-    assert repository.source in coresys.store.repository_urls
+    assert test_repository.source in coresys.store.repository_urls
     assert "http://example.com" not in coresys.store.repository_urls

     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch(
             "supervisor.utils.common.read_yaml_file",
             return_value={"name": "Awesome repository"},
         ),
         patch("pathlib.Path.exists", return_value=True),
     ):
-        await store_manager.update_repositories(["http://example.com"], replace=False)
+        await store_manager.update_repositories({"http://example.com"}, replace=False)

-    assert repository.source in coresys.store.repository_urls
+    assert test_repository.source in coresys.store.repository_urls
     assert "http://example.com" in coresys.store.repository_urls


@@ -316,7 +330,7 @@ async def test_add_repository_fails_if_out_of_date(
 ):
     if use_update:
         await store_manager.update_repositories(
-            coresys.store.repository_urls + ["http://example.com"],
+            set(coresys.store.repository_urls) | {"http://example.com"}
         )
     else:
         await store_manager.add_repository("http://example.com")
@@ -328,7 +342,7 @@ async def test_repositories_loaded_ignore_updates(
 ):
     """Test repositories loaded whether or not supervisor needs an update."""
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.git.GitRepo.load", return_value=None),
         patch.object(
             type(coresys.supervisor),
             "need_update",

@@ -32,7 +32,8 @@ async def test_default_load(coresys: CoreSys):
         refresh_cache_calls.add(obj.slug)

     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryLocal.load", return_value=None),
         patch.object(type(coresys.config), "addons_repositories", return_value=[]),
         patch("pathlib.Path.exists", return_value=True),
         patch.object(AddonStore, "refresh_path_cache", new=mock_refresh_cache),
@@ -80,9 +81,13 @@ async def test_load_with_custom_repository(coresys: CoreSys):
     store_manager = await StoreManager(coresys).load_config()

     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryLocal.load", return_value=None),
         patch.object(type(coresys.config), "addons_repositories", return_value=[]),
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
+        patch(
+            "supervisor.store.repository.RepositoryLocal.validate", return_value=True
+        ),
         patch("pathlib.Path.exists", return_value=True),
         patch.object(AddonStore, "refresh_path_cache", new=mock_refresh_cache),
     ):
@@ -198,7 +203,7 @@ async def test_update_unavailable_addon(
 )
 async def test_install_unavailable_addon(
     coresys: CoreSys,
-    repository: Repository,
+    test_repository: Repository,
     caplog: pytest.LogCaptureFixture,
     config: dict[str, Any],
     log: str,

@@ -50,8 +50,8 @@ async def test_connectivity_check(
     [
         (None, timedelta(minutes=5), True),
         (None, timedelta(minutes=15), False),
-        (ClientError(), timedelta(seconds=20), True),
-        (ClientError(), timedelta(seconds=40), False),
+        (ClientError(), timedelta(seconds=3), True),
+        (ClientError(), timedelta(seconds=10), False),
     ],
 )
 async def test_connectivity_check_throttling(