Mirror of https://github.com/home-assistant/supervisor.git
Synced 2025-08-01 13:27:43 +00:00

Compare commits: 70 commits

27b092aed0, 3af13cb7e2, 6871ea4b81, cf77ab2290, ceeffa3284, 31f2f70cd9,
deac85bddb, 7dcf5ba631, a004830131, a8cc6c416d, 74b26642b0, 5e26ab5f4a,
a841cb8282, 3b1b03c8a7, 680428f304, f34128c37e, 2ed0682b34, fbb0915ef8,
780ae1e15c, c617358855, b679c4f4d8, c946c421f2, aeabf7ea25, 365b838abf,
99c040520e, eefe2f2e06, a366e36b37, 27a2fde9e1, 9a0f530a2f, baf9695cf7,
7873c457d5, cbc48c381f, 11e37011bd, cfda559a90, 806bd9f52c, 953f7d01d7,
381e719a0e, 296071067d, 8336537f51, 5c90a00263, 1f2bf77784, 9aa4f381b8,
ae036ceffe, f0ea0d4a44, abc44946bb, 3e20a0937d, 6cebf52249, bc57deb474,
38750d74a8, d1c1a2d418, cf32f036c0, b8852872fe, 779f47e25d, be8b36b560,
8378d434d4, 0b79e09bc0, d747a59696, 3ee7c082ec, 3f921e50b3, 0370320f75,
1e19e26ef3, e1a18eeba8, b030879efd, dfa1602ac6, bbda943583, aea15b65b7,
5c04249e41, 456cec7ed1, 52a519e55c, fcb20d0ae8
.github/ISSUE_TEMPLATE.md (vendored): 69 deletions

@@ -1,69 +0,0 @@
----
-name: Report a bug with the Supervisor on a supported System
-about: Report an issue related to the Home Assistant Supervisor.
-labels: bug
----
-
-<!-- READ THIS FIRST:
-- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/
-- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
-- Provide as many details as possible. Paste logs, configuration sample and code into the backticks. Do not delete any text from this template!
-- If you have a problem with an add-on, make an issue in it's repository.
--->
-
-<!--
-Important: You can only fill a bug repport for an supported system! If you run an unsupported installation. This report would be closed without comment.
--->
-
-### Describe the issue
-
-<!-- Provide as many details as possible. -->
-
-### Steps to reproduce
-
-<!-- What do you do to encounter the issue. -->
-
-1. ...
-2. ...
-3. ...
-
-### Enviroment details
-
-<!-- You can find these details in the system tab of the supervisor panel, or by using the `ha` CLI. -->
-
-- **Operating System:**: xxx
-- **Supervisor version:**: xxx
-- **Home Assistant version**: xxx
-
-### Supervisor logs
-
-<details>
-<summary>Supervisor logs</summary>
-<!--
-- Frontend -> Supervisor -> System
-- Or use this command: ha supervisor logs
-- Logs are more than just errors, even if you don't think it's important, it is.
--->
-
-```
-Paste supervisor logs here
-
-```
-
-</details>
-
-### System Information
-
-<details>
-<summary>System Information</summary>
-<!--
-- Use this command: ha info
--->
-
-```
-Paste system info here
-
-```
-
-</details>
-
.github/ISSUE_TEMPLATE/bug_report.yml (vendored): 5 changes

@@ -1,6 +1,5 @@
-name: Bug Report Form
+name: Report an issue with Home Assistant Supervisor
 description: Report an issue related to the Home Assistant Supervisor.
-labels: bug
 body:
   - type: markdown
     attributes:
@@ -9,7 +8,7 @@ body:

       If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].

-      [fr]: https://community.home-assistant.io/c/feature-requests
+      [fr]: https://github.com/orgs/home-assistant/discussions
   - type: textarea
     validations:
       required: true
.github/ISSUE_TEMPLATE/config.yml (vendored): 2 changes

@@ -13,7 +13,7 @@ contact_links:
     about: Our documentation has its own issue tracker. Please report issues with the website there.

   - name: Request a feature for the Supervisor
-    url: https://community.home-assistant.io/c/feature-requests
+    url: https://github.com/orgs/home-assistant/discussions
     about: Request an new feature for the Supervisor.

   - name: I have a question or need support
.github/ISSUE_TEMPLATE/task.yml (vendored, new file): 53 additions

@@ -0,0 +1,53 @@
name: Task
description: For staff only - Create a task
type: Task
body:
  - type: markdown
    attributes:
      value: |
        ## ⚠️ RESTRICTED ACCESS

        **This form is restricted to Open Home Foundation staff and authorized contributors only.**

        If you are a community member wanting to contribute, please:
        - For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
        - For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)

        ---

        ### For authorized contributors

        Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        Provide a clear and detailed description of the task that needs to be accomplished.

        Be specific about what needs to be done, why it's important, and any constraints or requirements.
      placeholder: |
        Describe the task, including:
        - What needs to be done
        - Why this task is needed
        - Expected outcome
        - Any constraints or requirements
    validations:
      required: true
  - type: textarea
    id: additional_context
    attributes:
      label: Additional context
      description: |
        Any additional information, links, research, or context that would be helpful.

        Include links to related issues, research, prototypes, roadmap opportunities etc.
      placeholder: |
        - Roadmap opportunity: [link]
        - Epic: [link]
        - Feature request: [link]
        - Technical design documents: [link]
        - Prototype/mockup: [link]
        - Dependencies: [links]
    validations:
      required: false
.github/copilot-instructions.md (vendored, new file): 288 additions

@@ -0,0 +1,288 @@
# GitHub Copilot & Claude Code Instructions

This repository contains the Home Assistant Supervisor, a Python 3 based container
orchestration and management system for Home Assistant.

## Supervisor Capabilities & Features

### Architecture Overview

Home Assistant Supervisor is a Python-based container orchestration system that
communicates with the Docker daemon to manage containerized components. It is tightly
integrated with the underlying Operating System and core Operating System components
through D-Bus.

**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own
  container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own
  container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external
  connectivity
- **Storage & Backup**: Data persistence and backup management across all containers

**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management

### Add-on System

**Add-on Architecture**: Add-ons are containerized applications available through
add-on stores. Each store contains multiple add-ons, and each add-on includes metadata
that tells Supervisor the version, startup configuration (permissions), and available
user configurable options. Add-on metadata typically references a container image that
Supervisor fetches during installation. If not, the Supervisor builds the container
image from a Dockerfile.

**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development

**Store Management**: Stores are Git-based repositories that are periodically updated.
When updates are available, users receive notifications.

**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on
  metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management

### Update System

**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins
receive version information from a central JSON file fetched from
`https://version.home-assistant.io/{channel}.json`. The `Updater` class handles
fetching this data, validating signatures, and updating internal version tracking.

**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version
JSON file is fetched, allowing users to opt into different release streams.

**Add-on Updates**: Add-on version information comes from store repository updates, not
the central JSON file. When repositories are refreshed via the store system, add-ons
compare their local versions against repository versions to determine update
availability.

### Backup & Recovery System

**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons,
  configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant,
  add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations

**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues

---

## Supervisor Development

### Python Requirements

- **Compatibility**: Python 3.13+
- **Language Features**: Use modern Python features:
  - Type hints with `typing` module
  - f-strings (preferred over `%` or `.format()`)
  - Dataclasses and enum classes
  - Async/await patterns
  - Pattern matching where appropriate

### Code Quality Standards

- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation

### Code Organization

**Core Structure**:
```
supervisor/
├── __init__.py       # Package initialization
├── const.py          # Constants and enums
├── coresys.py        # Core system management
├── bootstrap.py      # System initialization
├── exceptions.py     # Custom exception classes
├── api/              # REST API endpoints
├── addons/           # Add-on management
├── backups/          # Backup system
├── docker/           # Docker integration
├── host/             # Host system interface
├── homeassistant/    # Home Assistant Core management
├── dbus/             # D-Bus system integration
├── hardware/         # Hardware detection and management
├── plugins/          # Plugin system
├── resolution/       # Issue detection and resolution
├── security/         # Security management
├── services/         # Service discovery and management
├── store/            # Add-on store management
└── utils/            # Utility functions
```

**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding
values. Define new constants following existing patterns and group related constants
together.

### Supervisor Architecture Patterns

**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor
inherit from `CoreSysAttributes`, providing access to the centralized system state
via `self.coresys` and convenient `sys_*` properties.

```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    def __init__(self, coresys: CoreSys):
        """Initialize manager."""
        self.coresys: CoreSys = coresys
        self._component: MyComponent = MyComponent(coresys)

    @property
    def component(self) -> MyComponent:
        """Return component handler."""
        return self._component

    # Access system components via inherited properties
    async def do_something(self):
        await self.sys_docker.containers.get("my_container")
        self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```

**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface

**Load Pattern**: Many components implement a `load()` method which effectively
initialize the component from external sources (containers, files, D-Bus services).

### API Development

**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or
  `{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`

**Use `@api_process` Decorator**: This decorator handles all standard error handling
and response formatting automatically. The decorator catches `APIError`, `HassioError`,
and other exceptions, returning appropriate HTTP responses.

```python
from ..api.utils import api_process, api_validate

@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
    """Create full backup."""
    body = await api_validate(SCHEMA_BACKUP_FULL, request)
    job = await self.sys_backups.do_backup_full(**body)
    return {ATTR_JOB_ID: job.uuid}
```

### Docker Integration

- **Container Management**: Use Supervisor's Docker manager instead of direct
  Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP
  ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace
  isolation
- **Health Checks**: Implement health monitoring for all managed containers

### D-Bus Integration

- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions

### Async Programming

- **All I/O operations must be async**: File operations, network calls, subprocess
  execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async
  initialization

### Testing

- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code

### Error Handling

- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes

### Security Considerations

- **Container Security**: AppArmor profiles mandatory for add-ons, minimal
  capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive
  input validation

### Development Commands

```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/

# Linting and formatting
ruff check supervisor/
ruff format supervisor/

# Type checking
mypy --ignore-missing-imports supervisor/

# Pre-commit hooks
pre-commit run --all-files
```

Always run the pre-commit hooks at the end of code editing.

### Common Patterns to Follow

**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)

**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it

This guide provides the foundation for contributing to Home Assistant Supervisor.
Follow these patterns and guidelines to ensure code quality, security, and
maintainability.
.github/workflows/builder.yml (vendored): 2 changes

@@ -131,7 +131,7 @@ jobs:

       - name: Install Cosign
         if: needs.init.outputs.publish == 'true'
-        uses: sigstore/cosign-installer@v3.8.2
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
.github/workflows/ci.yaml (vendored): 49 changes

@@ -10,6 +10,7 @@ on:
 env:
   DEFAULT_PYTHON: "3.13"
   PRE_COMMIT_CACHE: ~/.cache/pre-commit
+  MYPY_CACHE_VERSION: 1

 concurrency:
   group: "${{ github.workflow }}-${{ github.ref }}"
@@ -286,6 +287,52 @@ jobs:
           . venv/bin/activate
           pylint supervisor tests

+  mypy:
+    name: Check mypy
+    runs-on: ubuntu-latest
+    needs: prepare
+    steps:
+      - name: Check out code from GitHub
+        uses: actions/checkout@v4.2.2
+      - name: Set up Python ${{ needs.prepare.outputs.python-version }}
+        uses: actions/setup-python@v5.6.0
+        id: python
+        with:
+          python-version: ${{ needs.prepare.outputs.python-version }}
+      - name: Generate partial mypy restore key
+        id: generate-mypy-key
+        run: |
+          mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
+          echo "version=$mypy_version" >> $GITHUB_OUTPUT
+          echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
+      - name: Restore Python virtual environment
+        id: cache-venv
+        uses: actions/cache@v4.2.3
+        with:
+          path: venv
+          key: >-
+            ${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
+      - name: Fail job if Python cache restore failed
+        if: steps.cache-venv.outputs.cache-hit != 'true'
+        run: |
+          echo "Failed to restore Python virtual environment from cache"
+          exit 1
+      - name: Restore mypy cache
+        uses: actions/cache@v4.2.3
+        with:
+          path: .mypy_cache
+          key: >-
+            ${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
+          restore-keys: >-
+            ${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
+      - name: Register mypy problem matcher
+        run: |
+          echo "::add-matcher::.github/workflows/matchers/mypy.json"
+      - name: Run mypy
+        run: |
+          . venv/bin/activate
+          mypy --ignore-missing-imports supervisor
+
   pytest:
     runs-on: ubuntu-latest
     needs: prepare
@@ -299,7 +346,7 @@ jobs:
         with:
           python-version: ${{ needs.prepare.outputs.python-version }}
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.8.2
+        uses: sigstore/cosign-installer@v3.9.2
         with:
           cosign-release: "v2.4.3"
       - name: Restore Python virtual environment
.github/workflows/matchers/mypy.json (vendored, new file): 16 additions

@@ -0,0 +1,16 @@
{
  "problemMatcher": [
    {
      "owner": "mypy",
      "pattern": [
        {
          "regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
          "file": 1,
          "line": 2,
          "severity": 3,
          "message": 4
        }
      ]
    }
  ]
}
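The matcher above teaches GitHub Actions to turn mypy's plain-text output into inline annotations. A quick way to sanity-check the regular expression against representative output; the sample lines below are invented for illustration, not taken from a real CI run:

```python
import re

# Same pattern as in mypy.json, with the JSON escaping removed.
PATTERN = re.compile(r"^(.+):(\d+):\s(error|warning):\s(.+)$")

# Hypothetical mypy output lines, for illustration only.
samples = [
    "supervisor/addons/manager.py:266: error: Missing return statement",
    "supervisor/api/ingress.py:337: warning: Unused 'type: ignore' comment",
]

for line in samples:
    if match := PATTERN.match(line):
        file, lineno, severity, message = match.groups()
        print(f"{file}:{lineno} [{severity}] {message}")
```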
.github/workflows/restrict-task-creation.yml (vendored, new file): 58 additions

@@ -0,0 +1,58 @@
name: Restrict task creation

# yamllint disable-line rule:truthy
on:
  issues:
    types: [opened]

jobs:
  check-authorization:
    runs-on: ubuntu-latest
    # Only run if this is a Task issue type (from the issue form)
    if: github.event.issue.issue_type == 'Task'
    steps:
      - name: Check if user is authorized
        uses: actions/github-script@v7
        with:
          script: |
            const issueAuthor = context.payload.issue.user.login;

            // Check if user is an organization member
            try {
              await github.rest.orgs.checkMembershipForUser({
                org: 'home-assistant',
                username: issueAuthor
              });
              console.log(`✅ ${issueAuthor} is an organization member`);
              return; // Authorized
            } catch (error) {
              console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
            }

            // Close the issue with a comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
                    `Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
                    `If you would like to:\n` +
                    `- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
                    `- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
                    `If you believe you should have access to create Task issues, please contact the maintainers.`
            });

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              state: 'closed'
            });

            // Add a label to indicate this was auto-closed
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['auto-closed']
            });
.github/workflows/sentry.yaml (vendored): 2 changes

@@ -12,7 +12,7 @@ jobs:
       - name: Check out code from GitHub
         uses: actions/checkout@v4.2.2
       - name: Sentry Release
-        uses: getsentry/action-release@v3.1.1
+        uses: getsentry/action-release@v3.2.0
         env:
           SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
           SENTRY_ORG: ${{ secrets.SENTRY_ORG }}
@@ -13,3 +13,15 @@ repos:
       - id: check-executables-have-shebangs
         stages: [manual]
       - id: check-json
+  - repo: local
+    hooks:
+      # Run mypy through our wrapper script in order to get the possible
+      # pyenv and/or virtualenv activated; it may not have been e.g. if
+      # committing from a GUI tool that was not launched from an activated
+      # shell.
+      - id: mypy
+        name: mypy
+        entry: script/run-in-env.sh mypy --ignore-missing-imports
+        language: script
+        types_or: [python, pyi]
+        files: ^supervisor/.+\.(py|pyi)$
@@ -1,30 +1,30 @@
 aiodns==3.5.0
-aiohttp==3.12.13
+aiohttp==3.12.15
 atomicwrites-homeassistant==1.4.1
 attrs==25.3.0
 awesomeversion==25.5.0
-blockbuster==1.5.24
+blockbuster==1.5.25
 brotli==1.1.0
 ciso8601==2.3.2
 colorlog==6.9.0
 cpe==1.3.1
-cryptography==45.0.4
+cryptography==45.0.5
-debugpy==1.8.14
+debugpy==1.8.15
 deepmerge==2.0
 dirhash==0.5.0
 docker==7.1.0
 faust-cchardet==2.1.19
-gitpython==3.1.44
+gitpython==3.1.45
 jinja2==3.1.6
 log-rate-limit==1.4.2
-orjson==3.10.18
+orjson==3.11.1
 pulsectl==24.12.0
 pyudev==0.24.3
 PyYAML==6.0.2
 requests==2.32.4
 securetar==2025.2.1
-sentry-sdk==2.30.0
+sentry-sdk==2.34.1
 setuptools==80.9.0
 voluptuous==0.15.2
-dbus-fast==2.44.1
+dbus-fast==2.44.2
 zlib-fast==0.2.1
@@ -1,12 +1,16 @@
-astroid==3.3.10
+astroid==3.3.11
-coverage==7.9.1
+coverage==7.10.1
+mypy==1.17.0
 pre-commit==4.2.0
 pylint==3.3.7
 pytest-aiohttp==1.1.0
 pytest-asyncio==0.25.2
 pytest-cov==6.2.1
 pytest-timeout==2.4.0
-pytest==8.4.0
+pytest==8.4.1
-ruff==0.11.13
+ruff==0.12.7
 time-machine==2.16.0
-urllib3==2.4.0
+types-docker==7.1.0.20250705
+types-pyyaml==6.0.12.20250516
+types-requests==2.32.4.20250611
+urllib3==2.5.0
script/run-in-env.sh (new executable file): 30 additions

@@ -0,0 +1,30 @@
#!/usr/bin/env sh
set -eu

# Used in venv activate script.
# Would be an error if undefined.
OSTYPE="${OSTYPE-}"

# Activate pyenv and virtualenv if present, then run the specified command

# pyenv, pyenv-virtualenv
if [ -s .python-version ]; then
    PYENV_VERSION=$(head -n 1 .python-version)
    export PYENV_VERSION
fi

if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
    . "${VIRTUAL_ENV}/bin/activate"
else
    # other common virtualenvs
    my_path=$(git rev-parse --show-toplevel)

    for venv in venv .venv .; do
        if [ -f "${my_path}/${venv}/bin/activate" ]; then
            . "${my_path}/${venv}/bin/activate"
            break
        fi
    done
fi

exec "$@"
@@ -360,7 +360,7 @@ class Addon(AddonModel):
     @property
     def auto_update(self) -> bool:
         """Return if auto update is enable."""
-        return self.persist.get(ATTR_AUTO_UPDATE, super().auto_update)
+        return self.persist.get(ATTR_AUTO_UPDATE, False)

     @auto_update.setter
     def auto_update(self, value: bool) -> None:
@@ -15,6 +15,7 @@ from ..const import (
     ATTR_SQUASH,
     FILE_SUFFIX_CONFIGURATION,
     META_ADDON,
+    SOCKET_DOCKER,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..docker.interface import MAP_ARCH
@@ -121,39 +122,64 @@ class AddonBuild(FileConfiguration, CoreSysAttributes):
         except HassioArchNotFound:
             return False

-    def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
-        """Create a dict with Docker build arguments.
-
-        Must be run in executor.
-        """
-        args: dict[str, Any] = {
-            "path": str(self.addon.path_location),
-            "tag": f"{image or self.addon.image}:{version!s}",
-            "dockerfile": str(self.get_dockerfile()),
-            "pull": True,
-            "forcerm": not self.sys_dev,
-            "squash": self.squash,
-            "platform": MAP_ARCH[self.arch],
-            "labels": {
-                "io.hass.version": version,
-                "io.hass.arch": self.arch,
-                "io.hass.type": META_ADDON,
-                "io.hass.name": self._fix_label("name"),
-                "io.hass.description": self._fix_label("description"),
-                **self.additional_labels,
-            },
-            "buildargs": {
-                "BUILD_FROM": self.base_image,
-                "BUILD_VERSION": version,
-                "BUILD_ARCH": self.sys_arch.default,
-                **self.additional_args,
-            },
-        }
-
-        if self.addon.url:
-            args["labels"]["io.hass.url"] = self.addon.url
-
-        return args
+    def get_docker_args(
+        self, version: AwesomeVersion, image_tag: str
+    ) -> dict[str, Any]:
+        """Create a dict with Docker run args."""
+        dockerfile_path = self.get_dockerfile().relative_to(self.addon.path_location)
+
+        build_cmd = [
+            "docker",
+            "buildx",
+            "build",
+            ".",
+            "--tag",
+            image_tag,
+            "--file",
+            str(dockerfile_path),
+            "--platform",
+            MAP_ARCH[self.arch],
+            "--pull",
+        ]
+
+        labels = {
+            "io.hass.version": version,
+            "io.hass.arch": self.arch,
+            "io.hass.type": META_ADDON,
+            "io.hass.name": self._fix_label("name"),
+            "io.hass.description": self._fix_label("description"),
+            **self.additional_labels,
+        }
+
+        if self.addon.url:
+            labels["io.hass.url"] = self.addon.url
+
+        for key, value in labels.items():
+            build_cmd.extend(["--label", f"{key}={value}"])
+
+        build_args = {
+            "BUILD_FROM": self.base_image,
+            "BUILD_VERSION": version,
+            "BUILD_ARCH": self.sys_arch.default,
+            **self.additional_args,
+        }
+
+        for key, value in build_args.items():
+            build_cmd.extend(["--build-arg", f"{key}={value}"])
+
+        # The addon path will be mounted from the host system
+        addon_extern_path = self.sys_config.local_to_extern_path(
+            self.addon.path_location
+        )
+
+        return {
+            "command": build_cmd,
+            "volumes": {
+                SOCKET_DOCKER: {"bind": "/var/run/docker.sock", "mode": "rw"},
+                addon_extern_path: {"bind": "/addon", "mode": "ro"},
+            },
+            "working_dir": "/addon",
+        }

     def _fix_label(self, label_name: str) -> str:
         """Remove characters they are not supported."""
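The rewrite above swaps docker-py's `images.build()` keyword arguments for a `docker buildx build` command line that is meant to run inside a helper container, with the host Docker socket mounted read-write and the add-on source mounted read-only at `/addon`. Below is a rough sketch of how such a dict could be consumed with plain docker-py; the builder image and all paths are placeholders, and the real Supervisor plumbing goes through `self.sys_docker` rather than a fresh client:

```python
import docker

client = docker.from_env()

# Stand-in for AddonBuild.get_docker_args() output; values are invented.
run_args = {
    "command": ["docker", "buildx", "build", ".", "--tag", "local/my-addon:1.0.0"],
    "volumes": {
        "/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "/mnt/data/addons/local/my-addon": {"bind": "/addon", "mode": "ro"},
    },
    "working_dir": "/addon",
}

# The build talks to the host daemon through the mounted socket, so the
# resulting image lands in the host's image store, not inside the helper.
logs = client.containers.run("docker:cli", **run_args, remove=True)
print(logs.decode())
```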
@@ -266,7 +266,7 @@ class AddonManager(CoreSysAttributes):
         ],
         on_condition=AddonsJobError,
     )
-    async def rebuild(self, slug: str) -> asyncio.Task | None:
+    async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
         """Perform a rebuild of local build add-on.

         Returns a Task that completes when addon has state 'started' (see addon.start)
@@ -289,7 +289,7 @@ class AddonManager(CoreSysAttributes):
             raise AddonsError(
                 "Version changed, use Update instead Rebuild", _LOGGER.error
             )
-        if not addon.need_build:
+        if not force and not addon.need_build:
             raise AddonsNotSupportedError(
                 "Can't rebuild a image based add-on", _LOGGER.error
             )
@@ -664,12 +664,16 @@ class AddonModel(JobGroup, ABC):
         """Validate if addon is available for current system."""
         return self._validate_availability(self.data, logger=_LOGGER.error)

-    def __eq__(self, other):
-        """Compaired add-on objects."""
+    def __eq__(self, other: Any) -> bool:
+        """Compare add-on objects."""
         if not isinstance(other, AddonModel):
             return False
         return self.slug == other.slug

+    def __hash__(self) -> int:
+        """Hash for add-on objects."""
+        return hash(self.slug)
+
     def _validate_availability(
         self, config, *, logger: Callable[..., None] | None = None
     ) -> None:
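The new `__hash__` is not cosmetic: Python sets `__hash__ = None` on any class that defines `__eq__` without also defining `__hash__`, which makes instances unusable in sets and as dict keys. A self-contained illustration of the difference; the demo classes here are invented:

```python
class WithoutHash:
    def __init__(self, slug: str) -> None:
        self.slug = slug

    def __eq__(self, other: object) -> bool:
        return isinstance(other, WithoutHash) and self.slug == other.slug


class WithHash(WithoutHash):
    def __hash__(self) -> int:
        return hash(self.slug)


try:
    {WithoutHash("ssh")}  # defining __eq__ alone disables hashing
except TypeError as err:
    print(err)  # unhashable type: 'WithoutHash'

# With __hash__ derived from the same key __eq__ compares, sets work:
assert {WithHash("ssh"), WithHash("ssh")} == {WithHash("ssh")}
```

Hashing on `slug`, the same key `__eq__` compares, preserves the invariant that equal objects hash equally.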
@@ -36,6 +36,7 @@ from ..const import (
     ATTR_DNS,
     ATTR_DOCKER_API,
     ATTR_DOCUMENTATION,
+    ATTR_FORCE,
     ATTR_FULL_ACCESS,
     ATTR_GPIO,
     ATTR_HASSIO_API,
@@ -139,6 +140,8 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
 SCHEMA_UNINSTALL = vol.Schema(
     {vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
 )
+
+SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
 # pylint: enable=no-value-for-parameter
@@ -461,7 +464,11 @@ class APIAddons(CoreSysAttributes):
     async def rebuild(self, request: web.Request) -> None:
         """Rebuild local build add-on."""
         addon = self.get_addon_for_request(request)
-        if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
+        body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
+
+        if start_task := await asyncio.shield(
+            self.sys_addons.rebuild(addon.slug, force=body[ATTR_FORCE])
+        ):
             await start_task

     @api_process
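Taken together, these changes thread an optional `force` flag from `POST /addons/{slug}/rebuild` through `api_validate(SCHEMA_REBUILD, ...)` down to `AddonManager.rebuild()`, which then skips the `need_build` guard. A hedged client-side sketch of exercising the flag over the Supervisor API; the base URL, token handling, and slug are assumptions, not part of the diff:

```python
import asyncio
import os

import aiohttp


async def force_rebuild(slug: str) -> None:
    """Ask the Supervisor to rebuild an add-on even if need_build is False."""
    token = os.environ["SUPERVISOR_TOKEN"]  # provided inside add-on containers
    async with aiohttp.ClientSession(
        headers={"Authorization": f"Bearer {token}"}
    ) as session:
        async with session.post(
            f"http://supervisor/addons/{slug}/rebuild",
            json={"force": True},  # validated by SCHEMA_REBUILD, defaults to False
        ) as resp:
            print(resp.status, await resp.json())


asyncio.run(force_rebuild("local_example"))  # slug is illustrative
```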
@@ -3,11 +3,13 @@
 import asyncio
 from collections.abc import Awaitable
 import logging
-from typing import Any
+from typing import Any, cast

 from aiohttp import BasicAuth, web
 from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
+from aiohttp.web import FileField
 from aiohttp.web_exceptions import HTTPUnauthorized
+from multidict import MultiDictProxy
 import voluptuous as vol

 from ..addons.addon import Addon
@@ -51,7 +53,10 @@ class APIAuth(CoreSysAttributes):
         return self.sys_auth.check_login(addon, auth.login, auth.password)

     def _process_dict(
-        self, request: web.Request, addon: Addon, data: dict[str, str]
+        self,
+        request: web.Request,
+        addon: Addon,
+        data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
     ) -> Awaitable[bool]:
         """Process login with dict data.
@@ -60,7 +65,15 @@ class APIAuth(CoreSysAttributes):
         username = data.get("username") or data.get("user")
         password = data.get("password")

-        return self.sys_auth.check_login(addon, username, password)
+        # Test that we did receive strings and not something else, raise if so
+        try:
+            _ = username.encode and password.encode  # type: ignore
+        except AttributeError:
+            raise HTTPUnauthorized(headers=REALM_HEADER) from None
+
+        return self.sys_auth.check_login(
+            addon, cast(str, username), cast(str, password)
+        )

     @api_process
     async def auth(self, request: web.Request) -> bool:
@@ -79,13 +92,18 @@ class APIAuth(CoreSysAttributes):
         # Json
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
             data = await request.json(loads=json_loads)
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

         # URL encoded
         if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
             data = await request.post()
-            return await self._process_dict(request, addon, data)
+            if not await self._process_dict(request, addon, data):
+                raise HTTPUnauthorized()
+            return True

+        # Advertise Basic authentication by default
         raise HTTPUnauthorized(headers=REALM_HEADER)

     @api_process
@@ -87,4 +87,4 @@ class DetectBlockingIO(StrEnum):

     OFF = "off"
     ON = "on"
-    ON_AT_STARTUP = "on_at_startup"
+    ON_AT_STARTUP = "on-at-startup"
@@ -6,6 +6,8 @@ from typing import Any
 from aiohttp import web
 import voluptuous as vol

+from supervisor.resolution.const import ContextType, IssueType, SuggestionType
+
 from ..const import (
     ATTR_ENABLE_IPV6,
     ATTR_HOSTNAME,
@@ -32,7 +34,7 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
 )

 # pylint: disable=no-value-for-parameter
-SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})
+SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean())})


 class APIDocker(CoreSysAttributes):
@@ -59,8 +61,17 @@ class APIDocker(CoreSysAttributes):
         """Set docker options."""
         body = await api_validate(SCHEMA_OPTIONS, request)

-        if ATTR_ENABLE_IPV6 in body:
+        if (
+            ATTR_ENABLE_IPV6 in body
+            and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
+        ):
             self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
+            _LOGGER.info("Host system reboot required to apply new IPv6 configuration")
+            self.sys_resolution.create_issue(
+                IssueType.REBOOT_REQUIRED,
+                ContextType.SYSTEM,
+                suggestions=[SuggestionType.EXECUTE_REBOOT],
+            )

         await self.sys_docker.config.save_data()
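Two separate behaviors change here: `vol.Maybe(vol.Boolean())` lets clients send `null` for `enable_ipv6`, and the reboot-required issue is only raised when the setting actually changes. A small voluptuous sketch of what `Maybe` accepts and rejects:

```python
import voluptuous as vol

schema = vol.Schema({vol.Optional("enable_ipv6"): vol.Maybe(vol.Boolean())})

print(schema({}))                     # {} - key stays optional
print(schema({"enable_ipv6": True}))  # {'enable_ipv6': True}
print(schema({"enable_ipv6": None}))  # {'enable_ipv6': None} - now allowed

try:
    schema({"enable_ipv6": "maybe"})  # still rejected
except vol.Invalid as err:
    print(err)
```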
@@ -309,9 +309,9 @@ class APIIngress(CoreSysAttributes):

 def _init_header(
     request: web.Request, addon: Addon, session_data: IngressSessionData | None
-) -> CIMultiDict | dict[str, str]:
+) -> CIMultiDict[str]:
     """Create initial header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     if session_data is not None:
         headers[HEADER_REMOTE_USER_ID] = session_data.user.id
@@ -337,7 +337,7 @@ def _init_header(
             istr(HEADER_REMOTE_USER_DISPLAY_NAME),
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     # Update X-Forwarded-For
     if request.transport:
@@ -348,9 +348,9 @@ def _init_header(
     return headers


-def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
+def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
     """Create response header."""
-    headers = {}
+    headers = CIMultiDict[str]()

     for name, value in response.headers.items():
         if name in (
@@ -360,7 +360,7 @@ def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
             hdrs.CONTENT_ENCODING,
         ):
             continue
-        headers[name] = value
+        headers.add(name, value)

     return headers
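Switching `_init_header()` and `_response_header()` from plain dicts to `CIMultiDict` with `.add()` matters for proxying: HTTP allows a header name to repeat (most famously `Set-Cookie`), dict assignment silently keeps only the last value, and multidict lookups are also case-insensitive. A minimal demonstration with invented header values:

```python
from multidict import CIMultiDict

plain: dict[str, str] = {}
multi: CIMultiDict[str] = CIMultiDict()

for name, value in [("Set-Cookie", "a=1"), ("Set-Cookie", "b=2")]:
    plain[name] = value     # overwrites: only b=2 survives
    multi.add(name, value)  # keeps both values

print(plain)                       # {'Set-Cookie': 'b=2'}
print(multi.getall("Set-Cookie"))  # ['a=1', 'b=2']
print("set-cookie" in multi)       # True - case-insensitive lookup
```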
@@ -40,7 +40,7 @@ class CpuArch(CoreSysAttributes):
     @property
     def supervisor(self) -> str:
         """Return supervisor arch."""
-        return self.sys_supervisor.arch
+        return self.sys_supervisor.arch or self._default_arch

     @property
     def supported(self) -> list[str]:
@@ -91,4 +91,14 @@ class CpuArch(CoreSysAttributes):
         for check, value in MAP_CPU.items():
             if cpu.startswith(check):
                 return value
-        return self.sys_supervisor.arch
+        if self.sys_supervisor.arch:
+            _LOGGER.warning(
+                "Unknown CPU architecture %s, falling back to Supervisor architecture.",
+                cpu,
+            )
+            return self.sys_supervisor.arch
+        _LOGGER.warning(
+            "Unknown CPU architecture %s, assuming CPU architecture equals Supervisor architecture.",
+            cpu,
+        )
+        return cpu
@@ -3,10 +3,10 @@
 import asyncio
 import hashlib
 import logging
-from typing import Any
+from typing import Any, TypedDict, cast

 from .addons.addon import Addon
-from .const import ATTR_ADDON, ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
+from .const import ATTR_PASSWORD, ATTR_TYPE, ATTR_USERNAME, FILE_HASSIO_AUTH
 from .coresys import CoreSys, CoreSysAttributes
 from .exceptions import (
     AuthError,
@@ -21,6 +21,17 @@ from .validate import SCHEMA_AUTH_CONFIG
 _LOGGER: logging.Logger = logging.getLogger(__name__)


+class BackendAuthRequest(TypedDict):
+    """Model for a backend auth request.
+
+    https://github.com/home-assistant/core/blob/ed9503324d9d255e6fb077f1614fb6d55800f389/homeassistant/components/hassio/auth.py#L66-L73
+    """
+
+    username: str
+    password: str
+    addon: str
+
+
 class Auth(FileConfiguration, CoreSysAttributes):
     """Manage SSO for Add-ons with Home Assistant user."""

@@ -74,6 +85,9 @@ class Auth(FileConfiguration, CoreSysAttributes):
         """Check username login."""
         if password is None:
             raise AuthError("None as password is not supported!", _LOGGER.error)
+        if username is None:
+            raise AuthError("None as username is not supported!", _LOGGER.error)

         _LOGGER.info("Auth request from '%s' for '%s'", addon.slug, username)

         # Get from cache
@@ -103,11 +117,12 @@ class Auth(FileConfiguration, CoreSysAttributes):
         async with self.sys_homeassistant.api.make_request(
             "post",
             "api/hassio_auth",
-            json={
-                ATTR_USERNAME: username,
-                ATTR_PASSWORD: password,
-                ATTR_ADDON: addon.slug,
-            },
+            json=cast(
+                dict[str, Any],
+                BackendAuthRequest(
+                    username=username, password=password, addon=addon.slug
+                ),
+            ),
         ) as req:
             if req.status == 200:
                 _LOGGER.info("Successful login for '%s'", username)
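`BackendAuthRequest` documents the payload shape Home Assistant Core expects on `/api/hassio_auth`, but a `TypedDict` is an ordinary dict at runtime, so the `cast()` only satisfies the type checker where `make_request()` wants `dict[str, Any]`. A standalone sketch of that pattern; the helper function and sample values are invented:

```python
from typing import Any, TypedDict, cast


class BackendAuthRequest(TypedDict):
    """Payload shape for Home Assistant's /api/hassio_auth endpoint."""

    username: str
    password: str
    addon: str


def build_payload(username: str, password: str, slug: str) -> dict[str, Any]:
    # No conversion happens here: TypedDict construction yields a plain
    # dict, and cast() is purely a static-typing annotation.
    return cast(
        dict[str, Any],
        BackendAuthRequest(username=username, password=password, addon=slug),
    )


payload = build_payload("demo", "secret", "local_example")
assert payload == {"username": "demo", "password": "secret", "addon": "local_example"}
```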
@ -63,6 +63,8 @@ from .const import BUF_SIZE, LOCATION_CLOUD_BACKUP, BackupType
|
|||||||
from .utils import password_to_key
|
from .utils import password_to_key
|
||||||
from .validate import SCHEMA_BACKUP
|
from .validate import SCHEMA_BACKUP
|
||||||
|
|
||||||
|
IGNORED_COMPARISON_FIELDS = {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}
|
||||||
|
|
||||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
@ -265,7 +267,7 @@ class Backup(JobGroup):
|
|||||||
|
|
||||||
# Compare all fields except ones about protection. Current encryption status does not affect equality
|
# Compare all fields except ones about protection. Current encryption status does not affect equality
|
||||||
keys = self._data.keys() | other._data.keys()
|
keys = self._data.keys() | other._data.keys()
|
||||||
for k in keys - {ATTR_PROTECTED, ATTR_CRYPTO, ATTR_DOCKER}:
|
for k in keys - IGNORED_COMPARISON_FIELDS:
|
||||||
if (
|
if (
|
||||||
k not in self._data
|
k not in self._data
|
||||||
or k not in other._data
|
or k not in other._data
|
||||||
@@ -577,13 +579,21 @@ class Backup(JobGroup):
     @Job(name="backup_addon_save", cleanup=False)
     async def _addon_save(self, addon: Addon) -> asyncio.Task | None:
         """Store an add-on into backup."""
-        self.sys_jobs.current.reference = addon.slug
+        self.sys_jobs.current.reference = slug = addon.slug
         if not self._outer_secure_tarfile:
             raise RuntimeError(
                 "Cannot backup components without initializing backup tar"
             )
 
-        tar_name = f"{addon.slug}.tar{'.gz' if self.compressed else ''}"
+        # Ensure it is still installed and get current data before proceeding
+        if not (curr_addon := self.sys_addons.get_local_only(slug)):
+            _LOGGER.warning(
+                "Skipping backup of add-on %s because it has been uninstalled",
+                slug,
+            )
+            return None
+
+        tar_name = f"{slug}.tar{'.gz' if self.compressed else ''}"
 
         addon_file = self._outer_secure_tarfile.create_inner_tar(
             f"./{tar_name}",

@@ -592,16 +602,16 @@ class Backup(JobGroup):
         )
         # Take backup
         try:
-            start_task = await addon.backup(addon_file)
+            start_task = await curr_addon.backup(addon_file)
         except AddonsError as err:
             raise BackupError(str(err)) from err
 
         # Store to config
         self._data[ATTR_ADDONS].append(
             {
-                ATTR_SLUG: addon.slug,
-                ATTR_NAME: addon.name,
-                ATTR_VERSION: addon.version,
+                ATTR_SLUG: slug,
+                ATTR_NAME: curr_addon.name,
+                ATTR_VERSION: curr_addon.version,
                 # Bug - addon_file.size used to give us this information
                 # It always returns 0 in current securetar. Skipping until fixed
                 ATTR_SIZE: 0,
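The two hunks above stop trusting the `Addon` object passed into the job and re-resolve it by slug immediately before the backup runs, bailing out cleanly if the add-on was uninstalled in the meantime. A condensed sketch of that guard, with a hypothetical registry standing in for `sys_addons`:

```python
# Illustrative only: "registry" stands in for the installed-add-on lookup.
registry: dict[str, object] = {}

def backup_addon(slug: str) -> object | None:
    # Re-fetch by slug right before use; a stale reference could point at
    # an add-on that no longer exists.
    if not (addon := registry.get(slug)):
        print(f"Skipping backup of add-on {slug}: it has been uninstalled")
        return None
    # ... proceed with the freshly looked-up object
    return addon

registry["core_ssh"] = object()
assert backup_addon("core_ssh") is not None
assert backup_addon("gone_addon") is None
```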
@@ -921,5 +931,5 @@ class Backup(JobGroup):
         Return a coroutine.
         """
         return self.sys_store.update_repositories(
-            self.repositories, add_with_errors=True, replace=replace
+            set(self.repositories), issue_on_error=True, replace=replace
         )

@@ -285,7 +285,7 @@ def check_environment() -> None:
         _LOGGER.critical("Can't find Docker socket!")
 
 
-def register_signal_handlers(loop: asyncio.BaseEventLoop, coresys: CoreSys) -> None:
+def register_signal_handlers(loop: asyncio.AbstractEventLoop, coresys: CoreSys) -> None:
     """Register SIGTERM, SIGHUP and SIGKILL to stop the Supervisor."""
     try:
         loop.add_signal_handler(

@@ -2,7 +2,7 @@
 
 from __future__ import annotations
 
-from collections.abc import Awaitable, Callable
+from collections.abc import Callable, Coroutine
 import logging
 from typing import Any
 

@@ -19,7 +19,7 @@ class EventListener:
     """Event listener."""
 
     event_type: BusEvent = attr.ib()
-    callback: Callable[[Any], Awaitable[None]] = attr.ib()
+    callback: Callable[[Any], Coroutine[Any, Any, None]] = attr.ib()
 
 
 class Bus(CoreSysAttributes):

@@ -31,7 +31,7 @@ class Bus(CoreSysAttributes):
         self._listeners: dict[BusEvent, list[EventListener]] = {}
 
     def register_event(
-        self, event: BusEvent, callback: Callable[[Any], Awaitable[None]]
+        self, event: BusEvent, callback: Callable[[Any], Coroutine[Any, Any, None]]
     ) -> EventListener:
         """Register callback for an event."""
         listener = EventListener(event, callback)
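The narrowing from `Awaitable[None]` to `Coroutine[Any, Any, None]` in the bus hunks matters because task-creation APIs such as `loop.create_task()` accept coroutines specifically, not arbitrary awaitables; presumably the bus hands listener callbacks to exactly such an API. A small self-contained sketch (names are illustrative):

```python
import asyncio
from collections.abc import Callable, Coroutine
from typing import Any

def fire(callback: Callable[[Any], Coroutine[Any, Any, None]], event: Any) -> None:
    # A callback typed as returning a Coroutine can be handed straight to
    # create_task(); a bare Awaitable (e.g. a Future) could not.
    asyncio.get_running_loop().create_task(callback(event))

async def on_event(event: Any) -> None:
    print("got", event)

async def main() -> None:
    fire(on_event, {"state": "running"})
    await asyncio.sleep(0)  # let the task run

asyncio.run(main())
```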
@@ -66,7 +66,7 @@ _UTC = "UTC"
 class CoreConfig(FileConfiguration):
     """Hold all core config data."""
 
-    def __init__(self):
+    def __init__(self) -> None:
         """Initialize config object."""
         super().__init__(FILE_HASSIO_CONFIG, SCHEMA_SUPERVISOR_CONFIG)
         self._timezone_tzinfo: tzinfo | None = None

@@ -5,7 +5,7 @@ from enum import StrEnum
 from ipaddress import IPv4Network, IPv6Network
 from pathlib import Path
 from sys import version_info as systemversion
-from typing import Self
+from typing import NotRequired, Self, TypedDict
 
 from aiohttp import __version__ as aiohttpversion
 

@@ -188,6 +188,7 @@ ATTR_FEATURES = "features"
 ATTR_FILENAME = "filename"
 ATTR_FLAGS = "flags"
 ATTR_FOLDERS = "folders"
+ATTR_FORCE = "force"
 ATTR_FORCE_SECURITY = "force_security"
 ATTR_FREQUENCY = "frequency"
 ATTR_FULL_ACCESS = "full_access"

@@ -415,10 +416,12 @@ class AddonBoot(StrEnum):
     MANUAL = "manual"
 
     @classmethod
-    def _missing_(cls, value: str) -> Self | None:
+    def _missing_(cls, value: object) -> Self | None:
         """Convert 'forced' config values to their counterpart."""
         if value == AddonBootConfig.MANUAL_ONLY:
-            return AddonBoot.MANUAL
+            for member in cls:
+                if member == AddonBoot.MANUAL:
+                    return member
         return None
 
 
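`_missing_` is Python's enum hook for lookup values that match no member; returning a member instead of `None` lets a config-only value like `manual_only` collapse onto `AddonBoot.MANUAL` instead of raising `ValueError`. A self-contained sketch of the same pattern under assumed names:

```python
from enum import StrEnum
from typing import Self

class Boot(StrEnum):
    AUTO = "auto"
    MANUAL = "manual"

    @classmethod
    def _missing_(cls, value: object) -> Self | None:
        # Map the config-only alias onto an existing member. Iterating cls
        # yields real members, which is what the enum machinery expects
        # _missing_ to hand back.
        if value == "manual_only":
            for member in cls:
                if member == cls.MANUAL:
                    return member
        return None

assert Boot("manual_only") is Boot.MANUAL
```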
@@ -515,6 +518,16 @@ class CpuArch(StrEnum):
     AMD64 = "amd64"
 
 
+class IngressSessionDataUserDict(TypedDict):
+    """Response object for ingress session user."""
+
+    id: str
+    username: NotRequired[str | None]
+    # Name is an alias for displayname, only one should be used
+    displayname: NotRequired[str | None]
+    name: NotRequired[str | None]
+
+
 @dataclass
 class IngressSessionDataUser:
     """Format of an IngressSessionDataUser object."""

@@ -523,38 +536,42 @@ class IngressSessionDataUser:
     display_name: str | None = None
     username: str | None = None
 
-    def to_dict(self) -> dict[str, str | None]:
+    def to_dict(self) -> IngressSessionDataUserDict:
         """Get dictionary representation."""
-        return {
-            ATTR_ID: self.id,
-            ATTR_DISPLAYNAME: self.display_name,
-            ATTR_USERNAME: self.username,
-        }
+        return IngressSessionDataUserDict(
+            id=self.id, displayname=self.display_name, username=self.username
+        )
 
     @classmethod
-    def from_dict(cls, data: dict[str, str | None]) -> Self:
+    def from_dict(cls, data: IngressSessionDataUserDict) -> Self:
         """Return object from dictionary representation."""
         return cls(
-            id=data[ATTR_ID],
-            display_name=data.get(ATTR_DISPLAYNAME),
-            username=data.get(ATTR_USERNAME),
+            id=data["id"],
+            display_name=data.get("displayname") or data.get("name"),
+            username=data.get("username"),
         )
 
 
+class IngressSessionDataDict(TypedDict):
+    """Response object for ingress session data."""
+
+    user: IngressSessionDataUserDict
+
+
 @dataclass
 class IngressSessionData:
     """Format of an IngressSessionData object."""
 
     user: IngressSessionDataUser
 
-    def to_dict(self) -> dict[str, dict[str, str | None]]:
+    def to_dict(self) -> IngressSessionDataDict:
         """Get dictionary representation."""
-        return {ATTR_USER: self.user.to_dict()}
+        return IngressSessionDataDict(user=self.user.to_dict())
 
     @classmethod
-    def from_dict(cls, data: dict[str, dict[str, str | None]]) -> Self:
+    def from_dict(cls, data: IngressSessionDataDict) -> Self:
         """Return object from dictionary representation."""
-        return cls(user=IngressSessionDataUser.from_dict(data[ATTR_USER]))
+        return cls(user=IngressSessionDataUser.from_dict(data["user"]))
 
 
 STARTING_STATES = [
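The change above swaps loosely typed `dict[str, str | None]` payloads for `TypedDict` shapes, so type checkers can see exactly which keys exist and which are optional. A minimal round-trip sketch of the pattern (an illustrative subset, not the full Supervisor classes):

```python
from dataclasses import dataclass
from typing import NotRequired, Self, TypedDict

class UserDict(TypedDict):
    id: str
    displayname: NotRequired[str | None]
    name: NotRequired[str | None]

@dataclass
class User:
    id: str
    display_name: str | None = None

    def to_dict(self) -> UserDict:
        return UserDict(id=self.id, displayname=self.display_name)

    @classmethod
    def from_dict(cls, data: UserDict) -> Self:
        # "name" is an alias for "displayname"; prefer whichever is set.
        return cls(
            id=data["id"],
            display_name=data.get("displayname") or data.get("name"),
        )

user = User.from_dict({"id": "abc", "name": "Paulus"})
assert user.to_dict() == {"id": "abc", "displayname": "Paulus"}
```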
@@ -28,7 +28,7 @@ from .homeassistant.core import LANDINGPAGE
 from .resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from .utils.dt import utcnow
 from .utils.sentry import async_capture_exception
-from .utils.whoami import WhoamiData, retrieve_whoami
+from .utils.whoami import retrieve_whoami
 
 _LOGGER: logging.Logger = logging.getLogger(__name__)
 

@@ -36,7 +36,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class Core(CoreSysAttributes):
     """Main object of Supervisor."""
 
-    def __init__(self, coresys: CoreSys):
+    def __init__(self, coresys: CoreSys) -> None:
         """Initialize Supervisor object."""
         self.coresys: CoreSys = coresys
         self._state: CoreState = CoreState.INITIALIZE

@@ -91,7 +91,7 @@ class Core(CoreSysAttributes):
                 "info", {"state": self._state}
             )
 
-    async def connect(self):
+    async def connect(self) -> None:
         """Connect Supervisor container."""
         # Load information from container
         await self.sys_supervisor.load()

@@ -120,7 +120,7 @@ class Core(CoreSysAttributes):
             self.sys_config.version = self.sys_supervisor.version
             await self.sys_config.save_data()
 
-    async def setup(self):
+    async def setup(self) -> None:
         """Start setting up supervisor orchestration."""
         await self.set_state(CoreState.SETUP)
 

@@ -216,7 +216,7 @@ class Core(CoreSysAttributes):
         # Evaluate the system
         await self.sys_resolution.evaluate.evaluate_system()
 
-    async def start(self):
+    async def start(self) -> None:
         """Start Supervisor orchestration."""
         await self.set_state(CoreState.STARTUP)
 

@@ -310,7 +310,7 @@ class Core(CoreSysAttributes):
         )
         _LOGGER.info("Supervisor is up and running")
 
-    async def stop(self):
+    async def stop(self) -> None:
         """Stop a running orchestration."""
         # store new last boot / prevent time adjustments
         if self.state in (CoreState.RUNNING, CoreState.SHUTDOWN):

@@ -358,7 +358,7 @@ class Core(CoreSysAttributes):
         _LOGGER.info("Supervisor is down - %d", self.exit_code)
         self.sys_loop.stop()
 
-    async def shutdown(self, *, remove_homeassistant_container: bool = False):
+    async def shutdown(self, *, remove_homeassistant_container: bool = False) -> None:
         """Shutdown all running containers in correct order."""
         # don't process scheduler anymore
         if self.state == CoreState.RUNNING:

@@ -382,19 +382,15 @@ class Core(CoreSysAttributes):
         if self.state in (CoreState.STOPPING, CoreState.SHUTDOWN):
             await self.sys_plugins.shutdown()
 
-    async def _update_last_boot(self):
+    async def _update_last_boot(self) -> None:
         """Update last boot time."""
-        self.sys_config.last_boot = await self.sys_hardware.helper.last_boot()
+        if not (last_boot := await self.sys_hardware.helper.last_boot()):
+            _LOGGER.error("Could not update last boot information!")
+            return
+        self.sys_config.last_boot = last_boot
         await self.sys_config.save_data()
 
-    async def _retrieve_whoami(self, with_ssl: bool) -> WhoamiData | None:
-        try:
-            return await retrieve_whoami(self.sys_websession, with_ssl)
-        except WhoamiSSLError:
-            _LOGGER.info("Whoami service SSL error")
-            return None
-
-    async def _adjust_system_datetime(self):
+    async def _adjust_system_datetime(self) -> None:
         """Adjust system time/date on startup."""
         # If no timezone is detect or set
         # If we are not connected or time sync

@@ -406,11 +402,13 @@ class Core(CoreSysAttributes):
 
         # Get Timezone data
         try:
-            data = await self._retrieve_whoami(True)
+            try:
+                data = await retrieve_whoami(self.sys_websession, True)
+            except WhoamiSSLError:
+                # SSL Date Issue & possible time drift
+                _LOGGER.info("Whoami service SSL error")
+                data = await retrieve_whoami(self.sys_websession, False)
 
-            # SSL Date Issue & possible time drift
-            if not data:
-                data = await self._retrieve_whoami(False)
         except WhoamiError as err:
             _LOGGER.warning("Can't adjust Time/Date settings: %s", err)
             return
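The refactor above drops the `_retrieve_whoami` wrapper and its `None` sentinel: the SSL exception itself now triggers the one-time plaintext retry. A condensed sketch of that fallback shape, with stand-in names for the whoami call:

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)

class SSLDateError(Exception):
    """Stand-in for WhoamiSSLError."""

async def fetch_time(verify_ssl: bool) -> str:
    # Stand-in for retrieve_whoami(); a badly drifted host clock makes
    # certificate validation fail, which surfaces as SSLDateError.
    if verify_ssl:
        raise SSLDateError("certificate not yet valid")
    return "2025-01-01T00:00:00Z"

async def adjust_clock() -> str:
    try:
        return await fetch_time(True)
    except SSLDateError:
        # The failed TLS handshake is itself the signal of clock drift,
        # so fall back to the non-TLS endpoint exactly once.
        _LOGGER.info("Whoami service SSL error")
        return await fetch_time(False)

assert asyncio.run(adjust_clock()) == "2025-01-01T00:00:00Z"
```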
@@ -426,7 +424,7 @@ class Core(CoreSysAttributes):
         await self.sys_host.control.set_datetime(data.dt_utc)
         await self.sys_supervisor.check_connectivity()
 
-    async def repair(self):
+    async def repair(self) -> None:
         """Repair system integrity."""
         _LOGGER.info("Starting repair of Supervisor Environment")
         await self.sys_run_in_executor(self.sys_docker.repair)

@@ -62,17 +62,17 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class CoreSys:
     """Class that handle all shared data."""
 
-    def __init__(self):
+    def __init__(self) -> None:
         """Initialize coresys."""
         # Static attributes protected
         self._machine_id: str | None = None
         self._machine: str | None = None
 
         # External objects
-        self._loop: asyncio.BaseEventLoop = asyncio.get_running_loop()
+        self._loop = asyncio.get_running_loop()
 
         # Global objects
-        self._config: CoreConfig = CoreConfig()
+        self._config = CoreConfig()
 
         # Internal objects pointers
         self._docker: DockerAPI | None = None
@@ -122,8 +122,12 @@ class CoreSys:
         if self._websession:
             await self._websession.close()
 
+        resolver: aiohttp.abc.AbstractResolver
         try:
-            resolver = aiohttp.AsyncResolver(loop=self.loop)
+            # Use "unused" kwargs to force dedicated resolver instance. Otherwise
+            # aiodns won't reload /etc/resolv.conf which we need to make our connection
+            # check work in all cases.
+            resolver = aiohttp.AsyncResolver(loop=self.loop, timeout=None)
            # pylint: disable=protected-access
             _LOGGER.debug(
                 "Initializing ClientSession with AsyncResolver. Using nameservers %s",
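Per the comment in the hunk itself, passing an otherwise-unused kwarg forces a brand-new resolver instance so that current `/etc/resolv.conf` contents get picked up rather than a previously cached nameserver list. A sketch of wiring such a dedicated resolver into a session (assumes aiohttp with the aiodns extra installed; `timeout=None` is aiodns's default value, used here only to force the fresh instance):

```python
import aiohttp

async def make_session() -> aiohttp.ClientSession:
    # A dedicated AsyncResolver re-reads resolv.conf at construction time.
    resolver = aiohttp.AsyncResolver(timeout=None)
    connector = aiohttp.TCPConnector(resolver=resolver)
    return aiohttp.ClientSession(connector=connector)
```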
@@ -144,7 +148,7 @@ class CoreSys:
 
         self._websession = session
 
-    async def init_machine(self):
+    async def init_machine(self) -> None:
         """Initialize machine information."""
 
         def _load_machine_id() -> str | None:

@@ -188,7 +192,7 @@ class CoreSys:
         return UTC
 
     @property
-    def loop(self) -> asyncio.BaseEventLoop:
+    def loop(self) -> asyncio.AbstractEventLoop:
         """Return loop object."""
         return self._loop
 

@@ -586,7 +590,7 @@ class CoreSys:
         return self._machine_id
 
     @machine_id.setter
-    def machine_id(self, value: str) -> None:
+    def machine_id(self, value: str | None) -> None:
         """Set a machine-id type string."""
         if self._machine_id:
             raise RuntimeError("Machine-ID type already set!")

@@ -608,8 +612,8 @@ class CoreSys:
         self._set_task_context.append(callback)
 
     def run_in_executor(
-        self, funct: Callable[..., T], *args: tuple[Any], **kwargs: dict[str, Any]
-    ) -> Coroutine[Any, Any, T]:
+        self, funct: Callable[..., T], *args, **kwargs
+    ) -> asyncio.Future[T]:
         """Add an job to the executor pool."""
         if kwargs:
             funct = partial(funct, **kwargs)
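The new annotation matches what the event loop actually returns: `loop.run_in_executor()` yields an `asyncio.Future`, not a coroutine. And since executors only accept positional arguments, keyword arguments are folded in with `functools.partial` first. A runnable sketch of the same wrapper (illustrative, outside any class):

```python
import asyncio
from collections.abc import Callable
from functools import partial
from typing import TypeVar

T = TypeVar("T")

def run_in_executor(
    loop: asyncio.AbstractEventLoop, funct: Callable[..., T], *args, **kwargs
) -> asyncio.Future[T]:
    # Executors only take positional args; bind kwargs up front.
    if kwargs:
        funct = partial(funct, **kwargs)
    return loop.run_in_executor(None, funct, *args)

async def main() -> None:
    result = await run_in_executor(asyncio.get_running_loop(), divmod, 7, 3)
    assert result == (2, 1)

asyncio.run(main())
```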
@@ -631,8 +635,8 @@ class CoreSys:
         self,
         delay: float,
         funct: Callable[..., Any],
-        *args: tuple[Any],
-        **kwargs: dict[str, Any],
+        *args,
+        **kwargs,
     ) -> asyncio.TimerHandle:
         """Start a task after a delay."""
         if kwargs:

@@ -644,8 +648,8 @@ class CoreSys:
         self,
         when: datetime,
         funct: Callable[..., Any],
-        *args: tuple[Any],
-        **kwargs: dict[str, Any],
+        *args,
+        **kwargs,
     ) -> asyncio.TimerHandle:
         """Start a task at the specified datetime."""
         if kwargs:

@@ -682,7 +686,7 @@ class CoreSysAttributes:
         return self.coresys.dev
 
     @property
-    def sys_loop(self) -> asyncio.BaseEventLoop:
+    def sys_loop(self) -> asyncio.AbstractEventLoop:
         """Return loop object."""
         return self.coresys.loop
 

@@ -832,7 +836,7 @@ class CoreSysAttributes:
 
     def sys_run_in_executor(
         self, funct: Callable[..., T], *args, **kwargs
-    ) -> Coroutine[Any, Any, T]:
+    ) -> asyncio.Future[T]:
         """Add a job to the executor pool."""
         return self.coresys.run_in_executor(funct, *args, **kwargs)
 

@@ -117,7 +117,7 @@ class DBusInterfaceProxy(DBusInterface, ABC):
         """Initialize object with already connected dbus object."""
         await super().initialize(connected_dbus)
 
-        if not self.connected_dbus.properties:
+        if not self.connected_dbus.supports_properties:
             self.disconnect()
             raise DBusInterfaceError(
                 f"D-Bus object {self.object_path} is not usable, introspection is missing required properties interface"

@@ -259,7 +259,7 @@ class NetworkManager(DBusInterfaceProxy):
             else:
                 interface.primary = False
 
-            interfaces[interface.name] = interface
+            interfaces[interface.interface_name] = interface
             interfaces[interface.hw_address] = interface
 
         # Disconnect removed devices

@@ -49,7 +49,7 @@ class NetworkInterface(DBusInterfaceProxy):
 
     @property
     @dbus_property
-    def name(self) -> str:
+    def interface_name(self) -> str:
         """Return interface name."""
         return self.properties[DBUS_ATTR_DEVICE_INTERFACE]
 

@@ -28,6 +28,8 @@ class DeviceSpecificationDataType(TypedDict, total=False):
     path: str
     label: str
     uuid: str
+    partuuid: str
+    partlabel: str
 
 
 @dataclass(slots=True)

@@ -40,6 +42,8 @@ class DeviceSpecification:
     path: Path | None = None
     label: str | None = None
     uuid: str | None = None
+    partuuid: str | None = None
+    partlabel: str | None = None
 
     @staticmethod
     def from_dict(data: DeviceSpecificationDataType) -> "DeviceSpecification":

@@ -48,6 +52,8 @@ class DeviceSpecification:
             path=Path(data["path"]) if "path" in data else None,
             label=data.get("label"),
             uuid=data.get("uuid"),
+            partuuid=data.get("partuuid"),
+            partlabel=data.get("partlabel"),
         )
 
     def to_dict(self) -> dict[str, Variant]:

@@ -56,6 +62,8 @@ class DeviceSpecification:
             "path": Variant("s", self.path.as_posix()) if self.path else None,
             "label": _optional_variant("s", self.label),
             "uuid": _optional_variant("s", self.uuid),
+            "partuuid": _optional_variant("s", self.partuuid),
+            "partlabel": _optional_variant("s", self.partlabel),
         }
         return {k: v for k, v in data.items() if v}
 
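The `to_dict` pattern in the last hunk builds the full mapping first and then filters out unset entries, so only keys that were actually provided cross the D-Bus boundary. A sketch of that filtering step with a stand-in `Variant` (the real one lives in the dbus library the Supervisor uses):

```python
from dataclasses import dataclass

@dataclass
class Variant:
    # Stand-in for the D-Bus Variant type (signature + value).
    signature: str
    value: object

def _optional_variant(signature: str, value: str | None) -> Variant | None:
    return Variant(signature, value) if value is not None else None

spec = {
    "label": _optional_variant("s", "hassos-data"),
    "partuuid": _optional_variant("s", None),
}
# Unset keys disappear before the dict is handed to D-Bus.
assert {k: v for k, v in spec.items() if v} == {"label": Variant("s", "hassos-data")}
```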
@@ -12,6 +12,7 @@ from typing import TYPE_CHECKING, cast
 from attr import evolve
 from awesomeversion import AwesomeVersion
 import docker
+import docker.errors
 from docker.types import Mount
 import requests
 

@@ -43,6 +44,7 @@ from ..jobs.decorator import Job
 from ..resolution.const import CGROUP_V2_VERSION, ContextType, IssueType, SuggestionType
 from ..utils.sentry import async_capture_exception
 from .const import (
+    ADDON_BUILDER_IMAGE,
     ENV_TIME,
     ENV_TOKEN,
     ENV_TOKEN_OLD,

@@ -344,7 +346,7 @@ class DockerAddon(DockerInterface):
         mounts = [
             MOUNT_DEV,
             Mount(
-                type=MountType.BIND,
+                type=MountType.BIND.value,
                 source=self.addon.path_extern_data.as_posix(),
                 target=target_data_path or PATH_PRIVATE_DATA.as_posix(),
                 read_only=False,
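This and the following hunks all make the same one-line change: hand the docker SDK's `Mount` the enum's `.value` (a plain `str`) rather than the `StrEnum` member itself, presumably so the SDK and strict type checks see an ordinary string. A tiny sketch of the distinction:

```python
from enum import StrEnum

class MountType(StrEnum):
    BIND = "bind"

# A StrEnum member already compares equal to its value and is a str subclass...
assert MountType.BIND == "bind"
assert isinstance(MountType.BIND, str)
# ...but it is not *exactly* str, which `type(...) is str` checks and typed
# SDK signatures can reject; .value is the plain string.
assert type(MountType.BIND) is not str
assert type(MountType.BIND.value) is str
```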
@@ -355,7 +357,7 @@ class DockerAddon(DockerInterface):
         if MappingType.CONFIG in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target=addon_mapping[MappingType.CONFIG].path
                     or PATH_HOMEASSISTANT_CONFIG_LEGACY.as_posix(),

@@ -368,7 +370,7 @@ class DockerAddon(DockerInterface):
         if self.addon.addon_config_used:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.addon.path_extern_config.as_posix(),
                     target=addon_mapping[MappingType.ADDON_CONFIG].path
                     or PATH_PUBLIC_CONFIG.as_posix(),

@@ -380,7 +382,7 @@ class DockerAddon(DockerInterface):
         if MappingType.HOMEASSISTANT_CONFIG in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target=addon_mapping[MappingType.HOMEASSISTANT_CONFIG].path
                     or PATH_HOMEASSISTANT_CONFIG.as_posix(),

@@ -393,7 +395,7 @@ class DockerAddon(DockerInterface):
         if MappingType.ALL_ADDON_CONFIGS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_addon_configs.as_posix(),
                     target=addon_mapping[MappingType.ALL_ADDON_CONFIGS].path
                     or PATH_ALL_ADDON_CONFIGS.as_posix(),

@@ -404,7 +406,7 @@ class DockerAddon(DockerInterface):
         if MappingType.SSL in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target=addon_mapping[MappingType.SSL].path or PATH_SSL.as_posix(),
                     read_only=addon_mapping[MappingType.SSL].read_only,

@@ -414,7 +416,7 @@ class DockerAddon(DockerInterface):
         if MappingType.ADDONS in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_addons_local.as_posix(),
                     target=addon_mapping[MappingType.ADDONS].path
                     or PATH_LOCAL_ADDONS.as_posix(),

@@ -425,7 +427,7 @@ class DockerAddon(DockerInterface):
         if MappingType.BACKUP in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_backup.as_posix(),
                     target=addon_mapping[MappingType.BACKUP].path
                     or PATH_BACKUP.as_posix(),

@@ -436,7 +438,7 @@ class DockerAddon(DockerInterface):
         if MappingType.SHARE in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target=addon_mapping[MappingType.SHARE].path
                     or PATH_SHARE.as_posix(),

@@ -448,7 +450,7 @@ class DockerAddon(DockerInterface):
         if MappingType.MEDIA in addon_mapping:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_media.as_posix(),
                     target=addon_mapping[MappingType.MEDIA].path
                     or PATH_MEDIA.as_posix(),

@@ -466,7 +468,7 @@ class DockerAddon(DockerInterface):
                     continue
                 mounts.append(
                     Mount(
-                        type=MountType.BIND,
+                        type=MountType.BIND.value,
                         source=gpio_path,
                         target=gpio_path,
                         read_only=False,

@@ -477,7 +479,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_devicetree:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source="/sys/firmware/devicetree/base",
                     target="/device-tree",
                     read_only=True,

@@ -492,7 +494,7 @@ class DockerAddon(DockerInterface):
         if self.addon.with_kernel_modules:
             mounts.append(
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source="/lib/modules",
                     target="/lib/modules",
                     read_only=True,

@@ -511,19 +513,19 @@ class DockerAddon(DockerInterface):
         if self.addon.with_audio:
             mounts += [
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.addon.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,

@@ -534,13 +536,13 @@ class DockerAddon(DockerInterface):
         if self.addon.with_journald:
             mounts += [
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     target=SYSTEMD_JOURNAL_PERSISTENT.as_posix(),
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     target=SYSTEMD_JOURNAL_VOLATILE.as_posix(),
                     read_only=True,

@@ -673,10 +675,41 @@ class DockerAddon(DockerInterface):
         _LOGGER.info("Starting build for %s:%s", self.image, version)
 
         def build_image():
-            return self.sys_docker.images.build(
-                use_config_proxy=False, **build_env.get_docker_args(version, image)
+            if build_env.squash:
+                _LOGGER.warning(
+                    "Ignoring squash build option for %s as Docker BuildKit does not support it.",
+                    self.addon.slug,
+                )
+
+            addon_image_tag = f"{image or self.addon.image}:{version!s}"
+
+            docker_version = self.sys_docker.info.version
+            builder_version_tag = f"{docker_version.major}.{docker_version.minor}.{docker_version.micro}-cli"
+
+            builder_name = f"addon_builder_{self.addon.slug}"
+
+            # Remove dangling builder container if it exists by any chance
+            # E.g. because of an abrupt host shutdown/reboot during a build
+            with suppress(docker.errors.NotFound):
+                self.sys_docker.containers.get(builder_name).remove(force=True, v=True)
+
+            result = self.sys_docker.run_command(
+                ADDON_BUILDER_IMAGE,
+                version=builder_version_tag,
+                name=builder_name,
+                **build_env.get_docker_args(version, addon_image_tag),
             )
 
+            logs = result.output.decode("utf-8")
+
+            if result.exit_code != 0:
+                error_message = f"Docker build failed for {addon_image_tag} (exit code {result.exit_code}). Build output:\n{logs}"
+                raise docker.errors.DockerException(error_message)
+
+            addon_image = self.sys_docker.images.get(addon_image_tag)
+
+            return addon_image, logs
+
         try:
             docker_image, log = await self.sys_run_in_executor(build_image)
 
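The last hunk replaces the docker-py `images.build()` call with running the official docker CLI image as a throwaway "builder" container against the host's Docker socket, so the build goes through BuildKit. A rough sketch of that pattern using the docker SDK directly (image tag, paths, and names are illustrative, not the Supervisor's exact arguments):

```python
import docker

# Run `docker build` inside the docker:cli helper image, with the host
# socket and the build context bind-mounted in. detach=False + remove=True
# returns the build output and cleans the container up afterwards.
client = docker.from_env()
output = client.containers.run(
    "docker.io/library/docker:27.0.3-cli",  # hypothetical version tag
    command=["docker", "build", "-t", "local/my-addon:1.0", "/data/addon"],
    volumes={
        "/run/docker.sock": {"bind": "/run/docker.sock", "mode": "rw"},
        "/path/to/addon": {"bind": "/data/addon", "mode": "ro"},
    },
    remove=True,
)
print(output.decode("utf-8"))
```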
@@ -687,15 +720,6 @@ class DockerAddon(DockerInterface):
 
         except (docker.errors.DockerException, requests.RequestException) as err:
             _LOGGER.error("Can't build %s:%s: %s", self.image, version, err)
-            if hasattr(err, "build_log"):
-                log = "\n".join(
-                    [
-                        x["stream"]
-                        for x in err.build_log  # pylint: disable=no-member
-                        if isinstance(x, dict) and "stream" in x
-                    ]
-                )
-                _LOGGER.error("Build log: \n%s", log)
             raise DockerError() from err
 
         _LOGGER.info("Build %s:%s done", self.image, version)

@@ -47,7 +47,7 @@ class DockerAudio(DockerInterface, CoreSysAttributes):
         mounts = [
             MOUNT_DEV,
             Mount(
-                type=MountType.BIND,
+                type=MountType.BIND.value,
                 source=self.sys_config.path_extern_audio.as_posix(),
                 target=PATH_PRIVATE_DATA.as_posix(),
                 read_only=False,

@@ -74,24 +74,26 @@ ENV_TOKEN_OLD = "HASSIO_TOKEN"
 LABEL_MANAGED = "supervisor_managed"
 
 MOUNT_DBUS = Mount(
-    type=MountType.BIND, source="/run/dbus", target="/run/dbus", read_only=True
+    type=MountType.BIND.value, source="/run/dbus", target="/run/dbus", read_only=True
+)
+MOUNT_DEV = Mount(
+    type=MountType.BIND.value, source="/dev", target="/dev", read_only=True
 )
-MOUNT_DEV = Mount(type=MountType.BIND, source="/dev", target="/dev", read_only=True)
 MOUNT_DEV.setdefault("BindOptions", {})["ReadOnlyNonRecursive"] = True
 MOUNT_DOCKER = Mount(
-    type=MountType.BIND,
+    type=MountType.BIND.value,
     source="/run/docker.sock",
     target="/run/docker.sock",
     read_only=True,
 )
 MOUNT_MACHINE_ID = Mount(
-    type=MountType.BIND,
+    type=MountType.BIND.value,
     source=MACHINE_ID.as_posix(),
     target=MACHINE_ID.as_posix(),
     read_only=True,
 )
 MOUNT_UDEV = Mount(
-    type=MountType.BIND, source="/run/udev", target="/run/udev", read_only=True
+    type=MountType.BIND.value, source="/run/udev", target="/run/udev", read_only=True
 )
 
 PATH_PRIVATE_DATA = PurePath("/data")

@@ -105,3 +107,6 @@ PATH_BACKUP = PurePath("/backup")
 PATH_SHARE = PurePath("/share")
 PATH_MEDIA = PurePath("/media")
 PATH_CLOUD_BACKUP = PurePath("/cloud_backup")
+
+# https://hub.docker.com/_/docker
+ADDON_BUILDER_IMAGE = "docker.io/library/docker"

@@ -48,7 +48,7 @@ class DockerDNS(DockerInterface, CoreSysAttributes):
             environment={ENV_TIME: self.sys_timezone},
             mounts=[
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_dns.as_posix(),
                     target="/config",
                     read_only=False,

@@ -99,7 +99,7 @@ class DockerHomeAssistant(DockerInterface):
             MOUNT_UDEV,
             # HA config folder
             Mount(
-                type=MountType.BIND,
+                type=MountType.BIND.value,
                 source=self.sys_config.path_extern_homeassistant.as_posix(),
                 target=PATH_PUBLIC_CONFIG.as_posix(),
                 read_only=False,

@@ -112,20 +112,20 @@ class DockerHomeAssistant(DockerInterface):
             [
                 # All other folders
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target=PATH_SSL.as_posix(),
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target=PATH_SHARE.as_posix(),
                     read_only=False,
                     propagation=PropagationMode.RSLAVE.value,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_media.as_posix(),
                     target=PATH_MEDIA.as_posix(),
                     read_only=False,

@@ -133,19 +133,19 @@ class DockerHomeAssistant(DockerInterface):
                 ),
                 # Configuration audio
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_homeassistant.path_extern_pulse.as_posix(),
                     target="/etc/pulse/client.conf",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_pulse.as_posix(),
                     target="/run/audio",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_plugins.audio.path_extern_asound.as_posix(),
                     target="/etc/asound.conf",
                     read_only=True,

@@ -213,24 +213,21 @@ class DockerHomeAssistant(DockerInterface):
             privileged=True,
             init=True,
             entrypoint=[],
-            detach=True,
-            stdout=True,
-            stderr=True,
             mounts=[
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_homeassistant.as_posix(),
                     target="/config",
                     read_only=False,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_ssl.as_posix(),
                     target="/ssl",
                     read_only=True,
                 ),
                 Mount(
-                    type=MountType.BIND,
+                    type=MountType.BIND.value,
                     source=self.sys_config.path_extern_share.as_posix(),
                     target="/share",
                     read_only=False,

@@ -95,12 +95,12 @@ class DockerConfig(FileConfiguration):
         super().__init__(FILE_HASSIO_DOCKER, SCHEMA_DOCKER_CONFIG)
 
     @property
-    def enable_ipv6(self) -> bool:
+    def enable_ipv6(self) -> bool | None:
         """Return IPv6 configuration for docker network."""
-        return self._data.get(ATTR_ENABLE_IPV6, False)
+        return self._data.get(ATTR_ENABLE_IPV6, None)
 
     @enable_ipv6.setter
-    def enable_ipv6(self, value: bool) -> None:
+    def enable_ipv6(self, value: bool | None) -> None:
         """Set IPv6 configuration for docker network."""
         self._data[ATTR_ENABLE_IPV6] = value
 

@@ -294,8 +294,8 @@ class DockerAPI:
     def run_command(
         self,
         image: str,
-        tag: str = "latest",
-        command: str | None = None,
+        version: str = "latest",
+        command: str | list[str] | None = None,
         **kwargs: Any,
     ) -> CommandReturn:
         """Create a temporary container and run command.

@@ -305,12 +305,15 @@ class DockerAPI:
         stdout = kwargs.get("stdout", True)
         stderr = kwargs.get("stderr", True)
 
-        _LOGGER.info("Runing command '%s' on %s", command, image)
+        image_with_tag = f"{image}:{version}"
+
+        _LOGGER.info("Runing command '%s' on %s", command, image_with_tag)
         container = None
         try:
             container = self.docker.containers.run(
-                f"{image}:{tag}",
+                image_with_tag,
                 command=command,
+                detach=True,
                 network=self.network.name,
                 use_config_proxy=False,
                 **kwargs,
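Together with the cleanup hunk that follows, `run_command` now has the classic throwaway-container shape: start detached, wait for the exit code, collect the logs, and always remove the container (including its anonymous volumes). A runnable sketch of that shape with the docker SDK (image and command are illustrative):

```python
import docker

def run_command(image_with_tag: str, command: list[str]) -> tuple[int, bytes]:
    client = docker.from_env()
    container = None
    try:
        container = client.containers.run(image_with_tag, command=command, detach=True)
        result = container.wait()  # blocks until the container exits
        output = container.logs(stdout=True, stderr=True)
        return result["StatusCode"], output
    finally:
        if container:
            # v=True also deletes anonymous volumes the container created
            container.remove(force=True, v=True)

exit_code, output = run_command("alpine:3.20", ["echo", "hello"])
assert exit_code == 0 and b"hello" in output
```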
@@ -327,9 +330,9 @@ class DockerAPI:
             # cleanup container
             if container:
                 with suppress(docker_errors.DockerException, requests.RequestException):
-                    container.remove(force=True)
+                    container.remove(force=True, v=True)
 
-        return CommandReturn(result.get("StatusCode"), output)
+        return CommandReturn(result["StatusCode"], output)
 
     def repair(self) -> None:
         """Repair local docker overlayfs2 issues."""

@@ -442,7 +445,7 @@ class DockerAPI:
         if remove_container:
             with suppress(DockerException, requests.RequestException):
                 _LOGGER.info("Cleaning %s application", name)
                 docker_container.remove(force=True, v=True)
 
     def start_container(self, name: str) -> None:
         """Start Docker container."""

@@ -47,6 +47,8 @@ DOCKER_NETWORK_PARAMS = {
     "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
 }
 
+DOCKER_ENABLE_IPV6_DEFAULT = True
+
 
 class DockerNetwork:
     """Internal Supervisor Network.

@@ -57,9 +59,9 @@ class DockerNetwork:
     def __init__(self, docker_client: docker.DockerClient):
         """Initialize internal Supervisor network."""
         self.docker: docker.DockerClient = docker_client
-        self._network: docker.models.networks.Network | None = None
+        self._network: docker.models.networks.Network
 
-    async def post_init(self, enable_ipv6: bool = False) -> Self:
+    async def post_init(self, enable_ipv6: bool | None = None) -> Self:
         """Post init actions that must be done in event loop."""
         self._network = await asyncio.get_running_loop().run_in_executor(
             None, self._get_network, enable_ipv6

@@ -111,16 +113,24 @@ class DockerNetwork:
         """Return observer of the network."""
         return DOCKER_IPV4_NETWORK_MASK[6]
 
-    def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network:
+    def _get_network(
+        self, enable_ipv6: bool | None = None
+    ) -> docker.models.networks.Network:
         """Get supervisor network."""
         try:
             if network := self.docker.networks.get(DOCKER_NETWORK):
-                if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6:
+                current_ipv6 = network.attrs.get(DOCKER_ENABLEIPV6, False)
+                # If the network exists and we don't have an explicit setting,
+                # simply stick with what we have.
+                if enable_ipv6 is None or current_ipv6 == enable_ipv6:
                     return network
 
+                # We have an explicit setting which differs from the current state.
                 _LOGGER.info(
                     "Migrating Supervisor network to %s",
                     "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only",
                 )
 
                 if (containers := network.containers) and (
                     containers_all := all(
                         container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME)
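Widening `enable_ipv6` from `bool` to `bool | None` turns the stored setting into a tri-state: `True`/`False` mean the user chose explicitly, while `None` means "no opinion" — keep an existing network as-is, and only apply the default when a network must be created from scratch. A sketch of that decision logic (names are illustrative):

```python
DEFAULT_ENABLE_IPV6 = True

def decide(current: bool, requested: bool | None, network_exists: bool) -> bool:
    # None = no explicit setting: stick with what exists, or use the
    # default when the network has to be created fresh.
    if requested is None:
        return current if network_exists else DEFAULT_ENABLE_IPV6
    return requested

assert decide(current=False, requested=None, network_exists=True) is False
assert decide(current=False, requested=True, network_exists=True) is True
assert decide(current=False, requested=None, network_exists=False) is True
```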
@ -134,6 +144,7 @@ class DockerNetwork:
|
|||||||
requests.RequestException,
|
requests.RequestException,
|
||||||
):
|
):
|
||||||
network.disconnect(container, force=True)
|
network.disconnect(container, force=True)
|
||||||
|
|
||||||
if not containers or containers_all:
|
if not containers or containers_all:
|
||||||
try:
|
try:
|
||||||
network.remove()
|
network.remove()
|
||||||
@ -151,10 +162,12 @@ class DockerNetwork:
|
|||||||
_LOGGER.info("Can't find Supervisor network, creating a new network")
|
_LOGGER.info("Can't find Supervisor network, creating a new network")
|
||||||
|
|
||||||
network_params = DOCKER_NETWORK_PARAMS.copy()
|
network_params = DOCKER_NETWORK_PARAMS.copy()
|
||||||
network_params[ATTR_ENABLE_IPV6] = enable_ipv6
|
network_params[ATTR_ENABLE_IPV6] = (
|
||||||
|
DOCKER_ENABLE_IPV6_DEFAULT if enable_ipv6 is None else enable_ipv6
|
||||||
|
)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
self._network = self.docker.networks.create(**network_params)
|
self._network = self.docker.networks.create(**network_params) # type: ignore
|
||||||
except docker.errors.APIError as err:
|
except docker.errors.APIError as err:
|
||||||
raise DockerError(
|
raise DockerError(
|
||||||
f"Can't create Supervisor network: {err}", _LOGGER.error
|
f"Can't create Supervisor network: {err}", _LOGGER.error
|
||||||
|
@ -87,19 +87,19 @@ class HomeAssistantCore(JobGroup):
|
|||||||
|
|
||||||
try:
|
try:
|
||||||
# Evaluate Version if we lost this information
|
# Evaluate Version if we lost this information
|
||||||
if not self.sys_homeassistant.version:
|
if self.sys_homeassistant.version:
|
||||||
|
version = self.sys_homeassistant.version
|
||||||
|
else:
|
||||||
self.sys_homeassistant.version = (
|
self.sys_homeassistant.version = (
|
||||||
await self.instance.get_latest_version()
|
version
|
||||||
)
|
) = await self.instance.get_latest_version()
|
||||||
|
|
||||||
await self.instance.attach(
|
await self.instance.attach(version=version, skip_state_event_if_down=True)
|
||||||
version=self.sys_homeassistant.version, skip_state_event_if_down=True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Ensure we are using correct image for this system (unless user has overridden it)
|
```diff
             # Ensure we are using correct image for this system (unless user has overridden it)
             if not self.sys_homeassistant.override_image:
                 await self.instance.check_image(
-                    self.sys_homeassistant.version, self.sys_homeassistant.default_image
+                    version, self.sys_homeassistant.default_image
                 )
                 self.sys_homeassistant.set_image(self.sys_homeassistant.default_image)
         except DockerError:
@@ -108,7 +108,7 @@ class HomeAssistantCore(JobGroup):
             )
             await self.install_landingpage()
         else:
-            self.sys_homeassistant.version = self.instance.version
+            self.sys_homeassistant.version = self.instance.version or version
             self.sys_homeassistant.set_image(self.instance.image)
             await self.sys_homeassistant.save_data()

@@ -182,12 +182,13 @@ class HomeAssistantCore(JobGroup):
            if not self.sys_homeassistant.latest_version:
                await self.sys_updater.reload()

-           if self.sys_homeassistant.latest_version:
+           if to_version := self.sys_homeassistant.latest_version:
                try:
                    await self.instance.update(
-                       self.sys_homeassistant.latest_version,
+                       to_version,
                        image=self.sys_updater.image_homeassistant,
                    )
+                   self.sys_homeassistant.version = self.instance.version or to_version
                    break
                except (DockerError, JobException):
                    pass
@@ -198,7 +199,6 @@ class HomeAssistantCore(JobGroup):
            await asyncio.sleep(30)

        _LOGGER.info("Home Assistant docker now installed")
-       self.sys_homeassistant.version = self.instance.version
        self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)
        await self.sys_homeassistant.save_data()

@@ -231,8 +231,8 @@ class HomeAssistantCore(JobGroup):
        backup: bool | None = False,
    ) -> None:
        """Update HomeAssistant version."""
-       version = version or self.sys_homeassistant.latest_version
-       if not version:
+       to_version = version or self.sys_homeassistant.latest_version
+       if not to_version:
            raise HomeAssistantUpdateError(
                "Cannot determine latest version of Home Assistant for update",
                _LOGGER.error,
@@ -243,9 +243,9 @@ class HomeAssistantCore(JobGroup):
        running = await self.instance.is_running()
        exists = await self.instance.exists()

-       if exists and version == self.instance.version:
+       if exists and to_version == self.instance.version:
            raise HomeAssistantUpdateError(
-               f"Version {version!s} is already installed", _LOGGER.warning
+               f"Version {to_version!s} is already installed", _LOGGER.warning
            )

        if backup:
@@ -268,7 +268,7 @@ class HomeAssistantCore(JobGroup):
                "Updating Home Assistant image failed", _LOGGER.warning
            ) from err

-       self.sys_homeassistant.version = self.instance.version
+       self.sys_homeassistant.version = self.instance.version or to_version
        self.sys_homeassistant.set_image(self.sys_updater.image_homeassistant)

        if running:
@@ -282,7 +282,7 @@ class HomeAssistantCore(JobGroup):

        # Update Home Assistant
        with suppress(HomeAssistantError):
-           await _update(version)
+           await _update(to_version)

        if not self.error_state and rollback:
            try:
```
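Two idioms recur through these hunks: the walrus operator pins `latest_version` into a local `to_version` so the None check and the later uses cannot diverge, and `self.instance.version or to_version` falls back to the requested version when the Docker instance cannot report one. A minimal sketch of the combined pattern outside Supervisor, with hypothetical names:

```python
def resolve_installed_version(
    requested: str | None, latest: str | None, reported: str | None
) -> str:
    """Resolve the version to record after an install/update step (sketch)."""
    if to_version := (requested or latest):
        # Prefer what the runtime actually reports; fall back to what we asked for.
        return reported or to_version
    raise ValueError("Cannot determine version for update")
```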
```diff
@@ -35,6 +35,7 @@ from ..const import (
     FILE_HASSIO_HOMEASSISTANT,
     BusEvent,
     IngressSessionDataUser,
+    IngressSessionDataUserDict,
 )
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
@@ -557,18 +558,11 @@ class HomeAssistant(FileConfiguration, CoreSysAttributes):
     async def get_users(self) -> list[IngressSessionDataUser]:
         """Get list of all configured users."""
         list_of_users: (
-            list[dict[str, Any]] | None
+            list[IngressSessionDataUserDict] | None
         ) = await self.sys_homeassistant.websocket.async_send_command(
             {ATTR_TYPE: "config/auth/list"}
         )

         if list_of_users:
-            return [
-                IngressSessionDataUser(
-                    id=data["id"],
-                    username=data.get("username"),
-                    display_name=data.get("name"),
-                )
-                for data in list_of_users
-            ]
+            return [IngressSessionDataUser.from_dict(data) for data in list_of_users]
         return []
```
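The inline field mapping removed here moves behind `IngressSessionDataUser.from_dict`. A plausible shape for that classmethod, inferred from the removed keyword arguments (the real definition lives elsewhere in the tree, so the dataclass skeleton below is an assumption):

```python
from dataclasses import dataclass


@dataclass(slots=True)
class IngressSessionDataUser:
    """Sketch of the user record; fields inferred from the removed mapping."""

    id: str
    username: str | None = None
    display_name: str | None = None

    @classmethod
    def from_dict(cls, data: "IngressSessionDataUserDict") -> "IngressSessionDataUser":
        """Build a user record from a config/auth/list payload entry."""
        return cls(
            id=data["id"],
            username=data.get("username"),
            display_name=data.get("name"),  # Core sends the label under "name"
        )
```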
```diff
@@ -175,7 +175,7 @@ class Interface:
         )

         return Interface(
-            name=inet.name,
+            name=inet.interface_name,
             mac=inet.hw_address,
             path=inet.path,
             enabled=inet.settings is not None,
@@ -286,7 +286,7 @@ class Interface:
             _LOGGER.warning(
                 "Auth method %s for network interface %s unsupported, skipping",
                 inet.settings.wireless_security.key_mgmt,
-                inet.name,
+                inet.interface_name,
             )
             return None

```
```diff
@@ -8,11 +8,11 @@ from typing import Any
 from ..const import ATTR_HOST_INTERNET
 from ..coresys import CoreSys, CoreSysAttributes
 from ..dbus.const import (
+    DBUS_ATTR_CONFIGURATION,
     DBUS_ATTR_CONNECTION_ENABLED,
     DBUS_ATTR_CONNECTIVITY,
-    DBUS_ATTR_PRIMARY_CONNECTION,
+    DBUS_IFACE_DNS,
     DBUS_IFACE_NM,
-    DBUS_OBJECT_BASE,
     DBUS_SIGNAL_NM_CONNECTION_ACTIVE_CHANGED,
     ConnectionStateType,
     ConnectivityState,
@@ -46,6 +46,8 @@ class NetworkManager(CoreSysAttributes):
         """Initialize system center handling."""
         self.coresys: CoreSys = coresys
         self._connectivity: bool | None = None
+        # No event need on initial change (NetworkManager initializes with empty list)
+        self._dns_configuration: list = []

     @property
     def connectivity(self) -> bool | None:
@@ -138,8 +140,12 @@ class NetworkManager(CoreSysAttributes):
             ]
         )

-        self.sys_dbus.network.dbus.properties.on_properties_changed(
-            self._check_connectivity_changed
+        self.sys_dbus.network.dbus.properties.on(
+            "properties_changed", self._check_connectivity_changed
+        )
+
+        self.sys_dbus.network.dns.dbus.properties.on(
+            "properties_changed", self._check_dns_changed
         )

     async def _check_connectivity_changed(
@@ -152,16 +158,6 @@ class NetworkManager(CoreSysAttributes):
         connectivity_check: bool | None = changed.get(DBUS_ATTR_CONNECTION_ENABLED)
         connectivity: int | None = changed.get(DBUS_ATTR_CONNECTIVITY)

-        # This potentially updated the DNS configuration. Make sure the DNS plug-in
-        # picks up the latest settings.
-        if (
-            DBUS_ATTR_PRIMARY_CONNECTION in changed
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION]
-            and changed[DBUS_ATTR_PRIMARY_CONNECTION] != DBUS_OBJECT_BASE
-            and await self.sys_plugins.dns.is_running()
-        ):
-            await self.sys_plugins.dns.restart()
-
         if (
             connectivity_check is True
             or DBUS_ATTR_CONNECTION_ENABLED in invalidated
@@ -175,6 +171,20 @@ class NetworkManager(CoreSysAttributes):
         elif connectivity is not None:
             self.connectivity = connectivity == ConnectivityState.CONNECTIVITY_FULL

+    async def _check_dns_changed(
+        self, interface: str, changed: dict[str, Any], invalidated: list[str]
+    ):
+        """Check if DNS properties have changed."""
+        if interface != DBUS_IFACE_DNS:
+            return
+
+        if (
+            DBUS_ATTR_CONFIGURATION in changed
+            and self._dns_configuration != changed[DBUS_ATTR_CONFIGURATION]
+        ):
+            self._dns_configuration = changed[DBUS_ATTR_CONFIGURATION]
+            self.sys_plugins.dns.notify_locals_changed()
+
     async def update(self, *, force_connectivity_check: bool = False):
         """Update properties over dbus."""
         _LOGGER.info("Updating local network information")
```
```diff
@@ -12,6 +12,7 @@ from .const import (
     ATTR_SESSION_DATA,
     FILE_HASSIO_INGRESS,
     IngressSessionData,
+    IngressSessionDataDict,
 )
 from .coresys import CoreSys, CoreSysAttributes
 from .utils import check_port
@@ -49,7 +50,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
         return self._data[ATTR_SESSION]

     @property
-    def sessions_data(self) -> dict[str, dict[str, str | None]]:
+    def sessions_data(self) -> dict[str, IngressSessionDataDict]:
         """Return sessions_data."""
         return self._data[ATTR_SESSION_DATA]

@@ -89,7 +90,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
         now = utcnow()

         sessions = {}
-        sessions_data: dict[str, dict[str, str | None]] = {}
+        sessions_data: dict[str, IngressSessionDataDict] = {}
         for session, valid in self.sessions.items():
             # check if timestamp valid, to avoid crash on malformed timestamp
             try:
@@ -118,7 +119,8 @@ class Ingress(FileConfiguration, CoreSysAttributes):

         # Read all ingress token and build a map
         for addon in self.addons:
-            self.tokens[addon.ingress_token] = addon.slug
+            if addon.ingress_token:
+                self.tokens[addon.ingress_token] = addon.slug

     def create_session(self, data: IngressSessionData | None = None) -> str:
         """Create new session."""
@@ -141,7 +143,7 @@ class Ingress(FileConfiguration, CoreSysAttributes):
         try:
             valid_until = utc_from_timestamp(self.sessions[session])
         except OverflowError:
-            self.sessions[session] = utcnow() + timedelta(minutes=15)
+            self.sessions[session] = (utcnow() + timedelta(minutes=15)).timestamp()
             return True

         # Is still valid?
```
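The `OverflowError` fallback previously stored a `datetime` in a mapping whose other values are POSIX timestamps, which `utc_from_timestamp` would then fail on at the next validation; calling `.timestamp()` keeps the session store homogeneous.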
```diff
@@ -34,8 +34,60 @@ class JobCondition(StrEnum):
     SUPERVISOR_UPDATED = "supervisor_updated"


+class JobConcurrency(StrEnum):
+    """Job concurrency control.
+
+    Controls how many instances of a job can run simultaneously.
+
+    Individual Concurrency (applies to each method separately):
+    - REJECT: Fail immediately if another instance is already running
+    - QUEUE: Wait for the current instance to finish, then run
+
+    Group Concurrency (applies across all methods on a JobGroup):
+    - GROUP_REJECT: Fail if ANY job is running on the JobGroup
+    - GROUP_QUEUE: Wait for ANY running job on the JobGroup to finish
+
+    JobGroup Behavior:
+    - All methods on the same JobGroup instance share a single lock
+    - Methods can call other methods on the same group without deadlock
+    - Uses the JobGroup.group_name for coordination
+    - Requires the class to inherit from JobGroup
+    """
+
+    REJECT = "reject"  # Fail if already running (was ONCE)
+    QUEUE = "queue"  # Wait if already running (was SINGLE_WAIT)
+    GROUP_REJECT = "group_reject"  # Was GROUP_ONCE
+    GROUP_QUEUE = "group_queue"  # Was GROUP_WAIT
+
+
+class JobThrottle(StrEnum):
+    """Job throttling control.
+
+    Controls how frequently jobs can be executed.
+
+    Individual Throttling (each method has its own throttle state):
+    - THROTTLE: Skip execution if called within throttle_period
+    - RATE_LIMIT: Allow up to throttle_max_calls within throttle_period, then fail
+
+    Group Throttling (all methods on a JobGroup share throttle state):
+    - GROUP_THROTTLE: Skip if ANY method was called within throttle_period
+    - GROUP_RATE_LIMIT: Allow up to throttle_max_calls total across ALL methods
+
+    JobGroup Behavior:
+    - All methods on the same JobGroup instance share throttle counters/timers
+    - Uses the JobGroup.group_name as the key for tracking state
+    - If one method is throttled, other methods may also be throttled
+    - Requires the class to inherit from JobGroup
+    """
+
+    THROTTLE = "throttle"  # Skip if called too frequently
+    RATE_LIMIT = "rate_limit"  # Rate limiting with max calls per period
+    GROUP_THROTTLE = "group_throttle"  # Group version of THROTTLE
+    GROUP_RATE_LIMIT = "group_rate_limit"  # Group version of RATE_LIMIT
+
+
 class JobExecutionLimit(StrEnum):
-    """Job Execution limits."""
+    """Job Execution limits - DEPRECATED: Use JobConcurrency and JobThrottle instead."""

     ONCE = "once"
     SINGLE_WAIT = "single_wait"
```
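The docstrings above spell out the semantics. As a usage illustration, a hypothetical Supervisor-style job combining the two axes might look like the sketch below (class and job names invented; note that the decorator forbids throttling together with group concurrency, so the sketch uses per-method `REJECT`):

```python
from datetime import timedelta

from supervisor.coresys import CoreSysAttributes
from supervisor.jobs.const import JobConcurrency, JobThrottle
from supervisor.jobs.decorator import Job


class ExampleUpdater(CoreSysAttributes):
    """Hypothetical host class for the sketch."""

    @Job(
        name="example_updater_reload",         # job names must be globally unique
        concurrency=JobConcurrency.REJECT,     # a second call fails while one runs
        throttle=JobThrottle.RATE_LIMIT,       # and at most 5 calls per period
        throttle_period=timedelta(minutes=30),
        throttle_max_calls=5,
    )
    async def reload(self) -> None:
        """Job body."""
```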
```diff
@@ -20,7 +20,7 @@ from ..host.const import HostFeature
 from ..resolution.const import MINIMUM_FREE_SPACE_THRESHOLD, ContextType, IssueType
 from ..utils.sentry import async_capture_exception
 from . import SupervisorJob
-from .const import JobCondition, JobExecutionLimit
+from .const import JobConcurrency, JobCondition, JobExecutionLimit, JobThrottle
 from .job_group import JobGroup

 _LOGGER: logging.Logger = logging.getLogger(__package__)
@@ -36,14 +36,34 @@ class Job(CoreSysAttributes):
         conditions: list[JobCondition] | None = None,
         cleanup: bool = True,
         on_condition: type[JobException] | None = None,
-        limit: JobExecutionLimit | None = None,
+        concurrency: JobConcurrency | None = None,
+        throttle: JobThrottle | None = None,
         throttle_period: timedelta
         | Callable[[CoreSys, datetime, list[datetime] | None], timedelta]
         | None = None,
         throttle_max_calls: int | None = None,
         internal: bool = False,
-    ):
-        """Initialize the Job class."""
+        # Backward compatibility - DEPRECATED
+        limit: JobExecutionLimit | None = None,
+    ):  # pylint: disable=too-many-positional-arguments
+        """Initialize the Job decorator.
+
+        Args:
+            name (str): Unique name for the job. Must not be duplicated.
+            conditions (list[JobCondition] | None): List of conditions that must be met before the job runs.
+            cleanup (bool): Whether to clean up the job after execution. Defaults to True. If set to False, the job will remain accessible through the Supervisor API until the next restart.
+            on_condition (type[JobException] | None): Exception type to raise if a job condition fails. If None, logs the failure.
+            concurrency (JobConcurrency | None): Concurrency control policy (e.g., reject, queue, group-based).
+            throttle (JobThrottle | None): Throttling policy (e.g., throttle, rate_limit, group-based).
+            throttle_period (timedelta | Callable | None): Throttle period as a timedelta or a callable returning a timedelta (for throttled jobs).
+            throttle_max_calls (int | None): Maximum number of calls allowed within the throttle period (for rate-limited jobs).
+            internal (bool): Whether the job is internal (not exposed through the Supervisor API). Defaults to False.
+            limit (JobExecutionLimit | None): DEPRECATED - Use concurrency and throttle instead.
+
+        Raises:
+            RuntimeError: If job name is not unique, or required throttle parameters are missing for the selected throttle policy.
+
+        """
         if name in _JOB_NAMES:
             raise RuntimeError(f"A job already exists with name {name}!")

```
@ -52,7 +72,6 @@ class Job(CoreSysAttributes):
|
|||||||
self.conditions = conditions
|
self.conditions = conditions
|
||||||
self.cleanup = cleanup
|
self.cleanup = cleanup
|
||||||
self.on_condition = on_condition
|
self.on_condition = on_condition
|
||||||
self.limit = limit
|
|
||||||
self._throttle_period = throttle_period
|
self._throttle_period = throttle_period
|
||||||
self._throttle_max_calls = throttle_max_calls
|
self._throttle_max_calls = throttle_max_calls
|
||||||
self._lock: asyncio.Semaphore | None = None
|
self._lock: asyncio.Semaphore | None = None
|
||||||
@ -60,34 +79,91 @@ class Job(CoreSysAttributes):
|
|||||||
self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
|
self._rate_limited_calls: dict[str | None, list[datetime]] | None = None
|
||||||
self._internal = internal
|
self._internal = internal
|
||||||
|
|
||||||
|
# Handle backward compatibility with limit parameter
|
||||||
|
if limit is not None:
|
||||||
|
if concurrency is not None or throttle is not None:
|
||||||
|
raise RuntimeError(
|
||||||
|
f"Job {name} cannot specify both 'limit' (deprecated) and 'concurrency'/'throttle' parameters!"
|
||||||
|
)
|
||||||
|
# Map old limit values to new parameters
|
||||||
|
concurrency, throttle = self._map_limit_to_new_params(limit)
|
||||||
|
|
||||||
|
self.concurrency = concurrency
|
||||||
|
self.throttle = throttle
|
||||||
|
|
||||||
# Validate Options
|
# Validate Options
|
||||||
|
self._validate_parameters()
|
||||||
|
|
||||||
|
def _map_limit_to_new_params(
|
||||||
|
self, limit: JobExecutionLimit
|
||||||
|
) -> tuple[JobConcurrency | None, JobThrottle | None]:
|
||||||
|
"""Map old limit parameter to new concurrency and throttle parameters."""
|
||||||
|
mapping = {
|
||||||
|
JobExecutionLimit.ONCE: (JobConcurrency.REJECT, None),
|
||||||
|
JobExecutionLimit.SINGLE_WAIT: (JobConcurrency.QUEUE, None),
|
||||||
|
JobExecutionLimit.THROTTLE: (None, JobThrottle.THROTTLE),
|
||||||
|
JobExecutionLimit.THROTTLE_WAIT: (
|
||||||
|
JobConcurrency.QUEUE,
|
||||||
|
JobThrottle.THROTTLE,
|
||||||
|
),
|
||||||
|
JobExecutionLimit.THROTTLE_RATE_LIMIT: (None, JobThrottle.RATE_LIMIT),
|
||||||
|
JobExecutionLimit.GROUP_ONCE: (JobConcurrency.GROUP_REJECT, None),
|
||||||
|
JobExecutionLimit.GROUP_WAIT: (JobConcurrency.GROUP_QUEUE, None),
|
||||||
|
JobExecutionLimit.GROUP_THROTTLE: (None, JobThrottle.GROUP_THROTTLE),
|
||||||
|
JobExecutionLimit.GROUP_THROTTLE_WAIT: (
|
||||||
|
# Seems a bit counter intuitive, but GROUP_QUEUE deadlocks
|
||||||
|
# tests/jobs/test_job_decorator.py::test_execution_limit_group_throttle_wait
|
||||||
|
# The reason this deadlocks is because when using GROUP_QUEUE and the
|
||||||
|
# throttle limit is hit, the group lock is trying to be unlocked outside
|
||||||
|
# of the job context. The current implementation doesn't allow to unlock
|
||||||
|
# the group lock when the job is not running.
|
||||||
|
JobConcurrency.QUEUE,
|
||||||
|
JobThrottle.GROUP_THROTTLE,
|
||||||
|
),
|
||||||
|
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT: (
|
||||||
|
None,
|
||||||
|
JobThrottle.GROUP_RATE_LIMIT,
|
||||||
|
),
|
||||||
|
}
|
||||||
|
return mapping.get(limit, (None, None))
|
||||||
|
|
||||||
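Under this mapping, a legacy declaration and its new-style equivalent behave identically. A brief sketch with hypothetical job names:

```python
class ExampleManager(CoreSysAttributes):
    """Hypothetical host class for the sketch."""

    @Job(name="example_legacy_refresh", limit=JobExecutionLimit.ONCE)  # deprecated
    async def refresh_legacy(self) -> None:
        """Equivalent to the new-style declaration below."""

    @Job(name="example_new_refresh", concurrency=JobConcurrency.REJECT)  # preferred
    async def refresh_new(self) -> None:
        """Same behavior, spelled with the new parameter."""
```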
```diff
+    def _validate_parameters(self) -> None:
+        """Validate job parameters."""
+        # Validate throttle parameters
         if (
-            self.limit
+            self.throttle
             in (
-                JobExecutionLimit.THROTTLE,
-                JobExecutionLimit.THROTTLE_WAIT,
-                JobExecutionLimit.THROTTLE_RATE_LIMIT,
-                JobExecutionLimit.GROUP_THROTTLE,
-                JobExecutionLimit.GROUP_THROTTLE_WAIT,
-                JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+                JobThrottle.THROTTLE,
+                JobThrottle.GROUP_THROTTLE,
+                JobThrottle.RATE_LIMIT,
+                JobThrottle.GROUP_RATE_LIMIT,
             )
             and self._throttle_period is None
         ):
             raise RuntimeError(
-                f"Job {name} is using execution limit {limit} without a throttle period!"
+                f"Job {self.name} is using throttle {self.throttle} without a throttle period!"
             )

-        if self.limit in (
-            JobExecutionLimit.THROTTLE_RATE_LIMIT,
-            JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
+        if self.throttle in (
+            JobThrottle.RATE_LIMIT,
+            JobThrottle.GROUP_RATE_LIMIT,
         ):
             if self._throttle_max_calls is None:
                 raise RuntimeError(
-                    f"Job {name} is using execution limit {limit} without throttle max calls!"
+                    f"Job {self.name} is using throttle {self.throttle} without throttle max calls!"
                 )

             self._rate_limited_calls = {}

+        if self.throttle is not None and self.concurrency in (
+            JobConcurrency.GROUP_REJECT,
+            JobConcurrency.GROUP_QUEUE,
+        ):
+            # We cannot release group locks when Job is not running (e.g. throttled)
+            # which makes these combinations impossible to use currently.
+            raise RuntimeError(
+                f"Job {self.name} is using throttling ({self.throttle}) with group concurrency ({self.concurrency}), which is not allowed!"
+            )
+
     @property
     def throttle_max_calls(self) -> int:
         """Return max calls for throttle."""
```
@ -116,7 +192,7 @@ class Job(CoreSysAttributes):
|
|||||||
"""Return rate limited calls if used."""
|
"""Return rate limited calls if used."""
|
||||||
if self._rate_limited_calls is None:
|
if self._rate_limited_calls is None:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
f"Rate limited calls not available for limit type {self.limit}"
|
"Rate limited calls not available for this throttle type"
|
||||||
)
|
)
|
||||||
|
|
||||||
return self._rate_limited_calls.get(group_name, [])
|
return self._rate_limited_calls.get(group_name, [])
|
||||||
@ -127,7 +203,7 @@ class Job(CoreSysAttributes):
|
|||||||
"""Add a rate limited call to list if used."""
|
"""Add a rate limited call to list if used."""
|
||||||
if self._rate_limited_calls is None:
|
if self._rate_limited_calls is None:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
f"Rate limited calls not available for limit type {self.limit}"
|
"Rate limited calls not available for this throttle type"
|
||||||
)
|
)
|
||||||
|
|
||||||
if group_name in self._rate_limited_calls:
|
if group_name in self._rate_limited_calls:
|
||||||
@ -141,7 +217,7 @@ class Job(CoreSysAttributes):
|
|||||||
"""Set rate limited calls if used."""
|
"""Set rate limited calls if used."""
|
||||||
if self._rate_limited_calls is None:
|
if self._rate_limited_calls is None:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
f"Rate limited calls not available for limit type {self.limit}"
|
"Rate limited calls not available for this throttle type"
|
||||||
)
|
)
|
||||||
|
|
||||||
self._rate_limited_calls[group_name] = value
|
self._rate_limited_calls[group_name] = value
|
||||||
@ -178,16 +254,24 @@ class Job(CoreSysAttributes):
|
|||||||
if obj.acquire and obj.release: # type: ignore
|
if obj.acquire and obj.release: # type: ignore
|
||||||
job_group = cast(JobGroup, obj)
|
job_group = cast(JobGroup, obj)
|
||||||
|
|
||||||
if not job_group and self.limit in (
|
# Check for group-based parameters
|
||||||
JobExecutionLimit.GROUP_ONCE,
|
if not job_group:
|
||||||
JobExecutionLimit.GROUP_WAIT,
|
if self.concurrency in (
|
||||||
JobExecutionLimit.GROUP_THROTTLE,
|
JobConcurrency.GROUP_REJECT,
|
||||||
JobExecutionLimit.GROUP_THROTTLE_WAIT,
|
JobConcurrency.GROUP_QUEUE,
|
||||||
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
|
):
|
||||||
):
|
raise RuntimeError(
|
||||||
raise RuntimeError(
|
f"Job {self.name} uses group concurrency ({self.concurrency}) but is not on a JobGroup! "
|
||||||
f"Job on {self.name} need to be a JobGroup to use group based limits!"
|
f"The class must inherit from JobGroup to use GROUP_REJECT or GROUP_QUEUE."
|
||||||
) from None
|
) from None
|
||||||
|
if self.throttle in (
|
||||||
|
JobThrottle.GROUP_THROTTLE,
|
||||||
|
JobThrottle.GROUP_RATE_LIMIT,
|
||||||
|
):
|
||||||
|
raise RuntimeError(
|
||||||
|
f"Job {self.name} uses group throttling ({self.throttle}) but is not on a JobGroup! "
|
||||||
|
f"The class must inherit from JobGroup to use GROUP_THROTTLE or GROUP_RATE_LIMIT."
|
||||||
|
) from None
|
||||||
|
|
||||||
return job_group
|
return job_group
|
||||||
|
|
||||||
@ -240,71 +324,15 @@ class Job(CoreSysAttributes):
|
|||||||
except JobConditionException as err:
|
except JobConditionException as err:
|
||||||
return self._handle_job_condition_exception(err)
|
return self._handle_job_condition_exception(err)
|
||||||
|
|
||||||
# Handle exection limits
|
# Handle execution limits
|
||||||
if self.limit in (
|
await self._handle_concurrency_control(job_group, job)
|
||||||
JobExecutionLimit.SINGLE_WAIT,
|
try:
|
||||||
JobExecutionLimit.ONCE,
|
if not await self._handle_throttling(group_name):
|
||||||
):
|
self._release_concurrency_control(job_group)
|
||||||
await self._acquire_exection_limit()
|
return # Job was throttled, exit early
|
||||||
elif self.limit in (
|
except Exception:
|
||||||
JobExecutionLimit.GROUP_ONCE,
|
self._release_concurrency_control(job_group)
|
||||||
JobExecutionLimit.GROUP_WAIT,
|
raise
|
||||||
):
|
|
||||||
try:
|
|
||||||
await cast(JobGroup, job_group).acquire(
|
|
||||||
job, self.limit == JobExecutionLimit.GROUP_WAIT
|
|
||||||
)
|
|
||||||
except JobGroupExecutionLimitExceeded as err:
|
|
||||||
if self.on_condition:
|
|
||||||
raise self.on_condition(str(err)) from err
|
|
||||||
raise err
|
|
||||||
elif self.limit in (
|
|
||||||
JobExecutionLimit.THROTTLE,
|
|
||||||
JobExecutionLimit.GROUP_THROTTLE,
|
|
||||||
):
|
|
||||||
time_since_last_call = datetime.now() - self.last_call(group_name)
|
|
||||||
if time_since_last_call < self.throttle_period(group_name):
|
|
||||||
return
|
|
||||||
elif self.limit in (
|
|
||||||
JobExecutionLimit.THROTTLE_WAIT,
|
|
||||||
JobExecutionLimit.GROUP_THROTTLE_WAIT,
|
|
||||||
):
|
|
||||||
await self._acquire_exection_limit()
|
|
||||||
time_since_last_call = datetime.now() - self.last_call(group_name)
|
|
||||||
if time_since_last_call < self.throttle_period(group_name):
|
|
||||||
self._release_exception_limits()
|
|
||||||
return
|
|
||||||
elif self.limit in (
|
|
||||||
JobExecutionLimit.THROTTLE_RATE_LIMIT,
|
|
||||||
JobExecutionLimit.GROUP_THROTTLE_RATE_LIMIT,
|
|
||||||
):
|
|
||||||
# Only reprocess array when necessary (at limit)
|
|
||||||
if (
|
|
||||||
len(self.rate_limited_calls(group_name))
|
|
||||||
>= self.throttle_max_calls
|
|
||||||
):
|
|
||||||
self.set_rate_limited_calls(
|
|
||||||
[
|
|
||||||
call
|
|
||||||
for call in self.rate_limited_calls(group_name)
|
|
||||||
if call
|
|
||||||
> datetime.now() - self.throttle_period(group_name)
|
|
||||||
],
|
|
||||||
group_name,
|
|
||||||
)
|
|
||||||
|
|
||||||
if (
|
|
||||||
len(self.rate_limited_calls(group_name))
|
|
||||||
>= self.throttle_max_calls
|
|
||||||
):
|
|
||||||
on_condition = (
|
|
||||||
JobException
|
|
||||||
if self.on_condition is None
|
|
||||||
else self.on_condition
|
|
||||||
)
|
|
||||||
raise on_condition(
|
|
||||||
f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
|
|
||||||
)
|
|
||||||
|
|
||||||
# Execute Job
|
# Execute Job
|
||||||
with job.start():
|
with job.start():
|
||||||
@ -330,12 +358,7 @@ class Job(CoreSysAttributes):
|
|||||||
await async_capture_exception(err)
|
await async_capture_exception(err)
|
||||||
raise JobException() from err
|
raise JobException() from err
|
||||||
finally:
|
finally:
|
||||||
self._release_exception_limits()
|
self._release_concurrency_control(job_group)
|
||||||
if job_group and self.limit in (
|
|
||||||
JobExecutionLimit.GROUP_ONCE,
|
|
||||||
JobExecutionLimit.GROUP_WAIT,
|
|
||||||
):
|
|
||||||
job_group.release()
|
|
||||||
|
|
||||||
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required
|
# Jobs that weren't started are always cleaned up. Also clean up done jobs if required
|
||||||
finally:
|
finally:
|
||||||
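Note the shape of the refactor: the lock is taken in `_handle_concurrency_control` before `_handle_throttling` runs, so the throttled early return (and any exception raised while deciding) must explicitly release it again; otherwise a skipped job would leave the semaphore or group lock held forever.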
@ -477,31 +500,75 @@ class Job(CoreSysAttributes):
|
|||||||
f"'{method_name}' blocked from execution, mounting not supported on system"
|
f"'{method_name}' blocked from execution, mounting not supported on system"
|
||||||
)
|
)
|
||||||
|
|
||||||
async def _acquire_exection_limit(self) -> None:
|
def _release_concurrency_control(self, job_group: JobGroup | None) -> None:
|
||||||
"""Process exection limits."""
|
"""Release concurrency control locks."""
|
||||||
if self.limit not in (
|
if self.concurrency == JobConcurrency.REJECT:
|
||||||
JobExecutionLimit.SINGLE_WAIT,
|
if self.lock.locked():
|
||||||
JobExecutionLimit.ONCE,
|
self.lock.release()
|
||||||
JobExecutionLimit.THROTTLE_WAIT,
|
elif self.concurrency == JobConcurrency.QUEUE:
|
||||||
JobExecutionLimit.GROUP_THROTTLE_WAIT,
|
if self.lock.locked():
|
||||||
|
self.lock.release()
|
||||||
|
elif self.concurrency in (
|
||||||
|
JobConcurrency.GROUP_REJECT,
|
||||||
|
JobConcurrency.GROUP_QUEUE,
|
||||||
):
|
):
|
||||||
return
|
if job_group and job_group.has_lock:
|
||||||
|
job_group.release()
|
||||||
|
|
||||||
if self.limit == JobExecutionLimit.ONCE and self.lock.locked():
|
async def _handle_concurrency_control(
|
||||||
on_condition = (
|
self, job_group: JobGroup | None, job: SupervisorJob
|
||||||
JobException if self.on_condition is None else self.on_condition
|
) -> None:
|
||||||
)
|
"""Handle concurrency control limits."""
|
||||||
raise on_condition("Another job is running")
|
if self.concurrency == JobConcurrency.REJECT:
|
||||||
|
if self.lock.locked():
|
||||||
|
on_condition = (
|
||||||
|
JobException if self.on_condition is None else self.on_condition
|
||||||
|
)
|
||||||
|
raise on_condition("Another job is running")
|
||||||
|
await self.lock.acquire()
|
||||||
|
elif self.concurrency == JobConcurrency.QUEUE:
|
||||||
|
await self.lock.acquire()
|
||||||
|
elif self.concurrency == JobConcurrency.GROUP_REJECT:
|
||||||
|
try:
|
||||||
|
await cast(JobGroup, job_group).acquire(job, wait=False)
|
||||||
|
except JobGroupExecutionLimitExceeded as err:
|
||||||
|
if self.on_condition:
|
||||||
|
raise self.on_condition(str(err)) from err
|
||||||
|
raise err
|
||||||
|
elif self.concurrency == JobConcurrency.GROUP_QUEUE:
|
||||||
|
try:
|
||||||
|
await cast(JobGroup, job_group).acquire(job, wait=True)
|
||||||
|
except JobGroupExecutionLimitExceeded as err:
|
||||||
|
if self.on_condition:
|
||||||
|
raise self.on_condition(str(err)) from err
|
||||||
|
raise err
|
||||||
|
|
||||||
await self.lock.acquire()
|
async def _handle_throttling(self, group_name: str | None) -> bool:
|
||||||
|
"""Handle throttling limits. Returns True if job should continue, False if throttled."""
|
||||||
|
if self.throttle in (JobThrottle.THROTTLE, JobThrottle.GROUP_THROTTLE):
|
||||||
|
time_since_last_call = datetime.now() - self.last_call(group_name)
|
||||||
|
throttle_period = self.throttle_period(group_name)
|
||||||
|
if time_since_last_call < throttle_period:
|
||||||
|
# Always return False when throttled (skip execution)
|
||||||
|
return False
|
||||||
|
elif self.throttle in (JobThrottle.RATE_LIMIT, JobThrottle.GROUP_RATE_LIMIT):
|
||||||
|
# Only reprocess array when necessary (at limit)
|
||||||
|
if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
|
||||||
|
self.set_rate_limited_calls(
|
||||||
|
[
|
||||||
|
call
|
||||||
|
for call in self.rate_limited_calls(group_name)
|
||||||
|
if call > datetime.now() - self.throttle_period(group_name)
|
||||||
|
],
|
||||||
|
group_name,
|
||||||
|
)
|
||||||
|
|
||||||
def _release_exception_limits(self) -> None:
|
if len(self.rate_limited_calls(group_name)) >= self.throttle_max_calls:
|
||||||
"""Release possible exception limits."""
|
on_condition = (
|
||||||
if self.limit not in (
|
JobException if self.on_condition is None else self.on_condition
|
||||||
JobExecutionLimit.SINGLE_WAIT,
|
)
|
||||||
JobExecutionLimit.ONCE,
|
raise on_condition(
|
||||||
JobExecutionLimit.THROTTLE_WAIT,
|
f"Rate limit exceeded, more than {self.throttle_max_calls} calls in {self.throttle_period(group_name)}",
|
||||||
JobExecutionLimit.GROUP_THROTTLE_WAIT,
|
)
|
||||||
):
|
|
||||||
return
|
return True
|
||||||
self.lock.release()
|
|
||||||
```diff
@@ -272,6 +272,7 @@ class OSManager(CoreSysAttributes):
             name="os_manager_update",
             conditions=[
                 JobCondition.HAOS,
+                JobCondition.HEALTHY,
                 JobCondition.INTERNET_SYSTEM,
                 JobCondition.RUNNING,
                 JobCondition.SUPERVISOR_UPDATED,
```
```diff
@@ -22,6 +22,7 @@ from ..exceptions import (
     AudioUpdateError,
     ConfigurationFileError,
     DockerError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -127,7 +128,7 @@ class PluginAudio(PluginBase):
         """Update Audio plugin."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise AudioUpdateError("Audio update failed", _LOGGER.error) from err

     async def restart(self) -> None:
```
```diff
@@ -168,14 +168,14 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         # Check plugin state
         try:
             # Evaluate Version if we lost this information
-            if not self.version:
-                self.version = await self.instance.get_latest_version()
+            if self.version:
+                version = self.version
+            else:
+                self.version = version = await self.instance.get_latest_version()

-            await self.instance.attach(
-                version=self.version, skip_state_event_if_down=True
-            )
+            await self.instance.attach(version=version, skip_state_event_if_down=True)

-            await self.instance.check_image(self.version, self.default_image)
+            await self.instance.check_image(version, self.default_image)
         except DockerError:
             _LOGGER.info(
                 "No %s plugin Docker image %s found.", self.slug, self.instance.image
@@ -185,7 +185,7 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
             with suppress(PluginError):
                 await self.install()
         else:
-            self.version = self.instance.version
+            self.version = self.instance.version or version
             self.image = self.default_image
             await self.save_data()

@@ -202,11 +202,10 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         if not self.latest_version:
             await self.sys_updater.reload()

-        if self.latest_version:
+        if to_version := self.latest_version:
             with suppress(DockerError):
-                await self.instance.install(
-                    self.latest_version, image=self.default_image
-                )
+                await self.instance.install(to_version, image=self.default_image)
+                self.version = self.instance.version or to_version
                 break
         _LOGGER.warning(
             "Error on installing %s plugin, retrying in 30sec", self.slug
@@ -214,23 +213,28 @@ class PluginBase(ABC, FileConfiguration, CoreSysAttributes):
         await asyncio.sleep(30)

         _LOGGER.info("%s plugin now installed", self.slug)
-        self.version = self.instance.version
         self.image = self.default_image
         await self.save_data()

     async def update(self, version: str | None = None) -> None:
         """Update system plugin."""
-        version = version or self.latest_version
+        to_version = AwesomeVersion(version) if version else self.latest_version
+        if not to_version:
+            raise PluginError(
+                f"Cannot determine latest version of plugin {self.slug} for update",
+                _LOGGER.error,
+            )
+
         old_image = self.image

-        if version == self.version:
+        if to_version == self.version:
             _LOGGER.warning(
-                "Version %s is already installed for %s", version, self.slug
+                "Version %s is already installed for %s", to_version, self.slug
             )
             return

-        await self.instance.update(version, image=self.default_image)
-        self.version = self.instance.version
+        await self.instance.update(to_version, image=self.default_image)
+        self.version = self.instance.version or to_version
         self.image = self.default_image
         await self.save_data()

```
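Normalizing the caller-supplied version string through `AwesomeVersion` up front matters here: the `to_version == self.version` short-circuit then compares like with like, and a missing latest version fails loudly with `PluginError` instead of passing `None` further down, which is also why the plugin `update()` wrappers in this set widen their `except` clauses to `(DockerError, PluginError)`.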
```diff
@@ -6,7 +6,6 @@ Code: https://github.com/home-assistant/plugin-cli
 from collections.abc import Awaitable
 import logging
 import secrets
-from typing import cast

 from awesomeversion import AwesomeVersion

@@ -15,7 +14,7 @@ from ..coresys import CoreSys
 from ..docker.cli import DockerCli
 from ..docker.const import ContainerState
 from ..docker.stats import DockerStats
-from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError
+from ..exceptions import CliError, CliJobError, CliUpdateError, DockerError, PluginError
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
 from ..utils.sentry import async_capture_exception
@@ -54,9 +53,9 @@ class PluginCli(PluginBase):
         return self.sys_updater.version_cli

     @property
-    def supervisor_token(self) -> str:
+    def supervisor_token(self) -> str | None:
         """Return an access token for the Supervisor API."""
-        return cast(str, self._data[ATTR_ACCESS_TOKEN])
+        return self._data.get(ATTR_ACCESS_TOKEN)

     @Job(
         name="plugin_cli_update",
@@ -67,7 +66,7 @@ class PluginCli(PluginBase):
         """Update local HA cli."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise CliUpdateError("CLI update failed", _LOGGER.error) from err

     async def start(self) -> None:
```
```diff
@@ -15,7 +15,8 @@ from awesomeversion import AwesomeVersion
 import jinja2
 import voluptuous as vol

-from ..const import ATTR_SERVERS, DNS_SUFFIX, LogLevel
+from ..bus import EventListener
+from ..const import ATTR_SERVERS, DNS_SUFFIX, BusEvent, LogLevel
 from ..coresys import CoreSys
 from ..dbus.const import MulticastProtocolEnabled
 from ..docker.const import ContainerState
@@ -28,6 +29,7 @@ from ..exceptions import (
     CoreDNSJobError,
     CoreDNSUpdateError,
     DockerError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
```
@ -76,6 +78,12 @@ class PluginDns(PluginBase):
|
|||||||
|
|
||||||
self._hosts: list[HostEntry] = []
|
self._hosts: list[HostEntry] = []
|
||||||
self._loop: bool = False
|
self._loop: bool = False
|
||||||
|
self._cached_locals: list[str] | None = None
|
||||||
|
|
||||||
|
# Debouncing system for rapid local changes
|
||||||
|
self._locals_changed_handle: asyncio.TimerHandle | None = None
|
||||||
|
self._restart_after_locals_change_handle: asyncio.Task | None = None
|
||||||
|
self._connectivity_check_listener: EventListener | None = None
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def hosts(self) -> Path:
|
def hosts(self) -> Path:
|
||||||
@ -90,6 +98,12 @@ class PluginDns(PluginBase):
|
|||||||
@property
|
@property
|
||||||
def locals(self) -> list[str]:
|
def locals(self) -> list[str]:
|
||||||
"""Return list of local system DNS servers."""
|
"""Return list of local system DNS servers."""
|
||||||
|
if self._cached_locals is None:
|
||||||
|
self._cached_locals = self._compute_locals()
|
||||||
|
return self._cached_locals
|
||||||
|
|
||||||
|
def _compute_locals(self) -> list[str]:
|
||||||
|
"""Compute list of local system DNS servers."""
|
||||||
servers: list[str] = []
|
servers: list[str] = []
|
||||||
for server in [
|
for server in [
|
||||||
f"dns://{server!s}" for server in self.sys_host.network.dns_servers
|
f"dns://{server!s}" for server in self.sys_host.network.dns_servers
|
||||||
@ -99,6 +113,52 @@ class PluginDns(PluginBase):
|
|||||||
|
|
||||||
return servers
|
return servers
|
||||||
|
|
||||||
|
async def _on_dns_container_running(self, event: DockerContainerStateEvent) -> None:
|
||||||
|
"""Handle DNS container state change to running and trigger connectivity check."""
|
||||||
|
if event.name == self.instance.name and event.state == ContainerState.RUNNING:
|
||||||
|
# Wait before CoreDNS actually becomes available
|
||||||
|
await asyncio.sleep(5)
|
||||||
|
|
||||||
|
_LOGGER.debug("CoreDNS started, checking connectivity")
|
||||||
|
await self.sys_supervisor.check_connectivity()
|
||||||
|
|
||||||
|
async def _restart_dns_after_locals_change(self) -> None:
|
||||||
|
"""Restart DNS after a debounced delay for local changes."""
|
||||||
|
old_locals = self._cached_locals
|
||||||
|
new_locals = self._compute_locals()
|
||||||
|
if old_locals == new_locals:
|
||||||
|
return
|
||||||
|
|
||||||
|
_LOGGER.debug("DNS locals changed from %s to %s", old_locals, new_locals)
|
||||||
|
self._cached_locals = new_locals
|
||||||
|
if not await self.instance.is_running():
|
||||||
|
return
|
||||||
|
|
||||||
|
await self.restart()
|
||||||
|
self._restart_after_locals_change_handle = None
|
||||||
|
|
||||||
|
def _trigger_restart_dns_after_locals_change(self) -> None:
|
||||||
|
"""Trigger a restart of DNS after local changes."""
|
||||||
|
# Cancel existing restart task if any
|
||||||
|
if self._restart_after_locals_change_handle:
|
||||||
|
self._restart_after_locals_change_handle.cancel()
|
||||||
|
|
||||||
|
self._restart_after_locals_change_handle = self.sys_create_task(
|
||||||
|
self._restart_dns_after_locals_change()
|
||||||
|
)
|
||||||
|
self._locals_changed_handle = None
|
||||||
|
|
||||||
|
def notify_locals_changed(self) -> None:
|
||||||
|
"""Schedule a debounced DNS restart for local changes."""
|
||||||
|
# Cancel existing timer if any
|
||||||
|
if self._locals_changed_handle:
|
||||||
|
self._locals_changed_handle.cancel()
|
||||||
|
|
||||||
|
# Schedule new timer with 1 second delay
|
||||||
|
self._locals_changed_handle = self.sys_call_later(
|
||||||
|
1.0, self._trigger_restart_dns_after_locals_change
|
||||||
|
)
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def servers(self) -> list[str]:
|
def servers(self) -> list[str]:
|
||||||
"""Return list of DNS servers."""
|
"""Return list of DNS servers."""
|
||||||
@ -187,6 +247,13 @@ class PluginDns(PluginBase):
|
|||||||
_LOGGER.error("Can't read hosts.tmpl: %s", err)
|
_LOGGER.error("Can't read hosts.tmpl: %s", err)
|
||||||
|
|
||||||
await self._init_hosts()
|
await self._init_hosts()
|
||||||
|
|
||||||
|
# Register Docker event listener for connectivity checks
|
||||||
|
if not self._connectivity_check_listener:
|
||||||
|
self._connectivity_check_listener = self.sys_bus.register_event(
|
||||||
|
BusEvent.DOCKER_CONTAINER_STATE_CHANGE, self._on_dns_container_running
|
||||||
|
)
|
||||||
|
|
||||||
await super().load()
|
await super().load()
|
||||||
|
|
||||||
# Update supervisor
|
# Update supervisor
|
||||||
@ -217,7 +284,7 @@ class PluginDns(PluginBase):
|
|||||||
"""Update CoreDNS plugin."""
|
"""Update CoreDNS plugin."""
|
||||||
try:
|
try:
|
||||||
await super().update(version)
|
await super().update(version)
|
||||||
except DockerError as err:
|
except (DockerError, PluginError) as err:
|
||||||
raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err
|
raise CoreDNSUpdateError("CoreDNS update failed", _LOGGER.error) from err
|
||||||
|
|
||||||
async def restart(self) -> None:
|
async def restart(self) -> None:
|
||||||
@ -242,6 +309,16 @@ class PluginDns(PluginBase):
|
|||||||
|
|
||||||
async def stop(self) -> None:
|
async def stop(self) -> None:
|
||||||
"""Stop CoreDNS."""
|
"""Stop CoreDNS."""
|
||||||
|
# Cancel any pending locals change timer
|
||||||
|
if self._locals_changed_handle:
|
||||||
|
self._locals_changed_handle.cancel()
|
||||||
|
self._locals_changed_handle = None
|
||||||
|
|
||||||
|
# Wait for any pending restart before stopping
|
||||||
|
if self._restart_after_locals_change_handle:
|
||||||
|
self._restart_after_locals_change_handle.cancel()
|
||||||
|
self._restart_after_locals_change_handle = None
|
||||||
|
|
||||||
_LOGGER.info("Stopping CoreDNS plugin")
|
_LOGGER.info("Stopping CoreDNS plugin")
|
||||||
try:
|
try:
|
||||||
await self.instance.stop()
|
await self.instance.stop()
|
||||||
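The `notify_locals_changed` path is a classic timer debounce: every notification cancels the previous 1-second timer, so a burst of NetworkManager DNS property changes collapses into a single `_compute_locals()` comparison and at most one CoreDNS restart, and `stop()` tears down both the pending timer and any in-flight restart task.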
```diff
@@ -16,6 +16,7 @@ from ..exceptions import (
     MulticastError,
     MulticastJobError,
     MulticastUpdateError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -63,7 +64,7 @@ class PluginMulticast(PluginBase):
         """Update Multicast plugin."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise MulticastUpdateError(
                 "Multicast update failed", _LOGGER.error
             ) from err
```
```diff
@@ -5,7 +5,6 @@ Code: https://github.com/home-assistant/plugin-observer

 import logging
 import secrets
-from typing import cast

 import aiohttp
 from awesomeversion import AwesomeVersion
@@ -20,6 +19,7 @@ from ..exceptions import (
     ObserverError,
     ObserverJobError,
     ObserverUpdateError,
+    PluginError,
 )
 from ..jobs.const import JobExecutionLimit
 from ..jobs.decorator import Job
@@ -59,9 +59,9 @@ class PluginObserver(PluginBase):
         return self.sys_updater.version_observer

     @property
-    def supervisor_token(self) -> str:
+    def supervisor_token(self) -> str | None:
         """Return an access token for the Observer API."""
-        return cast(str, self._data[ATTR_ACCESS_TOKEN])
+        return self._data.get(ATTR_ACCESS_TOKEN)

     @Job(
         name="plugin_observer_update",
@@ -72,7 +72,7 @@ class PluginObserver(PluginBase):
         """Update local HA observer."""
         try:
             await super().update(version)
-        except DockerError as err:
+        except (DockerError, PluginError) as err:
             raise ObserverUpdateError(
                 "HA observer update failed", _LOGGER.error
             ) from err
```
```diff
@@ -21,17 +21,8 @@ async def check_server(
 ) -> None:
     """Check a DNS server and report issues."""
     ip_addr = server[6:] if server.startswith("dns://") else server
-    resolver = DNSResolver(loop=loop, nameservers=[ip_addr])
-    try:
+    async with DNSResolver(loop=loop, nameservers=[ip_addr]) as resolver:
         await resolver.query(DNS_CHECK_HOST, qtype)
-    finally:
-
-        def _delete_resolver():
-            """Close resolver to avoid memory leaks."""
-            nonlocal resolver
-            del resolver
-
-        loop.call_later(1, _delete_resolver)


 def setup(coresys: CoreSys) -> CheckBase:
```
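This swap relies on the aiodns `DNSResolver` supporting the async context-manager protocol, which closes the resolver deterministically on exit; the deleted `call_later`/`del` dance was only an indirect way to drop the reference and hope cleanup ran a second later.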
|
108
supervisor/resolution/checks/duplicate_os_installation.py
Normal file
108
supervisor/resolution/checks/duplicate_os_installation.py
Normal file
@ -0,0 +1,108 @@
|
|||||||
|
"""Helpers to check for duplicate OS installations."""
|
||||||
|
|
||||||
|
import logging
|
||||||
|
|
||||||
|
from ...const import CoreState
|
||||||
|
from ...coresys import CoreSys
|
||||||
|
from ...dbus.udisks2.data import DeviceSpecification
|
||||||
|
from ..const import ContextType, IssueType, UnhealthyReason
|
||||||
|
from .base import CheckBase
|
||||||
|
|
||||||
|
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# Partition labels to check for duplicates (GPT-based installations)
|
||||||
|
HAOS_PARTITIONS = [
|
||||||
|
"hassos-boot",
|
||||||
|
"hassos-kernel0",
|
||||||
|
"hassos-kernel1",
|
||||||
|
"hassos-system0",
|
||||||
|
"hassos-system1",
|
||||||
|
]
|
||||||
|
|
||||||
|
# Partition UUIDs to check for duplicates (MBR-based installations)
|
||||||
|
HAOS_PARTITION_UUIDS = [
|
||||||
|
"48617373-01", # hassos-boot
|
||||||
|
"48617373-05", # hassos-kernel0
|
||||||
|
"48617373-06", # hassos-system0
|
||||||
|
"48617373-07", # hassos-kernel1
|
||||||
|
"48617373-08", # hassos-system1
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def _get_device_specifications():
|
||||||
|
"""Generate DeviceSpecification objects for both GPT and MBR partitions."""
|
||||||
|
# GPT-based installations (partition labels)
|
||||||
|
for partition_label in HAOS_PARTITIONS:
|
||||||
|
yield (
|
||||||
|
DeviceSpecification(partlabel=partition_label),
|
||||||
|
"partition",
|
||||||
|
partition_label,
|
||||||
|
)
|
||||||
|
|
||||||
|
# MBR-based installations (partition UUIDs)
|
||||||
|
for partition_uuid in HAOS_PARTITION_UUIDS:
|
||||||
|
yield (
|
||||||
|
DeviceSpecification(partuuid=partition_uuid),
|
||||||
|
"partition UUID",
|
||||||
|
partition_uuid,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def setup(coresys: CoreSys) -> CheckBase:
|
||||||
|
"""Check setup function."""
|
||||||
|
return CheckDuplicateOSInstallation(coresys)
|
||||||
|
|
||||||
|
|
||||||
|
class CheckDuplicateOSInstallation(CheckBase):
|
||||||
|
"""CheckDuplicateOSInstallation class for check."""
|
||||||
|
|
||||||
|
async def run_check(self) -> None:
|
||||||
|
"""Run check if not affected by issue."""
|
||||||
|
if not self.sys_os.available:
|
||||||
|
_LOGGER.debug(
|
||||||
|
"Skipping duplicate OS installation check, OS is not available"
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
for device_spec, spec_type, identifier in _get_device_specifications():
|
||||||
|
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
|
||||||
|
if resolved and len(resolved) > 1:
|
||||||
|
_LOGGER.warning(
|
||||||
|
"Found duplicate OS installation: %s %s exists on %d devices (%s)",
|
||||||
|
identifier,
|
||||||
|
spec_type,
|
||||||
|
len(resolved),
|
||||||
|
", ".join(str(device.device) for device in resolved),
|
||||||
|
)
|
||||||
|
self.sys_resolution.add_unhealthy_reason(
|
||||||
|
UnhealthyReason.DUPLICATE_OS_INSTALLATION
|
||||||
|
)
|
||||||
|
self.sys_resolution.create_issue(
|
||||||
|
IssueType.DUPLICATE_OS_INSTALLATION,
|
||||||
|
ContextType.SYSTEM,
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
async def approve_check(self, reference: str | None = None) -> bool:
|
||||||
|
"""Approve check if it is affected by issue."""
|
||||||
|
# Check all partitions for duplicates since issue is created without reference
|
||||||
|
for device_spec, _, _ in _get_device_specifications():
|
||||||
|
resolved = await self.sys_dbus.udisks2.resolve_device(device_spec)
|
||||||
|
if resolved and len(resolved) > 1:
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
@property
|
||||||
|
def issue(self) -> IssueType:
|
||||||
|
"""Return a IssueType enum."""
|
||||||
|
return IssueType.DUPLICATE_OS_INSTALLATION
|
||||||
|
|
||||||
|
@property
|
||||||
|
def context(self) -> ContextType:
|
||||||
|
"""Return a ContextType enum."""
|
||||||
|
return ContextType.SYSTEM
|
||||||
|
|
||||||
|
@property
|
||||||
|
def states(self) -> list[CoreState]:
|
||||||
|
"""Return a list of valid states when this check can run."""
|
||||||
|
return [CoreState.SETUP]
|
@ -21,6 +21,9 @@ class CheckMultipleDataDisks(CheckBase):
|
|||||||
|
|
||||||
async def run_check(self) -> None:
|
async def run_check(self) -> None:
|
||||||
"""Run check if not affected by issue."""
|
"""Run check if not affected by issue."""
|
||||||
|
if not self.sys_os.available:
|
||||||
|
return
|
||||||
|
|
||||||
for block_device in self.sys_dbus.udisks2.block_devices:
|
for block_device in self.sys_dbus.udisks2.block_devices:
|
||||||
if self._block_device_has_name_issue(block_device):
|
if self._block_device_has_name_issue(block_device):
|
||||||
self.sys_resolution.create_issue(
|
self.sys_resolution.create_issue(
|
||||||
|
@@ -19,12 +19,12 @@ class CheckNetworkInterfaceIPV4(CheckBase):

     async def run_check(self) -> None:
         """Run check if not affected by issue."""
-        for interface in self.sys_dbus.network.interfaces:
-            if CheckNetworkInterfaceIPV4.check_interface(interface):
+        for inet in self.sys_dbus.network.interfaces:
+            if CheckNetworkInterfaceIPV4.check_interface(inet):
                 self.sys_resolution.create_issue(
                     IssueType.IPV4_CONNECTION_PROBLEM,
                     ContextType.SYSTEM,
-                    interface.name,
+                    inet.interface_name,
                 )

     async def approve_check(self, reference: str | None = None) -> bool:
@@ -64,10 +64,11 @@ class UnhealthyReason(StrEnum):
    """Reasons for unsupported status."""

     DOCKER = "docker"
+    DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
     OSERROR_BAD_MESSAGE = "oserror_bad_message"
     PRIVILEGED = "privileged"
-    SUPERVISOR = "supervisor"
     SETUP = "setup"
+    SUPERVISOR = "supervisor"
     UNTRUSTED = "untrusted"


@@ -83,6 +84,7 @@ class IssueType(StrEnum):
     DEVICE_ACCESS_MISSING = "device_access_missing"
     DISABLED_DATA_DISK = "disabled_data_disk"
     DNS_LOOP = "dns_loop"
+    DUPLICATE_OS_INSTALLATION = "duplicate_os_installation"
     DNS_SERVER_FAILED = "dns_server_failed"
     DNS_SERVER_IPV6_ERROR = "dns_server_ipv6_error"
     DOCKER_CONFIG = "docker_config"
@@ -5,6 +5,8 @@ import logging
 from docker.errors import DockerException
 from requests import RequestException

+from supervisor.docker.const import ADDON_BUILDER_IMAGE
+
 from ...const import CoreState
 from ...coresys import CoreSys
 from ..const import (
@@ -60,9 +62,10 @@ class EvaluateContainer(EvaluateBase):
         """Return a set of all known images."""
         return {
             self.sys_homeassistant.image,
-            self.sys_supervisor.image,
+            self.sys_supervisor.image or self.sys_supervisor.default_image,
             *(plugin.image for plugin in self.sys_plugins.all_plugins if plugin.image),
             *(addon.image for addon in self.sys_addons.installed if addon.image),
+            ADDON_BUILDER_IMAGE,
         }

     async def evaluate(self) -> bool:
@@ -3,6 +3,7 @@
 from abc import ABC, abstractmethod
 import logging

+from ...const import BusEvent
 from ...coresys import CoreSys, CoreSysAttributes
 from ...exceptions import ResolutionFixupError
 from ..const import ContextType, IssueType, SuggestionType
@@ -66,6 +67,11 @@ class FixupBase(ABC, CoreSysAttributes):
         """Return if a fixup can be apply as auto fix."""
         return False

+    @property
+    def bus_event(self) -> BusEvent | None:
+        """Return the BusEvent that triggers this fixup, or None if not event-based."""
+        return None
+
     @property
     def all_suggestions(self) -> list[Suggestion]:
         """List of all suggestions which when applied run this fixup."""
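With this hook on the base class, a fixup opts into event-driven re-runs by overriding two properties, exactly as `FixupStoreExecuteReload` does in the next hunk. A self-contained sketch of the contract (the enum value and class names here are stand-ins, not part of the diff):

    from enum import StrEnum


    class BusEvent(StrEnum):
        """Stand-in for supervisor.const.BusEvent."""

        SUPERVISOR_CONNECTIVITY_CHANGE = "supervisor_connectivity_change"


    class FixupSketch:
        """Hypothetical event-driven fixup mirroring the FixupBase contract."""

        @property
        def auto(self) -> bool:
            # Only auto fixups get bus listeners registered (see resolution module).
            return True

        @property
        def bus_event(self) -> BusEvent | None:
            # Re-run whenever Supervisor connectivity changes.
            return BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE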
@@ -2,6 +2,7 @@

 import logging

+from ...const import BusEvent
 from ...coresys import CoreSys
 from ...exceptions import (
     ResolutionFixupError,
@@ -68,3 +69,8 @@ class FixupStoreExecuteReload(FixupBase):
     def auto(self) -> bool:
         """Return if a fixup can be apply as auto fix."""
         return True
+
+    @property
+    def bus_event(self) -> BusEvent | None:
+        """Return the BusEvent that triggers this fixup, or None if not event-based."""
+        return BusEvent.SUPERVISOR_CONNECTIVITY_CHANGE
@@ -1,6 +1,5 @@
 """Helpers to check and fix issues with free space."""

-from functools import partial
 import logging

 from ...coresys import CoreSys
@@ -12,7 +11,6 @@ from ...exceptions import (
 )
 from ...jobs.const import JobCondition
 from ...jobs.decorator import Job
-from ...utils import remove_folder
 from ..const import ContextType, IssueType, SuggestionType
 from .base import FixupBase

@@ -44,15 +42,8 @@ class FixupStoreExecuteReset(FixupBase):
             _LOGGER.warning("Can't find store %s for fixup", reference)
             return

-        # Local add-ons are not a git repo, can't remove and re-pull
-        if repository.git:
-            await self.sys_run_in_executor(
-                partial(remove_folder, folder=repository.git.path, content_only=True)
-            )
-
-        # Load data again
         try:
-            await repository.load()
+            await repository.reset()
         except StoreError:
             raise ResolutionFixupError() from None

@@ -5,6 +5,7 @@ from typing import Any

 import attr

+from ..bus import EventListener
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import ResolutionError, ResolutionNotFound
 from ..homeassistant.const import WSEvent
@@ -46,6 +47,9 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
         self._unsupported: list[UnsupportedReason] = []
         self._unhealthy: list[UnhealthyReason] = []

+        # Map suggestion UUID to event listeners (list)
+        self._suggestion_listeners: dict[str, list[EventListener]] = {}
+
     async def load_modules(self):
         """Load resolution evaluation, check and fixup modules."""

@@ -105,6 +109,19 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
             )
         self._suggestions.append(suggestion)

+        # Register event listeners if fixups have a bus_event
+        listeners: list[EventListener] = []
+        for fixup in self.fixup.fixes_for_suggestion(suggestion):
+            if fixup.auto and fixup.bus_event:
+
+                def event_callback(reference, fixup=fixup):
+                    return fixup(suggestion)
+
+                listener = self.sys_bus.register_event(fixup.bus_event, event_callback)
+                listeners.append(listener)
+        if listeners:
+            self._suggestion_listeners[suggestion.uuid] = listeners
+
         # Event on suggestion added to issue
         for issue in self.issues_for_suggestion(suggestion):
             self.sys_homeassistant.websocket.supervisor_event(
@@ -233,6 +250,11 @@ class ResolutionManager(FileConfiguration, CoreSysAttributes):
             )
         self._suggestions.remove(suggestion)

+        # Remove event listeners if present
+        listeners = self._suggestion_listeners.pop(suggestion.uuid, [])
+        for listener in listeners:
+            self.sys_bus.remove_listener(listener)
+
         # Event on suggestion removed from issues
         for issue in self.issues_for_suggestion(suggestion):
             self.sys_homeassistant.websocket.supervisor_event(
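The two hunks are symmetric: listeners are registered when a suggestion is added and torn down when it is dismissed, keyed by the suggestion UUID, so no callback can outlive its suggestion. The bookkeeping reduces to a dict of lists; a self-contained sketch (class and names are illustrative, not from the diff):

    class SuggestionListeners:
        """Minimal sketch of the uuid -> listeners bookkeeping in the diff."""

        def __init__(self) -> None:
            self._by_uuid: dict[str, list[object]] = {}

        def add(self, uuid: str, listeners: list[object]) -> None:
            # Only store an entry when there is something to clean up later.
            if listeners:
                self._by_uuid[uuid] = listeners

        def remove(self, uuid: str) -> list[object]:
            # pop() with a default mirrors the diff: dismissing a suggestion
            # that never registered listeners is a harmless no-op.
            return self._by_uuid.pop(uuid, [])


    reg = SuggestionListeners()
    reg.add("abc123", ["listener-1", "listener-2"])
    assert reg.remove("abc123") == ["listener-1", "listener-2"]
    assert reg.remove("abc123") == []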
@@ -4,7 +4,7 @@ import asyncio
 from collections.abc import Awaitable
 import logging

-from ..const import ATTR_REPOSITORIES, URL_HASSIO_ADDONS
+from ..const import ATTR_REPOSITORIES, REPOSITORY_CORE, URL_HASSIO_ADDONS
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import (
     StoreError,
@@ -18,14 +18,10 @@ from ..jobs.decorator import Job, JobCondition
 from ..resolution.const import ContextType, IssueType, SuggestionType
 from ..utils.common import FileConfiguration
 from .addon import AddonStore
-from .const import FILE_HASSIO_STORE, StoreType
+from .const import FILE_HASSIO_STORE, BuiltinRepository
 from .data import StoreData
 from .repository import Repository
-from .validate import (
-    BUILTIN_REPOSITORIES,
-    SCHEMA_STORE_FILE,
-    ensure_builtin_repositories,
-)
+from .validate import DEFAULT_REPOSITORIES, SCHEMA_STORE_FILE

 _LOGGER: logging.Logger = logging.getLogger(__name__)

@@ -56,7 +52,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
         return [
             repository.source
             for repository in self.all
-            if repository.type == StoreType.GIT
+            if repository.slug
+            not in {BuiltinRepository.LOCAL.value, BuiltinRepository.CORE.value}
         ]

     def get(self, slug: str) -> Repository:
@@ -65,20 +62,15 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             raise StoreNotFound()
         return self.repositories[slug]

-    def get_from_url(self, url: str) -> Repository:
-        """Return Repository with slug."""
-        for repository in self.all:
-            if repository.source != url:
-                continue
-            return repository
-        raise StoreNotFound()
-
     async def load(self) -> None:
-        """Start up add-on management."""
-        # Init custom repositories and load add-ons
-        await self.update_repositories(
-            self._data[ATTR_REPOSITORIES], add_with_errors=True
+        """Start up add-on store management."""
+        # Make sure the built-in repositories are all present
+        # This is especially important when adding new built-in repositories
+        # to make sure existing installations have them.
+        all_repositories: set[str] = (
+            set(self._data.get(ATTR_REPOSITORIES, [])) | DEFAULT_REPOSITORIES
         )
+        await self.update_repositories(all_repositories, issue_on_error=True)

     @Job(
         name="store_manager_reload",
@@ -89,7 +81,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
         """Update add-ons from repository and reload list."""
         # Make a copy to prevent race with other tasks
         repositories = [repository] if repository else self.all.copy()
-        results: list[bool | Exception] = await asyncio.gather(
+        results: list[bool | BaseException] = await asyncio.gather(
             *[repo.update() for repo in repositories], return_exceptions=True
         )

@@ -126,16 +118,16 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
     )
     async def add_repository(self, url: str, *, persist: bool = True) -> None:
         """Add a repository."""
-        await self._add_repository(url, persist=persist, add_with_errors=False)
+        await self._add_repository(url, persist=persist, issue_on_error=False)

     async def _add_repository(
-        self, url: str, *, persist: bool = True, add_with_errors: bool = False
+        self, url: str, *, persist: bool = True, issue_on_error: bool = False
     ) -> None:
         """Add a repository."""
         if url == URL_HASSIO_ADDONS:
-            url = StoreType.CORE
+            url = REPOSITORY_CORE

-        repository = Repository(self.coresys, url)
+        repository = Repository.create(self.coresys, url)

         if repository.slug in self.repositories:
             raise StoreError(f"Can't add {url}, already in the store", _LOGGER.error)
@@ -145,7 +137,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             await repository.load()
         except StoreGitCloneError as err:
             _LOGGER.error("Can't retrieve data from %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -158,7 +150,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):

         except StoreGitError as err:
             _LOGGER.error("Can't load data from repository %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -171,7 +163,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):

         except StoreJobError as err:
             _LOGGER.error("Can't add repository %s due to %s", url, err)
-            if add_with_errors:
+            if issue_on_error:
                 self.sys_resolution.create_issue(
                     IssueType.FATAL_ERROR,
                     ContextType.STORE,
@@ -183,8 +175,8 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
             raise err

         else:
-            if not await self.sys_run_in_executor(repository.validate):
-                if add_with_errors:
+            if not await repository.validate():
+                if issue_on_error:
                     _LOGGER.error("%s is not a valid add-on repository", url)
                     self.sys_resolution.create_issue(
                         IssueType.CORRUPT_REPOSITORY,
@@ -213,7 +205,7 @@ class StoreManager(CoreSysAttributes, FileConfiguration):

     async def remove_repository(self, repository: Repository, *, persist: bool = True):
         """Remove a repository."""
-        if repository.source in BUILTIN_REPOSITORIES:
+        if repository.is_builtin:
             raise StoreInvalidAddonRepo(
                 "Can't remove built-in repositories!", logger=_LOGGER.error
             )
@@ -234,40 +226,50 @@ class StoreManager(CoreSysAttributes, FileConfiguration):
     @Job(name="store_manager_update_repositories")
     async def update_repositories(
         self,
-        list_repositories: list[str],
+        list_repositories: set[str],
         *,
-        add_with_errors: bool = False,
+        issue_on_error: bool = False,
         replace: bool = True,
     ):
-        """Add a new custom repository."""
-        new_rep = set(
-            ensure_builtin_repositories(list_repositories)
-            if replace
-            else list_repositories + self.repository_urls
-        )
-        old_rep = {repository.source for repository in self.all}
+        """Update repositories by adding new ones and removing stale ones."""
+        current_repositories = {repository.source for repository in self.all}
+
+        # Determine repositories to add
+        repositories_to_add = list_repositories - current_repositories

         # Add new repositories
         add_errors = await asyncio.gather(
             *[
-                self._add_repository(url, persist=False, add_with_errors=True)
-                if add_with_errors
+                # Use _add_repository to avoid JobCondition.SUPERVISOR_UPDATED,
+                # which would otherwise prevent proper loading of repositories on startup.
+                self._add_repository(url, persist=False, issue_on_error=True)
+                if issue_on_error
                 else self.add_repository(url, persist=False)
-                for url in new_rep - old_rep
+                for url in repositories_to_add
             ],
             return_exceptions=True,
         )

-        # Delete stale repositories
-        remove_errors = await asyncio.gather(
-            *[
-                self.remove_repository(self.get_from_url(url), persist=False)
-                for url in old_rep - new_rep - BUILTIN_REPOSITORIES
-            ],
-            return_exceptions=True,
-        )
+        remove_errors: list[BaseException | None] = []
+        if replace:
+            # Determine repositories to remove
+            repositories_to_remove: list[Repository] = [
+                repository
+                for repository in self.all
+                if repository.source not in list_repositories
+                and not repository.is_builtin
+            ]

-        # Always update data, even there are errors, some changes may have succeeded
+            # Remove repositories
+            remove_errors = await asyncio.gather(
+                *[
+                    self.remove_repository(repository, persist=False)
+                    for repository in repositories_to_remove
+                ],
+                return_exceptions=True,
+            )
+
+        # Always update data, even if there are errors, some changes may have succeeded
         await self.data.update()
         await self._read_addons()
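The rewritten `update_repositories` reduces to plain set arithmetic: additions are the requested set minus what is installed, and removals (only when `replace=True`) are the installed, non-built-in sources missing from the request. A self-contained sketch of that decision logic:

    def plan_changes(
        requested: set[str], installed: set[str], builtin: set[str], replace: bool
    ) -> tuple[set[str], set[str]]:
        """Return (to_add, to_remove), mirroring the set logic in the diff."""
        to_add = requested - installed
        to_remove = (installed - requested - builtin) if replace else set()
        return to_add, to_remove


    # Example: a stale custom repo is dropped, a new one added, built-ins kept.
    assert plan_changes(
        {"core", "local", "https://example.com/new"},
        {"core", "local", "https://example.com/old"},
        {"core", "local"},
        replace=True,
    ) == ({"https://example.com/new"}, {"https://example.com/old"})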
@@ -3,14 +3,35 @@
 from enum import StrEnum
 from pathlib import Path

-from ..const import SUPERVISOR_DATA
+from ..const import (
+    REPOSITORY_CORE,
+    REPOSITORY_LOCAL,
+    SUPERVISOR_DATA,
+    URL_HASSIO_ADDONS,
+)

 FILE_HASSIO_STORE = Path(SUPERVISOR_DATA, "store.json")
+"""Repository type definitions for the store."""


-class StoreType(StrEnum):
-    """Store Types."""
+class BuiltinRepository(StrEnum):
+    """All built-in repositories that come pre-configured."""

-    CORE = "core"
-    LOCAL = "local"
-    GIT = "git"
+    # Local repository (non-git, special handling)
+    LOCAL = REPOSITORY_LOCAL
+
+    # Git-based built-in repositories
+    CORE = REPOSITORY_CORE
+    COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
+    ESPHOME = "https://github.com/esphome/home-assistant-addon"
+    MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
+
+    @property
+    def git_url(self) -> str:
+        """Return the git URL for this repository."""
+        if self == BuiltinRepository.LOCAL:
+            raise RuntimeError("Local repository does not have a git URL")
+        if self == BuiltinRepository.CORE:
+            return URL_HASSIO_ADDONS
+        else:
+            return self.value  # For URL-based repos, value is the URL
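Because the enum values double as identifiers (constants for `local`/`core`, raw URLs otherwise), both membership tests and URL resolution fall out of the enum itself. A usage sketch under assumed constant values (`REPOSITORY_CORE` and `URL_HASSIO_ADDONS` here are placeholders for the Supervisor constants):

    from enum import StrEnum

    REPOSITORY_CORE = "core"  # assumed value for this sketch
    URL_HASSIO_ADDONS = "https://github.com/home-assistant/addons"  # assumed


    class BuiltinRepository(StrEnum):
        CORE = REPOSITORY_CORE
        COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"

        @property
        def git_url(self) -> str:
            return URL_HASSIO_ADDONS if self is BuiltinRepository.CORE else self.value


    # Python 3.12+ checks values in enum containment, which is what the
    # `if repository in BuiltinRepository:` test in validate.py relies on.
    assert "core" in BuiltinRepository
    assert BuiltinRepository("core").git_url == URL_HASSIO_ADDONS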
@@ -25,7 +25,6 @@ from ..exceptions import ConfigurationFileError
 from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from ..utils.common import find_one_filetype, read_json_or_yaml_file
 from ..utils.json import read_json_file
-from .const import StoreType
 from .utils import extract_hash_from_path
 from .validate import SCHEMA_REPOSITORY_CONFIG

@@ -47,7 +46,7 @@ def _read_addon_translations(addon_path: Path) -> dict:
     Should be run in the executor.
     """
     translations_dir = addon_path / "translations"
-    translations = {}
+    translations: dict[str, Any] = {}

     if not translations_dir.exists():
         return translations
@@ -144,7 +143,7 @@ class StoreData(CoreSysAttributes):
         self.addons = addons

     async def _find_addon_configs(
-        self, path: Path, repository: dict
+        self, path: Path, repository: str
     ) -> list[Path] | None:
         """Find add-ons in the path."""

@@ -169,7 +168,7 @@ class StoreData(CoreSysAttributes):
                 self.sys_resolution.add_unhealthy_reason(
                     UnhealthyReason.OSERROR_BAD_MESSAGE
                 )
-            elif path.stem != StoreType.LOCAL:
+            elif repository != REPOSITORY_LOCAL:
                 suggestion = [SuggestionType.EXECUTE_RESET]
                 self.sys_resolution.create_issue(
                     IssueType.CORRUPT_REPOSITORY,
@@ -1,19 +1,20 @@
 """Init file for Supervisor add-on Git."""

 import asyncio
+import errno
 import functools as ft
 import logging
 from pathlib import Path
+from tempfile import TemporaryDirectory

 import git

-from ..const import ATTR_BRANCH, ATTR_URL, URL_HASSIO_ADDONS
+from ..const import ATTR_BRANCH, ATTR_URL
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import StoreGitCloneError, StoreGitError, StoreJobError
 from ..jobs.decorator import Job, JobCondition
-from ..resolution.const import ContextType, IssueType, SuggestionType
+from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
 from ..utils import remove_folder
-from .utils import get_hash_from_repository
 from .validate import RE_REPOSITORY

 _LOGGER: logging.Logger = logging.getLogger(__name__)
@@ -22,8 +23,6 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class GitRepo(CoreSysAttributes):
     """Manage Add-on Git repository."""

-    builtin: bool
-
     def __init__(self, coresys: CoreSys, path: Path, url: str):
         """Initialize Git base wrapper."""
         self.coresys: CoreSys = coresys
@@ -31,7 +30,9 @@ class GitRepo(CoreSysAttributes):
         self.path: Path = path
         self.lock: asyncio.Lock = asyncio.Lock()

-        self.data: dict[str, str] = RE_REPOSITORY.match(url).groupdict()
+        if not (repository := RE_REPOSITORY.match(url)):
+            raise ValueError(f"Invalid url provided for repository GitRepo: {url}")
+        self.data: dict[str, str] = repository.groupdict()

     def __repr__(self) -> str:
         """Return internal representation."""
@@ -85,35 +86,77 @@ class GitRepo(CoreSysAttributes):
     async def clone(self) -> None:
         """Clone git add-on repository."""
         async with self.lock:
-            git_args = {
-                attribute: value
-                for attribute, value in (
-                    ("recursive", True),
-                    ("branch", self.branch),
-                    ("depth", 1),
-                    ("shallow-submodules", True),
-                )
-                if value is not None
-            }
-
-            try:
-                _LOGGER.info(
-                    "Cloning add-on %s repository from %s", self.path, self.url
-                )
-                self.repo = await self.sys_run_in_executor(
-                    ft.partial(
-                        git.Repo.clone_from, self.url, str(self.path), **git_args
-                    )
-                )
-
-            except (
-                git.InvalidGitRepositoryError,
-                git.NoSuchPathError,
-                git.CommandError,
-                UnicodeDecodeError,
-            ) as err:
-                _LOGGER.error("Can't clone %s repository: %s.", self.url, err)
-                raise StoreGitCloneError() from err
+            await self._clone()
+
+    @Job(
+        name="git_repo_reset",
+        conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_SYSTEM],
+        on_condition=StoreJobError,
+    )
+    async def reset(self) -> None:
+        """Reset repository to fix issue with local copy."""
+        # Clone into temporary folder
+        temp_dir = await self.sys_run_in_executor(
+            TemporaryDirectory, dir=self.sys_config.path_tmp
+        )
+        temp_path = Path(temp_dir.name)
+        try:
+            await self._clone(temp_path)
+
+            # Remove corrupted repo and move temp clone to its place
+            def move_clone():
+                remove_folder(folder=self.path)
+                temp_path.rename(self.path)
+
+            async with self.lock:
+                try:
+                    await self.sys_run_in_executor(move_clone)
+                except OSError as err:
+                    if err.errno == errno.EBADMSG:
+                        self.sys_resolution.add_unhealthy_reason(
+                            UnhealthyReason.OSERROR_BAD_MESSAGE
+                        )
+                    raise StoreGitCloneError(
+                        f"Can't move clone due to: {err!s}", _LOGGER.error
+                    ) from err
+        finally:
+            # Clean up temporary directory in case of error
+            # If the folder was moved this will do nothing
+            await self.sys_run_in_executor(temp_dir.cleanup)
+
+    async def _clone(self, path: Path | None = None) -> None:
+        """Clone git add-on repository to location."""
+        path = path or self.path
+        git_args = {
+            attribute: value
+            for attribute, value in (
+                ("recursive", True),
+                ("branch", self.branch),
+                ("depth", 1),
+                ("shallow-submodules", True),
+            )
+            if value is not None
+        }
+
+        try:
+            _LOGGER.info("Cloning add-on %s repository from %s", path, self.url)
+            self.repo = await self.sys_run_in_executor(
+                ft.partial(
+                    git.Repo.clone_from,
+                    self.url,
+                    str(path),
+                    **git_args,  # type: ignore
+                )
+            )
+
+        except (
+            git.InvalidGitRepositoryError,
+            git.NoSuchPathError,
+            git.CommandError,
+            UnicodeDecodeError,
+        ) as err:
+            _LOGGER.error("Can't clone %s repository: %s.", self.url, err)
+            raise StoreGitCloneError() from err

     @Job(
         name="git_repo_pull",
@@ -124,10 +167,10 @@ class GitRepo(CoreSysAttributes):
         """Pull Git add-on repo."""
         if self.lock.locked():
             _LOGGER.warning("There is already a task in progress")
-            return
+            return False
         if self.repo is None:
             _LOGGER.warning("No valid repository for %s", self.url)
-            return
+            return False

         async with self.lock:
             _LOGGER.info("Update add-on %s repository from %s", self.path, self.url)
@@ -146,7 +189,7 @@ class GitRepo(CoreSysAttributes):
                 await self.sys_run_in_executor(
                     ft.partial(
                         self.repo.remotes.origin.fetch,
-                        **{"update-shallow": True, "depth": 1},
+                        **{"update-shallow": True, "depth": 1},  # type: ignore
                     )
                 )

@@ -192,12 +235,17 @@ class GitRepo(CoreSysAttributes):
             )
             raise StoreGitError() from err

-    async def _remove(self):
+    async def remove(self) -> None:
         """Remove a repository."""
         if self.lock.locked():
-            _LOGGER.warning("There is already a task in progress")
+            _LOGGER.warning(
+                "Cannot remove add-on repository %s, there is already a task in progress",
+                self.url,
+            )
             return

+        _LOGGER.info("Removing custom add-on repository %s", self.url)
+
         def _remove_git_dir(path: Path) -> None:
             if not path.is_dir():
                 return
@@ -205,30 +253,3 @@ class GitRepo(CoreSysAttributes):

         async with self.lock:
             await self.sys_run_in_executor(_remove_git_dir, self.path)
-
-
-class GitRepoHassIO(GitRepo):
-    """Supervisor add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys):
-        """Initialize Git Supervisor add-on repository."""
-        super().__init__(coresys, coresys.config.path_addons_core, URL_HASSIO_ADDONS)
-
-
-class GitRepoCustom(GitRepo):
-    """Custom add-ons repository."""
-
-    builtin: bool = False
-
-    def __init__(self, coresys, url):
-        """Initialize custom Git Supervisor add-on repository."""
-        path = Path(coresys.config.path_addons_git, get_hash_from_repository(url))
-
-        super().__init__(coresys, path, url)
-
-    async def remove(self):
-        """Remove a custom repository."""
-        _LOGGER.info("Removing custom add-on repository %s", self.url)
-        await self._remove()
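The new `reset()` is a classic clone-and-swap: build the replacement in a temporary directory, move it over the corrupt copy, and rely on a `finally` cleanup that becomes a no-op once the move succeeded. The same shape, independent of git, with a hypothetical `build` callable standing in for the clone step:

    import shutil
    from pathlib import Path
    from tempfile import TemporaryDirectory


    def rebuild_in_place(target: Path, build) -> None:
        """Rebuild `target` by staging fresh content next to it, then swapping."""
        # Staging inside target.parent keeps both paths on one filesystem,
        # so the final rename is atomic.
        with TemporaryDirectory(dir=target.parent) as tmp:
            staging = Path(tmp) / "staging"
            staging.mkdir()
            build(staging)               # stand-in for the clone in the diff
            shutil.rmtree(target)        # drop the corrupted copy
            staging.rename(target)       # swap the fresh copy into place
        # TemporaryDirectory cleanup still runs, but only removes the now
        # empty tmp dir; on a build failure it removes the partial staging.


    # Example (commented to avoid filesystem side effects):
    # rebuild_in_place(Path("repo"), lambda p: (p / "ok").write_text("1"))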
@@ -1,5 +1,8 @@
 """Represent a Supervisor repository."""

+from __future__ import annotations
+
+from abc import ABC, abstractmethod
 import logging
 from pathlib import Path

@@ -7,12 +10,19 @@ import voluptuous as vol

 from supervisor.utils import get_latest_mtime

-from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_URL, FILE_SUFFIX_CONFIGURATION
+from ..const import (
+    ATTR_MAINTAINER,
+    ATTR_NAME,
+    ATTR_URL,
+    FILE_SUFFIX_CONFIGURATION,
+    REPOSITORY_CORE,
+    REPOSITORY_LOCAL,
+)
 from ..coresys import CoreSys, CoreSysAttributes
 from ..exceptions import ConfigurationFileError, StoreError
 from ..utils.common import read_json_or_yaml_file
-from .const import StoreType
-from .git import GitRepo, GitRepoCustom, GitRepoHassIO
+from .const import BuiltinRepository
+from .git import GitRepo
 from .utils import get_hash_from_repository
 from .validate import SCHEMA_REPOSITORY_CONFIG

@@ -20,27 +30,48 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 UNKNOWN = "unknown"


-class Repository(CoreSysAttributes):
+class Repository(CoreSysAttributes, ABC):
     """Add-on store repository in Supervisor."""

-    def __init__(self, coresys: CoreSys, repository: str):
+    def __init__(self, coresys: CoreSys, repository: str, local_path: Path, slug: str):
         """Initialize add-on store repository object."""
+        self._slug: str = slug
+        self._local_path: Path = local_path
         self.coresys: CoreSys = coresys
-        self.git: GitRepo | None = None
-
         self.source: str = repository
-        if repository == StoreType.LOCAL:
-            self._slug = repository
-            self._type = StoreType.LOCAL
-            self._latest_mtime: float | None = None
-        elif repository == StoreType.CORE:
-            self.git = GitRepoHassIO(coresys)
-            self._slug = repository
-            self._type = StoreType.CORE
-        else:
-            self.git = GitRepoCustom(coresys, repository)
-            self._slug = get_hash_from_repository(repository)
-            self._type = StoreType.GIT
+
+    @staticmethod
+    def create(coresys: CoreSys, repository: str) -> Repository:
+        """Create a repository instance."""
+        if repository in BuiltinRepository:
+            return Repository._create_builtin(coresys, BuiltinRepository(repository))
+        else:
+            return Repository._create_custom(coresys, repository)
+
+    @staticmethod
+    def _create_builtin(coresys: CoreSys, builtin: BuiltinRepository) -> Repository:
+        """Create builtin repository."""
+        if builtin == BuiltinRepository.LOCAL:
+            slug = REPOSITORY_LOCAL
+            local_path = coresys.config.path_addons_local
+            return RepositoryLocal(coresys, local_path, slug)
+        elif builtin == BuiltinRepository.CORE:
+            slug = REPOSITORY_CORE
+            local_path = coresys.config.path_addons_core
+        else:
+            # For other builtin repositories (URL-based)
+            slug = get_hash_from_repository(builtin.value)
+            local_path = coresys.config.path_addons_git / slug
+        return RepositoryGitBuiltin(
+            coresys, builtin.value, local_path, slug, builtin.git_url
+        )
+
+    @staticmethod
+    def _create_custom(coresys: CoreSys, repository: str) -> RepositoryCustom:
+        """Create custom repository."""
+        slug = get_hash_from_repository(repository)
+        local_path = coresys.config.path_addons_git / slug
+        return RepositoryCustom(coresys, repository, local_path, slug)

     def __repr__(self) -> str:
         """Return internal representation."""
@@ -52,9 +83,9 @@ class Repository(CoreSysAttributes, ABC):
         return self._slug

     @property
-    def type(self) -> StoreType:
-        """Return type of the store."""
-        return self._type
+    def local_path(self) -> Path:
+        """Return local path to repository."""
+        return self._local_path

     @property
     def data(self) -> dict:
@@ -76,55 +107,123 @@ class Repository(CoreSysAttributes, ABC):
         """Return url of repository."""
         return self.data.get(ATTR_MAINTAINER, UNKNOWN)

-    def validate(self) -> bool:
-        """Check if store is valid.
-
-        Must be run in executor.
-        """
-        if self.type != StoreType.GIT:
-            return True
-
-        # If exists?
-        for filetype in FILE_SUFFIX_CONFIGURATION:
-            repository_file = Path(self.git.path / f"repository{filetype}")
-            if repository_file.exists():
-                break
-
-        if not repository_file.exists():
-            return False
-
-        # If valid?
-        try:
-            SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
-        except (ConfigurationFileError, vol.Invalid) as err:
-            _LOGGER.warning("Could not validate repository configuration %s", err)
-            return False
-
-        return True
-
-    async def load(self) -> None:
-        """Load addon repository."""
-        if not self.git:
-            self._latest_mtime, _ = await self.sys_run_in_executor(
-                get_latest_mtime, self.sys_config.path_addons_local
-            )
-            return
-        await self.git.load()
-
-    async def update(self) -> bool:
-        """Update add-on repository.
-
-        Returns True if the repository was updated.
-        """
-        if not await self.sys_run_in_executor(self.validate):
-            return False
-
-        if self.type != StoreType.LOCAL:
-            return await self.git.pull()
-
+    @property
+    @abstractmethod
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+
+    @abstractmethod
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+    @abstractmethod
+    async def load(self) -> None:
+        """Load addon repository."""
+
+    @abstractmethod
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+
+    @abstractmethod
+    async def remove(self) -> None:
+        """Remove add-on repository."""
+
+    @abstractmethod
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+
+
+class RepositoryBuiltin(Repository, ABC):
+    """A built-in add-on repository."""
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return True
+
+    async def validate(self) -> bool:
+        """Assume built-in repositories are always valid."""
+        return True
+
+    async def remove(self) -> None:
+        """Raise. Not supported for built-in repositories."""
+        raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
+
+
+class RepositoryGit(Repository, ABC):
+    """A git based add-on repository."""
+
+    _git: GitRepo
+
+    async def load(self) -> None:
+        """Load addon repository."""
+        await self._git.load()
+
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
+        if not await self.validate():
+            return False
+
+        return await self._git.pull()
+
+    async def validate(self) -> bool:
+        """Check if store is valid."""
+
+        def validate_file() -> bool:
+            # If exists?
+            for filetype in FILE_SUFFIX_CONFIGURATION:
+                repository_file = Path(self._git.path / f"repository{filetype}")
+                if repository_file.exists():
+                    break
+
+            if not repository_file.exists():
+                return False
+
+            # If valid?
+            try:
+                SCHEMA_REPOSITORY_CONFIG(read_json_or_yaml_file(repository_file))
+            except (ConfigurationFileError, vol.Invalid) as err:
+                _LOGGER.warning("Could not validate repository configuration %s", err)
+                return False
+
+            return True
+
+        return await self.sys_run_in_executor(validate_file)
+
+    async def reset(self) -> None:
+        """Reset add-on repository to fix corruption issue with files."""
+        await self._git.reset()
+        await self.load()
+
+
+class RepositoryLocal(RepositoryBuiltin):
+    """A local add-on repository."""
+
+    def __init__(self, coresys: CoreSys, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, BuiltinRepository.LOCAL.value, local_path, slug)
+        self._latest_mtime: float | None = None
+
+    async def load(self) -> None:
+        """Load addon repository."""
+        self._latest_mtime, _ = await self.sys_run_in_executor(
+            get_latest_mtime, self.local_path
+        )
+
+    async def update(self) -> bool:
+        """Update add-on repository.
+
+        Returns True if the repository was updated.
+        """
         # Check local modifications
         latest_mtime, modified_path = await self.sys_run_in_executor(
-            get_latest_mtime, self.sys_config.path_addons_local
+            get_latest_mtime, self.local_path
         )
         if self._latest_mtime != latest_mtime:
             _LOGGER.debug(
@@ -137,9 +236,37 @@ class RepositoryLocal(RepositoryBuiltin):

         return False

-    async def remove(self) -> None:
-        """Remove add-on repository."""
-        if self.type != StoreType.GIT:
-            raise StoreError("Can't remove built-in repositories!", _LOGGER.error)
-
-        await self.git.remove()
+    async def reset(self) -> None:
+        """Raise. Not supported for local repository."""
+        raise StoreError(
+            "Can't reset local repository as it is not git based!", _LOGGER.error
+        )
+
+
+class RepositoryGitBuiltin(RepositoryBuiltin, RepositoryGit):
+    """A built-in add-on repository based on git."""
+
+    def __init__(
+        self, coresys: CoreSys, repository: str, local_path: Path, slug: str, url: str
+    ) -> None:
+        """Initialize object."""
+        super().__init__(coresys, repository, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+
+class RepositoryCustom(RepositoryGit):
+    """A custom add-on repository."""
+
+    def __init__(self, coresys: CoreSys, url: str, local_path: Path, slug: str) -> None:
+        """Initialize object."""
+        super().__init__(coresys, url, local_path, slug)
+        self._git = GitRepo(coresys, local_path, url)
+
+    @property
+    def is_builtin(self) -> bool:
+        """Return True if this is a built-in repository."""
+        return False
+
+    async def remove(self) -> None:
+        """Remove add-on repository."""
+        await self._git.remove()
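The refactor replaces the old `StoreType` branching in `__init__` with a small class hierarchy plus a factory: `Repository.create()` picks the concrete class once, and behaviour differences (built-in vs custom, git vs local) live in subclasses rather than repeated `if self.type` checks. The dispatch shape reduced to its essentials (illustrative names only):

    from abc import ABC, abstractmethod


    class Repo(ABC):
        @property
        @abstractmethod
        def is_builtin(self) -> bool: ...

        @staticmethod
        def create(source: str, builtin_sources: set[str]) -> "Repo":
            # Factory mirrors Repository.create(): membership decides the class.
            return BuiltinRepo() if source in builtin_sources else CustomRepo()


    class BuiltinRepo(Repo):
        @property
        def is_builtin(self) -> bool:
            return True


    class CustomRepo(Repo):
        @property
        def is_builtin(self) -> bool:
            return False


    assert Repo.create("core", {"core", "local"}).is_builtin
    assert not Repo.create("https://example.com/repo", {"core", "local"}).is_builtin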
@@ -4,18 +4,7 @@ import voluptuous as vol

 from ..const import ATTR_MAINTAINER, ATTR_NAME, ATTR_REPOSITORIES, ATTR_URL
 from ..validate import RE_REPOSITORY
-from .const import StoreType
+from .const import BuiltinRepository

-URL_COMMUNITY_ADDONS = "https://github.com/hassio-addons/repository"
-URL_ESPHOME = "https://github.com/esphome/home-assistant-addon"
-URL_MUSIC_ASSISTANT = "https://github.com/music-assistant/home-assistant-addon"
-BUILTIN_REPOSITORIES = {
-    StoreType.CORE,
-    StoreType.LOCAL,
-    URL_COMMUNITY_ADDONS,
-    URL_ESPHOME,
-    URL_MUSIC_ASSISTANT,
-}
-
 # pylint: disable=no-value-for-parameter
 SCHEMA_REPOSITORY_CONFIG = vol.Schema(
@@ -28,18 +17,9 @@ SCHEMA_REPOSITORY_CONFIG = vol.Schema(
 )


-def ensure_builtin_repositories(addon_repositories: list[str]) -> list[str]:
-    """Ensure builtin repositories are in list.
-
-    Note: This should not be used in validation as the resulting list is not
-    stable. This can have side effects when comparing data later on.
-    """
-    return list(set(addon_repositories) | BUILTIN_REPOSITORIES)
-
-
 def validate_repository(repository: str) -> str:
     """Validate a valid repository."""
-    if repository in [StoreType.CORE, StoreType.LOCAL]:
+    if repository in BuiltinRepository:
         return repository

     data = RE_REPOSITORY.match(repository)
@@ -55,10 +35,12 @@ def validate_repository(repository: str) -> str:

 repositories = vol.All([validate_repository], vol.Unique())

+DEFAULT_REPOSITORIES = {repo.value for repo in BuiltinRepository}
+
 SCHEMA_STORE_FILE = vol.Schema(
     {
         vol.Optional(
-            ATTR_REPOSITORIES, default=list(BUILTIN_REPOSITORIES)
+            ATTR_REPOSITORIES, default=lambda: list(DEFAULT_REPOSITORIES)
         ): repositories,
     },
     extra=vol.REMOVE_EXTRA,
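One subtle improvement in the last hunk: `default=list(BUILTIN_REPOSITORIES)` builds the default list once at schema-definition time, so every validation falling back to the default shares one list object, while `default=lambda: list(DEFAULT_REPOSITORIES)` builds a fresh list per validation (voluptuous accepts callables as defaults). The shared-mutable-default hazard, illustrated without voluptuous:

    def make_schema(default):
        # `default` may be a value or a zero-argument callable, like vol.Optional.
        def validate(data: dict) -> dict:
            if "repositories" not in data:
                data["repositories"] = default() if callable(default) else default
            return data

        return validate


    shared = make_schema(["core"])           # one list shared by all results
    fresh = make_schema(lambda: ["core"])    # new list per validation

    a, b = shared({}), shared({})
    a["repositories"].append("x")
    assert b["repositories"] == ["core", "x"]   # mutation leaks across results

    c, d = fresh({}), fresh({})
    c["repositories"].append("x")
    assert d["repositories"] == ["core"]        # isolated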
@@ -46,7 +46,7 @@ def _check_connectivity_throttle_period(coresys: CoreSys, *_) -> timedelta:
     if coresys.supervisor.connectivity:
         return timedelta(minutes=10)

-    return timedelta(seconds=30)
+    return timedelta(seconds=5)


 class Supervisor(CoreSysAttributes):
@@ -106,17 +106,22 @@ class Supervisor(CoreSysAttributes):
         return AwesomeVersion(SUPERVISOR_VERSION)

     @property
-    def latest_version(self) -> AwesomeVersion:
-        """Return last available version of Home Assistant."""
+    def latest_version(self) -> AwesomeVersion | None:
+        """Return last available version of Supervisor."""
         return self.sys_updater.version_supervisor

     @property
-    def image(self) -> str:
-        """Return image name of Home Assistant container."""
+    def default_image(self) -> str:
+        """Return the default image for this system."""
+        return f"ghcr.io/home-assistant/{self.sys_arch.supervisor}-hassio-supervisor"
+
+    @property
+    def image(self) -> str | None:
+        """Return image name of Supervisor container."""
         return self.instance.image

     @property
-    def arch(self) -> str:
+    def arch(self) -> str | None:
         """Return arch of the Supervisor container."""
         return self.instance.arch

@@ -192,13 +197,19 @@ class Supervisor(CoreSysAttributes):

     async def update(self, version: AwesomeVersion | None = None) -> None:
         """Update Supervisor version."""
-        version = version or self.latest_version
+        version = version or self.latest_version or self.version

-        if version == self.sys_supervisor.version:
+        if version == self.version:
             raise SupervisorUpdateError(
                 f"Version {version!s} is already installed", _LOGGER.warning
             )

+        image = self.sys_updater.image_supervisor or self.instance.image
+        if not image:
+            raise SupervisorUpdateError(
+                "Cannot determine image to use for supervisor update!", _LOGGER.error
+            )
+
         # First update own AppArmor
         try:
             await self.update_apparmor()
@@ -211,12 +222,8 @@ class Supervisor(CoreSysAttributes):
         # Update container
         _LOGGER.info("Update Supervisor to version %s", version)
         try:
-            await self.instance.install(
-                version, image=self.sys_updater.image_supervisor
-            )
-            await self.instance.update_start_tag(
-                self.sys_updater.image_supervisor, version
-            )
+            await self.instance.install(version, image=image)
+            await self.instance.update_start_tag(image, version)
         except DockerError as err:
             self.sys_resolution.create_issue(
                 IssueType.UPDATE_FAILED, ContextType.SUPERVISOR
@@ -227,7 +234,7 @@ class Supervisor(CoreSysAttributes):
             ) from err

         self.sys_config.version = version
-        self.sys_config.image = self.sys_updater.image_supervisor
+        self.sys_config.image = image
         await self.sys_config.save_data()

         self.sys_create_task(self.sys_core.stop())
@@ -284,14 +291,16 @@ class Supervisor(CoreSysAttributes):
         limit=JobExecutionLimit.THROTTLE,
         throttle_period=_check_connectivity_throttle_period,
     )
-    async def check_connectivity(self):
-        """Check the connection."""
+    async def check_connectivity(self) -> None:
+        """Check the Internet connectivity from Supervisor's point of view."""
         timeout = aiohttp.ClientTimeout(total=10)
         try:
             await self.sys_websession.head(
                 "https://checkonline.home-assistant.io/online.txt", timeout=timeout
             )
-        except (ClientError, TimeoutError):
+        except (ClientError, TimeoutError) as err:
+            _LOGGER.debug("Supervisor Connectivity check failed: %s", err)
             self.connectivity = False
         else:
+            _LOGGER.debug("Supervisor Connectivity check succeeded")
             self.connectivity = True
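`_check_connectivity_throttle_period` is a dynamic throttle: with healthy connectivity the check runs at most every 10 minutes, while a failed state now retries every 5 seconds (down from 30). The same adaptive-interval idea as a self-contained sketch (class and names are illustrative):

    from datetime import datetime, timedelta


    class AdaptiveThrottle:
        """Skip calls arriving before the (state-dependent) period elapsed."""

        def __init__(self, period_fn):
            self._period_fn = period_fn      # returns the current timedelta
            self._last: datetime | None = None

        def ready(self) -> bool:
            now = datetime.now()
            if self._last and now - self._last < self._period_fn():
                return False
            self._last = now
            return True


    connected = True
    throttle = AdaptiveThrottle(
        lambda: timedelta(minutes=10) if connected else timedelta(seconds=5)
    )
    assert throttle.ready() and not throttle.ready()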
@@ -27,7 +27,7 @@ from .const import (
     BusEvent,
     UpdateChannel,
 )
-from .coresys import CoreSysAttributes
+from .coresys import CoreSys, CoreSysAttributes
 from .exceptions import (
     CodeNotaryError,
     CodeNotaryUntrusted,
@@ -45,7 +45,7 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
 class Updater(FileConfiguration, CoreSysAttributes):
     """Fetch last versions from version.json."""

-    def __init__(self, coresys):
+    def __init__(self, coresys: CoreSys) -> None:
         """Initialize updater."""
         super().__init__(FILE_HASSIO_UPDATER, SCHEMA_UPDATER_CONFIG)
         self.coresys = coresys
@ -56,7 +56,7 @@ async def check_port(address: IPv4Address, port: int) -> bool:
|
|||||||
return True
|
return True
|
||||||
|
|
||||||
|
|
||||||
def check_exception_chain(err: Exception, object_type: Any) -> bool:
|
def check_exception_chain(err: BaseException, object_type: Any) -> bool:
|
||||||
"""Check if exception chain include sub exception.
|
"""Check if exception chain include sub exception.
|
||||||
|
|
||||||
It's not full recursive because we need mostly only access to the latest.
|
It's not full recursive because we need mostly only access to the latest.
|
||||||
@ -70,7 +70,7 @@ def check_exception_chain(err: Exception, object_type: Any) -> bool:
|
|||||||
return check_exception_chain(err.__context__, object_type)
|
return check_exception_chain(err.__context__, object_type)
|
||||||
|
|
||||||
|
|
||||||
def get_message_from_exception_chain(err: Exception) -> str:
|
def get_message_from_exception_chain(err: BaseException) -> str:
|
||||||
"""Get the first message from the exception chain."""
|
"""Get the first message from the exception chain."""
|
||||||
if str(err):
|
if str(err):
|
||||||
return str(err)
|
return str(err)
|
||||||
@ -119,8 +119,8 @@ def remove_folder_with_excludes(
|
|||||||
|
|
||||||
Must be run in executor.
|
Must be run in executor.
|
||||||
"""
|
"""
|
||||||
with TemporaryDirectory(dir=tmp_dir) as temp_path:
|
with TemporaryDirectory(dir=tmp_dir) as temp_path_str:
|
||||||
temp_path = Path(temp_path)
|
temp_path = Path(temp_path_str)
|
||||||
moved_files: list[Path] = []
|
moved_files: list[Path] = []
|
||||||
for item in folder.iterdir():
|
for item in folder.iterdir():
|
||||||
if any(item.match(exclude) for exclude in excludes):
|
if any(item.match(exclude) for exclude in excludes):
|
||||||
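Widening `err` from `Exception` to `BaseException` matters because `__cause__` and `__context__` are defined on `BaseException`, so exceptions like `asyncio.CancelledError` can also carry a chain. A rough sketch of the chain-walking mechanism these helpers rely on (not the Supervisor code itself):

```python
def first_message(err: BaseException) -> str:
    """Return the first non-empty message found along the exception chain."""
    if str(err):
        return str(err)
    if err.__cause__:  # explicit chaining: raise X from Y
        return first_message(err.__cause__)
    if err.__context__:  # implicit chaining inside an except block
        return first_message(err.__context__)
    return ""


try:
    try:
        raise ValueError("original problem")
    except ValueError as inner:
        raise RuntimeError() from inner  # no message of its own
except RuntimeError as outer:
    print(first_message(outer))  # -> "original problem"
```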
@@ -87,13 +87,15 @@ class FileConfiguration:
         if not self._file:
             raise RuntimeError("Path to config file must be set!")

-        def _read_data() -> dict[str, Any]:
-            if self._file.is_file():
+        def _read_data(file: Path) -> dict[str, Any]:
+            if file.is_file():
                 with suppress(ConfigurationFileError):
-                    return read_json_or_yaml_file(self._file)
+                    return read_json_or_yaml_file(file)
             return _DEFAULT

-        self._data = await asyncio.get_running_loop().run_in_executor(None, _read_data)
+        self._data = await asyncio.get_running_loop().run_in_executor(
+            None, _read_data, self._file
+        )

         # Validate
         try:
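The `_read_data` change illustrates a general executor rule: pass everything the worker needs as arguments instead of letting the closure read `self` from a thread. A minimal sketch under that assumption (`config.json` is just a placeholder path):

```python
import asyncio
import json
from pathlib import Path


def _read_data(file: Path) -> dict:
    """Blocking I/O: runs in a worker thread and touches no event-loop state."""
    if file.is_file():
        return json.loads(file.read_text())
    return {}


async def load(file: Path) -> dict:
    # Positional arguments after the callable are forwarded into the executor.
    return await asyncio.get_running_loop().run_in_executor(None, _read_data, file)


print(asyncio.run(load(Path("config.json"))))
```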
@@ -3,9 +3,9 @@
 from __future__ import annotations

 import asyncio
-from collections.abc import Awaitable, Callable, Coroutine
+from collections.abc import Awaitable, Callable
 import logging
-from typing import Any, cast
+from typing import Any, Protocol, cast

 from dbus_fast import (
     ErrorType,
@@ -46,6 +46,20 @@ DBUS_INTERFACE_PROPERTIES: str = "org.freedesktop.DBus.Properties"
 DBUS_METHOD_GETALL: str = "org.freedesktop.DBus.Properties.GetAll"


+class GetWithUnpack(Protocol):
+    """Protocol class for dbus get signature."""
+
+    def __call__(self, *, unpack_variants: bool = True) -> Awaitable[Any]:
+        """Signature for dbus get unpack kwarg."""
+
+
+class UpdatePropertiesCallback(Protocol):
+    """Protocol class for update properties callback."""
+
+    def __call__(self, changed: dict[str, Any] | None = None) -> Awaitable[None]:
+        """Signature for an update properties callback function."""
+
+
 class DBus:
     """DBus handler."""

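The two `Protocol` classes exist because `Callable[...]` cannot express keyword-only parameters or defaults. A self-contained sketch of the same technique, with a stand-in getter in place of a real D-Bus proxy:

```python
import asyncio
from collections.abc import Awaitable
from typing import Any, Protocol


class GetWithUnpack(Protocol):
    """Any callable usable as fn(*, unpack_variants: bool = True) -> Awaitable."""

    def __call__(self, *, unpack_variants: bool = True) -> Awaitable[Any]: ...


async def fake_get(*, unpack_variants: bool = True) -> Any:
    # Stand-in for a real D-Bus property getter.
    return {"unpacked": unpack_variants}


def use(getter: GetWithUnpack) -> Awaitable[Any]:
    # A type checker verifies the keyword-only signature structurally;
    # no inheritance from the Protocol is required.
    return getter(unpack_variants=False)


print(asyncio.run(use(fake_get)))  # -> {'unpacked': False}
```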
@ -216,10 +230,17 @@ class DBus:
|
|||||||
return self._proxy_obj is not None
|
return self._proxy_obj is not None
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def properties(self) -> DBusCallWrapper | None:
|
def supports_properties(self) -> bool:
|
||||||
|
"""Return true if properties interface supported by DBus object."""
|
||||||
|
return DBUS_INTERFACE_PROPERTIES in self._proxies
|
||||||
|
|
||||||
|
@property
|
||||||
|
def properties(self) -> DBusCallWrapper:
|
||||||
"""Get properties proxy interface."""
|
"""Get properties proxy interface."""
|
||||||
if DBUS_INTERFACE_PROPERTIES not in self._proxies:
|
if not self.supports_properties:
|
||||||
return None
|
raise DBusInterfaceError(
|
||||||
|
f"DBus Object does not have interface {DBUS_INTERFACE_PROPERTIES}"
|
||||||
|
)
|
||||||
return DBusCallWrapper(self, DBUS_INTERFACE_PROPERTIES)
|
return DBusCallWrapper(self, DBUS_INTERFACE_PROPERTIES)
|
||||||
|
|
||||||
@property
|
@property
|
||||||
@ -231,16 +252,12 @@ class DBus:
|
|||||||
|
|
||||||
async def get_properties(self, interface: str) -> dict[str, Any]:
|
async def get_properties(self, interface: str) -> dict[str, Any]:
|
||||||
"""Read all properties from interface."""
|
"""Read all properties from interface."""
|
||||||
if not self.properties:
|
return await self.properties.call("get_all", interface)
|
||||||
raise DBusInterfaceError(
|
|
||||||
f"DBus Object does not have interface {DBUS_INTERFACE_PROPERTIES}"
|
|
||||||
)
|
|
||||||
return await self.properties.call_get_all(interface)
|
|
||||||
|
|
||||||
def sync_property_changes(
|
def sync_property_changes(
|
||||||
self,
|
self,
|
||||||
interface: str,
|
interface: str,
|
||||||
update: Callable[[dict[str, Any]], Coroutine[None]],
|
update: UpdatePropertiesCallback,
|
||||||
) -> Callable:
|
) -> Callable:
|
||||||
"""Sync property changes for interface with cache.
|
"""Sync property changes for interface with cache.
|
||||||
|
|
||||||
@ -249,7 +266,7 @@ class DBus:
|
|||||||
|
|
||||||
async def sync_property_change(
|
async def sync_property_change(
|
||||||
prop_interface: str, changed: dict[str, Variant], invalidated: list[str]
|
prop_interface: str, changed: dict[str, Variant], invalidated: list[str]
|
||||||
):
|
) -> None:
|
||||||
"""Sync property changes to cache."""
|
"""Sync property changes to cache."""
|
||||||
if interface != prop_interface:
|
if interface != prop_interface:
|
||||||
return
|
return
|
||||||
@ -267,12 +284,12 @@ class DBus:
|
|||||||
else:
|
else:
|
||||||
await update(changed)
|
await update(changed)
|
||||||
|
|
||||||
self.properties.on_properties_changed(sync_property_change)
|
self.properties.on("properties_changed", sync_property_change)
|
||||||
return sync_property_change
|
return sync_property_change
|
||||||
|
|
||||||
def stop_sync_property_changes(self, sync_property_change: Callable):
|
def stop_sync_property_changes(self, sync_property_change: Callable):
|
||||||
"""Stop syncing property changes with cache."""
|
"""Stop syncing property changes with cache."""
|
||||||
self.properties.off_properties_changed(sync_property_change)
|
self.properties.off("properties_changed", sync_property_change)
|
||||||
|
|
||||||
def disconnect(self):
|
def disconnect(self):
|
||||||
"""Remove all active signal listeners."""
|
"""Remove all active signal listeners."""
|
||||||
@ -356,10 +373,11 @@ class DBusCallWrapper:
|
|||||||
if not self._proxy:
|
if not self._proxy:
|
||||||
return DBusCallWrapper(self.dbus, f"{self.interface}.{name}")
|
return DBusCallWrapper(self.dbus, f"{self.interface}.{name}")
|
||||||
|
|
||||||
|
dbus_proxy = self._proxy
|
||||||
dbus_parts = name.split("_", 1)
|
dbus_parts = name.split("_", 1)
|
||||||
dbus_type = dbus_parts[0]
|
dbus_type = dbus_parts[0]
|
||||||
|
|
||||||
if not hasattr(self._proxy, name):
|
if not hasattr(dbus_proxy, name):
|
||||||
message = f"{name} does not exist in D-Bus interface {self.interface}!"
|
message = f"{name} does not exist in D-Bus interface {self.interface}!"
|
||||||
if dbus_type == "call":
|
if dbus_type == "call":
|
||||||
raise DBusInterfaceMethodError(message, _LOGGER.error)
|
raise DBusInterfaceMethodError(message, _LOGGER.error)
|
||||||
@ -383,7 +401,7 @@ class DBusCallWrapper:
|
|||||||
if dbus_type == "on":
|
if dbus_type == "on":
|
||||||
|
|
||||||
def _on_signal(callback: Callable):
|
def _on_signal(callback: Callable):
|
||||||
getattr(self._proxy, name)(callback, unpack_variants=True)
|
getattr(dbus_proxy, name)(callback, unpack_variants=True)
|
||||||
|
|
||||||
# pylint: disable=protected-access
|
# pylint: disable=protected-access
|
||||||
self.dbus._add_signal_monitor(self.interface, dbus_name, callback)
|
self.dbus._add_signal_monitor(self.interface, dbus_name, callback)
|
||||||
@ -392,7 +410,7 @@ class DBusCallWrapper:
|
|||||||
return _on_signal
|
return _on_signal
|
||||||
|
|
||||||
def _off_signal(callback: Callable):
|
def _off_signal(callback: Callable):
|
||||||
getattr(self._proxy, name)(callback, unpack_variants=True)
|
getattr(dbus_proxy, name)(callback, unpack_variants=True)
|
||||||
|
|
||||||
# pylint: disable=protected-access
|
# pylint: disable=protected-access
|
||||||
if (
|
if (
|
||||||
@ -421,7 +439,7 @@ class DBusCallWrapper:
|
|||||||
|
|
||||||
def _method_wrapper(*args, unpack_variants: bool = True) -> Awaitable:
|
def _method_wrapper(*args, unpack_variants: bool = True) -> Awaitable:
|
||||||
return DBus.call_dbus(
|
return DBus.call_dbus(
|
||||||
self._proxy, name, *args, unpack_variants=unpack_variants
|
dbus_proxy, name, *args, unpack_variants=unpack_variants
|
||||||
)
|
)
|
||||||
|
|
||||||
return _method_wrapper
|
return _method_wrapper
|
||||||
@ -429,7 +447,7 @@ class DBusCallWrapper:
|
|||||||
elif dbus_type == "set":
|
elif dbus_type == "set":
|
||||||
|
|
||||||
def _set_wrapper(*args) -> Awaitable:
|
def _set_wrapper(*args) -> Awaitable:
|
||||||
return DBus.call_dbus(self._proxy, name, *args, unpack_variants=False)
|
return DBus.call_dbus(dbus_proxy, name, *args, unpack_variants=False)
|
||||||
|
|
||||||
return _set_wrapper
|
return _set_wrapper
|
||||||
|
|
||||||
@ -448,7 +466,7 @@ class DBusCallWrapper:
|
|||||||
|
|
||||||
def get(self, name: str, *, unpack_variants: bool = True) -> Awaitable[Any]:
|
def get(self, name: str, *, unpack_variants: bool = True) -> Awaitable[Any]:
|
||||||
"""Get a dbus property value."""
|
"""Get a dbus property value."""
|
||||||
return cast(Callable[[bool], Awaitable[Any]], self._dbus_action(f"get_{name}"))(
|
return cast(GetWithUnpack, self._dbus_action(f"get_{name}"))(
|
||||||
unpack_variants=unpack_variants
|
unpack_variants=unpack_variants
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -3,7 +3,6 @@
 import asyncio
 from functools import partial
 import logging
-from typing import Any

 from aiohttp.web_exceptions import HTTPBadGateway, HTTPServiceUnavailable
 import sentry_sdk
@@ -13,6 +12,7 @@ from sentry_sdk.integrations.dedupe import DedupeIntegration
 from sentry_sdk.integrations.excepthook import ExcepthookIntegration
 from sentry_sdk.integrations.logging import LoggingIntegration
 from sentry_sdk.integrations.threading import ThreadingIntegration
+from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

 from ..const import SUPERVISOR_VERSION
 from ..coresys import CoreSys
@@ -27,6 +27,7 @@ def init_sentry(coresys: CoreSys) -> None:
     """Initialize sentry client."""
     if not sentry_sdk.is_initialized():
         _LOGGER.info("Initializing Supervisor Sentry")
+        denylist = DEFAULT_DENYLIST + ["psk", "ssid"]
         # Don't use AsyncioIntegration(). We commonly handle task exceptions
         # outside of tasks. This would cause exception we gracefully handle to
         # be captured by sentry.
@@ -35,6 +36,7 @@ def init_sentry(coresys: CoreSys) -> None:
             before_send=partial(filter_data, coresys),
             auto_enabling_integrations=False,
             default_integrations=False,
+            event_scrubber=EventScrubber(denylist=denylist),
             integrations=[
                 AioHttpIntegration(
                     failed_request_status_codes=frozenset(range(500, 600))
@@ -56,28 +58,6 @@ def init_sentry(coresys: CoreSys) -> None:
         )


-def capture_event(event: dict[str, Any], only_once: str | None = None):
-    """Capture an event and send to sentry.
-
-    Must be called in executor.
-    """
-    if sentry_sdk.is_initialized():
-        if only_once and only_once not in only_once_events:
-            only_once_events.add(only_once)
-            sentry_sdk.capture_event(event)
-
-
-async def async_capture_event(event: dict[str, Any], only_once: str | None = None):
-    """Capture an event and send to sentry.
-
-    Safe to call from event loop.
-    """
-    if sentry_sdk.is_initialized():
-        await asyncio.get_running_loop().run_in_executor(
-            None, capture_event, event, only_once
-        )
-
-
 def capture_exception(err: BaseException) -> None:
     """Capture an exception and send to sentry.

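The scrubber change above removes event fields by key before they leave the device. A hedged sketch of that setup in isolation (the DSN is a placeholder; `DEFAULT_DENYLIST` already covers common secret names such as passwords and tokens):

```python
import sentry_sdk
from sentry_sdk.scrubber import DEFAULT_DENYLIST, EventScrubber

# Extend the stock denylist with Wi-Fi specific fields.
denylist = DEFAULT_DENYLIST + ["psk", "ssid"]

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder
    event_scrubber=EventScrubber(denylist=denylist),
)
```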
@@ -107,17 +107,17 @@ async def journal_logs_reader(
             # followed by a newline as separator to the next field.
             if not data.endswith(b"\n"):
                 raise MalformedBinaryEntryError(
-                    f"Failed parsing binary entry {data}"
+                    f"Failed parsing binary entry {data.decode('utf-8', errors='replace')}"
                 )

-        name = name.decode("utf-8")
-        if name not in formatter_.required_fields:
+        field_name = name.decode("utf-8")
+        if field_name not in formatter_.required_fields:
             # we must read to the end of the entry in the stream, so we can
             # only continue the loop here
             continue

         # strip \n for simple fields before decoding
-        entries[name] = data[:-1].decode("utf-8")
+        entries[field_name] = data[:-1].decode("utf-8")


 def _parse_boot_json(boot_json_bytes: bytes) -> tuple[int, str]:
@@ -9,7 +9,7 @@ from yaml import YAMLError, dump, load
 try:
     from yaml import CDumper as Dumper, CSafeLoader as SafeLoader
 except ImportError:
-    from yaml import Dumper, SafeLoader
+    from yaml import Dumper, SafeLoader  # type: ignore

 from ..exceptions import YamlFileError

@@ -182,7 +182,7 @@ SCHEMA_DOCKER_CONFIG = vol.Schema(
                 }
             }
         ),
-        vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean(),
+        vol.Optional(ATTR_ENABLE_IPV6, default=None): vol.Maybe(vol.Boolean()),
     }
 )

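`vol.Maybe(vol.Boolean())` with `default=None` turns a plain boolean into a tri-state option: `True`, `False`, or `None` for "never configured". A small voluptuous sketch (the key name mirrors the schema above):

```python
import voluptuous as vol

schema = vol.Schema(
    {vol.Optional("enable_ipv6", default=None): vol.Maybe(vol.Boolean())}
)

print(schema({}))                     # {'enable_ipv6': None} -> unset
print(schema({"enable_ipv6": True}))  # {'enable_ipv6': True}
print(schema({"enable_ipv6": None}))  # {'enable_ipv6': None}
```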
@@ -18,6 +18,7 @@ from supervisor.const import AddonBoot, AddonState, BusEvent
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import AddonsError, AddonsJobError, AudioUpdateError
 from supervisor.hardware.helper import HwHelper
@@ -27,7 +28,7 @@ from supervisor.utils.dt import utcnow

 from .test_manager import BOOT_FAIL_ISSUE, BOOT_FAIL_SUGGESTIONS

-from tests.common import get_fixture_path
+from tests.common import get_fixture_path, is_in_list
 from tests.const import TEST_ADDON_SLUG


@@ -208,7 +209,7 @@ async def test_watchdog_on_stop(coresys: CoreSys, install_addon_ssh: Addon) -> N


 async def test_listener_attached_on_install(
-    coresys: CoreSys, mock_amd64_arch_supported: None, repository
+    coresys: CoreSys, mock_amd64_arch_supported: None, test_repository
 ):
     """Test events listener attached on addon install."""
     coresys.hardware.disk.get_disk_free_space = lambda x: 5000
@@ -241,7 +242,7 @@ async def test_listener_attached_on_install(
 )
 async def test_watchdog_during_attach(
     coresys: CoreSys,
-    repository: Repository,
+    test_repository: Repository,
     boot_timedelta: timedelta,
     restart_count: int,
 ):
@@ -709,7 +710,7 @@ async def test_local_example_install(
     coresys: CoreSys,
     container: MagicMock,
     tmp_supervisor_data: Path,
-    repository,
+    test_repository,
     mock_aarch64_arch_supported: None,
 ):
     """Test install of an addon."""
@@ -819,7 +820,7 @@ async def test_paths_cache(coresys: CoreSys, install_addon_ssh: Addon):

     with (
         patch("supervisor.addons.addon.Path.exists", return_value=True),
-        patch("supervisor.store.repository.Repository.update", return_value=True),
+        patch("supervisor.store.repository.RepositoryLocal.update", return_value=True),
     ):
         await coresys.store.reload(coresys.store.get("local"))

@@ -840,10 +841,25 @@ async def test_addon_loads_wrong_image(
     install_addon_ssh.persist["image"] = "local/aarch64-addon-ssh"
     assert install_addon_ssh.image == "local/aarch64-addon-ssh"

-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ) as mock_run_command,
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()

-    container.remove.assert_called_once_with(force=True)
+    container.remove.assert_called_with(force=True, v=True)
+    # one for removing the addon, one for removing the addon builder
+    assert coresys.docker.images.remove.call_count == 2
+
     assert coresys.docker.images.remove.call_args_list[0].kwargs == {
         "image": "local/aarch64-addon-ssh:latest",
         "force": True,
@@ -852,12 +868,18 @@ async def test_addon_loads_wrong_image(
         "image": "local/aarch64-addon-ssh:9.2.1",
         "force": True,
     }
-    coresys.docker.images.build.assert_called_once()
-    assert (
-        coresys.docker.images.build.call_args.kwargs["tag"]
-        == "local/amd64-addon-ssh:9.2.1"
+    mock_run_command.assert_called_once()
+    assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
+    assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
+    command = mock_run_command.call_args.kwargs["command"]
+    assert is_in_list(
+        ["--platform", "linux/amd64"],
+        command,
+    )
+    assert is_in_list(
+        ["--tag", "local/amd64-addon-ssh:9.2.1"],
+        command,
     )
-    assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
     assert install_addon_ssh.image == "local/amd64-addon-ssh"
     coresys.addons.data.save_data.assert_called_once()

@@ -871,15 +893,33 @@ async def test_addon_loads_missing_image(
     """Test addon corrects a missing image on load."""
     coresys.docker.images.get.side_effect = ImageNotFound("missing")

-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ) as mock_run_command,
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()

-    coresys.docker.images.build.assert_called_once()
-    assert (
-        coresys.docker.images.build.call_args.kwargs["tag"]
-        == "local/amd64-addon-ssh:9.2.1"
+    mock_run_command.assert_called_once()
+    assert mock_run_command.call_args.args[0] == "docker.io/library/docker"
+    assert mock_run_command.call_args.kwargs["version"] == "1.0.0-cli"
+    command = mock_run_command.call_args.kwargs["command"]
+    assert is_in_list(
+        ["--platform", "linux/amd64"],
+        command,
+    )
+    assert is_in_list(
+        ["--tag", "local/amd64-addon-ssh:9.2.1"],
+        command,
     )
-    assert coresys.docker.images.build.call_args.kwargs["platform"] == "linux/amd64"
     assert install_addon_ssh.image == "local/amd64-addon-ssh"


@@ -900,7 +940,14 @@ async def test_addon_load_succeeds_with_docker_errors(
     # Image build failure
     coresys.docker.images.build.side_effect = DockerException()
     caplog.clear()
-    with patch("pathlib.Path.is_file", return_value=True):
+    with (
+        patch("pathlib.Path.is_file", return_value=True),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
         await install_addon_ssh.load()
     assert "Can't build local/amd64-addon-ssh:9.2.1" in caplog.text

@ -8,10 +8,13 @@ from supervisor.addons.addon import Addon
|
|||||||
from supervisor.addons.build import AddonBuild
|
from supervisor.addons.build import AddonBuild
|
||||||
from supervisor.coresys import CoreSys
|
from supervisor.coresys import CoreSys
|
||||||
|
|
||||||
|
from tests.common import is_in_list
|
||||||
|
|
||||||
|
|
||||||
async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
|
async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
|
||||||
"""Test platform set in docker args."""
|
"""Test platform set in container build args."""
|
||||||
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
||||||
|
|
||||||
with (
|
with (
|
||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
|
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
|
||||||
@ -19,17 +22,23 @@ async def test_platform_set(coresys: CoreSys, install_addon_ssh: Addon):
|
|||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
|
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
|
||||||
),
|
),
|
||||||
|
patch.object(
|
||||||
|
type(coresys.config),
|
||||||
|
"local_to_extern_path",
|
||||||
|
return_value="/addon/path/on/host",
|
||||||
|
),
|
||||||
):
|
):
|
||||||
args = await coresys.run_in_executor(
|
args = await coresys.run_in_executor(
|
||||||
build.get_docker_args, AwesomeVersion("latest")
|
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
|
||||||
)
|
)
|
||||||
|
|
||||||
assert args["platform"] == "linux/amd64"
|
assert is_in_list(["--platform", "linux/amd64"], args["command"])
|
||||||
|
|
||||||
|
|
||||||
async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
|
async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon):
|
||||||
"""Test platform set in docker args."""
|
"""Test dockerfile path in container build args."""
|
||||||
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
||||||
|
|
||||||
with (
|
with (
|
||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
|
type(coresys.arch), "supported", new=PropertyMock(return_value=["amd64"])
|
||||||
@ -37,12 +46,17 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
|
|||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
|
type(coresys.arch), "default", new=PropertyMock(return_value="amd64")
|
||||||
),
|
),
|
||||||
|
patch.object(
|
||||||
|
type(coresys.config),
|
||||||
|
"local_to_extern_path",
|
||||||
|
return_value="/addon/path/on/host",
|
||||||
|
),
|
||||||
):
|
):
|
||||||
args = await coresys.run_in_executor(
|
args = await coresys.run_in_executor(
|
||||||
build.get_docker_args, AwesomeVersion("latest")
|
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
|
||||||
)
|
)
|
||||||
|
|
||||||
assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile")
|
assert is_in_list(["--file", "Dockerfile"], args["command"])
|
||||||
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
|
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
|
||||||
"fixtures/addons/local/ssh/Dockerfile"
|
"fixtures/addons/local/ssh/Dockerfile"
|
||||||
)
|
)
|
||||||
@ -50,8 +64,9 @@ async def test_dockerfile_evaluation(coresys: CoreSys, install_addon_ssh: Addon)
|
|||||||
|
|
||||||
|
|
||||||
async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: Addon):
|
async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: Addon):
|
||||||
"""Test platform set in docker args."""
|
"""Test dockerfile arch evaluation in container build args."""
|
||||||
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
build = await AddonBuild(coresys, install_addon_ssh).load_config()
|
||||||
|
|
||||||
with (
|
with (
|
||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "supported", new=PropertyMock(return_value=["aarch64"])
|
type(coresys.arch), "supported", new=PropertyMock(return_value=["aarch64"])
|
||||||
@ -59,12 +74,17 @@ async def test_dockerfile_evaluation_arch(coresys: CoreSys, install_addon_ssh: A
|
|||||||
patch.object(
|
patch.object(
|
||||||
type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
|
type(coresys.arch), "default", new=PropertyMock(return_value="aarch64")
|
||||||
),
|
),
|
||||||
|
patch.object(
|
||||||
|
type(coresys.config),
|
||||||
|
"local_to_extern_path",
|
||||||
|
return_value="/addon/path/on/host",
|
||||||
|
),
|
||||||
):
|
):
|
||||||
args = await coresys.run_in_executor(
|
args = await coresys.run_in_executor(
|
||||||
build.get_docker_args, AwesomeVersion("latest")
|
build.get_docker_args, AwesomeVersion("latest"), "test-image:latest"
|
||||||
)
|
)
|
||||||
|
|
||||||
assert args["dockerfile"].endswith("fixtures/addons/local/ssh/Dockerfile.aarch64")
|
assert is_in_list(["--file", "Dockerfile.aarch64"], args["command"])
|
||||||
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
|
assert str(await coresys.run_in_executor(build.get_dockerfile)).endswith(
|
||||||
"fixtures/addons/local/ssh/Dockerfile.aarch64"
|
"fixtures/addons/local/ssh/Dockerfile.aarch64"
|
||||||
)
|
)
|
||||||
@@ -29,7 +29,7 @@ from supervisor.plugins.dns import PluginDns
 from supervisor.resolution.const import ContextType, IssueType, SuggestionType
 from supervisor.resolution.data import Issue, Suggestion
 from supervisor.store.addon import AddonStore
-from supervisor.store.repository import Repository
+from supervisor.store.repository import RepositoryLocal
 from supervisor.utils import check_exception_chain
 from supervisor.utils.common import write_json_file

@@ -67,7 +67,7 @@ async def fixture_remove_wait_boot(coresys: CoreSys) -> AsyncGenerator[None]:

 @pytest.fixture(name="install_addon_example_image")
 async def fixture_install_addon_example_image(
-    coresys: CoreSys, repository
+    coresys: CoreSys, test_repository
 ) -> Generator[Addon]:
     """Install local_example add-on with image."""
     store = coresys.addons.store["local_example_image"]
@@ -442,7 +442,7 @@ async def test_store_data_changes_during_update(
     update_task = coresys.create_task(simulate_update())
     await asyncio.sleep(0)

-    with patch.object(Repository, "update", return_value=True):
+    with patch.object(RepositoryLocal, "update", return_value=True):
         await coresys.store.reload()

     assert "image" not in coresys.store.data.addons["local_ssh"]
@@ -14,6 +14,7 @@ from supervisor.const import AddonState
 from supervisor.coresys import CoreSys
 from supervisor.docker.addon import DockerAddon
 from supervisor.docker.const import ContainerState
+from supervisor.docker.manager import CommandReturn
 from supervisor.docker.monitor import DockerContainerStateEvent
 from supervisor.exceptions import HassioError
 from supervisor.store.repository import Repository
@@ -53,7 +54,7 @@ async def test_addons_info(

 # DEPRECATED - Remove with legacy routing logic on 1/2023
 async def test_addons_info_not_installed(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test getting addon info for not installed addon."""
     resp = await api_client.get(f"/addons/{TEST_ADDON_SLUG}/info")
@@ -239,6 +240,19 @@ async def test_api_addon_rebuild_healthcheck(
         patch.object(Addon, "need_build", new=PropertyMock(return_value=True)),
         patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
         patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
+        patch.object(
+            DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
+        ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
     ):
         resp = await api_client.post("/addons/local_ssh/rebuild")

@@ -247,6 +261,98 @@ async def test_api_addon_rebuild_healthcheck(
     assert resp.status == 200


+async def test_api_addon_rebuild_force(
+    api_client: TestClient,
+    coresys: CoreSys,
+    install_addon_ssh: Addon,
+    container: MagicMock,
+    tmp_supervisor_data,
+    path_extern,
+):
+    """Test rebuilding an image-based addon with force parameter."""
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    container.status = "running"
+    install_addon_ssh.path_data.mkdir()
+    container.attrs["Config"] = {"Healthcheck": "exists"}
+    await install_addon_ssh.load()
+    await asyncio.sleep(0)
+    assert install_addon_ssh.state == AddonState.STARTUP
+
+    state_changes: list[AddonState] = []
+    _container_events_task: asyncio.Task | None = None
+
+    async def container_events():
+        nonlocal state_changes
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.STOPPED)
+        )
+        state_changes.append(install_addon_ssh.state)
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.RUNNING)
+        )
+        state_changes.append(install_addon_ssh.state)
+        await asyncio.sleep(0)
+
+        await install_addon_ssh.container_state_changed(
+            _create_test_event(f"addon_{TEST_ADDON_SLUG}", ContainerState.HEALTHY)
+        )
+
+    async def container_events_task(*args, **kwargs):
+        nonlocal _container_events_task
+        _container_events_task = asyncio.create_task(container_events())
+
+    # Test 1: Without force, image-based addon should fail
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild")
+
+    assert resp.status == 400
+    result = await resp.json()
+    assert "Can't rebuild a image based add-on" in result["message"]
+
+    # Reset state for next test
+    state_changes.clear()
+
+    # Test 2: With force=True, image-based addon should succeed
+    with (
+        patch.object(AddonBuild, "is_valid", return_value=True),
+        patch.object(DockerAddon, "is_running", return_value=False),
+        patch.object(
+            Addon, "need_build", new=PropertyMock(return_value=False)
+        ),  # Image-based
+        patch.object(CpuArch, "supported", new=PropertyMock(return_value=["amd64"])),
+        patch.object(DockerAddon, "run", new=container_events_task),
+        patch.object(
+            coresys.docker,
+            "run_command",
+            new=PropertyMock(return_value=CommandReturn(0, b"Build successful")),
+        ),
+        patch.object(
+            DockerAddon, "healthcheck", new=PropertyMock(return_value={"exists": True})
+        ),
+        patch.object(
+            type(coresys.config),
+            "local_to_extern_path",
+            return_value="/addon/path/on/host",
+        ),
+    ):
+        resp = await api_client.post("/addons/local_ssh/rebuild", json={"force": True})
+
+    assert state_changes == [AddonState.STOPPED, AddonState.STARTUP]
+    assert install_addon_ssh.state == AddonState.STARTED
+    assert resp.status == 200
+
+    await _container_events_task
+
+
 async def test_api_addon_uninstall(
     api_client: TestClient,
     coresys: CoreSys,
@@ -427,7 +533,7 @@ async def test_addon_not_found(
         ("get", "/addons/local_ssh/logs/boots/1/follow", False),
     ],
 )
-@pytest.mark.usefixtures("repository")
+@pytest.mark.usefixtures("test_repository")
 async def test_addon_not_installed(
     api_client: TestClient, method: str, url: str, json_expected: bool
 ):
@@ -3,6 +3,7 @@
 from datetime import UTC, datetime, timedelta
 from unittest.mock import AsyncMock, MagicMock, patch

+from aiohttp.hdrs import WWW_AUTHENTICATE
 from aiohttp.test_utils import TestClient
 import pytest

@@ -119,16 +120,44 @@ async def test_list_users(
     ]


-@pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
+@pytest.mark.parametrize(
+    ("field", "api_client"),
+    [("username", TEST_ADDON_SLUG), ("user", TEST_ADDON_SLUG)],
+    indirect=["api_client"],
+)
 async def test_auth_json_success(
-    api_client: TestClient, mock_check_login: AsyncMock, install_addon_ssh: Addon
+    api_client: TestClient,
+    mock_check_login: AsyncMock,
+    install_addon_ssh: Addon,
+    field: str,
 ):
     """Test successful JSON auth."""
     mock_check_login.return_value = True
-    resp = await api_client.post("/auth", json={"username": "test", "password": "pass"})
+    resp = await api_client.post("/auth", json={field: "test", "password": "pass"})
     assert resp.status == 200


+@pytest.mark.parametrize(
+    ("user", "password", "api_client"),
+    [
+        (None, "password", TEST_ADDON_SLUG),
+        ("user", None, TEST_ADDON_SLUG),
+    ],
+    indirect=["api_client"],
+)
+async def test_auth_json_failure_none(
+    api_client: TestClient,
+    mock_check_login: AsyncMock,
+    install_addon_ssh: Addon,
+    user: str | None,
+    password: str | None,
+):
+    """Test failed JSON auth with none user or password."""
+    mock_check_login.return_value = True
+    resp = await api_client.post("/auth", json={"username": user, "password": password})
+    assert resp.status == 401
+
+
 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
 async def test_auth_json_invalid_credentials(
     api_client: TestClient, mock_check_login: AsyncMock, install_addon_ssh: Addon
@@ -138,8 +167,8 @@ async def test_auth_json_invalid_credentials(
     resp = await api_client.post(
         "/auth", json={"username": "test", "password": "wrong"}
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -148,7 +177,7 @@ async def test_auth_json_empty_body(api_client: TestClient, install_addon_ssh: A
     resp = await api_client.post(
         "/auth", data="", headers={"Content-Type": "application/json"}
     )
-    assert resp.status == 400
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -185,8 +214,8 @@ async def test_auth_urlencoded_failure(
         data="username=test&password=fail",
         headers={"Content-Type": "application/x-www-form-urlencoded"},
     )
-    # Do we really want the API to return 400 here?
-    assert resp.status == 400
+    assert WWW_AUTHENTICATE not in resp.headers
+    assert resp.status == 401


 @pytest.mark.parametrize("api_client", [TEST_ADDON_SLUG], indirect=True)
@@ -197,7 +226,7 @@ async def test_auth_unsupported_content_type(
     resp = await api_client.post(
         "/auth", data="something", headers={"Content-Type": "text/plain"}
     )
-    # This probably should be 400 here for better consistency
+    assert "Basic realm" in resp.headers[WWW_AUTHENTICATE]
     assert resp.status == 401

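The new auth tests lean on pytest's mixed direct/indirect parametrization: names listed in `indirect=[...]` are routed through the like-named fixture via `request.param`, while the rest are passed straight into the test. A minimal sketch of the mechanism (the fixture body is a stand-in, not the real `api_client`):

```python
import pytest


@pytest.fixture
def api_client(request):
    # Stand-in: the real fixture would build a TestClient for this add-on.
    return f"client-for-{request.param}"


@pytest.mark.parametrize(
    ("field", "api_client"),
    [("username", "local_ssh"), ("user", "local_ssh")],
    indirect=["api_client"],
)
def test_fields(api_client, field):
    assert api_client == "client-for-local_ssh"
    assert field in ("username", "user")
```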
@@ -19,7 +19,7 @@ async def test_api_docker_info(api_client: TestClient):

 async def test_api_network_enable_ipv6(coresys: CoreSys, api_client: TestClient):
     """Test setting docker network for enabled IPv6."""
-    assert coresys.docker.config.enable_ipv6 is False
+    assert coresys.docker.config.enable_ipv6 is None

     resp = await api_client.post("/docker/options", json={"enable_ipv6": True})
     assert resp.status == 200
@@ -30,7 +30,7 @@ REPO_URL = "https://github.com/awesome-developer/awesome-repo"
 async def test_api_store(
     api_client: TestClient,
     store_addon: AddonStore,
-    repository: Repository,
+    test_repository: Repository,
     caplog: pytest.LogCaptureFixture,
 ):
     """Test /store REST API."""
@@ -38,7 +38,7 @@ async def test_api_store(
     result = await resp.json()

     assert result["data"]["addons"][-1]["slug"] == store_addon.slug
-    assert result["data"]["repositories"][-1]["slug"] == repository.slug
+    assert result["data"]["repositories"][-1]["slug"] == test_repository.slug

     assert (
         f"Add-on {store_addon.slug} not supported on this platform" not in caplog.text
@@ -73,23 +73,25 @@ async def test_api_store_addons_addon_version(


 @pytest.mark.asyncio
-async def test_api_store_repositories(api_client: TestClient, repository: Repository):
+async def test_api_store_repositories(
+    api_client: TestClient, test_repository: Repository
+):
     """Test /store/repositories REST API."""
     resp = await api_client.get("/store/repositories")
     result = await resp.json()

-    assert result["data"][-1]["slug"] == repository.slug
+    assert result["data"][-1]["slug"] == test_repository.slug


 @pytest.mark.asyncio
 async def test_api_store_repositories_repository(
-    api_client: TestClient, repository: Repository
+    api_client: TestClient, test_repository: Repository
 ):
     """Test /store/repositories/{repository} REST API."""
-    resp = await api_client.get(f"/store/repositories/{repository.slug}")
+    resp = await api_client.get(f"/store/repositories/{test_repository.slug}")
     result = await resp.json()

-    assert result["data"]["slug"] == repository.slug
+    assert result["data"]["slug"] == test_repository.slug


 async def test_api_store_add_repository(
@@ -97,8 +99,8 @@ async def test_api_store_add_repository(
 ) -> None:
     """Test POST /store/repositories REST API."""
     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
     ):
         response = await api_client.post(
             "/store/repositories", json={"repository": REPO_URL}
@@ -106,18 +108,17 @@ async def test_api_store_add_repository(

     assert response.status == 200
     assert REPO_URL in coresys.store.repository_urls
-    assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)


 async def test_api_store_remove_repository(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test DELETE /store/repositories/{repository} REST API."""
-    response = await api_client.delete(f"/store/repositories/{repository.slug}")
+    response = await api_client.delete(f"/store/repositories/{test_repository.slug}")

     assert response.status == 200
-    assert repository.source not in coresys.store.repository_urls
-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.source not in coresys.store.repository_urls
+    assert test_repository.slug not in coresys.store.repositories


 async def test_api_store_update_healthcheck(
@@ -329,7 +330,7 @@ async def test_store_addon_not_found(
         ("post", "/addons/local_ssh/update"),
     ],
 )
-@pytest.mark.usefixtures("repository")
+@pytest.mark.usefixtures("test_repository")
 async def test_store_addon_not_installed(api_client: TestClient, method: str, url: str):
     """Test store addon not installed error."""
     resp = await api_client.request(method, url)
@@ -9,12 +9,7 @@ from blockbuster import BlockingError
 import pytest

 from supervisor.coresys import CoreSys
-from supervisor.exceptions import (
-    HassioError,
-    HostNotSupportedError,
-    StoreGitError,
-    StoreNotFound,
-)
+from supervisor.exceptions import HassioError, HostNotSupportedError, StoreGitError
 from supervisor.store.repository import Repository

 from tests.api import common_test_api_advanced_logs
@@ -38,12 +33,10 @@ async def test_api_supervisor_options_add_repository(
 ):
     """Test add a repository via POST /supervisor/options REST API."""
     assert REPO_URL not in coresys.store.repository_urls
-    with pytest.raises(StoreNotFound):
-        coresys.store.get_from_url(REPO_URL)

     with (
-        patch("supervisor.store.repository.Repository.load", return_value=None),
-        patch("supervisor.store.repository.Repository.validate", return_value=True),
+        patch("supervisor.store.repository.RepositoryGit.load", return_value=None),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=True),
     ):
         response = await api_client.post(
             "/supervisor/options", json={"addons_repositories": [REPO_URL]}
@@ -51,23 +44,22 @@ async def test_api_supervisor_options_add_repository(

     assert response.status == 200
     assert REPO_URL in coresys.store.repository_urls
-    assert isinstance(coresys.store.get_from_url(REPO_URL), Repository)


 async def test_api_supervisor_options_remove_repository(
-    api_client: TestClient, coresys: CoreSys, repository: Repository
+    api_client: TestClient, coresys: CoreSys, test_repository: Repository
 ):
     """Test remove a repository via POST /supervisor/options REST API."""
-    assert repository.source in coresys.store.repository_urls
-    assert repository.slug in coresys.store.repositories
+    assert test_repository.source in coresys.store.repository_urls
+    assert test_repository.slug in coresys.store.repositories

     response = await api_client.post(
         "/supervisor/options", json={"addons_repositories": []}
     )

     assert response.status == 200
-    assert repository.source not in coresys.store.repository_urls
-    assert repository.slug not in coresys.store.repositories
+    assert test_repository.source not in coresys.store.repository_urls
+    assert test_repository.slug not in coresys.store.repositories


 @pytest.mark.parametrize("git_error", [None, StoreGitError()])
@@ -76,9 +68,9 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
 ):
     """Test repositories skipped on error via POST /supervisor/options REST API."""
     with (
-        patch("supervisor.store.repository.Repository.load", side_effect=git_error),
-        patch("supervisor.store.repository.Repository.validate", return_value=False),
-        patch("supervisor.store.repository.Repository.remove"),
+        patch("supervisor.store.repository.RepositoryGit.load", side_effect=git_error),
+        patch("supervisor.store.repository.RepositoryGit.validate", return_value=False),
+        patch("supervisor.store.repository.RepositoryCustom.remove"),
     ):
         response = await api_client.post(
             "/supervisor/options", json={"addons_repositories": [REPO_URL]}
@@ -87,8 +79,6 @@ async def test_api_supervisor_options_repositories_skipped_on_error(
     assert response.status == 400
     assert len(coresys.resolution.suggestions) == 0
     assert REPO_URL not in coresys.store.repository_urls
-    with pytest.raises(StoreNotFound):
-        coresys.store.get_from_url(REPO_URL)


 async def test_api_supervisor_options_repo_error_with_config_change(
@@ -98,7 +88,7 @@ async def test_api_supervisor_options_repo_error_with_config_change(
     assert not coresys.config.debug

     with patch(
-        "supervisor.store.repository.Repository.load", side_effect=StoreGitError()
+        "supervisor.store.repository.RepositoryGit.load", side_effect=StoreGitError()
     ):
         response = await api_client.post(
             "/supervisor/options",
@@ -271,7 +261,7 @@ async def test_api_supervisor_options_country(api_client: TestClient, coresys: C

 @pytest.mark.parametrize(
     ("blockbuster", "option_value", "config_value"),
-    [("no_blockbuster", "on", False), ("no_blockbuster", "on_at_startup", True)],
+    [("no_blockbuster", "on", False), ("no_blockbuster", "on-at-startup", True)],
     indirect=["blockbuster"],
 )
 async def test_api_supervisor_options_blocking_io(
@ -2244,3 +2244,33 @@ async def test_get_upload_path_for_mount_location(
|
|||||||
result = await manager.get_upload_path_for_location(mount)
|
result = await manager.get_upload_path_for_location(mount)
|
||||||
|
|
||||||
assert result == mount.local_where
|
assert result == mount.local_where
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.usefixtures(
|
||||||
|
"supervisor_internet", "tmp_supervisor_data", "path_extern", "install_addon_example"
|
||||||
|
)
|
||||||
|
async def test_backup_addon_skips_uninstalled(
|
||||||
|
coresys: CoreSys, caplog: pytest.LogCaptureFixture
|
||||||
|
):
|
||||||
+    """Test backup skips an add-on that is uninstalled during the backup."""
+    await coresys.core.set_state(CoreState.RUNNING)
+    coresys.hardware.disk.get_disk_free_space = lambda x: 5000
+    assert "local_example" in coresys.addons.local
+    orig_store_addons = Backup.store_addons
+
+    async def mock_store_addons(*args, **kwargs):
+        # Mock an uninstall during the backup process
+        await coresys.addons.uninstall("local_example")
+        await orig_store_addons(*args, **kwargs)
+
+    with patch.object(Backup, "store_addons", new=mock_store_addons):
+        backup: Backup = await coresys.backups.do_backup_partial(
+            addons=["local_example"], folders=["ssl"]
+        )
+
+    assert "local_example" not in coresys.addons.local
+    assert not backup.addons
+    assert (
+        "Skipping backup of add-on local_example because it has been uninstalled"
+        in caplog.text
+    )
@@ -105,6 +105,20 @@ def reset_last_call(func, group: str | None = None) -> None:
     get_job_decorator(func).set_last_call(datetime.min, group)


+def is_in_list(a: list, b: list):
+    """Check if all elements in list a are in list b in order.
+
+    Taken from https://stackoverflow.com/a/69175987/12156188.
+    """
+
+    for c in a:
+        if c in b:
+            b = b[b.index(c) :]
+        else:
+            return False
+    return True
+
+
 class MockResponse:
     """Mock response for aiohttp requests."""

@@ -66,6 +66,7 @@ from .dbus_service_mocks.base import DBusServiceMock
 from .dbus_service_mocks.network_connection_settings import (
     ConnectionSettings as ConnectionSettingsService,
 )
+from .dbus_service_mocks.network_dns_manager import DnsManager as DnsManagerService
 from .dbus_service_mocks.network_manager import NetworkManager as NetworkManagerService

 # pylint: disable=redefined-outer-name, protected-access
@@ -131,7 +132,7 @@ async def docker() -> DockerAPI:

         docker_obj.info.logging = "journald"
         docker_obj.info.storage = "overlay2"
-        docker_obj.info.version = "1.0.0"
+        docker_obj.info.version = AwesomeVersion("1.0.0")

         yield docker_obj

@@ -220,6 +221,14 @@ async def network_manager_service(
     yield network_manager_services["network_manager"]


+@pytest.fixture
+async def dns_manager_service(
+    network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
+) -> AsyncGenerator[DnsManagerService]:
+    """Return DNS Manager service mock."""
+    yield network_manager_services["network_dns_manager"]
+
+
 @pytest.fixture(name="connection_settings_service")
 async def fixture_connection_settings_service(
     network_manager_services: dict[str, DBusServiceMock | dict[str, DBusServiceMock]],
@@ -409,7 +418,7 @@ async def coresys(
     coresys_obj.init_websession = AsyncMock()

     # Don't remove files/folders related to addons and stores
-    with patch("supervisor.store.git.GitRepo._remove"):
+    with patch("supervisor.store.git.GitRepo.remove"):
         yield coresys_obj

     await coresys_obj.dbus.unload()
@@ -582,7 +591,7 @@ def run_supervisor_state(request: pytest.FixtureRequest) -> Generator[MagicMock]


 @pytest.fixture
-def store_addon(coresys: CoreSys, tmp_path, repository):
+def store_addon(coresys: CoreSys, tmp_path, test_repository):
     """Store add-on fixture."""
     addon_obj = AddonStore(coresys, "test_store_addon")

@@ -595,23 +604,16 @@ def store_addon(coresys: CoreSys, tmp_path, repository):


 @pytest.fixture
-async def repository(coresys: CoreSys):
-    """Repository fixture."""
-    coresys.store._data[ATTR_REPOSITORIES].remove(
-        "https://github.com/hassio-addons/repository"
-    )
-    coresys.store._data[ATTR_REPOSITORIES].remove(
-        "https://github.com/esphome/home-assistant-addon"
-    )
+async def test_repository(coresys: CoreSys):
+    """Test add-on store repository fixture."""
     coresys.config._data[ATTR_ADDONS_CUSTOM_LIST] = []

     with (
-        patch("supervisor.store.validate.BUILTIN_REPOSITORIES", {"local", "core"}),
         patch("supervisor.store.git.GitRepo.load", return_value=None),
     ):
         await coresys.store.load()

-    repository_obj = Repository(
+    repository_obj = Repository.create(
         coresys, "https://github.com/awesome-developer/awesome-repo"
     )

@@ -624,7 +626,7 @@ async def repository(coresys: CoreSys):


 @pytest.fixture
-async def install_addon_ssh(coresys: CoreSys, repository):
+async def install_addon_ssh(coresys: CoreSys, test_repository):
     """Install local_ssh add-on."""
     store = coresys.addons.store[TEST_ADDON_SLUG]
     await coresys.addons.data.install(store)
@@ -636,7 +638,7 @@ async def install_addon_ssh(coresys: CoreSys, repository):


 @pytest.fixture
-async def install_addon_example(coresys: CoreSys, repository):
+async def install_addon_example(coresys: CoreSys, test_repository):
     """Install local_example add-on."""
     store = coresys.addons.store["local_example"]
     await coresys.addons.data.install(store)
@@ -762,16 +764,6 @@ async def capture_exception() -> Mock:
         yield capture_exception


-@pytest.fixture
-async def capture_event() -> Mock:
-    """Mock capture event for testing."""
-    with (
-        patch("supervisor.utils.sentry.sentry_sdk.is_initialized", return_value=True),
-        patch("supervisor.utils.sentry.sentry_sdk.capture_event") as capture_event,
-    ):
-        yield capture_event
-
-
 @pytest.fixture
 async def os_available(request: pytest.FixtureRequest) -> None:
     """Mock os as available."""
@@ -55,13 +55,13 @@ async def test_network_interface_ethernet(
     interface = NetworkInterface("/org/freedesktop/NetworkManager/Devices/1")

     assert interface.sync_properties is False
-    assert interface.name is None
+    assert interface.interface_name is None
     assert interface.type is None

     await interface.connect(dbus_session_bus)

     assert interface.sync_properties is True
-    assert interface.name == TEST_INTERFACE_ETH_NAME
+    assert interface.interface_name == TEST_INTERFACE_ETH_NAME
     assert interface.type == DeviceType.ETHERNET
     assert interface.managed is True
     assert interface.wireless is None
@@ -108,7 +108,7 @@ async def test_network_interface_wlan(
     await interface.connect(dbus_session_bus)

     assert interface.sync_properties is True
-    assert interface.name == TEST_INTERFACE_WLAN_NAME
+    assert interface.interface_name == TEST_INTERFACE_WLAN_NAME
     assert interface.type == DeviceType.WIRELESS
     assert interface.wireless is not None
     assert interface.wireless.bitrate == 0

tests/docker/test_manager.py (new file, 136 lines)
@@ -0,0 +1,136 @@
+"""Test Docker manager."""
+
+from unittest.mock import MagicMock
+
+from docker.errors import DockerException
+import pytest
+from requests import RequestException
+
+from supervisor.docker.manager import CommandReturn, DockerAPI
+from supervisor.exceptions import DockerError
+
+
+async def test_run_command_success(docker: DockerAPI):
+    """Test successful command execution."""
+    # Mock container and its methods
+    mock_container = MagicMock()
+    mock_container.wait.return_value = {"StatusCode": 0}
+    mock_container.logs.return_value = b"command output"
+
+    # Mock docker containers.run to return our mock container
+    docker.docker.containers.run.return_value = mock_container
+
+    # Execute the command
+    result = docker.run_command(
+        image="alpine", version="3.18", command="echo hello", stdout=True, stderr=True
+    )
+
+    # Verify the result
+    assert isinstance(result, CommandReturn)
+    assert result.exit_code == 0
+    assert result.output == b"command output"
+
+    # Verify docker.containers.run was called correctly
+    docker.docker.containers.run.assert_called_once_with(
+        "alpine:3.18",
+        command="echo hello",
+        detach=True,
+        network=docker.network.name,
+        use_config_proxy=False,
+        stdout=True,
+        stderr=True,
+    )
+
+    # Verify container cleanup
+    mock_container.remove.assert_called_once_with(force=True, v=True)
+
+
+async def test_run_command_with_defaults(docker: DockerAPI):
+    """Test command execution with default parameters."""
+    # Mock container and its methods
+    mock_container = MagicMock()
+    mock_container.wait.return_value = {"StatusCode": 1}
+    mock_container.logs.return_value = b"error output"
+
+    # Mock docker containers.run to return our mock container
+    docker.docker.containers.run.return_value = mock_container
+
+    # Execute the command with minimal parameters
+    result = docker.run_command(image="ubuntu")
+
+    # Verify the result
+    assert isinstance(result, CommandReturn)
+    assert result.exit_code == 1
+    assert result.output == b"error output"
+
+    # Verify docker.containers.run was called with defaults
+    docker.docker.containers.run.assert_called_once_with(
+        "ubuntu:latest",  # default tag
+        command=None,  # default command
+        detach=True,
+        network=docker.network.name,
+        use_config_proxy=False,
+    )
+
+    # Verify container.logs was called with default stdout/stderr
+    mock_container.logs.assert_called_once_with(stdout=True, stderr=True)
+
+
+async def test_run_command_docker_exception(docker: DockerAPI):
+    """Test command execution when Docker raises an exception."""
+    # Mock docker containers.run to raise DockerException
+    docker.docker.containers.run.side_effect = DockerException("Docker error")
+
+    # Execute the command and expect DockerError
+    with pytest.raises(DockerError, match="Can't execute command: Docker error"):
+        docker.run_command(image="alpine", command="test")
+
+
+async def test_run_command_request_exception(docker: DockerAPI):
+    """Test command execution when requests raises an exception."""
+    # Mock docker containers.run to raise RequestException
+    docker.docker.containers.run.side_effect = RequestException("Connection error")
+
+    # Execute the command and expect DockerError
+    with pytest.raises(DockerError, match="Can't execute command: Connection error"):
+        docker.run_command(image="alpine", command="test")
+
+
+async def test_run_command_cleanup_on_exception(docker: DockerAPI):
+    """Test that container cleanup happens even when an exception occurs."""
+    # Mock container
+    mock_container = MagicMock()
+
+    # Mock docker.containers.run to return container, but container.wait to raise exception
+    docker.docker.containers.run.return_value = mock_container
+    mock_container.wait.side_effect = DockerException("Wait failed")
+
+    # Execute the command and expect DockerError
+    with pytest.raises(DockerError):
+        docker.run_command(image="alpine", command="test")
+
+    # Verify container cleanup still happened
+    mock_container.remove.assert_called_once_with(force=True, v=True)
+
+
+async def test_run_command_custom_stdout_stderr(docker: DockerAPI):
+    """Test command execution with custom stdout/stderr settings."""
+    # Mock container and its methods
+    mock_container = MagicMock()
+    mock_container.wait.return_value = {"StatusCode": 0}
+    mock_container.logs.return_value = b"output"
+
+    # Mock docker containers.run to return our mock container
+    docker.docker.containers.run.return_value = mock_container
+
+    # Execute the command with custom stdout/stderr
+    result = docker.run_command(
+        image="alpine", command="test", stdout=False, stderr=True
+    )
+
+    # Verify container.logs was called with the correct parameters
+    mock_container.logs.assert_called_once_with(stdout=False, stderr=True)
+
+    # Verify the result
+    assert result.exit_code == 0
+    assert result.output == b"output"
@@ -111,3 +111,39 @@ async def test_network_recreation(
     network_params[ATTR_ENABLE_IPV6] = new_enable_ipv6

     mock_create.assert_called_with(**network_params)
+
+
+async def test_network_default_ipv6_for_new_installations():
+    """Test that IPv6 is enabled by default when no user setting is provided (None)."""
+    with (
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker",
+            new_callable=PropertyMock,
+            return_value=MagicMock(),
+            create=True,
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks",
+            new_callable=PropertyMock,
+            return_value=MagicMock(),
+            create=True,
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks.get",
+            side_effect=docker.errors.NotFound("Network not found"),
+        ),
+        patch(
+            "supervisor.docker.network.DockerNetwork.docker.networks.create",
+            return_value=MockNetwork(False, None, True),
+        ) as mock_create,
+    ):
+        # Pass None as enable_ipv6 to simulate no user setting
+        network = (await DockerNetwork(MagicMock()).post_init(None)).network
+
+        assert network is not None
+        assert network.attrs.get(DOCKER_ENABLEIPV6) is True
+
+        # Verify that create was called with IPv6 enabled by default
+        expected_params = DOCKER_NETWORK_PARAMS.copy()
+        expected_params[ATTR_ENABLE_IPV6] = True
+        mock_create.assert_called_with(**expected_params)
@ -200,7 +200,8 @@ async def test_start(
|
|||||||
coresys.docker.containers.get.return_value.stop.assert_not_called()
|
coresys.docker.containers.get.return_value.stop.assert_not_called()
|
||||||
if container_exists:
|
if container_exists:
|
||||||
coresys.docker.containers.get.return_value.remove.assert_called_once_with(
|
coresys.docker.containers.get.return_value.remove.assert_called_once_with(
|
||||||
force=True
|
force=True,
|
||||||
|
v=True,
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
coresys.docker.containers.get.return_value.remove.assert_not_called()
|
coresys.docker.containers.get.return_value.remove.assert_not_called()
|
||||||
@ -397,7 +398,7 @@ async def test_core_loads_wrong_image_for_machine(
|
|||||||
|
|
||||||
await coresys.homeassistant.core.load()
|
await coresys.homeassistant.core.load()
|
||||||
|
|
||||||
container.remove.assert_called_once_with(force=True)
|
container.remove.assert_called_once_with(force=True, v=True)
|
||||||
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
|
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
|
||||||
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
|
"image": "ghcr.io/home-assistant/odroid-n2-homeassistant:latest",
|
||||||
"force": True,
|
"force": True,
|
||||||
@ -444,7 +445,7 @@ async def test_core_loads_wrong_image_for_architecture(
|
|||||||
|
|
||||||
await coresys.homeassistant.core.load()
|
await coresys.homeassistant.core.load()
|
||||||
|
|
||||||
container.remove.assert_called_once_with(force=True)
|
container.remove.assert_called_once_with(force=True, v=True)
|
||||||
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
|
assert coresys.docker.images.remove.call_args_list[0].kwargs == {
|
||||||
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
|
"image": "ghcr.io/home-assistant/qemux86-64-homeassistant:latest",
|
||||||
"force": True,
|
"force": True,
|
||||||
@@ -2,8 +2,9 @@

 # pylint: disable=protected-access
 import asyncio
-from unittest.mock import AsyncMock, PropertyMock, patch
+from unittest.mock import PropertyMock, patch

+from dbus_fast import Variant
 import pytest

 from supervisor.coresys import CoreSys
@@ -87,23 +88,47 @@ async def test_connectivity_events(coresys: CoreSys, force: bool):
     )


-async def test_dns_restart_on_connection_change(
-    coresys: CoreSys, network_manager_service: NetworkManagerService
+async def test_dns_configuration_change_triggers_notify_locals_changed(
+    coresys: CoreSys, dns_manager_service
 ):
-    """Test dns plugin is restarted when primary connection changes."""
+    """Test that DNS configuration changes trigger notify_locals_changed."""
     await coresys.host.network.load()
-    with (
-        patch.object(PluginDns, "restart") as restart,
-        patch.object(
-            PluginDns, "is_running", new_callable=AsyncMock, return_value=True
-        ),
-    ):
-        network_manager_service.emit_properties_changed({"PrimaryConnection": "/"})
-        await network_manager_service.ping()
-        restart.assert_not_called()

-        network_manager_service.emit_properties_changed(
-            {"PrimaryConnection": "/org/freedesktop/NetworkManager/ActiveConnection/2"}
+    with patch.object(PluginDns, "notify_locals_changed") as notify_locals_changed:
+        # Test that non-Configuration changes don't trigger notify_locals_changed
+        dns_manager_service.emit_properties_changed({"Mode": "default"})
+        await dns_manager_service.ping()
+        notify_locals_changed.assert_not_called()
+
+        # Test that Configuration changes trigger notify_locals_changed
+        configuration = [
+            {
+                "nameservers": Variant("as", ["192.168.2.2"]),
+                "domains": Variant("as", ["lan"]),
+                "interface": Variant("s", "eth0"),
+                "priority": Variant("i", 100),
+                "vpn": Variant("b", False),
+            }
+        ]
+
+        dns_manager_service.emit_properties_changed({"Configuration": configuration})
+        await dns_manager_service.ping()
+        notify_locals_changed.assert_called_once()
+
+        notify_locals_changed.reset_mock()
+        # Test that subsequent Configuration changes also trigger notify_locals_changed
+        different_configuration = [
+            {
+                "nameservers": Variant("as", ["8.8.8.8"]),
+                "domains": Variant("as", ["example.com"]),
+                "interface": Variant("s", "wlan0"),
+                "priority": Variant("i", 200),
+                "vpn": Variant("b", True),
+            }
+        ]
+
+        dns_manager_service.emit_properties_changed(
+            {"Configuration": different_configuration}
         )
-        await network_manager_service.ping()
-        restart.assert_called_once()
+        await dns_manager_service.ping()
+        notify_locals_changed.assert_called_once()
@@ -20,7 +20,7 @@ from supervisor.exceptions import (
 from supervisor.host.const import HostFeature
 from supervisor.host.manager import HostManager
 from supervisor.jobs import JobSchedulerOptions, SupervisorJob
-from supervisor.jobs.const import JobExecutionLimit
+from supervisor.jobs.const import JobConcurrency, JobExecutionLimit, JobThrottle
 from supervisor.jobs.decorator import Job, JobCondition
 from supervisor.jobs.job_group import JobGroup
 from supervisor.os.manager import OSManager
@@ -1212,3 +1212,93 @@ async def test_job_scheduled_at(coresys: CoreSys):
     assert job.name == "test_job_scheduled_at_job_task"
     assert job.stage == "work"
     assert job.parent_id is None
+
+
+async def test_concurency_reject_and_throttle(coresys: CoreSys):
+    """Test the concurrency reject and throttle job execution limit."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name="test_concurency_reject_and_throttle_execute",
+            concurrency=JobConcurrency.REJECT,
+            throttle=JobThrottle.THROTTLE,
+            throttle_period=timedelta(hours=1),
+        )
+        async def execute(self, sleep: float):
+            """Execute the class method."""
+            assert not self.run.locked()
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test = TestClass(coresys)
+
+    results = await asyncio.gather(
+        *[test.execute(0.1), test.execute(0.1), test.execute(0.1)],
+        return_exceptions=True,
+    )
+    assert results[0] is None
+    assert isinstance(results[1], JobException)
+    assert isinstance(results[2], JobException)
+    assert test.call == 1
+
+    await asyncio.gather(*[test.execute(0.1)])
+    assert test.call == 1
+
+
+@pytest.mark.parametrize("error", [None, PluginJobError])
+async def test_concurency_reject_and_rate_limit(
+    coresys: CoreSys, error: JobException | None
+):
+    """Test the concurrency reject and rate limit job execution limit."""
+
+    class TestClass:
+        """Test class."""
+
+        def __init__(self, coresys: CoreSys):
+            """Initialize the test class."""
+            self.coresys = coresys
+            self.run = asyncio.Lock()
+            self.call = 0
+
+        @Job(
+            name=f"test_concurency_reject_and_rate_limit_execute_{uuid4().hex}",
+            concurrency=JobConcurrency.REJECT,
+            throttle=JobThrottle.RATE_LIMIT,
+            throttle_period=timedelta(hours=1),
+            throttle_max_calls=1,
+            on_condition=error,
+        )
+        async def execute(self, sleep: float = 0):
+            """Execute the class method."""
+            async with self.run:
+                await asyncio.sleep(sleep)
+                self.call += 1
+
+    test = TestClass(coresys)
+
+    results = await asyncio.gather(
+        *[test.execute(0.1), test.execute(), test.execute()], return_exceptions=True
+    )
+    assert results[0] is None
+    assert isinstance(results[1], JobException)
+    assert isinstance(results[2], JobException)
+    assert test.call == 1
+
+    with pytest.raises(JobException if error is None else error):
+        await test.execute()
+
+    assert test.call == 1
+
+    with time_machine.travel(utcnow() + timedelta(hours=1)):
+        await test.execute()
+
+    assert test.call == 2
Some files were not shown because too many files have changed in this diff.