Mirror of https://github.com/home-assistant/supervisor.git (synced 2026-04-29 20:02:48 +00:00)

Compare commits: 1 commit, fix-slow-r... to fix-error-... (SHA 0786e06eb9)

.devcontainer/devcontainer.json
@@ -1,8 +1,6 @@
{
"name": "Supervisor dev",
"image": "ghcr.io/home-assistant/devcontainer:6-supervisor",
"overrideCommand": false,
"remoteUser": "vscode",
"image": "ghcr.io/home-assistant/devcontainer:2-supervisor",
"containerEnv": {
"WORKSPACE_DIRECTORY": "${containerWorkspaceFolder}"
},
@@ -19,10 +17,10 @@
"charliermarsh.ruff",
"ms-python.pylint",
"ms-python.vscode-pylance",
"visualstudioexptteam.vscodeintellicode",
"redhat.vscode-yaml",
"esbenp.prettier-vscode",
"GitHub.vscode-pull-request-github",
"GitHub.copilot"
"GitHub.vscode-pull-request-github"
],
"settings": {
"python.defaultInterpreterPath": "/home/vscode/.local/ha-venv/bin/python",
@@ -48,8 +46,6 @@
},
"mounts": [
"type=volume,target=/var/lib/docker",
"type=volume,target=/var/lib/containerd",
"type=volume,target=/mnt/supervisor",
"type=tmpfs,target=/tmp"
"type=volume,target=/mnt/supervisor"
]
}

.dockerignore
@@ -1,7 +1,6 @@
# General files
.git
.github
.gitkeep
.devcontainer
.vscode

.github/ISSUE_TEMPLATE.md (vendored, new file, 69 lines)
@@ -0,0 +1,69 @@
---
name: Report a bug with the Supervisor on a supported system
about: Report an issue related to the Home Assistant Supervisor.
labels: bug
---

<!-- READ THIS FIRST:
- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/
- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests
- Provide as many details as possible. Paste logs, configuration samples and code into the backticks. Do not delete any text from this template!
- If you have a problem with an add-on, make an issue in its repository.
-->

<!--
Important: You can only file a bug report for a supported system! If you run an unsupported installation, this report will be closed without comment.
-->

### Describe the issue

<!-- Provide as many details as possible. -->

### Steps to reproduce

<!-- What do you do to encounter the issue? -->

1. ...
2. ...
3. ...

### Environment details

<!-- You can find these details in the system tab of the Supervisor panel, or by using the `ha` CLI. -->

- **Operating System**: xxx
- **Supervisor version**: xxx
- **Home Assistant version**: xxx

### Supervisor logs

<details>
<summary>Supervisor logs</summary>
<!--
- Frontend -> Supervisor -> System
- Or use this command: ha supervisor logs
- Logs are more than just errors; even if you don't think it's important, it is.
-->

```
Paste supervisor logs here

```

</details>

### System Information

<details>
<summary>System Information</summary>
<!--
- Use this command: ha info
-->

```
Paste system info here

```

</details>

.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 9 changed lines)
@@ -1,5 +1,6 @@
name: Report an issue with Home Assistant Supervisor
name: Bug Report Form
description: Report an issue related to the Home Assistant Supervisor.
labels: bug
body:
- type: markdown
attributes:
@@ -8,7 +9,7 @@ body:

If you have a feature or enhancement request, please use the [feature request][fr] section of our [Community Forum][fr].

[fr]: https://github.com/orgs/home-assistant/discussions
[fr]: https://community.home-assistant.io/c/feature-requests
- type: textarea
validations:
required: true
@@ -75,7 +76,7 @@ body:
description: >
The System information can be found in [Settings -> System -> Repairs -> (three dot menu) -> System Information](https://my.home-assistant.io/redirect/system_health/).
Click the copy button at the bottom of the pop-up and paste it here.

[](https://my.home-assistant.io/redirect/system_health/)
- type: textarea
attributes:
@@ -85,7 +86,7 @@ body:
Supervisor diagnostics can be found in [Settings -> Devices & services](https://my.home-assistant.io/redirect/integrations/).
Find the card that says `Home Assistant Supervisor`, open it, and select the three dot menu of the Supervisor integration entry
and select 'Download diagnostics'.

**Please drag-and-drop the downloaded file into the textbox below. Do not copy and paste its contents.**
- type: textarea
attributes:

.github/ISSUE_TEMPLATE/config.yml (vendored, 2 changed lines)
@@ -13,7 +13,7 @@ contact_links:
about: Our documentation has its own issue tracker. Please report issues with the website there.

- name: Request a feature for the Supervisor
url: https://github.com/orgs/home-assistant/discussions
url: https://community.home-assistant.io/c/feature-requests
about: Request a new feature for the Supervisor.

- name: I have a question or need support

.github/ISSUE_TEMPLATE/task.yml (vendored, 53 lines removed)
@@ -1,53 +0,0 @@
name: Task
description: For staff only - Create a task
type: Task
body:
- type: markdown
attributes:
value: |
## ⚠️ RESTRICTED ACCESS

**This form is restricted to Open Home Foundation staff and authorized contributors only.**

If you are a community member wanting to contribute, please:
- For bug reports: Use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)
- For feature requests: Submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)

---

### For authorized contributors

Use this form to create tasks for development work, improvements, or other actionable items that need to be tracked.
- type: textarea
id: description
attributes:
label: Description
description: |
Provide a clear and detailed description of the task that needs to be accomplished.

Be specific about what needs to be done, why it's important, and any constraints or requirements.
placeholder: |
Describe the task, including:
- What needs to be done
- Why this task is needed
- Expected outcome
- Any constraints or requirements
validations:
required: true
- type: textarea
id: additional_context
attributes:
label: Additional context
description: |
Any additional information, links, research, or context that would be helpful.

Include links to related issues, research, prototypes, roadmap opportunities etc.
placeholder: |
- Roadmap opportunity: [link]
- Epic: [link]
- Feature request: [link]
- Technical design documents: [link]
- Prototype/mockup: [link]
- Dependencies: [links]
validations:
required: false

.github/copilot-instructions.md (vendored, 294 lines removed)
@@ -1,294 +0,0 @@
# GitHub Copilot & Claude Code Instructions

This repository contains the Home Assistant Supervisor, a Python 3 based container orchestration and management system for Home Assistant.

## Supervisor Capabilities & Features

### Architecture Overview

Home Assistant Supervisor is a Python-based container orchestration system that communicates with the Docker daemon to manage containerized components. It is tightly integrated with the underlying Operating System and core Operating System components through D-Bus.

**Managed Components:**
- **Home Assistant Core**: The main home automation application running in its own container (also provides the web interface)
- **Add-ons**: Third-party applications and services (each add-on runs in its own container)
- **Plugins**: Built-in system services like DNS, Audio, CLI, Multicast, and Observer
- **Host System Integration**: OS-level operations and hardware access via D-Bus
- **Container Networking**: Internal Docker network management and external connectivity
- **Storage & Backup**: Data persistence and backup management across all containers

**Key Dependencies:**
- **Docker Engine**: Required for all container operations
- **D-Bus**: System-level communication with the host OS
- **systemd**: Service management for host system operations
- **NetworkManager**: Network configuration and management

### Add-on System

**Add-on Architecture**: Add-ons are containerized applications available through add-on stores. Each store contains multiple add-ons, and each add-on includes metadata that tells Supervisor the version, startup configuration (permissions), and available user configurable options. Add-on metadata typically references a container image that Supervisor fetches during installation. If not, the Supervisor builds the container image from a Dockerfile.

**Built-in Stores**: Supervisor comes with several pre-configured stores:
- **Core Add-ons**: Official add-ons maintained by the Home Assistant team
- **Community Add-ons**: Popular third-party add-ons repository
- **ESPHome**: Add-ons for ESPHome ecosystem integration
- **Music Assistant**: Audio and music-related add-ons
- **Local Development**: Local folder for testing custom add-ons during development

**Store Management**: Stores are Git-based repositories that are periodically updated. When updates are available, users receive notifications.

**Add-on Lifecycle**:
- **Installation**: Supervisor fetches or builds container images based on add-on metadata
- **Configuration**: Schema-validated options with integrated UI management
- **Runtime**: Full container lifecycle management, health monitoring
- **Updates**: Automatic or manual version management

### Update System

**Core Components**: Supervisor, Home Assistant Core, HAOS, and built-in plugins receive version information from a central JSON file fetched from `https://version.home-assistant.io/{channel}.json`. The `Updater` class handles fetching this data, validating signatures, and updating internal version tracking.

**Update Channels**: Three channels (`stable`/`beta`/`dev`) determine which version JSON file is fetched, allowing users to opt into different release streams.

**Add-on Updates**: Add-on version information comes from store repository updates, not the central JSON file. When repositories are refreshed via the store system, add-ons compare their local versions against repository versions to determine update availability.
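
For orientation, a minimal sketch of such a version lookup (the aiohttp usage and the JSON key names are illustrative assumptions, not the actual `Updater` implementation):

```python
import asyncio

import aiohttp

VERSION_URL = "https://version.home-assistant.io/{channel}.json"


async def fetch_versions(channel: str = "stable") -> dict:
    """Fetch the central version file for the given update channel."""
    async with aiohttp.ClientSession() as session:
        async with session.get(VERSION_URL.format(channel=channel)) as resp:
            resp.raise_for_status()
            # content_type=None: accept whatever MIME type the server returns.
            return await resp.json(content_type=None)


async def main() -> None:
    data = await fetch_versions("stable")
    # Key names are assumptions for illustration; the real payload may differ.
    print("supervisor:", data.get("supervisor"))
    print("homeassistant:", data.get("homeassistant"))


asyncio.run(main())
```
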
### Backup & Recovery System

**Backup Capabilities**:
- **Full Backups**: Complete system state capture including all add-ons, configuration, and data
- **Partial Backups**: Selective backup of specific components (Home Assistant, add-ons, folders)
- **Encrypted Backups**: Optional backup encryption with user-provided passwords
- **Multiple Storage Locations**: Local storage and remote backup destinations

**Recovery Features**:
- **One-click Restore**: Simple restoration from backup files
- **Selective Restore**: Choose specific components to restore
- **Automatic Recovery**: Self-healing for common system issues

---

## Supervisor Development

### Python Requirements

- **Compatibility**: Python 3.14+
- **Language Features**: Use modern Python features:
  - Type hints with `typing` module
  - f-strings (preferred over `%` or `.format()`)
  - Dataclasses and enum classes
  - Async/await patterns
  - Pattern matching where appropriate
  - Parenthesis-free `except` clauses with comma-separated exceptions (e.g., `except KeyError, TypeError:`) — available since Python 3.14

### Code Quality Standards

- **Formatting**: Ruff
- **Linting**: PyLint and Ruff
- **Type Checking**: MyPy
- **Testing**: pytest with asyncio support
- **Language**: American English for all code, comments, and documentation

### Code Organization

**Core Structure**:
```
supervisor/
├── __init__.py      # Package initialization
├── const.py         # Constants and enums
├── coresys.py       # Core system management
├── bootstrap.py     # System initialization
├── exceptions.py    # Custom exception classes
├── api/             # REST API endpoints
├── addons/          # Add-on management
├── backups/         # Backup system
├── docker/          # Docker integration
├── host/            # Host system interface
├── homeassistant/   # Home Assistant Core management
├── dbus/            # D-Bus system integration
├── hardware/        # Hardware detection and management
├── plugins/         # Plugin system
├── resolution/      # Issue detection and resolution
├── security/        # Security management
├── services/        # Service discovery and management
├── store/           # Add-on store management
└── utils/           # Utility functions
```

**Shared Constants**: Use constants from `supervisor/const.py` instead of hardcoding values. Define new constants following existing patterns and group related constants together.
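
For example, a new constant group might look like the sketch below (the names are invented for illustration and are not existing Supervisor constants):

```python
from enum import StrEnum

# Hypothetical attribute key, grouped with related constants.
ATTR_EXAMPLE_OPTION = "example_option"


class ExampleMode(StrEnum):
    """Hypothetical mode enum following the existing const.py style."""

    AUTO = "auto"
    MANUAL = "manual"
```
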
### Supervisor Architecture Patterns

**CoreSysAttributes Inheritance Pattern**: Nearly all major classes in Supervisor inherit from `CoreSysAttributes`, providing access to the centralized system state via `self.coresys` and convenient `sys_*` properties.

```python
# Standard Supervisor class pattern
class MyManager(CoreSysAttributes):
    """Manage my functionality."""

    def __init__(self, coresys: CoreSys):
        """Initialize manager."""
        self.coresys: CoreSys = coresys
        self._component: MyComponent = MyComponent(coresys)

    @property
    def component(self) -> MyComponent:
        """Return component handler."""
        return self._component

    # Access system components via inherited properties
    async def do_something(self):
        await self.sys_docker.containers.get("my_container")
        self.sys_bus.fire_event(BusEvent.MY_EVENT, {"data": "value"})
```

**Key Inherited Properties from CoreSysAttributes**:
- `self.sys_docker` - Docker API access
- `self.sys_run_in_executor()` - Execute blocking operations
- `self.sys_create_task()` - Create async tasks
- `self.sys_bus` - Event bus for system events
- `self.sys_config` - System configuration
- `self.sys_homeassistant` - Home Assistant Core management
- `self.sys_addons` - Add-on management
- `self.sys_host` - Host system access
- `self.sys_dbus` - D-Bus system interface

**Load Pattern**: Many components implement a `load()` method which effectively initializes the component from external sources (containers, files, D-Bus services).
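
A rough illustration of the load pattern, in the same style as the `MyManager` example above (the class, method, and attribute names are hypothetical, not an existing Supervisor component):

```python
class ExampleTracker(CoreSysAttributes):
    """Hypothetical component illustrating the load() pattern."""

    def __init__(self, coresys: CoreSys):
        """Synchronous setup only: store references, no I/O."""
        self.coresys: CoreSys = coresys
        self._containers: list[str] = []

    async def load(self) -> None:
        """Initialize state from an external source (running containers)."""
        # Blocking Docker call goes through the executor helper.
        containers = await self.sys_run_in_executor(self.sys_docker.containers.list)
        self._containers = [container.name for container in containers]
```
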
### API Development

**REST API Structure**:
- **Base Path**: `/api/` for all endpoints
- **Authentication**: Bearer token authentication
- **Consistent Response Format**: `{"result": "ok", "data": {...}}` or `{"result": "error", "message": "..."}`
- **Validation**: Use voluptuous schemas with `api_validate()`

**Use `@api_process` Decorator**: This decorator handles all standard error handling and response formatting automatically. The decorator catches `APIError`, `HassioError`, and other exceptions, returning appropriate HTTP responses.

```python
from ..api.utils import api_process, api_validate


@api_process
async def backup_full(self, request: web.Request) -> dict[str, Any]:
    """Create full backup."""
    body = await api_validate(SCHEMA_BACKUP_FULL, request)
    job = await self.sys_backups.do_backup_full(**body)
    return {ATTR_JOB_ID: job.uuid}
```

### Docker Integration

- **Container Management**: Use Supervisor's Docker manager instead of direct Docker API
- **Networking**: Supervisor manages internal Docker networks with predefined IP ranges
- **Security**: AppArmor profiles, capability restrictions, and user namespace isolation
- **Health Checks**: Implement health monitoring for all managed containers

### D-Bus Integration

- **Use dbus-fast**: Async D-Bus library for system integration
- **Service Management**: systemd, NetworkManager, hostname management
- **Error Handling**: Wrap D-Bus exceptions in Supervisor-specific exceptions

### Async Programming

- **All I/O operations must be async**: File operations, network calls, subprocess execution
- **Use asyncio patterns**: Prefer `asyncio.gather()` over sequential awaits
- **Executor jobs**: Use `self.sys_run_in_executor()` for blocking operations
- **Two-phase initialization**: `__init__` for sync setup, `post_init()` for async initialization
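
A compact, self-contained sketch of these conventions (the class and paths are invented for the example; Supervisor code would use `self.sys_run_in_executor()` instead of the raw loop call):

```python
import asyncio
from pathlib import Path


class ExamplePlugin:
    """Invented example: two-phase init, executor jobs, and gather."""

    def __init__(self, data_dir: Path):
        # __init__ stays synchronous: cheap attribute setup only.
        self.data_dir = data_dir
        self.present: bool | None = None

    async def post_init(self) -> "ExamplePlugin":
        # Async phase: the blocking filesystem check runs in the executor.
        loop = asyncio.get_running_loop()
        self.present = await loop.run_in_executor(None, self.data_dir.exists)
        return self


async def main() -> None:
    plugins = [ExamplePlugin(Path("/tmp/a")), ExamplePlugin(Path("/tmp/b"))]
    # Prefer gather over sequential awaits for independent work.
    await asyncio.gather(*(plugin.post_init() for plugin in plugins))


asyncio.run(main())
```
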
### Testing

- **Location**: `tests/` directory with module mirroring
- **Fixtures**: Extensive use of pytest fixtures for CoreSys setup
- **Mocking**: Mock external dependencies (Docker, D-Bus, network calls)
- **Coverage**: Minimum 90% test coverage, 100% for security-sensitive code
- **Style**: Use plain `test_` functions, not `Test*` classes — test classes are considered legacy style in this project

### Error Handling

- **Custom Exceptions**: Defined in `exceptions.py` with clear inheritance hierarchy
- **Error Propagation**: Use `from` clause for exception chaining
- **API Errors**: Use `APIError` with appropriate HTTP status codes
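
A self-contained illustration of the chaining convention (the exception classes here are stand-ins, not the real ones from `exceptions.py`):

```python
class ExampleError(Exception):
    """Stand-in for a Supervisor base exception."""


class ExampleAPIError(ExampleError):
    """Stand-in for an API-facing exception."""


def parse_version(raw: str) -> tuple[int, ...]:
    """Parse a version string, wrapping low-level errors in a domain error."""
    try:
        return tuple(int(part) for part in raw.split("."))
    except ValueError as err:
        # Chain with `from` so the original cause stays in the traceback.
        raise ExampleAPIError(f"Invalid version: {raw!r}") from err


print(parse_version("2026.4.1"))
```
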
### Security Considerations

- **Container Security**: AppArmor profiles mandatory for add-ons, minimal capabilities
- **Authentication**: Token-based API authentication with role-based access
- **Data Protection**: Backup encryption, secure secret management, comprehensive input validation

### Development Commands

```bash
# Run tests, adjust paths as necessary
pytest -qsx tests/

# Linting and formatting
ruff check supervisor/
ruff format supervisor/

# Type checking
mypy --ignore-missing-imports supervisor/

# Pre-commit hooks
pre-commit run --all-files
```

Always run the pre-commit hooks at the end of code editing.

### Common Patterns to Follow

**✅ Use These Patterns**:
- Inherit from `CoreSysAttributes` for system access
- Use `@api_process` decorator for API endpoints
- Use `self.sys_run_in_executor()` for blocking operations
- Access Docker via `self.sys_docker` not direct Docker API
- Use constants from `const.py` instead of hardcoding
- Store types in (per-module) `const.py` (e.g. supervisor/store/const.py)
- Use relative imports within the `supervisor/` package (e.g., `from ..docker.manager import ExecReturn`)

**❌ Avoid These Patterns**:
- Direct Docker API usage - use Supervisor's Docker manager
- Blocking operations in async context (use asyncio alternatives)
- Hardcoded values - use constants from `const.py`
- Manual error handling in API endpoints - let `@api_process` handle it
- Absolute imports within the `supervisor/` package (e.g., `from supervisor.docker.manager import ...`) - use relative imports instead

This guide provides the foundation for contributing to Home Assistant Supervisor. Follow these patterns and guidelines to ensure code quality, security, and maintainability.

52
.github/release-drafter.yml
vendored
52
.github/release-drafter.yml
vendored
@@ -5,53 +5,45 @@ categories:
|
||||
- title: ":boom: Breaking Changes"
|
||||
label: "breaking-change"
|
||||
|
||||
- title: ":wrench: Build"
|
||||
label: "build"
|
||||
|
||||
- title: ":boar: Chore"
|
||||
label: "chore"
|
||||
|
||||
- title: ":sparkles: New Features"
|
||||
label: "new-feature"
|
||||
|
||||
- title: ":zap: Performance"
|
||||
label: "performance"
|
||||
|
||||
- title: ":recycle: Refactor"
|
||||
label: "refactor"
|
||||
|
||||
- title: ":green_heart: CI"
|
||||
label: "ci"
|
||||
|
||||
- title: ":bug: Bug Fixes"
|
||||
label: "bugfix"
|
||||
|
||||
- title: ":gem: Style"
|
||||
label: "style"
|
||||
|
||||
- title: ":package: Refactor"
|
||||
label: "refactor"
|
||||
|
||||
- title: ":rocket: Performance"
|
||||
label: "performance"
|
||||
|
||||
- title: ":rotating_light: Test"
|
||||
- title: ":white_check_mark: Test"
|
||||
label: "test"
|
||||
|
||||
- title: ":hammer_and_wrench: Build"
|
||||
label: "build"
|
||||
|
||||
- title: ":gear: CI"
|
||||
label: "ci"
|
||||
|
||||
- title: ":recycle: Chore"
|
||||
label: "chore"
|
||||
|
||||
- title: ":wastebasket: Revert"
|
||||
label: "revert"
|
||||
|
||||
- title: ":arrow_up: Dependency Updates"
|
||||
label: "dependencies"
|
||||
collapse-after: 1
|
||||
|
||||
include-labels:
|
||||
- "breaking-change"
|
||||
- "build"
|
||||
- "chore"
|
||||
- "performance"
|
||||
- "refactor"
|
||||
- "new-feature"
|
||||
- "bugfix"
|
||||
- "style"
|
||||
- "refactor"
|
||||
- "performance"
|
||||
- "test"
|
||||
- "build"
|
||||
- "ci"
|
||||
- "chore"
|
||||
- "revert"
|
||||
- "dependencies"
|
||||
- "test"
|
||||
- "ci"
|
||||
|
||||
template: |
|
||||
|
||||
|
||||
318
.github/workflows/builder.yml
vendored
318
.github/workflows/builder.yml
vendored
@@ -25,20 +25,17 @@ on:
|
||||
push:
|
||||
branches: ["main"]
|
||||
paths:
|
||||
- ".github/workflows/builder.yml"
|
||||
- "rootfs/**"
|
||||
- "supervisor/**"
|
||||
- build.yaml
|
||||
- Dockerfile
|
||||
- requirements.txt
|
||||
- setup.py
|
||||
|
||||
env:
|
||||
DEFAULT_PYTHON: "3.14.3"
|
||||
COSIGN_VERSION: "v2.5.3"
|
||||
DEFAULT_PYTHON: "3.13"
|
||||
BUILD_NAME: supervisor
|
||||
BUILD_TYPE: supervisor
|
||||
IMAGE_NAME: hassio-supervisor
|
||||
ARCHITECTURES: '["amd64", "aarch64"]'
|
||||
|
||||
concurrency:
|
||||
group: "${{ github.workflow }}-${{ github.ref }}"
|
||||
@@ -49,17 +46,21 @@ jobs:
|
||||
name: Initialize build
|
||||
runs-on: ubuntu-latest
|
||||
outputs:
|
||||
architectures: ${{ steps.info.outputs.architectures }}
|
||||
version: ${{ steps.version.outputs.version }}
|
||||
channel: ${{ steps.version.outputs.channel }}
|
||||
publish: ${{ steps.version.outputs.publish }}
|
||||
build_wheels: ${{ steps.requirements.outputs.build_wheels }}
|
||||
matrix: ${{ steps.matrix.outputs.matrix }}
|
||||
requirements: ${{ steps.requirements.outputs.changed }}
|
||||
steps:
|
||||
- name: Checkout the repository
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Get information
|
||||
id: info
|
||||
uses: home-assistant/actions/helpers/info@master
|
||||
|
||||
- name: Get version
|
||||
id: version
|
||||
uses: home-assistant/actions/helpers/version@master
|
||||
@@ -68,108 +69,71 @@ jobs:
|
||||
|
||||
- name: Get changed files
|
||||
id: changed_files
|
||||
if: github.event_name == 'pull_request' || github.event_name == 'push'
|
||||
uses: masesgroup/retrieve-changed-files@45a8b3b496d2d6037cbd553e8a3450989b9384a2 # v4.0.0
|
||||
if: steps.version.outputs.publish == 'false'
|
||||
uses: masesgroup/retrieve-changed-files@v3.0.0
|
||||
|
||||
- name: Check if requirements files changed
|
||||
id: requirements
|
||||
run: |
|
||||
# No wheels build necessary for releases
|
||||
if [[ "${{ github.event_name }}" == "release" ]]; then
|
||||
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
|
||||
# Always build wheels for manual dispatches
|
||||
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
|
||||
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
|
||||
elif [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements\.txt|\.github/workflows/builder\.yml) ]]; then
|
||||
echo "build_wheels=true" >> "$GITHUB_OUTPUT"
|
||||
else
|
||||
echo "build_wheels=false" >> "$GITHUB_OUTPUT"
|
||||
if [[ "${{ steps.changed_files.outputs.all }}" =~ (requirements.txt|build.yaml) ]]; then
|
||||
echo "changed=true" >> "$GITHUB_OUTPUT"
|
||||
fi
|
||||
|
||||
- name: Get build matrix
|
||||
id: matrix
|
||||
uses: home-assistant/builder/actions/prepare-multi-arch-matrix@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
|
||||
with:
|
||||
architectures: ${{ env.ARCHITECTURES }}
|
||||
image-name: ${{ env.IMAGE_NAME }}
|
||||
|
||||
build:
|
||||
name: Build ${{ matrix.arch }} supervisor
|
||||
needs: init
|
||||
runs-on: ${{ matrix.os }}
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
id-token: write
|
||||
packages: write
|
||||
strategy:
|
||||
matrix: ${{ fromJSON(needs.init.outputs.matrix) }}
|
||||
env:
|
||||
WHEELS_ABI: cp314
|
||||
WHEELS_TAG: musllinux_1_2
|
||||
WHEELS_APK_DEPS: "libffi-dev;openssl-dev;yaml-dev"
|
||||
WHEELS_SKIP_BINARY: aiohttp
|
||||
matrix:
|
||||
arch: ${{ fromJson(needs.init.outputs.architectures) }}
|
||||
steps:
|
||||
- name: Checkout the repository
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Write env-file for wheels build
|
||||
if: needs.init.outputs.build_wheels == 'true'
|
||||
- name: Write env-file
|
||||
if: needs.init.outputs.requirements == 'true'
|
||||
run: |
|
||||
(
|
||||
# Fix out of memory issues with rust
|
||||
echo "CARGO_NET_GIT_FETCH_WITH_CLI=true"
|
||||
) > .env_file
|
||||
|
||||
- name: Build and publish wheels
|
||||
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'true'
|
||||
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
|
||||
- name: Build wheels
|
||||
if: needs.init.outputs.requirements == 'true'
|
||||
uses: home-assistant/wheels@2024.11.0
|
||||
with:
|
||||
abi: cp313
|
||||
tag: musllinux_1_2
|
||||
arch: ${{ matrix.arch }}
|
||||
wheels-key: ${{ secrets.WHEELS_KEY }}
|
||||
abi: ${{ env.WHEELS_ABI }}
|
||||
tag: ${{ env.WHEELS_TAG }}
|
||||
arch: ${{ matrix.arch }}
|
||||
apk: ${{ env.WHEELS_APK_DEPS }}
|
||||
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
|
||||
apk: "libffi-dev;openssl-dev;yaml-dev"
|
||||
skip-binary: aiohttp
|
||||
env-file: true
|
||||
requirements: "requirements.txt"
|
||||
|
||||
- name: Build local wheels
|
||||
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
|
||||
uses: home-assistant/wheels@e5742a69d69f0e274e2689c998900c7d19652c21 # 2025.12.0
|
||||
- name: Set version
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: home-assistant/actions/helpers/version@master
|
||||
with:
|
||||
wheels-host: ""
|
||||
wheels-user: ""
|
||||
wheels-key: ""
|
||||
local-wheels-repo-path: "wheels/"
|
||||
abi: ${{ env.WHEELS_ABI }}
|
||||
tag: ${{ env.WHEELS_TAG }}
|
||||
arch: ${{ matrix.arch }}
|
||||
apk: ${{ env.WHEELS_APK_DEPS }}
|
||||
skip-binary: ${{ env.WHEELS_SKIP_BINARY }}
|
||||
env-file: true
|
||||
requirements: "requirements.txt"
|
||||
|
||||
- name: Upload local wheels artifact
|
||||
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
|
||||
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
|
||||
with:
|
||||
name: wheels-${{ matrix.arch }}
|
||||
path: wheels
|
||||
retention-days: 1
|
||||
type: ${{ env.BUILD_TYPE }}
|
||||
|
||||
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
with:
|
||||
python-version: ${{ env.DEFAULT_PYTHON }}
|
||||
|
||||
- name: Install Cosign
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
|
||||
uses: sigstore/cosign-installer@v3.8.1
|
||||
with:
|
||||
cosign-release: ${{ env.COSIGN_VERSION }}
|
||||
cosign-release: "v2.4.0"
|
||||
|
||||
- name: Install dirhash and calc hash
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
@@ -183,49 +147,41 @@ jobs:
|
||||
run: |
|
||||
cosign sign-blob --yes rootfs/supervisor.sha256 --bundle rootfs/supervisor.sha256.sig
|
||||
|
||||
- name: Build supervisor
|
||||
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
|
||||
- name: Login to GitHub Container Registry
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: docker/login-action@v3.3.0
|
||||
with:
|
||||
arch: ${{ matrix.arch }}
|
||||
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
|
||||
cosign-base-identity: 'https://github.com/home-assistant/docker-base/.*'
|
||||
cosign-base-verify: ghcr.io/home-assistant/base-python:3.14-alpine3.22
|
||||
image: ${{ matrix.image }}
|
||||
image-tags: |
|
||||
${{ needs.init.outputs.version }}
|
||||
latest
|
||||
push: ${{ needs.init.outputs.publish == 'true' }}
|
||||
version: ${{ needs.init.outputs.version }}
|
||||
registry: ghcr.io
|
||||
username: ${{ github.repository_owner }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
manifest:
|
||||
name: Publish multi-arch manifest
|
||||
needs: ["init", "build"]
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
id-token: write
|
||||
packages: write
|
||||
steps:
|
||||
- name: Publish multi-arch manifest
|
||||
uses: home-assistant/builder/actions/publish-multi-arch-manifest@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
|
||||
- name: Set build arguments
|
||||
if: needs.init.outputs.publish == 'false'
|
||||
run: echo "BUILD_ARGS=--test" >> $GITHUB_ENV
|
||||
|
||||
- name: Build supervisor
|
||||
uses: home-assistant/builder@2025.02.0
|
||||
with:
|
||||
architectures: ${{ env.ARCHITECTURES }}
|
||||
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
|
||||
image-name: ${{ env.IMAGE_NAME }}
|
||||
image-tags: |
|
||||
${{ needs.init.outputs.version }}
|
||||
latest
|
||||
args: |
|
||||
$BUILD_ARGS \
|
||||
--${{ matrix.arch }} \
|
||||
--target /data \
|
||||
--cosign \
|
||||
--generic ${{ needs.init.outputs.version }}
|
||||
env:
|
||||
CAS_API_KEY: ${{ secrets.CAS_TOKEN }}
|
||||
|
||||
version:
|
||||
name: Update version
|
||||
if: github.repository_owner == 'home-assistant' && needs.init.outputs.publish == 'true'
|
||||
needs: ["init", "run_supervisor"]
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout the repository
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: actions/checkout@v4.2.2
|
||||
|
||||
- name: Initialize git
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: home-assistant/actions/helpers/git-init@master
|
||||
with:
|
||||
name: ${{ secrets.GIT_NAME }}
|
||||
@@ -233,6 +189,7 @@ jobs:
|
||||
token: ${{ secrets.GIT_TOKEN }}
|
||||
|
||||
- name: Update version file
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
uses: home-assistant/actions/helpers/version-push@master
|
||||
with:
|
||||
key: ${{ env.BUILD_NAME }}
|
||||
@@ -246,28 +203,18 @@ jobs:
|
||||
timeout-minutes: 60
|
||||
steps:
|
||||
- name: Checkout the repository
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
|
||||
- name: Download local wheels artifact
|
||||
if: needs.init.outputs.build_wheels == 'true' && needs.init.outputs.publish == 'false'
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
|
||||
with:
|
||||
name: wheels-amd64
|
||||
path: wheels
|
||||
|
||||
# Build the Supervisor for non-publish runs (e.g. PRs)
|
||||
- name: Build the Supervisor
|
||||
if: needs.init.outputs.publish != 'true'
|
||||
uses: home-assistant/builder/actions/build-image@62a1597b84b3461abad9816d9cd92862a2b542c3 # 2026.03.2
|
||||
uses: home-assistant/builder@2025.02.0
|
||||
with:
|
||||
arch: amd64
|
||||
container-registry-password: ${{ secrets.GITHUB_TOKEN }}
|
||||
image: ghcr.io/home-assistant/amd64-hassio-supervisor
|
||||
image-tags: runner
|
||||
load: true
|
||||
version: ${{ needs.init.outputs.version }}
|
||||
args: |
|
||||
--test \
|
||||
--amd64 \
|
||||
--target /data \
|
||||
--generic runner
|
||||
|
||||
# Pull the Supervisor for publish runs to test the published image
|
||||
- name: Pull Supervisor
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
run: |
|
||||
@@ -281,10 +228,9 @@ jobs:
|
||||
--privileged \
|
||||
--security-opt seccomp=unconfined \
|
||||
--security-opt apparmor=unconfined \
|
||||
-v /run/docker.sock:/run/docker.sock:rw \
|
||||
-v /run/dbus:/run/dbus:ro \
|
||||
-v /run/supervisor:/run/os:rw \
|
||||
-v /tmp/supervisor/data:/data:rw,slave \
|
||||
-v /run/docker.sock:/run/docker.sock \
|
||||
-v /run/dbus:/run/dbus \
|
||||
-v /tmp/supervisor/data:/data \
|
||||
-v /etc/machine-id:/etc/machine-id:ro \
|
||||
-e SUPERVISOR_SHARE="/tmp/supervisor/data" \
|
||||
-e SUPERVISOR_NAME=hassio_supervisor \
|
||||
@@ -295,69 +241,15 @@ jobs:
|
||||
- name: Start the Supervisor
|
||||
run: docker start hassio_supervisor
|
||||
|
||||
- &wait_for_supervisor
|
||||
name: Wait for Supervisor to come up
|
||||
- name: Wait for Supervisor to come up
|
||||
run: |
|
||||
until SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.Networks.hassio.IPAddress}}' hassio_supervisor 2>/dev/null) && \
|
||||
[ -n "$SUPERVISOR" ] && [ "$SUPERVISOR" != "<no value>" ]; do
|
||||
echo "Waiting for network configuration..."
|
||||
sleep 1
|
||||
done
|
||||
echo "Waiting for Supervisor API at http://${SUPERVISOR}/supervisor/ping"
|
||||
timeout=300
|
||||
elapsed=0
|
||||
|
||||
while [ $elapsed -lt $timeout ]; do
|
||||
if response=$(curl -sSf "http://${SUPERVISOR}/supervisor/ping" 2>/dev/null); then
|
||||
if echo "$response" | jq -e '.result == "ok"' >/dev/null 2>&1; then
|
||||
echo "Supervisor is up! (took ${elapsed}s)"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ $((elapsed % 15)) -eq 0 ]; then
|
||||
echo "Still waiting... (${elapsed}s/${timeout}s)"
|
||||
fi
|
||||
|
||||
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
|
||||
ping="error"
|
||||
while [ "$ping" != "ok" ]; do
|
||||
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
|
||||
sleep 5
|
||||
elapsed=$((elapsed + 5))
|
||||
done
|
||||
|
||||
echo "ERROR: Supervisor failed to start within ${timeout}s"
|
||||
echo "Last response: $response"
|
||||
echo "Checking supervisor logs..."
|
||||
docker logs --tail 50 hassio_supervisor
|
||||
exit 1
|
||||
|
||||
# Wait for Core to come up so subsequent steps (backup, addon install) succeed.
|
||||
# On first startup, Supervisor installs Core via the "home_assistant_core_install"
|
||||
# job (which pulls the image and then starts Core). Jobs with cleanup=True are
|
||||
# removed from the jobs list once done, so we poll until it's gone.
|
||||
- name: Wait for Core to be started
|
||||
run: |
|
||||
echo "Waiting for Home Assistant Core to be installed and started..."
|
||||
timeout=300
|
||||
elapsed=0
|
||||
|
||||
while [ $elapsed -lt $timeout ]; do
|
||||
jobs=$(docker exec hassio_cli ha jobs info --no-progress --raw-json | jq -r '.data.jobs[] | select(.name == "home_assistant_core_install" and .done == false) | .name' 2>/dev/null)
|
||||
if [ -z "$jobs" ]; then
|
||||
echo "Home Assistant Core install/start complete (took ${elapsed}s)"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
if [ $((elapsed % 15)) -eq 0 ]; then
|
||||
echo "Core still installing... (${elapsed}s/${timeout}s)"
|
||||
fi
|
||||
|
||||
sleep 5
|
||||
elapsed=$((elapsed + 5))
|
||||
done
|
||||
|
||||
echo "ERROR: Home Assistant Core failed to install/start within ${timeout}s"
|
||||
docker logs --tail 50 hassio_supervisor
|
||||
exit 1
|
||||
|
||||
- name: Check the Supervisor
|
||||
run: |
|
||||
echo "Checking supervisor info"
|
||||
@@ -372,32 +264,59 @@ jobs:
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Check the Store / App
|
||||
- name: Check the Store / Addon
|
||||
run: |
|
||||
echo "Install Core SSH app"
|
||||
test=$(docker exec hassio_cli ha apps install core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
echo "Install Core SSH Add-on"
|
||||
test=$(docker exec hassio_cli ha addons install core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Make sure it actually installed
|
||||
test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
|
||||
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
|
||||
if [[ "$test" == "null" ]]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Start Core SSH app"
|
||||
test=$(docker exec hassio_cli ha apps start core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
echo "Start Core SSH Add-on"
|
||||
test=$(docker exec hassio_cli ha addons start core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Make sure its state is started
|
||||
test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
|
||||
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
|
||||
if [ "$test" != "started" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Check the Supervisor code sign
|
||||
if: needs.init.outputs.publish == 'true'
|
||||
run: |
|
||||
echo "Enable Content-Trust"
|
||||
test=$(docker exec hassio_cli ha security options --content-trust=true --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Run supervisor health check"
|
||||
test=$(docker exec hassio_cli ha resolution healthcheck --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Check supervisor unhealthy"
|
||||
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unhealthy[]')
|
||||
if [ "$test" != "" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Check supervisor supported"
|
||||
test=$(docker exec hassio_cli ha resolution info --no-progress --raw-json | jq -r '.data.unsupported[]')
|
||||
if [[ "$test" =~ source_mods ]]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Create full backup
|
||||
id: backup
|
||||
run: |
|
||||
@@ -407,9 +326,9 @@ jobs:
|
||||
fi
|
||||
echo "slug=$(echo $test | jq -r '.data.slug')" >> "$GITHUB_OUTPUT"
|
||||
|
||||
- name: Uninstall SSH app
|
||||
- name: Uninstall SSH add-on
|
||||
run: |
|
||||
test=$(docker exec hassio_cli ha apps uninstall core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
test=$(docker exec hassio_cli ha addons uninstall core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
@@ -421,23 +340,30 @@ jobs:
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- *wait_for_supervisor
|
||||
|
||||
- name: Restore SSH app from backup
|
||||
- name: Wait for Supervisor to come up
|
||||
run: |
|
||||
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --app core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
SUPERVISOR=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' hassio_supervisor)
|
||||
ping="error"
|
||||
while [ "$ping" != "ok" ]; do
|
||||
ping=$(curl -sSL "http://$SUPERVISOR/supervisor/ping" | jq -r '.result')
|
||||
sleep 5
|
||||
done
|
||||
|
||||
- name: Restore SSH add-on from backup
|
||||
run: |
|
||||
test=$(docker exec hassio_cli ha backups restore ${{ steps.backup.outputs.slug }} --addons core_ssh --no-progress --raw-json | jq -r '.result')
|
||||
if [ "$test" != "ok" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Make sure it actually installed
|
||||
test=$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.version')
|
||||
test=$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.version')
|
||||
if [[ "$test" == "null" ]]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Make sure its state is started
|
||||
test="$(docker exec hassio_cli ha apps info core_ssh --no-progress --raw-json | jq -r '.data.state')"
|
||||
test="$(docker exec hassio_cli ha addons info core_ssh --no-progress --raw-json | jq -r '.data.state')"
|
||||
if [ "$test" != "started" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
124
.github/workflows/ci.yaml
vendored
124
.github/workflows/ci.yaml
vendored
@@ -8,9 +8,8 @@ on:
|
||||
pull_request: ~
|
||||
|
||||
env:
|
||||
DEFAULT_PYTHON: "3.14.3"
|
||||
DEFAULT_PYTHON: "3.13"
|
||||
PRE_COMMIT_CACHE: ~/.cache/pre-commit
|
||||
MYPY_CACHE_VERSION: 1
|
||||
|
||||
concurrency:
|
||||
group: "${{ github.workflow }}-${{ github.ref }}"
|
||||
@@ -26,15 +25,15 @@ jobs:
|
||||
name: Prepare Python dependencies
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python
|
||||
id: python
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
with:
|
||||
python-version: ${{ env.DEFAULT_PYTHON }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -48,7 +47,7 @@ jobs:
|
||||
pip install -r requirements.txt -r requirements_tests.txt
|
||||
- name: Restore pre-commit environment from cache
|
||||
id: cache-precommit
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: ${{ env.PRE_COMMIT_CACHE }}
|
||||
lookup-only: true
|
||||
@@ -68,15 +67,15 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -88,7 +87,7 @@ jobs:
|
||||
exit 1
|
||||
- name: Restore pre-commit environment from cache
|
||||
id: cache-precommit
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: ${{ env.PRE_COMMIT_CACHE }}
|
||||
key: |
|
||||
@@ -111,15 +110,15 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -131,7 +130,7 @@ jobs:
|
||||
exit 1
|
||||
- name: Restore pre-commit environment from cache
|
||||
id: cache-precommit
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: ${{ env.PRE_COMMIT_CACHE }}
|
||||
key: |
|
||||
@@ -154,7 +153,7 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Register hadolint problem matcher
|
||||
run: |
|
||||
echo "::add-matcher::.github/workflows/matchers/hadolint.json"
|
||||
@@ -169,15 +168,15 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -189,7 +188,7 @@ jobs:
|
||||
exit 1
|
||||
- name: Restore pre-commit environment from cache
|
||||
id: cache-precommit
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: ${{ env.PRE_COMMIT_CACHE }}
|
||||
key: |
|
||||
@@ -213,15 +212,15 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -233,7 +232,7 @@ jobs:
|
||||
exit 1
|
||||
- name: Restore pre-commit environment from cache
|
||||
id: cache-precommit
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: ${{ env.PRE_COMMIT_CACHE }}
|
||||
key: |
|
||||
@@ -257,15 +256,15 @@ jobs:
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -287,71 +286,25 @@ jobs:
|
||||
. venv/bin/activate
|
||||
pylint supervisor tests
|
||||
|
||||
mypy:
|
||||
name: Check mypy
|
||||
runs-on: ubuntu-latest
|
||||
needs: prepare
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Generate partial mypy restore key
|
||||
id: generate-mypy-key
|
||||
run: |
|
||||
mypy_version=$(cat requirements_test.txt | grep mypy | cut -d '=' -f 3)
|
||||
echo "version=$mypy_version" >> $GITHUB_OUTPUT
|
||||
echo "key=mypy-${{ env.MYPY_CACHE_VERSION }}-$mypy_version-$(date -u '+%Y-%m-%dT%H:%M:%s')" >> $GITHUB_OUTPUT
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
with:
|
||||
path: venv
|
||||
key: >-
|
||||
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-${{ hashFiles('requirements.txt') }}-${{ hashFiles('requirements_tests.txt') }}
|
||||
- name: Fail job if Python cache restore failed
|
||||
if: steps.cache-venv.outputs.cache-hit != 'true'
|
||||
run: |
|
||||
echo "Failed to restore Python virtual environment from cache"
|
||||
exit 1
|
||||
- name: Restore mypy cache
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
with:
|
||||
path: .mypy_cache
|
||||
key: >-
|
||||
${{ runner.os }}-mypy-${{ needs.prepare.outputs.python-version }}-${{ steps.generate-mypy-key.outputs.key }}
|
||||
restore-keys: >-
|
||||
${{ runner.os }}-venv-${{ needs.prepare.outputs.python-version }}-mypy-${{ env.MYPY_CACHE_VERSION }}-${{ steps.generate-mypy-key.outputs.version }}
|
||||
- name: Register mypy problem matcher
|
||||
run: |
|
||||
echo "::add-matcher::.github/workflows/matchers/mypy.json"
|
||||
- name: Run mypy
|
||||
run: |
|
||||
. venv/bin/activate
|
||||
mypy --ignore-missing-imports supervisor
|
||||
|
||||
pytest:
|
||||
runs-on: ubuntu-latest
|
||||
needs: prepare
|
||||
name: Run tests Python ${{ needs.prepare.outputs.python-version }}
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Install Cosign
|
||||
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
|
||||
uses: sigstore/cosign-installer@v3.8.1
|
||||
with:
|
||||
cosign-release: "v2.5.3"
|
||||
cosign-release: "v2.4.0"
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -386,9 +339,9 @@ jobs:
|
||||
-o console_output_style=count \
|
||||
tests
|
||||
- name: Upload coverage artifact
|
||||
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
|
||||
uses: actions/upload-artifact@v4.6.1
|
||||
with:
|
||||
name: coverage
|
||||
name: coverage-${{ matrix.python-version }}
|
||||
path: .coverage
|
||||
include-hidden-files: true
|
||||
|
||||
@@ -398,15 +351,15 @@ jobs:
|
||||
needs: ["pytest", "prepare"]
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Set up Python ${{ needs.prepare.outputs.python-version }}
|
||||
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
|
||||
uses: actions/setup-python@v5.4.0
|
||||
id: python
|
||||
with:
|
||||
python-version: ${{ needs.prepare.outputs.python-version }}
|
||||
- name: Restore Python virtual environment
|
||||
id: cache-venv
|
||||
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
|
||||
uses: actions/cache@v4.2.2
|
||||
with:
|
||||
path: venv
|
||||
key: |
|
||||
@@ -417,10 +370,7 @@ jobs:
|
||||
echo "Failed to restore Python virtual environment from cache"
|
||||
exit 1
|
||||
- name: Download all coverage artifacts
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
|
||||
with:
|
||||
name: coverage
|
||||
path: coverage/
|
||||
uses: actions/download-artifact@v4.1.9
|
||||
- name: Combine coverage results
|
||||
run: |
|
||||
. venv/bin/activate
|
||||
@@ -428,4 +378,4 @@ jobs:
|
||||
coverage report
|
||||
coverage xml
|
||||
- name: Upload coverage to Codecov
|
||||
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
|
||||
uses: codecov/codecov-action@v5.4.0
|
||||
|
||||
2
.github/workflows/lock.yml
vendored
2
.github/workflows/lock.yml
vendored
@@ -9,7 +9,7 @@ jobs:
|
||||
lock:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: dessant/lock-threads@7266a7ce5c1df01b1c6db85bf8cd86c737dadbe7 # v6.0.0
|
||||
- uses: dessant/lock-threads@v5.0.1
|
||||
with:
|
||||
github-token: ${{ github.token }}
|
||||
issue-inactive-days: "30"
|
||||
|
||||
16
.github/workflows/matchers/mypy.json
vendored
16
.github/workflows/matchers/mypy.json
vendored
@@ -1,16 +0,0 @@
|
||||
{
|
||||
"problemMatcher": [
|
||||
{
|
||||
"owner": "mypy",
|
||||
"pattern": [
|
||||
{
|
||||
"regexp": "^(.+):(\\d+):\\s(error|warning):\\s(.+)$",
|
||||
"file": 1,
|
||||
"line": 2,
|
||||
"severity": 3,
|
||||
"message": 4
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
4
.github/workflows/release-drafter.yml
vendored
4
.github/workflows/release-drafter.yml
vendored
@@ -11,7 +11,7 @@ jobs:
|
||||
name: Release Drafter
|
||||
steps:
|
||||
- name: Checkout the repository
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
@@ -36,7 +36,7 @@ jobs:
|
||||
echo "version=$datepre.$newpost" >> "$GITHUB_OUTPUT"
|
||||
|
||||
- name: Run Release Drafter
|
||||
uses: release-drafter/release-drafter@5de93583980a40bd78603b6dfdcda5b4df377b32 # v7.2.0
|
||||
uses: release-drafter/release-drafter@v6.1.0
|
||||
with:
|
||||
tag: ${{ steps.version.outputs.version }}
|
||||
name: ${{ steps.version.outputs.version }}
|
||||
|
||||
58
.github/workflows/restrict-task-creation.yml
vendored
58
.github/workflows/restrict-task-creation.yml
vendored
@@ -1,58 +0,0 @@
|
||||
name: Restrict task creation
|
||||
|
||||
# yamllint disable-line rule:truthy
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
|
||||
jobs:
|
||||
check-authorization:
|
||||
runs-on: ubuntu-latest
|
||||
# Only run if this is a Task issue type (from the issue form)
|
||||
if: github.event.issue.type.name == 'Task'
|
||||
steps:
|
||||
- name: Check if user is authorized
|
||||
uses: actions/github-script@3a2844b7e9c422d3c10d287c895573f7108da1b3 # v9.0.0
|
||||
with:
|
||||
script: |
|
||||
const issueAuthor = context.payload.issue.user.login;
|
||||
|
||||
// Check if user is an organization member
|
||||
try {
|
||||
await github.rest.orgs.checkMembershipForUser({
|
||||
org: 'home-assistant',
|
||||
username: issueAuthor
|
||||
});
|
||||
console.log(`✅ ${issueAuthor} is an organization member`);
|
||||
return; // Authorized
|
||||
} catch (error) {
|
||||
console.log(`❌ ${issueAuthor} is not authorized to create Task issues`);
|
||||
}
|
||||
|
||||
// Close the issue with a comment
|
||||
await github.rest.issues.createComment({
|
||||
owner: context.repo.owner,
|
||||
repo: context.repo.repo,
|
||||
issue_number: context.issue.number,
|
||||
body: `Hi @${issueAuthor}, thank you for your contribution!\n\n` +
|
||||
`Task issues are restricted to Open Home Foundation staff and authorized contributors.\n\n` +
|
||||
`If you would like to:\n` +
|
||||
`- Report a bug: Please use the [bug report form](https://github.com/home-assistant/supervisor/issues/new?template=bug_report.yml)\n` +
|
||||
`- Request a feature: Please submit to [Feature Requests](https://github.com/orgs/home-assistant/discussions)\n\n` +
|
||||
`If you believe you should have access to create Task issues, please contact the maintainers.`
|
||||
});
|
||||
|
||||
await github.rest.issues.update({
|
||||
owner: context.repo.owner,
|
||||
repo: context.repo.repo,
|
||||
issue_number: context.issue.number,
|
||||
state: 'closed'
|
||||
});
|
||||
|
||||
// Add a label to indicate this was auto-closed
|
||||
await github.rest.issues.addLabels({
|
||||
owner: context.repo.owner,
|
||||
repo: context.repo.repo,
|
||||
issue_number: context.issue.number,
|
||||
labels: ['auto-closed']
|
||||
});
|
||||
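The workflow above closes Task issues opened by anyone who is not a member of the home-assistant organization, using `actions/github-script`. As a rough illustration of the same membership check, here is a hedged Python sketch against the public GitHub REST API; the real workflow runs JavaScript, and the token handling and user name here are assumptions, not repository code.

```python
"""Illustrative sketch of the org-membership check done by the workflow above."""

import os

import requests


def is_org_member(org: str, username: str, token: str) -> bool:
    """Return True if `username` is a visible member of `org`.

    GET /orgs/{org}/members/{username} answers 204 when the user is a member
    the requester may see, and 404 otherwise.
    """
    response = requests.get(
        f"https://api.github.com/orgs/{org}/members/{username}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    return response.status_code == 204


if __name__ == "__main__":
    # GITHUB_TOKEN and the user name are placeholders for this sketch.
    token = os.environ["GITHUB_TOKEN"]
    print(is_org_member("home-assistant", "some-user", token))
```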
4 .github/workflows/sentry.yaml vendored
@@ -10,9 +10,9 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Check out code from GitHub
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
uses: actions/checkout@v4.2.2
|
||||
- name: Sentry Release
|
||||
uses: getsentry/action-release@5657c9e888b4e2cc85f4d29143ea4131fde4a73a # v3.6.0
|
||||
uses: getsentry/action-release@v1.10.4
|
||||
env:
|
||||
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
|
||||
SENTRY_ORG: ${{ secrets.SENTRY_ORG }}
|
||||
|
||||
3 .github/workflows/stale.yml vendored
@@ -9,14 +9,13 @@ jobs:
|
||||
stale:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
|
||||
- uses: actions/stale@v9.1.0
|
||||
with:
|
||||
repo-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
days-before-stale: 30
|
||||
days-before-close: 7
|
||||
stale-issue-label: "stale"
|
||||
exempt-issue-labels: "no-stale,Help%20wanted,help-wanted,pinned,rfc,security"
|
||||
only-issue-types: "bug"
|
||||
stale-issue-message: >
|
||||
There hasn't been any activity on this issue recently. Due to the
|
||||
high number of incoming GitHub notifications, we have to clean some
|
||||
|
||||
79 .github/workflows/update_frontend.yml vendored Normal file
@@ -0,0 +1,79 @@
|
||||
name: Update frontend
|
||||
|
||||
on:
|
||||
schedule: # once a day
|
||||
- cron: "0 0 * * *"
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
check-version:
|
||||
runs-on: ubuntu-latest
|
||||
outputs:
|
||||
skip: ${{ steps.check_version.outputs.skip || steps.check_existing_pr.outputs.skip }}
|
||||
current_version: ${{ steps.check_version.outputs.current_version }}
|
||||
latest_version: ${{ steps.latest_frontend_version.outputs.latest_tag }}
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
- name: Get latest frontend release
|
||||
id: latest_frontend_version
|
||||
uses: abatilo/release-info-action@v1.3.3
|
||||
with:
|
||||
owner: home-assistant
|
||||
repo: frontend
|
||||
- name: Check if version is up to date
|
||||
id: check_version
|
||||
run: |
|
||||
current_version="$(cat .ha-frontend-version)"
|
||||
latest_version="${{ steps.latest_frontend_version.outputs.latest_tag }}"
|
||||
echo "current_version=${current_version}" >> $GITHUB_OUTPUT
|
||||
echo "LATEST_VERSION=${latest_version}" >> $GITHUB_ENV
|
||||
if [[ ! "$current_version" < "$latest_version" ]]; then
|
||||
echo "Frontend version is up to date"
|
||||
echo "skip=true" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
- name: Check if there is no open PR with this version
|
||||
if: steps.check_version.outputs.skip != 'true'
|
||||
id: check_existing_pr
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
PR=$(gh pr list --state open --base main --json title --search "Update frontend to version $LATEST_VERSION")
|
||||
if [[ "$PR" != "[]" ]]; then
|
||||
echo "Skipping - There is already a PR open for version $LATEST_VERSION"
|
||||
echo "skip=true" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
create-pr:
|
||||
runs-on: ubuntu-latest
|
||||
needs: check-version
|
||||
if: needs.check-version.outputs.skip != 'true'
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
- name: Clear www folder
|
||||
run: |
|
||||
rm -rf supervisor/api/panel/*
|
||||
- name: Update version file
|
||||
run: |
|
||||
echo "${{ needs.check-version.outputs.latest_version }}" > .ha-frontend-version
|
||||
- name: Download release assets
|
||||
uses: robinraju/release-downloader@v1
|
||||
with:
|
||||
repository: 'home-assistant/frontend'
|
||||
tag: ${{ needs.check-version.outputs.latest_version }}
|
||||
fileName: home_assistant_frontend_supervisor-${{ needs.check-version.outputs.latest_version }}.tar.gz
|
||||
extract: true
|
||||
out-file-path: supervisor/api/panel/
|
||||
- name: Create PR
|
||||
uses: peter-evans/create-pull-request@v7
|
||||
with:
|
||||
commit-message: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
|
||||
branch: autoupdate-frontend
|
||||
base: main
|
||||
draft: true
|
||||
sign-commits: true
|
||||
title: "Update frontend to version ${{ needs.check-version.outputs.latest_version }}"
|
||||
body: >
|
||||
Update frontend from ${{ needs.check-version.outputs.current_version }} to
|
||||
[${{ needs.check-version.outputs.latest_version }}](https://github.com/home-assistant/frontend/releases/tag/${{ needs.check-version.outputs.latest_version }})
|
||||
|
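The `check-version` job above decides whether to skip by comparing the pinned `.ha-frontend-version` with the latest frontend tag using a plain string comparison (`[[ ! "$current_version" < "$latest_version" ]]`), which is sufficient because the frontend uses fixed-width CalVer tags such as `20250221.0`. A minimal Python sketch of the same skip decision, for illustration only:

```python
"""Sketch of the skip logic in the check-version job above (not repository code)."""


def should_skip(current_version: str, latest_version: str) -> bool:
    """Return True when no frontend update PR needs to be created."""
    # Lexicographic comparison matches chronological order as long as both
    # tags use the fixed-width YYYYMMDD.N format.
    return not current_version < latest_version


print(should_skip("20250221.0", "20250221.0"))  # True - already up to date
print(should_skip("20250131.0", "20250221.0"))  # False - newer tag available
```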
||||
6 .gitignore vendored
@@ -24,9 +24,6 @@ var/
|
||||
.installed.cfg
|
||||
*.egg
|
||||
|
||||
# Local wheels
|
||||
wheels/**/*.whl
|
||||
|
||||
# PyInstaller
|
||||
# Usually these files are written by a python script from a template
|
||||
# before PyInstaller builds the exe, so as to inject date/other infos into it.
|
||||
@@ -103,6 +100,3 @@ ENV/
|
||||
# mypy
|
||||
/.mypy_cache/*
|
||||
/.dmypy.json
|
||||
|
||||
# Mac
|
||||
.DS_Store
|
||||
|
||||
1 .ha-frontend-version Normal file
@@ -0,0 +1 @@
|
||||
20250221.0
|
||||
@@ -1,6 +1,6 @@
|
||||
repos:
|
||||
- repo: https://github.com/astral-sh/ruff-pre-commit
|
||||
rev: v0.14.3
|
||||
rev: v0.9.1
|
||||
hooks:
|
||||
- id: ruff
|
||||
args:
|
||||
@@ -13,15 +13,3 @@ repos:
|
||||
- id: check-executables-have-shebangs
|
||||
stages: [manual]
|
||||
- id: check-json
|
||||
- repo: local
|
||||
hooks:
|
||||
# Run mypy through our wrapper script in order to get the possible
|
||||
# pyenv and/or virtualenv activated; it may not have been e.g. if
|
||||
# committing from a GUI tool that was not launched from an activated
|
||||
# shell.
|
||||
- id: mypy
|
||||
name: mypy
|
||||
entry: script/run-in-env.sh mypy --ignore-missing-imports
|
||||
language: script
|
||||
types_or: [python, pyi]
|
||||
files: ^supervisor/.+\.(py|pyi)$
|
||||
|
||||
42 Dockerfile
@@ -1,4 +1,4 @@
|
||||
ARG BUILD_FROM=ghcr.io/home-assistant/base-python:3.14-alpine3.22-2026.03.1
|
||||
ARG BUILD_FROM
|
||||
FROM ${BUILD_FROM}
|
||||
|
||||
ENV \
|
||||
@@ -7,6 +7,11 @@ ENV \
|
||||
CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 \
|
||||
UV_SYSTEM_PYTHON=true
|
||||
|
||||
ARG \
|
||||
COSIGN_VERSION \
|
||||
BUILD_ARCH \
|
||||
QEMU_CPU
|
||||
|
||||
# Install base
|
||||
WORKDIR /usr/src
|
||||
RUN \
|
||||
@@ -22,40 +27,27 @@ RUN \
|
||||
openssl \
|
||||
yaml \
|
||||
\
|
||||
&& pip3 install uv==0.10.9
|
||||
&& curl -Lso /usr/bin/cosign "https://github.com/home-assistant/cosign/releases/download/${COSIGN_VERSION}/cosign_${BUILD_ARCH}" \
|
||||
&& chmod a+x /usr/bin/cosign \
|
||||
&& pip3 install uv==0.6.1
|
||||
|
||||
# Install requirements
|
||||
COPY requirements.txt .
|
||||
RUN \
|
||||
--mount=type=bind,source=./requirements.txt,target=/usr/src/requirements.txt \
|
||||
--mount=type=bind,source=./wheels,target=/usr/src/wheels \
|
||||
if ls /usr/src/wheels/musllinux/* >/dev/null 2>&1; then \
|
||||
LOCAL_WHEELS=/usr/src/wheels/musllinux; \
|
||||
echo "Using local wheels from: $LOCAL_WHEELS"; \
|
||||
if [ "${BUILD_ARCH}" = "i386" ]; then \
|
||||
setarch="linux32"; \
|
||||
else \
|
||||
LOCAL_WHEELS=; \
|
||||
echo "No local wheels found"; \
|
||||
fi && \
|
||||
uv pip install --compile-bytecode --no-cache --no-build \
|
||||
-r requirements.txt \
|
||||
${LOCAL_WHEELS:+--find-links $LOCAL_WHEELS}
|
||||
setarch=""; \
|
||||
fi \
|
||||
&& ${setarch} uv pip install --compile-bytecode --no-cache --no-build -r requirements.txt \
|
||||
&& rm -f requirements.txt
|
||||
|
||||
# Install Home Assistant Supervisor
|
||||
ARG BUILD_VERSION="9999.09.9.dev9999"
|
||||
COPY . supervisor
|
||||
RUN \
|
||||
sed -i "s/^SUPERVISOR_VERSION =.*/SUPERVISOR_VERSION = \"${BUILD_VERSION}\"/g" /usr/src/supervisor/supervisor/const.py \
|
||||
&& uv pip install --no-cache -e ./supervisor \
|
||||
uv pip install --no-cache -e ./supervisor \
|
||||
&& python3 -m compileall ./supervisor/supervisor
|
||||
|
||||
|
||||
WORKDIR /
|
||||
COPY rootfs /
|
||||
|
||||
LABEL \
|
||||
io.hass.type="supervisor" \
|
||||
org.opencontainers.image.title="Home Assistant Supervisor" \
|
||||
org.opencontainers.image.description="Container-based system for managing Home Assistant Core installation" \
|
||||
org.opencontainers.image.authors="The Home Assistant Authors" \
|
||||
org.opencontainers.image.url="https://www.home-assistant.io/" \
|
||||
org.opencontainers.image.documentation="https://www.home-assistant.io/docs/" \
|
||||
org.opencontainers.image.licenses="Apache License 2.0"
|
||||
|
||||
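The updated Dockerfile installs requirements with uv and, when a `wheels/musllinux` directory is bind-mounted into the build, passes it via `--find-links` so locally built wheels are preferred over the index. As a hedged sketch, here is how that command line could be assembled in Python; the paths and flags mirror the RUN step, but this helper is illustrative and not part of the build.

```python
"""Sketch of the requirements-install command used in the Dockerfile above."""

from pathlib import Path


def build_install_command(
    wheels_dir: Path = Path("/usr/src/wheels/musllinux"),
) -> list[str]:
    """Return the uv invocation, preferring locally built wheels when present."""
    command = [
        "uv",
        "pip",
        "install",
        "--compile-bytecode",
        "--no-cache",
        "--no-build",
        "-r",
        "requirements.txt",
    ]
    # Only add --find-links when the bind-mounted wheel directory has content,
    # mirroring the `if ls /usr/src/wheels/musllinux/*` check in the Dockerfile.
    if wheels_dir.is_dir() and any(wheels_dir.glob("*.whl")):
        command += ["--find-links", str(wheels_dir)]
    return command


print(" ".join(build_install_command()))
```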
24 build.yaml Normal file
@@ -0,0 +1,24 @@
|
||||
image: ghcr.io/home-assistant/{arch}-hassio-supervisor
|
||||
build_from:
|
||||
aarch64: ghcr.io/home-assistant/aarch64-base-python:3.13-alpine3.21
|
||||
armhf: ghcr.io/home-assistant/armhf-base-python:3.13-alpine3.21
|
||||
armv7: ghcr.io/home-assistant/armv7-base-python:3.13-alpine3.21
|
||||
amd64: ghcr.io/home-assistant/amd64-base-python:3.13-alpine3.21
|
||||
i386: ghcr.io/home-assistant/i386-base-python:3.13-alpine3.21
|
||||
codenotary:
|
||||
signer: notary@home-assistant.io
|
||||
base_image: notary@home-assistant.io
|
||||
cosign:
|
||||
base_identity: https://github.com/home-assistant/docker-base/.*
|
||||
identity: https://github.com/home-assistant/supervisor/.*
|
||||
args:
|
||||
COSIGN_VERSION: 2.4.0
|
||||
labels:
|
||||
io.hass.type: supervisor
|
||||
org.opencontainers.image.title: Home Assistant Supervisor
|
||||
org.opencontainers.image.description: Container-based system for managing Home Assistant Core installation
|
||||
org.opencontainers.image.source: https://github.com/home-assistant/supervisor
|
||||
org.opencontainers.image.authors: The Home Assistant Authors
|
||||
org.opencontainers.image.url: https://www.home-assistant.io/
|
||||
org.opencontainers.image.documentation: https://www.home-assistant.io/docs/
|
||||
org.opencontainers.image.licenses: Apache License 2.0
|
||||
@@ -4,11 +4,8 @@ coverage:
|
||||
status:
|
||||
project:
|
||||
default:
|
||||
target: auto
|
||||
threshold: 1
|
||||
patch:
|
||||
default:
|
||||
target: 80
|
||||
target: 40
|
||||
threshold: 0.09
|
||||
comment: false
|
||||
github_checks:
|
||||
annotations: false
|
||||
523 pyproject.toml
@@ -1,5 +1,5 @@
|
||||
[build-system]
|
||||
requires = ["setuptools~=82.0.0"]
|
||||
requires = ["setuptools~=75.8.0", "wheel~=0.45.0"]
|
||||
build-backend = "setuptools.build_meta"
|
||||
|
||||
[project]
|
||||
@@ -9,10 +9,10 @@ license = { text = "Apache-2.0" }
|
||||
description = "Open-source private cloud os for Home-Assistant based on HassOS"
|
||||
readme = "README.md"
|
||||
authors = [
|
||||
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
|
||||
{ name = "The Home Assistant Authors", email = "hello@home-assistant.io" },
|
||||
]
|
||||
keywords = ["docker", "home-assistant", "api"]
|
||||
requires-python = ">=3.14.0"
|
||||
requires-python = ">=3.13.0"
|
||||
|
||||
[project.urls]
|
||||
"Homepage" = "https://www.home-assistant.io/"
|
||||
@@ -31,7 +31,7 @@ include-package-data = true
|
||||
include = ["supervisor*"]
|
||||
|
||||
[tool.pylint.MAIN]
|
||||
py-version = "3.14"
|
||||
py-version = "3.13"
|
||||
# Use a conservative default here; 2 should speed up most setups and not hurt
|
||||
# any too bad. Override on command line as appropriate.
|
||||
jobs = 2
|
||||
@@ -53,154 +53,154 @@ good-names = ["id", "i", "j", "k", "ex", "Run", "_", "fp", "T", "os"]
|
||||
# too-few-* - same as too-many-*
|
||||
# unused-argument - generic callbacks and setup methods create a lot of warnings
|
||||
disable = [
|
||||
"format",
|
||||
"abstract-method",
|
||||
"cyclic-import",
|
||||
"duplicate-code",
|
||||
"locally-disabled",
|
||||
"no-else-return",
|
||||
"not-context-manager",
|
||||
"too-few-public-methods",
|
||||
"too-many-arguments",
|
||||
"too-many-branches",
|
||||
"too-many-instance-attributes",
|
||||
"too-many-lines",
|
||||
"too-many-locals",
|
||||
"too-many-public-methods",
|
||||
"too-many-return-statements",
|
||||
"too-many-statements",
|
||||
"unused-argument",
|
||||
"consider-using-with",
|
||||
"format",
|
||||
"abstract-method",
|
||||
"cyclic-import",
|
||||
"duplicate-code",
|
||||
"locally-disabled",
|
||||
"no-else-return",
|
||||
"not-context-manager",
|
||||
"too-few-public-methods",
|
||||
"too-many-arguments",
|
||||
"too-many-branches",
|
||||
"too-many-instance-attributes",
|
||||
"too-many-lines",
|
||||
"too-many-locals",
|
||||
"too-many-public-methods",
|
||||
"too-many-return-statements",
|
||||
"too-many-statements",
|
||||
"unused-argument",
|
||||
"consider-using-with",
|
||||
|
||||
# Handled by ruff
|
||||
# Ref: <https://github.com/astral-sh/ruff/issues/970>
|
||||
"await-outside-async", # PLE1142
|
||||
"bad-str-strip-call", # PLE1310
|
||||
"bad-string-format-type", # PLE1307
|
||||
"bidirectional-unicode", # PLE2502
|
||||
"continue-in-finally", # PLE0116
|
||||
"duplicate-bases", # PLE0241
|
||||
"format-needs-mapping", # F502
|
||||
"function-redefined", # F811
|
||||
# Needed because ruff does not understand type of __all__ generated by a function
|
||||
# "invalid-all-format", # PLE0605
|
||||
"invalid-all-object", # PLE0604
|
||||
"invalid-character-backspace", # PLE2510
|
||||
"invalid-character-esc", # PLE2513
|
||||
"invalid-character-nul", # PLE2514
|
||||
"invalid-character-sub", # PLE2512
|
||||
"invalid-character-zero-width-space", # PLE2515
|
||||
"logging-too-few-args", # PLE1206
|
||||
"logging-too-many-args", # PLE1205
|
||||
"missing-format-string-key", # F524
|
||||
"mixed-format-string", # F506
|
||||
"no-method-argument", # N805
|
||||
"no-self-argument", # N805
|
||||
"nonexistent-operator", # B002
|
||||
"nonlocal-without-binding", # PLE0117
|
||||
"not-in-loop", # F701, F702
|
||||
"notimplemented-raised", # F901
|
||||
"return-in-init", # PLE0101
|
||||
"return-outside-function", # F706
|
||||
"syntax-error", # E999
|
||||
"too-few-format-args", # F524
|
||||
"too-many-format-args", # F522
|
||||
"too-many-star-expressions", # F622
|
||||
"truncated-format-string", # F501
|
||||
"undefined-all-variable", # F822
|
||||
"undefined-variable", # F821
|
||||
"used-prior-global-declaration", # PLE0118
|
||||
"yield-inside-async-function", # PLE1700
|
||||
"yield-outside-function", # F704
|
||||
"anomalous-backslash-in-string", # W605
|
||||
"assert-on-string-literal", # PLW0129
|
||||
"assert-on-tuple", # F631
|
||||
"bad-format-string", # W1302, F
|
||||
"bad-format-string-key", # W1300, F
|
||||
"bare-except", # E722
|
||||
"binary-op-exception", # PLW0711
|
||||
"cell-var-from-loop", # B023
|
||||
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
|
||||
"duplicate-except", # B014
|
||||
"duplicate-key", # F601
|
||||
"duplicate-string-formatting-argument", # F
|
||||
"duplicate-value", # F
|
||||
"eval-used", # PGH001
|
||||
"exec-used", # S102
|
||||
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
|
||||
"f-string-without-interpolation", # F541
|
||||
"forgotten-debug-statement", # T100
|
||||
"format-string-without-interpolation", # F
|
||||
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
|
||||
"global-variable-not-assigned", # PLW0602
|
||||
"implicit-str-concat", # ISC001
|
||||
"import-self", # PLW0406
|
||||
"inconsistent-quotes", # Q000
|
||||
"invalid-envvar-default", # PLW1508
|
||||
"keyword-arg-before-vararg", # B026
|
||||
"logging-format-interpolation", # G
|
||||
"logging-fstring-interpolation", # G
|
||||
"logging-not-lazy", # G
|
||||
"misplaced-future", # F404
|
||||
"named-expr-without-context", # PLW0131
|
||||
"nested-min-max", # PLW3301
|
||||
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
|
||||
"raise-missing-from", # TRY200
|
||||
# "redefined-builtin", # A001, ruff is way more stricter, needs work
|
||||
"try-except-raise", # TRY203
|
||||
"unused-argument", # ARG001, we don't use it
|
||||
"unused-format-string-argument", #F507
|
||||
"unused-format-string-key", # F504
|
||||
"unused-import", # F401
|
||||
"unused-variable", # F841
|
||||
"useless-else-on-loop", # PLW0120
|
||||
"wildcard-import", # F403
|
||||
"bad-classmethod-argument", # N804
|
||||
"consider-iterating-dictionary", # SIM118
|
||||
"empty-docstring", # D419
|
||||
"invalid-name", # N815
|
||||
"line-too-long", # E501, disabled globally
|
||||
"missing-class-docstring", # D101
|
||||
"missing-final-newline", # W292
|
||||
"missing-function-docstring", # D103
|
||||
"missing-module-docstring", # D100
|
||||
"multiple-imports", #E401
|
||||
"singleton-comparison", # E711, E712
|
||||
"subprocess-run-check", # PLW1510
|
||||
"superfluous-parens", # UP034
|
||||
"ungrouped-imports", # I001
|
||||
"unidiomatic-typecheck", # E721
|
||||
"unnecessary-direct-lambda-call", # PLC3002
|
||||
"unnecessary-lambda-assignment", # PLC3001
|
||||
"unneeded-not", # SIM208
|
||||
"useless-import-alias", # PLC0414
|
||||
"wrong-import-order", # I001
|
||||
"wrong-import-position", # E402
|
||||
"comparison-of-constants", # PLR0133
|
||||
"comparison-with-itself", # PLR0124
|
||||
# "consider-alternative-union-syntax", # UP007, typing extension
|
||||
"consider-merging-isinstance", # PLR1701
|
||||
# "consider-using-alias", # UP006, typing extension
|
||||
"consider-using-dict-comprehension", # C402
|
||||
"consider-using-generator", # C417
|
||||
"consider-using-get", # SIM401
|
||||
"consider-using-set-comprehension", # C401
|
||||
"consider-using-sys-exit", # PLR1722
|
||||
"consider-using-ternary", # SIM108
|
||||
"literal-comparison", # F632
|
||||
"property-with-parameters", # PLR0206
|
||||
"super-with-arguments", # UP008
|
||||
"too-many-branches", # PLR0912
|
||||
"too-many-return-statements", # PLR0911
|
||||
"too-many-statements", # PLR0915
|
||||
"trailing-comma-tuple", # COM818
|
||||
"unnecessary-comprehension", # C416
|
||||
"use-a-generator", # C417
|
||||
"use-dict-literal", # C406
|
||||
"use-list-literal", # C405
|
||||
"useless-object-inheritance", # UP004
|
||||
"useless-return", # PLR1711
|
||||
# "no-self-use", # PLR6301 # Optional plugin, not enabled
|
||||
# Handled by ruff
|
||||
# Ref: <https://github.com/astral-sh/ruff/issues/970>
|
||||
"await-outside-async", # PLE1142
|
||||
"bad-str-strip-call", # PLE1310
|
||||
"bad-string-format-type", # PLE1307
|
||||
"bidirectional-unicode", # PLE2502
|
||||
"continue-in-finally", # PLE0116
|
||||
"duplicate-bases", # PLE0241
|
||||
"format-needs-mapping", # F502
|
||||
"function-redefined", # F811
|
||||
# Needed because ruff does not understand type of __all__ generated by a function
|
||||
# "invalid-all-format", # PLE0605
|
||||
"invalid-all-object", # PLE0604
|
||||
"invalid-character-backspace", # PLE2510
|
||||
"invalid-character-esc", # PLE2513
|
||||
"invalid-character-nul", # PLE2514
|
||||
"invalid-character-sub", # PLE2512
|
||||
"invalid-character-zero-width-space", # PLE2515
|
||||
"logging-too-few-args", # PLE1206
|
||||
"logging-too-many-args", # PLE1205
|
||||
"missing-format-string-key", # F524
|
||||
"mixed-format-string", # F506
|
||||
"no-method-argument", # N805
|
||||
"no-self-argument", # N805
|
||||
"nonexistent-operator", # B002
|
||||
"nonlocal-without-binding", # PLE0117
|
||||
"not-in-loop", # F701, F702
|
||||
"notimplemented-raised", # F901
|
||||
"return-in-init", # PLE0101
|
||||
"return-outside-function", # F706
|
||||
"syntax-error", # E999
|
||||
"too-few-format-args", # F524
|
||||
"too-many-format-args", # F522
|
||||
"too-many-star-expressions", # F622
|
||||
"truncated-format-string", # F501
|
||||
"undefined-all-variable", # F822
|
||||
"undefined-variable", # F821
|
||||
"used-prior-global-declaration", # PLE0118
|
||||
"yield-inside-async-function", # PLE1700
|
||||
"yield-outside-function", # F704
|
||||
"anomalous-backslash-in-string", # W605
|
||||
"assert-on-string-literal", # PLW0129
|
||||
"assert-on-tuple", # F631
|
||||
"bad-format-string", # W1302, F
|
||||
"bad-format-string-key", # W1300, F
|
||||
"bare-except", # E722
|
||||
"binary-op-exception", # PLW0711
|
||||
"cell-var-from-loop", # B023
|
||||
# "dangerous-default-value", # B006, ruff catches new occurrences, needs more work
|
||||
"duplicate-except", # B014
|
||||
"duplicate-key", # F601
|
||||
"duplicate-string-formatting-argument", # F
|
||||
"duplicate-value", # F
|
||||
"eval-used", # PGH001
|
||||
"exec-used", # S102
|
||||
# "expression-not-assigned", # B018, ruff catches new occurrences, needs more work
|
||||
"f-string-without-interpolation", # F541
|
||||
"forgotten-debug-statement", # T100
|
||||
"format-string-without-interpolation", # F
|
||||
# "global-statement", # PLW0603, ruff catches new occurrences, needs more work
|
||||
"global-variable-not-assigned", # PLW0602
|
||||
"implicit-str-concat", # ISC001
|
||||
"import-self", # PLW0406
|
||||
"inconsistent-quotes", # Q000
|
||||
"invalid-envvar-default", # PLW1508
|
||||
"keyword-arg-before-vararg", # B026
|
||||
"logging-format-interpolation", # G
|
||||
"logging-fstring-interpolation", # G
|
||||
"logging-not-lazy", # G
|
||||
"misplaced-future", # F404
|
||||
"named-expr-without-context", # PLW0131
|
||||
"nested-min-max", # PLW3301
|
||||
# "pointless-statement", # B018, ruff catches new occurrences, needs more work
|
||||
"raise-missing-from", # TRY200
|
||||
# "redefined-builtin", # A001, ruff is way more stricter, needs work
|
||||
"try-except-raise", # TRY203
|
||||
"unused-argument", # ARG001, we don't use it
|
||||
"unused-format-string-argument", #F507
|
||||
"unused-format-string-key", # F504
|
||||
"unused-import", # F401
|
||||
"unused-variable", # F841
|
||||
"useless-else-on-loop", # PLW0120
|
||||
"wildcard-import", # F403
|
||||
"bad-classmethod-argument", # N804
|
||||
"consider-iterating-dictionary", # SIM118
|
||||
"empty-docstring", # D419
|
||||
"invalid-name", # N815
|
||||
"line-too-long", # E501, disabled globally
|
||||
"missing-class-docstring", # D101
|
||||
"missing-final-newline", # W292
|
||||
"missing-function-docstring", # D103
|
||||
"missing-module-docstring", # D100
|
||||
"multiple-imports", #E401
|
||||
"singleton-comparison", # E711, E712
|
||||
"subprocess-run-check", # PLW1510
|
||||
"superfluous-parens", # UP034
|
||||
"ungrouped-imports", # I001
|
||||
"unidiomatic-typecheck", # E721
|
||||
"unnecessary-direct-lambda-call", # PLC3002
|
||||
"unnecessary-lambda-assignment", # PLC3001
|
||||
"unneeded-not", # SIM208
|
||||
"useless-import-alias", # PLC0414
|
||||
"wrong-import-order", # I001
|
||||
"wrong-import-position", # E402
|
||||
"comparison-of-constants", # PLR0133
|
||||
"comparison-with-itself", # PLR0124
|
||||
# "consider-alternative-union-syntax", # UP007, typing extension
|
||||
"consider-merging-isinstance", # PLR1701
|
||||
# "consider-using-alias", # UP006, typing extension
|
||||
"consider-using-dict-comprehension", # C402
|
||||
"consider-using-generator", # C417
|
||||
"consider-using-get", # SIM401
|
||||
"consider-using-set-comprehension", # C401
|
||||
"consider-using-sys-exit", # PLR1722
|
||||
"consider-using-ternary", # SIM108
|
||||
"literal-comparison", # F632
|
||||
"property-with-parameters", # PLR0206
|
||||
"super-with-arguments", # UP008
|
||||
"too-many-branches", # PLR0912
|
||||
"too-many-return-statements", # PLR0911
|
||||
"too-many-statements", # PLR0915
|
||||
"trailing-comma-tuple", # COM818
|
||||
"unnecessary-comprehension", # C416
|
||||
"use-a-generator", # C417
|
||||
"use-dict-literal", # C406
|
||||
"use-list-literal", # C405
|
||||
"useless-object-inheritance", # UP004
|
||||
"useless-return", # PLR1711
|
||||
# "no-self-use", # PLR6301 # Optional plugin, not enabled
|
||||
]
|
||||
|
||||
[tool.pylint.REPORTS]
|
||||
@@ -208,9 +208,6 @@ score = false
|
||||
|
||||
[tool.pylint.TYPECHECK]
|
||||
ignored-modules = ["distutils"]
|
||||
# re.Pattern methods are C extension methods; pylint cannot detect them when
|
||||
# re.Pattern is used as a dataclass field type annotation (false positive).
|
||||
generated-members = ["re.Pattern.*"]
|
||||
|
||||
[tool.pylint.FORMAT]
|
||||
expected-line-ending-format = "LF"
|
||||
@@ -229,120 +226,120 @@ log_date_format = "%Y-%m-%d %H:%M:%S"
|
||||
asyncio_default_fixture_loop_scope = "function"
|
||||
asyncio_mode = "auto"
|
||||
filterwarnings = [
|
||||
"error",
|
||||
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
|
||||
"ignore::pytest.PytestUnraisableExceptionWarning",
|
||||
]
|
||||
markers = [
|
||||
"no_mock_init_websession: disable the autouse mock of init_websession for this test",
|
||||
"error",
|
||||
"ignore:pkg_resources is deprecated as an API:DeprecationWarning:dirhash",
|
||||
"ignore::pytest.PytestUnraisableExceptionWarning",
|
||||
]
|
||||
|
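The pytest configuration above registers a custom marker, `no_mock_init_websession`, so individual tests can opt out of an autouse mock. The following is a hedged sketch of how such a marker is typically consumed from a conftest fixture; the fixture and test names are assumptions, only the marker name comes from the configuration.

```python
"""Illustrative conftest.py pattern for the marker registered above."""

import pytest


@pytest.fixture(autouse=True)
def init_websession(request: pytest.FixtureRequest):
    """Autouse fixture that tests can disable via the marker."""
    if request.node.get_closest_marker("no_mock_init_websession"):
        # The test asked for the real behaviour; do not install the mock.
        yield None
        return
    yield object()  # stand-in for the mocked session


@pytest.mark.no_mock_init_websession
def test_without_mocked_websession():
    """Runs without the autouse mock because of the marker."""
    assert True
```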
||||
[tool.ruff]
|
||||
lint.select = [
|
||||
"B002", # Python does not support the unary prefix increment
|
||||
"B007", # Loop control variable {name} not used within loop body
|
||||
"B014", # Exception handler with duplicate exception
|
||||
"B023", # Function definition does not bind loop variable {name}
|
||||
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
|
||||
"B904", # Use raise from to specify exception cause
|
||||
"C", # complexity
|
||||
"COM818", # Trailing comma on bare tuple prohibited
|
||||
"D", # docstrings
|
||||
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
|
||||
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
|
||||
"E", # pycodestyle
|
||||
"F", # pyflakes/autoflake
|
||||
"G", # flake8-logging-format
|
||||
"I", # isort
|
||||
"ICN001", # import concentions; {name} should be imported as {asname}
|
||||
"N804", # First argument of a class method should be named cls
|
||||
"N805", # First argument of a method should be named self
|
||||
"N815", # Variable {name} in class scope should not be mixedCase
|
||||
"PGH004", # Use specific rule codes when using noqa
|
||||
"PLC0414", # Useless import alias. Import alias does not rename original package.
|
||||
"PLC", # pylint
|
||||
"PLE", # pylint
|
||||
"PLR", # pylint
|
||||
"PLW", # pylint
|
||||
"Q000", # Double quotes found but single quotes preferred
|
||||
"RUF006", # Store a reference to the return value of asyncio.create_task
|
||||
"S102", # Use of exec detected
|
||||
"S103", # bad-file-permissions
|
||||
"S108", # hardcoded-temp-file
|
||||
"S306", # suspicious-mktemp-usage
|
||||
"S307", # suspicious-eval-usage
|
||||
"S313", # suspicious-xmlc-element-tree-usage
|
||||
"S314", # suspicious-xml-element-tree-usage
|
||||
"S315", # suspicious-xml-expat-reader-usage
|
||||
"S316", # suspicious-xml-expat-builder-usage
|
||||
"S317", # suspicious-xml-sax-usage
|
||||
"S318", # suspicious-xml-mini-dom-usage
|
||||
"S319", # suspicious-xml-pull-dom-usage
|
||||
"S601", # paramiko-call
|
||||
"S602", # subprocess-popen-with-shell-equals-true
|
||||
"S604", # call-with-shell-equals-true
|
||||
"S608", # hardcoded-sql-expression
|
||||
"S609", # unix-command-wildcard-injection
|
||||
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
|
||||
"SIM117", # Merge with-statements that use the same scope
|
||||
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
|
||||
"SIM201", # Use {left} != {right} instead of not {left} == {right}
|
||||
"SIM208", # Use {expr} instead of not (not {expr})
|
||||
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
|
||||
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
|
||||
"SIM401", # Use get from dict with default instead of an if block
|
||||
"T100", # Trace found: {name} used
|
||||
"T20", # flake8-print
|
||||
"TID251", # Banned imports
|
||||
"TRY004", # Prefer TypeError exception for invalid type
|
||||
"TRY203", # Remove exception handler; error is immediately re-raised
|
||||
"UP", # pyupgrade
|
||||
"W", # pycodestyle
|
||||
"B002", # Python does not support the unary prefix increment
|
||||
"B007", # Loop control variable {name} not used within loop body
|
||||
"B014", # Exception handler with duplicate exception
|
||||
"B023", # Function definition does not bind loop variable {name}
|
||||
"B026", # Star-arg unpacking after a keyword argument is strongly discouraged
|
||||
"B904", # Use raise from to specify exception cause
|
||||
"C", # complexity
|
||||
"COM818", # Trailing comma on bare tuple prohibited
|
||||
"D", # docstrings
|
||||
"DTZ003", # Use datetime.now(tz=) instead of datetime.utcnow()
|
||||
"DTZ004", # Use datetime.fromtimestamp(ts, tz=) instead of datetime.utcfromtimestamp(ts)
|
||||
"E", # pycodestyle
|
||||
"F", # pyflakes/autoflake
|
||||
"G", # flake8-logging-format
|
||||
"I", # isort
|
||||
"ICN001", # import concentions; {name} should be imported as {asname}
|
||||
"N804", # First argument of a class method should be named cls
|
||||
"N805", # First argument of a method should be named self
|
||||
"N815", # Variable {name} in class scope should not be mixedCase
|
||||
"PGH004", # Use specific rule codes when using noqa
|
||||
"PLC0414", # Useless import alias. Import alias does not rename original package.
|
||||
"PLC", # pylint
|
||||
"PLE", # pylint
|
||||
"PLR", # pylint
|
||||
"PLW", # pylint
|
||||
"Q000", # Double quotes found but single quotes preferred
|
||||
"RUF006", # Store a reference to the return value of asyncio.create_task
|
||||
"S102", # Use of exec detected
|
||||
"S103", # bad-file-permissions
|
||||
"S108", # hardcoded-temp-file
|
||||
"S306", # suspicious-mktemp-usage
|
||||
"S307", # suspicious-eval-usage
|
||||
"S313", # suspicious-xmlc-element-tree-usage
|
||||
"S314", # suspicious-xml-element-tree-usage
|
||||
"S315", # suspicious-xml-expat-reader-usage
|
||||
"S316", # suspicious-xml-expat-builder-usage
|
||||
"S317", # suspicious-xml-sax-usage
|
||||
"S318", # suspicious-xml-mini-dom-usage
|
||||
"S319", # suspicious-xml-pull-dom-usage
|
||||
"S320", # suspicious-xmle-tree-usage
|
||||
"S601", # paramiko-call
|
||||
"S602", # subprocess-popen-with-shell-equals-true
|
||||
"S604", # call-with-shell-equals-true
|
||||
"S608", # hardcoded-sql-expression
|
||||
"S609", # unix-command-wildcard-injection
|
||||
"SIM105", # Use contextlib.suppress({exception}) instead of try-except-pass
|
||||
"SIM117", # Merge with-statements that use the same scope
|
||||
"SIM118", # Use {key} in {dict} instead of {key} in {dict}.keys()
|
||||
"SIM201", # Use {left} != {right} instead of not {left} == {right}
|
||||
"SIM208", # Use {expr} instead of not (not {expr})
|
||||
"SIM212", # Use {a} if {a} else {b} instead of {b} if not {a} else {a}
|
||||
"SIM300", # Yoda conditions. Use 'age == 42' instead of '42 == age'.
|
||||
"SIM401", # Use get from dict with default instead of an if block
|
||||
"T100", # Trace found: {name} used
|
||||
"T20", # flake8-print
|
||||
"TID251", # Banned imports
|
||||
"TRY004", # Prefer TypeError exception for invalid type
|
||||
"TRY203", # Remove exception handler; error is immediately re-raised
|
||||
"UP", # pyupgrade
|
||||
"W", # pycodestyle
|
||||
]
|
||||
|
||||
lint.ignore = [
|
||||
"D202", # No blank lines allowed after function docstring
|
||||
"D203", # 1 blank line required before class docstring
|
||||
"D213", # Multi-line docstring summary should start at the second line
|
||||
"D406", # Section name should end with a newline
|
||||
"D407", # Section name underlining
|
||||
"E501", # line too long
|
||||
"E731", # do not assign a lambda expression, use a def
|
||||
"D202", # No blank lines allowed after function docstring
|
||||
"D203", # 1 blank line required before class docstring
|
||||
"D213", # Multi-line docstring summary should start at the second line
|
||||
"D406", # Section name should end with a newline
|
||||
"D407", # Section name underlining
|
||||
"E501", # line too long
|
||||
"E731", # do not assign a lambda expression, use a def
|
||||
|
||||
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
|
||||
# be ignored anymore without warnings.
|
||||
# https://github.com/astral-sh/ruff/issues/7491
|
||||
# "PLC1901", # Lots of false positives
|
||||
# Ignore ignored, as the rule is now back in preview/nursery, which cannot
|
||||
# be ignored anymore without warnings.
|
||||
# https://github.com/astral-sh/ruff/issues/7491
|
||||
# "PLC1901", # Lots of false positives
|
||||
|
||||
# False positives https://github.com/astral-sh/ruff/issues/5386
|
||||
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
|
||||
"PLR0911", # Too many return statements ({returns} > {max_returns})
|
||||
"PLR0912", # Too many branches ({branches} > {max_branches})
|
||||
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
|
||||
"PLR0915", # Too many statements ({statements} > {max_statements})
|
||||
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
|
||||
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
|
||||
"UP006", # keep type annotation style as is
|
||||
"UP007", # keep type annotation style as is
|
||||
# False positives https://github.com/astral-sh/ruff/issues/5386
|
||||
"PLC0208", # Use a sequence type instead of a `set` when iterating over values
|
||||
"PLR0911", # Too many return statements ({returns} > {max_returns})
|
||||
"PLR0912", # Too many branches ({branches} > {max_branches})
|
||||
"PLR0913", # Too many arguments to function call ({c_args} > {max_args})
|
||||
"PLR0915", # Too many statements ({statements} > {max_statements})
|
||||
"PLR2004", # Magic value used in comparison, consider replacing {value} with a constant variable
|
||||
"PLW2901", # Outer {outer_kind} variable {name} overwritten by inner {inner_kind} target
|
||||
"UP006", # keep type annotation style as is
|
||||
"UP007", # keep type annotation style as is
|
||||
# Ignored due to performance: https://github.com/charliermarsh/ruff/issues/2923
|
||||
"UP038", # Use `X | Y` in `isinstance` call instead of `(X, Y)`
|
||||
|
||||
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
|
||||
"W191",
|
||||
"E111",
|
||||
"E114",
|
||||
"E117",
|
||||
"D206",
|
||||
"D300",
|
||||
"Q000",
|
||||
"Q001",
|
||||
"Q002",
|
||||
"Q003",
|
||||
"COM812",
|
||||
"COM819",
|
||||
"ISC001",
|
||||
"ISC002",
|
||||
# May conflict with the formatter, https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
|
||||
"W191",
|
||||
"E111",
|
||||
"E114",
|
||||
"E117",
|
||||
"D206",
|
||||
"D300",
|
||||
"Q000",
|
||||
"Q001",
|
||||
"Q002",
|
||||
"Q003",
|
||||
"COM812",
|
||||
"COM819",
|
||||
"ISC001",
|
||||
"ISC002",
|
||||
|
||||
# Disabled because ruff does not understand type of __all__ generated by a function
|
||||
"PLE0605",
|
||||
# Disabled because ruff does not understand type of __all__ generated by a function
|
||||
"PLE0605",
|
||||
]
|
||||
|
||||
[tool.ruff.lint.flake8-import-conventions.extend-aliases]
|
||||
@@ -357,11 +354,11 @@ fixture-parentheses = false
|
||||
[tool.ruff.lint.isort]
|
||||
force-sort-within-sections = true
|
||||
section-order = [
|
||||
"future",
|
||||
"standard-library",
|
||||
"third-party",
|
||||
"first-party",
|
||||
"local-folder",
|
||||
"future",
|
||||
"standard-library",
|
||||
"third-party",
|
||||
"first-party",
|
||||
"local-folder",
|
||||
]
|
||||
forced-separate = ["tests"]
|
||||
known-first-party = ["supervisor", "tests"]
|
||||
@@ -371,7 +368,7 @@ split-on-trailing-comma = false
|
||||
[tool.ruff.lint.per-file-ignores]
|
||||
|
||||
# DBus Service Mocks must use typing and names understood by dbus-fast
|
||||
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815", "UP037"]
|
||||
"tests/dbus_service_mocks/*.py" = ["F722", "F821", "N815"]
|
||||
|
||||
[tool.ruff.lint.mccabe]
|
||||
max-complexity = 25
|
||||
|
||||
@@ -1,29 +1,29 @@
|
||||
aiodns==4.0.0
|
||||
aiodocker==0.26.0
|
||||
aiohttp==3.13.5
|
||||
aiodns==3.2.0
|
||||
aiohttp==3.11.13
|
||||
atomicwrites-homeassistant==1.4.1
|
||||
attrs==26.1.0
|
||||
awesomeversion==25.8.0
|
||||
blockbuster==1.5.26
|
||||
brotli==1.2.0
|
||||
ciso8601==2.3.3
|
||||
colorlog==6.10.1
|
||||
attrs==25.1.0
|
||||
awesomeversion==24.6.0
|
||||
brotli==1.1.0
|
||||
ciso8601==2.3.2
|
||||
colorlog==6.9.0
|
||||
cpe==1.3.1
|
||||
cryptography==47.0.0
|
||||
debugpy==1.8.20
|
||||
cryptography==44.0.1
|
||||
debugpy==1.8.12
|
||||
deepmerge==2.0
|
||||
dirhash==0.5.0
|
||||
docker==7.1.0
|
||||
faust-cchardet==2.1.19
|
||||
gitpython==3.1.47
|
||||
jinja2==3.1.6
|
||||
log-rate-limit==1.4.2
|
||||
orjson==3.11.8
|
||||
gitpython==3.1.44
|
||||
jinja2==3.1.5
|
||||
orjson==3.10.12
|
||||
pulsectl==24.12.0
|
||||
pyudev==0.24.4
|
||||
PyYAML==6.0.3
|
||||
securetar==2026.4.1
|
||||
sentry-sdk==2.58.0
|
||||
setuptools==82.0.1
|
||||
voluptuous==0.16.0
|
||||
dbus-fast==4.0.4
|
||||
pyudev==0.24.3
|
||||
PyYAML==6.0.2
|
||||
requests==2.32.3
|
||||
securetar==2025.2.1
|
||||
sentry-sdk==2.22.0
|
||||
setuptools==75.8.2
|
||||
voluptuous==0.15.2
|
||||
dbus-fast==2.34.0
|
||||
typing_extensions==4.12.2
|
||||
zlib-fast==0.2.1
|
||||
|
||||
@@ -1,14 +1,13 @@
|
||||
astroid==4.0.3
|
||||
coverage==7.13.5
|
||||
mypy==1.20.2
|
||||
pre-commit==4.6.0
|
||||
pylint==4.0.5
|
||||
astroid==3.3.8
|
||||
coverage==7.6.12
|
||||
pre-commit==4.1.0
|
||||
pylint==3.3.4
|
||||
pytest-aiohttp==1.1.0
|
||||
pytest-asyncio==1.3.0
|
||||
pytest-cov==7.1.0
|
||||
pytest-timeout==2.4.0
|
||||
pytest==9.0.3
|
||||
ruff==0.15.12
|
||||
time-machine==3.2.0
|
||||
types-pyyaml==6.0.12.20260408
|
||||
urllib3==2.6.3
|
||||
pytest-asyncio==0.25.2
|
||||
pytest-cov==6.0.0
|
||||
pytest-timeout==2.3.1
|
||||
pytest==8.3.4
|
||||
ruff==0.9.8
|
||||
time-machine==2.16.0
|
||||
typing_extensions==4.12.2
|
||||
urllib3==2.3.0
|
||||
|
||||
@@ -1,30 +0,0 @@
|
||||
#!/usr/bin/env sh
|
||||
set -eu
|
||||
|
||||
# Used in venv activate script.
|
||||
# Would be an error if undefined.
|
||||
OSTYPE="${OSTYPE-}"
|
||||
|
||||
# Activate pyenv and virtualenv if present, then run the specified command
|
||||
|
||||
# pyenv, pyenv-virtualenv
|
||||
if [ -s .python-version ]; then
|
||||
PYENV_VERSION=$(head -n 1 .python-version)
|
||||
export PYENV_VERSION
|
||||
fi
|
||||
|
||||
if [ -n "${VIRTUAL_ENV-}" ] && [ -f "${VIRTUAL_ENV}/bin/activate" ]; then
|
||||
. "${VIRTUAL_ENV}/bin/activate"
|
||||
else
|
||||
# other common virtualenvs
|
||||
my_path=$(git rev-parse --show-toplevel)
|
||||
|
||||
for venv in venv .venv .; do
|
||||
if [ -f "${my_path}/${venv}/bin/activate" ]; then
|
||||
. "${my_path}/${venv}/bin/activate"
|
||||
break
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
exec "$@"
|
||||
10 setup.py
@@ -5,9 +5,7 @@ import re
|
||||
|
||||
from setuptools import setup
|
||||
|
||||
RE_SUPERVISOR_VERSION = re.compile(
|
||||
r'^SUPERVISOR_VERSION =\s*"?((?P<git_sha>[0-9a-f]{40})|[^"]+)"?$'
|
||||
)
|
||||
RE_SUPERVISOR_VERSION = re.compile(r"^SUPERVISOR_VERSION =\s*(.+)$")
|
||||
|
||||
SUPERVISOR_DIR = Path(__file__).parent
|
||||
REQUIREMENTS_FILE = SUPERVISOR_DIR / "requirements.txt"
|
||||
@@ -18,15 +16,13 @@ CONSTANTS = CONST_FILE.read_text(encoding="utf-8")
|
||||
|
||||
|
||||
def _get_supervisor_version():
|
||||
for line in CONSTANTS.split("\n"):
|
||||
for line in CONSTANTS.split("/n"):
|
||||
if match := RE_SUPERVISOR_VERSION.match(line):
|
||||
if git_sha := match.group("git_sha"):
|
||||
return f"9999.09.9.dev9999+{git_sha}"
|
||||
return match.group(1)
|
||||
return "9999.09.9.dev9999"
|
||||
|
||||
|
||||
setup(
|
||||
version=_get_supervisor_version(),
|
||||
dependencies=REQUIREMENTS.split("\n"),
|
||||
dependencies=REQUIREMENTS.split("/n"),
|
||||
)
|
||||
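The new `RE_SUPERVISOR_VERSION` pattern accepts either a quoted release version or a bare 40-character git SHA, and `_get_supervisor_version` turns a SHA into a dev version. A short demonstration of that regex on both forms; the pattern is copied from the diff, while the sample values are made up for illustration.

```python
"""Demonstration of the version regex introduced in setup.py above."""

import re

RE_SUPERVISOR_VERSION = re.compile(
    r'^SUPERVISOR_VERSION =\s*"?((?P<git_sha>[0-9a-f]{40})|[^"]+)"?$'
)

samples = (
    'SUPERVISOR_VERSION = "2025.03.1"',
    "SUPERVISOR_VERSION = " + "a" * 40,  # stand-in for a full git SHA
)

for line in samples:
    match = RE_SUPERVISOR_VERSION.match(line)
    assert match is not None
    if git_sha := match.group("git_sha"):
        # A bare commit SHA becomes a dev version, as in _get_supervisor_version.
        print(f"9999.09.9.dev9999+{git_sha}")
    else:
        print(match.group(1))
```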
|
||||
@@ -11,12 +11,10 @@ import zlib_fast
|
||||
# Enable fast zlib before importing supervisor
|
||||
zlib_fast.enable()
|
||||
|
||||
# pylint: disable=wrong-import-position
|
||||
from supervisor import bootstrap # noqa: E402
|
||||
from supervisor.utils.blockbuster import BlockBusterManager # noqa: E402
|
||||
from supervisor.utils.logging import activate_log_queue_handler # noqa: E402
|
||||
|
||||
# pylint: enable=wrong-import-position
|
||||
from supervisor import bootstrap # pylint: disable=wrong-import-position # noqa: E402
|
||||
from supervisor.utils.logging import ( # pylint: disable=wrong-import-position # noqa: E402
|
||||
activate_log_queue_handler,
|
||||
)
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -54,8 +52,6 @@ if __name__ == "__main__":
|
||||
_LOGGER.info("Initializing Supervisor setup")
|
||||
coresys = loop.run_until_complete(bootstrap.initialize_coresys())
|
||||
loop.set_debug(coresys.config.debug)
|
||||
if coresys.config.detect_blocking_io:
|
||||
BlockBusterManager.activate()
|
||||
loop.run_until_complete(coresys.core.connect())
|
||||
|
||||
loop.run_until_complete(bootstrap.supervisor_debugger(coresys))
|
||||
@@ -66,28 +62,8 @@ if __name__ == "__main__":
|
||||
_LOGGER.info("Setting up Supervisor")
|
||||
loop.run_until_complete(coresys.core.setup())
|
||||
|
||||
# Create startup task that can be cancelled gracefully
|
||||
startup_task = loop.create_task(coresys.core.start())
|
||||
|
||||
def shutdown_handler() -> None:
|
||||
"""Handle shutdown signals gracefully during startup."""
|
||||
if not startup_task.done():
|
||||
_LOGGER.warning("Supervisor startup interrupted by shutdown signal")
|
||||
startup_task.cancel()
|
||||
|
||||
coresys.create_task(coresys.core.stop())
|
||||
|
||||
bootstrap.register_signal_handlers(loop, shutdown_handler)
|
||||
|
||||
try:
|
||||
loop.run_until_complete(startup_task)
|
||||
except asyncio.CancelledError:
|
||||
_LOGGER.warning("Supervisor startup cancelled")
|
||||
except Exception as err: # pylint: disable=broad-except
|
||||
# Supervisor itself is running at this point, just something didn't
|
||||
# start as expected. Log with traceback to get more insights for
|
||||
# such cases.
|
||||
_LOGGER.critical("Supervisor start failed: %s", err, exc_info=True)
|
||||
loop.call_soon_threadsafe(loop.create_task, coresys.core.start())
|
||||
loop.call_soon_threadsafe(bootstrap.reg_signal, loop, coresys)
|
||||
|
||||
try:
|
||||
_LOGGER.info("Running Supervisor")
|
||||
|
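The rewritten startup sequence above wraps `coresys.core.start()` in a cancellable task and registers a signal handler that cancels it and schedules a clean stop. A minimal, stand-alone asyncio sketch of the same pattern follows; the `start()`/`stop()` coroutines are placeholders, not the Supervisor API.

```python
"""Minimal sketch of the cancellable-startup pattern used in __main__.py above."""

import asyncio
import signal


async def start() -> None:
    """Placeholder for coresys.core.start()."""
    await asyncio.sleep(3600)


async def stop() -> None:
    """Placeholder for coresys.core.stop()."""
    await asyncio.sleep(0)


async def main() -> None:
    loop = asyncio.get_running_loop()
    startup_task = asyncio.create_task(start())

    def shutdown_handler() -> None:
        # Cancel startup if it is still running, then schedule a clean stop.
        if not startup_task.done():
            startup_task.cancel()
        loop.create_task(stop())

    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(sig, shutdown_handler)

    try:
        await startup_task
    except asyncio.CancelledError:
        print("startup cancelled by shutdown signal")


if __name__ == "__main__":
    asyncio.run(main())
```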
||||
@@ -1 +1 @@
|
||||
"""Init file for Supervisor apps."""
|
||||
"""Init file for Supervisor add-ons."""
|
||||
|
||||
File diff suppressed because it is too large
@@ -1,304 +1,153 @@
|
||||
"""Supervisor app build environment."""
|
||||
"""Supervisor add-on build environment."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import base64
|
||||
from functools import cached_property
|
||||
import json
|
||||
import logging
|
||||
from pathlib import Path, PurePath
|
||||
from typing import TYPE_CHECKING, Any, Self
|
||||
from pathlib import Path
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from awesomeversion import AwesomeVersion
|
||||
import voluptuous as vol
|
||||
|
||||
from ..const import (
|
||||
ATTR_ARGS,
|
||||
ATTR_BUILD_FROM,
|
||||
ATTR_LABELS,
|
||||
ATTR_PASSWORD,
|
||||
ATTR_SQUASH,
|
||||
ATTR_USERNAME,
|
||||
FILE_SUFFIX_CONFIGURATION,
|
||||
LABEL_ARCH,
|
||||
LABEL_DESCRIPTION,
|
||||
LABEL_NAME,
|
||||
LABEL_TYPE,
|
||||
LABEL_URL,
|
||||
LABEL_VERSION,
|
||||
META_APP,
|
||||
SOCKET_DOCKER,
|
||||
CpuArch,
|
||||
META_ADDON,
|
||||
)
|
||||
from ..coresys import CoreSys, CoreSysAttributes
|
||||
from ..docker.const import DOCKER_HUB, DOCKER_HUB_LEGACY, DockerMount, MountType
|
||||
from ..docker.interface import MAP_ARCH
|
||||
from ..exceptions import (
|
||||
AppBuildArchitectureNotSupportedError,
|
||||
AppBuildDockerfileMissingError,
|
||||
ConfigurationFileError,
|
||||
HassioArchNotFound,
|
||||
)
|
||||
from ..utils.common import find_one_filetype, read_json_or_yaml_file
|
||||
from ..exceptions import ConfigurationFileError, HassioArchNotFound
|
||||
from ..utils.common import FileConfiguration, find_one_filetype
|
||||
from .validate import SCHEMA_BUILD_CONFIG
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .manager import AnyApp
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
from . import AnyAddon
|
||||
|
||||
|
||||
class AppBuild(CoreSysAttributes):
|
||||
"""Handle build options for apps."""
|
||||
class AddonBuild(FileConfiguration, CoreSysAttributes):
|
||||
"""Handle build options for add-ons."""
|
||||
|
||||
def __init__(self, coresys: CoreSys, app: AnyApp, data: dict[str, Any]) -> None:
|
||||
"""Initialize Supervisor app builder."""
|
||||
def __init__(self, coresys: CoreSys, addon: AnyAddon) -> None:
|
||||
"""Initialize Supervisor add-on builder."""
|
||||
self.coresys: CoreSys = coresys
|
||||
self.app = app
|
||||
self._build_config: dict[str, Any] = data
|
||||
self.addon = addon
|
||||
|
||||
@classmethod
|
||||
async def create(cls, coresys: CoreSys, app: AnyApp) -> Self:
|
||||
"""Create an AppBuild by reading the build configuration from disk."""
|
||||
data = await coresys.run_in_executor(cls._read_build_config, app)
|
||||
# Search for build file later in executor
|
||||
super().__init__(None, SCHEMA_BUILD_CONFIG)
|
||||
|
||||
if data:
|
||||
_LOGGER.warning(
|
||||
"App %s uses build.yaml which is deprecated. "
|
||||
"Move build parameters into the Dockerfile directly.",
|
||||
app.slug,
|
||||
)
|
||||
|
||||
if data[ATTR_SQUASH]:
|
||||
_LOGGER.warning(
|
||||
"Ignoring squash build option for %s as Docker BuildKit"
|
||||
" does not support it.",
|
||||
app.slug,
|
||||
)
|
||||
|
||||
return cls(coresys, app, data or {})
|
||||
|
||||
@staticmethod
|
||||
def _read_build_config(app: AnyApp) -> dict[str, Any] | None:
|
||||
"""Find and read the build configuration file.
|
||||
def _get_build_file(self) -> Path:
|
||||
"""Get build file.
|
||||
|
||||
Must be run in executor.
|
||||
"""
|
||||
try:
|
||||
build_file = find_one_filetype(
|
||||
app.path_location, "build", FILE_SUFFIX_CONFIGURATION
|
||||
return find_one_filetype(
|
||||
self.addon.path_location, "build", FILE_SUFFIX_CONFIGURATION
|
||||
)
|
||||
except ConfigurationFileError:
|
||||
# No build config file found, assuming modernized build
|
||||
return None
|
||||
return self.addon.path_location / "build.json"
|
||||
|
||||
try:
|
||||
raw = read_json_or_yaml_file(build_file)
|
||||
build_config = SCHEMA_BUILD_CONFIG(raw)
|
||||
except ConfigurationFileError as ex:
|
||||
_LOGGER.exception(
|
||||
"Error reading %s build config (%s), using defaults",
|
||||
app.slug,
|
||||
ex,
|
||||
)
|
||||
build_config = SCHEMA_BUILD_CONFIG({})
|
||||
except vol.Invalid as ex:
|
||||
_LOGGER.warning(
|
||||
"Error parsing %s build config (%s), using defaults", app.slug, ex
|
||||
)
|
||||
build_config = SCHEMA_BUILD_CONFIG({})
|
||||
async def read_data(self) -> None:
|
||||
"""Load data from file."""
|
||||
if not self._file:
|
||||
self._file = await self.sys_run_in_executor(self._get_build_file)
|
||||
|
||||
# Default base image is passed in BUILD_FROM only when build.yaml is used
|
||||
# (this is legacy behavior - without build config, Dockerfile should specify it)
|
||||
if not build_config[ATTR_BUILD_FROM]:
|
||||
build_config[ATTR_BUILD_FROM] = "ghcr.io/home-assistant/base:latest"
|
||||
await super().read_data()
|
||||
|
||||
return build_config
|
||||
async def save_data(self):
|
||||
"""Ignore save function."""
|
||||
raise RuntimeError()
|
||||
|
||||
@cached_property
|
||||
def arch(self) -> CpuArch:
|
||||
"""Return arch of the app."""
|
||||
return self.sys_arch.match([self.app.arch])
|
||||
def arch(self) -> str:
|
||||
"""Return arch of the add-on."""
|
||||
return self.sys_arch.match(self.addon.arch)
|
||||
|
||||
@property
|
||||
def base_image(self) -> str | None:
|
||||
"""Return base image for this app, or None to use Dockerfile default."""
|
||||
# No build config (otherwise default is coerced when reading the config)
|
||||
if not self._build_config.get(ATTR_BUILD_FROM):
|
||||
return None
|
||||
def base_image(self) -> str:
|
||||
"""Return base image for this add-on."""
|
||||
if not self._data[ATTR_BUILD_FROM]:
|
||||
return f"ghcr.io/home-assistant/{self.sys_arch.default}-base:latest"
|
||||
|
||||
# Single base image in build config
|
||||
if isinstance(self._build_config[ATTR_BUILD_FROM], str):
|
||||
return self._build_config[ATTR_BUILD_FROM]
|
||||
if isinstance(self._data[ATTR_BUILD_FROM], str):
|
||||
return self._data[ATTR_BUILD_FROM]
|
||||
|
||||
# Dict - per-arch base images in build config
|
||||
if self.arch not in self._build_config[ATTR_BUILD_FROM]:
|
||||
# Evaluate correct base image
|
||||
if self.arch not in self._data[ATTR_BUILD_FROM]:
|
||||
raise HassioArchNotFound(
|
||||
f"App {self.app.slug} is not supported on {self.arch}"
|
||||
f"Add-on {self.addon.slug} is not supported on {self.arch}"
|
||||
)
|
||||
return self._build_config[ATTR_BUILD_FROM][self.arch]
|
||||
return self._data[ATTR_BUILD_FROM][self.arch]
|
||||
|
||||
@property
|
||||
def dockerfile(self) -> Path:
|
||||
"""Return Dockerfile path."""
|
||||
if self.addon.path_location.joinpath(f"Dockerfile.{self.arch}").exists():
|
||||
return self.addon.path_location.joinpath(f"Dockerfile.{self.arch}")
|
||||
return self.addon.path_location.joinpath("Dockerfile")
|
||||
|
||||
@property
|
||||
def squash(self) -> bool:
|
||||
"""Return True or False if squash is active."""
|
||||
return self._data[ATTR_SQUASH]
|
||||
|
||||
@property
|
||||
def additional_args(self) -> dict[str, str]:
|
||||
"""Return additional Docker build arguments."""
|
||||
return self._build_config.get(ATTR_ARGS, {})
|
||||
return self._data[ATTR_ARGS]
|
||||
|
||||
@property
|
||||
def additional_labels(self) -> dict[str, str]:
|
||||
"""Return additional Docker labels."""
|
||||
return self._build_config.get(ATTR_LABELS, {})
|
||||
return self._data[ATTR_LABELS]
|
||||
|
||||
def get_dockerfile(self) -> Path:
|
||||
"""Return Dockerfile path.
|
||||
|
||||
Must be run in executor.
|
||||
"""
|
||||
if self.app.path_location.joinpath(f"Dockerfile.{self.arch}").exists():
|
||||
return self.app.path_location.joinpath(f"Dockerfile.{self.arch}")
|
||||
return self.app.path_location.joinpath("Dockerfile")
|
||||
|
||||
async def is_valid(self) -> None:
|
||||
@property
|
||||
def is_valid(self) -> bool:
|
||||
"""Return true if the build env is valid."""
|
||||
|
||||
def build_is_valid() -> bool:
|
||||
try:
|
||||
return all(
|
||||
[
|
||||
self.app.path_location.is_dir(),
|
||||
self.get_dockerfile().is_file(),
|
||||
self.addon.path_location.is_dir(),
|
||||
self.dockerfile.is_file(),
|
||||
]
|
||||
)
|
||||
|
||||
try:
|
||||
if not await self.sys_run_in_executor(build_is_valid):
|
||||
raise AppBuildDockerfileMissingError(_LOGGER.error, app=self.app.slug)
|
||||
except HassioArchNotFound:
|
||||
raise AppBuildArchitectureNotSupportedError(
|
||||
_LOGGER.error,
|
||||
app=self.app.slug,
|
||||
app_arch_list=self.app.supported_arch,
|
||||
system_arch_list=[arch.value for arch in self.sys_arch.supported],
|
||||
) from None
|
||||
return False
|
||||
|
||||
def _registry_key(self, registry: str) -> str:
|
||||
"""Return the Docker config.json key for a registry."""
|
||||
if registry in (DOCKER_HUB, DOCKER_HUB_LEGACY):
|
||||
return "https://index.docker.io/v1/"
|
||||
return registry
|
||||
|
||||
def _registry_auth(self, registry: str) -> str:
|
||||
"""Return base64-encoded auth string for a registry."""
|
||||
stored = self.sys_docker.config.registries[registry]
|
||||
return base64.b64encode(
|
||||
f"{stored[ATTR_USERNAME]}:{stored[ATTR_PASSWORD]}".encode()
|
||||
).decode()
|
||||
|
||||
def get_docker_config_json(self) -> str | None:
|
||||
"""Generate Docker config.json content with all configured registry credentials.
|
||||
|
||||
Returns a JSON string with registry credentials, or None if no registries
|
||||
are configured.
|
||||
"""
|
||||
if not self.sys_docker.config.registries:
|
||||
return None
|
||||
|
||||
auths = {
|
||||
self._registry_key(registry): {"auth": self._registry_auth(registry)}
|
||||
for registry in self.sys_docker.config.registries
|
||||
}
|
||||
return json.dumps({"auths": auths})
|
||||
|
||||
def get_docker_args(
|
||||
self, version: AwesomeVersion, image_tag: str, docker_config_path: Path | None
|
||||
) -> dict[str, Any]:
|
||||
"""Create a dict with Docker run args."""
|
||||
dockerfile_path = self.get_dockerfile().relative_to(self.app.path_location)
|
||||
|
||||
build_cmd = [
|
||||
"docker",
|
||||
"buildx",
|
||||
"build",
|
||||
".",
|
||||
"--tag",
|
||||
image_tag,
|
||||
"--file",
|
||||
str(dockerfile_path),
|
||||
"--platform",
|
||||
MAP_ARCH[self.arch],
|
||||
"--pull",
|
||||
]
|
||||
|
||||
labels = {
|
||||
LABEL_VERSION: version,
|
||||
LABEL_ARCH: self.arch,
|
||||
LABEL_TYPE: META_APP,
|
||||
**self.additional_labels,
|
||||
def get_docker_args(self, version: AwesomeVersion, image: str | None = None):
|
||||
"""Create a dict with Docker build arguments."""
|
||||
args = {
|
||||
"path": str(self.addon.path_location),
|
||||
"tag": f"{image or self.addon.image}:{version!s}",
|
||||
"dockerfile": str(self.dockerfile),
|
||||
"pull": True,
|
||||
"forcerm": not self.sys_dev,
|
||||
"squash": self.squash,
|
||||
"platform": MAP_ARCH[self.arch],
|
||||
"labels": {
|
||||
"io.hass.version": version,
|
||||
"io.hass.arch": self.arch,
|
||||
"io.hass.type": META_ADDON,
|
||||
"io.hass.name": self._fix_label("name"),
|
||||
"io.hass.description": self._fix_label("description"),
|
||||
**self.additional_labels,
|
||||
},
|
||||
"buildargs": {
|
||||
"BUILD_FROM": self.base_image,
|
||||
"BUILD_VERSION": version,
|
||||
"BUILD_ARCH": self.sys_arch.default,
|
||||
**self.additional_args,
|
||||
},
|
||||
}
|
||||
|
||||
# Set name only if non-empty, could have been set in Dockerfile
|
||||
if name := self._fix_label("name"):
|
||||
labels[LABEL_NAME] = name
|
||||
if self.addon.url:
|
||||
args["labels"]["io.hass.url"] = self.addon.url
|
||||
|
||||
# Set description only if non-empty, could have been set in Dockerfile
|
||||
if description := self._fix_label("description"):
|
||||
labels[LABEL_DESCRIPTION] = description
|
||||
|
||||
if self.app.url:
|
||||
labels[LABEL_URL] = self.app.url
|
||||
|
||||
for key, value in labels.items():
|
||||
build_cmd.extend(["--label", f"{key}={value}"])
|
||||
|
||||
build_args = {
|
||||
"BUILD_VERSION": version,
|
||||
"BUILD_ARCH": self.arch,
|
||||
**self.additional_args,
|
||||
}
|
||||
|
||||
if self.base_image is not None:
|
||||
build_args["BUILD_FROM"] = self.base_image
|
||||
|
||||
for key, value in build_args.items():
|
||||
build_cmd.extend(["--build-arg", f"{key}={value}"])
|
||||
|
||||
# The app path will be mounted from the host system
|
||||
app_extern_path = self.sys_config.local_to_extern_path(self.app.path_location)
|
||||
|
||||
mounts = [
|
||||
DockerMount(
|
||||
type=MountType.BIND,
|
||||
source=SOCKET_DOCKER.as_posix(),
|
||||
target="/var/run/docker.sock",
|
||||
read_only=False,
|
||||
),
|
||||
DockerMount(
|
||||
type=MountType.BIND,
|
||||
source=app_extern_path.as_posix(),
|
||||
target="/addon",
|
||||
read_only=True,
|
||||
),
|
||||
]
|
||||
|
||||
# Mount Docker config with registry credentials if available
|
||||
if docker_config_path:
|
||||
docker_config_extern_path = self.sys_config.local_to_extern_path(
|
||||
docker_config_path
|
||||
)
|
||||
mounts.append(
|
||||
DockerMount(
|
||||
type=MountType.BIND,
|
||||
source=docker_config_extern_path.as_posix(),
|
||||
target="/root/.docker/config.json",
|
||||
read_only=True,
|
||||
)
|
||||
)
|
||||
|
||||
return {
|
||||
"command": build_cmd,
|
||||
"mounts": mounts,
|
||||
"working_dir": PurePath("/addon"),
|
||||
}
|
||||
return args
|
||||
|
||||
def _fix_label(self, label_name: str) -> str:
|
||||
"""Remove characters they are not supported."""
|
||||
label = getattr(self.app, label_name, "")
|
||||
label = getattr(self.addon, label_name, "")
|
||||
return label.replace("'", "")
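Taken together, the buildx variant of `get_docker_args` hands back a ready-to-run command plus the bind mounts it expects. The Supervisor executes it inside a helper container with the add-on directory mounted at `/addon`; a rough local sketch of consuming such a command (image tag, platform, labels and paths are invented for illustration) could look like this:

```python
import subprocess

# Hypothetical values corresponding to what get_docker_args would assemble.
build_cmd = [
    "docker", "buildx", "build", ".",
    "--tag", "local/aarch64-addon-example:1.2.3",
    "--file", "Dockerfile",
    "--platform", "linux/arm64",
    "--pull",
    "--label", "io.hass.version=1.2.3",
    "--build-arg", "BUILD_ARCH=aarch64",
]

# Locally the same command can simply be run from the add-on directory;
# in the Supervisor the directory is bind-mounted read-only at /addon.
subprocess.run(build_cmd, cwd="/path/to/addon", check=True)
```
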
|
||||
|
||||
@@ -1,4 +1,4 @@
"""Configuration Objects for App Config."""
"""Configuration Objects for Addon Config."""

from dataclasses import dataclass
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
"""App static data."""
|
||||
"""Add-on static data."""
|
||||
|
||||
from datetime import timedelta
|
||||
from enum import StrEnum
|
||||
@@ -6,15 +6,15 @@ from enum import StrEnum
|
||||
from ..jobs.const import JobCondition
|
||||
|
||||
|
||||
class AppBackupMode(StrEnum):
|
||||
"""Backup mode of an App."""
|
||||
class AddonBackupMode(StrEnum):
|
||||
"""Backup mode of an Add-on."""
|
||||
|
||||
HOT = "hot"
|
||||
COLD = "cold"
|
||||
|
||||
|
||||
class MappingType(StrEnum):
|
||||
"""Mapping type of an App Folder."""
|
||||
"""Mapping type of an Add-on Folder."""
|
||||
|
||||
DATA = "data"
|
||||
CONFIG = "config"
|
||||
@@ -38,7 +38,7 @@ WATCHDOG_MAX_ATTEMPTS = 5
|
||||
WATCHDOG_THROTTLE_PERIOD = timedelta(minutes=30)
|
||||
WATCHDOG_THROTTLE_MAX_CALLS = 10
|
||||
|
||||
APP_UPDATE_CONDITIONS = [
|
||||
ADDON_UPDATE_CONDITIONS = [
|
||||
JobCondition.FREE_SPACE,
|
||||
JobCondition.HEALTHY,
|
||||
JobCondition.INTERNET_HOST,
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
"""Init file for Supervisor app data."""
|
||||
"""Init file for Supervisor add-on data."""
|
||||
|
||||
from copy import deepcopy
|
||||
from typing import Any
|
||||
@@ -12,16 +12,16 @@ from ..const import (
|
||||
FILE_HASSIO_ADDONS,
|
||||
)
|
||||
from ..coresys import CoreSys, CoreSysAttributes
|
||||
from ..store.addon import AppStore
|
||||
from ..store.addon import AddonStore
|
||||
from ..utils.common import FileConfiguration
|
||||
from .addon import App
|
||||
from .addon import Addon
|
||||
from .validate import SCHEMA_ADDONS_FILE
|
||||
|
||||
Config = dict[str, Any]
|
||||
|
||||
|
||||
class AppsData(FileConfiguration, CoreSysAttributes):
|
||||
"""Hold data for installed Apps inside Supervisor."""
|
||||
class AddonsData(FileConfiguration, CoreSysAttributes):
|
||||
"""Hold data for installed Add-ons inside Supervisor."""
|
||||
|
||||
def __init__(self, coresys: CoreSys):
|
||||
"""Initialize data holder."""
|
||||
@@ -30,40 +30,42 @@ class AppsData(FileConfiguration, CoreSysAttributes):
|
||||
|
||||
@property
|
||||
def user(self):
|
||||
"""Return local app user data."""
|
||||
"""Return local add-on user data."""
|
||||
return self._data[ATTR_USER]
|
||||
|
||||
@property
|
||||
def system(self):
|
||||
"""Return local app data."""
|
||||
"""Return local add-on data."""
|
||||
return self._data[ATTR_SYSTEM]
|
||||
|
||||
async def install(self, app: AppStore) -> None:
|
||||
"""Set app as installed."""
|
||||
self.system[app.slug] = deepcopy(app.data)
|
||||
self.user[app.slug] = {
|
||||
async def install(self, addon: AddonStore) -> None:
|
||||
"""Set addon as installed."""
|
||||
self.system[addon.slug] = deepcopy(addon.data)
|
||||
self.user[addon.slug] = {
|
||||
ATTR_OPTIONS: {},
|
||||
ATTR_VERSION: app.version,
|
||||
ATTR_IMAGE: app.image,
|
||||
ATTR_VERSION: addon.version,
|
||||
ATTR_IMAGE: addon.image,
|
||||
}
|
||||
await self.save_data()
|
||||
|
||||
async def uninstall(self, app: App) -> None:
|
||||
"""Set app as uninstalled."""
|
||||
self.system.pop(app.slug, None)
|
||||
self.user.pop(app.slug, None)
|
||||
async def uninstall(self, addon: Addon) -> None:
|
||||
"""Set add-on as uninstalled."""
|
||||
self.system.pop(addon.slug, None)
|
||||
self.user.pop(addon.slug, None)
|
||||
await self.save_data()
|
||||
|
||||
async def update(self, app: AppStore) -> None:
|
||||
"""Update version of app."""
|
||||
self.system[app.slug] = deepcopy(app.data)
|
||||
self.user[app.slug].update({ATTR_VERSION: app.version, ATTR_IMAGE: app.image})
|
||||
async def update(self, addon: AddonStore) -> None:
|
||||
"""Update version of add-on."""
|
||||
self.system[addon.slug] = deepcopy(addon.data)
|
||||
self.user[addon.slug].update(
|
||||
{ATTR_VERSION: addon.version, ATTR_IMAGE: addon.image}
|
||||
)
|
||||
await self.save_data()
|
||||
|
||||
async def restore(
|
||||
self, slug: str, user: Config, system: Config, image: str
|
||||
) -> None:
|
||||
"""Restore data to app."""
|
||||
"""Restore data to add-on."""
|
||||
self.user[slug] = deepcopy(user)
|
||||
self.system[slug] = deepcopy(system)
|
||||
|
||||
|
||||
@@ -1,82 +1,77 @@
|
||||
"""Supervisor app manager."""
|
||||
"""Supervisor add-on manager."""
|
||||
|
||||
import asyncio
|
||||
from collections.abc import Awaitable
|
||||
from contextlib import suppress
|
||||
import logging
|
||||
import tarfile
|
||||
from typing import Self, Union
|
||||
|
||||
from attr import evolve
|
||||
from securetar import SecureTarFile
|
||||
|
||||
from ..const import AppBoot, AppStartup, AppState
|
||||
from ..const import AddonBoot, AddonStartup, AddonState
|
||||
from ..coresys import CoreSys, CoreSysAttributes
|
||||
from ..exceptions import (
|
||||
AppNotSupportedError,
|
||||
AppsError,
|
||||
AppsJobError,
|
||||
AddonsError,
|
||||
AddonsJobError,
|
||||
AddonsNotSupportedError,
|
||||
CoreDNSError,
|
||||
DockerError,
|
||||
HassioError,
|
||||
HomeAssistantAPIError,
|
||||
)
|
||||
from ..jobs import ChildJobSyncFilter
|
||||
from ..jobs.const import JobConcurrency
|
||||
from ..jobs.decorator import Job, JobCondition
|
||||
from ..resolution.const import ContextType, IssueType, SuggestionType, UnhealthyReason
|
||||
from ..store.addon import AppStore
|
||||
from ..resolution.const import ContextType, IssueType, SuggestionType
|
||||
from ..store.addon import AddonStore
|
||||
from ..utils.sentry import async_capture_exception
|
||||
from .addon import App
|
||||
from .const import APP_UPDATE_CONDITIONS
|
||||
from .data import AppsData
|
||||
from .addon import Addon
|
||||
from .const import ADDON_UPDATE_CONDITIONS
|
||||
from .data import AddonsData
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
AnyApp = Union[App, AppStore]
|
||||
AnyAddon = Union[Addon, AddonStore]
|
||||
|
||||
|
||||
class AppManager(CoreSysAttributes):
|
||||
"""Manage apps inside Supervisor."""
|
||||
class AddonManager(CoreSysAttributes):
|
||||
"""Manage add-ons inside Supervisor."""
|
||||
|
||||
def __init__(self, coresys: CoreSys):
|
||||
"""Initialize Docker base wrapper."""
|
||||
self.coresys: CoreSys = coresys
|
||||
self.data: AppsData = AppsData(coresys)
|
||||
self.local: dict[str, App] = {}
|
||||
self.store: dict[str, AppStore] = {}
|
||||
self.data: AddonsData = AddonsData(coresys)
|
||||
self.local: dict[str, Addon] = {}
|
||||
self.store: dict[str, AddonStore] = {}
|
||||
|
||||
@property
|
||||
def all(self) -> list[AnyApp]:
|
||||
"""Return a list of all apps."""
|
||||
apps: dict[str, AnyApp] = {**self.store, **self.local}
|
||||
return list(apps.values())
|
||||
def all(self) -> list[AnyAddon]:
|
||||
"""Return a list of all add-ons."""
|
||||
addons: dict[str, AnyAddon] = {**self.store, **self.local}
|
||||
return list(addons.values())
|
||||
|
||||
@property
|
||||
def installed(self) -> list[App]:
|
||||
"""Return a list of all installed apps."""
|
||||
def installed(self) -> list[Addon]:
|
||||
"""Return a list of all installed add-ons."""
|
||||
return list(self.local.values())
|
||||
|
||||
def get(self, app_slug: str, local_only: bool = False) -> AnyApp | None:
|
||||
"""Return an app from slug.
|
||||
def get(self, addon_slug: str, local_only: bool = False) -> AnyAddon | None:
|
||||
"""Return an add-on from slug.
|
||||
|
||||
Prio:
|
||||
1 - Local
|
||||
2 - Store
|
||||
"""
|
||||
if app_slug in self.local:
|
||||
return self.local[app_slug]
|
||||
if addon_slug in self.local:
|
||||
return self.local[addon_slug]
|
||||
if not local_only:
|
||||
return self.store.get(app_slug)
|
||||
return self.store.get(addon_slug)
|
||||
return None
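In short, an installed add-on always shadows its store counterpart with the same slug. A self-contained sketch of that priority, using plain dicts in place of `self.local` and `self.store` (slugs are invented):

```python
local = {"core_ssh": "Addon(core_ssh)"}
store = {"core_ssh": "AddonStore(core_ssh)", "whoami": "AddonStore(whoami)"}

def get(slug, local_only=False):
    if slug in local:
        return local[slug]      # 1 - Local wins
    if not local_only:
        return store.get(slug)  # 2 - Fall back to the store
    return None

assert get("core_ssh") == "Addon(core_ssh)"      # installed copy shadows store
assert get("whoami") == "AddonStore(whoami)"     # store-only add-on
assert get("whoami", local_only=True) is None    # store ignored when local_only
```
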
|
||||
|
||||
def get_local_only(self, app_slug: str) -> App | None:
|
||||
"""Return an installed app from slug."""
|
||||
return self.local.get(app_slug)
|
||||
|
||||
def from_token(self, token: str) -> App | None:
|
||||
"""Return an app from Supervisor token."""
|
||||
for app in self.installed:
|
||||
if token == app.supervisor_token:
|
||||
return app
|
||||
def from_token(self, token: str) -> Addon | None:
|
||||
"""Return an add-on from Supervisor token."""
|
||||
for addon in self.installed:
|
||||
if token == addon.supervisor_token:
|
||||
return addon
|
||||
return None
|
||||
|
||||
async def load_config(self) -> Self:
|
||||
@@ -85,61 +80,50 @@ class AppManager(CoreSysAttributes):
|
||||
return self
|
||||
|
||||
async def load(self) -> None:
|
||||
"""Start up app management."""
|
||||
# Refresh cache for all store apps
|
||||
"""Start up add-on management."""
|
||||
# Refresh cache for all store addons
|
||||
tasks: list[Awaitable[None]] = [
|
||||
store.refresh_path_cache() for store in self.store.values()
|
||||
]
|
||||
|
||||
# Load all installed apps
|
||||
# Load all installed addons
|
||||
for slug in self.data.system:
|
||||
app = self.local[slug] = App(self.coresys, slug)
|
||||
tasks.append(app.load())
|
||||
addon = self.local[slug] = Addon(self.coresys, slug)
|
||||
tasks.append(addon.load())
|
||||
|
||||
# Run initial tasks
|
||||
_LOGGER.info("Found %d installed apps", len(self.data.system))
|
||||
_LOGGER.info("Found %d installed add-ons", len(self.data.system))
|
||||
if tasks:
|
||||
await asyncio.gather(*tasks)
|
||||
|
||||
# Sync DNS
|
||||
await self.sync_dns()
|
||||
|
||||
async def boot(self, stage: AppStartup) -> None:
|
||||
"""Boot apps with mode auto."""
|
||||
tasks: list[App] = []
|
||||
for app in self.installed:
|
||||
if app.boot != AppBoot.AUTO or app.startup != stage:
|
||||
async def boot(self, stage: AddonStartup) -> None:
|
||||
"""Boot add-ons with mode auto."""
|
||||
tasks: list[Addon] = []
|
||||
for addon in self.installed:
|
||||
if addon.boot != AddonBoot.AUTO or addon.startup != stage:
|
||||
continue
|
||||
if (
|
||||
app.host_network
|
||||
and UnhealthyReason.DOCKER_GATEWAY_UNPROTECTED
|
||||
in self.sys_resolution.unhealthy
|
||||
):
|
||||
_LOGGER.warning(
|
||||
"Skipping boot of app %s because gateway firewall"
|
||||
" rules are not active",
|
||||
app.slug,
|
||||
)
|
||||
continue
|
||||
tasks.append(app)
|
||||
tasks.append(addon)
|
||||
|
||||
# Evaluate apps which need to be started
|
||||
_LOGGER.info("Phase '%s' starting %d apps", stage, len(tasks))
|
||||
# Evaluate add-ons which need to be started
|
||||
_LOGGER.info("Phase '%s' starting %d add-ons", stage, len(tasks))
|
||||
if not tasks:
|
||||
return
|
||||
|
||||
# Start Apps sequential
|
||||
# Start Add-ons sequential
|
||||
# avoid issue on slow IO
|
||||
# Config.wait_boot is deprecated. Until apps update with healthchecks,
|
||||
# Config.wait_boot is deprecated. Until addons update with healthchecks,
|
||||
# add a sleep task for it to keep the same minimum amount of wait time
|
||||
wait_boot: list[Awaitable[None]] = [asyncio.sleep(self.sys_config.wait_boot)]
|
||||
for app in tasks:
|
||||
for addon in tasks:
|
||||
try:
|
||||
if start_task := await app.start():
|
||||
if start_task := await addon.start():
|
||||
wait_boot.append(start_task)
|
||||
except HassioError:
|
||||
self.sys_resolution.add_issue(
|
||||
evolve(app.boot_failed_issue),
|
||||
evolve(addon.boot_failed_issue),
|
||||
suggestions=[
|
||||
SuggestionType.EXECUTE_START,
|
||||
SuggestionType.DISABLE_BOOT,
|
||||
@@ -148,152 +132,125 @@ class AppManager(CoreSysAttributes):
|
||||
else:
|
||||
continue
|
||||
|
||||
_LOGGER.warning("Can't start app %s", app.slug)
|
||||
_LOGGER.warning("Can't start Add-on %s", addon.slug)
|
||||
|
||||
# Ignore exceptions from waiting for app startup, app errors handled elsewhere
|
||||
# Ignore exceptions from waiting for addon startup, addon errors handled elsewhere
|
||||
await asyncio.gather(*wait_boot, return_exceptions=True)
|
||||
|
||||
# After waiting for startup, create an issue for boot apps that are error or unknown state
|
||||
# Ignore stopped as single shot apps can be run at boot and this is successful exit
|
||||
# Timeout waiting for startup is not a failure, app is probably just slow
|
||||
for app in tasks:
|
||||
if app.state in {AppState.ERROR, AppState.UNKNOWN}:
|
||||
# After waiting for startup, create an issue for boot addons that are error or unknown state
|
||||
# Ignore stopped as single shot addons can be run at boot and this is successful exit
|
||||
# Timeout waiting for startup is not a failure, addon is probably just slow
|
||||
for addon in tasks:
|
||||
if addon.state in {AddonState.ERROR, AddonState.UNKNOWN}:
|
||||
self.sys_resolution.add_issue(
|
||||
evolve(app.boot_failed_issue),
|
||||
evolve(addon.boot_failed_issue),
|
||||
suggestions=[
|
||||
SuggestionType.EXECUTE_START,
|
||||
SuggestionType.DISABLE_BOOT,
|
||||
],
|
||||
)
|
||||
|
||||
async def shutdown(self, stage: AppStartup) -> None:
|
||||
"""Shutdown apps."""
|
||||
tasks: list[App] = []
|
||||
for app in self.installed:
|
||||
if app.state != AppState.STARTED or app.startup != stage:
|
||||
async def shutdown(self, stage: AddonStartup) -> None:
|
||||
"""Shutdown addons."""
|
||||
tasks: list[Addon] = []
|
||||
for addon in self.installed:
|
||||
if addon.state != AddonState.STARTED or addon.startup != stage:
|
||||
continue
|
||||
tasks.append(app)
|
||||
tasks.append(addon)
|
||||
|
||||
# Evaluate apps which need to be stopped
|
||||
_LOGGER.info("Phase '%s' stopping %d apps", stage, len(tasks))
|
||||
# Evaluate add-ons which need to be stopped
|
||||
_LOGGER.info("Phase '%s' stopping %d add-ons", stage, len(tasks))
|
||||
if not tasks:
|
||||
return
|
||||
|
||||
# Stop Apps sequential
|
||||
# Stop Add-ons sequential
|
||||
# avoid issue on slow IO
|
||||
for app in tasks:
|
||||
for addon in tasks:
|
||||
try:
|
||||
await app.stop()
|
||||
await addon.stop()
|
||||
except Exception as err: # pylint: disable=broad-except
|
||||
_LOGGER.warning("Can't stop app %s: %s", app.slug, err)
|
||||
_LOGGER.warning("Can't stop Add-on %s: %s", addon.slug, err)
|
||||
await async_capture_exception(err)
|
||||
|
||||
@Job(
|
||||
name="addon_manager_install",
|
||||
conditions=APP_UPDATE_CONDITIONS,
|
||||
on_condition=AppsJobError,
|
||||
concurrency=JobConcurrency.QUEUE,
|
||||
child_job_syncs=[
|
||||
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
|
||||
],
|
||||
conditions=ADDON_UPDATE_CONDITIONS,
|
||||
on_condition=AddonsJobError,
|
||||
)
|
||||
async def install(
|
||||
self, slug: str, *, validation_complete: asyncio.Event | None = None
|
||||
) -> None:
|
||||
"""Install an app."""
|
||||
async def install(self, slug: str) -> None:
|
||||
"""Install an add-on."""
|
||||
self.sys_jobs.current.reference = slug
|
||||
|
||||
if slug in self.local:
|
||||
raise AppsError(f"App {slug} is already installed", _LOGGER.warning)
|
||||
raise AddonsError(f"Add-on {slug} is already installed", _LOGGER.warning)
|
||||
store = self.store.get(slug)
|
||||
|
||||
if not store:
|
||||
raise AppsError(f"App {slug} does not exist", _LOGGER.error)
|
||||
raise AddonsError(f"Add-on {slug} does not exist", _LOGGER.error)
|
||||
|
||||
store.validate_availability()
|
||||
|
||||
# If being run in the background, notify caller that validation has completed
|
||||
if validation_complete:
|
||||
validation_complete.set()
|
||||
await Addon(self.coresys, slug).install()
|
||||
|
||||
await App(self.coresys, slug).install()
|
||||
_LOGGER.info("Add-on '%s' successfully installed", slug)
|
||||
|
||||
_LOGGER.info("App '%s' successfully installed", slug)
|
||||
|
||||
@Job(name="addon_manager_uninstall")
|
||||
async def uninstall(self, slug: str, *, remove_config: bool = False) -> None:
|
||||
"""Remove an app."""
|
||||
"""Remove an add-on."""
|
||||
if slug not in self.local:
|
||||
_LOGGER.warning("App %s is not installed", slug)
|
||||
_LOGGER.warning("Add-on %s is not installed", slug)
|
||||
return
|
||||
|
||||
shared_image = any(
|
||||
self.local[slug].image == app.image
|
||||
and self.local[slug].version == app.version
|
||||
for app in self.installed
|
||||
if app.slug != slug
|
||||
self.local[slug].image == addon.image
|
||||
and self.local[slug].version == addon.version
|
||||
for addon in self.installed
|
||||
if addon.slug != slug
|
||||
)
|
||||
await self.local[slug].uninstall(
|
||||
remove_config=remove_config, remove_image=not shared_image
|
||||
)
|
||||
|
||||
_LOGGER.info("App '%s' successfully removed", slug)
|
||||
_LOGGER.info("Add-on '%s' successfully removed", slug)
|
||||
|
||||
@Job(
|
||||
name="addon_manager_update",
|
||||
conditions=APP_UPDATE_CONDITIONS,
|
||||
on_condition=AppsJobError,
|
||||
# We assume for now the docker image pull is 100% of this task for progress
|
||||
# allocation. But from a user perspective that isn't true. Other steps
|
||||
# that take time which is not accounted for in progress include:
|
||||
# partial backup, image cleanup, apparmor update, and app restart
|
||||
child_job_syncs=[
|
||||
ChildJobSyncFilter("docker_interface_install", progress_allocation=1.0)
|
||||
],
|
||||
conditions=ADDON_UPDATE_CONDITIONS,
|
||||
on_condition=AddonsJobError,
|
||||
)
|
||||
async def update(
|
||||
self,
|
||||
slug: str,
|
||||
backup: bool | None = False,
|
||||
*,
|
||||
validation_complete: asyncio.Event | None = None,
|
||||
self, slug: str, backup: bool | None = False
|
||||
) -> asyncio.Task | None:
|
||||
"""Update app.
|
||||
"""Update add-on.
|
||||
|
||||
Returns a Task that completes when app has state 'started' (see app.start)
|
||||
if app is started after update. Else nothing is returned.
|
||||
Returns a Task that completes when addon has state 'started' (see addon.start)
|
||||
if addon is started after update. Else nothing is returned.
|
||||
"""
|
||||
self.sys_jobs.current.reference = slug
|
||||
|
||||
if slug not in self.local:
|
||||
raise AppsError(f"App {slug} is not installed", _LOGGER.error)
|
||||
app = self.local[slug]
|
||||
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
|
||||
addon = self.local[slug]
|
||||
|
||||
if app.is_detached:
|
||||
raise AppsError(f"App {slug} is not available inside store", _LOGGER.error)
|
||||
if addon.is_detached:
|
||||
raise AddonsError(
|
||||
f"Add-on {slug} is not available inside store", _LOGGER.error
|
||||
)
|
||||
store = self.store[slug]
|
||||
|
||||
if app.version == store.version:
|
||||
raise AppsError(f"No update available for app {slug}", _LOGGER.warning)
|
||||
if addon.version == store.version:
|
||||
raise AddonsError(f"No update available for add-on {slug}", _LOGGER.warning)
|
||||
|
||||
# Check if available, maybe something has changed
|
||||
store.validate_availability()
|
||||
|
||||
# If being run in the background, notify caller that validation has completed
|
||||
if validation_complete:
|
||||
validation_complete.set()
|
||||
|
||||
if backup:
|
||||
await self.sys_backups.do_backup_partial(
|
||||
name=f"addon_{app.slug}_{app.version}",
|
||||
name=f"addon_{addon.slug}_{addon.version}",
|
||||
homeassistant=False,
|
||||
apps=[app.slug],
|
||||
addons=[addon.slug],
|
||||
)
|
||||
|
||||
task = await app.update()
|
||||
|
||||
_LOGGER.info("App '%s' successfully updated", slug)
|
||||
return task
|
||||
return await addon.update()
|
||||
|
||||
@Job(
|
||||
name="addon_manager_rebuild",
|
||||
@@ -302,35 +259,37 @@ class AppManager(CoreSysAttributes):
|
||||
JobCondition.INTERNET_HOST,
|
||||
JobCondition.HEALTHY,
|
||||
],
|
||||
on_condition=AppsJobError,
|
||||
on_condition=AddonsJobError,
|
||||
)
|
||||
async def rebuild(self, slug: str, *, force: bool = False) -> asyncio.Task | None:
|
||||
"""Perform a rebuild of local build app.
|
||||
async def rebuild(self, slug: str) -> asyncio.Task | None:
|
||||
"""Perform a rebuild of local build add-on.
|
||||
|
||||
Returns a Task that completes when app has state 'started' (see app.start)
|
||||
if app is started after rebuild. Else nothing is returned.
|
||||
Returns a Task that completes when addon has state 'started' (see addon.start)
|
||||
if addon is started after rebuild. Else nothing is returned.
|
||||
"""
|
||||
self.sys_jobs.current.reference = slug
|
||||
|
||||
if slug not in self.local:
|
||||
raise AppsError(f"App {slug} is not installed", _LOGGER.error)
|
||||
app = self.local[slug]
|
||||
raise AddonsError(f"Add-on {slug} is not installed", _LOGGER.error)
|
||||
addon = self.local[slug]
|
||||
|
||||
if app.is_detached:
|
||||
raise AppsError(f"App {slug} is not available inside store", _LOGGER.error)
|
||||
if addon.is_detached:
|
||||
raise AddonsError(
|
||||
f"Add-on {slug} is not available inside store", _LOGGER.error
|
||||
)
|
||||
store = self.store[slug]
|
||||
|
||||
# Check if a rebuild is possible now
|
||||
if app.version != store.version:
|
||||
raise AppsError(
|
||||
if addon.version != store.version:
|
||||
raise AddonsError(
|
||||
"Version changed, use Update instead Rebuild", _LOGGER.error
|
||||
)
|
||||
if not force and not app.need_build:
|
||||
raise AppNotSupportedError(
|
||||
"Can't rebuild an image-based app", _LOGGER.error
|
||||
if not addon.need_build:
|
||||
raise AddonsNotSupportedError(
|
||||
"Can't rebuild a image based add-on", _LOGGER.error
|
||||
)
|
||||
|
||||
return await app.rebuild()
|
||||
return await addon.rebuild()
|
||||
|
||||
@Job(
|
||||
name="addon_manager_restore",
|
||||
@@ -339,36 +298,39 @@ class AppManager(CoreSysAttributes):
|
||||
JobCondition.INTERNET_HOST,
|
||||
JobCondition.HEALTHY,
|
||||
],
|
||||
on_condition=AppsJobError,
|
||||
on_condition=AddonsJobError,
|
||||
)
|
||||
async def restore(self, slug: str, tar_file: SecureTarFile) -> asyncio.Task | None:
|
||||
"""Restore state of an app.
|
||||
async def restore(
|
||||
self, slug: str, tar_file: tarfile.TarFile
|
||||
) -> asyncio.Task | None:
|
||||
"""Restore state of an add-on.
|
||||
|
||||
Returns a Task that completes when app has state 'started' (see app.start)
|
||||
if app is started after restore. Else nothing is returned.
|
||||
Returns a Task that completes when addon has state 'started' (see addon.start)
|
||||
if addon is started after restore. Else nothing is returned.
|
||||
"""
|
||||
self.sys_jobs.current.reference = slug
|
||||
|
||||
if slug not in self.local:
|
||||
_LOGGER.debug("App %s is not locally available for restore", slug)
|
||||
app = App(self.coresys, slug)
|
||||
had_ingress: bool | None = False
|
||||
_LOGGER.debug("Add-on %s is not local available for restore", slug)
|
||||
addon = Addon(self.coresys, slug)
|
||||
had_ingress = False
|
||||
else:
|
||||
_LOGGER.debug("App %s is locally available for restore", slug)
|
||||
app = self.local[slug]
|
||||
had_ingress = app.ingress_panel
|
||||
_LOGGER.debug("Add-on %s is local available for restore", slug)
|
||||
addon = self.local[slug]
|
||||
had_ingress = addon.ingress_panel
|
||||
|
||||
wait_for_start = await app.restore(tar_file)
|
||||
wait_for_start = await addon.restore(tar_file)
|
||||
|
||||
# Check if new
|
||||
if slug not in self.local:
|
||||
_LOGGER.info("Detected new app after restore: %s", slug)
|
||||
self.local[slug] = app
|
||||
_LOGGER.info("Detect new Add-on after restore %s", slug)
|
||||
self.local[slug] = addon
|
||||
|
||||
# Update ingress
|
||||
if had_ingress != app.ingress_panel:
|
||||
if had_ingress != addon.ingress_panel:
|
||||
await self.sys_ingress.reload()
|
||||
await self.sys_ingress.update_hass_panel(app)
|
||||
with suppress(HomeAssistantAPIError):
|
||||
await self.sys_ingress.update_hass_panel(addon)
|
||||
|
||||
return wait_for_start
|
||||
|
||||
@@ -377,60 +339,60 @@ class AppManager(CoreSysAttributes):
|
||||
conditions=[JobCondition.FREE_SPACE, JobCondition.INTERNET_HOST],
|
||||
)
|
||||
async def repair(self) -> None:
|
||||
"""Repair local apps."""
|
||||
needs_repair: list[App] = []
|
||||
"""Repair local add-ons."""
|
||||
needs_repair: list[Addon] = []
|
||||
|
||||
# Evaluate Apps to repair
|
||||
for app in self.installed:
|
||||
if await app.instance.exists():
|
||||
# Evaluate Add-ons to repair
|
||||
for addon in self.installed:
|
||||
if await addon.instance.exists():
|
||||
continue
|
||||
needs_repair.append(app)
|
||||
needs_repair.append(addon)
|
||||
|
||||
_LOGGER.info("Found %d apps to repair", len(needs_repair))
|
||||
_LOGGER.info("Found %d add-ons to repair", len(needs_repair))
|
||||
if not needs_repair:
|
||||
return
|
||||
|
||||
for app in needs_repair:
|
||||
_LOGGER.info("Repairing for app: %s", app.slug)
|
||||
for addon in needs_repair:
|
||||
_LOGGER.info("Repairing for add-on: %s", addon.slug)
|
||||
with suppress(DockerError, KeyError):
|
||||
# Need to pull the image again
|
||||
if not app.need_build:
|
||||
await app.instance.install(app.version, app.image)
|
||||
if not addon.need_build:
|
||||
await addon.instance.install(addon.version, addon.image)
|
||||
continue
|
||||
|
||||
# Need local lookup
|
||||
if app.need_build and not app.is_detached:
|
||||
store = self.store[app.slug]
|
||||
# If this app is available for rebuild
|
||||
if app.version == store.version:
|
||||
await app.instance.install(app.version, app.image)
|
||||
if addon.need_build and not addon.is_detached:
|
||||
store = self.store[addon.slug]
|
||||
# If this add-on is available for rebuild
|
||||
if addon.version == store.version:
|
||||
await addon.instance.install(addon.version, addon.image)
|
||||
continue
|
||||
|
||||
_LOGGER.error("Can't repair %s", app.slug)
|
||||
with suppress(AppsError):
|
||||
await self.uninstall(app.slug)
|
||||
_LOGGER.error("Can't repair %s", addon.slug)
|
||||
with suppress(AddonsError):
|
||||
await self.uninstall(addon.slug)
|
||||
|
||||
async def sync_dns(self) -> None:
|
||||
"""Sync apps DNS names."""
|
||||
"""Sync add-ons DNS names."""
|
||||
# Update hosts
|
||||
add_host_coros: list[Awaitable[None]] = []
|
||||
for app in self.installed:
|
||||
for addon in self.installed:
|
||||
try:
|
||||
if not await app.instance.is_running():
|
||||
if not await addon.instance.is_running():
|
||||
continue
|
||||
except DockerError as err:
|
||||
_LOGGER.warning("App %s is corrupt: %s", app.slug, err)
|
||||
_LOGGER.warning("Add-on %s is corrupt: %s", addon.slug, err)
|
||||
self.sys_resolution.create_issue(
|
||||
IssueType.CORRUPT_DOCKER,
|
||||
ContextType.ADDON,
|
||||
reference=app.slug,
|
||||
reference=addon.slug,
|
||||
suggestions=[SuggestionType.EXECUTE_REPAIR],
|
||||
)
|
||||
await async_capture_exception(err)
|
||||
else:
|
||||
add_host_coros.append(
|
||||
self.sys_plugins.dns.add_host(
|
||||
ipv4=app.ip_address, names=[app.hostname], write=False
|
||||
ipv4=addon.ip_address, names=[addon.hostname], write=False
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
"""Init file for Supervisor apps."""
|
||||
"""Init file for Supervisor add-ons."""
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from collections import defaultdict
|
||||
@@ -11,8 +11,10 @@ from typing import Any
|
||||
|
||||
from awesomeversion import AwesomeVersion, AwesomeVersionException
|
||||
|
||||
from supervisor.utils.dt import utc_from_timestamp
|
||||
|
||||
from ..const import (
|
||||
ARCH_DEPRECATED,
|
||||
ATTR_ADVANCED,
|
||||
ATTR_APPARMOR,
|
||||
ATTR_ARCH,
|
||||
ATTR_AUDIO,
|
||||
@@ -70,7 +72,6 @@ from ..const import (
|
||||
ATTR_TYPE,
|
||||
ATTR_UART,
|
||||
ATTR_UDEV,
|
||||
ATTR_ULIMITS,
|
||||
ATTR_URL,
|
||||
ATTR_USB,
|
||||
ATTR_VERSION,
|
||||
@@ -78,39 +79,31 @@ from ..const import (
|
||||
ATTR_VIDEO,
|
||||
ATTR_WATCHDOG,
|
||||
ATTR_WEBUI,
|
||||
MACHINE_DEPRECATED,
|
||||
SECURITY_DEFAULT,
|
||||
SECURITY_DISABLE,
|
||||
SECURITY_PROFILE,
|
||||
AppBoot,
|
||||
AppBootConfig,
|
||||
AppStage,
|
||||
AppStartup,
|
||||
CpuArch,
|
||||
AddonBoot,
|
||||
AddonBootConfig,
|
||||
AddonStage,
|
||||
AddonStartup,
|
||||
)
|
||||
from ..coresys import CoreSys
|
||||
from ..docker.const import Capabilities
|
||||
from ..exceptions import (
|
||||
AppNotSupportedArchitectureError,
|
||||
AppNotSupportedError,
|
||||
AppNotSupportedHomeAssistantVersionError,
|
||||
AppNotSupportedMachineTypeError,
|
||||
HassioArchNotFound,
|
||||
)
|
||||
from ..exceptions import AddonsNotSupportedError
|
||||
from ..jobs.const import JOB_GROUP_ADDON
|
||||
from ..jobs.job_group import JobGroup
|
||||
from ..utils import version_is_new_enough
|
||||
from ..utils.dt import utc_from_timestamp
|
||||
from .configuration import FolderMapping
|
||||
from .const import (
|
||||
ATTR_BACKUP,
|
||||
ATTR_BREAKING_VERSIONS,
|
||||
ATTR_CODENOTARY,
|
||||
ATTR_PATH,
|
||||
ATTR_READ_ONLY,
|
||||
AppBackupMode,
|
||||
AddonBackupMode,
|
||||
MappingType,
|
||||
)
|
||||
from .options import AppOptions, UiOptions
|
||||
from .options import AddonOptions, UiOptions
|
||||
from .validate import RE_SERVICE
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
@@ -118,8 +111,8 @@ _LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
Data = dict[str, Any]
|
||||
|
||||
|
||||
class AppModel(JobGroup, ABC):
|
||||
"""App Data layout."""
|
||||
class AddonModel(JobGroup, ABC):
|
||||
"""Add-on Data layout."""
|
||||
|
||||
def __init__(self, coresys: CoreSys, slug: str):
|
||||
"""Initialize data holder."""
|
||||
@@ -135,21 +128,21 @@ class AppModel(JobGroup, ABC):
|
||||
@property
|
||||
@abstractmethod
|
||||
def data(self) -> Data:
|
||||
"""Return app config/data."""
|
||||
"""Return add-on config/data."""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def is_installed(self) -> bool:
|
||||
"""Return True if an app is installed."""
|
||||
"""Return True if an add-on is installed."""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def is_detached(self) -> bool:
|
||||
"""Return True if app is detached."""
|
||||
"""Return True if add-on is detached."""
|
||||
|
||||
@property
|
||||
def available(self) -> bool:
|
||||
"""Return True if this app is available on this platform."""
|
||||
"""Return True if this add-on is available on this platform."""
|
||||
return self._available(self.data)
|
||||
|
||||
@property
|
||||
@@ -158,14 +151,14 @@ class AppModel(JobGroup, ABC):
|
||||
return self.data[ATTR_OPTIONS]
|
||||
|
||||
@property
|
||||
def boot_config(self) -> AppBootConfig:
|
||||
def boot_config(self) -> AddonBootConfig:
|
||||
"""Return boot config."""
|
||||
return self.data[ATTR_BOOT]
|
||||
|
||||
@property
|
||||
def boot(self) -> AppBoot:
|
||||
def boot(self) -> AddonBoot:
|
||||
"""Return boot config with prio local settings unless config is forced."""
|
||||
return AppBoot(self.data[ATTR_BOOT])
|
||||
return AddonBoot(self.data[ATTR_BOOT])
|
||||
|
||||
@property
|
||||
def auto_update(self) -> bool | None:
|
||||
@@ -174,27 +167,27 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def name(self) -> str:
|
||||
"""Return name of app."""
|
||||
"""Return name of add-on."""
|
||||
return self.data[ATTR_NAME]
|
||||
|
||||
@property
|
||||
def hostname(self) -> str:
|
||||
"""Return slug/id of app."""
|
||||
"""Return slug/id of add-on."""
|
||||
return self.slug.replace("_", "-")
|
||||
|
||||
@property
|
||||
def dns(self) -> list[str]:
|
||||
"""Return list of DNS name for that app."""
|
||||
"""Return list of DNS name for that add-on."""
|
||||
return []
|
||||
|
||||
@property
|
||||
def timeout(self) -> int:
|
||||
"""Return timeout of app for docker stop."""
|
||||
"""Return timeout of addon for docker stop."""
|
||||
return self.data[ATTR_TIMEOUT]
|
||||
|
||||
@property
|
||||
def uuid(self) -> str | None:
|
||||
"""Return an API token for this app."""
|
||||
"""Return an API token for this add-on."""
|
||||
return None
|
||||
|
||||
@property
|
||||
@@ -214,22 +207,22 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def description(self) -> str:
|
||||
"""Return description of app."""
|
||||
"""Return description of add-on."""
|
||||
return self.data[ATTR_DESCRIPTON]
|
||||
|
||||
@property
|
||||
def repository(self) -> str:
|
||||
"""Return repository of app."""
|
||||
"""Return repository of add-on."""
|
||||
return self.data[ATTR_REPOSITORY]
|
||||
|
||||
@property
|
||||
def translations(self) -> dict:
|
||||
"""Return app translations."""
|
||||
"""Return add-on translations."""
|
||||
return self.data[ATTR_TRANSLATIONS]
|
||||
|
||||
@property
|
||||
def latest_version(self) -> AwesomeVersion:
|
||||
"""Return latest version of app."""
|
||||
"""Return latest version of add-on."""
|
||||
return self.data[ATTR_VERSION]
|
||||
|
||||
@property
|
||||
@@ -239,29 +232,27 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def version(self) -> AwesomeVersion:
|
||||
"""Return version of app."""
|
||||
"""Return version of add-on."""
|
||||
return self.data[ATTR_VERSION]
|
||||
|
||||
@property
|
||||
def protected(self) -> bool:
|
||||
"""Return if app is in protected mode."""
|
||||
"""Return if add-on is in protected mode."""
|
||||
return True
|
||||
|
||||
@property
|
||||
def startup(self) -> AppStartup:
|
||||
"""Return startup type of app."""
|
||||
def startup(self) -> AddonStartup:
|
||||
"""Return startup type of add-on."""
|
||||
return self.data[ATTR_STARTUP]
|
||||
|
||||
@property
|
||||
def advanced(self) -> bool:
|
||||
"""Return False; advanced mode is deprecated and no longer supported."""
|
||||
# Deprecated since Supervisor 2026.03.0; always returns False and can be
|
||||
# removed once that version is the minimum supported.
|
||||
return False
|
||||
"""Return advanced mode of add-on."""
|
||||
return self.data[ATTR_ADVANCED]
|
||||
|
||||
@property
|
||||
def stage(self) -> AppStage:
|
||||
"""Return stage mode of app."""
|
||||
def stage(self) -> AddonStage:
|
||||
"""Return stage mode of add-on."""
|
||||
return self.data[ATTR_STAGE]
|
||||
|
||||
@property
|
||||
@@ -289,7 +280,7 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def ports(self) -> dict[str, int | None] | None:
|
||||
"""Return ports of app."""
|
||||
"""Return ports of add-on."""
|
||||
return self.data.get(ATTR_PORTS)
|
||||
|
||||
@property
|
||||
@@ -303,7 +294,7 @@ class AppModel(JobGroup, ABC):
|
||||
return self.data.get(ATTR_WEBUI)
|
||||
|
||||
@property
|
||||
def watchdog_url(self) -> str | None:
|
||||
def watchdog(self) -> str | None:
|
||||
"""Return URL to for watchdog or None."""
|
||||
return self.data.get(ATTR_WATCHDOG)
|
||||
|
||||
@@ -319,47 +310,47 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def panel_title(self) -> str:
|
||||
"""Return panel title for Ingress frame."""
|
||||
"""Return panel icon for Ingress frame."""
|
||||
return self.data.get(ATTR_PANEL_TITLE, self.name)
|
||||
|
||||
@property
|
||||
def panel_admin(self) -> bool:
|
||||
"""Return if panel is only available for admin users."""
|
||||
def panel_admin(self) -> str:
|
||||
"""Return panel icon for Ingress frame."""
|
||||
return self.data[ATTR_PANEL_ADMIN]
|
||||
|
||||
@property
|
||||
def host_network(self) -> bool:
|
||||
"""Return True if app run on host network."""
|
||||
"""Return True if add-on run on host network."""
|
||||
return self.data[ATTR_HOST_NETWORK]
|
||||
|
||||
@property
|
||||
def host_pid(self) -> bool:
|
||||
"""Return True if app run on host PID namespace."""
|
||||
"""Return True if add-on run on host PID namespace."""
|
||||
return self.data[ATTR_HOST_PID]
|
||||
|
||||
@property
|
||||
def host_ipc(self) -> bool:
|
||||
"""Return True if app run on host IPC namespace."""
|
||||
"""Return True if add-on run on host IPC namespace."""
|
||||
return self.data[ATTR_HOST_IPC]
|
||||
|
||||
@property
|
||||
def host_uts(self) -> bool:
|
||||
"""Return True if app run on host UTS namespace."""
|
||||
"""Return True if add-on run on host UTS namespace."""
|
||||
return self.data[ATTR_HOST_UTS]
|
||||
|
||||
@property
|
||||
def host_dbus(self) -> bool:
|
||||
"""Return True if app run on host D-BUS."""
|
||||
"""Return True if add-on run on host D-BUS."""
|
||||
return self.data[ATTR_HOST_DBUS]
|
||||
|
||||
@property
|
||||
def static_devices(self) -> list[Path]:
|
||||
"""Return static devices of app."""
|
||||
"""Return static devices of add-on."""
|
||||
return [Path(node) for node in self.data.get(ATTR_DEVICES, [])]
|
||||
|
||||
@property
|
||||
def environment(self) -> dict[str, str] | None:
|
||||
"""Return environment of app."""
|
||||
"""Return environment of add-on."""
|
||||
return self.data.get(ATTR_ENVIRONMENT)
|
||||
|
||||
@property
|
||||
@@ -378,22 +369,22 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def legacy(self) -> bool:
|
||||
"""Return if the app don't support Home Assistant labels."""
|
||||
"""Return if the add-on don't support Home Assistant labels."""
|
||||
return self.data[ATTR_LEGACY]
|
||||
|
||||
@property
|
||||
def access_docker_api(self) -> bool:
|
||||
"""Return if the app need read-only Docker API access."""
|
||||
"""Return if the add-on need read-only Docker API access."""
|
||||
return self.data[ATTR_DOCKER_API]
|
||||
|
||||
@property
|
||||
def access_hassio_api(self) -> bool:
|
||||
"""Return True if the app access to Supervisor REASTful API."""
|
||||
"""Return True if the add-on access to Supervisor REASTful API."""
|
||||
return self.data[ATTR_HASSIO_API]
|
||||
|
||||
@property
|
||||
def access_homeassistant_api(self) -> bool:
|
||||
"""Return True if the app access to Home Assistant API proxy."""
|
||||
"""Return True if the add-on access to Home Assistant API proxy."""
|
||||
return self.data[ATTR_HOMEASSISTANT_API]
|
||||
|
||||
@property
|
||||
@@ -417,28 +408,28 @@ class AppModel(JobGroup, ABC):
|
||||
return self.data.get(ATTR_BACKUP_POST)
|
||||
|
||||
@property
|
||||
def backup_mode(self) -> AppBackupMode:
|
||||
def backup_mode(self) -> AddonBackupMode:
|
||||
"""Return if backup is hot/cold."""
|
||||
return self.data[ATTR_BACKUP]
|
||||
|
||||
@property
|
||||
def default_init(self) -> bool:
|
||||
"""Return True if the app have no own init."""
|
||||
"""Return True if the add-on have no own init."""
|
||||
return self.data[ATTR_INIT]
|
||||
|
||||
@property
|
||||
def with_stdin(self) -> bool:
|
||||
"""Return True if the app access use stdin input."""
|
||||
"""Return True if the add-on access use stdin input."""
|
||||
return self.data[ATTR_STDIN]
|
||||
|
||||
@property
|
||||
def with_ingress(self) -> bool:
|
||||
"""Return True if the app access support ingress."""
|
||||
"""Return True if the add-on access support ingress."""
|
||||
return self.data[ATTR_INGRESS]
|
||||
|
||||
@property
|
||||
def ingress_panel(self) -> bool | None:
|
||||
"""Return True if the app access support ingress."""
|
||||
"""Return True if the add-on access support ingress."""
|
||||
return None
|
||||
|
||||
@property
|
||||
@@ -448,12 +439,12 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def with_gpio(self) -> bool:
|
||||
"""Return True if the app access to GPIO interface."""
|
||||
"""Return True if the add-on access to GPIO interface."""
|
||||
return self.data[ATTR_GPIO]
|
||||
|
||||
@property
|
||||
def with_usb(self) -> bool:
|
||||
"""Return True if the app need USB access."""
|
||||
"""Return True if the add-on need USB access."""
|
||||
return self.data[ATTR_USB]
|
||||
|
||||
@property
|
||||
@@ -463,62 +454,57 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def with_udev(self) -> bool:
|
||||
"""Return True if the app have his own udev."""
|
||||
"""Return True if the add-on have his own udev."""
|
||||
return self.data[ATTR_UDEV]
|
||||
|
||||
@property
|
||||
def ulimits(self) -> dict[str, Any]:
|
||||
"""Return ulimits configuration."""
|
||||
return self.data[ATTR_ULIMITS]
|
||||
|
||||
@property
|
||||
def with_kernel_modules(self) -> bool:
|
||||
"""Return True if the app access to kernel modules."""
|
||||
"""Return True if the add-on access to kernel modules."""
|
||||
return self.data[ATTR_KERNEL_MODULES]
|
||||
|
||||
@property
|
||||
def with_realtime(self) -> bool:
|
||||
"""Return True if the app need realtime schedule functions."""
|
||||
"""Return True if the add-on need realtime schedule functions."""
|
||||
return self.data[ATTR_REALTIME]
|
||||
|
||||
@property
|
||||
def with_full_access(self) -> bool:
|
||||
"""Return True if the app want full access to hardware."""
|
||||
"""Return True if the add-on want full access to hardware."""
|
||||
return self.data[ATTR_FULL_ACCESS]
|
||||
|
||||
@property
|
||||
def with_devicetree(self) -> bool:
|
||||
"""Return True if the app read access to devicetree."""
|
||||
"""Return True if the add-on read access to devicetree."""
|
||||
return self.data[ATTR_DEVICETREE]
|
||||
|
||||
@property
|
||||
def with_tmpfs(self) -> bool:
|
||||
"""Return if tmp is in memory of app."""
|
||||
def with_tmpfs(self) -> str | None:
|
||||
"""Return if tmp is in memory of add-on."""
|
||||
return self.data[ATTR_TMPFS]
|
||||
|
||||
@property
|
||||
def access_auth_api(self) -> bool:
|
||||
"""Return True if the app access to login/auth backend."""
|
||||
"""Return True if the add-on access to login/auth backend."""
|
||||
return self.data[ATTR_AUTH_API]
|
||||
|
||||
@property
|
||||
def with_audio(self) -> bool:
|
||||
"""Return True if the app access to audio."""
|
||||
"""Return True if the add-on access to audio."""
|
||||
return self.data[ATTR_AUDIO]
|
||||
|
||||
@property
|
||||
def with_video(self) -> bool:
|
||||
"""Return True if the app access to video."""
|
||||
"""Return True if the add-on access to video."""
|
||||
return self.data[ATTR_VIDEO]
|
||||
|
||||
@property
|
||||
def homeassistant_version(self) -> AwesomeVersion | None:
|
||||
"""Return min Home Assistant version they needed by App."""
|
||||
def homeassistant_version(self) -> str | None:
|
||||
"""Return min Home Assistant version they needed by Add-on."""
|
||||
return self.data.get(ATTR_HOMEASSISTANT)
|
||||
|
||||
@property
|
||||
def url(self) -> str | None:
|
||||
"""Return URL of app."""
|
||||
"""Return URL of add-on."""
|
||||
return self.data.get(ATTR_URL)
|
||||
|
||||
@property
|
||||
@@ -546,44 +532,18 @@ class AppModel(JobGroup, ABC):
|
||||
"""Return list of supported arch."""
|
||||
return self.data[ATTR_ARCH]
|
||||
|
||||
@property
|
||||
def has_deprecated_arch(self) -> bool:
|
||||
"""Return True if app includes deprecated architectures."""
|
||||
return any(arch in ARCH_DEPRECATED for arch in self.supported_arch)
|
||||
|
||||
@property
|
||||
def has_supported_arch(self) -> bool:
|
||||
"""Return True if app supports any architecture on this system."""
|
||||
return self.sys_arch.is_supported(self.supported_arch)
|
||||
|
||||
@property
|
||||
def has_deprecated_machine(self) -> bool:
|
||||
"""Return True if app includes deprecated machine entries."""
|
||||
return any(
|
||||
machine.lstrip("!") in MACHINE_DEPRECATED
|
||||
for machine in self.supported_machine
|
||||
)
|
||||
|
||||
@property
|
||||
def has_supported_machine(self) -> bool:
|
||||
"""Return True if app supports this machine."""
|
||||
if not (machine_types := self.supported_machine):
|
||||
return True
|
||||
|
||||
return (
|
||||
f"!{self.sys_machine}" not in machine_types
|
||||
and self.sys_machine in machine_types
|
||||
)
|
||||
|
||||
@property
|
||||
def supported_machine(self) -> list[str]:
|
||||
"""Return list of supported machine."""
|
||||
return self.data.get(ATTR_MACHINE, [])
|
||||
|
||||
@property
|
||||
def arch(self) -> CpuArch:
|
||||
"""Return architecture to use for the app's image."""
|
||||
return self.sys_arch.match(self.data[ATTR_ARCH])
|
||||
def arch(self) -> str:
|
||||
"""Return architecture to use for the addon's image."""
|
||||
if ATTR_IMAGE in self.data:
|
||||
return self.sys_arch.match(self.data[ATTR_ARCH])
|
||||
|
||||
return self.sys_arch.default
|
||||
|
||||
@property
|
||||
def image(self) -> str | None:
|
||||
@@ -592,12 +552,12 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def need_build(self) -> bool:
|
||||
"""Return True if this app need a local build."""
|
||||
"""Return True if this add-on need a local build."""
|
||||
return ATTR_IMAGE not in self.data
|
||||
|
||||
@property
|
||||
def map_volumes(self) -> dict[MappingType, FolderMapping]:
|
||||
"""Return a dict of {MappingType: FolderMapping} from app."""
|
||||
"""Return a dict of {MappingType: FolderMapping} from add-on."""
|
||||
volumes = {}
|
||||
for volume in self.data[ATTR_MAP]:
|
||||
volumes[MappingType(volume[ATTR_TYPE])] = FolderMapping(
|
||||
@@ -608,27 +568,27 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def path_location(self) -> Path:
|
||||
"""Return path to this app."""
|
||||
"""Return path to this add-on."""
|
||||
return Path(self.data[ATTR_LOCATION])
|
||||
|
||||
@property
|
||||
def path_icon(self) -> Path:
|
||||
"""Return path to app icon."""
|
||||
"""Return path to add-on icon."""
|
||||
return Path(self.path_location, "icon.png")
|
||||
|
||||
@property
|
||||
def path_logo(self) -> Path:
|
||||
"""Return path to app logo."""
|
||||
"""Return path to add-on logo."""
|
||||
return Path(self.path_location, "logo.png")
|
||||
|
||||
@property
|
||||
def path_changelog(self) -> Path:
|
||||
"""Return path to app changelog."""
|
||||
"""Return path to add-on changelog."""
|
||||
return Path(self.path_location, "CHANGELOG.md")
|
||||
|
||||
@property
|
||||
def path_documentation(self) -> Path:
|
||||
"""Return path to app changelog."""
|
||||
"""Return path to add-on changelog."""
|
||||
return Path(self.path_location, "DOCS.md")
|
||||
|
||||
@property
|
||||
@@ -637,17 +597,17 @@ class AppModel(JobGroup, ABC):
|
||||
return Path(self.path_location, "apparmor.txt")
|
||||
|
||||
@property
|
||||
def schema(self) -> AppOptions:
|
||||
"""Return App options validation object."""
|
||||
def schema(self) -> AddonOptions:
|
||||
"""Return Addon options validation object."""
|
||||
raw_schema = self.data[ATTR_SCHEMA]
|
||||
if isinstance(raw_schema, bool):
|
||||
raw_schema = {}
|
||||
|
||||
return AppOptions(self.coresys, raw_schema, self.name, self.slug)
|
||||
return AddonOptions(self.coresys, raw_schema, self.name, self.slug)
|
||||
|
||||
@property
|
||||
def schema_ui(self) -> list[dict[Any, Any]] | None:
|
||||
"""Create a UI schema for app options."""
|
||||
def schema_ui(self) -> list[dict[any, any]] | None:
|
||||
"""Create a UI schema for add-on options."""
|
||||
raw_schema = self.data[ATTR_SCHEMA]
|
||||
|
||||
if isinstance(raw_schema, bool):
|
||||
@@ -656,17 +616,22 @@ class AppModel(JobGroup, ABC):
|
||||
|
||||
@property
|
||||
def with_journald(self) -> bool:
|
||||
"""Return True if the app accesses the system journal."""
|
||||
"""Return True if the add-on accesses the system journal."""
|
||||
return self.data[ATTR_JOURNALD]
|
||||
|
||||
@property
|
||||
def signed(self) -> bool:
|
||||
"""Currently no signing support."""
|
||||
return False
|
||||
"""Return True if the image is signed."""
|
||||
return ATTR_CODENOTARY in self.data
|
||||
|
||||
@property
|
||||
def codenotary(self) -> str | None:
|
||||
"""Return Signer email address for CAS."""
|
||||
return self.data.get(ATTR_CODENOTARY)
|
||||
|
||||
@property
|
||||
def breaking_versions(self) -> list[AwesomeVersion]:
|
||||
"""Return breaking versions of app."""
|
||||
"""Return breaking versions of addon."""
|
||||
return self.data[ATTR_BREAKING_VERSIONS]
|
||||
|
||||
async def long_description(self) -> str | None:
|
||||
@@ -680,7 +645,7 @@ class AppModel(JobGroup, ABC):
|
||||
return None
|
||||
|
||||
# Return data
|
||||
return readme.read_text(encoding="utf-8", errors="replace")
|
||||
return readme.read_text(encoding="utf-8")
|
||||
|
||||
return await self.sys_run_in_executor(read_readme)
|
||||
|
||||
@@ -696,27 +661,24 @@ class AppModel(JobGroup, ABC):
|
||||
return self.sys_run_in_executor(check_paths)
|
||||
|
||||
def validate_availability(self) -> None:
|
||||
"""Validate if app is available for current system."""
|
||||
"""Validate if addon is available for current system."""
|
||||
return self._validate_availability(self.data, logger=_LOGGER.error)
|
||||
|
||||
def __eq__(self, other: Any) -> bool:
|
||||
"""Compare app objects."""
|
||||
if not isinstance(other, AppModel):
|
||||
def __eq__(self, other):
|
||||
"""Compaired add-on objects."""
|
||||
if not isinstance(other, AddonModel):
|
||||
return False
|
||||
return self.slug == other.slug
|
||||
|
||||
def __hash__(self) -> int:
|
||||
"""Hash for app objects."""
|
||||
return hash(self.slug)
|
||||
|
||||
def _validate_availability(
|
||||
self, config, *, logger: Callable[..., None] | None = None
|
||||
) -> None:
|
||||
"""Validate if app is available for current system."""
|
||||
"""Validate if addon is available for current system."""
|
||||
# Architecture
|
||||
if not self.sys_arch.is_supported(config[ATTR_ARCH]):
|
||||
raise AppNotSupportedArchitectureError(
|
||||
logger, slug=self.slug, architectures=config[ATTR_ARCH]
|
||||
raise AddonsNotSupportedError(
|
||||
f"Add-on {self.slug} not supported on this platform, supported architectures: {', '.join(config[ATTR_ARCH])}",
|
||||
logger,
|
||||
)
|
||||
|
||||
# Machine / Hardware
|
||||
@@ -724,8 +686,9 @@ class AppModel(JobGroup, ABC):
|
||||
if machine and (
|
||||
f"!{self.sys_machine}" in machine or self.sys_machine not in machine
|
||||
):
|
||||
raise AppNotSupportedMachineTypeError(
|
||||
logger, slug=self.slug, machine_types=machine
|
||||
raise AddonsNotSupportedError(
|
||||
f"Add-on {self.slug} not supported on this machine, supported machine types: {', '.join(machine)}",
|
||||
logger,
|
||||
)
|
||||
|
||||
# Home Assistant
|
||||
@@ -734,15 +697,16 @@ class AppModel(JobGroup, ABC):
|
||||
if version and not version_is_new_enough(
|
||||
self.sys_homeassistant.version, version
|
||||
):
|
||||
raise AppNotSupportedHomeAssistantVersionError(
|
||||
logger, slug=self.slug, version=str(version)
|
||||
raise AddonsNotSupportedError(
|
||||
f"Add-on {self.slug} not supported on this system, requires Home Assistant version {version} or greater",
|
||||
logger,
|
||||
)
|
||||
|
||||
def _available(self, config) -> bool:
|
||||
"""Return True if this app is available on this platform."""
|
||||
"""Return True if this add-on is available on this platform."""
|
||||
try:
|
||||
self._validate_availability(config)
|
||||
except AppNotSupportedError:
|
||||
except AddonsNotSupportedError:
|
||||
return False
|
||||
|
||||
return True
|
||||
@@ -751,12 +715,8 @@ class AppModel(JobGroup, ABC):
|
||||
"""Generate image name from data."""
|
||||
# Repository with Dockerhub images
|
||||
if ATTR_IMAGE in config:
|
||||
try:
|
||||
arch = self.sys_arch.match(config[ATTR_ARCH])
|
||||
except HassioArchNotFound:
|
||||
arch = self.sys_arch.default
|
||||
arch = self.sys_arch.match(config[ATTR_ARCH])
|
||||
return config[ATTR_IMAGE].format(arch=arch)
|
||||
|
||||
# local build
|
||||
arch = self.sys_arch.match(config[ATTR_ARCH])
|
||||
return f"{config[ATTR_REPOSITORY]}/{arch!s}-addon-{config[ATTR_SLUG]}"
|
||||
return f"{config[ATTR_REPOSITORY]}/{self.sys_arch.default}-addon-{config[ATTR_SLUG]}"
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
"""App Options / UI rendering."""
|
||||
"""Add-on Options / UI rendering."""
|
||||
|
||||
import hashlib
|
||||
import logging
|
||||
@@ -37,8 +37,8 @@ RE_SCHEMA_ELEMENT = re.compile(
r"|device(?:\((?P<filter>subsystem=[a-z]+)\))?"
r"|str(?:\((?P<s_min>\d+)?,(?P<s_max>\d+)?\))?"
r"|password(?:\((?P<p_min>\d+)?,(?P<p_max>\d+)?\))?"
r"|int(?:\((?P<i_min>-?\d+)?,(?P<i_max>-?\d+)?\))?"
r"|float(?:\((?P<f_min>-?\d*\.?\d+)?,(?P<f_max>-?\d*\.?\d+)?\))?"
r"|int(?:\((?P<i_min>\d+)?,(?P<i_max>\d+)?\))?"
r"|float(?:\((?P<f_min>[\d\.]+)?,(?P<f_max>[\d\.]+)?\))?"
r"|match\((?P<match>.*)\)"
r"|list\((?P<list>.+)\)"
r")\??$"
|
||||
@@ -56,8 +56,8 @@ _SCHEMA_LENGTH_PARTS = (
|
||||
)
|
||||
|
||||
|
||||
class AppOptions(CoreSysAttributes):
|
||||
"""Validate Apps Options."""
|
||||
class AddonOptions(CoreSysAttributes):
|
||||
"""Validate Add-ons Options."""
|
||||
|
||||
def __init__(
|
||||
self, coresys: CoreSys, raw_schema: dict[str, Any], name: str, slug: str
|
||||
@@ -72,11 +72,11 @@ class AppOptions(CoreSysAttributes):
|
||||
|
||||
@property
|
||||
def validate(self) -> vol.Schema:
|
||||
"""Create a schema for app options."""
|
||||
"""Create a schema for add-on options."""
|
||||
return vol.Schema(vol.All(dict, self))
|
||||
|
||||
def __call__(self, struct: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Create schema validator for apps options."""
|
||||
def __call__(self, struct):
|
||||
"""Create schema validator for add-ons options."""
|
||||
options = {}
|
||||
|
||||
# read options
|
||||
@@ -93,7 +93,15 @@ class AppOptions(CoreSysAttributes):
|
||||
|
||||
typ = self.raw_schema[key]
|
||||
try:
|
||||
options[key] = self._validate_element(typ, value, key)
|
||||
if isinstance(typ, list):
|
||||
# nested value list
|
||||
options[key] = self._nested_validate_list(typ[0], value, key)
|
||||
elif isinstance(typ, dict):
|
||||
# nested value dict
|
||||
options[key] = self._nested_validate_dict(typ, value, key)
|
||||
else:
|
||||
# normal value
|
||||
options[key] = self._single_validate(typ, value, key)
|
||||
except (IndexError, KeyError):
|
||||
raise vol.Invalid(
|
||||
f"Type error for option '{key}' in {self._name} ({self._slug})"
|
||||
@@ -103,20 +111,7 @@ class AppOptions(CoreSysAttributes):
|
||||
return options
|
||||
|
||||
# pylint: disable=no-value-for-parameter
|
||||
def _validate_element(self, typ: Any, value: Any, key: str) -> Any:
|
||||
"""Validate a value against a type specification."""
|
||||
if isinstance(typ, list):
|
||||
# nested value list
|
||||
return self._nested_validate_list(typ[0], value, key)
|
||||
elif isinstance(typ, dict):
|
||||
# nested value dict
|
||||
return self._nested_validate_dict(typ, value, key)
|
||||
else:
|
||||
# normal value
|
||||
return self._single_validate(typ, value, key)
|
||||
|
||||
# pylint: disable=no-value-for-parameter
|
||||
def _single_validate(self, typ: str, value: Any, key: str) -> Any:
|
||||
def _single_validate(self, typ: str, value: Any, key: str):
|
||||
"""Validate a single element."""
|
||||
# if required argument
|
||||
if value is None:
|
||||
@@ -142,7 +137,7 @@ class AppOptions(CoreSysAttributes):
|
||||
) from None
|
||||
|
||||
# prepare range
|
||||
range_args: dict[str, Any] = {}
|
||||
range_args = {}
|
||||
for group_name in _SCHEMA_LENGTH_PARTS:
|
||||
group_value = match.group(group_name)
|
||||
if group_value:
|
||||
@@ -169,10 +164,6 @@ class AppOptions(CoreSysAttributes):
|
||||
elif typ.startswith(_LIST):
|
||||
return vol.In(match.group("list").split("|"))(str(value))
|
||||
elif typ.startswith(_DEVICE):
|
||||
if not isinstance(value, str):
|
||||
raise vol.Invalid(
|
||||
f"Expected a string for option '{key}' in {self._name} ({self._slug})"
|
||||
)
|
||||
try:
|
||||
device = self.sys_hardware.get_by_path(Path(value))
|
||||
except HardwareNotFound:
|
||||
@@ -191,13 +182,13 @@ class AppOptions(CoreSysAttributes):
|
||||
|
||||
# Device valid
|
||||
self.devices.add(device)
|
||||
return str(value)
|
||||
return str(device.path)
|
||||
|
||||
raise vol.Invalid(
|
||||
f"Fatal error for option '{key}' with type '{typ}' in {self._name} ({self._slug})"
|
||||
) from None
|
||||
|
||||
def _nested_validate_list(self, typ: Any, data_list: Any, key: str) -> list[Any]:
|
||||
def _nested_validate_list(self, typ: Any, data_list: list[Any], key: str):
|
||||
"""Validate nested items."""
|
||||
options = []
|
||||
|
||||
@@ -210,13 +201,17 @@ class AppOptions(CoreSysAttributes):
|
||||
# Process list
|
||||
for element in data_list:
|
||||
# Nested?
|
||||
options.append(self._validate_element(typ, element, key))
|
||||
if isinstance(typ, dict):
|
||||
c_options = self._nested_validate_dict(typ, element, key)
|
||||
options.append(c_options)
|
||||
else:
|
||||
options.append(self._single_validate(typ, element, key))
|
||||
|
||||
return options
|
||||
|
||||
def _nested_validate_dict(
|
||||
self, typ: dict[Any, Any], data_dict: Any, key: str
|
||||
) -> dict[Any, Any]:
|
||||
self, typ: dict[Any, Any], data_dict: dict[Any, Any], key: str
|
||||
):
|
||||
"""Validate nested items."""
|
||||
options = {}
|
||||
|
||||
@@ -236,7 +231,12 @@ class AppOptions(CoreSysAttributes):
|
||||
continue
|
||||
|
||||
# Nested?
|
||||
options[c_key] = self._validate_element(typ[c_key], c_value, c_key)
|
||||
if isinstance(typ[c_key], list):
|
||||
options[c_key] = self._nested_validate_list(
|
||||
typ[c_key][0], c_value, c_key
|
||||
)
|
||||
else:
|
||||
options[c_key] = self._single_validate(typ[c_key], c_value, c_key)
|
||||
|
||||
self._check_missing_options(typ, options, key)
|
||||
return options
|
||||
@@ -262,11 +262,11 @@ class AppOptions(CoreSysAttributes):
|
||||
|
||||
|
||||
class UiOptions(CoreSysAttributes):
|
||||
"""Render UI Apps Options."""
|
||||
"""Render UI Add-ons Options."""
|
||||
|
||||
def __init__(self, coresys: CoreSys) -> None:
|
||||
"""Initialize UI option render."""
|
||||
self.coresys: CoreSys = coresys
|
||||
self.coresys = coresys
|
||||
|
||||
def __call__(self, raw_schema: dict[str, Any]) -> list[dict[str, Any]]:
|
||||
"""Generate UI schema."""
|
||||
@@ -274,28 +274,18 @@ class UiOptions(CoreSysAttributes):
|
||||
|
||||
# read options
|
||||
for key, value in raw_schema.items():
|
||||
self._ui_schema_element(ui_schema, value, key)
|
||||
if isinstance(value, list):
|
||||
# nested value list
|
||||
self._nested_ui_list(ui_schema, value, key)
|
||||
elif isinstance(value, dict):
|
||||
# nested value dict
|
||||
self._nested_ui_dict(ui_schema, value, key)
|
||||
else:
|
||||
# normal value
|
||||
self._single_ui_option(ui_schema, value, key)
|
||||
|
||||
return ui_schema
|
||||
|
||||
def _ui_schema_element(
|
||||
self,
|
||||
ui_schema: list[dict[str, Any]],
|
||||
value: str | list[Any] | dict[str, Any],
|
||||
key: str,
|
||||
multiple: bool = False,
|
||||
) -> None:
|
||||
if isinstance(value, list):
|
||||
# nested value list
|
||||
assert not multiple
|
||||
self._nested_ui_list(ui_schema, value, key)
|
||||
elif isinstance(value, dict):
|
||||
# nested value dict
|
||||
self._nested_ui_dict(ui_schema, value, key, multiple)
|
||||
else:
|
||||
# normal value
|
||||
self._single_ui_option(ui_schema, value, key, multiple)
|
||||
|
||||
def _single_ui_option(
|
||||
self,
|
||||
ui_schema: list[dict[str, Any]],
|
||||
@@ -387,7 +377,10 @@ class UiOptions(CoreSysAttributes):
|
||||
_LOGGER.error("Invalid schema %s", key)
|
||||
return
|
||||
|
||||
self._ui_schema_element(ui_schema, element, key, multiple=True)
|
||||
if isinstance(element, dict):
|
||||
self._nested_ui_dict(ui_schema, element, key, multiple=True)
|
||||
else:
|
||||
self._single_ui_option(ui_schema, element, key, multiple=True)
|
||||
|
||||
def _nested_ui_dict(
|
||||
self,
|
||||
@@ -397,16 +390,20 @@ class UiOptions(CoreSysAttributes):
|
||||
multiple: bool = False,
|
||||
) -> None:
|
||||
"""UI nested dict items."""
|
||||
ui_node: dict[str, Any] = {
|
||||
ui_node = {
|
||||
"name": key,
|
||||
"type": "schema",
|
||||
"optional": True,
|
||||
"multiple": multiple,
|
||||
}
|
||||
|
||||
nested_schema: list[dict[str, Any]] = []
|
||||
nested_schema = []
|
||||
for c_key, c_value in option_dict.items():
|
||||
self._ui_schema_element(nested_schema, c_value, c_key)
|
||||
# Nested?
|
||||
if isinstance(c_value, list):
|
||||
self._nested_ui_list(nested_schema, c_value, c_key)
|
||||
else:
|
||||
self._single_ui_option(nested_schema, c_value, c_key)
|
||||
|
||||
ui_node["schema"] = nested_schema
|
||||
ui_schema.append(ui_node)
|
||||
@@ -416,7 +413,7 @@ def _create_device_filter(str_filter: str) -> dict[str, Any]:
|
||||
"""Generate device Filter."""
|
||||
raw_filter = dict(value.split("=") for value in str_filter.split(";"))
|
||||
|
||||
clean_filter: dict[str, Any] = {}
|
||||
clean_filter = {}
|
||||
for key, value in raw_filter.items():
|
||||
if key == "subsystem":
|
||||
clean_filter[key] = UdevSubsystem(value)
|
||||
|
||||
@@ -1,22 +1,22 @@
"""Util apps functions."""
"""Util add-ons functions."""

from __future__ import annotations

import asyncio
import logging
from pathlib import Path
import subprocess
from typing import TYPE_CHECKING

from ..const import ROLE_ADMIN, ROLE_MANAGER, SECURITY_DISABLE, SECURITY_PROFILE
from ..docker.const import Capabilities

if TYPE_CHECKING:
from .model import AppModel
from .model import AddonModel

_LOGGER: logging.Logger = logging.getLogger(__name__)


def rating_security(app: AppModel) -> int:
def rating_security(addon: AddonModel) -> int:
"""Return 1-8 for security rating.

1 = not secure
@@ -25,25 +25,25 @@ def rating_security(app: AppModel) -> int:
rating = 5
|
||||
|
||||
# AppArmor
|
||||
if app.apparmor == SECURITY_DISABLE:
|
||||
if addon.apparmor == SECURITY_DISABLE:
|
||||
rating += -1
|
||||
elif app.apparmor == SECURITY_PROFILE:
|
||||
elif addon.apparmor == SECURITY_PROFILE:
|
||||
rating += 1
|
||||
|
||||
# Home Assistant Login & Ingress
|
||||
if app.with_ingress:
|
||||
if addon.with_ingress:
|
||||
rating += 2
|
||||
elif app.access_auth_api:
|
||||
elif addon.access_auth_api:
|
||||
rating += 1
|
||||
|
||||
# Signed
|
||||
if app.signed:
|
||||
if addon.signed:
|
||||
rating += 1
|
||||
|
||||
# Privileged options
|
||||
if (
|
||||
any(
|
||||
privilege in app.privileged
|
||||
privilege in addon.privileged
|
||||
for privilege in (
|
||||
Capabilities.BPF,
|
||||
Capabilities.CHECKPOINT_RESTORE,
|
||||
@@ -57,49 +57,47 @@ def rating_security(app: AppModel) -> int:
|
||||
Capabilities.SYS_RAWIO,
|
||||
)
|
||||
)
|
||||
or app.with_kernel_modules
|
||||
or addon.with_kernel_modules
|
||||
):
|
||||
rating += -1
|
||||
|
||||
# API Supervisor role
|
||||
if app.hassio_role == ROLE_MANAGER:
|
||||
if addon.hassio_role == ROLE_MANAGER:
|
||||
rating += -1
|
||||
elif app.hassio_role == ROLE_ADMIN:
|
||||
elif addon.hassio_role == ROLE_ADMIN:
|
||||
rating += -2
|
||||
|
||||
# Not secure Networking
|
||||
if app.host_network:
|
||||
if addon.host_network:
|
||||
rating += -1
|
||||
|
||||
# Insecure PID namespace
|
||||
if app.host_pid:
|
||||
if addon.host_pid:
|
||||
rating += -2
|
||||
|
||||
# UTS host namespace allows to set hostname only with SYS_ADMIN
|
||||
if app.host_uts and Capabilities.SYS_ADMIN in app.privileged:
|
||||
if addon.host_uts and Capabilities.SYS_ADMIN in addon.privileged:
|
||||
rating += -1
|
||||
|
||||
# Docker Access & full Access
|
||||
if app.access_docker_api or app.with_full_access:
|
||||
if addon.access_docker_api or addon.with_full_access:
|
||||
rating = 1
|
||||
|
||||
return max(min(8, rating), 1)
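The closing clamp keeps the score inside the documented 1-8 band no matter how many adjustments fire. A trivial standalone check of that behaviour:

    # The rating starts at 5, individual checks add or subtract points,
    # and the result is clamped to the documented 1..8 range.
    def clamp_rating(rating: int) -> int:
        return max(min(8, rating), 1)

    assert clamp_rating(5 + 2 + 1) == 8      # ingress + signed add-on tops out at 8
    assert clamp_rating(5 - 2 - 2 - 1) == 1  # host_pid, admin role, host network hit the floor
    assert clamp_rating(1) == 1              # docker API / full access forces the minimum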
|
||||
|
||||
|
||||
def remove_data(folder: Path) -> None:
"""Remove folder and reset privileged.

Must be run in executor.
"""
async def remove_data(folder: Path) -> None:
"""Remove folder and reset privileged."""
try:
subprocess.run(
["rm", "-rf", str(folder)], stdout=subprocess.DEVNULL, text=True, check=True
proc = await asyncio.create_subprocess_exec(
"rm", "-rf", str(folder), stdout=asyncio.subprocess.DEVNULL
)

_, error_msg = await proc.communicate()
except OSError as err:
error_msg = str(err)
except subprocess.CalledProcessError as procerr:
error_msg = procerr.stderr.strip()
else:
return
if proc.returncode == 0:
return

_LOGGER.error("Can't remove app data: %s", error_msg)
_LOGGER.error("Can't remove Add-on Data: %s", error_msg)
@@ -1,4 +1,4 @@
|
||||
"""Validate apps options schema."""
|
||||
"""Validate add-ons options schema."""
|
||||
|
||||
import logging
|
||||
import re
|
||||
@@ -9,8 +9,7 @@ import uuid
|
||||
import voluptuous as vol
|
||||
|
||||
from ..const import (
|
||||
ARCH_ALL_COMPAT,
|
||||
ARCH_DEPRECATED,
|
||||
ARCH_ALL,
|
||||
ATTR_ACCESS_TOKEN,
|
||||
ATTR_ADVANCED,
|
||||
ATTR_APPARMOR,
|
||||
@@ -33,7 +32,6 @@ from ..const import (
|
||||
ATTR_DISCOVERY,
|
||||
ATTR_DOCKER_API,
|
||||
ATTR_ENVIRONMENT,
|
||||
ATTR_FIELDS,
|
||||
ATTR_FULL_ACCESS,
|
||||
ATTR_GPIO,
|
||||
ATTR_HASSIO_API,
|
||||
@@ -89,7 +87,6 @@ from ..const import (
|
||||
ATTR_TYPE,
|
||||
ATTR_UART,
|
||||
ATTR_UDEV,
|
||||
ATTR_ULIMITS,
|
||||
ATTR_URL,
|
||||
ATTR_USB,
|
||||
ATTR_USER,
|
||||
@@ -98,14 +95,13 @@ from ..const import (
|
||||
ATTR_VIDEO,
|
||||
ATTR_WATCHDOG,
|
||||
ATTR_WEBUI,
|
||||
MACHINE_DEPRECATED,
|
||||
ROLE_ALL,
|
||||
ROLE_DEFAULT,
|
||||
AppBoot,
|
||||
AppBootConfig,
|
||||
AppStage,
|
||||
AppStartup,
|
||||
AppState,
|
||||
AddonBoot,
|
||||
AddonBootConfig,
|
||||
AddonStage,
|
||||
AddonStartup,
|
||||
AddonState,
|
||||
)
|
||||
from ..docker.const import Capabilities
|
||||
from ..validate import (
|
||||
@@ -124,7 +120,7 @@ from .const import (
|
||||
ATTR_PATH,
|
||||
ATTR_READ_ONLY,
|
||||
RE_SLUG,
|
||||
AppBackupMode,
|
||||
AddonBackupMode,
|
||||
MappingType,
|
||||
)
|
||||
from .options import RE_SCHEMA_ELEMENT
|
||||
@@ -141,25 +137,11 @@ RE_DOCKER_IMAGE_BUILD = re.compile(
|
||||
r"^([a-zA-Z\-\.:\d{}]+/)*?([\-\w{}]+)/([\-\w{}]+)(:[\.\-\w{}]+)?$"
|
||||
)
|
||||
|
||||
SCHEMA_ELEMENT = vol.Schema(
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
[
# A list may not directly contain another list
vol.Any(
vol.Match(RE_SCHEMA_ELEMENT),
{str: vol.Self},
)
],
{str: vol.Self},
)
)
SCHEMA_ELEMENT = vol.Match(RE_SCHEMA_ELEMENT)
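The nested variant relies on voluptuous self-reference: vol.Self lets the element schema appear inside lists and dicts, while the list alternative deliberately omits a bare list so a list cannot directly contain another list. A simplified standalone sketch of that idea (the "element" pattern is only a stand-in for RE_SCHEMA_ELEMENT):

    import voluptuous as vol

    # "element" plays the role of RE_SCHEMA_ELEMENT: a plain type token.
    element = vol.Match(r"^(str|int|bool|port)$")

    schema_element = vol.Schema(
        vol.Any(
            element,                               # plain value, e.g. "str"
            [vol.Any(element, {str: vol.Self})],   # list of values or of dicts
            {str: vol.Self},                       # dict whose values recurse
        )
    )

    schema_element("str")                        # ok
    schema_element({"mqtt": {"port": "port"}})   # ok, nested dict
    schema_element(["str"])                      # ok, list of plain values
    # schema_element([["str"]]) would raise: a list may not contain a list
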
RE_MACHINE = re.compile(
|
||||
r"^!?(?:"
|
||||
r"|intel-nuc"
|
||||
r"|khadas-vim3"
|
||||
r"|generic-aarch64"
|
||||
r"|generic-x86-64"
|
||||
r"|odroid-c2"
|
||||
r"|odroid-c4"
|
||||
@@ -186,20 +168,11 @@ RE_MACHINE = re.compile(
|
||||
RE_SLUG_FIELD = re.compile(r"^" + RE_SLUG + r"$")
|
||||
|
||||
|
||||
def _warn_app_config(config: dict[str, Any]):
|
||||
def _warn_addon_config(config: dict[str, Any]):
|
||||
"""Warn about miss configs."""
|
||||
name = config.get(ATTR_NAME)
|
||||
if not name:
|
||||
raise vol.Invalid("Invalid app config!")
|
||||
|
||||
if ATTR_ADVANCED in config:
|
||||
# Deprecated since Supervisor 2026.03.0; this field is ignored and the
|
||||
# warning can be removed once that version is the minimum supported.
|
||||
_LOGGER.warning(
|
||||
"App '%s' uses deprecated 'advanced' field in config. "
|
||||
"This field is ignored by the Supervisor. Please report this to the maintainer.",
|
||||
name,
|
||||
)
|
||||
raise vol.Invalid("Invalid Add-on config!")
|
||||
|
||||
if config.get(ATTR_FULL_ACCESS, False) and (
|
||||
config.get(ATTR_DEVICES)
|
||||
@@ -208,76 +181,48 @@ def _warn_app_config(config: dict[str, Any]):
|
||||
or config.get(ATTR_GPIO)
|
||||
):
|
||||
_LOGGER.warning(
|
||||
"App has full device access, and selective device access in the configuration. Please report this to the maintainer of %s",
|
||||
"Add-on have full device access, and selective device access in the configuration. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
|
||||
if config.get(ATTR_BACKUP, AppBackupMode.HOT) == AppBackupMode.COLD and (
|
||||
if config.get(ATTR_BACKUP, AddonBackupMode.HOT) == AddonBackupMode.COLD and (
|
||||
config.get(ATTR_BACKUP_POST) or config.get(ATTR_BACKUP_PRE)
|
||||
):
|
||||
_LOGGER.warning(
|
||||
"An app that only supports COLD backups is trying to use pre/post commands. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
|
||||
if deprecated_arches := [
|
||||
arch for arch in config.get(ATTR_ARCH, []) if arch in ARCH_DEPRECATED
|
||||
]:
|
||||
_LOGGER.warning(
|
||||
"App config 'arch' uses deprecated values %s. Please report this to the maintainer of %s",
|
||||
deprecated_arches,
|
||||
name,
|
||||
)
|
||||
|
||||
if deprecated_machines := [
|
||||
machine
|
||||
for machine in config.get(ATTR_MACHINE, [])
|
||||
if machine.lstrip("!") in MACHINE_DEPRECATED
|
||||
]:
|
||||
_LOGGER.warning(
|
||||
"App config 'machine' uses deprecated values %s. Please report this to the maintainer of %s",
|
||||
deprecated_machines,
|
||||
name,
|
||||
)
|
||||
|
||||
if ATTR_CODENOTARY in config:
|
||||
_LOGGER.warning(
|
||||
"App '%s' uses deprecated 'codenotary' field in config. This field is no longer used and will be ignored. Please report this to the maintainer.",
|
||||
"Add-on which only support COLD backups trying to use post/pre commands. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
|
||||
return config
|
||||
|
||||
|
||||
def _migrate_app_config(protocol=False):
|
||||
"""Migrate app config."""
|
||||
def _migrate_addon_config(protocol=False):
|
||||
"""Migrate addon config."""
|
||||
|
||||
def _migrate(config: dict[str, Any]):
|
||||
if not isinstance(config, dict):
|
||||
raise vol.Invalid("App config must be a dictionary!")
|
||||
name = config.get(ATTR_NAME)
|
||||
if not name:
|
||||
raise vol.Invalid("Invalid app config!")
|
||||
raise vol.Invalid("Invalid Add-on config!")
|
||||
|
||||
# Startup 2018-03-30
|
||||
if config.get(ATTR_STARTUP) in ("before", "after"):
|
||||
value = config[ATTR_STARTUP]
|
||||
if protocol:
|
||||
_LOGGER.warning(
|
||||
"App config 'startup' with '%s' is deprecated. Please report this to the maintainer of %s",
|
||||
"Add-on config 'startup' with '%s' is deprecated. Please report this to the maintainer of %s",
|
||||
value,
|
||||
name,
|
||||
)
|
||||
if value == "before":
|
||||
config[ATTR_STARTUP] = AppStartup.SERVICES
|
||||
config[ATTR_STARTUP] = AddonStartup.SERVICES
|
||||
elif value == "after":
|
||||
config[ATTR_STARTUP] = AppStartup.APPLICATION
|
||||
config[ATTR_STARTUP] = AddonStartup.APPLICATION
|
||||
|
||||
# UART 2021-01-20
|
||||
if "auto_uart" in config:
|
||||
if protocol:
|
||||
_LOGGER.warning(
|
||||
"App config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of %s",
|
||||
"Add-on config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
config[ATTR_UART] = config.pop("auto_uart")
|
||||
@@ -286,7 +231,7 @@ def _migrate_app_config(protocol=False):
|
||||
if ATTR_DEVICES in config and any(":" in line for line in config[ATTR_DEVICES]):
|
||||
if protocol:
|
||||
_LOGGER.warning(
|
||||
"App config 'devices' uses a deprecated format instead of a list of paths only. Please report this to the maintainer of %s",
|
||||
"Add-on config 'devices' use a deprecated format, the new format uses a list of paths only. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
config[ATTR_DEVICES] = [line.split(":")[0] for line in config[ATTR_DEVICES]]
|
||||
@@ -295,7 +240,7 @@ def _migrate_app_config(protocol=False):
|
||||
if ATTR_TMPFS in config and not isinstance(config[ATTR_TMPFS], bool):
|
||||
if protocol:
|
||||
_LOGGER.warning(
|
||||
"App config 'tmpfs' uses a deprecated format instead of just a boolean. Please report this to the maintainer of %s",
|
||||
"Add-on config 'tmpfs' use a deprecated format, new it's only a boolean. Please report this to the maintainer of %s",
|
||||
name,
|
||||
)
|
||||
config[ATTR_TMPFS] = True
|
||||
@@ -311,7 +256,7 @@ def _migrate_app_config(protocol=False):
|
||||
new_entry = entry.replace("snapshot", "backup")
|
||||
config[new_entry] = config.pop(entry)
|
||||
_LOGGER.warning(
|
||||
"App config '%s' is deprecated, '%s' should be used instead. Please report this to the maintainer of %s",
|
||||
"Add-on config '%s' is deprecated, '%s' should be used instead. Please report this to the maintainer of %s",
|
||||
entry,
|
||||
new_entry,
|
||||
name,
|
||||
@@ -321,23 +266,10 @@ def _migrate_app_config(protocol=False):
|
||||
volumes = []
|
||||
for entry in config.get(ATTR_MAP, []):
|
||||
if isinstance(entry, dict):
|
||||
# Validate that dict entries have required 'type' field
|
||||
if ATTR_TYPE not in entry:
|
||||
_LOGGER.warning(
|
||||
"App config has invalid map entry missing 'type' field: %s. Skipping invalid entry for %s",
|
||||
entry,
|
||||
name,
|
||||
)
|
||||
continue
|
||||
volumes.append(entry)
|
||||
if isinstance(entry, str):
|
||||
result = RE_VOLUME.match(entry)
|
||||
if not result:
|
||||
_LOGGER.warning(
|
||||
"App config has invalid map entry: %s. Skipping invalid entry for %s",
|
||||
entry,
|
||||
name,
|
||||
)
|
||||
continue
|
||||
volumes.append(
|
||||
{
|
||||
@@ -346,10 +278,10 @@ def _migrate_app_config(protocol=False):
|
||||
}
|
||||
)
|
||||
|
||||
# Always update config to clear potentially malformed ones
|
||||
config[ATTR_MAP] = volumes
|
||||
if volumes:
|
||||
config[ATTR_MAP] = volumes
|
||||
|
||||
# 2023-10 "config" became "homeassistant" so /config can be used for app's public config
|
||||
# 2023-10 "config" became "homeassistant" so /config can be used for addon's public config
|
||||
if any(volume[ATTR_TYPE] == MappingType.CONFIG for volume in volumes):
|
||||
if any(
|
||||
volume
|
||||
@@ -358,7 +290,7 @@ def _migrate_app_config(protocol=False):
|
||||
for volume in volumes
|
||||
):
|
||||
_LOGGER.warning(
|
||||
"App config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
|
||||
"Add-on config using incompatible map options, '%s' and '%s' are ignored if '%s' is included. Please report this to the maintainer of %s",
|
||||
MappingType.ADDON_CONFIG,
|
||||
MappingType.HOMEASSISTANT_CONFIG,
|
||||
MappingType.CONFIG,
|
||||
@@ -366,7 +298,7 @@ def _migrate_app_config(protocol=False):
|
||||
)
|
||||
else:
|
||||
_LOGGER.debug(
|
||||
"App config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
|
||||
"Add-on config using deprecated map option '%s' instead of '%s'. Please report this to the maintainer of %s",
|
||||
MappingType.CONFIG,
|
||||
MappingType.HOMEASSISTANT_CONFIG,
|
||||
name,
|
||||
@@ -384,16 +316,18 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
|
||||
vol.Required(ATTR_VERSION): version_tag,
|
||||
vol.Required(ATTR_SLUG): vol.Match(RE_SLUG_FIELD),
|
||||
vol.Required(ATTR_DESCRIPTON): str,
|
||||
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL_COMPAT)],
|
||||
vol.Required(ATTR_ARCH): [vol.In(ARCH_ALL)],
|
||||
vol.Optional(ATTR_MACHINE): vol.All([vol.Match(RE_MACHINE)], vol.Unique()),
|
||||
vol.Optional(ATTR_URL): vol.Url(),
|
||||
vol.Optional(ATTR_STARTUP, default=AppStartup.APPLICATION): vol.Coerce(
|
||||
AppStartup
|
||||
vol.Optional(ATTR_STARTUP, default=AddonStartup.APPLICATION): vol.Coerce(
|
||||
AddonStartup
|
||||
),
|
||||
vol.Optional(ATTR_BOOT, default=AddonBootConfig.AUTO): vol.Coerce(
|
||||
AddonBootConfig
|
||||
),
|
||||
vol.Optional(ATTR_BOOT, default=AppBootConfig.AUTO): vol.Coerce(AppBootConfig),
|
||||
vol.Optional(ATTR_INIT, default=True): vol.Boolean(),
|
||||
vol.Optional(ATTR_ADVANCED, default=False): vol.Boolean(),
|
||||
vol.Optional(ATTR_STAGE, default=AppStage.STABLE): vol.Coerce(AppStage),
|
||||
vol.Optional(ATTR_STAGE, default=AddonStage.STABLE): vol.Coerce(AddonStage),
|
||||
vol.Optional(ATTR_PORTS): docker_ports,
|
||||
vol.Optional(ATTR_PORTS_DESCRIPTION): docker_ports_description,
|
||||
vol.Optional(ATTR_WATCHDOG): vol.Match(
|
||||
@@ -453,27 +387,29 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
|
||||
vol.Optional(ATTR_BACKUP_EXCLUDE): [str],
|
||||
vol.Optional(ATTR_BACKUP_PRE): str,
|
||||
vol.Optional(ATTR_BACKUP_POST): str,
|
||||
vol.Optional(ATTR_BACKUP, default=AppBackupMode.HOT): vol.Coerce(AppBackupMode),
|
||||
vol.Optional(ATTR_BACKUP, default=AddonBackupMode.HOT): vol.Coerce(
|
||||
AddonBackupMode
|
||||
),
|
||||
vol.Optional(ATTR_CODENOTARY): vol.Email(),
|
||||
vol.Optional(ATTR_OPTIONS, default={}): dict,
|
||||
vol.Optional(ATTR_SCHEMA, default={}): vol.Any(
|
||||
vol.Schema({str: SCHEMA_ELEMENT}),
|
||||
vol.Schema(
|
||||
{
|
||||
str: vol.Any(
|
||||
SCHEMA_ELEMENT,
|
||||
[
|
||||
vol.Any(
|
||||
SCHEMA_ELEMENT,
|
||||
{str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])},
|
||||
)
|
||||
],
|
||||
vol.Schema({str: vol.Any(SCHEMA_ELEMENT, [SCHEMA_ELEMENT])}),
|
||||
)
|
||||
}
|
||||
),
|
||||
False,
|
||||
),
|
||||
vol.Optional(ATTR_IMAGE): docker_image,
|
||||
vol.Optional(ATTR_ULIMITS, default=dict): vol.Any(
|
||||
{str: vol.Coerce(int)}, # Simple format: {name: limit}
|
||||
{
|
||||
str: vol.Any(
|
||||
vol.Coerce(int), # Simple format for individual entries
|
||||
vol.Schema(
|
||||
{ # Detailed format for individual entries
|
||||
vol.Required("soft"): vol.Coerce(int),
|
||||
vol.Required("hard"): vol.Coerce(int),
|
||||
}
|
||||
),
|
||||
)
|
||||
},
|
||||
),
|
||||
vol.Optional(ATTR_TIMEOUT, default=10): vol.All(
|
||||
vol.Coerce(int), vol.Range(min=10, max=300)
|
||||
),
|
||||
@@ -484,7 +420,7 @@ _SCHEMA_ADDON_CONFIG = vol.Schema(
|
||||
)
|
||||
|
||||
SCHEMA_ADDON_CONFIG = vol.All(
|
||||
_migrate_app_config(True), _warn_app_config, _SCHEMA_ADDON_CONFIG
|
||||
_migrate_addon_config(True), _warn_addon_config, _SCHEMA_ADDON_CONFIG
|
||||
)
|
||||
|
||||
|
||||
@@ -493,7 +429,7 @@ SCHEMA_BUILD_CONFIG = vol.Schema(
|
||||
{
|
||||
vol.Optional(ATTR_BUILD_FROM, default=dict): vol.Any(
|
||||
vol.Match(RE_DOCKER_IMAGE_BUILD),
|
||||
vol.Schema({vol.In(ARCH_ALL_COMPAT): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
|
||||
vol.Schema({vol.In(ARCH_ALL): vol.Match(RE_DOCKER_IMAGE_BUILD)}),
|
||||
),
|
||||
vol.Optional(ATTR_SQUASH, default=False): vol.Boolean(),
|
||||
vol.Optional(ATTR_ARGS, default=dict): vol.Schema({str: str}),
|
||||
@@ -506,7 +442,6 @@ SCHEMA_TRANSLATION_CONFIGURATION = vol.Schema(
|
||||
{
|
||||
vol.Required(ATTR_NAME): str,
|
||||
vol.Optional(ATTR_DESCRIPTON): vol.Maybe(str),
|
||||
vol.Optional(ATTR_FIELDS): {str: vol.Self},
|
||||
},
|
||||
extra=vol.REMOVE_EXTRA,
|
||||
)
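For reference, extra=vol.REMOVE_EXTRA drops unknown keys instead of rejecting the document, so stray fields in a translation file do not fail validation. A small standalone illustration:

    import voluptuous as vol

    schema = vol.Schema(
        {
            vol.Required("name"): str,
            vol.Optional("description"): vol.Maybe(str),
        },
        extra=vol.REMOVE_EXTRA,
    )

    result = schema({"name": "Port", "description": None, "typo_key": 42})
    print(result)  # {'name': 'Port', 'description': None} -- unknown key removed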
|
||||
@@ -531,7 +466,7 @@ SCHEMA_ADDON_USER = vol.Schema(
|
||||
vol.Optional(ATTR_INGRESS_TOKEN, default=secrets.token_urlsafe): str,
|
||||
vol.Optional(ATTR_OPTIONS, default=dict): dict,
|
||||
vol.Optional(ATTR_AUTO_UPDATE, default=False): vol.Boolean(),
|
||||
vol.Optional(ATTR_BOOT): vol.Coerce(AppBoot),
|
||||
vol.Optional(ATTR_BOOT): vol.Coerce(AddonBoot),
|
||||
vol.Optional(ATTR_NETWORK): docker_ports,
|
||||
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
|
||||
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
|
||||
@@ -545,7 +480,7 @@ SCHEMA_ADDON_USER = vol.Schema(
|
||||
)
|
||||
|
||||
SCHEMA_ADDON_SYSTEM = vol.All(
|
||||
_migrate_app_config(),
|
||||
_migrate_addon_config(),
|
||||
_SCHEMA_ADDON_CONFIG.extend(
|
||||
{
|
||||
vol.Required(ATTR_LOCATION): str,
|
||||
@@ -571,7 +506,7 @@ SCHEMA_ADDON_BACKUP = vol.Schema(
|
||||
{
|
||||
vol.Required(ATTR_USER): SCHEMA_ADDON_USER,
|
||||
vol.Required(ATTR_SYSTEM): SCHEMA_ADDON_SYSTEM,
|
||||
vol.Required(ATTR_STATE): vol.Coerce(AppState),
|
||||
vol.Required(ATTR_STATE): vol.Coerce(AddonState),
|
||||
vol.Required(ATTR_VERSION): version_tag,
|
||||
},
|
||||
extra=vol.REMOVE_EXTRA,
|
||||
|
||||
@@ -1,19 +1,17 @@
|
||||
"""Init file for Supervisor RESTful API."""
|
||||
|
||||
from dataclasses import dataclass
|
||||
from functools import partial
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import hdrs, web
|
||||
from aiohttp import web
|
||||
|
||||
from ..addons.addon import App
|
||||
from ..const import SUPERVISOR_DOCKER_NAME, AppState, FeatureFlag
|
||||
from ..const import AddonState
|
||||
from ..coresys import CoreSys, CoreSysAttributes
|
||||
from ..exceptions import APIAppNotInstalled, HostNotSupportedError
|
||||
from ..exceptions import APIAddonNotInstalled, HostNotSupportedError
|
||||
from ..utils.sentry import async_capture_exception
|
||||
from .addons import APIApps
|
||||
from .addons import APIAddons
|
||||
from .audio import APIAudio
|
||||
from .auth import APIAuth
|
||||
from .backups import APIBackups
|
||||
@@ -49,14 +47,6 @@ MAX_CLIENT_SIZE: int = 1024**2 * 16
|
||||
MAX_LINE_SIZE: int = 24570


@dataclass(slots=True, frozen=True)
class StaticResourceConfig:
"""Configuration for a static resource."""

prefix: str
path: Path


class RestAPI(CoreSysAttributes):
"""Handle RESTful API for Supervisor."""
@@ -77,28 +67,20 @@ class RestAPI(CoreSysAttributes):
|
||||
"max_field_size": MAX_LINE_SIZE,
|
||||
},
|
||||
)
|
||||
# v2 sub-app: no middleware of its own — parent webapp's middleware
|
||||
# stack runs first for all requests including sub-app routes.
|
||||
self.v2_app: web.Application = web.Application()
|
||||
|
||||
# service stuff
|
||||
self._runner: web.AppRunner = web.AppRunner(self.webapp, shutdown_timeout=5)
|
||||
self._site: web.TCPSite | None = None
|
||||
|
||||
# share single host API handler for reuse in logging endpoints
|
||||
self._api_host: APIHost = APIHost()
|
||||
self._api_host.coresys = coresys
|
||||
|
||||
# handler instances shared between v1 and v2 registrations
|
||||
self._api_apps: APIApps | None = None
|
||||
self._api_backups: APIBackups | None = None
|
||||
self._api_store: APIStore | None = None
|
||||
self._api_host: APIHost | None = None
|
||||
|
||||
async def load(self) -> None:
|
||||
"""Register REST API Calls."""
|
||||
static_resource_configs: list[StaticResourceConfig] = []
|
||||
self._api_host = APIHost()
|
||||
self._api_host.coresys = self.coresys
|
||||
|
||||
self._register_apps()
|
||||
self._register_addons()
|
||||
self._register_audio()
|
||||
self._register_auth()
|
||||
self._register_backups()
|
||||
@@ -116,7 +98,7 @@ class RestAPI(CoreSysAttributes):
|
||||
self._register_network()
|
||||
self._register_observer()
|
||||
self._register_os()
|
||||
static_resource_configs.extend(self._register_panel())
|
||||
self._register_panel()
|
||||
self._register_proxy()
|
||||
self._register_resolution()
|
||||
self._register_root()
|
||||
@@ -125,44 +107,16 @@ class RestAPI(CoreSysAttributes):
|
||||
self._register_store()
|
||||
self._register_supervisor()
|
||||
|
||||
# Register v2 routes before mounting the sub-app
|
||||
# (add_subapp freezes the sub-app's router)
|
||||
if self.sys_config.feature_flags.get(FeatureFlag.SUPERVISOR_V2_API, False):
|
||||
self._register_v2_apps()
|
||||
self._register_v2_backups()
|
||||
self._register_v2_store()
|
||||
self.webapp.add_subapp("/v2", self.v2_app)
|
||||
|
||||
if static_resource_configs:
|
||||
|
||||
def process_configs() -> list[web.StaticResource]:
|
||||
return [
|
||||
web.StaticResource(config.prefix, config.path)
|
||||
for config in static_resource_configs
|
||||
]
|
||||
|
||||
for resource in await self.sys_run_in_executor(process_configs):
|
||||
self.webapp.router.register_resource(resource)
|
||||
|
||||
await self.start()
|
||||
|
||||
def _register_advanced_logs(
self,
path: str,
syslog_identifier: str,
default_verbose: bool = False,
):
def _register_advanced_logs(self, path: str, syslog_identifier: str):
"""Register logs endpoint for a given path, returning logs for single syslog identifier."""

self.webapp.add_routes(
[
web.get(
f"{path}/logs",
partial(
self._api_host.advanced_logs,
identifier=syslog_identifier,
default_verbose=default_verbose,
),
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
|
||||
),
|
||||
web.get(
|
||||
f"{path}/logs/follow",
|
||||
@@ -170,26 +124,11 @@ class RestAPI(CoreSysAttributes):
|
||||
self._api_host.advanced_logs,
|
||||
identifier=syslog_identifier,
|
||||
follow=True,
|
||||
default_verbose=default_verbose,
|
||||
),
|
||||
),
|
||||
web.get(
|
||||
f"{path}/logs/latest",
|
||||
partial(
|
||||
self._api_host.advanced_logs,
|
||||
identifier=syslog_identifier,
|
||||
latest=True,
|
||||
no_colors=True,
|
||||
default_verbose=default_verbose,
|
||||
),
|
||||
),
|
||||
web.get(
|
||||
f"{path}/logs/boots/{{bootid}}",
|
||||
partial(
|
||||
self._api_host.advanced_logs,
|
||||
identifier=syslog_identifier,
|
||||
default_verbose=default_verbose,
|
||||
),
|
||||
partial(self._api_host.advanced_logs, identifier=syslog_identifier),
|
||||
),
|
||||
web.get(
|
||||
f"{path}/logs/boots/{{bootid}}/follow",
|
||||
@@ -197,7 +136,6 @@ class RestAPI(CoreSysAttributes):
|
||||
self._api_host.advanced_logs,
|
||||
identifier=syslog_identifier,
|
||||
follow=True,
|
||||
default_verbose=default_verbose,
|
||||
),
|
||||
),
|
||||
]
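The registration code above leans on functools.partial so one coroutine handler can back many routes with different preset keyword arguments. A minimal sketch of that pattern with aiohttp (the route paths and identifier here are placeholders):

    from functools import partial
    from aiohttp import web

    async def logs(request: web.Request, *, identifier: str, follow: bool = False) -> web.Response:
        # One handler; each route decides which identifier/flags it is bound to.
        return web.Response(text=f"logs for {identifier}, follow={follow}")

    app = web.Application()
    app.add_routes(
        [
            web.get("/dns/logs", partial(logs, identifier="hassio_dns")),
            web.get("/dns/logs/follow", partial(logs, identifier="hassio_dns", follow=True)),
        ]
    )
    # web.run_app(app)  # left commented out; the routing setup is the point here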
|
||||
@@ -210,13 +148,10 @@ class RestAPI(CoreSysAttributes):
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/host/info", api_host.info),
|
||||
web.get(
|
||||
"/host/logs",
|
||||
partial(api_host.advanced_logs, default_verbose=True),
|
||||
),
|
||||
web.get("/host/logs", api_host.advanced_logs),
|
||||
web.get(
|
||||
"/host/logs/follow",
|
||||
partial(api_host.advanced_logs, follow=True, default_verbose=True),
|
||||
partial(api_host.advanced_logs, follow=True),
|
||||
),
|
||||
web.get("/host/logs/identifiers", api_host.list_identifiers),
|
||||
web.get("/host/logs/identifiers/{identifier}", api_host.advanced_logs),
|
||||
@@ -225,13 +160,10 @@ class RestAPI(CoreSysAttributes):
|
||||
partial(api_host.advanced_logs, follow=True),
|
||||
),
|
||||
web.get("/host/logs/boots", api_host.list_boots),
|
||||
web.get(
|
||||
"/host/logs/boots/{bootid}",
|
||||
partial(api_host.advanced_logs, default_verbose=True),
|
||||
),
|
||||
web.get("/host/logs/boots/{bootid}", api_host.advanced_logs),
|
||||
web.get(
|
||||
"/host/logs/boots/{bootid}/follow",
|
||||
partial(api_host.advanced_logs, follow=True, default_verbose=True),
|
||||
partial(api_host.advanced_logs, follow=True),
|
||||
),
|
||||
web.get(
|
||||
"/host/logs/boots/{bootid}/identifiers/{identifier}",
|
||||
@@ -246,7 +178,6 @@ class RestAPI(CoreSysAttributes):
|
||||
web.post("/host/reload", api_host.reload),
|
||||
web.post("/host/options", api_host.options),
|
||||
web.get("/host/services", api_host.services),
|
||||
web.get("/host/disks/default/usage", api_host.disk_usage),
|
||||
]
|
||||
)
|
||||
|
||||
@@ -286,8 +217,6 @@ class RestAPI(CoreSysAttributes):
|
||||
[
|
||||
web.get("/os/info", api_os.info),
|
||||
web.post("/os/update", api_os.update),
|
||||
web.get("/os/config/swap", api_os.config_swap_info),
|
||||
web.post("/os/config/swap", api_os.config_swap_options),
|
||||
web.post("/os/config/sync", api_os.config_sync),
|
||||
web.post("/os/datadisk/move", api_os.migrate_data),
|
||||
web.get("/os/datadisk/list", api_os.list_data),
|
||||
@@ -374,9 +303,7 @@ class RestAPI(CoreSysAttributes):
|
||||
web.post("/multicast/restart", api_multicast.restart),
|
||||
]
|
||||
)
|
||||
self._register_advanced_logs(
|
||||
"/multicast", "hassio_multicast", default_verbose=True
|
||||
)
|
||||
self._register_advanced_logs("/multicast", "hassio_multicast")
|
||||
|
||||
def _register_hardware(self) -> None:
|
||||
"""Register hardware functions."""
|
||||
@@ -396,9 +323,6 @@ class RestAPI(CoreSysAttributes):
|
||||
api_root.coresys = self.coresys
|
||||
|
||||
self.webapp.add_routes([web.get("/info", api_root.info)])
|
||||
self.webapp.add_routes([web.post("/reload_updates", api_root.reload_updates)])
|
||||
|
||||
# Discouraged
|
||||
self.webapp.add_routes([web.post("/refresh_updates", api_root.refresh_updates)])
|
||||
self.webapp.add_routes(
|
||||
[web.get("/available_updates", api_root.available_updates)]
|
||||
@@ -477,7 +401,7 @@ class RestAPI(CoreSysAttributes):
|
||||
async def get_supervisor_logs(*args, **kwargs):
|
||||
try:
|
||||
return await self._api_host.advanced_logs_handler(
|
||||
*args, identifier=SUPERVISOR_DOCKER_NAME, **kwargs
|
||||
*args, identifier="hassio_supervisor", **kwargs
|
||||
)
|
||||
except Exception as err: # pylint: disable=broad-exception-caught
|
||||
# Supervisor logs are critical, so catch everything, log the exception
|
||||
@@ -490,8 +414,6 @@ class RestAPI(CoreSysAttributes):
|
||||
# is known and reported to the user using the resolution center.
|
||||
await async_capture_exception(err)
|
||||
kwargs.pop("follow", None) # Follow is not supported for Docker logs
|
||||
kwargs.pop("latest", None) # Latest is not supported for Docker logs
|
||||
kwargs.pop("no_colors", None) # no_colors not supported for Docker logs
|
||||
return await api_supervisor.logs(*args, **kwargs)
|
||||
|
||||
self.webapp.add_routes(
|
||||
@@ -501,10 +423,6 @@ class RestAPI(CoreSysAttributes):
|
||||
"/supervisor/logs/follow",
|
||||
partial(get_supervisor_logs, follow=True),
|
||||
),
|
||||
web.get(
|
||||
"/supervisor/logs/latest",
|
||||
partial(get_supervisor_logs, latest=True, no_colors=True),
|
||||
),
|
||||
web.get("/supervisor/logs/boots/{bootid}", get_supervisor_logs),
|
||||
web.get(
|
||||
"/supervisor/logs/boots/{bootid}/follow",
|
||||
@@ -563,7 +481,6 @@ class RestAPI(CoreSysAttributes):
|
||||
web.get("/core/api/stream", api_proxy.stream),
|
||||
web.post("/core/api/{path:.+}", api_proxy.api),
|
||||
web.get("/core/api/{path:.+}", api_proxy.api),
|
||||
web.delete("/core/api/{path:.+}", api_proxy.api),
|
||||
web.get("/core/api/", api_proxy.api),
|
||||
]
|
||||
)
|
||||
@@ -580,118 +497,70 @@ class RestAPI(CoreSysAttributes):
|
||||
]
|
||||
)
|
||||
|
||||
def _register_apps(self) -> None:
|
||||
"""Register App functions."""
|
||||
api_apps = APIApps()
|
||||
api_apps.coresys = self.coresys
|
||||
self._api_apps = api_apps
|
||||
def _register_addons(self) -> None:
|
||||
"""Register Add-on functions."""
|
||||
api_addons = APIAddons()
|
||||
api_addons.coresys = self.coresys
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/addons", api_apps.list_apps_v1),
|
||||
web.post("/addons/{app}/uninstall", api_apps.uninstall),
|
||||
web.post("/addons/{app}/start", api_apps.start),
|
||||
web.post("/addons/{app}/stop", api_apps.stop),
|
||||
web.post("/addons/{app}/restart", api_apps.restart),
|
||||
web.post("/addons/{app}/options", api_apps.options),
|
||||
web.post("/addons/{app}/sys_options", api_apps.sys_options),
|
||||
web.post("/addons/{app}/options/validate", api_apps.options_validate),
|
||||
web.get("/addons/{app}/options/config", api_apps.options_config),
|
||||
web.post("/addons/{app}/rebuild", api_apps.rebuild),
|
||||
web.post("/addons/{app}/stdin", api_apps.stdin),
|
||||
web.post("/addons/{app}/security", api_apps.security),
|
||||
web.get("/addons/{app}/stats", api_apps.stats),
|
||||
web.get("/addons", api_addons.list),
|
||||
web.post("/addons/{addon}/uninstall", api_addons.uninstall),
|
||||
web.post("/addons/{addon}/start", api_addons.start),
|
||||
web.post("/addons/{addon}/stop", api_addons.stop),
|
||||
web.post("/addons/{addon}/restart", api_addons.restart),
|
||||
web.post("/addons/{addon}/options", api_addons.options),
|
||||
web.post("/addons/{addon}/sys_options", api_addons.sys_options),
|
||||
web.post(
|
||||
"/addons/{addon}/options/validate", api_addons.options_validate
|
||||
),
|
||||
web.get("/addons/{addon}/options/config", api_addons.options_config),
|
||||
web.post("/addons/{addon}/rebuild", api_addons.rebuild),
|
||||
web.post("/addons/{addon}/stdin", api_addons.stdin),
|
||||
web.post("/addons/{addon}/security", api_addons.security),
|
||||
web.get("/addons/{addon}/stats", api_addons.stats),
|
||||
]
|
||||
)
|
||||
|
||||
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
|
||||
async def get_app_logs(request, *args, **kwargs):
|
||||
app = api_apps.get_app_for_request(request)
|
||||
kwargs["identifier"] = f"addon_{app.slug}"
|
||||
async def get_addon_logs(request, *args, **kwargs):
|
||||
addon = api_addons.get_addon_for_request(request)
|
||||
kwargs["identifier"] = f"addon_{addon.slug}"
|
||||
return await self._api_host.advanced_logs(request, *args, **kwargs)
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/addons/{app}/logs", get_app_logs),
|
||||
web.get("/addons/{addon}/logs", get_addon_logs),
|
||||
web.get(
|
||||
"/addons/{app}/logs/follow",
|
||||
partial(get_app_logs, follow=True),
|
||||
"/addons/{addon}/logs/follow",
|
||||
partial(get_addon_logs, follow=True),
|
||||
),
|
||||
web.get("/addons/{addon}/logs/boots/{bootid}", get_addon_logs),
|
||||
web.get(
|
||||
"/addons/{app}/logs/latest",
|
||||
partial(get_app_logs, latest=True, no_colors=True),
|
||||
),
|
||||
web.get("/addons/{app}/logs/boots/{bootid}", get_app_logs),
|
||||
web.get(
|
||||
"/addons/{app}/logs/boots/{bootid}/follow",
|
||||
partial(get_app_logs, follow=True),
|
||||
"/addons/{addon}/logs/boots/{bootid}/follow",
|
||||
partial(get_addon_logs, follow=True),
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
# Legacy routing to support requests for not installed apps
|
||||
# Legacy routing to support requests for not installed addons
|
||||
api_store = APIStore()
|
||||
api_store.coresys = self.coresys
|
||||
|
||||
@api_process
|
||||
async def apps_app_info(request: web.Request) -> dict[str, Any]:
|
||||
"""Route to store if info requested for not installed app."""
|
||||
async def addons_addon_info(request: web.Request) -> dict[str, Any]:
|
||||
"""Route to store if info requested for not installed addon."""
|
||||
try:
|
||||
app: App = api_apps.get_app_for_request(request)
|
||||
return await api_apps.info_data(app)
|
||||
except APIAppNotInstalled:
|
||||
# Route to store/{app}/info but add missing fields
|
||||
return await api_addons.info(request)
|
||||
except APIAddonNotInstalled:
|
||||
# Route to store/{addon}/info but add missing fields
|
||||
return dict(
|
||||
await api_store.apps_app_info_wrapped(request),
|
||||
state=AppState.UNKNOWN,
|
||||
options=self.sys_apps.store[request.match_info["app"]].options,
|
||||
await api_store.addons_addon_info_wrapped(request),
|
||||
state=AddonState.UNKNOWN,
|
||||
options=self.sys_addons.store[request.match_info["addon"]].options,
|
||||
)
|
||||
|
||||
self.webapp.add_routes([web.get("/addons/{app}/info", apps_app_info)])
|
||||
|
||||
def _register_v2_apps(self) -> None:
|
||||
"""Register v2 app routes on the v2 sub-app (accessible as /v2/apps/...)."""
|
||||
assert self._api_apps is not None
|
||||
api_apps = self._api_apps
|
||||
|
||||
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
|
||||
async def get_app_logs_v2(request, *args, **kwargs):
|
||||
app = api_apps.get_app_for_request(request)
|
||||
kwargs["identifier"] = f"addon_{app.slug}"
|
||||
return await self._api_host.advanced_logs(request, *args, **kwargs)
|
||||
|
||||
self.v2_app.add_routes(
|
||||
[
|
||||
web.get("/apps", api_apps.list_apps),
|
||||
web.post("/apps/{app}/uninstall", api_apps.uninstall),
|
||||
web.post("/apps/{app}/start", api_apps.start),
|
||||
web.post("/apps/{app}/stop", api_apps.stop),
|
||||
web.post("/apps/{app}/restart", api_apps.restart),
|
||||
web.post("/apps/{app}/options", api_apps.options),
|
||||
web.post("/apps/{app}/sys_options", api_apps.sys_options),
|
||||
web.post("/apps/{app}/options/validate", api_apps.options_validate),
|
||||
web.get("/apps/{app}/options/config", api_apps.options_config),
|
||||
web.post("/apps/{app}/rebuild", api_apps.rebuild),
|
||||
web.post("/apps/{app}/stdin", api_apps.stdin),
|
||||
web.post("/apps/{app}/security", api_apps.security),
|
||||
web.get("/apps/{app}/stats", api_apps.stats),
|
||||
web.get("/apps/{app}/info", api_apps.info),
|
||||
web.get("/apps/{app}/logs", get_app_logs_v2),
|
||||
web.get(
|
||||
"/apps/{app}/logs/follow",
|
||||
partial(get_app_logs_v2, follow=True),
|
||||
),
|
||||
web.get(
|
||||
"/apps/{app}/logs/latest",
|
||||
partial(get_app_logs_v2, latest=True, no_colors=True),
|
||||
),
|
||||
web.get("/apps/{app}/logs/boots/{bootid}", get_app_logs_v2),
|
||||
web.get(
|
||||
"/apps/{app}/logs/boots/{bootid}/follow",
|
||||
partial(get_app_logs_v2, follow=True),
|
||||
),
|
||||
]
|
||||
)
|
||||
self.webapp.add_routes([web.get("/addons/{addon}/info", addons_addon_info)])
|
||||
|
||||
def _register_ingress(self) -> None:
|
||||
"""Register Ingress functions."""
|
||||
@@ -703,9 +572,7 @@ class RestAPI(CoreSysAttributes):
|
||||
web.post("/ingress/session", api_ingress.create_session),
|
||||
web.post("/ingress/validate_session", api_ingress.validate_session),
|
||||
web.get("/ingress/panels", api_ingress.panels),
|
||||
web.route(
|
||||
hdrs.METH_ANY, "/ingress/{token}/{path:.*}", api_ingress.handler
|
||||
),
|
||||
web.view("/ingress/{token}/{path:.*}", api_ingress.handler),
|
||||
]
|
||||
)
|
||||
|
||||
@@ -713,38 +580,10 @@ class RestAPI(CoreSysAttributes):
|
||||
"""Register backups functions."""
|
||||
api_backups = APIBackups()
|
||||
api_backups.coresys = self.coresys
|
||||
self._api_backups = api_backups
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/backups", api_backups.list_backups_v1),
|
||||
web.get("/backups/info", api_backups.info_v1),
|
||||
web.post("/backups/options", api_backups.options),
|
||||
web.post("/backups/reload", api_backups.reload),
|
||||
web.post("/backups/freeze", api_backups.freeze),
|
||||
web.post("/backups/thaw", api_backups.thaw),
|
||||
web.post("/backups/new/full", api_backups.backup_full),
|
||||
web.post("/backups/new/partial", api_backups.backup_partial_v1),
|
||||
web.post("/backups/new/upload", api_backups.upload),
|
||||
web.get("/backups/{slug}/info", api_backups.backup_info_v1),
|
||||
web.delete("/backups/{slug}", api_backups.remove),
|
||||
web.post("/backups/{slug}/restore/full", api_backups.restore_full),
|
||||
web.post(
|
||||
"/backups/{slug}/restore/partial",
|
||||
api_backups.restore_partial_v1,
|
||||
),
|
||||
web.get("/backups/{slug}/download", api_backups.download),
|
||||
]
|
||||
)
|
||||
|
||||
def _register_v2_backups(self) -> None:
|
||||
"""Register v2 backup routes on the v2 sub-app (accessible as /v2/backups/...)."""
|
||||
assert self._api_backups is not None
|
||||
api_backups = self._api_backups
|
||||
|
||||
self.v2_app.add_routes(
|
||||
[
|
||||
web.get("/backups", api_backups.list_backups),
|
||||
web.get("/backups", api_backups.list),
|
||||
web.get("/backups/info", api_backups.info),
|
||||
web.post("/backups/options", api_backups.options),
|
||||
web.post("/backups/reload", api_backups.reload),
|
||||
@@ -771,7 +610,7 @@ class RestAPI(CoreSysAttributes):
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/services", api_services.list_services),
|
||||
web.get("/services", api_services.list),
|
||||
web.get("/services/{service}", api_services.get_service),
|
||||
web.post("/services/{service}", api_services.set_service),
|
||||
web.delete("/services/{service}", api_services.del_service),
|
||||
@@ -785,7 +624,7 @@ class RestAPI(CoreSysAttributes):
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/discovery", api_discovery.list_discovery),
|
||||
web.get("/discovery", api_discovery.list),
|
||||
web.get("/discovery/{uuid}", api_discovery.get_discovery),
|
||||
web.delete("/discovery/{uuid}", api_discovery.del_discovery),
|
||||
web.post("/discovery", api_discovery.set_discovery),
|
||||
@@ -808,7 +647,7 @@ class RestAPI(CoreSysAttributes):
|
||||
]
|
||||
)
|
||||
|
||||
self._register_advanced_logs("/dns", "hassio_dns", default_verbose=True)
|
||||
self._register_advanced_logs("/dns", "hassio_dns")
|
||||
|
||||
def _register_audio(self) -> None:
|
||||
"""Register Audio functions."""
|
||||
@@ -831,7 +670,7 @@ class RestAPI(CoreSysAttributes):
|
||||
]
|
||||
)
|
||||
|
||||
self._register_advanced_logs("/audio", "hassio_audio", default_verbose=True)
|
||||
self._register_advanced_logs("/audio", "hassio_audio")
|
||||
|
||||
def _register_mounts(self) -> None:
|
||||
"""Register mounts endpoints."""
|
||||
@@ -853,36 +692,35 @@ class RestAPI(CoreSysAttributes):
|
||||
"""Register store endpoints."""
|
||||
api_store = APIStore()
|
||||
api_store.coresys = self.coresys
|
||||
self._api_store = api_store
|
||||
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/store", api_store.store_info_v1),
|
||||
web.get("/store/addons", api_store.apps_list_v1),
|
||||
web.get("/store/addons/{app}", api_store.apps_app_info),
|
||||
web.get("/store/addons/{app}/icon", api_store.apps_app_icon),
|
||||
web.get("/store/addons/{app}/logo", api_store.apps_app_logo),
|
||||
web.get("/store/addons/{app}/changelog", api_store.apps_app_changelog),
|
||||
web.get("/store", api_store.store_info),
|
||||
web.get("/store/addons", api_store.addons_list),
|
||||
web.get("/store/addons/{addon}", api_store.addons_addon_info),
|
||||
web.get("/store/addons/{addon}/icon", api_store.addons_addon_icon),
|
||||
web.get("/store/addons/{addon}/logo", api_store.addons_addon_logo),
|
||||
web.get(
|
||||
"/store/addons/{app}/documentation",
|
||||
api_store.apps_app_documentation,
|
||||
"/store/addons/{addon}/changelog", api_store.addons_addon_changelog
|
||||
),
|
||||
web.get(
|
||||
"/store/addons/{app}/availability",
|
||||
api_store.apps_app_availability,
|
||||
"/store/addons/{addon}/documentation",
|
||||
api_store.addons_addon_documentation,
|
||||
),
|
||||
web.post("/store/addons/{app}/install", api_store.apps_app_install),
|
||||
web.post(
|
||||
"/store/addons/{app}/install/{version}",
|
||||
api_store.apps_app_install,
|
||||
"/store/addons/{addon}/install", api_store.addons_addon_install
|
||||
),
|
||||
web.post("/store/addons/{app}/update", api_store.apps_app_update),
|
||||
web.post(
|
||||
"/store/addons/{app}/update/{version}",
|
||||
api_store.apps_app_update,
|
||||
"/store/addons/{addon}/install/{version}",
|
||||
api_store.addons_addon_install,
|
||||
),
|
||||
web.post("/store/addons/{addon}/update", api_store.addons_addon_update),
|
||||
web.post(
|
||||
"/store/addons/{addon}/update/{version}",
|
||||
api_store.addons_addon_update,
|
||||
),
|
||||
# Must be below others since it has a wildcard in resource path
|
||||
web.get("/store/addons/{app}/{version}", api_store.apps_app_info),
|
||||
web.get("/store/addons/{addon}/{version}", api_store.addons_addon_info),
|
||||
web.post("/store/reload", api_store.reload),
|
||||
web.get("/store/repositories", api_store.repositories_list),
|
||||
web.get(
|
||||
@@ -893,10 +731,6 @@ class RestAPI(CoreSysAttributes):
|
||||
web.delete(
|
||||
"/store/repositories/{repository}", api_store.remove_repository
|
||||
),
|
||||
web.post(
|
||||
"/store/repositories/{repository}/repair",
|
||||
api_store.repositories_repository_repair,
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
@@ -904,71 +738,22 @@ class RestAPI(CoreSysAttributes):
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.post("/addons/reload", api_store.reload),
|
||||
web.post("/addons/{app}/install", api_store.apps_app_install),
|
||||
web.post("/addons/{app}/update", api_store.apps_app_update),
|
||||
web.get("/addons/{app}/icon", api_store.apps_app_icon),
|
||||
web.get("/addons/{app}/logo", api_store.apps_app_logo),
|
||||
web.get("/addons/{app}/changelog", api_store.apps_app_changelog),
|
||||
web.post("/addons/{addon}/install", api_store.addons_addon_install),
|
||||
web.post("/addons/{addon}/update", api_store.addons_addon_update),
|
||||
web.get("/addons/{addon}/icon", api_store.addons_addon_icon),
|
||||
web.get("/addons/{addon}/logo", api_store.addons_addon_logo),
|
||||
web.get("/addons/{addon}/changelog", api_store.addons_addon_changelog),
|
||||
web.get(
|
||||
"/addons/{app}/documentation",
|
||||
api_store.apps_app_documentation,
|
||||
"/addons/{addon}/documentation",
|
||||
api_store.addons_addon_documentation,
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
def _register_v2_store(self) -> None:
|
||||
"""Register v2 store routes on the v2 sub-app (accessible as /v2/store/...)."""
|
||||
assert self._api_store is not None
|
||||
api_store = self._api_store
|
||||
|
||||
self.v2_app.add_routes(
|
||||
[
|
||||
web.get("/store", api_store.store_info),
|
||||
web.get("/store/apps", api_store.apps_list),
|
||||
web.get("/store/apps/{app}", api_store.apps_app_info),
|
||||
web.get("/store/apps/{app}/icon", api_store.apps_app_icon),
|
||||
web.get("/store/apps/{app}/logo", api_store.apps_app_logo),
|
||||
web.get("/store/apps/{app}/changelog", api_store.apps_app_changelog),
|
||||
web.get(
|
||||
"/store/apps/{app}/documentation",
|
||||
api_store.apps_app_documentation,
|
||||
),
|
||||
web.get(
|
||||
"/store/apps/{app}/availability",
|
||||
api_store.apps_app_availability,
|
||||
),
|
||||
web.post("/store/apps/{app}/install", api_store.apps_app_install),
|
||||
web.post(
|
||||
"/store/apps/{app}/install/{version}",
|
||||
api_store.apps_app_install,
|
||||
),
|
||||
web.post("/store/apps/{app}/update", api_store.apps_app_update),
|
||||
web.post(
|
||||
"/store/apps/{app}/update/{version}",
|
||||
api_store.apps_app_update,
|
||||
),
|
||||
# Must be below others since it has a wildcard in resource path
|
||||
web.get("/store/apps/{app}/{version}", api_store.apps_app_info),
|
||||
web.post("/store/reload", api_store.reload),
|
||||
web.get("/store/repositories", api_store.repositories_list),
|
||||
web.get(
|
||||
"/store/repositories/{repository}",
|
||||
api_store.repositories_repository_info,
|
||||
),
|
||||
web.post("/store/repositories", api_store.add_repository),
|
||||
web.delete(
|
||||
"/store/repositories/{repository}", api_store.remove_repository
|
||||
),
|
||||
web.post(
|
||||
"/store/repositories/{repository}/repair",
|
||||
api_store.repositories_repository_repair,
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
def _register_panel(self) -> list[StaticResourceConfig]:
def _register_panel(self) -> None:
"""Register panel for Home Assistant."""
return [StaticResourceConfig("/app", Path(__file__).parent.joinpath("panel"))]
panel_dir = Path(__file__).parent.joinpath("panel")
self.webapp.add_routes([web.static("/app", panel_dir)])
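One side of this hunk registers the panel inline with web.static(), the other returns a StaticResourceConfig so the StaticResource objects can be built elsewhere (for example in an executor) and registered afterwards. A hedged sketch of both forms; the temporary directory is only a stand-in so the example runs:

    import tempfile
    from pathlib import Path
    from aiohttp import web

    app = web.Application()
    panel_dir = Path(tempfile.mkdtemp())  # stand-in for the real panel directory

    # Inline form: web.static() builds the resource as part of add_routes().
    # app.add_routes([web.static("/app", panel_dir)])

    # Deferred form: construct the StaticResource first (this is the part that
    # touches the filesystem) and register it with the router afterwards.
    resource = web.StaticResource("/app", panel_dir)
    app.router.register_resource(resource)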
|
||||
|
||||
def _register_docker(self) -> None:
|
||||
"""Register docker configuration functions."""
|
||||
@@ -978,11 +763,6 @@ class RestAPI(CoreSysAttributes):
|
||||
self.webapp.add_routes(
|
||||
[
|
||||
web.get("/docker/info", api_docker.info),
|
||||
web.post(
|
||||
"/docker/migrate-storage-driver",
|
||||
api_docker.migrate_docker_storage_driver,
|
||||
),
|
||||
web.post("/docker/options", api_docker.options),
|
||||
web.get("/docker/registries", api_docker.registries),
|
||||
web.post("/docker/registries", api_docker.create_registry),
|
||||
web.delete("/docker/registries/{hostname}", api_docker.remove_registry),
|
||||
|
||||
@@ -3,19 +3,19 @@
|
||||
import asyncio
|
||||
from collections.abc import Awaitable
|
||||
import logging
|
||||
from typing import Any, TypedDict
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import web
|
||||
import voluptuous as vol
|
||||
from voluptuous.humanize import humanize_error
|
||||
|
||||
from ..addons.addon import App
|
||||
from ..addons.addon import Addon
|
||||
from ..addons.manager import AnyAddon
|
||||
from ..addons.utils import rating_security
|
||||
from ..const import (
|
||||
ATTR_ADDONS,
|
||||
ATTR_ADVANCED,
|
||||
ATTR_APPARMOR,
|
||||
ATTR_APPS,
|
||||
ATTR_ARCH,
|
||||
ATTR_AUDIO,
|
||||
ATTR_AUDIO_INPUT,
|
||||
@@ -37,7 +37,6 @@ from ..const import (
|
||||
ATTR_DNS,
|
||||
ATTR_DOCKER_API,
|
||||
ATTR_DOCUMENTATION,
|
||||
ATTR_FORCE,
|
||||
ATTR_FULL_ACCESS,
|
||||
ATTR_GPIO,
|
||||
ATTR_HASSIO_API,
|
||||
@@ -64,6 +63,7 @@ from ..const import (
|
||||
ATTR_MEMORY_LIMIT,
|
||||
ATTR_MEMORY_PERCENT,
|
||||
ATTR_MEMORY_USAGE,
|
||||
ATTR_MESSAGE,
|
||||
ATTR_NAME,
|
||||
ATTR_NETWORK,
|
||||
ATTR_NETWORK_DESCRIPTION,
|
||||
@@ -72,6 +72,7 @@ from ..const import (
|
||||
ATTR_OPTIONS,
|
||||
ATTR_PRIVILEGED,
|
||||
ATTR_PROTECTED,
|
||||
ATTR_PWNED,
|
||||
ATTR_RATING,
|
||||
ATTR_REPOSITORY,
|
||||
ATTR_SCHEMA,
|
||||
@@ -89,25 +90,23 @@ from ..const import (
|
||||
ATTR_UPDATE_AVAILABLE,
|
||||
ATTR_URL,
|
||||
ATTR_USB,
|
||||
ATTR_VALID,
|
||||
ATTR_VERSION,
|
||||
ATTR_VERSION_LATEST,
|
||||
ATTR_VIDEO,
|
||||
ATTR_WATCHDOG,
|
||||
ATTR_WEBUI,
|
||||
REQUEST_FROM,
|
||||
AppBoot,
|
||||
AppBootConfig,
|
||||
AddonBoot,
|
||||
AddonBootConfig,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..docker.stats import DockerStats
|
||||
from ..exceptions import (
|
||||
APIAppNotInstalled,
|
||||
APIAddonNotInstalled,
|
||||
APIError,
|
||||
APIForbidden,
|
||||
APINotFound,
|
||||
AppBootConfigCannotChangeError,
|
||||
AppConfigurationInvalidError,
|
||||
AppNotSupportedWriteStdinError,
|
||||
PwnedError,
|
||||
PwnedSecret,
|
||||
)
|
||||
@@ -122,14 +121,13 @@ SCHEMA_VERSION = vol.Schema({vol.Optional(ATTR_VERSION): str})
|
||||
# pylint: disable=no-value-for-parameter
|
||||
SCHEMA_OPTIONS = vol.Schema(
|
||||
{
|
||||
vol.Optional(ATTR_BOOT): vol.Coerce(AppBoot),
|
||||
vol.Optional(ATTR_BOOT): vol.Coerce(AddonBoot),
|
||||
vol.Optional(ATTR_NETWORK): vol.Maybe(docker_ports),
|
||||
vol.Optional(ATTR_AUTO_UPDATE): vol.Boolean(),
|
||||
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
|
||||
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
|
||||
vol.Optional(ATTR_INGRESS_PANEL): vol.Boolean(),
|
||||
vol.Optional(ATTR_WATCHDOG): vol.Boolean(),
|
||||
vol.Optional(ATTR_OPTIONS): vol.Maybe(dict),
|
||||
}
|
||||
)
|
||||
|
||||
@@ -145,239 +143,216 @@ SCHEMA_SECURITY = vol.Schema({vol.Optional(ATTR_PROTECTED): vol.Boolean()})
|
||||
SCHEMA_UNINSTALL = vol.Schema(
|
||||
{vol.Optional(ATTR_REMOVE_CONFIG, default=False): vol.Boolean()}
|
||||
)
|
||||
|
||||
SCHEMA_REBUILD = vol.Schema({vol.Optional(ATTR_FORCE, default=False): vol.Boolean()})
|
||||
# pylint: enable=no-value-for-parameter
|
||||
|
||||
|
||||
class OptionsValidateResponse(TypedDict):
"""Response object for options validate."""
class APIAddons(CoreSysAttributes):
"""Handle RESTful API for add-on functions."""

message: str
valid: bool
pwned: bool | None


class APIApps(CoreSysAttributes):
"""Handle RESTful API for app functions."""

def get_app_for_request(self, request: web.Request) -> App:
"""Return app, throw an exception if it doesn't exist."""
app_slug: str = request.match_info["app"]
def get_addon_for_request(self, request: web.Request) -> Addon:
"""Return addon, throw an exception if it doesn't exist."""
addon_slug: str = request.match_info.get("addon")
|
||||
|
||||
# Lookup itself
|
||||
if app_slug == "self":
|
||||
app = request.get(REQUEST_FROM)
|
||||
if not isinstance(app, App):
|
||||
raise APIError("Self is not an App")
|
||||
return app
|
||||
if addon_slug == "self":
|
||||
addon = request.get(REQUEST_FROM)
|
||||
if not isinstance(addon, Addon):
|
||||
raise APIError("Self is not an Addon")
|
||||
return addon
|
||||
|
||||
app = self.sys_apps.get(app_slug)
|
||||
if not app:
|
||||
raise APINotFound(f"App {app_slug} does not exist")
|
||||
if not isinstance(app, App) or not app.is_installed:
|
||||
raise APIAppNotInstalled("App is not installed")
|
||||
addon = self.sys_addons.get(addon_slug)
|
||||
if not addon:
|
||||
raise APINotFound(f"Addon {addon_slug} does not exist")
|
||||
if not isinstance(addon, Addon) or not addon.is_installed:
|
||||
raise APIAddonNotInstalled("Addon is not installed")
|
||||
|
||||
return app
|
||||
return addon
|
||||
|
||||
def _list_apps_data(self) -> list[dict[str, Any]]:
|
||||
"""Build the list of installed app data dicts."""
|
||||
return [
|
||||
@api_process
|
||||
async def list(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return all add-ons or repositories."""
|
||||
data_addons = [
|
||||
{
|
||||
ATTR_NAME: app.name,
|
||||
ATTR_SLUG: app.slug,
|
||||
ATTR_DESCRIPTON: app.description,
|
||||
ATTR_ADVANCED: app.advanced, # Deprecated 2026.03
|
||||
ATTR_STAGE: app.stage,
|
||||
ATTR_VERSION: app.version,
|
||||
ATTR_VERSION_LATEST: app.latest_version,
|
||||
ATTR_UPDATE_AVAILABLE: app.need_update,
|
||||
ATTR_AVAILABLE: app.available,
|
||||
ATTR_DETACHED: app.is_detached,
|
||||
ATTR_HOMEASSISTANT: app.homeassistant_version,
|
||||
ATTR_STATE: app.state,
|
||||
ATTR_REPOSITORY: app.repository,
|
||||
ATTR_BUILD: app.need_build,
|
||||
ATTR_URL: app.url,
|
||||
ATTR_ICON: app.with_icon,
|
||||
ATTR_LOGO: app.with_logo,
|
||||
ATTR_SYSTEM_MANAGED: app.system_managed,
|
||||
ATTR_NAME: addon.name,
|
||||
ATTR_SLUG: addon.slug,
|
||||
ATTR_DESCRIPTON: addon.description,
|
||||
ATTR_ADVANCED: addon.advanced,
|
||||
ATTR_STAGE: addon.stage,
|
||||
ATTR_VERSION: addon.version,
|
||||
ATTR_VERSION_LATEST: addon.latest_version,
|
||||
ATTR_UPDATE_AVAILABLE: addon.need_update,
|
||||
ATTR_AVAILABLE: addon.available,
|
||||
ATTR_DETACHED: addon.is_detached,
|
||||
ATTR_HOMEASSISTANT: addon.homeassistant_version,
|
||||
ATTR_STATE: addon.state,
|
||||
ATTR_REPOSITORY: addon.repository,
|
||||
ATTR_BUILD: addon.need_build,
|
||||
ATTR_URL: addon.url,
|
||||
ATTR_ICON: addon.with_icon,
|
||||
ATTR_LOGO: addon.with_logo,
|
||||
ATTR_SYSTEM_MANAGED: addon.system_managed,
|
||||
}
|
||||
for app in self.sys_apps.installed
|
||||
for addon in self.sys_addons.installed
|
||||
]
    @api_process
    async def list_apps(self, request: web.Request) -> dict[str, Any]:
        """Return all installed apps (v2: uses "apps" key)."""
        return {ATTR_APPS: self._list_apps_data()}

    @api_process
    async def list_apps_v1(self, request: web.Request) -> dict[str, Any]:
        """Return all installed apps (v1: uses "addons" key)."""
        return {ATTR_ADDONS: self._list_apps_data()}
        return {ATTR_ADDONS: data_addons}
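Both handlers serve the same `_list_apps_data()` payload; only the top-level key differs. A rough sketch of the two response shapes, assuming `ATTR_APPS` and `ATTR_ADDONS` resolve to the plain strings `"apps"` and `"addons"`:

```python
# Illustrative only; field values abbreviated.
item = {"name": "Example add-on", "slug": "example", "version": "1.0.0",
        "update_available": False, "state": "started"}

v2_response = {"apps": [item]}    # new key served by list_apps
v1_response = {"addons": [item]}  # legacy key served by list_apps_v1
```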
@api_process
|
||||
async def reload(self, request: web.Request) -> None:
|
||||
"""Reload all app data from store."""
|
||||
"""Reload all add-on data from store."""
|
||||
await asyncio.shield(self.sys_store.reload())
async def info_data(self, app: App) -> dict[str, Any]:
|
||||
"""Build and return app information dict (raises on invalid state)."""
|
||||
return {
|
||||
ATTR_NAME: app.name,
|
||||
ATTR_SLUG: app.slug,
|
||||
ATTR_HOSTNAME: app.hostname,
|
||||
ATTR_DNS: app.dns,
|
||||
ATTR_DESCRIPTON: app.description,
|
||||
ATTR_LONG_DESCRIPTION: await app.long_description(),
|
||||
ATTR_ADVANCED: app.advanced, # Deprecated 2026.03
|
||||
ATTR_STAGE: app.stage,
|
||||
ATTR_REPOSITORY: app.repository,
|
||||
ATTR_VERSION_LATEST: app.latest_version,
|
||||
ATTR_PROTECTED: app.protected,
|
||||
ATTR_RATING: rating_security(app),
|
||||
ATTR_BOOT_CONFIG: app.boot_config,
|
||||
ATTR_BOOT: app.boot,
|
||||
ATTR_OPTIONS: app.options,
|
||||
ATTR_SCHEMA: app.schema_ui,
|
||||
ATTR_ARCH: app.supported_arch,
|
||||
ATTR_MACHINE: app.supported_machine,
|
||||
ATTR_HOMEASSISTANT: app.homeassistant_version,
|
||||
ATTR_URL: app.url,
|
||||
ATTR_DETACHED: app.is_detached,
|
||||
ATTR_AVAILABLE: app.available,
|
||||
ATTR_BUILD: app.need_build,
|
||||
ATTR_NETWORK: app.ports,
|
||||
ATTR_NETWORK_DESCRIPTION: app.ports_description,
|
||||
ATTR_HOST_NETWORK: app.host_network,
|
||||
ATTR_HOST_PID: app.host_pid,
|
||||
ATTR_HOST_IPC: app.host_ipc,
|
||||
ATTR_HOST_UTS: app.host_uts,
|
||||
ATTR_HOST_DBUS: app.host_dbus,
|
||||
ATTR_PRIVILEGED: app.privileged,
|
||||
ATTR_FULL_ACCESS: app.with_full_access,
|
||||
ATTR_APPARMOR: app.apparmor,
|
||||
ATTR_ICON: app.with_icon,
|
||||
ATTR_LOGO: app.with_logo,
|
||||
ATTR_CHANGELOG: app.with_changelog,
|
||||
ATTR_DOCUMENTATION: app.with_documentation,
|
||||
ATTR_STDIN: app.with_stdin,
|
||||
ATTR_HASSIO_API: app.access_hassio_api,
|
||||
ATTR_HASSIO_ROLE: app.hassio_role,
|
||||
ATTR_AUTH_API: app.access_auth_api,
|
||||
ATTR_HOMEASSISTANT_API: app.access_homeassistant_api,
|
||||
ATTR_GPIO: app.with_gpio,
|
||||
ATTR_USB: app.with_usb,
|
||||
ATTR_UART: app.with_uart,
|
||||
ATTR_KERNEL_MODULES: app.with_kernel_modules,
|
||||
ATTR_DEVICETREE: app.with_devicetree,
|
||||
ATTR_UDEV: app.with_udev,
|
||||
ATTR_DOCKER_API: app.access_docker_api,
|
||||
ATTR_VIDEO: app.with_video,
|
||||
ATTR_AUDIO: app.with_audio,
|
||||
ATTR_STARTUP: app.startup,
|
||||
ATTR_SERVICES: _pretty_services(app),
|
||||
ATTR_DISCOVERY: app.discovery,
|
||||
ATTR_TRANSLATIONS: app.translations,
|
||||
ATTR_INGRESS: app.with_ingress,
|
||||
ATTR_SIGNED: app.signed,
|
||||
ATTR_STATE: app.state,
|
||||
ATTR_WEBUI: app.webui,
|
||||
ATTR_INGRESS_ENTRY: app.ingress_entry,
|
||||
ATTR_INGRESS_URL: app.ingress_url,
|
||||
ATTR_INGRESS_PORT: app.ingress_port,
|
||||
ATTR_INGRESS_PANEL: app.ingress_panel,
|
||||
ATTR_AUDIO_INPUT: app.audio_input,
|
||||
ATTR_AUDIO_OUTPUT: app.audio_output,
|
||||
ATTR_AUTO_UPDATE: app.auto_update,
|
||||
ATTR_IP_ADDRESS: str(app.ip_address),
|
||||
ATTR_VERSION: app.version,
|
||||
ATTR_UPDATE_AVAILABLE: app.need_update,
|
||||
ATTR_WATCHDOG: app.watchdog,
|
||||
ATTR_DEVICES: app.static_devices + [device.path for device in app.devices],
|
||||
ATTR_SYSTEM_MANAGED: app.system_managed,
|
||||
ATTR_SYSTEM_MANAGED_CONFIG_ENTRY: app.system_managed_config_entry,
|
||||
async def info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return add-on information."""
|
||||
addon: AnyAddon = self.get_addon_for_request(request)
|
||||
|
||||
data = {
|
||||
ATTR_NAME: addon.name,
|
||||
ATTR_SLUG: addon.slug,
|
||||
ATTR_HOSTNAME: addon.hostname,
|
||||
ATTR_DNS: addon.dns,
|
||||
ATTR_DESCRIPTON: addon.description,
|
||||
ATTR_LONG_DESCRIPTION: await addon.long_description(),
|
||||
ATTR_ADVANCED: addon.advanced,
|
||||
ATTR_STAGE: addon.stage,
|
||||
ATTR_REPOSITORY: addon.repository,
|
||||
ATTR_VERSION_LATEST: addon.latest_version,
|
||||
ATTR_PROTECTED: addon.protected,
|
||||
ATTR_RATING: rating_security(addon),
|
||||
ATTR_BOOT_CONFIG: addon.boot_config,
|
||||
ATTR_BOOT: addon.boot,
|
||||
ATTR_OPTIONS: addon.options,
|
||||
ATTR_SCHEMA: addon.schema_ui,
|
||||
ATTR_ARCH: addon.supported_arch,
|
||||
ATTR_MACHINE: addon.supported_machine,
|
||||
ATTR_HOMEASSISTANT: addon.homeassistant_version,
|
||||
ATTR_URL: addon.url,
|
||||
ATTR_DETACHED: addon.is_detached,
|
||||
ATTR_AVAILABLE: addon.available,
|
||||
ATTR_BUILD: addon.need_build,
|
||||
ATTR_NETWORK: addon.ports,
|
||||
ATTR_NETWORK_DESCRIPTION: addon.ports_description,
|
||||
ATTR_HOST_NETWORK: addon.host_network,
|
||||
ATTR_HOST_PID: addon.host_pid,
|
||||
ATTR_HOST_IPC: addon.host_ipc,
|
||||
ATTR_HOST_UTS: addon.host_uts,
|
||||
ATTR_HOST_DBUS: addon.host_dbus,
|
||||
ATTR_PRIVILEGED: addon.privileged,
|
||||
ATTR_FULL_ACCESS: addon.with_full_access,
|
||||
ATTR_APPARMOR: addon.apparmor,
|
||||
ATTR_ICON: addon.with_icon,
|
||||
ATTR_LOGO: addon.with_logo,
|
||||
ATTR_CHANGELOG: addon.with_changelog,
|
||||
ATTR_DOCUMENTATION: addon.with_documentation,
|
||||
ATTR_STDIN: addon.with_stdin,
|
||||
ATTR_HASSIO_API: addon.access_hassio_api,
|
||||
ATTR_HASSIO_ROLE: addon.hassio_role,
|
||||
ATTR_AUTH_API: addon.access_auth_api,
|
||||
ATTR_HOMEASSISTANT_API: addon.access_homeassistant_api,
|
||||
ATTR_GPIO: addon.with_gpio,
|
||||
ATTR_USB: addon.with_usb,
|
||||
ATTR_UART: addon.with_uart,
|
||||
ATTR_KERNEL_MODULES: addon.with_kernel_modules,
|
||||
ATTR_DEVICETREE: addon.with_devicetree,
|
||||
ATTR_UDEV: addon.with_udev,
|
||||
ATTR_DOCKER_API: addon.access_docker_api,
|
||||
ATTR_VIDEO: addon.with_video,
|
||||
ATTR_AUDIO: addon.with_audio,
|
||||
ATTR_STARTUP: addon.startup,
|
||||
ATTR_SERVICES: _pretty_services(addon),
|
||||
ATTR_DISCOVERY: addon.discovery,
|
||||
ATTR_TRANSLATIONS: addon.translations,
|
||||
ATTR_INGRESS: addon.with_ingress,
|
||||
ATTR_SIGNED: addon.signed,
|
||||
ATTR_STATE: addon.state,
|
||||
ATTR_WEBUI: addon.webui,
|
||||
ATTR_INGRESS_ENTRY: addon.ingress_entry,
|
||||
ATTR_INGRESS_URL: addon.ingress_url,
|
||||
ATTR_INGRESS_PORT: addon.ingress_port,
|
||||
ATTR_INGRESS_PANEL: addon.ingress_panel,
|
||||
ATTR_AUDIO_INPUT: addon.audio_input,
|
||||
ATTR_AUDIO_OUTPUT: addon.audio_output,
|
||||
ATTR_AUTO_UPDATE: addon.auto_update,
|
||||
ATTR_IP_ADDRESS: str(addon.ip_address),
|
||||
ATTR_VERSION: addon.version,
|
||||
ATTR_UPDATE_AVAILABLE: addon.need_update,
|
||||
ATTR_WATCHDOG: addon.watchdog,
|
||||
ATTR_DEVICES: addon.static_devices
|
||||
+ [device.path for device in addon.devices],
|
||||
ATTR_SYSTEM_MANAGED: addon.system_managed,
|
||||
ATTR_SYSTEM_MANAGED_CONFIG_ENTRY: addon.system_managed_config_entry,
|
||||
}
@api_process
|
||||
async def info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return app information."""
|
||||
app: App = self.get_app_for_request(request)
|
||||
return await self.info_data(app)
|
||||
return data
@api_process
|
||||
async def options(self, request: web.Request) -> None:
|
||||
"""Store user options for app."""
|
||||
app = self.get_app_for_request(request)
|
||||
"""Store user options for add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
|
||||
# Update secrets for validation
|
||||
await self.sys_homeassistant.secrets.reload()
|
||||
|
||||
# Validate/Process Body
|
||||
body = await api_validate(SCHEMA_OPTIONS, request)
|
||||
if ATTR_OPTIONS in body:
|
||||
# None resets options to defaults, otherwise validate the options
|
||||
if body[ATTR_OPTIONS] is None:
|
||||
app.options = None
|
||||
else:
|
||||
try:
|
||||
app.options = app.schema(body[ATTR_OPTIONS])
|
||||
except vol.Invalid as ex:
|
||||
raise AppConfigurationInvalidError(
|
||||
app=app.slug,
|
||||
validation_error=humanize_error(body[ATTR_OPTIONS], ex),
|
||||
) from None
|
||||
if ATTR_BOOT in body:
|
||||
if app.boot_config == AppBootConfig.MANUAL_ONLY:
|
||||
raise AppBootConfigCannotChangeError(
|
||||
app=app.slug, boot_config=app.boot_config.value
|
||||
)
|
||||
app.boot = body[ATTR_BOOT]
|
||||
if ATTR_AUTO_UPDATE in body:
|
||||
app.auto_update = body[ATTR_AUTO_UPDATE]
|
||||
if ATTR_NETWORK in body:
|
||||
app.ports = body[ATTR_NETWORK]
|
||||
if ATTR_AUDIO_INPUT in body:
|
||||
app.audio_input = body[ATTR_AUDIO_INPUT]
|
||||
if ATTR_AUDIO_OUTPUT in body:
|
||||
app.audio_output = body[ATTR_AUDIO_OUTPUT]
|
||||
if ATTR_INGRESS_PANEL in body:
|
||||
app.ingress_panel = body[ATTR_INGRESS_PANEL]
|
||||
await self.sys_ingress.update_hass_panel(app)
|
||||
if ATTR_WATCHDOG in body:
|
||||
app.watchdog = body[ATTR_WATCHDOG]
|
||||
# Extend schema with add-on specific validation
|
||||
addon_schema = SCHEMA_OPTIONS.extend(
|
||||
{vol.Optional(ATTR_OPTIONS): vol.Maybe(addon.schema)}
|
||||
)
await app.save_persist()
|
||||
# Validate/Process Body
|
||||
body = await api_validate(addon_schema, request, origin=[ATTR_OPTIONS])
|
||||
if ATTR_OPTIONS in body:
|
||||
addon.options = body[ATTR_OPTIONS]
|
||||
if ATTR_BOOT in body:
|
||||
if addon.boot_config == AddonBootConfig.MANUAL_ONLY:
|
||||
raise APIError(
|
||||
f"Addon {addon.slug} boot option is set to {addon.boot_config} so it cannot be changed"
|
||||
)
|
||||
addon.boot = body[ATTR_BOOT]
|
||||
if ATTR_AUTO_UPDATE in body:
|
||||
addon.auto_update = body[ATTR_AUTO_UPDATE]
|
||||
if ATTR_NETWORK in body:
|
||||
addon.ports = body[ATTR_NETWORK]
|
||||
if ATTR_AUDIO_INPUT in body:
|
||||
addon.audio_input = body[ATTR_AUDIO_INPUT]
|
||||
if ATTR_AUDIO_OUTPUT in body:
|
||||
addon.audio_output = body[ATTR_AUDIO_OUTPUT]
|
||||
if ATTR_INGRESS_PANEL in body:
|
||||
addon.ingress_panel = body[ATTR_INGRESS_PANEL]
|
||||
await self.sys_ingress.update_hass_panel(addon)
|
||||
if ATTR_WATCHDOG in body:
|
||||
addon.watchdog = body[ATTR_WATCHDOG]
|
||||
|
||||
await addon.save_persist()
@api_process
|
||||
async def sys_options(self, request: web.Request) -> None:
|
||||
"""Store system options for an app."""
|
||||
app = self.get_app_for_request(request)
|
||||
"""Store system options for an add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
|
||||
# Validate/Process Body
|
||||
body = await api_validate(SCHEMA_SYS_OPTIONS, request)
|
||||
if ATTR_SYSTEM_MANAGED in body:
|
||||
app.system_managed = body[ATTR_SYSTEM_MANAGED]
|
||||
addon.system_managed = body[ATTR_SYSTEM_MANAGED]
|
||||
if ATTR_SYSTEM_MANAGED_CONFIG_ENTRY in body:
|
||||
app.system_managed_config_entry = body[ATTR_SYSTEM_MANAGED_CONFIG_ENTRY]
|
||||
addon.system_managed_config_entry = body[ATTR_SYSTEM_MANAGED_CONFIG_ENTRY]
|
||||
|
||||
await app.save_persist()
|
||||
await addon.save_persist()
@api_process
|
||||
async def options_validate(self, request: web.Request) -> OptionsValidateResponse:
|
||||
"""Validate user options for app."""
|
||||
app = self.get_app_for_request(request)
|
||||
data = OptionsValidateResponse(message="", valid=True, pwned=False)
|
||||
async def options_validate(self, request: web.Request) -> None:
|
||||
"""Validate user options for add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
data = {ATTR_MESSAGE: "", ATTR_VALID: True, ATTR_PWNED: False}
|
||||
|
||||
options = await request.json(loads=json_loads) or app.options
|
||||
options = await request.json(loads=json_loads) or addon.options
|
||||
|
||||
# Validate config
|
||||
options_schema = app.schema
|
||||
options_schema = addon.schema
|
||||
try:
|
||||
options_schema.validate(options)
|
||||
except vol.Invalid as ex:
|
||||
data["message"] = humanize_error(options, ex)
|
||||
data["valid"] = False
|
||||
data[ATTR_MESSAGE] = humanize_error(options, ex)
|
||||
data[ATTR_VALID] = False
|
||||
|
||||
if not self.sys_security.pwned:
|
||||
return data
|
||||
@@ -388,53 +363,53 @@ class APIApps(CoreSysAttributes):
|
||||
await self.sys_security.verify_secret(secret)
|
||||
continue
|
||||
except PwnedSecret:
|
||||
data["pwned"] = True
|
||||
data[ATTR_PWNED] = True
|
||||
except PwnedError:
|
||||
data["pwned"] = None
|
||||
data[ATTR_PWNED] = None
|
||||
break
|
||||
|
||||
if self.sys_security.force and data["pwned"] in (None, True):
|
||||
data["valid"] = False
|
||||
if data["pwned"] is None:
|
||||
data["message"] = "Error happening on pwned secrets check!"
|
||||
if self.sys_security.force and data[ATTR_PWNED] in (None, True):
|
||||
data[ATTR_VALID] = False
|
||||
if data[ATTR_PWNED] is None:
|
||||
data[ATTR_MESSAGE] = "Error happening on pwned secrets check!"
|
||||
else:
|
||||
data["message"] = "App uses pwned secrets!"
|
||||
data[ATTR_MESSAGE] = "Add-on uses pwned secrets!"
return data
@api_process
|
||||
async def options_config(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Validate user options for app."""
|
||||
slug: str = request.match_info["app"]
|
||||
async def options_config(self, request: web.Request) -> None:
|
||||
"""Validate user options for add-on."""
|
||||
slug: str = request.match_info.get("addon")
|
||||
if slug != "self":
|
||||
raise APIForbidden("This can be only read by the app itself!")
|
||||
app = self.get_app_for_request(request)
|
||||
raise APIForbidden("This can be only read by the Add-on itself!")
|
||||
addon = self.get_addon_for_request(request)
|
||||
|
||||
# Lookup/reload secrets
|
||||
await self.sys_homeassistant.secrets.reload()
|
||||
try:
|
||||
return app.schema.validate(app.options)
|
||||
return addon.schema.validate(addon.options)
|
||||
except vol.Invalid:
|
||||
raise APIError("Invalid configuration data for the app") from None
|
||||
raise APIError("Invalid configuration data for the add-on") from None
|
||||
|
||||
@api_process
|
||||
async def security(self, request: web.Request) -> None:
|
||||
"""Store security options for app."""
|
||||
app = self.get_app_for_request(request)
|
||||
"""Store security options for add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
body: dict[str, Any] = await api_validate(SCHEMA_SECURITY, request)
|
||||
|
||||
if ATTR_PROTECTED in body:
|
||||
_LOGGER.warning("Changing protected flag for %s!", app.slug)
|
||||
app.protected = body[ATTR_PROTECTED]
|
||||
_LOGGER.warning("Changing protected flag for %s!", addon.slug)
|
||||
addon.protected = body[ATTR_PROTECTED]
|
||||
|
||||
await app.save_persist()
|
||||
await addon.save_persist()
@api_process
|
||||
async def stats(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return resource information."""
|
||||
app = self.get_app_for_request(request)
|
||||
addon = self.get_addon_for_request(request)
|
||||
|
||||
stats: DockerStats = await app.stats()
|
||||
stats: DockerStats = await addon.stats()
|
||||
|
||||
return {
|
||||
ATTR_CPU_PERCENT: stats.cpu_percent,
|
||||
@@ -448,56 +423,54 @@ class APIApps(CoreSysAttributes):
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def uninstall(self, request: web.Request) -> None:
|
||||
"""Uninstall app."""
|
||||
app = self.get_app_for_request(request)
|
||||
async def uninstall(self, request: web.Request) -> Awaitable[None]:
|
||||
"""Uninstall add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
body: dict[str, Any] = await api_validate(SCHEMA_UNINSTALL, request)
|
||||
await asyncio.shield(
|
||||
self.sys_apps.uninstall(app.slug, remove_config=body[ATTR_REMOVE_CONFIG])
|
||||
return await asyncio.shield(
|
||||
self.sys_addons.uninstall(
|
||||
addon.slug, remove_config=body[ATTR_REMOVE_CONFIG]
|
||||
)
|
||||
)
@api_process
|
||||
async def start(self, request: web.Request) -> None:
|
||||
"""Start app."""
|
||||
app = self.get_app_for_request(request)
|
||||
if start_task := await asyncio.shield(app.start()):
|
||||
"""Start add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
if start_task := await asyncio.shield(addon.start()):
|
||||
await start_task
@api_process
|
||||
def stop(self, request: web.Request) -> Awaitable[None]:
|
||||
"""Stop app."""
|
||||
app = self.get_app_for_request(request)
|
||||
return asyncio.shield(app.stop())
|
||||
"""Stop add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
return asyncio.shield(addon.stop())
@api_process
|
||||
async def restart(self, request: web.Request) -> None:
|
||||
"""Restart app."""
|
||||
app: App = self.get_app_for_request(request)
|
||||
if start_task := await asyncio.shield(app.restart()):
|
||||
"""Restart add-on."""
|
||||
addon: Addon = self.get_addon_for_request(request)
|
||||
if start_task := await asyncio.shield(addon.restart()):
|
||||
await start_task
@api_process
|
||||
async def rebuild(self, request: web.Request) -> None:
|
||||
"""Rebuild local build app."""
|
||||
app = self.get_app_for_request(request)
|
||||
body: dict[str, Any] = await api_validate(SCHEMA_REBUILD, request)
|
||||
|
||||
if start_task := await asyncio.shield(
|
||||
self.sys_apps.rebuild(app.slug, force=body[ATTR_FORCE])
|
||||
):
|
||||
"""Rebuild local build add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
if start_task := await asyncio.shield(self.sys_addons.rebuild(addon.slug)):
|
||||
await start_task
@api_process
|
||||
async def stdin(self, request: web.Request) -> None:
|
||||
"""Write to stdin of app."""
|
||||
app = self.get_app_for_request(request)
|
||||
if not app.with_stdin:
|
||||
raise AppNotSupportedWriteStdinError(_LOGGER.error, app=app.slug)
|
||||
"""Write to stdin of add-on."""
|
||||
addon = self.get_addon_for_request(request)
|
||||
if not addon.with_stdin:
|
||||
raise APIError(f"STDIN not supported by the {addon.slug} add-on")
|
||||
|
||||
data = await request.read()
|
||||
await asyncio.shield(app.write_stdin(data))
|
||||
await asyncio.shield(addon.write_stdin(data))
def _pretty_services(app: App) -> list[str]:
def _pretty_services(addon: Addon) -> list[str]:
    """Return a simplified services role list."""
    return [f"{name}:{access}" for name, access in app.services_role.items()]
    return [f"{name}:{access}" for name, access in addon.services_role.items()]
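A quick, hypothetical illustration of the helper's output, assuming `services_role` maps service names to roles:

```python
class FakeAddon:
    services_role = {"mysql": "provide", "mqtt": "want"}


print(_pretty_services(FakeAddon()))  # -> ['mysql:provide', 'mqtt:want']
```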
@@ -124,7 +124,7 @@ class APIAudio(CoreSysAttributes):
|
||||
@api_process
|
||||
async def set_volume(self, request: web.Request) -> None:
|
||||
"""Set audio volume on stream."""
|
||||
source: StreamType = StreamType(request.match_info["source"])
|
||||
source: StreamType = StreamType(request.match_info.get("source"))
|
||||
application: bool = request.path.endswith("application")
|
||||
body = await api_validate(SCHEMA_VOLUME, request)
|
||||
|
||||
@@ -137,7 +137,7 @@ class APIAudio(CoreSysAttributes):
|
||||
@api_process
|
||||
async def set_mute(self, request: web.Request) -> None:
|
||||
"""Mute audio volume on stream."""
|
||||
source: StreamType = StreamType(request.match_info["source"])
|
||||
source: StreamType = StreamType(request.match_info.get("source"))
|
||||
application: bool = request.path.endswith("application")
|
||||
body = await api_validate(SCHEMA_MUTE, request)
|
||||
|
||||
@@ -150,7 +150,7 @@ class APIAudio(CoreSysAttributes):
|
||||
@api_process
|
||||
async def set_default(self, request: web.Request) -> None:
|
||||
"""Set audio default stream."""
|
||||
source: StreamType = StreamType(request.match_info["source"])
|
||||
source: StreamType = StreamType(request.match_info.get("source"))
|
||||
body = await api_validate(SCHEMA_DEFAULT, request)
|
||||
|
||||
await asyncio.shield(self.sys_host.sound.set_default(source, body[ATTR_NAME]))
|
||||
|
||||
@@ -1,21 +1,19 @@
|
||||
"""Init file for Supervisor auth/SSO RESTful API."""
|
||||
|
||||
import asyncio
|
||||
from collections.abc import Awaitable
|
||||
import logging
|
||||
from typing import Any, cast
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import BasicAuth, web
|
||||
from aiohttp.hdrs import AUTHORIZATION, CONTENT_TYPE, WWW_AUTHENTICATE
|
||||
from aiohttp.web import FileField
|
||||
from aiohttp.web_exceptions import HTTPUnauthorized
|
||||
from multidict import MultiDictProxy
|
||||
import voluptuous as vol
|
||||
|
||||
from ..addons.addon import App
|
||||
from ..addons.addon import Addon
|
||||
from ..const import ATTR_NAME, ATTR_PASSWORD, ATTR_USERNAME, REQUEST_FROM
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APIForbidden, AuthInvalidNonStringValueError
|
||||
from ..exceptions import APIForbidden
|
||||
from ..utils.json import json_loads
|
||||
from .const import (
|
||||
ATTR_GROUP_IDS,
|
||||
ATTR_IS_ACTIVE,
|
||||
@@ -25,7 +23,7 @@ from .const import (
|
||||
CONTENT_TYPE_JSON,
|
||||
CONTENT_TYPE_URL,
|
||||
)
|
||||
from .utils import api_process, api_validate, json_loads
|
||||
from .utils import api_process, api_validate
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -44,23 +42,17 @@ REALM_HEADER: dict[str, str] = {
|
||||
class APIAuth(CoreSysAttributes):
|
||||
"""Handle RESTful API for auth functions."""
|
||||
|
||||
def _process_basic(self, request: web.Request, app: App) -> Awaitable[bool]:
|
||||
def _process_basic(self, request: web.Request, addon: Addon) -> bool:
|
||||
"""Process login request with basic auth.
|
||||
|
||||
Return a coroutine.
|
||||
"""
|
||||
try:
|
||||
auth = BasicAuth.decode(request.headers[AUTHORIZATION])
|
||||
except ValueError as err:
|
||||
raise HTTPUnauthorized(headers=REALM_HEADER) from err
|
||||
return self.sys_auth.check_login(app, auth.login, auth.password)
|
||||
auth = BasicAuth.decode(request.headers[AUTHORIZATION])
|
||||
return self.sys_auth.check_login(addon, auth.login, auth.password)
|
||||
|
||||
def _process_dict(
|
||||
self,
|
||||
request: web.Request,
|
||||
app: App,
|
||||
data: dict[str, Any] | MultiDictProxy[str | bytes | FileField],
|
||||
) -> Awaitable[bool]:
|
||||
self, request: web.Request, addon: Addon, data: dict[str, str]
|
||||
) -> bool:
|
||||
"""Process login with dict data.
|
||||
|
||||
Return a coroutine.
|
||||
@@ -68,45 +60,32 @@ class APIAuth(CoreSysAttributes):
|
||||
username = data.get("username") or data.get("user")
|
||||
password = data.get("password")
|
||||
|
||||
# Test that we did receive strings and not something else, raise if so
|
||||
try:
|
||||
_ = username.encode and password.encode # type: ignore
|
||||
except AttributeError:
|
||||
raise AuthInvalidNonStringValueError(
|
||||
_LOGGER.error, headers=REALM_HEADER
|
||||
) from None
|
||||
|
||||
return self.sys_auth.check_login(app, cast(str, username), cast(str, password))
|
||||
return self.sys_auth.check_login(addon, username, password)
|
||||
|
||||
@api_process
|
||||
async def auth(self, request: web.Request) -> bool:
|
||||
"""Process login request."""
|
||||
app = request[REQUEST_FROM]
|
||||
addon = request[REQUEST_FROM]
|
||||
|
||||
if not isinstance(app, App) or not app.access_auth_api:
|
||||
if not addon.access_auth_api:
|
||||
raise APIForbidden("Can't use Home Assistant auth!")
|
||||
|
||||
# BasicAuth
|
||||
if AUTHORIZATION in request.headers:
|
||||
if not await self._process_basic(request, app):
|
||||
if not await self._process_basic(request, addon):
|
||||
raise HTTPUnauthorized(headers=REALM_HEADER)
|
||||
return True
|
||||
|
||||
# Json
|
||||
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_JSON:
|
||||
data = await request.json(loads=json_loads)
|
||||
if not await self._process_dict(request, app, data):
|
||||
raise HTTPUnauthorized()
|
||||
return True
|
||||
return await self._process_dict(request, addon, data)
|
||||
|
||||
# URL encoded
|
||||
if request.headers.get(CONTENT_TYPE) == CONTENT_TYPE_URL:
|
||||
data = await request.post()
|
||||
if not await self._process_dict(request, app, data):
|
||||
raise HTTPUnauthorized()
|
||||
return True
|
||||
return await self._process_dict(request, addon, data)
|
||||
|
||||
# Advertise Basic authentication by default
|
||||
raise HTTPUnauthorized(headers=REALM_HEADER)
|
||||
|
||||
@api_process
|
||||
@@ -128,14 +107,14 @@ class APIAuth(CoreSysAttributes):
|
||||
return {
|
||||
ATTR_USERS: [
|
||||
{
|
||||
ATTR_USERNAME: user.username,
|
||||
ATTR_NAME: user.name,
|
||||
ATTR_IS_OWNER: user.is_owner,
|
||||
ATTR_IS_ACTIVE: user.is_active,
|
||||
ATTR_LOCAL_ONLY: user.local_only,
|
||||
ATTR_GROUP_IDS: user.group_ids,
|
||||
ATTR_USERNAME: user[ATTR_USERNAME],
|
||||
ATTR_NAME: user[ATTR_NAME],
|
||||
ATTR_IS_OWNER: user[ATTR_IS_OWNER],
|
||||
ATTR_IS_ACTIVE: user[ATTR_IS_ACTIVE],
|
||||
ATTR_LOCAL_ONLY: user[ATTR_LOCAL_ONLY],
|
||||
ATTR_GROUP_IDS: user[ATTR_GROUP_IDS],
|
||||
}
|
||||
for user in await self.sys_auth.list_users()
|
||||
if user.username
|
||||
if user[ATTR_USERNAME]
|
||||
]
|
||||
}
|
||||
|
||||
@@ -3,14 +3,16 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from io import BufferedWriter
|
||||
from collections.abc import Callable
|
||||
import errno
|
||||
from io import IOBase
|
||||
import logging
|
||||
from pathlib import Path
|
||||
import re
|
||||
from tempfile import TemporaryDirectory
|
||||
from typing import Any, cast
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import BodyPartReader, web
|
||||
from aiohttp import web
|
||||
from aiohttp.hdrs import CONTENT_DISPOSITION
|
||||
import voluptuous as vol
|
||||
from voluptuous.humanize import humanize_error
|
||||
@@ -20,7 +22,6 @@ from ..backups.const import LOCATION_CLOUD_BACKUP, LOCATION_TYPE
|
||||
from ..backups.validate import ALL_FOLDERS, FOLDER_HOMEASSISTANT, days_until_stale
|
||||
from ..const import (
|
||||
ATTR_ADDONS,
|
||||
ATTR_APPS,
|
||||
ATTR_BACKUPS,
|
||||
ATTR_COMPRESSED,
|
||||
ATTR_CONTENT,
|
||||
@@ -35,6 +36,7 @@ from ..const import (
|
||||
ATTR_LOCATION,
|
||||
ATTR_NAME,
|
||||
ATTR_PASSWORD,
|
||||
ATTR_PATH,
|
||||
ATTR_PROTECTED,
|
||||
ATTR_REPOSITORIES,
|
||||
ATTR_SIZE,
|
||||
@@ -44,12 +46,15 @@ from ..const import (
|
||||
ATTR_TIMEOUT,
|
||||
ATTR_TYPE,
|
||||
ATTR_VERSION,
|
||||
DEFAULT_CHUNK_SIZE,
|
||||
REQUEST_FROM,
|
||||
BusEvent,
|
||||
CoreState,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APIError, APIForbidden, APINotFound
|
||||
from ..jobs import JobSchedulerOptions
|
||||
from ..mounts.const import MountUsage
|
||||
from ..resolution.const import UnhealthyReason
|
||||
from .const import (
|
||||
ATTR_ADDITIONAL_LOCATIONS,
|
||||
ATTR_BACKGROUND,
|
||||
@@ -57,11 +62,11 @@ from .const import (
|
||||
ATTR_LOCATIONS,
|
||||
CONTENT_TYPE_TAR,
|
||||
)
|
||||
from .utils import api_process, api_validate, background_task
|
||||
from .utils import api_process, api_validate
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
ALL_APPS_FLAG = "ALL"
|
||||
ALL_ADDONS_FLAG = "ALL"
|
||||
|
||||
LOCATION_LOCAL = ".local"
|
||||
|
||||
@@ -100,20 +105,10 @@ SCHEMA_RESTORE_FULL = vol.Schema(
|
||||
}
|
||||
)
|
||||
|
||||
# V1 schemas use "addons" as the request body key (legacy API contract).
|
||||
SCHEMA_RESTORE_PARTIAL_V1 = SCHEMA_RESTORE_FULL.extend(
|
||||
{
|
||||
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
|
||||
vol.Optional(ATTR_ADDONS): vol.All([str], vol.Unique()),
|
||||
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
|
||||
}
|
||||
)
|
||||
|
||||
# V2 schemas use "apps" as the request body key.
|
||||
SCHEMA_RESTORE_PARTIAL = SCHEMA_RESTORE_FULL.extend(
|
||||
{
|
||||
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
|
||||
vol.Optional(ATTR_APPS): vol.All([str], vol.Unique()),
|
||||
vol.Optional(ATTR_ADDONS): vol.All([str], vol.Unique()),
|
||||
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
|
||||
}
|
||||
)
|
||||
@@ -131,19 +126,11 @@ SCHEMA_BACKUP_FULL = vol.Schema(
|
||||
}
|
||||
)
|
||||
|
||||
# V1 schema uses "addons" as the request body key (legacy API contract).
|
||||
SCHEMA_BACKUP_PARTIAL_V1 = SCHEMA_BACKUP_FULL.extend(
|
||||
{
|
||||
vol.Optional(ATTR_ADDONS): vol.Or(ALL_APPS_FLAG, vol.All([str], vol.Unique())),
|
||||
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
|
||||
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
|
||||
}
|
||||
)
|
||||
|
||||
# V2 schema uses "apps" as the request body key.
|
||||
SCHEMA_BACKUP_PARTIAL = SCHEMA_BACKUP_FULL.extend(
|
||||
{
|
||||
vol.Optional(ATTR_APPS): vol.Or(ALL_APPS_FLAG, vol.All([str], vol.Unique())),
|
||||
vol.Optional(ATTR_ADDONS): vol.Or(
|
||||
ALL_ADDONS_FLAG, vol.All([str], vol.Unique())
|
||||
),
|
||||
vol.Optional(ATTR_FOLDERS): SCHEMA_FOLDERS,
|
||||
vol.Optional(ATTR_HOMEASSISTANT): vol.Boolean(),
|
||||
}
|
|
||||
"""Make location attributes dictionary."""
|
||||
return {
|
||||
loc if loc else LOCATION_LOCAL: {
|
||||
ATTR_PROTECTED: backup.all_locations[loc].protected,
|
||||
ATTR_SIZE_BYTES: backup.all_locations[loc].size_bytes,
|
||||
ATTR_PROTECTED: backup.all_locations[loc][ATTR_PROTECTED],
|
||||
ATTR_SIZE_BYTES: backup.all_locations[loc][ATTR_SIZE_BYTES],
|
||||
}
|
||||
for loc in backup.locations
|
||||
}
|
||||
|
||||
def _list_backups(self) -> list[dict[str, Any]]:
|
||||
"""Return list of backups using v2 field names (content["apps"])."""
|
||||
def _list_backups(self):
|
||||
"""Return list of backups."""
|
||||
return [
|
||||
{
|
||||
ATTR_SLUG: backup.slug,
|
||||
@@ -191,7 +178,7 @@ class APIBackups(CoreSysAttributes):
|
||||
ATTR_COMPRESSED: backup.compressed,
|
||||
ATTR_CONTENT: {
|
||||
ATTR_HOMEASSISTANT: backup.homeassistant_version is not None,
|
||||
ATTR_APPS: backup.app_list,
|
||||
ATTR_ADDONS: backup.addon_list,
|
||||
ATTR_FOLDERS: backup.folders,
|
||||
},
|
||||
}
|
||||
@@ -199,76 +186,25 @@ class APIBackups(CoreSysAttributes):
|
||||
if backup.location != LOCATION_CLOUD_BACKUP
|
||||
]
|
||||
|
||||
@staticmethod
|
||||
def _rename_apps_to_addons_in_backups(
|
||||
data_backups: list[dict[str, Any]],
|
||||
) -> list[dict[str, Any]]:
|
||||
"""Rename the content["apps"] key to content["addons"] for v1 responses."""
|
||||
for backup in data_backups:
|
||||
content = backup[ATTR_CONTENT]
|
||||
content[ATTR_ADDONS] = content.pop(ATTR_APPS)
|
||||
return data_backups
|
||||
@api_process
|
||||
async def list(self, request):
|
||||
"""Return backup list."""
|
||||
data_backups = self._list_backups()
|
||||
|
||||
def _backup_info_data(self, backup: Backup) -> dict[str, Any]:
|
||||
"""Return backup info dict using v2 field names (top-level "apps")."""
|
||||
data_apps = [
|
||||
{
|
||||
ATTR_SLUG: app_data[ATTR_SLUG],
|
||||
ATTR_NAME: app_data[ATTR_NAME],
|
||||
ATTR_VERSION: app_data[ATTR_VERSION],
|
||||
ATTR_SIZE: app_data[ATTR_SIZE],
|
||||
}
|
||||
for app_data in backup.apps
|
||||
]
|
||||
return {
|
||||
ATTR_SLUG: backup.slug,
|
||||
ATTR_TYPE: backup.sys_type,
|
||||
ATTR_NAME: backup.name,
|
||||
ATTR_DATE: backup.date,
|
||||
ATTR_SIZE: backup.size,
|
||||
ATTR_SIZE_BYTES: backup.size_bytes,
|
||||
ATTR_COMPRESSED: backup.compressed,
|
||||
ATTR_PROTECTED: backup.protected,
|
||||
ATTR_LOCATION_ATTRIBUTES: self._make_location_attributes(backup),
|
||||
ATTR_SUPERVISOR_VERSION: backup.supervisor_version,
|
||||
ATTR_HOMEASSISTANT: backup.homeassistant_version,
|
||||
ATTR_LOCATION: backup.location,
|
||||
ATTR_LOCATIONS: backup.locations,
|
||||
ATTR_APPS: data_apps,
|
||||
ATTR_REPOSITORIES: backup.repositories,
|
||||
ATTR_FOLDERS: backup.folders,
|
||||
ATTR_HOMEASSISTANT_EXCLUDE_DATABASE: backup.homeassistant_exclude_database,
|
||||
ATTR_EXTRA: backup.extra,
|
||||
}
|
||||
if request.path == "/snapshots":
|
||||
# Kept for backwards compatibility
|
||||
return {"snapshots": data_backups}
|
||||
|
||||
return {ATTR_BACKUPS: data_backups}
|
||||
|
||||
@api_process
|
||||
async def list_backups(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup list (v2: content uses "apps" key)."""
|
||||
return {ATTR_BACKUPS: self._list_backups()}
|
||||
|
||||
@api_process
|
||||
async def list_backups_v1(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup list (v1: content uses "addons" key)."""
|
||||
return {
|
||||
ATTR_BACKUPS: self._rename_apps_to_addons_in_backups(self._list_backups())
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup list and manager info (v2: content uses "apps" key)."""
|
||||
async def info(self, request):
|
||||
"""Return backup list and manager info."""
|
||||
return {
|
||||
ATTR_BACKUPS: self._list_backups(),
|
||||
ATTR_DAYS_UNTIL_STALE: self.sys_backups.days_until_stale,
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def info_v1(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup list and manager info (v1: content uses "addons" key)."""
|
||||
return {
|
||||
ATTR_BACKUPS: self._rename_apps_to_addons_in_backups(self._list_backups()),
|
||||
ATTR_DAYS_UNTIL_STALE: self.sys_backups.days_until_stale,
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def options(self, request):
|
||||
"""Set backup manager options."""
|
||||
@@ -280,29 +216,52 @@ class APIBackups(CoreSysAttributes):
|
||||
await self.sys_backups.save_data()
|
||||
|
||||
@api_process
|
||||
async def reload(self, _: web.Request) -> bool:
|
||||
async def reload(self, _):
|
||||
"""Reload backup list."""
|
||||
await asyncio.shield(self.sys_backups.reload())
|
||||
return True
|
||||
|
||||
@api_process
|
||||
async def backup_info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup info (v2: top-level "apps" key)."""
|
||||
async def backup_info(self, request):
|
||||
"""Return backup info."""
|
||||
backup = self._extract_slug(request)
|
||||
return self._backup_info_data(backup)
|
||||
|
||||
@api_process
|
||||
async def backup_info_v1(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return backup info (v1: top-level "addons" key)."""
|
||||
backup = self._extract_slug(request)
|
||||
data = self._backup_info_data(backup)
|
||||
data[ATTR_ADDONS] = data.pop(ATTR_APPS)
|
||||
return data
|
||||
data_addons = []
|
||||
for addon_data in backup.addons:
|
||||
data_addons.append(
|
||||
{
|
||||
ATTR_SLUG: addon_data[ATTR_SLUG],
|
||||
ATTR_NAME: addon_data[ATTR_NAME],
|
||||
ATTR_VERSION: addon_data[ATTR_VERSION],
|
||||
ATTR_SIZE: addon_data[ATTR_SIZE],
|
||||
}
|
||||
)
|
||||
|
||||
return {
|
||||
ATTR_SLUG: backup.slug,
|
||||
ATTR_TYPE: backup.sys_type,
|
||||
ATTR_NAME: backup.name,
|
||||
ATTR_DATE: backup.date,
|
||||
ATTR_SIZE: backup.size,
|
||||
ATTR_SIZE_BYTES: backup.size_bytes,
|
||||
ATTR_COMPRESSED: backup.compressed,
|
||||
ATTR_PROTECTED: backup.protected,
|
||||
ATTR_LOCATION_ATTRIBUTES: self._make_location_attributes(backup),
|
||||
ATTR_SUPERVISOR_VERSION: backup.supervisor_version,
|
||||
ATTR_HOMEASSISTANT: backup.homeassistant_version,
|
||||
ATTR_LOCATION: backup.location,
|
||||
ATTR_LOCATIONS: backup.locations,
|
||||
ATTR_ADDONS: data_addons,
|
||||
ATTR_REPOSITORIES: backup.repositories,
|
||||
ATTR_FOLDERS: backup.folders,
|
||||
ATTR_HOMEASSISTANT_EXCLUDE_DATABASE: backup.homeassistant_exclude_database,
|
||||
ATTR_EXTRA: backup.extra,
|
||||
}
|
||||
|
||||
def _location_to_mount(self, location: str | None) -> LOCATION_TYPE:
|
||||
"""Convert a single location to a mount if possible."""
|
||||
if not location or location == LOCATION_CLOUD_BACKUP:
|
||||
return cast(LOCATION_TYPE, location)
|
||||
return location
|
||||
|
||||
mount = self.sys_mounts.get(location)
|
||||
if mount.usage != MountUsage.BACKUP:
|
||||
@@ -331,19 +290,40 @@ class APIBackups(CoreSysAttributes):
|
||||
f"Location {LOCATION_CLOUD_BACKUP} is only available for Home Assistant"
|
||||
)
|
||||
|
||||
def _process_location_in_body(
|
||||
self, request: web.Request, body: dict[str, Any]
|
||||
) -> dict[str, Any]:
|
||||
"""Validate and convert location field in partial backup/restore body."""
|
||||
if ATTR_LOCATION not in body:
|
||||
return body
|
||||
location_names: list[str | None] = body.pop(ATTR_LOCATION)
|
||||
self._validate_cloud_backup_location(request, location_names)
|
||||
locations = [self._location_to_mount(loc) for loc in location_names]
|
||||
body[ATTR_LOCATION] = locations.pop(0)
|
||||
if locations:
|
||||
body[ATTR_ADDITIONAL_LOCATIONS] = locations
|
||||
return body
|
||||
async def _background_backup_task(
|
||||
self, backup_method: Callable, *args, **kwargs
|
||||
) -> tuple[asyncio.Task, str]:
|
||||
"""Start backup task in background and return task and job ID."""
|
||||
event = asyncio.Event()
|
||||
job, backup_task = self.sys_jobs.schedule_job(
|
||||
backup_method, JobSchedulerOptions(), *args, **kwargs
|
||||
)
|
||||
|
||||
async def release_on_freeze(new_state: CoreState):
|
||||
if new_state == CoreState.FREEZE:
|
||||
event.set()
|
||||
|
||||
# Wait for system to get into freeze state before returning
|
||||
# If the backup fails validation it will raise before getting there
|
||||
listener = self.sys_bus.register_event(
|
||||
BusEvent.SUPERVISOR_STATE_CHANGE, release_on_freeze
|
||||
)
|
||||
try:
|
||||
event_task = self.sys_create_task(event.wait())
|
||||
_, pending = await asyncio.wait(
|
||||
(
|
||||
backup_task,
|
||||
event_task,
|
||||
),
|
||||
return_when=asyncio.FIRST_COMPLETED,
|
||||
)
|
||||
# It seems backup returned early (error or something), make sure to cancel
|
||||
# the event task to avoid "Task was destroyed but it is pending!" errors.
|
||||
if event_task in pending:
|
||||
event_task.cancel()
|
||||
return (backup_task, job.uuid)
|
||||
finally:
|
||||
self.sys_bus.remove_listener(listener)
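The method above schedules the backup job and only returns once either the job finishes or the system reaches the FREEZE state. A minimal sketch of that wait-for-first-of pattern using plain asyncio, with no Supervisor internals:

```python
import asyncio


async def start_and_wait_for_signal(work, signal: asyncio.Event) -> asyncio.Task:
    """Run `work` in the background; return once it finishes OR `signal` fires."""
    work_task = asyncio.create_task(work())
    signal_task = asyncio.create_task(signal.wait())

    _done, pending = await asyncio.wait(
        {work_task, signal_task}, return_when=asyncio.FIRST_COMPLETED
    )
    # If the work finished (or failed) first, drop the signal waiter so it is
    # not destroyed while still pending.
    if signal_task in pending:
        signal_task.cancel()
    return work_task
```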
@api_process
|
||||
async def backup_full(self, request: web.Request):
|
||||
@@ -363,33 +343,14 @@ class APIBackups(CoreSysAttributes):
|
||||
body[ATTR_ADDITIONAL_LOCATIONS] = locations
|
||||
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
backup_task, job_id = await background_task(
|
||||
self, self.sys_backups.do_backup_full, **body
|
||||
backup_task, job_id = await self._background_backup_task(
|
||||
self.sys_backups.do_backup_full, **body
|
||||
)
|
||||
|
||||
if background and not backup_task.done():
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
|
||||
backup: Backup | None = await backup_task
|
||||
if backup:
|
||||
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
|
||||
raise APIError(
|
||||
f"An error occurred while making backup, check job '{job_id}' or supervisor logs for details",
|
||||
job_id=job_id,
|
||||
)
|
||||
|
||||
async def _do_backup_partial(
|
||||
self, body: dict[str, Any], background: bool
|
||||
) -> dict[str, Any]:
|
||||
"""Run backup_partial business logic. Expects body["apps"] (v2 key)."""
|
||||
backup_task, job_id = await background_task(
|
||||
self, self.sys_backups.do_backup_partial, **body
|
||||
)
|
||||
|
||||
if background and not backup_task.done():
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
|
||||
backup: Backup | None = await backup_task
|
||||
backup: Backup = await backup_task
|
||||
if backup:
|
||||
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
|
||||
raise APIError(
|
||||
@@ -399,31 +360,39 @@ class APIBackups(CoreSysAttributes):
|
||||
|
||||
@api_process
|
||||
async def backup_partial(self, request: web.Request):
|
||||
"""Create a partial backup (v2: accepts "apps" key in request body)."""
|
||||
"""Create a partial backup."""
|
||||
body = await api_validate(SCHEMA_BACKUP_PARTIAL, request)
|
||||
self._process_location_in_body(request, body)
|
||||
locations: list[LOCATION_TYPE] | None = None
|
||||
|
||||
if body.get(ATTR_APPS) == ALL_APPS_FLAG:
|
||||
body[ATTR_APPS] = list(self.sys_apps.local)
|
||||
if ATTR_LOCATION in body:
|
||||
location_names: list[str | None] = body.pop(ATTR_LOCATION)
|
||||
self._validate_cloud_backup_location(request, location_names)
|
||||
|
||||
locations = [
|
||||
self._location_to_mount(location) for location in location_names
|
||||
]
|
||||
body[ATTR_LOCATION] = locations.pop(0)
|
||||
if locations:
|
||||
body[ATTR_ADDITIONAL_LOCATIONS] = locations
|
||||
|
||||
if body.get(ATTR_ADDONS) == ALL_ADDONS_FLAG:
|
||||
body[ATTR_ADDONS] = list(self.sys_addons.local)
|
||||
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
return await self._do_backup_partial(body, background)
|
||||
backup_task, job_id = await self._background_backup_task(
|
||||
self.sys_backups.do_backup_partial, **body
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def backup_partial_v1(self, request: web.Request):
|
||||
"""Create a partial backup (v1: accepts "addons" key in request body)."""
|
||||
body = await api_validate(SCHEMA_BACKUP_PARTIAL_V1, request)
|
||||
self._process_location_in_body(request, body)
|
||||
if background and not backup_task.done():
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
|
||||
if body.get(ATTR_ADDONS) == ALL_APPS_FLAG:
|
||||
body[ATTR_ADDONS] = list(self.sys_apps.local)
|
||||
|
||||
# Rename "addons" → "apps" so _do_backup_partial receives the v2 key
|
||||
if ATTR_ADDONS in body:
|
||||
body[ATTR_APPS] = body.pop(ATTR_ADDONS)
|
||||
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
return await self._do_backup_partial(body, background)
|
||||
backup: Backup = await backup_task
|
||||
if backup:
|
||||
return {ATTR_JOB_ID: job_id, ATTR_SLUG: backup.slug}
|
||||
raise APIError(
|
||||
f"An error occurred while making backup, check job '{job_id}' or supervisor logs for details",
|
||||
job_id=job_id,
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def restore_full(self, request: web.Request):
|
||||
@@ -434,23 +403,8 @@ class APIBackups(CoreSysAttributes):
|
||||
request, body.get(ATTR_LOCATION, backup.location)
|
||||
)
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
restore_task, job_id = await background_task(
|
||||
self, self.sys_backups.do_restore_full, backup, **body
|
||||
)
|
||||
|
||||
if background and not restore_task.done() or await restore_task:
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
raise APIError(
|
||||
f"An error occurred during restore of {backup.slug}, check job '{job_id}' or supervisor logs for details",
|
||||
job_id=job_id,
|
||||
)
|
||||
|
||||
async def _do_restore_partial(
|
||||
self, backup: Backup, body: dict[str, Any], background: bool
|
||||
) -> dict[str, Any]:
|
||||
"""Run restore_partial business logic. Expects body["apps"] (v2 key)."""
|
||||
restore_task, job_id = await background_task(
|
||||
self, self.sys_backups.do_restore_partial, backup, **body
|
||||
restore_task, job_id = await self._background_backup_task(
|
||||
self.sys_backups.do_restore_full, backup, **body
|
||||
)
|
||||
|
||||
if background and not restore_task.done() or await restore_task:
|
||||
@@ -462,30 +416,23 @@ class APIBackups(CoreSysAttributes):
|
||||
|
||||
@api_process
|
||||
async def restore_partial(self, request: web.Request):
|
||||
"""Partial restore a backup (v2: accepts "apps" key in request body)."""
|
||||
"""Partial restore a backup."""
|
||||
backup = self._extract_slug(request)
|
||||
body = await api_validate(SCHEMA_RESTORE_PARTIAL, request)
|
||||
self._validate_cloud_backup_location(
|
||||
request, body.get(ATTR_LOCATION, backup.location)
|
||||
)
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
return await self._do_restore_partial(backup, body, background)
|
||||
|
||||
@api_process
|
||||
async def restore_partial_v1(self, request: web.Request):
|
||||
"""Partial restore a backup (v1: accepts "addons" key in request body)."""
|
||||
backup = self._extract_slug(request)
|
||||
body = await api_validate(SCHEMA_RESTORE_PARTIAL_V1, request)
|
||||
self._validate_cloud_backup_location(
|
||||
request, body.get(ATTR_LOCATION, backup.location)
|
||||
restore_task, job_id = await self._background_backup_task(
|
||||
self.sys_backups.do_restore_partial, backup, **body
|
||||
)
|
||||
background = body.pop(ATTR_BACKGROUND)
|
||||
|
||||
# Rename "addons" → "apps" so _do_restore_partial receives the v2 key
|
||||
if ATTR_ADDONS in body:
|
||||
body[ATTR_APPS] = body.pop(ATTR_ADDONS)
|
||||
|
||||
return await self._do_restore_partial(backup, body, background)
|
||||
if background and not restore_task.done() or await restore_task:
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
raise APIError(
|
||||
f"An error occurred during restore of {backup.slug}, check job '{job_id}' or supervisor logs for details",
|
||||
job_id=job_id,
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def freeze(self, request: web.Request):
|
||||
@@ -514,7 +461,7 @@ class APIBackups(CoreSysAttributes):
|
||||
await self.sys_backups.remove(backup, locations=locations)
|
||||
|
||||
@api_process
|
||||
async def download(self, request: web.Request) -> web.StreamResponse:
|
||||
async def download(self, request: web.Request):
|
||||
"""Download a backup file."""
|
||||
backup = self._extract_slug(request)
|
||||
# Query will give us '' for /backups, convert value to None
|
||||
@@ -526,9 +473,9 @@ class APIBackups(CoreSysAttributes):
|
||||
raise APIError(f"Backup {backup.slug} is not in location {location}")
|
||||
|
||||
_LOGGER.info("Downloading backup %s", backup.slug)
|
||||
filename = backup.all_locations[location].path
|
||||
filename = backup.all_locations[location][ATTR_PATH]
|
||||
# If the file is missing, return 404 and trigger reload of location
|
||||
if not await self.sys_run_in_executor(filename.is_file):
|
||||
if not filename.is_file():
|
||||
self.sys_create_task(self.sys_backups.reload(location))
|
||||
return web.Response(status=404)
|
||||
|
||||
@@ -544,16 +491,14 @@ class APIBackups(CoreSysAttributes):
|
||||
return response
|
||||
|
||||
@api_process
|
||||
async def upload(self, request: web.Request) -> dict[str, str] | bool:
|
||||
async def upload(self, request: web.Request):
|
||||
"""Upload a backup file."""
|
||||
location: LOCATION_TYPE = None
|
||||
locations: list[LOCATION_TYPE] | None = None
|
||||
|
||||
tmp_path = self.sys_config.path_tmp
|
||||
if ATTR_LOCATION in request.query:
|
||||
location_names: list[str] = request.query.getall(ATTR_LOCATION, [])
|
||||
self._validate_cloud_backup_location(
|
||||
request, cast(list[str | None], location_names)
|
||||
)
|
||||
location_names: list[str] = request.query.getall(ATTR_LOCATION)
|
||||
self._validate_cloud_backup_location(request, location_names)
|
||||
# Convert empty string to None if necessary
|
||||
locations = [
|
||||
self._location_to_mount(location)
|
||||
@@ -563,6 +508,9 @@ class APIBackups(CoreSysAttributes):
|
||||
]
|
||||
location = locations.pop(0)
|
||||
|
||||
if location and location != LOCATION_CLOUD_BACKUP:
|
||||
tmp_path = location.local_where
|
||||
|
||||
filename: str | None = None
|
||||
if ATTR_FILENAME in request.query:
|
||||
filename = request.query.get(ATTR_FILENAME)
|
||||
@@ -571,21 +519,18 @@ class APIBackups(CoreSysAttributes):
|
||||
except vol.Invalid as ex:
|
||||
raise APIError(humanize_error(filename, ex)) from None
|
||||
|
||||
tmp_path = await self.sys_backups.get_upload_path_for_location(location)
|
||||
temp_dir: TemporaryDirectory | None = None
|
||||
backup_file_stream: BufferedWriter | None = None
|
||||
backup_file_stream: IOBase | None = None
|
||||
|
||||
def open_backup_file() -> tuple[Path, BufferedWriter]:
|
||||
def open_backup_file() -> Path:
|
||||
nonlocal temp_dir, backup_file_stream
|
||||
temp_dir = TemporaryDirectory(dir=tmp_path.as_posix())
|
||||
tar_file = Path(temp_dir.name, "upload.tar")
|
||||
tar_file = Path(temp_dir.name, "backup.tar")
|
||||
backup_file_stream = tar_file.open("wb")
|
||||
return (tar_file, backup_file_stream)
|
||||
return tar_file
|
||||
|
||||
def close_backup_file() -> None:
|
||||
if backup_file_stream:
|
||||
# Make sure it got closed, in case of exception. It is safe to
|
||||
# close the file stream twice.
|
||||
backup_file_stream.close()
|
||||
if temp_dir:
|
||||
temp_dir.cleanup()
|
||||
@@ -593,13 +538,9 @@ class APIBackups(CoreSysAttributes):
|
||||
try:
|
||||
reader = await request.multipart()
|
||||
contents = await reader.next()
|
||||
if not isinstance(contents, BodyPartReader):
|
||||
raise APIError("Improperly formatted upload, could not read backup")
|
||||
|
||||
tar_file, backup_writer = await self.sys_run_in_executor(open_backup_file)
|
||||
while chunk := await contents.read_chunk(size=DEFAULT_CHUNK_SIZE):
|
||||
await self.sys_run_in_executor(backup_writer.write, chunk)
|
||||
await self.sys_run_in_executor(backup_writer.close)
|
||||
tar_file = await self.sys_run_in_executor(open_backup_file)
|
||||
while chunk := await contents.read_chunk(size=2**16):
|
||||
await self.sys_run_in_executor(backup_file_stream.write, chunk)
|
||||
|
||||
backup = await asyncio.shield(
|
||||
self.sys_backups.import_backup(
|
||||
@@ -610,8 +551,11 @@ class APIBackups(CoreSysAttributes):
|
||||
)
|
||||
)
|
||||
except OSError as err:
|
||||
if location in {LOCATION_CLOUD_BACKUP, None}:
|
||||
self.sys_resolution.check_oserror(err)
|
||||
if err.errno == errno.EBADMSG and location in {
|
||||
LOCATION_CLOUD_BACKUP,
|
||||
None,
|
||||
}:
|
||||
self.sys_resolution.unhealthy = UnhealthyReason.OSERROR_BAD_MESSAGE
|
||||
_LOGGER.error("Can't write new backup file: %s", err)
|
||||
return False
|
||||
|
||||
@@ -619,7 +563,8 @@ class APIBackups(CoreSysAttributes):
|
||||
return False
|
||||
|
||||
finally:
|
||||
await self.sys_run_in_executor(close_backup_file)
|
||||
if temp_dir or backup:
|
||||
await self.sys_run_in_executor(close_backup_file)
|
||||
|
||||
if backup:
|
||||
return {ATTR_SLUG: backup.slug}
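The upload handler streams the multipart body to disk in chunks, pushing every blocking file operation to an executor. A simplified sketch of that pattern, assuming a 64 KiB chunk size stands in for `DEFAULT_CHUNK_SIZE`:

```python
# Hedged sketch of the chunked-upload pattern; not the Supervisor implementation.
import asyncio
from pathlib import Path

from aiohttp import BodyPartReader

DEFAULT_CHUNK_SIZE = 64 * 1024  # assumed value


async def write_part_to_file(part: BodyPartReader, target: Path) -> None:
    loop = asyncio.get_running_loop()
    # Open and write in the executor so the event loop never blocks on disk I/O.
    stream = await loop.run_in_executor(None, target.open, "wb")
    try:
        while chunk := await part.read_chunk(size=DEFAULT_CHUNK_SIZE):
            await loop.run_in_executor(None, stream.write, chunk)
    finally:
        await loop.run_in_executor(None, stream.close)
```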
@@ -49,7 +49,6 @@ ATTR_LLMNR_HOSTNAME = "llmnr_hostname"
|
||||
ATTR_LOCAL_ONLY = "local_only"
|
||||
ATTR_LOCATION_ATTRIBUTES = "location_attributes"
|
||||
ATTR_LOCATIONS = "locations"
|
||||
ATTR_MAX_DEPTH = "max_depth"
|
||||
ATTR_MDNS = "mdns"
|
||||
ATTR_MODEL = "model"
|
||||
ATTR_MOUNTS = "mounts"
|
||||
@@ -81,11 +80,3 @@ class BootSlot(StrEnum):
|
||||
|
||||
A = "A"
|
||||
B = "B"
|
||||
|
||||
|
||||
class DetectBlockingIO(StrEnum):
|
||||
"""Enable/Disable detection for blocking I/O in event loop."""
|
||||
|
||||
OFF = "off"
|
||||
ON = "on"
|
||||
ON_AT_STARTUP = "on-at-startup"
|
||||
|
||||
@@ -1,24 +1,21 @@
|
||||
"""Init file for Supervisor network RESTful API."""
|
||||
|
||||
import logging
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import web
|
||||
import voluptuous as vol
|
||||
|
||||
from ..addons.addon import App
|
||||
from ..addons.addon import Addon
|
||||
from ..const import (
|
||||
ATTR_APP,
|
||||
ATTR_ADDON,
|
||||
ATTR_CONFIG,
|
||||
ATTR_DISCOVERY,
|
||||
ATTR_SERVICE,
|
||||
ATTR_SERVICES,
|
||||
ATTR_UUID,
|
||||
REQUEST_FROM,
|
||||
AppState,
|
||||
AddonState,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..discovery import Message
|
||||
from ..exceptions import APIForbidden, APINotFound
|
||||
from .utils import api_process, api_validate, require_home_assistant
|
||||
|
||||
@@ -35,86 +32,83 @@ SCHEMA_DISCOVERY = vol.Schema(
|
||||
class APIDiscovery(CoreSysAttributes):
|
||||
"""Handle RESTful API for discovery functions."""
|
||||
|
||||
def _extract_message(self, request: web.Request) -> Message:
|
||||
def _extract_message(self, request):
|
||||
"""Extract discovery message from URL."""
|
||||
message = self.sys_discovery.get(request.match_info["uuid"])
|
||||
message = self.sys_discovery.get(request.match_info.get("uuid"))
|
||||
if not message:
|
||||
raise APINotFound("Discovery message not found")
|
||||
return message
|
||||
|
||||
@api_process
|
||||
@require_home_assistant
|
||||
async def list_discovery(self, request: web.Request) -> dict[str, Any]:
|
||||
async def list(self, request):
|
||||
"""Show registered and available services."""
|
||||
# Get available discovery
|
||||
discovery = [
|
||||
{
|
||||
ATTR_APP: message.addon,
|
||||
ATTR_ADDON: message.addon,
|
||||
ATTR_SERVICE: message.service,
|
||||
ATTR_UUID: message.uuid,
|
||||
ATTR_CONFIG: message.config,
|
||||
}
|
||||
for message in self.sys_discovery.list_messages
|
||||
if (
|
||||
discovered := self.sys_apps.get_local_only(
|
||||
message.addon,
|
||||
)
|
||||
)
|
||||
and discovered.state == AppState.STARTED
|
||||
if (addon := self.sys_addons.get(message.addon, local_only=True))
|
||||
and addon.state == AddonState.STARTED
|
||||
]
|
||||
|
||||
# Get available services/apps
|
||||
services: dict[str, list[str]] = {}
|
||||
for app in self.sys_apps.all:
|
||||
for name in app.discovery:
|
||||
services.setdefault(name, []).append(app.slug)
|
||||
# Get available services/add-ons
|
||||
services = {}
|
||||
for addon in self.sys_addons.all:
|
||||
for name in addon.discovery:
|
||||
services.setdefault(name, []).append(addon.slug)
|
||||
|
||||
return {ATTR_DISCOVERY: discovery, ATTR_SERVICES: services}
|
||||
|
||||
@api_process
|
||||
async def set_discovery(self, request: web.Request) -> dict[str, str]:
|
||||
async def set_discovery(self, request):
|
||||
"""Write data into a discovery pipeline."""
|
||||
body = await api_validate(SCHEMA_DISCOVERY, request)
|
||||
app: App = request[REQUEST_FROM]
|
||||
addon: Addon = request[REQUEST_FROM]
|
||||
service = body[ATTR_SERVICE]
|
||||
|
||||
# Access?
|
||||
if body[ATTR_SERVICE] not in app.discovery:
|
||||
if body[ATTR_SERVICE] not in addon.discovery:
|
||||
_LOGGER.error(
|
||||
"App %s attempted to send discovery for service %s which is not listed in its config. Please report this to the maintainer of the app",
|
||||
app.name,
|
||||
"Add-on %s attempted to send discovery for service %s which is not listed in its config. Please report this to the maintainer of the add-on",
|
||||
addon.name,
|
||||
service,
|
||||
)
|
||||
raise APIForbidden(
|
||||
"Apps must list services they provide via discovery in their config!"
|
||||
"Add-ons must list services they provide via discovery in their config!"
|
||||
)
|
||||
|
||||
# Process discovery message
|
||||
message = await self.sys_discovery.send(app, **body)
|
||||
message = await self.sys_discovery.send(addon, **body)
|
||||
|
||||
return {ATTR_UUID: message.uuid}
|
||||
|
||||
@api_process
|
||||
@require_home_assistant
|
||||
async def get_discovery(self, request: web.Request) -> dict[str, Any]:
|
||||
async def get_discovery(self, request):
|
||||
"""Read data into a discovery message."""
|
||||
message = self._extract_message(request)
|
||||
|
||||
return {
|
||||
ATTR_APP: message.addon,
|
||||
ATTR_ADDON: message.addon,
|
||||
ATTR_SERVICE: message.service,
|
||||
ATTR_UUID: message.uuid,
|
||||
ATTR_CONFIG: message.config,
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def del_discovery(self, request: web.Request) -> None:
|
||||
async def del_discovery(self, request):
|
||||
"""Delete data into a discovery message."""
|
||||
message = self._extract_message(request)
|
||||
app = request[REQUEST_FROM]
|
||||
addon = request[REQUEST_FROM]
|
||||
|
||||
# Permission
|
||||
if message.addon != app.slug:
|
||||
if message.addon != addon.slug:
|
||||
raise APIForbidden("Can't remove discovery message")
|
||||
|
||||
await self.sys_discovery.remove(message)
|
||||
return True
|
||||
|
||||
@@ -4,24 +4,19 @@ import logging
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import web
|
||||
from awesomeversion import AwesomeVersion
|
||||
import voluptuous as vol
|
||||
|
||||
from ..const import (
|
||||
ATTR_ENABLE_IPV6,
|
||||
ATTR_HOSTNAME,
|
||||
ATTR_LOGGING,
|
||||
ATTR_MTU,
|
||||
ATTR_PASSWORD,
|
||||
ATTR_REGISTRIES,
|
||||
ATTR_STORAGE,
|
||||
ATTR_STORAGE_DRIVER,
|
||||
ATTR_USERNAME,
|
||||
ATTR_VERSION,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APINotFound
|
||||
from ..resolution.const import ContextType, IssueType, SuggestionType
|
||||
from .utils import api_process, api_validate
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
@@ -35,71 +30,10 @@ SCHEMA_DOCKER_REGISTRY = vol.Schema(
|
||||
}
|
||||
)
|
||||
|
||||
# pylint: disable=no-value-for-parameter
|
||||
SCHEMA_OPTIONS = vol.Schema(
|
||||
{
|
||||
vol.Optional(ATTR_ENABLE_IPV6): vol.Maybe(vol.Boolean()),
|
||||
vol.Optional(ATTR_MTU): vol.Maybe(vol.All(int, vol.Range(min=68, max=65535))),
|
||||
}
|
||||
)
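A short illustration of what `SCHEMA_OPTIONS` above accepts, assuming `ATTR_ENABLE_IPV6` and `ATTR_MTU` are the strings `"enable_ipv6"` and `"mtu"`:

```python
import voluptuous as vol

SCHEMA_OPTIONS({"enable_ipv6": True, "mtu": 1492})  # accepted
SCHEMA_OPTIONS({"enable_ipv6": None})               # accepted: vol.Maybe allows None
try:
    SCHEMA_OPTIONS({"mtu": 40})                     # rejected: MTU must be 68-65535
except vol.Invalid as err:
    print(err)
```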
SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER = vol.Schema(
|
||||
{
|
||||
vol.Required(ATTR_STORAGE_DRIVER): vol.In(["overlayfs"]),
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
class APIDocker(CoreSysAttributes):
|
||||
"""Handle RESTful API for Docker configuration."""
|
||||
|
||||
@api_process
|
||||
async def info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Get docker info."""
|
||||
data_registries = {}
|
||||
for hostname, registry in self.sys_docker.config.registries.items():
|
||||
data_registries[hostname] = {
|
||||
ATTR_USERNAME: registry[ATTR_USERNAME],
|
||||
}
|
||||
return {
|
||||
ATTR_VERSION: self.sys_docker.info.version,
|
||||
ATTR_ENABLE_IPV6: self.sys_docker.config.enable_ipv6,
|
||||
ATTR_MTU: self.sys_docker.config.mtu,
|
||||
ATTR_STORAGE: self.sys_docker.info.storage,
|
||||
ATTR_LOGGING: self.sys_docker.info.logging,
|
||||
ATTR_REGISTRIES: data_registries,
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def options(self, request: web.Request) -> None:
|
||||
"""Set docker options."""
|
||||
body = await api_validate(SCHEMA_OPTIONS, request)
|
||||
|
||||
reboot_required = False
|
||||
|
||||
if (
|
||||
ATTR_ENABLE_IPV6 in body
|
||||
and self.sys_docker.config.enable_ipv6 != body[ATTR_ENABLE_IPV6]
|
||||
):
|
||||
self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]
|
||||
reboot_required = True
|
||||
|
||||
if ATTR_MTU in body and self.sys_docker.config.mtu != body[ATTR_MTU]:
|
||||
self.sys_docker.config.mtu = body[ATTR_MTU]
|
||||
reboot_required = True
|
||||
|
||||
if reboot_required:
|
||||
_LOGGER.info(
|
||||
"Host system reboot required to apply Docker configuration changes"
|
||||
)
|
||||
self.sys_resolution.create_issue(
|
||||
IssueType.REBOOT_REQUIRED,
|
||||
ContextType.SYSTEM,
|
||||
suggestions=[SuggestionType.EXECUTE_REBOOT],
|
||||
)
|
||||
|
||||
await self.sys_docker.config.save_data()
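The options handler above only flags a reboot when a setting actually changes. A rough sketch of that compare-before-assign pattern, with assumed names instead of the Supervisor's objects:

```
from dataclasses import dataclass


@dataclass
class DockerConfig:
    enable_ipv6: bool | None = None
    mtu: int | None = None


def apply_options(config: DockerConfig, body: dict) -> bool:
    """Apply new options; return True if a host reboot is needed."""
    reboot_required = False

    if "enable_ipv6" in body and config.enable_ipv6 != body["enable_ipv6"]:
        config.enable_ipv6 = body["enable_ipv6"]
        reboot_required = True

    if "mtu" in body and config.mtu != body["mtu"]:
        config.mtu = body["mtu"]
        reboot_required = True

    return reboot_required


config = DockerConfig(enable_ipv6=False, mtu=1500)
print(apply_options(config, {"mtu": 1500}))  # False: value unchanged, no reboot
print(apply_options(config, {"mtu": 1400}))  # True: MTU changed
```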
@api_process
|
||||
async def registries(self, request) -> dict[str, Any]:
|
||||
"""Return the list of registries."""
|
||||
@@ -112,7 +46,7 @@ class APIDocker(CoreSysAttributes):
|
||||
return {ATTR_REGISTRIES: data_registries}
|
||||
|
||||
@api_process
|
||||
async def create_registry(self, request: web.Request) -> None:
|
||||
async def create_registry(self, request: web.Request):
|
||||
"""Create a new docker registry."""
|
||||
body = await api_validate(SCHEMA_DOCKER_REGISTRY, request)
|
||||
|
||||
@@ -122,7 +56,7 @@ class APIDocker(CoreSysAttributes):
|
||||
await self.sys_docker.config.save_data()
|
||||
|
||||
@api_process
|
||||
async def remove_registry(self, request: web.Request) -> None:
|
||||
async def remove_registry(self, request: web.Request):
|
||||
"""Delete a docker registry."""
|
||||
hostname = request.match_info.get(ATTR_HOSTNAME)
|
||||
if hostname not in self.sys_docker.config.registries:
|
||||
@@ -132,25 +66,16 @@ class APIDocker(CoreSysAttributes):
|
||||
await self.sys_docker.config.save_data()
|
||||
|
||||
@api_process
|
||||
async def migrate_docker_storage_driver(self, request: web.Request) -> None:
|
||||
"""Migrate Docker storage driver."""
|
||||
if (
|
||||
not self.coresys.os.available
|
||||
or not self.coresys.os.version
|
||||
or self.coresys.os.version < AwesomeVersion("17.0.dev0")
|
||||
):
|
||||
raise APINotFound(
|
||||
"Home Assistant OS 17.0 or newer required for Docker storage driver migration"
|
||||
)
|
||||
|
||||
body = await api_validate(SCHEMA_MIGRATE_DOCKER_STORAGE_DRIVER, request)
|
||||
await self.sys_dbus.agent.system.migrate_docker_storage_driver(
|
||||
body[ATTR_STORAGE_DRIVER]
|
||||
)
|
||||
|
||||
_LOGGER.info("Host system reboot required to apply Docker storage migration")
|
||||
self.sys_resolution.create_issue(
|
||||
IssueType.REBOOT_REQUIRED,
|
||||
ContextType.SYSTEM,
|
||||
suggestions=[SuggestionType.EXECUTE_REBOOT],
|
||||
)
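The migration endpoint is gated on the OS version. A hedged sketch of the same kind of gate using the awesomeversion package; the cut-off string mirrors the one above, while the helper name is invented:

```
from awesomeversion import AwesomeVersion

REQUIRED_OS = AwesomeVersion("17.0.dev0")


def storage_migration_supported(os_version: str | None) -> bool:
    """Return True when the detected OS version is new enough."""
    if not os_version:
        return False  # no OS detected at all
    return AwesomeVersion(os_version) >= REQUIRED_OS


print(storage_migration_supported("16.2"))  # False
print(storage_migration_supported("17.1"))  # True
print(storage_migration_supported(None))    # False
```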
async def info(self, request: web.Request):
|
||||
"""Get docker info."""
|
||||
data_registries = {}
|
||||
for hostname, registry in self.sys_docker.config.registries.items():
|
||||
data_registries[hostname] = {
|
||||
ATTR_USERNAME: registry[ATTR_USERNAME],
|
||||
}
|
||||
return {
|
||||
ATTR_VERSION: self.sys_docker.info.version,
|
||||
ATTR_STORAGE: self.sys_docker.info.storage,
|
||||
ATTR_LOGGING: self.sys_docker.info.logging,
|
||||
ATTR_REGISTRIES: data_registries,
|
||||
}
|
||||
|
||||
@@ -68,10 +68,7 @@ def filesystem_struct(fs_block: UDisks2Block) -> dict[str, Any]:
|
||||
ATTR_NAME: fs_block.id_label,
|
||||
ATTR_SYSTEM: fs_block.hint_system,
|
||||
ATTR_MOUNT_POINTS: [
|
||||
str(mount_point)
|
||||
for mount_point in (
|
||||
fs_block.filesystem.mount_points if fs_block.filesystem else []
|
||||
)
|
||||
str(mount_point) for mount_point in fs_block.filesystem.mount_points
|
||||
],
|
||||
}
|
||||
|
||||
|
||||
@@ -18,10 +18,8 @@ from ..const import (
|
||||
ATTR_BLK_WRITE,
|
||||
ATTR_BOOT,
|
||||
ATTR_CPU_PERCENT,
|
||||
ATTR_DUPLICATE_LOG_FILE,
|
||||
ATTR_IMAGE,
|
||||
ATTR_IP_ADDRESS,
|
||||
ATTR_JOB_ID,
|
||||
ATTR_MACHINE,
|
||||
ATTR_MEMORY_LIMIT,
|
||||
ATTR_MEMORY_PERCENT,
|
||||
@@ -39,8 +37,8 @@ from ..const import (
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APIDBMigrationInProgress, APIError
|
||||
from ..validate import docker_image, network_port, version_tag
|
||||
from .const import ATTR_BACKGROUND, ATTR_FORCE, ATTR_SAFE_MODE
|
||||
from .utils import api_process, api_validate, background_task
|
||||
from .const import ATTR_FORCE, ATTR_SAFE_MODE
|
||||
from .utils import api_process, api_validate
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -56,7 +54,6 @@ SCHEMA_OPTIONS = vol.Schema(
|
||||
vol.Optional(ATTR_AUDIO_OUTPUT): vol.Maybe(str),
|
||||
vol.Optional(ATTR_AUDIO_INPUT): vol.Maybe(str),
|
||||
vol.Optional(ATTR_BACKUPS_EXCLUDE_DATABASE): vol.Boolean(),
|
||||
vol.Optional(ATTR_DUPLICATE_LOG_FILE): vol.Boolean(),
|
||||
}
|
||||
)
|
||||
|
||||
@@ -64,7 +61,6 @@ SCHEMA_UPDATE = vol.Schema(
|
||||
{
|
||||
vol.Optional(ATTR_VERSION): version_tag,
|
||||
vol.Optional(ATTR_BACKUP): bool,
|
||||
vol.Optional(ATTR_BACKGROUND, default=False): bool,
|
||||
}
|
||||
)
|
||||
|
||||
@@ -114,7 +110,6 @@ class APIHomeAssistant(CoreSysAttributes):
|
||||
ATTR_AUDIO_INPUT: self.sys_homeassistant.audio_input,
|
||||
ATTR_AUDIO_OUTPUT: self.sys_homeassistant.audio_output,
|
||||
ATTR_BACKUPS_EXCLUDE_DATABASE: self.sys_homeassistant.backups_exclude_database,
|
||||
ATTR_DUPLICATE_LOG_FILE: self.sys_homeassistant.duplicate_log_file,
|
||||
}
|
||||
|
||||
@api_process
|
||||
@@ -123,7 +118,7 @@ class APIHomeAssistant(CoreSysAttributes):
|
||||
body = await api_validate(SCHEMA_OPTIONS, request)
|
||||
|
||||
if ATTR_IMAGE in body:
|
||||
self.sys_homeassistant.set_image(body[ATTR_IMAGE])
|
||||
self.sys_homeassistant.image = body[ATTR_IMAGE]
|
||||
self.sys_homeassistant.override_image = (
|
||||
self.sys_homeassistant.image != self.sys_homeassistant.default_image
|
||||
)
|
||||
@@ -154,13 +149,10 @@ class APIHomeAssistant(CoreSysAttributes):
|
||||
ATTR_BACKUPS_EXCLUDE_DATABASE
|
||||
]
|
||||
|
||||
if ATTR_DUPLICATE_LOG_FILE in body:
|
||||
self.sys_homeassistant.duplicate_log_file = body[ATTR_DUPLICATE_LOG_FILE]
|
||||
|
||||
await self.sys_homeassistant.save_data()
|
||||
|
||||
@api_process
|
||||
async def stats(self, request: web.Request) -> dict[str, Any]:
|
||||
async def stats(self, request: web.Request) -> dict[Any, str]:
|
||||
"""Return resource information."""
|
||||
stats = await self.sys_homeassistant.core.stats()
|
||||
if not stats:
|
||||
@@ -178,26 +170,20 @@ class APIHomeAssistant(CoreSysAttributes):
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def update(self, request: web.Request) -> dict[str, str] | None:
|
||||
async def update(self, request: web.Request) -> None:
|
||||
"""Update Home Assistant."""
|
||||
body = await api_validate(SCHEMA_UPDATE, request)
|
||||
await self._check_offline_migration()
|
||||
|
||||
background = body[ATTR_BACKGROUND]
|
||||
update_task, job_id = await background_task(
|
||||
self,
|
||||
self.sys_homeassistant.core.update,
|
||||
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
|
||||
backup=body.get(ATTR_BACKUP),
|
||||
await asyncio.shield(
|
||||
self.sys_homeassistant.core.update(
|
||||
version=body.get(ATTR_VERSION, self.sys_homeassistant.latest_version),
|
||||
backup=body.get(ATTR_BACKUP),
|
||||
)
|
||||
)
|
||||
|
||||
if background and not update_task.done():
|
||||
return {ATTR_JOB_ID: job_id}
|
||||
|
||||
return await update_task
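The update handler starts the work as a task and hands back a job id when the caller asked for background mode and the task has not finished yet. A simplified, self-contained sketch of that pattern; the names here are illustrative and not the Supervisor's `background_task` helper:

```
import asyncio
import uuid


async def long_update() -> str:
    await asyncio.sleep(0.2)
    return "updated"


async def handle_update(background: bool):
    update_task = asyncio.create_task(long_update())
    job_id = uuid.uuid4().hex

    # Yield once so a trivially fast update can finish before we decide.
    await asyncio.sleep(0)

    if background and not update_task.done():
        return {"job_id": job_id}
    return await update_task


async def demo() -> None:
    print(await handle_update(background=True))   # {'job_id': '...'}
    print(await handle_update(background=False))  # updated
    await asyncio.sleep(0.3)  # let the backgrounded update finish cleanly


asyncio.run(demo())
```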
@api_process
|
||||
async def stop(self, request: web.Request) -> None:
|
||||
async def stop(self, request: web.Request) -> Awaitable[None]:
|
||||
"""Stop Home Assistant."""
|
||||
body = await api_validate(SCHEMA_STOP, request)
|
||||
await self._check_offline_migration(force=body[ATTR_FORCE])
|
||||
|
||||
@@ -1,19 +1,10 @@
|
||||
"""Init file for Supervisor host RESTful API."""
|
||||
|
||||
import asyncio
|
||||
from collections.abc import Awaitable
|
||||
from contextlib import suppress
|
||||
import json
|
||||
import logging
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import (
|
||||
ClientConnectionResetError,
|
||||
ClientError,
|
||||
ClientPayloadError,
|
||||
ClientTimeout,
|
||||
web,
|
||||
)
|
||||
from aiohttp import ClientConnectionResetError, web
|
||||
from aiohttp.hdrs import ACCEPT, RANGE
|
||||
import voluptuous as vol
|
||||
from voluptuous.error import CoerceInvalid
|
||||
@@ -45,7 +36,6 @@ from ..host.const import (
|
||||
LogFormat,
|
||||
LogFormatter,
|
||||
)
|
||||
from ..host.logs import SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX
|
||||
from ..utils.systemd_journal import journal_logs_reader
|
||||
from .const import (
|
||||
ATTR_AGENT_VERSION,
|
||||
@@ -59,7 +49,6 @@ from .const import (
|
||||
ATTR_FORCE,
|
||||
ATTR_IDENTIFIERS,
|
||||
ATTR_LLMNR_HOSTNAME,
|
||||
ATTR_MAX_DEPTH,
|
||||
ATTR_STARTUP_TIME,
|
||||
ATTR_USE_NTP,
|
||||
ATTR_VIRTUALIZATION,
|
||||
@@ -100,7 +89,7 @@ class APIHost(CoreSysAttributes):
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def info(self, request: web.Request) -> dict[str, Any]:
|
||||
async def info(self, request):
|
||||
"""Return host information."""
|
||||
return {
|
||||
ATTR_AGENT_VERSION: self.sys_dbus.agent.version,
|
||||
@@ -129,7 +118,7 @@ class APIHost(CoreSysAttributes):
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def options(self, request: web.Request) -> None:
|
||||
async def options(self, request):
|
||||
"""Edit host settings."""
|
||||
body = await api_validate(SCHEMA_OPTIONS, request)
|
||||
|
||||
@@ -140,7 +129,7 @@ class APIHost(CoreSysAttributes):
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def reboot(self, request: web.Request) -> None:
|
||||
async def reboot(self, request):
|
||||
"""Reboot host."""
|
||||
body = await api_validate(SCHEMA_SHUTDOWN, request)
|
||||
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
|
||||
@@ -148,7 +137,7 @@ class APIHost(CoreSysAttributes):
|
||||
return await asyncio.shield(self.sys_host.control.reboot())
|
||||
|
||||
@api_process
|
||||
async def shutdown(self, request: web.Request) -> None:
|
||||
async def shutdown(self, request):
|
||||
"""Poweroff host."""
|
||||
body = await api_validate(SCHEMA_SHUTDOWN, request)
|
||||
await self._check_ha_offline_migration(force=body[ATTR_FORCE])
|
||||
@@ -156,12 +145,12 @@ class APIHost(CoreSysAttributes):
|
||||
return await asyncio.shield(self.sys_host.control.shutdown())
|
||||
|
||||
@api_process
|
||||
def reload(self, request: web.Request) -> Awaitable[None]:
|
||||
def reload(self, request):
|
||||
"""Reload host data."""
|
||||
return asyncio.shield(self.sys_host.reload())
|
||||
|
||||
@api_process
|
||||
async def services(self, request: web.Request) -> dict[str, Any]:
|
||||
async def services(self, request):
|
||||
"""Return list of available services."""
|
||||
services = []
|
||||
for unit in self.sys_host.services:
|
||||
@@ -176,7 +165,7 @@ class APIHost(CoreSysAttributes):
|
||||
return {ATTR_SERVICES: services}
|
||||
|
||||
@api_process
|
||||
async def list_boots(self, _: web.Request) -> dict[str, Any]:
|
||||
async def list_boots(self, _: web.Request):
|
||||
"""Return a list of boot IDs."""
|
||||
boot_ids = await self.sys_host.logs.get_boot_ids()
|
||||
return {
|
||||
@@ -187,7 +176,7 @@ class APIHost(CoreSysAttributes):
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def list_identifiers(self, _: web.Request) -> dict[str, list[str]]:
|
||||
async def list_identifiers(self, _: web.Request):
|
||||
"""Return a list of syslog identifiers."""
|
||||
return {ATTR_IDENTIFIERS: await self.sys_host.logs.get_identifiers()}
|
||||
|
||||
@@ -202,46 +191,28 @@ class APIHost(CoreSysAttributes):
|
||||
return possible_offset
|
||||
|
||||
async def advanced_logs_handler(
|
||||
self,
|
||||
request: web.Request,
|
||||
identifier: str | None = None,
|
||||
follow: bool = False,
|
||||
latest: bool = False,
|
||||
no_colors: bool = False,
|
||||
default_verbose: bool = False,
|
||||
self, request: web.Request, identifier: str | None = None, follow: bool = False
|
||||
) -> web.StreamResponse:
|
||||
"""Return systemd-journald logs."""
|
||||
log_formatter = LogFormatter.VERBOSE if default_verbose else LogFormatter.PLAIN
|
||||
params: dict[str, Any] = {}
|
||||
log_formatter = LogFormatter.PLAIN
|
||||
params = {}
|
||||
if identifier:
|
||||
params[PARAM_SYSLOG_IDENTIFIER] = identifier
|
||||
elif IDENTIFIER in request.match_info:
|
||||
params[PARAM_SYSLOG_IDENTIFIER] = request.match_info[IDENTIFIER]
|
||||
params[PARAM_SYSLOG_IDENTIFIER] = request.match_info.get(IDENTIFIER)
|
||||
else:
|
||||
params[PARAM_SYSLOG_IDENTIFIER] = self.sys_host.logs.default_identifiers
|
||||
# host logs should always be verbose, no matter what Accept header is used
|
||||
log_formatter = LogFormatter.VERBOSE
|
||||
|
||||
if BOOTID in request.match_info:
|
||||
params[PARAM_BOOT_ID] = await self._get_boot_id(request.match_info[BOOTID])
|
||||
params[PARAM_BOOT_ID] = await self._get_boot_id(
|
||||
request.match_info.get(BOOTID)
|
||||
)
|
||||
if follow:
|
||||
params[PARAM_FOLLOW] = ""
|
||||
|
||||
if latest:
|
||||
if not identifier:
|
||||
raise APIError(
|
||||
"Latest logs can only be fetched for a specific identifier."
|
||||
)
|
||||
|
||||
try:
|
||||
epoch = await self._get_container_last_epoch(identifier)
|
||||
params["CONTAINER_LOG_EPOCH"] = epoch
|
||||
except HostLogError as err:
|
||||
raise APIError(
|
||||
f"Cannot determine CONTAINER_LOG_EPOCH of {identifier}, latest logs not available."
|
||||
) from err
|
||||
|
||||
accept_header = request.headers.get(ACCEPT)
|
||||
|
||||
if accept_header and accept_header not in [
|
||||
if ACCEPT in request.headers and request.headers[ACCEPT] not in [
|
||||
CONTENT_TYPE_TEXT,
|
||||
CONTENT_TYPE_X_LOG,
|
||||
"*/*",
|
||||
@@ -251,12 +222,9 @@ class APIHost(CoreSysAttributes):
|
||||
"supported for now."
|
||||
)
|
||||
|
||||
if "verbose" in request.query or accept_header == CONTENT_TYPE_X_LOG:
|
||||
if "verbose" in request.query or request.headers[ACCEPT] == CONTENT_TYPE_X_LOG:
|
||||
log_formatter = LogFormatter.VERBOSE
|
||||
|
||||
if "no_colors" in request.query:
|
||||
no_colors = True
|
||||
|
||||
if "lines" in request.query:
|
||||
lines = request.query.get("lines", DEFAULT_LINES)
|
||||
try:
|
||||
@@ -271,13 +239,13 @@ class APIHost(CoreSysAttributes):
|
||||
# return 2 lines at minimum.
|
||||
lines = max(2, lines)
|
||||
# entries=cursor[[:num_skip]:num_entries]
|
||||
range_header = f"entries=:-{lines - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else lines}"
|
||||
elif latest:
|
||||
range_header = f"entries=:0:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX}"
|
||||
range_header = f"entries=:-{lines - 1}:{'' if follow else lines}"
|
||||
elif RANGE in request.headers:
|
||||
range_header = request.headers[RANGE]
|
||||
range_header = request.headers.get(RANGE)
|
||||
else:
|
||||
range_header = f"entries=:-{DEFAULT_LINES - 1}:{SYSTEMD_JOURNAL_GATEWAYD_LINES_MAX if follow else DEFAULT_LINES}"
|
||||
range_header = (
|
||||
f"entries=:-{DEFAULT_LINES - 1}:{'' if follow else DEFAULT_LINES}"
|
||||
)
|
||||
|
||||
async with self.sys_host.logs.journald_logs(
|
||||
params=params, range_header=range_header, accept=LogFormat.JOURNAL
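The Range header sent to systemd-journal-gatewayd follows `entries=<cursor>[[:<num_skip>]:<num_entries>]`, so the last N lines are requested by skipping back N-1 entries from the end. A small sketch of that construction; the follow-mode cap is an assumed constant, not the module's value:

```
DEFAULT_LINES = 100
GATEWAYD_LINES_MAX = 5000  # assumed cap used when following the stream


def journal_range_header(lines: int | None, follow: bool) -> str:
    """Build a Range header selecting the last `lines` journal entries."""
    lines = max(2, lines or DEFAULT_LINES)  # always return at least 2 lines
    num_entries = GATEWAYD_LINES_MAX if follow else lines
    return f"entries=:-{lines - 1}:{num_entries}"


print(journal_range_header(50, follow=False))   # entries=:-49:50
print(journal_range_header(None, follow=True))  # entries=:-99:5000
```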
@@ -286,34 +254,18 @@ class APIHost(CoreSysAttributes):
|
||||
response = web.StreamResponse()
|
||||
response.content_type = CONTENT_TYPE_TEXT
|
||||
headers_returned = False
|
||||
async for cursor, line in journal_logs_reader(
|
||||
resp, log_formatter, no_colors
|
||||
):
|
||||
try:
|
||||
if not headers_returned:
|
||||
if cursor:
|
||||
response.headers["X-First-Cursor"] = cursor
|
||||
response.headers["X-Accel-Buffering"] = "no"
|
||||
await response.prepare(request)
|
||||
headers_returned = True
|
||||
async for cursor, line in journal_logs_reader(resp, log_formatter):
|
||||
if not headers_returned:
|
||||
if cursor:
|
||||
response.headers["X-First-Cursor"] = cursor
|
||||
response.headers["X-Accel-Buffering"] = "no"
|
||||
await response.prepare(request)
|
||||
headers_returned = True
|
||||
# When client closes the connection while reading busy logs, we
|
||||
# sometimes get this exception. It should be safe to ignore it.
|
||||
with suppress(ClientConnectionResetError):
|
||||
await response.write(line.encode("utf-8") + b"\n")
|
||||
except ClientConnectionResetError as err:
|
||||
# When client closes the connection while reading busy logs, we
|
||||
# sometimes get this exception. It should be safe to ignore it.
|
||||
_LOGGER.debug(
|
||||
"ClientConnectionResetError raised when returning journal logs: %s",
|
||||
err,
|
||||
)
|
||||
break
|
||||
except ConnectionError as err:
|
||||
_LOGGER.warning(
|
||||
"%s raised when returning journal logs: %s",
|
||||
type(err).__name__,
|
||||
err,
|
||||
)
|
||||
break
|
||||
except (ConnectionResetError, ClientPayloadError) as ex:
|
||||
# ClientPayloadError is most likely caused by the client closing the connection
|
||||
except ConnectionResetError as ex:
|
||||
raise APIError(
|
||||
"Connection reset when trying to fetch data from systemd-journald."
|
||||
) from ex
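Client disconnects are routine while streaming busy logs, so the loop above logs them and stops instead of failing the request. A toy, synchronous sketch of that behaviour, with a stand-in exception class in place of aiohttp's `ClientConnectionResetError`:

```
import logging

logging.basicConfig(level=logging.DEBUG)
_LOGGER = logging.getLogger(__name__)


class ClientWentAway(ConnectionResetError):
    """Stand-in for aiohttp's ClientConnectionResetError in this sketch."""


def stream_lines(lines, write) -> None:
    """Write lines until done or the client disconnects."""
    for line in lines:
        try:
            write(line)
        except ClientWentAway as err:
            # Expected when the client closes the connection mid-stream.
            _LOGGER.debug("Client disconnected while streaming: %s", err)
            break
        except ConnectionError as err:
            _LOGGER.warning("%s while streaming: %s", type(err).__name__, err)
            break


sent: list[str] = []


def flaky_write(line: str) -> None:
    if len(sent) == 2:
        raise ClientWentAway("peer closed connection")
    sent.append(line)


stream_lines(["a", "b", "c", "d"], flaky_write)
print(sent)  # ['a', 'b']: streaming stopped cleanly after the disconnect
```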
@@ -321,89 +273,7 @@ class APIHost(CoreSysAttributes):
|
||||
|
||||
@api_process_raw(CONTENT_TYPE_TEXT, error_type=CONTENT_TYPE_TEXT)
|
||||
async def advanced_logs(
|
||||
self,
|
||||
request: web.Request,
|
||||
identifier: str | None = None,
|
||||
follow: bool = False,
|
||||
latest: bool = False,
|
||||
no_colors: bool = False,
|
||||
default_verbose: bool = False,
|
||||
self, request: web.Request, identifier: str | None = None, follow: bool = False
|
||||
) -> web.StreamResponse:
|
||||
"""Return systemd-journald logs. Wrapped as standard API handler."""
|
||||
return await self.advanced_logs_handler(
|
||||
request, identifier, follow, latest, no_colors, default_verbose
|
||||
)
|
||||
|
||||
@api_process
|
||||
async def disk_usage(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Return a breakdown of storage usage for the system."""
|
||||
|
||||
max_depth = request.query.get(ATTR_MAX_DEPTH, 1)
|
||||
try:
|
||||
max_depth = int(max_depth)
|
||||
except ValueError:
|
||||
max_depth = 1
|
||||
|
||||
disk = self.sys_hardware.disk
|
||||
|
||||
total, _, free = await self.sys_run_in_executor(
|
||||
disk.disk_usage, self.sys_config.path_supervisor
|
||||
)
|
||||
|
||||
# Calculating used space as total minus free makes sure we include reserved space
|
||||
# in used space reporting.
|
||||
used = total - free
|
||||
|
||||
known_paths = await self.sys_run_in_executor(
|
||||
disk.get_dir_sizes,
|
||||
{
|
||||
"addons_data": self.sys_config.path_apps_data,
|
||||
"addons_config": self.sys_config.path_app_configs,
|
||||
"media": self.sys_config.path_media,
|
||||
"share": self.sys_config.path_share,
|
||||
"backup": self.sys_config.path_backup,
|
||||
"ssl": self.sys_config.path_ssl,
|
||||
"homeassistant": self.sys_config.path_homeassistant,
|
||||
},
|
||||
max_depth,
|
||||
)
|
||||
return {
|
||||
# this can be the disk/partition ID in the future
|
||||
"id": "root",
|
||||
"label": "Root",
|
||||
"total_bytes": total,
|
||||
"used_bytes": used,
|
||||
"children": [
|
||||
{
|
||||
"id": "system",
|
||||
"label": "System",
|
||||
"used_bytes": used
|
||||
- sum(path["used_bytes"] for path in known_paths),
|
||||
},
|
||||
*known_paths,
|
||||
],
|
||||
}
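The breakdown above computes used space as total minus free so reserved space is counted, and attributes everything it cannot map to a known directory to a synthetic "system" entry. The arithmetic in isolation, as a sketch:

```
def disk_breakdown(total: int, free: int, known_paths: list[dict]) -> dict:
    """Attribute used space to known directories plus a catch-all 'system'."""
    used = total - free  # includes reserved space, unlike a naive "used" figure
    attributed = sum(path["used_bytes"] for path in known_paths)
    return {
        "id": "root",
        "total_bytes": total,
        "used_bytes": used,
        "children": [
            {"id": "system", "used_bytes": used - attributed},
            *known_paths,
        ],
    }


print(
    disk_breakdown(
        total=64_000,
        free=24_000,
        known_paths=[{"id": "backup", "used_bytes": 10_000}],
    )
)
# used is 40_000 in total, so "system" accounts for the remaining 30_000
```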
async def _get_container_last_epoch(self, identifier: str) -> str | None:
|
||||
"""Get Docker's internal log epoch of the latest log entry for the given identifier."""
|
||||
try:
|
||||
async with self.sys_host.logs.journald_logs(
|
||||
params={"CONTAINER_NAME": identifier},
|
||||
range_header="entries=:-1:2", # -1 = next to the last entry
|
||||
accept=LogFormat.JSON,
|
||||
timeout=ClientTimeout(total=10),
|
||||
) as resp:
|
||||
text = await resp.text()
|
||||
except (ClientError, TimeoutError) as err:
|
||||
raise HostLogError(
|
||||
"Could not get last container epoch from systemd-journal-gatewayd",
|
||||
_LOGGER.error,
|
||||
) from err
|
||||
|
||||
try:
|
||||
return json.loads(text.strip().split("\n")[-1])["CONTAINER_LOG_EPOCH"]
|
||||
except (json.JSONDecodeError, KeyError, IndexError) as err:
|
||||
raise HostLogError(
|
||||
f"Failed to parse CONTAINER_LOG_EPOCH of {identifier} container, got: {text}",
|
||||
_LOGGER.error,
|
||||
) from err
|
||||
return await self.advanced_logs_handler(request, identifier, follow)
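systemd-journal-gatewayd returns one JSON object per line in JSON mode, so the epoch lookup above parses the last line and reads `CONTAINER_LOG_EPOCH`. A stand-alone sketch of just the parsing step, with an invented helper name and sample payload:

```
import json


def last_container_epoch(payload: str) -> str:
    """Parse the newest journal entry and return its CONTAINER_LOG_EPOCH."""
    lines = payload.strip().split("\n")
    try:
        return json.loads(lines[-1])["CONTAINER_LOG_EPOCH"]
    except (json.JSONDecodeError, KeyError, IndexError) as err:
        raise ValueError(f"could not parse epoch from: {payload!r}") from err


sample = (
    '{"MESSAGE": "older entry"}\n'
    '{"MESSAGE": "newest entry", "CONTAINER_LOG_EPOCH": "1718000000"}'
)
print(last_container_epoch(sample))  # 1718000000
```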
@@ -1,4 +1,4 @@
|
||||
"""Supervisor App ingress service."""
|
||||
"""Supervisor Add-on ingress service."""
|
||||
|
||||
import asyncio
|
||||
from ipaddress import ip_address
|
||||
@@ -15,7 +15,7 @@ from aiohttp.web_exceptions import (
|
||||
from multidict import CIMultiDict, istr
|
||||
import voluptuous as vol
|
||||
|
||||
from ..addons.addon import App
|
||||
from ..addons.addon import Addon
|
||||
from ..const import (
|
||||
ATTR_ADMIN,
|
||||
ATTR_ENABLE,
|
||||
@@ -29,8 +29,8 @@ from ..const import (
|
||||
HEADER_REMOTE_USER_NAME,
|
||||
HEADER_TOKEN,
|
||||
HEADER_TOKEN_OLD,
|
||||
HomeAssistantUser,
|
||||
IngressSessionData,
|
||||
IngressSessionDataUser,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import HomeAssistantAPIError
|
||||
@@ -39,8 +39,6 @@ from .utils import api_process, api_validate, require_home_assistant
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
|
||||
MAX_WEBSOCKET_MESSAGE_SIZE = 16 * 1024 * 1024 # 16 MiB
|
||||
|
||||
VALIDATE_SESSION_DATA = vol.Schema({ATTR_SESSION: str})
|
||||
|
||||
"""Expected optional payload of create session request"""
|
||||
@@ -75,37 +73,43 @@ def status_code_must_be_empty_body(code: int) -> bool:
|
||||
|
||||
|
||||
class APIIngress(CoreSysAttributes):
|
||||
"""Ingress view to handle app webui routing."""
|
||||
"""Ingress view to handle add-on webui routing."""
|
||||
|
||||
def _extract_app(self, request: web.Request) -> App:
|
||||
"""Return app, throw an exception it it doesn't exist."""
|
||||
token = request.match_info["token"]
|
||||
_list_of_users: list[IngressSessionDataUser]
|
||||
|
||||
# Find correct app
|
||||
app = self.sys_ingress.get(token)
|
||||
if not app:
|
||||
def __init__(self) -> None:
|
||||
"""Initialize APIIngress."""
|
||||
self._list_of_users = []
|
||||
|
||||
def _extract_addon(self, request: web.Request) -> Addon:
|
||||
"""Return addon, throw an exception it it doesn't exist."""
|
||||
token = request.match_info.get("token")
|
||||
|
||||
# Find correct add-on
|
||||
addon = self.sys_ingress.get(token)
|
||||
if not addon:
|
||||
_LOGGER.warning("Ingress for %s not available", token)
|
||||
raise HTTPServiceUnavailable()
|
||||
|
||||
return app
|
||||
return addon
|
||||
|
||||
def _create_url(self, app: App, path: str) -> str:
|
||||
def _create_url(self, addon: Addon, path: str) -> str:
|
||||
"""Create URL to container."""
|
||||
return f"http://{app.ip_address}:{app.ingress_port}/{path}"
|
||||
return f"http://{addon.ip_address}:{addon.ingress_port}/{path}"
|
||||
|
||||
@api_process
|
||||
async def panels(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Create a list of panel data."""
|
||||
apps = {}
|
||||
for app in self.sys_ingress.apps:
|
||||
apps[app.slug] = {
|
||||
ATTR_TITLE: app.panel_title,
|
||||
ATTR_ICON: app.panel_icon,
|
||||
ATTR_ADMIN: app.panel_admin,
|
||||
ATTR_ENABLE: app.ingress_panel,
|
||||
addons = {}
|
||||
for addon in self.sys_ingress.addons:
|
||||
addons[addon.slug] = {
|
||||
ATTR_TITLE: addon.panel_title,
|
||||
ATTR_ICON: addon.panel_icon,
|
||||
ATTR_ADMIN: addon.panel_admin,
|
||||
ATTR_ENABLE: addon.ingress_panel,
|
||||
}
|
||||
|
||||
return {ATTR_PANELS: apps}
|
||||
return {ATTR_PANELS: addons}
|
||||
|
||||
@api_process
|
||||
@require_home_assistant
|
||||
@@ -128,7 +132,7 @@ class APIIngress(CoreSysAttributes):
|
||||
|
||||
@api_process
|
||||
@require_home_assistant
|
||||
async def validate_session(self, request: web.Request) -> None:
|
||||
async def validate_session(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Validate session and extending how long it's valid for."""
|
||||
data = await api_validate(VALIDATE_SESSION_DATA, request)
|
||||
|
||||
@@ -143,22 +147,22 @@ class APIIngress(CoreSysAttributes):
|
||||
"""Route data to Supervisor ingress service."""
|
||||
|
||||
# Check Ingress Session
|
||||
session = request.cookies.get(COOKIE_INGRESS, "")
|
||||
session = request.cookies.get(COOKIE_INGRESS)
|
||||
if not self.sys_ingress.validate_session(session):
|
||||
_LOGGER.warning("No valid ingress session %s", session)
|
||||
raise HTTPUnauthorized()
|
||||
|
||||
# Process requests
|
||||
app = self._extract_app(request)
|
||||
path = request.match_info.get("path", "")
|
||||
addon = self._extract_addon(request)
|
||||
path = request.match_info.get("path")
|
||||
session_data = self.sys_ingress.get_session_data(session)
|
||||
try:
|
||||
# Websocket
|
||||
if _is_websocket(request):
|
||||
return await self._handle_websocket(request, app, path, session_data)
|
||||
return await self._handle_websocket(request, addon, path, session_data)
|
||||
|
||||
# Request
|
||||
return await self._handle_request(request, app, path, session_data)
|
||||
return await self._handle_request(request, addon, path, session_data)
|
||||
|
||||
except aiohttp.ClientError as err:
|
||||
_LOGGER.error("Ingress error: %s", err)
|
||||
@@ -168,7 +172,7 @@ class APIIngress(CoreSysAttributes):
|
||||
async def _handle_websocket(
|
||||
self,
|
||||
request: web.Request,
|
||||
app: App,
|
||||
addon: Addon,
|
||||
path: str,
|
||||
session_data: IngressSessionData | None,
|
||||
) -> web.WebSocketResponse:
|
||||
@@ -179,66 +183,58 @@ class APIIngress(CoreSysAttributes):
|
||||
for proto in request.headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",")
|
||||
]
|
||||
else:
|
||||
req_protocols = []
|
||||
req_protocols = ()
|
||||
|
||||
ws_server = web.WebSocketResponse(
|
||||
protocols=req_protocols,
|
||||
autoclose=False,
|
||||
autoping=False,
|
||||
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
|
||||
protocols=req_protocols, autoclose=False, autoping=False
|
||||
)
|
||||
await ws_server.prepare(request)
|
||||
|
||||
# Preparing
|
||||
url = self._create_url(app, path)
|
||||
source_header = _init_header(request, app, session_data)
|
||||
url = self._create_url(addon, path)
|
||||
source_header = _init_header(request, addon, session_data)
|
||||
|
||||
# Support GET query
|
||||
if request.query_string:
|
||||
url = f"{url}?{request.query_string}"
|
||||
|
||||
# Start proxy
|
||||
try:
|
||||
_LOGGER.debug("Proxing WebSocket to %s, upstream url: %s", app.slug, url)
|
||||
async with self.sys_websession.ws_connect(
|
||||
url,
|
||||
headers=source_header,
|
||||
protocols=req_protocols,
|
||||
autoclose=False,
|
||||
autoping=False,
|
||||
max_msg_size=MAX_WEBSOCKET_MESSAGE_SIZE,
|
||||
) as ws_client:
|
||||
# Proxy requests
|
||||
await asyncio.wait(
|
||||
[
|
||||
self.sys_create_task(_websocket_forward(ws_server, ws_client)),
|
||||
self.sys_create_task(_websocket_forward(ws_client, ws_server)),
|
||||
],
|
||||
return_when=asyncio.FIRST_COMPLETED,
|
||||
)
|
||||
except TimeoutError:
|
||||
_LOGGER.warning("WebSocket proxy to %s timed out", app.slug)
|
||||
async with self.sys_websession.ws_connect(
|
||||
url,
|
||||
headers=source_header,
|
||||
protocols=req_protocols,
|
||||
autoclose=False,
|
||||
autoping=False,
|
||||
) as ws_client:
|
||||
# Proxy requests
|
||||
await asyncio.wait(
|
||||
[
|
||||
self.sys_create_task(_websocket_forward(ws_server, ws_client)),
|
||||
self.sys_create_task(_websocket_forward(ws_client, ws_server)),
|
||||
],
|
||||
return_when=asyncio.FIRST_COMPLETED,
|
||||
)
|
||||
|
||||
return ws_server
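Both pump directions of the WebSocket proxy run as tasks, and the proxy returns as soon as either side finishes. A minimal sketch of the `asyncio.wait(..., return_when=FIRST_COMPLETED)` pattern; the explicit cancel of the slower task is added here for tidiness and is not taken from the hunk:

```
import asyncio


async def pump(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)
    return f"{name} finished"


async def proxy() -> None:
    tasks = [
        asyncio.create_task(pump("client->addon", 0.1)),
        asyncio.create_task(pump("addon->client", 5.0)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # stop the slower direction once one side is done
    for task in done:
        print(task.result())  # client->addon finished


asyncio.run(proxy())
```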
async def _handle_request(
|
||||
self,
|
||||
request: web.Request,
|
||||
app: App,
|
||||
addon: Addon,
|
||||
path: str,
|
||||
session_data: IngressSessionData | None,
|
||||
) -> web.Response | web.StreamResponse:
|
||||
"""Ingress route for request."""
|
||||
url = self._create_url(app, path)
|
||||
source_header = _init_header(request, app, session_data)
|
||||
url = self._create_url(addon, path)
|
||||
source_header = _init_header(request, addon, session_data)
|
||||
|
||||
# Passing the raw stream breaks requests for some webservers
|
||||
# since we just need it for POST requests really, for all other methods
|
||||
# we read the bytes and pass that to the request to the app
|
||||
# apps need to add support for that in their configuration
|
||||
# we read the bytes and pass that to the request to the add-on
|
||||
# add-ons need to add support for that in their configuration
|
||||
data = (
|
||||
request.content
|
||||
if request.method == "POST" and app.ingress_stream
|
||||
if request.method == "POST" and addon.ingress_stream
|
||||
else await request.read()
|
||||
)
|
||||
|
||||
@@ -253,28 +249,18 @@ class APIIngress(CoreSysAttributes):
|
||||
skip_auto_headers={hdrs.CONTENT_TYPE},
|
||||
) as result:
|
||||
headers = _response_header(result)
|
||||
|
||||
# Avoid parsing content_type in simple cases for better performance
|
||||
if maybe_content_type := result.headers.get(hdrs.CONTENT_TYPE):
|
||||
content_type = (maybe_content_type.partition(";"))[0].strip()
|
||||
else:
|
||||
content_type = result.content_type
|
||||
|
||||
# Empty body responses (304, 204, HEAD, etc.) should not be streamed,
|
||||
# otherwise aiohttp < 3.9.0 may generate an invalid "0\r\n\r\n" chunk
|
||||
# This also avoids setting content_type for empty responses.
|
||||
if must_be_empty_body(request.method, result.status):
|
||||
# If upstream contains content-type, preserve it (e.g. for HEAD requests)
|
||||
if maybe_content_type:
|
||||
headers[hdrs.CONTENT_TYPE] = content_type
|
||||
return web.Response(
|
||||
headers=headers,
|
||||
status=result.status,
|
||||
)
|
||||
|
||||
# Simple request
|
||||
if (
|
||||
hdrs.CONTENT_LENGTH in result.headers
|
||||
# empty body responses should not be streamed,
|
||||
# otherwise aiohttp < 3.9.0 may generate
|
||||
# an invalid "0\r\n\r\n" chunk instead of an empty response.
|
||||
must_be_empty_body(request.method, result.status)
|
||||
or hdrs.CONTENT_LENGTH in result.headers
|
||||
and int(result.headers.get(hdrs.CONTENT_LENGTH, 0)) < 4_194_000
|
||||
):
|
||||
# Return Response
|
||||
@@ -293,42 +279,46 @@ class APIIngress(CoreSysAttributes):
|
||||
try:
|
||||
response.headers["X-Accel-Buffering"] = "no"
|
||||
await response.prepare(request)
|
||||
async for data, _ in result.content.iter_chunks():
|
||||
async for data in result.content.iter_chunked(4096):
|
||||
await response.write(data)
|
||||
|
||||
except (
|
||||
aiohttp.ClientError,
|
||||
aiohttp.ClientPayloadError,
|
||||
ConnectionResetError,
|
||||
ConnectionError,
|
||||
) as err:
|
||||
_LOGGER.error("Stream error with %s: %s", url, err)
|
||||
|
||||
return response
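The handler above returns small or necessarily-empty responses in one piece and streams everything else. The decision rule, reduced to a sketch; the 4,194,000-byte cut-off mirrors the hunk, while the listed empty-body statuses are a simplification:

```
EMPTY_BODY_STATUSES = {204, 304}
STREAM_THRESHOLD = 4_194_000  # bytes, matching the cut-off above


def should_buffer(method: str, status: int, content_length: int | None) -> bool:
    """Return True when the response can be returned in one piece."""
    if method == "HEAD" or status in EMPTY_BODY_STATUSES:
        return True
    return content_length is not None and content_length < STREAM_THRESHOLD


print(should_buffer("GET", 200, 1_024))   # True: small enough to buffer
print(should_buffer("GET", 200, None))    # False: unknown size, stream it
print(should_buffer("HEAD", 200, None))   # True: HEAD responses carry no body
```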
async def _find_user_by_id(self, user_id: str) -> HomeAssistantUser | None:
|
||||
async def _find_user_by_id(self, user_id: str) -> IngressSessionDataUser | None:
|
||||
"""Find user object by the user's ID."""
|
||||
try:
|
||||
users = await self.sys_homeassistant.list_users()
|
||||
except HomeAssistantAPIError as err:
|
||||
_LOGGER.warning("Could not fetch list of users: %s", err)
|
||||
list_of_users = await self.sys_homeassistant.get_users()
|
||||
except (HomeAssistantAPIError, TypeError) as err:
|
||||
_LOGGER.error(
|
||||
"%s error occurred while requesting list of users: %s", type(err), err
|
||||
)
|
||||
return None
|
||||
|
||||
return next((user for user in users if user.id == user_id), None)
|
||||
if list_of_users is not None:
|
||||
self._list_of_users = list_of_users
|
||||
|
||||
return next((user for user in self._list_of_users if user.id == user_id), None)
|
||||
|
||||
|
||||
def _init_header(
|
||||
request: web.Request, app: App, session_data: IngressSessionData | None
|
||||
) -> CIMultiDict[str]:
|
||||
request: web.Request, addon: Addon, session_data: IngressSessionData | None
|
||||
) -> CIMultiDict | dict[str, str]:
|
||||
"""Create initial header."""
|
||||
headers = CIMultiDict[str]()
|
||||
headers = {}
|
||||
|
||||
if session_data is not None:
|
||||
headers[HEADER_REMOTE_USER_ID] = session_data.user.id
|
||||
if session_data.user.username is not None:
|
||||
headers[HEADER_REMOTE_USER_NAME] = session_data.user.username
|
||||
if session_data.user.name is not None:
|
||||
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.name
|
||||
if session_data.user.display_name is not None:
|
||||
headers[HEADER_REMOTE_USER_DISPLAY_NAME] = session_data.user.display_name
|
||||
|
||||
# filter flags
|
||||
for name, value in request.headers.items():
|
||||
@@ -347,20 +337,19 @@ def _init_header(
|
||||
istr(HEADER_REMOTE_USER_DISPLAY_NAME),
|
||||
):
|
||||
continue
|
||||
headers.add(name, value)
|
||||
headers[name] = value
|
||||
|
||||
# Update X-Forwarded-For
|
||||
if request.transport:
|
||||
forward_for = request.headers.get(hdrs.X_FORWARDED_FOR)
|
||||
connected_ip = ip_address(request.transport.get_extra_info("peername")[0])
|
||||
headers[hdrs.X_FORWARDED_FOR] = f"{forward_for}, {connected_ip!s}"
|
||||
forward_for = request.headers.get(hdrs.X_FORWARDED_FOR)
|
||||
connected_ip = ip_address(request.transport.get_extra_info("peername")[0])
|
||||
headers[hdrs.X_FORWARDED_FOR] = f"{forward_for}, {connected_ip!s}"
|
||||
|
||||
return headers
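The header initialisation ends by appending the connecting peer to X-Forwarded-For. A sketch of that step using `CIMultiDict` for case-insensitive headers; unlike the hunk, this sketch also handles a missing incoming chain:

```
from ipaddress import ip_address

from multidict import CIMultiDict


def append_forwarded_for(existing: str | None, peer_host: str) -> str:
    """Append the connecting peer to an X-Forwarded-For chain."""
    connected_ip = ip_address(peer_host)
    return f"{existing}, {connected_ip!s}" if existing else str(connected_ip)


headers = CIMultiDict()
headers["X-Forwarded-For"] = append_forwarded_for("203.0.113.7", "172.30.32.2")
print(headers["x-forwarded-for"])  # 203.0.113.7, 172.30.32.2 (case-insensitive lookup)
```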
def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
|
||||
def _response_header(response: aiohttp.ClientResponse) -> dict[str, str]:
|
||||
"""Create response header."""
|
||||
headers = CIMultiDict[str]()
|
||||
headers = {}
|
||||
|
||||
for name, value in response.headers.items():
|
||||
if name in (
|
||||
@@ -370,7 +359,7 @@ def _response_header(response: aiohttp.ClientResponse) -> CIMultiDict[str]:
|
||||
hdrs.CONTENT_ENCODING,
|
||||
):
|
||||
continue
|
||||
headers.add(name, value)
|
||||
headers[name] = value
|
||||
|
||||
return headers
|
||||
|
||||
@@ -396,9 +385,9 @@ async def _websocket_forward(ws_from, ws_to):
|
||||
elif msg.type == aiohttp.WSMsgType.BINARY:
|
||||
await ws_to.send_bytes(msg.data)
|
||||
elif msg.type == aiohttp.WSMsgType.PING:
|
||||
await ws_to.ping(msg.data)
|
||||
await ws_to.ping()
|
||||
elif msg.type == aiohttp.WSMsgType.PONG:
|
||||
await ws_to.pong(msg.data)
|
||||
await ws_to.pong()
|
||||
elif ws_to.closed:
|
||||
await ws_to.close(code=ws_to.close_code, message=msg.extra)
|
||||
except RuntimeError:
|
||||
|
||||
@@ -26,7 +26,7 @@ class APIJobs(CoreSysAttributes):
|
||||
def _extract_job(self, request: web.Request) -> SupervisorJob:
|
||||
"""Extract job from request or raise."""
|
||||
try:
|
||||
return self.sys_jobs.get_job(request.match_info["uuid"])
|
||||
return self.sys_jobs.get_job(request.match_info.get("uuid"))
|
||||
except JobNotFound:
|
||||
raise APINotFound("Job does not exist") from None
|
||||
|
||||
@@ -71,10 +71,7 @@ class APIJobs(CoreSysAttributes):
|
||||
|
||||
if current_job.uuid in jobs_by_parent:
|
||||
queue.extend(
|
||||
[
|
||||
(child_jobs, job)
|
||||
for job in jobs_by_parent.get(current_job.uuid, [])
|
||||
]
|
||||
[(child_jobs, job) for job in jobs_by_parent.get(current_job.uuid)]
|
||||
)
|
||||
|
||||
return job_list
|
||||
|
||||
@@ -1,16 +1,16 @@
|
||||
"""Handle security part of this API."""
|
||||
|
||||
from collections.abc import Awaitable, Callable
|
||||
from dataclasses import dataclass
|
||||
import logging
|
||||
import re
|
||||
from typing import Final
|
||||
from urllib.parse import unquote
|
||||
|
||||
from aiohttp.web import Request, StreamResponse, middleware
|
||||
from aiohttp.web import Request, RequestHandler, Response, middleware
|
||||
from aiohttp.web_exceptions import HTTPBadRequest, HTTPForbidden, HTTPUnauthorized
|
||||
from awesomeversion import AwesomeVersion
|
||||
|
||||
from supervisor.homeassistant.const import LANDINGPAGE
|
||||
|
||||
from ...addons.const import RE_SLUG
|
||||
from ...const import (
|
||||
REQUEST_FROM,
|
||||
@@ -19,25 +19,24 @@ from ...const import (
|
||||
ROLE_DEFAULT,
|
||||
ROLE_HOMEASSISTANT,
|
||||
ROLE_MANAGER,
|
||||
VALID_API_STATES,
|
||||
CoreState,
|
||||
)
|
||||
from ...coresys import CoreSys, CoreSysAttributes
|
||||
from ...homeassistant.const import LANDINGPAGE
|
||||
from ...utils import version_is_new_enough
|
||||
from ..utils import api_return_error, extract_supervisor_token
|
||||
from ..utils import api_return_error, excract_supervisor_token
|
||||
|
||||
_LOGGER: logging.Logger = logging.getLogger(__name__)
|
||||
_CORE_VERSION: Final = AwesomeVersion("2023.3.4")
|
||||
|
||||
# fmt: off
|
||||
|
||||
_V1_FRONTEND_PATHS: Final = (
|
||||
_CORE_FRONTEND_PATHS: Final = (
|
||||
r"|/app/.*\.(?:js|gz|json|map|woff2)"
|
||||
r"|/(store/)?addons/" + RE_SLUG + r"/(logo|icon)"
|
||||
)
|
||||
|
||||
_V2_FRONTEND_PATHS: Final = (
|
||||
r"|/store/apps/" + RE_SLUG + r"/(logo|icon)"
|
||||
CORE_FRONTEND: Final = re.compile(
|
||||
r"^(?:" + _CORE_FRONTEND_PATHS + r")$"
|
||||
)
|
||||
|
||||
|
||||
@@ -49,6 +48,19 @@ BLACKLIST: Final = re.compile(
|
||||
r")$"
|
||||
)
|
||||
|
||||
# Free to call or have own security concepts
|
||||
NO_SECURITY_CHECK: Final = re.compile(
|
||||
r"^(?:"
|
||||
r"|/homeassistant/api/.*"
|
||||
r"|/homeassistant/websocket"
|
||||
r"|/core/api/.*"
|
||||
r"|/core/websocket"
|
||||
r"|/supervisor/ping"
|
||||
r"|/ingress/[-_A-Za-z0-9]+/.*"
|
||||
+ _CORE_FRONTEND_PATHS
|
||||
+ r")$"
|
||||
)
|
||||
|
||||
# Observer allow API calls
|
||||
OBSERVER_CHECK: Final = re.compile(
|
||||
r"^(?:"
|
||||
@@ -56,6 +68,80 @@ OBSERVER_CHECK: Final = re.compile(
|
||||
r")$"
|
||||
)
|
||||
|
||||
# Can be called by every add-on
|
||||
ADDONS_API_BYPASS: Final = re.compile(
|
||||
r"^(?:"
|
||||
r"|/addons/self/(?!security|update)[^/]+"
|
||||
r"|/addons/self/options/config"
|
||||
r"|/info"
|
||||
r"|/services.*"
|
||||
r"|/discovery.*"
|
||||
r"|/auth"
|
||||
r")$"
|
||||
)
|
||||
|
||||
# Home Assistant only
|
||||
CORE_ONLY_PATHS: Final = re.compile(
|
||||
r"^(?:"
|
||||
r"/addons/" + RE_SLUG + "/sys_options"
|
||||
r")$"
|
||||
)
|
||||
|
||||
# Policy role add-on API access
|
||||
ADDONS_ROLE_ACCESS: dict[str, re.Pattern] = {
|
||||
ROLE_DEFAULT: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r")$"
|
||||
),
|
||||
ROLE_HOMEASSISTANT: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/core/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_BACKUP: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/backups.*"
|
||||
r")$"
|
||||
),
|
||||
ROLE_MANAGER: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/addons(?:/" + RE_SLUG + r"/(?!security).+|/reload)?"
|
||||
r"|/audio/.+"
|
||||
r"|/auth/cache"
|
||||
r"|/available_updates"
|
||||
r"|/backups.*"
|
||||
r"|/cli/.+"
|
||||
r"|/core/.+"
|
||||
r"|/dns/.+"
|
||||
r"|/docker/.+"
|
||||
r"|/jobs/.+"
|
||||
r"|/hardware/.+"
|
||||
r"|/hassos/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r"|/host/.+"
|
||||
r"|/mounts.*"
|
||||
r"|/multicast/.+"
|
||||
r"|/network/.+"
|
||||
r"|/observer/.+"
|
||||
r"|/os/(?!datadisk/wipe).+"
|
||||
r"|/refresh_updates"
|
||||
r"|/resolution/.+"
|
||||
r"|/security/.+"
|
||||
r"|/snapshots.*"
|
||||
r"|/store.*"
|
||||
r"|/supervisor/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_ADMIN: re.compile(
|
||||
r".*"
|
||||
),
|
||||
}
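Role-based access here boils down to "does the role's compiled pattern match the request path". A compact illustration with a trimmed-down pattern table; the paths and roles are shortened for the example:

```
import re

ROLE_ACCESS = {
    "default": re.compile(r"^(?:|/.+/info)$"),
    "manager": re.compile(r"^(?:|/.+/info|/backups.*|/store.*)$"),
    "admin": re.compile(r".*"),
}


def allowed(role: str, path: str) -> bool:
    """Allow the request only when the role's pattern matches the path."""
    return bool(ROLE_ACCESS[role].match(path))


print(allowed("default", "/host/info"))       # True
print(allowed("default", "/backups"))         # False
print(allowed("manager", "/backups"))         # True
print(allowed("admin", "/os/datadisk/wipe"))  # True: admin matches everything
```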
FILTERS: Final = re.compile(
|
||||
r"(?:"
|
||||
|
||||
@@ -76,193 +162,9 @@ FILTERS: Final = re.compile(
|
||||
flags=re.IGNORECASE,
|
||||
)
|
||||
|
||||
@dataclass(slots=True, frozen=True)
|
||||
class _AppSecurityPatterns:
|
||||
"""All compiled regex patterns for app API access control, per API version."""
|
||||
|
||||
# Paths where an installed app's token bypasses normal role checks
|
||||
api_bypass: re.Pattern[str]
|
||||
|
||||
# Paths that only Home Assistant Core may call
|
||||
core_only: re.Pattern[str]
|
||||
|
||||
# Per-role allowed path patterns for installed apps
|
||||
role_access: dict[str, re.Pattern[str]]
|
||||
|
||||
# Paths serving frontend assets (checked in core_proxy middleware)
|
||||
supervisor_frontend: re.Pattern[str]
|
||||
|
||||
# Paths that skip token validation entirely
|
||||
no_security_check: re.Pattern[str]
|
||||
|
||||
|
||||
# fmt: off
|
||||
|
||||
_V1_PATTERNS: Final = _AppSecurityPatterns(
|
||||
api_bypass=re.compile(
|
||||
r"^(?:"
|
||||
r"|/addons/self/(?!security|update)[^/]+"
|
||||
r"|/addons/self/options/config"
|
||||
r"|/info"
|
||||
r"|/services.*"
|
||||
r"|/discovery.*"
|
||||
r"|/auth"
|
||||
r")$"
|
||||
),
|
||||
core_only=re.compile(
|
||||
r"^(?:"
|
||||
r"/addons/" + RE_SLUG + r"/sys_options"
|
||||
r")$"
|
||||
),
|
||||
role_access={
|
||||
ROLE_DEFAULT: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r")$"
|
||||
),
|
||||
ROLE_HOMEASSISTANT: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/core/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_BACKUP: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/backups.*"
|
||||
r")$"
|
||||
),
|
||||
ROLE_MANAGER: re.compile(
|
||||
r"^(?:"
|
||||
r"|/.+/info"
|
||||
r"|/addons(?:/" + RE_SLUG + r"/(?!security).+|/reload)?"
|
||||
r"|/audio/.+"
|
||||
r"|/auth/cache"
|
||||
r"|/available_updates"
|
||||
r"|/backups.*"
|
||||
r"|/cli/.+"
|
||||
r"|/core/.+"
|
||||
r"|/dns/.+"
|
||||
r"|/docker/.+"
|
||||
r"|/jobs/.+"
|
||||
r"|/hardware/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r"|/host/.+"
|
||||
r"|/mounts.*"
|
||||
r"|/multicast/.+"
|
||||
r"|/network/.+"
|
||||
r"|/observer/.+"
|
||||
r"|/os/(?!datadisk/wipe).+"
|
||||
r"|/refresh_updates"
|
||||
r"|/resolution/.+"
|
||||
r"|/security/.+"
|
||||
r"|/snapshots.*"
|
||||
r"|/store.*"
|
||||
r"|/supervisor/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_ADMIN: re.compile(r".*"),
|
||||
},
|
||||
supervisor_frontend=re.compile(r"^(?:" + _V1_FRONTEND_PATHS + r")$"),
|
||||
no_security_check=re.compile(
|
||||
r"^(?:"
|
||||
r"|/homeassistant/api/.*"
|
||||
r"|/homeassistant/websocket"
|
||||
r"|/core/api/.*"
|
||||
r"|/core/websocket"
|
||||
r"|/supervisor/ping"
|
||||
r"|/ingress/[-_A-Za-z0-9]+/.*"
|
||||
+ _V1_FRONTEND_PATHS
|
||||
+ r")$"
|
||||
),
|
||||
)
|
||||
|
||||
_V2_PATTERNS: Final = _AppSecurityPatterns(
|
||||
# /v2 is factored out as a literal prefix — alternatives only list the
|
||||
# path suffix, making v1 ↔ v2 pattern diffs easy to read.
|
||||
api_bypass=re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/apps/self/(?!security|update)[^/]+"
|
||||
r"|/apps/self/options/config"
|
||||
r"|/info"
|
||||
r"|/services.*"
|
||||
r"|/discovery.*"
|
||||
r"|/auth"
|
||||
r")$"
|
||||
),
|
||||
core_only=re.compile(
|
||||
r"^/v2(?:"
|
||||
r"/apps/" + RE_SLUG + r"/sys_options"
|
||||
r")$"
|
||||
),
|
||||
role_access={
|
||||
ROLE_DEFAULT: re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/.+/info"
|
||||
r")$"
|
||||
),
|
||||
ROLE_HOMEASSISTANT: re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/.+/info"
|
||||
r"|/core/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_BACKUP: re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/.+/info"
|
||||
r"|/backups.*"
|
||||
r")$"
|
||||
),
|
||||
ROLE_MANAGER: re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/.+/info"
|
||||
r"|/apps(?:/" + RE_SLUG + r"/(?!security).+)?"
|
||||
r"|/audio/.+"
|
||||
r"|/auth/cache"
|
||||
r"|/backups.*"
|
||||
r"|/cli/.+"
|
||||
r"|/core/.+"
|
||||
r"|/dns/.+"
|
||||
r"|/docker/.+"
|
||||
r"|/jobs/.+"
|
||||
r"|/hardware/.+"
|
||||
r"|/homeassistant/.+"
|
||||
r"|/host/.+"
|
||||
r"|/mounts.*"
|
||||
r"|/multicast/.+"
|
||||
r"|/network/.+"
|
||||
r"|/observer/.+"
|
||||
r"|/os/(?!datadisk/wipe).+"
|
||||
r"|/reload_updates"
|
||||
r"|/resolution/.+"
|
||||
r"|/security/.+"
|
||||
r"|/store.*"
|
||||
r"|/supervisor/.+"
|
||||
r")$"
|
||||
),
|
||||
ROLE_ADMIN: re.compile(r".*"),
|
||||
},
|
||||
supervisor_frontend=re.compile(r"^/v2(?:" + _V2_FRONTEND_PATHS + r")$"),
|
||||
no_security_check=re.compile(
|
||||
r"^/v2(?:"
|
||||
r"|/ingress/[-_A-Za-z0-9]+/.*"
|
||||
+ _V2_FRONTEND_PATHS
|
||||
+ r")$"
|
||||
),
|
||||
)
|
||||
|
||||
# fmt: on
|
||||
|
||||
|
||||
def _get_app_security_patterns(request: Request) -> _AppSecurityPatterns:
|
||||
"""Return the correct pattern set based on the request's API version."""
|
||||
if request.path.startswith("/v2/"):
|
||||
return _V2_PATTERNS
|
||||
return _V1_PATTERNS
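Pattern selection is driven purely by the `/v2/` path prefix, with the v1 set as the fallback. A reduced sketch of that dispatch; the `Patterns` dataclass and its single field are invented for the example:

```
from dataclasses import dataclass
import re


@dataclass(frozen=True)
class Patterns:
    no_security_check: re.Pattern


V1 = Patterns(no_security_check=re.compile(r"^/supervisor/ping$"))
V2 = Patterns(no_security_check=re.compile(r"^/v2/ingress/[-_A-Za-z0-9]+/.*$"))


def patterns_for(path: str) -> Patterns:
    """Pick the pattern set from the API version prefix of the path."""
    return V2 if path.startswith("/v2/") else V1


print(patterns_for("/v2/ingress/a1b2/index.html") is V2)  # True
print(patterns_for("/supervisor/ping") is V1)             # True
```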
class SecurityMiddleware(CoreSysAttributes):
|
||||
"""Security middleware functions."""
|
||||
|
||||
@@ -278,8 +180,8 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
|
||||
@middleware
|
||||
async def block_bad_requests(
|
||||
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
|
||||
) -> StreamResponse:
|
||||
self, request: Request, handler: RequestHandler
|
||||
) -> Response:
|
||||
"""Process request and tblock commonly known exploit attempts."""
|
||||
if FILTERS.search(self._recursive_unquote(request.path)):
|
||||
_LOGGER.warning(
|
||||
@@ -298,10 +200,14 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
|
||||
@middleware
|
||||
async def system_validation(
|
||||
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
|
||||
) -> StreamResponse:
|
||||
self, request: Request, handler: RequestHandler
|
||||
) -> Response:
|
||||
"""Check if core is ready to response."""
|
||||
if self.sys_core.state not in VALID_API_STATES:
|
||||
if self.sys_core.state not in (
|
||||
CoreState.STARTUP,
|
||||
CoreState.RUNNING,
|
||||
CoreState.FREEZE,
|
||||
):
|
||||
return api_return_error(
|
||||
message=f"System is not ready with state: {self.sys_core.state}"
|
||||
)
|
||||
@@ -310,12 +216,11 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
|
||||
@middleware
|
||||
async def token_validation(
|
||||
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
|
||||
) -> StreamResponse:
|
||||
self, request: Request, handler: RequestHandler
|
||||
) -> Response:
|
||||
"""Check security access of this layer."""
|
||||
request_from: CoreSysAttributes | None = None
|
||||
supervisor_token = extract_supervisor_token(request)
|
||||
patterns = _get_app_security_patterns(request)
|
||||
request_from = None
|
||||
supervisor_token = excract_supervisor_token(request)
|
||||
|
||||
# Blacklist
|
||||
if BLACKLIST.match(request.path):
|
||||
@@ -323,7 +228,7 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
raise HTTPForbidden()
|
||||
|
||||
# Ignore security check
|
||||
if patterns.no_security_check.match(request.path):
|
||||
if NO_SECURITY_CHECK.match(request.path):
|
||||
_LOGGER.debug("Passthrough %s", request.path)
|
||||
request[REQUEST_FROM] = None
|
||||
return await handler(request)
|
||||
@@ -337,11 +242,8 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
if supervisor_token == self.sys_homeassistant.supervisor_token:
|
||||
_LOGGER.debug("%s access from Home Assistant", request.path)
|
||||
request_from = self.sys_homeassistant
|
||||
elif patterns.core_only.match(request.path):
|
||||
_LOGGER.warning(
|
||||
"Attempted access to %s from client besides Home Assistant",
|
||||
request.path,
|
||||
)
|
||||
elif CORE_ONLY_PATHS.match(request.path):
|
||||
_LOGGER.warning("Attempted access to %s from client besides Home Assistant")
|
||||
raise HTTPForbidden()
|
||||
|
||||
# Host
|
||||
@@ -357,24 +259,26 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
_LOGGER.debug("%s access from Observer", request.path)
|
||||
request_from = self.sys_plugins.observer
|
||||
|
||||
# App
|
||||
app = None
|
||||
# Add-on
|
||||
addon = None
|
||||
if supervisor_token and not request_from:
|
||||
app = self.sys_apps.from_token(supervisor_token)
|
||||
addon = self.sys_addons.from_token(supervisor_token)
|
||||
|
||||
# Check App API access
|
||||
if app and patterns.api_bypass.match(request.path):
|
||||
_LOGGER.debug("Passthrough %s from %s", request.path, app.slug)
|
||||
request_from = app
|
||||
elif app and app.access_hassio_api:
|
||||
# Check Add-on API access
|
||||
if addon and ADDONS_API_BYPASS.match(request.path):
|
||||
_LOGGER.debug("Passthrough %s from %s", request.path, addon.slug)
|
||||
request_from = addon
|
||||
elif addon and addon.access_hassio_api:
|
||||
# Check Role
|
||||
if patterns.role_access[app.hassio_role].match(request.path):
|
||||
_LOGGER.info("%s access from %s", request.path, app.slug)
|
||||
request_from = app
|
||||
if ADDONS_ROLE_ACCESS[addon.hassio_role].match(request.path):
|
||||
_LOGGER.info("%s access from %s", request.path, addon.slug)
|
||||
request_from = addon
|
||||
else:
|
||||
_LOGGER.warning("%s no role for %s", request.path, app.slug)
|
||||
elif app:
|
||||
_LOGGER.warning("%s missing API permission for %s", app.slug, request.path)
|
||||
_LOGGER.warning("%s no role for %s", request.path, addon.slug)
|
||||
elif addon:
|
||||
_LOGGER.warning(
|
||||
"%s missing API permission for %s", addon.slug, request.path
|
||||
)
|
||||
|
||||
if request_from:
|
||||
request[REQUEST_FROM] = request_from
|
||||
@@ -384,9 +288,7 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
raise HTTPForbidden()
|
||||
|
||||
@middleware
|
||||
async def core_proxy(
|
||||
self, request: Request, handler: Callable[[Request], Awaitable[StreamResponse]]
|
||||
) -> StreamResponse:
|
||||
async def core_proxy(self, request: Request, handler: RequestHandler) -> Response:
|
||||
"""Validate user from Core API proxy."""
|
||||
if (
|
||||
request[REQUEST_FROM] != self.sys_homeassistant
|
||||
@@ -422,9 +324,8 @@ class SecurityMiddleware(CoreSysAttributes):
|
||||
and content_type_index - authorization_index == 1
|
||||
)
|
||||
|
||||
patterns = _get_app_security_patterns(request)
|
||||
if (
|
||||
not patterns.supervisor_frontend.match(request.path) and is_proxy_request
|
||||
not CORE_FRONTEND.match(request.path) and is_proxy_request
|
||||
) or ingress_request:
|
||||
raise HTTPBadRequest()
|
||||
return await handler(request)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
"""Inits file for supervisor mounts REST API."""
|
||||
|
||||
from typing import Any, cast
|
||||
from typing import Any
|
||||
|
||||
from aiohttp import web
|
||||
import voluptuous as vol
|
||||
@@ -10,7 +10,7 @@ from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APIError, APINotFound
|
||||
from ..mounts.const import ATTR_DEFAULT_BACKUP_MOUNT, MountUsage
|
||||
from ..mounts.mount import Mount
|
||||
from ..mounts.validate import SCHEMA_MOUNT_CONFIG, MountData
|
||||
from ..mounts.validate import SCHEMA_MOUNT_CONFIG
|
||||
from .const import ATTR_MOUNTS, ATTR_USER_PATH
|
||||
from .utils import api_process, api_validate
|
||||
|
||||
@@ -26,7 +26,7 @@ class APIMounts(CoreSysAttributes):
|
||||
|
||||
def _extract_mount(self, request: web.Request) -> Mount:
|
||||
"""Extract mount from request or raise."""
|
||||
name = request.match_info["mount"]
|
||||
name = request.match_info.get("mount")
|
||||
if name not in self.sys_mounts:
|
||||
raise APINotFound(f"No mount exists with name {name}")
|
||||
return self.sys_mounts.get(name)
|
||||
@@ -71,10 +71,10 @@ class APIMounts(CoreSysAttributes):
|
||||
@api_process
|
||||
async def create_mount(self, request: web.Request) -> None:
|
||||
"""Create a new mount in supervisor."""
|
||||
body = cast(MountData, await api_validate(SCHEMA_MOUNT_CONFIG, request))
|
||||
body = await api_validate(SCHEMA_MOUNT_CONFIG, request)
|
||||
|
||||
if body["name"] in self.sys_mounts:
|
||||
raise APIError(f"A mount already exists with name {body['name']}")
|
||||
if body[ATTR_NAME] in self.sys_mounts:
|
||||
raise APIError(f"A mount already exists with name {body[ATTR_NAME]}")
|
||||
|
||||
mount = Mount.from_dict(self.coresys, body)
|
||||
await self.sys_mounts.create_mount(mount)
|
||||
@@ -97,10 +97,7 @@ class APIMounts(CoreSysAttributes):
|
||||
{vol.Optional(ATTR_NAME, default=current.name): current.name},
|
||||
extra=vol.ALLOW_EXTRA,
|
||||
)
|
||||
body = cast(
|
||||
MountData,
|
||||
await api_validate(vol.All(name_schema, SCHEMA_MOUNT_CONFIG), request),
|
||||
)
|
||||
body = await api_validate(vol.All(name_schema, SCHEMA_MOUNT_CONFIG), request)
|
||||
|
||||
mount = Mount.from_dict(self.coresys, body)
|
||||
await self.sys_mounts.create_mount(mount)
|
||||
|
||||
@@ -10,7 +10,6 @@ import voluptuous as vol
|
||||
|
||||
from ..const import (
|
||||
ATTR_ACCESSPOINTS,
|
||||
ATTR_ADDR_GEN_MODE,
|
||||
ATTR_ADDRESS,
|
||||
ATTR_AUTH,
|
||||
ATTR_CONNECTED,
|
||||
@@ -23,12 +22,9 @@ from ..const import (
|
||||
ATTR_ID,
|
||||
ATTR_INTERFACE,
|
||||
ATTR_INTERFACES,
|
||||
ATTR_IP6_PRIVACY,
|
||||
ATTR_IPV4,
|
||||
ATTR_IPV6,
|
||||
ATTR_LLMNR,
|
||||
ATTR_MAC,
|
||||
ATTR_MDNS,
|
||||
ATTR_METHOD,
|
||||
ATTR_MODE,
|
||||
ATTR_NAMESERVERS,
|
||||
@@ -36,28 +32,23 @@ from ..const import (
|
||||
ATTR_PRIMARY,
|
||||
ATTR_PSK,
|
||||
ATTR_READY,
|
||||
ATTR_ROUTE_METRIC,
|
||||
ATTR_SIGNAL,
|
||||
ATTR_SSID,
|
||||
ATTR_SUPERVISOR_INTERNET,
|
||||
ATTR_TYPE,
|
||||
ATTR_VLAN,
|
||||
ATTR_WIFI,
|
||||
DOCKER_IPV4_NETWORK_MASK,
|
||||
DOCKER_NETWORK,
|
||||
DOCKER_NETWORK_MASK,
|
||||
)
|
||||
from ..coresys import CoreSysAttributes
|
||||
from ..exceptions import APIError, APINotFound, HostNetworkNotFound
|
||||
from ..host.configuration import (
|
||||
AccessPoint,
|
||||
Interface,
|
||||
InterfaceAddrGenMode,
|
||||
InterfaceIp6Privacy,
|
||||
InterfaceMethod,
|
||||
Ip6Setting,
|
||||
IpConfig,
|
||||
IpSetting,
|
||||
MulticastDnsMode,
|
||||
VlanConfig,
|
||||
WifiConfig,
|
||||
)
|
||||
@@ -69,7 +60,6 @@ _SCHEMA_IPV4_CONFIG = vol.Schema(
|
||||
vol.Optional(ATTR_ADDRESS): [vol.Coerce(IPv4Interface)],
|
||||
vol.Optional(ATTR_METHOD): vol.Coerce(InterfaceMethod),
|
||||
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv4Address),
|
||||
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
|
||||
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv4Address)],
|
||||
}
|
||||
)
|
||||
@@ -78,10 +68,7 @@ _SCHEMA_IPV6_CONFIG = vol.Schema(
|
||||
{
|
||||
vol.Optional(ATTR_ADDRESS): [vol.Coerce(IPv6Interface)],
|
||||
vol.Optional(ATTR_METHOD): vol.Coerce(InterfaceMethod),
|
||||
vol.Optional(ATTR_ADDR_GEN_MODE): vol.Coerce(InterfaceAddrGenMode),
|
||||
vol.Optional(ATTR_IP6_PRIVACY): vol.Coerce(InterfaceIp6Privacy),
|
||||
vol.Optional(ATTR_GATEWAY): vol.Coerce(IPv6Address),
|
||||
vol.Optional(ATTR_ROUTE_METRIC): vol.Coerce(int),
|
||||
vol.Optional(ATTR_NAMESERVERS): [vol.Coerce(IPv6Address)],
|
||||
}
|
||||
)
|
||||
@@ -103,34 +90,17 @@ SCHEMA_UPDATE = vol.Schema(
vol.Optional(ATTR_IPV6): _SCHEMA_IPV6_CONFIG,
vol.Optional(ATTR_WIFI): _SCHEMA_WIFI_CONFIG,
vol.Optional(ATTR_ENABLED): vol.Boolean(),
vol.Optional(ATTR_MDNS): vol.Coerce(MulticastDnsMode),
vol.Optional(ATTR_LLMNR): vol.Coerce(MulticastDnsMode),
}
)


def ip4config_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
"""Return a dict with information about IPv4 configuration."""
def ipconfig_struct(config: IpConfig, setting: IpSetting) -> dict[str, Any]:
"""Return a dict with information about ip configuration."""
return {
ATTR_METHOD: setting.method,
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}


def ip6config_struct(config: IpConfig, setting: Ip6Setting) -> dict[str, Any]:
"""Return a dict with information about IPv6 configuration."""
return {
ATTR_METHOD: setting.method,
ATTR_ADDR_GEN_MODE: setting.addr_gen_mode,
ATTR_IP6_PRIVACY: setting.ip6_privacy,
ATTR_ADDRESS: [address.with_prefixlen for address in config.address],
ATTR_NAMESERVERS: [str(address) for address in config.nameservers],
ATTR_GATEWAY: str(config.gateway) if config.gateway else None,
ATTR_ROUTE_METRIC: setting.route_metric,
ATTR_READY: config.ready,
}

@@ -162,16 +132,10 @@ def interface_struct(interface: Interface) -> dict[str, Any]:
ATTR_CONNECTED: interface.connected,
ATTR_PRIMARY: interface.primary,
ATTR_MAC: interface.mac,
ATTR_IPV4: ip4config_struct(interface.ipv4, interface.ipv4setting)
if interface.ipv4 and interface.ipv4setting
else None,
ATTR_IPV6: ip6config_struct(interface.ipv6, interface.ipv6setting)
if interface.ipv6 and interface.ipv6setting
else None,
ATTR_IPV4: ipconfig_struct(interface.ipv4, interface.ipv4setting),
ATTR_IPV6: ipconfig_struct(interface.ipv6, interface.ipv6setting),
ATTR_WIFI: wifi_struct(interface.wifi) if interface.wifi else None,
ATTR_VLAN: vlan_struct(interface.vlan) if interface.vlan else None,
ATTR_MDNS: interface.mdns,
ATTR_LLMNR: interface.llmnr,
}


@@ -206,7 +170,7 @@ class APINetwork(CoreSysAttributes):
raise APINotFound(f"Interface {name} does not exist") from None

@api_process
async def info(self, _: web.Request) -> dict[str, Any]:
async def info(self, request: web.Request) -> dict[str, Any]:
"""Return network information."""
return {
ATTR_INTERFACES: [
@@ -215,7 +179,7 @@ class APINetwork(CoreSysAttributes):
],
ATTR_DOCKER: {
ATTR_INTERFACE: DOCKER_NETWORK,
ATTR_ADDRESS: str(DOCKER_IPV4_NETWORK_MASK),
ATTR_ADDRESS: str(DOCKER_NETWORK_MASK),
ATTR_GATEWAY: str(self.sys_docker.network.gateway),
ATTR_DNS: str(self.sys_docker.network.dns),
},
@@ -226,14 +190,14 @@ class APINetwork(CoreSysAttributes):
@api_process
async def interface_info(self, request: web.Request) -> dict[str, Any]:
"""Return network information for a interface."""
interface = self._get_interface(request.match_info[ATTR_INTERFACE])
interface = self._get_interface(request.match_info.get(ATTR_INTERFACE))

return interface_struct(interface)

@api_process
async def interface_update(self, request: web.Request) -> None:
"""Update the configuration of an interface."""
interface = self._get_interface(request.match_info[ATTR_INTERFACE])
interface = self._get_interface(request.match_info.get(ATTR_INTERFACE))

# Validate data
body = await api_validate(SCHEMA_UPDATE, request)
@@ -244,45 +208,33 @@ class APINetwork(CoreSysAttributes):
for key, config in body.items():
if key == ATTR_IPV4:
interface.ipv4setting = IpSetting(
method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
config.get(ATTR_METHOD, InterfaceMethod.STATIC),
config.get(ATTR_ADDRESS, []),
config.get(ATTR_GATEWAY),
config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_IPV6:
interface.ipv6setting = Ip6Setting(
method=config.get(ATTR_METHOD, InterfaceMethod.STATIC),
addr_gen_mode=config.get(
ATTR_ADDR_GEN_MODE, InterfaceAddrGenMode.DEFAULT
),
ip6_privacy=config.get(
ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
),
address=config.get(ATTR_ADDRESS, []),
gateway=config.get(ATTR_GATEWAY),
route_metric=config.get(ATTR_ROUTE_METRIC),
nameservers=config.get(ATTR_NAMESERVERS, []),
interface.ipv6setting = IpSetting(
config.get(ATTR_METHOD, InterfaceMethod.STATIC),
config.get(ATTR_ADDRESS, []),
config.get(ATTR_GATEWAY),
config.get(ATTR_NAMESERVERS, []),
)
elif key == ATTR_WIFI:
interface.wifi = WifiConfig(
mode=config.get(ATTR_MODE, WifiMode.INFRASTRUCTURE),
ssid=config.get(ATTR_SSID, ""),
auth=config.get(ATTR_AUTH, AuthMethod.OPEN),
psk=config.get(ATTR_PSK, None),
signal=None,
config.get(ATTR_MODE, WifiMode.INFRASTRUCTURE),
config.get(ATTR_SSID, ""),
config.get(ATTR_AUTH, AuthMethod.OPEN),
config.get(ATTR_PSK, None),
None,
)
elif key == ATTR_ENABLED:
interface.enabled = config
elif key == ATTR_MDNS:
interface.mdns = config
elif key == ATTR_LLMNR:
interface.llmnr = config

await asyncio.shield(self.sys_host.network.apply_changes(interface))

@api_process
def reload(self, _: web.Request) -> Awaitable[None]:
def reload(self, request: web.Request) -> Awaitable[None]:
"""Reload network data."""
return asyncio.shield(
self.sys_host.network.update(force_connectivity_check=True)
@@ -291,7 +243,7 @@ class APINetwork(CoreSysAttributes):
@api_process
async def scan_accesspoints(self, request: web.Request) -> dict[str, Any]:
"""Scan and return a list of available networks."""
interface = self._get_interface(request.match_info[ATTR_INTERFACE])
interface = self._get_interface(request.match_info.get(ATTR_INTERFACE))

# Only wlan is supported
if interface.type != InterfaceType.WIRELESS:
@@ -304,10 +256,8 @@ class APINetwork(CoreSysAttributes):
@api_process
async def create_vlan(self, request: web.Request) -> None:
"""Create a new vlan."""
interface = self._get_interface(request.match_info[ATTR_INTERFACE])
vlan = int(request.match_info.get(ATTR_VLAN, -1))
if vlan < 0:
raise APIError(f"Invalid vlan specified: {vlan}")
interface = self._get_interface(request.match_info.get(ATTR_INTERFACE))
vlan = int(request.match_info.get(ATTR_VLAN))

# Only ethernet is supported
if interface.type != InterfaceType.ETHERNET:
@@ -318,43 +268,26 @@ class APINetwork(CoreSysAttributes):

vlan_config = VlanConfig(vlan, interface.name)

mdns_mode = MulticastDnsMode.DEFAULT
llmnr_mode = MulticastDnsMode.DEFAULT

if ATTR_MDNS in body:
mdns_mode = body[ATTR_MDNS]

if ATTR_LLMNR in body:
llmnr_mode = body[ATTR_LLMNR]

ipv4_setting = None
if ATTR_IPV4 in body:
ipv4_setting = IpSetting(
method=body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
address=body[ATTR_IPV4].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV4].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV4].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
body[ATTR_IPV4].get(ATTR_METHOD, InterfaceMethod.AUTO),
body[ATTR_IPV4].get(ATTR_ADDRESS, []),
body[ATTR_IPV4].get(ATTR_GATEWAY, None),
body[ATTR_IPV4].get(ATTR_NAMESERVERS, []),
)

ipv6_setting = None
if ATTR_IPV6 in body:
ipv6_setting = Ip6Setting(
method=body[ATTR_IPV6].get(ATTR_METHOD, InterfaceMethod.AUTO),
addr_gen_mode=body[ATTR_IPV6].get(
ATTR_ADDR_GEN_MODE, InterfaceAddrGenMode.DEFAULT
),
ip6_privacy=body[ATTR_IPV6].get(
ATTR_IP6_PRIVACY, InterfaceIp6Privacy.DEFAULT
),
address=body[ATTR_IPV6].get(ATTR_ADDRESS, []),
gateway=body[ATTR_IPV6].get(ATTR_GATEWAY),
route_metric=body[ATTR_IPV6].get(ATTR_ROUTE_METRIC),
nameservers=body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
ipv6_setting = IpSetting(
body[ATTR_IPV6].get(ATTR_METHOD, InterfaceMethod.AUTO),
body[ATTR_IPV6].get(ATTR_ADDRESS, []),
body[ATTR_IPV6].get(ATTR_GATEWAY, None),
body[ATTR_IPV6].get(ATTR_NAMESERVERS, []),
)

vlan_interface = Interface(
f"{interface.name}.{vlan}",
"",
"",
"",
True,
@@ -367,7 +300,5 @@ class APINetwork(CoreSysAttributes):
ipv6_setting,
None,
vlan_config,
mdns=mdns_mode,
llmnr=llmnr_mode,
)
await asyncio.shield(self.sys_host.network.create_vlan(vlan_interface))
await asyncio.shield(self.sys_host.network.apply_changes(vlan_interface))
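One behavioural difference in the create_vlan hunk above is the guard around the VLAN id taken from the URL path. A minimal standalone sketch of that validation (the parse_vlan_id helper is hypothetical, not part of the Supervisor code):

```python
def parse_vlan_id(raw: str | None) -> int:
    """Parse a VLAN id from a URL path segment, rejecting missing or negative values."""
    vlan = int(raw) if raw is not None else -1  # a missing segment falls back to -1
    if vlan < 0:
        raise ValueError(f"Invalid vlan specified: {vlan}")
    return vlan


print(parse_vlan_id("10"))  # 10
# parse_vlan_id(None) and parse_vlan_id("-1") both raise ValueError
```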
@@ -3,7 +3,6 @@
import asyncio
from collections.abc import Awaitable
import logging
import re
from typing import Any

from aiohttp import web
@@ -22,14 +21,12 @@ from ..const import (
ATTR_SERIAL,
ATTR_SIZE,
ATTR_STATE,
ATTR_SWAP_SIZE,
ATTR_SWAPPINESS,
ATTR_UPDATE_AVAILABLE,
ATTR_VERSION,
ATTR_VERSION_LATEST,
)
from ..coresys import CoreSysAttributes
from ..exceptions import APINotFound, BoardInvalidError
from ..exceptions import BoardInvalidError
from ..resolution.const import ContextType, IssueType, SuggestionType
from ..validate import version_tag
from .const import (
@@ -68,15 +65,6 @@ SCHEMA_GREEN_OPTIONS = vol.Schema(
vol.Optional(ATTR_SYSTEM_HEALTH_LED): vol.Boolean(),
}
)

RE_SWAP_SIZE = re.compile(r"^\d+([KMG](i?B)?|B)?$", re.IGNORECASE)

SCHEMA_SWAP_OPTIONS = vol.Schema(
{
vol.Optional(ATTR_SWAP_SIZE): vol.Match(RE_SWAP_SIZE),
vol.Optional(ATTR_SWAPPINESS): vol.All(int, vol.Range(min=0, max=200)),
}
)
# pylint: enable=no-value-for-parameter
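The swap-size pattern and options schema added above are straightforward to exercise on their own. A minimal sketch, assuming plain string keys in place of the ATTR_* constants:

```python
import re

import voluptuous as vol

# Mirrors the swap-size pattern and options schema shown above; the literal
# "swap_size"/"swappiness" keys are assumptions made for this example.
RE_SWAP_SIZE = re.compile(r"^\d+([KMG](i?B)?|B)?$", re.IGNORECASE)

SCHEMA_SWAP_OPTIONS = vol.Schema(
    {
        vol.Optional("swap_size"): vol.Match(RE_SWAP_SIZE),
        vol.Optional("swappiness"): vol.All(int, vol.Range(min=0, max=200)),
    }
)

print(SCHEMA_SWAP_OPTIONS({"swap_size": "2GiB", "swappiness": 60}))  # accepted
# SCHEMA_SWAP_OPTIONS({"swap_size": "2 gigabytes"}) would raise vol.Invalid
```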
@@ -224,53 +212,3 @@ class APIOS(CoreSysAttributes):
|
||||
)
|
||||
|
||||
return {}
|
||||
|
||||
@api_process
|
||||
async def config_swap_info(self, request: web.Request) -> dict[str, Any]:
|
||||
"""Get swap settings."""
|
||||
if (
|
||||
not self.coresys.os.available
|
||||
or not self.coresys.os.version
|
||||
or self.coresys.os.version < "15.0"
|
||||
):
|
||||
raise APINotFound(
|
||||
"Home Assistant OS 15.0 or newer required for swap settings"
|
||||
)
|
||||
|
||||
return {
|
||||
ATTR_SWAP_SIZE: self.sys_dbus.agent.swap.swap_size,
|
||||
ATTR_SWAPPINESS: self.sys_dbus.agent.swap.swappiness,
|
||||
}
|
||||
|
||||
@api_process
|
||||
async def config_swap_options(self, request: web.Request) -> None:
|
||||
"""Update swap settings."""
|
||||
if (
|
||||
not self.coresys.os.available
|
||||
or not self.coresys.os.version
|
||||
or self.coresys.os.version < "15.0"
|
||||
):
|
||||
raise APINotFound(
|
||||
"Home Assistant OS 15.0 or newer required for swap settings"
|
||||
)
|
||||
|
||||
body = await api_validate(SCHEMA_SWAP_OPTIONS, request)
|
||||
|
||||
reboot_required = False
|
||||
|
||||
if ATTR_SWAP_SIZE in body:
|
||||
old_size = self.sys_dbus.agent.swap.swap_size
|
||||
await self.sys_dbus.agent.swap.set_swap_size(body[ATTR_SWAP_SIZE])
|
||||
reboot_required = reboot_required or old_size != body[ATTR_SWAP_SIZE]
|
||||
|
||||
if ATTR_SWAPPINESS in body:
|
||||
old_swappiness = self.sys_dbus.agent.swap.swappiness
|
||||
await self.sys_dbus.agent.swap.set_swappiness(body[ATTR_SWAPPINESS])
|
||||
reboot_required = reboot_required or old_swappiness != body[ATTR_SWAPPINESS]
|
||||
|
||||
if reboot_required:
|
||||
self.sys_resolution.create_issue(
|
||||
IssueType.REBOOT_REQUIRED,
|
||||
ContextType.SYSTEM,
|
||||
suggestions=[SuggestionType.EXECUTE_REBOOT],
|
||||
)
|
||||
|
||||
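The config_swap_options handler above only flags a reboot when a submitted value actually differs from the current one. A minimal sketch of that decision, with hypothetical plain dicts standing in for the D-Bus agent state and the validated request body:

```python
def needs_reboot(current: dict, body: dict) -> bool:
    """Return True when any submitted swap option differs from the current value."""
    reboot_required = False
    for key in ("swap_size", "swappiness"):
        if key in body:
            reboot_required = reboot_required or current[key] != body[key]
    return reboot_required


print(needs_reboot({"swap_size": "1G", "swappiness": 60}, {"swappiness": 60}))   # False
print(needs_reboot({"swap_size": "1G", "swappiness": 60}, {"swap_size": "2G"}))  # True
```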
@@ -1 +1 @@
!function(){function d(d){var e=document.createElement("script");e.src=d,document.body.appendChild(e)}if(/Edge?\/(13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Firefox\/(13[1-9]|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Chrom(ium|e)\/(10[5-9]|1[1-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|(Maci|X1{2}).+ Version\/(18\.([1-9]|\d{2,})|(19|[2-9]\d|\d{3,})\.\d+)([,.]\d+|)( \(\w+\)|)( Mobile\/\w+|) Safari\/|Chrome.+OPR\/(1{2}[5-9]|1[2-9]\d|[2-9]\d{2}|\d{4,})\.\d+\.\d+|(CPU[ +]OS|iPhone[ +]OS|CPU[ +]iPhone|CPU IPhone OS|CPU iPad OS)[ +]+(18[._]([1-9]|\d{2,})|(19|[2-9]\d|\d{3,})[._]\d+)([._]\d+|)|Android:?[ /-](13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})(\.\d+|)(\.\d+|)|Mobile Safari.+OPR\/([89]\d|\d{3,})\.\d+\.\d+|Android.+Firefox\/(13[1-9]|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Android.+Chrom(ium|e)\/(13\d|1[4-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|SamsungBrowser\/(2[89]|[3-9]\d|\d{3,})\.\d+|Home As{2}istant\/[\d.]+ \(.+; macOS (1[3-9]|[2-9]\d|\d{3,})\.\d+(\.\d+)?\)/.test(navigator.userAgent))try{new Function("import('/api/hassio/app/frontend_latest/entrypoint.1e251476306cafd4.js')")()}catch(e){d("/api/hassio/app/frontend_es5/entrypoint.601ff5d4dddd11f9.js")}else d("/api/hassio/app/frontend_es5/entrypoint.601ff5d4dddd11f9.js")}()
!function(){function d(d){var e=document.createElement("script");e.src=d,document.body.appendChild(e)}if(/Edge?\/(12[2-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Firefox\/(12[4-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Chrom(ium|e)\/(109|1[1-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|(Maci|X1{2}).+ Version\/(17\.([4-9]|\d{2,})|(1[89]|[2-9]\d|\d{3,})\.\d+)([,.]\d+|)( \(\w+\)|)( Mobile\/\w+|) Safari\/|Chrome.+OPR\/(10[89]|1[1-9]\d|[2-9]\d{2}|\d{4,})\.\d+\.\d+|(CPU[ +]OS|iPhone[ +]OS|CPU[ +]iPhone|CPU IPhone OS|CPU iPad OS)[ +]+(15[._]([6-9]|\d{2,})|(1[6-9]|[2-9]\d|\d{3,})[._]\d+)([._]\d+|)|Android:?[ /-](12[3-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})(\.\d+|)(\.\d+|)|Mobile Safari.+OPR\/([89]\d|\d{3,})\.\d+\.\d+|Android.+Firefox\/(12[4-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|Android.+Chrom(ium|e)\/(12[3-9]|1[3-9]\d|[2-9]\d{2}|\d{4,})\.\d+(\.\d+|)|SamsungBrowser\/(2[4-9]|[3-9]\d|\d{3,})\.\d+|Home As{2}istant\/[\d.]+ \(.+; macOS (1[2-9]|[2-9]\d|\d{3,})\.\d+(\.\d+)?\)/.test(navigator.userAgent))try{new Function("import('/api/hassio/app/frontend_latest/entrypoint.9ac99222ee42fbb3.js')")()}catch(e){d("/api/hassio/app/frontend_es5/entrypoint.85ccafe1fda9d9a5.js")}else d("/api/hassio/app/frontend_es5/entrypoint.85ccafe1fda9d9a5.js")}()
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
@@ -1 +0,0 @@
{"version":3,"file":"1057.d306824fd6aa0497.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/auth.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/entity.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/media-player.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/data/tts.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250925.1/src/util/brands-url.ts"],"names":["autocompleteLoginFields","schema","map","field","type","name","Object","assign","autocomplete","autofocus","getSignedPath","hass","path","callWS","UNAVAILABLE","UNKNOWN","ON","OFF","UNAVAILABLE_STATES","OFF_STATES","isUnavailableState","arrayLiteralIncludes","MediaPlayerEntityFeature","BROWSER_PLAYER","MediaClassBrowserSettings","album","icon","layout","app","show_list_images","artist","mdiAccountMusic","channel","mdiTelevisionClassic","thumbnail_ratio","composer","contributing_artist","directory","episode","game","genre","image","movie","music","playlist","podcast","season","track","tv_show","url","video","browseMediaPlayer","entityId","mediaContentId","mediaContentType","entity_id","media_content_id","media_content_type","convertTextToSpeech","data","callApi","TTS_MEDIA_SOURCE_PREFIX","isTTSMediaSource","startsWith","getProviderFromTTSMediaSource","substring","listTTSEngines","language","country","getTTSEngine","engine_id","listTTSVoices","brandsUrl","options","brand","useFallback","domain","darkOptimized","extractDomainFromBrandUrl","split","isBrandUrl","thumbnail"],"mappings":"2QAyBO,MAEMA,EAA2BC,GACtCA,EAAOC,IAAKC,IACV,GAAmB,WAAfA,EAAMC,KAAmB,OAAOD,EACpC,OAAQA,EAAME,MACZ,IAAK,WACH,OAAAC,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,WAAYC,WAAW,IAC1D,IAAK,WACH,OAAAH,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,qBACnC,IAAK,OACH,OAAAF,OAAAC,OAAAD,OAAAC,OAAA,GAAYJ,GAAK,IAAEK,aAAc,gBAAiBC,WAAW,IAC/D,QACE,OAAON,KAIFO,EAAgBA,CAC3BC,EACAC,IACwBD,EAAKE,OAAO,CAAET,KAAM,iBAAkBQ,Q,gMC3CzD,MAAME,EAAc,cACdC,EAAU,UACVC,EAAK,KACLC,EAAM,MAENC,EAAqB,CAACJ,EAAaC,GACnCI,EAAa,CAACL,EAAaC,EAASE,GAEpCG,GAAqBC,EAAAA,EAAAA,GAAqBH,IAC7BG,EAAAA,EAAAA,GAAqBF,E,+gCCuExC,IAAWG,EAAA,SAAAA,G,qnBAAAA,C,CAAA,C,IAyBX,MAAMC,EAAiB,UAWjBC,EAGT,CACFC,MAAO,CAAEC,K,mQAAgBC,OAAQ,QACjCC,IAAK,CAAEF,K,6GAAsBC,OAAQ,OAAQE,kBAAkB,GAC/DC,OAAQ,CAAEJ,KAAMK,EAAiBJ,OAAQ,OAAQE,kBAAkB,GACnEG,QAAS,CACPN,KAAMO,EACNC,gBAAiB,WACjBP,OAAQ,OACRE,kBAAkB,GAEpBM,SAAU,CACRT,K,4cACAC,OAAQ,OACRE,kBAAkB,GAEpBO,oBAAqB,CACnBV,KAAMK,EACNJ,OAAQ,OACRE,kBAAkB,GAEpBQ,UAAW,CAAEX,K,gGAAiBC,OAAQ,OAAQE,kBAAkB,GAChES,QAAS,CACPZ,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,WACjBL,kBAAkB,GAEpBU,KAAM,CACJb,K,qWACAC,OAAQ,OACRO,gBAAiB,YAEnBM,MAAO,CAAEd,K,4hCAAqBC,OAAQ,OAAQE,kBAAkB,GAChEY,MAAO,CAAEf,K,sHAAgBC,OAAQ,OAAQE,kBAAkB,GAC3Da,MAAO,CACLhB,K,6GACAQ,gBAAiB,WACjBP,OAAQ,OACRE,kBAAkB,GAEpBc,MAAO,CAAEjB,K,+NAAgBG,kBAAkB,GAC3Ce,SAAU,CAAElB,K,mJAAwBC,OAAQ,OAAQE,kBAAkB,GACtEgB,QAAS,CAAEnB,K,qpBAAkBC,OAAQ,QACrCmB,OAAQ,CACNpB,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,WACjBL,kBAAkB,GAEpBkB,MAAO,CAAErB,K,mLACTsB,QAAS,CACPtB,KAAMO,EACNN,OAAQ,OACRO,gBAAiB,YAEnBe,IAAK,CAAEvB,K,w5BACPwB,MAAO,CAAExB,K,2GAAgBC,OAAQ,OAAQE,kBAAkB,IAkChDsB,EAAoBA,CAC/BxC,EACAyC,EACAC,EACAC,IAEA3C,EAAKE,OAAwB,CAC3BT,KAAM,4BACNmD,UAAWH,EACXI,iBAAkBH,EAClBI,mBAAoBH,G,yLC/MjB,MAAMI,EAAsBA,CACjC/C,EACAgD,IAOGhD,EAAKiD,QAAuC,OAAQ,cAAeD,GAElEE,EAA0B,sBAEnBC,EAAoBT,GAC/BA,EAAeU,WAAWF,GAEfG,EAAiCX,GAC5CA,EAAeY,UAAUJ,IAEdK,EAAiBA,CAC5BvD,EACAwD,EACAC,IAEAzD,EAAKE,OAAO,CACVT,KAAM,kBACN+D,WACAC,YAGSC,EAAeA,CAC1B1D,EACA2D,IAEA3D,
EAAKE,OAAO,CACVT,KAAM,iBACNkE,cAGSC,EAAgBA,CAC3B5D,EACA2D,EACAH,IAEAxD,EAAKE,OAAO,CACVT,KAAM,oBACNkE,YACAH,Y,kHC9CG,MAAMK,EAAaC,GACxB,oCAAoCA,EAAQC,MAAQ,UAAY,KAC9DD,EAAQE,YAAc,KAAO,KAC5BF,EAAQG,UAAUH,EAAQI,cAAgB,QAAU,KACrDJ,EAAQrE,WAQC0E,EAA6B7B,GAAgBA,EAAI8B,MAAM,KAAK,GAE5DC,EAAcC,GACzBA,EAAUlB,WAAW,oC"}
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1081.e647cbe586ff9dd0.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1081.e647cbe586ff9dd0.js.gz
Normal file
Binary file not shown.
@@ -0,0 +1 @@
{"version":3,"file":"1081.e647cbe586ff9dd0.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250221.0/src/components/ha-button-toggle-group.ts","https://raw.githubusercontent.com/home-assistant/frontend/20250221.0/src/components/ha-selector/ha-selector-button-toggle.ts"],"names":["_decorate","customElement","_initialize","_LitElement","F","constructor","args","d","kind","decorators","property","attribute","key","value","type","Boolean","queryAll","html","_t","_","this","buttons","map","button","iconPath","_t2","label","active","_handleClick","_t3","styleMap","width","fullWidth","length","dense","_this$_buttons","_buttons","forEach","async","updateComplete","shadowRoot","querySelector","style","margin","ev","currentTarget","fireEvent","static","css","_t4","LitElement","HaButtonToggleSelector","_this$selector$button","_this$selector$button2","_this$selector$button3","options","selector","button_toggle","option","translationKey","translation_key","localizeValue","localizedLabel","sort","a","b","caseInsensitiveStringCompare","hass","locale","language","toggleButtons","item","_valueChanged","_ev$detail","_this$value","stopPropagation","detail","target","disabled","undefined"],"mappings":"sXAWgCA,EAAAA,EAAAA,GAAA,EAD/BC,EAAAA,EAAAA,IAAc,4BAAyB,SAAAC,EAAAC,GAkIvC,OAAAC,EAlID,cACgCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAC7BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,UAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEC,UAAW,aAAcG,KAAMC,WAAUH,IAAA,YAAAC,KAAAA,GAAA,OAClC,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEvBC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,QAAAC,KAAAA,GAAA,OAAgB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEhDO,EAAAA,EAAAA,IAAS,eAAaJ,IAAA,WAAAC,WAAA,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEvB,WACE,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,uBAELC,KAAKC,QAAQC,KAAKC,GAClBA,EAAOC,UACHP,EAAAA,EAAAA,IAAIQ,IAAAA,EAAAN,CAAA,2GACOI,EAAOG,MACRH,EAAOC,SACND,EAAOV,MACNO,KAAKO,SAAWJ,EAAOV,MACxBO,KAAKQ,eAEhBX,EAAAA,EAAAA,IAAIY,IAAAA,EAAAV,CAAA,iHACMW,EAAAA,EAAAA,GAAS,CACfC,MAAOX,KAAKY,UACL,IAAMZ,KAAKC,QAAQY,OAAtB,IACA,YAGGb,KAAKc,MACLX,EAAOV,MACNO,KAAKO,SAAWJ,EAAOV,MACxBO,KAAKQ,aACXL,EAAOG,SAKxB,GAAC,CAAAlB,KAAA,SAAAI,IAAA,UAAAC,MAED,WAAoB,IAAAsB,EAEL,QAAbA,EAAAf,KAAKgB,gBAAQ,IAAAD,GAAbA,EAAeE,SAAQC,gBACff,EAAOgB,eAEXhB,EAAOiB,WAAYC,cAAc,UACjCC,MAAMC,OAAS,GAAG,GAExB,GAAC,CAAAnC,KAAA,SAAAI,IAAA,eAAAC,MAED,SAAqB+B,GACnBxB,KAAKO,OAASiB,EAAGC,cAAchC,OAC/BiC,EAAAA,EAAAA,GAAU1B,KAAM,gBAAiB,CAAEP,MAAOO,KAAKO,QACjD,GAAC,CAAAnB,KAAA,QAAAuC,QAAA,EAAAnC,IAAA,SAAAC,KAAAA,GAAA,OAEemC,EAAAA,EAAAA,IAAGC,IAAAA,EAAA9B,CAAA,u0CAzDoB+B,EAAAA,I,MCD5BC,GAAsBnD,EAAAA,EAAAA,GAAA,EADlCC,EAAAA,EAAAA,IAAc,+BAA4B,SAAAC,EAAAC,GA4F1C,OAAAC,EA5FD,cACmCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAChCC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,OAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,WAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,gBAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAG9BC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAEnDC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAI,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEnD,WAAmB,IAAAuC,EAAAC,EAAAC,EACjB,MAAMC,GACuB,QAA3BH,EAAAhC,KAAKoC,SAASC,qBAAa,IAAAL,GAAS,QAATA
,EAA3BA,EAA6BG,eAAO,IAAAH,OAAA,EAApCA,EAAsC9B,KAAKoC,GACvB,iBAAXA,EACFA,EACA,CAAE7C,MAAO6C,EAAQhC,MAAOgC,OAC1B,GAEDC,EAA4C,QAA9BN,EAAGjC,KAAKoC,SAASC,qBAAa,IAAAJ,OAAA,EAA3BA,EAA6BO,gBAEhDxC,KAAKyC,eAAiBF,GACxBJ,EAAQlB,SAASqB,IACf,MAAMI,EAAiB1C,KAAKyC,cAC1B,GAAGF,aAA0BD,EAAO7C,SAElCiD,IACFJ,EAAOhC,MAAQoC,EACjB,IAI2B,QAA/BR,EAAIlC,KAAKoC,SAASC,qBAAa,IAAAH,GAA3BA,EAA6BS,MAC/BR,EAAQQ,MAAK,CAACC,EAAGC,KACfC,EAAAA,EAAAA,GACEF,EAAEtC,MACFuC,EAAEvC,MACFN,KAAK+C,KAAKC,OAAOC,YAKvB,MAAMC,EAAgCf,EAAQjC,KAAKiD,IAAkB,CACnE7C,MAAO6C,EAAK7C,MACZb,MAAO0D,EAAK1D,UAGd,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,iHACPC,KAAKM,MAEM4C,EACDlD,KAAKP,MACEO,KAAKoD,cAG5B,GAAC,CAAAhE,KAAA,SAAAI,IAAA,gBAAAC,MAED,SAAsB+B,GAAI,IAAA6B,EAAAC,EACxB9B,EAAG+B,kBAEH,MAAM9D,GAAiB,QAAT4D,EAAA7B,EAAGgC,cAAM,IAAAH,OAAA,EAATA,EAAW5D,QAAS+B,EAAGiC,OAAOhE,MACxCO,KAAK0D,eAAsBC,IAAVlE,GAAuBA,KAAqB,QAAhB6D,EAAMtD,KAAKP,aAAK,IAAA6D,EAAAA,EAAI,MAGrE5B,EAAAA,EAAAA,GAAU1B,KAAM,gBAAiB,CAC/BP,MAAOA,GAEX,GAAC,CAAAL,KAAA,QAAAuC,QAAA,EAAAnC,IAAA,SAAAC,KAAAA,GAAA,OAEemC,EAAAA,EAAAA,IAAGvB,IAAAA,EAAAN,CAAA,wLA5EuB+B,EAAAA,G"}
File diff suppressed because one or more lines are too long
@@ -0,0 +1,40 @@
/**
 * @license
 * Copyright 2017 Google LLC
 * SPDX-License-Identifier: BSD-3-Clause
 */

/**
 * @license
 * Copyright 2018 Google Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @license
 * Copyright 2018 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

/**
 * @license
 * Copyright 2021 Google LLC
 * SPDX-LIcense-Identifier: Apache-2.0
 */
BIN
supervisor/api/panel/frontend_es5/1121.6a80ad1fbfcedf85.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1121.6a80ad1fbfcedf85.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
BIN
supervisor/api/panel/frontend_es5/1173.df00e6361fed8e6c.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/1173.df00e6361fed8e6c.js.gz
Normal file
Binary file not shown.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
Binary file not shown.
File diff suppressed because one or more lines are too long
2
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js
Normal file
@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkhome_assistant_frontend=self.webpackChunkhome_assistant_frontend||[]).push([["12"],{5739:function(e,a,t){t.a(e,(async function(e,i){try{t.r(a),t.d(a,{HaNavigationSelector:()=>c});var d=t(73577),r=(t(71695),t(47021),t(57243)),n=t(50778),l=t(36522),o=t(63297),s=e([o]);o=(s.then?(await s)():s)[0];let u,h=e=>e,c=(0,d.Z)([(0,n.Mo)("ha-selector-navigation")],(function(e,a){return{F:class extends a{constructor(...a){super(...a),e(this)}},d:[{kind:"field",decorators:[(0,n.Cb)({attribute:!1})],key:"hass",value:void 0},{kind:"field",decorators:[(0,n.Cb)({attribute:!1})],key:"selector",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"value",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"label",value:void 0},{kind:"field",decorators:[(0,n.Cb)()],key:"helper",value:void 0},{kind:"field",decorators:[(0,n.Cb)({type:Boolean,reflect:!0})],key:"disabled",value(){return!1}},{kind:"field",decorators:[(0,n.Cb)({type:Boolean})],key:"required",value(){return!0}},{kind:"method",key:"render",value:function(){return(0,r.dy)(u||(u=h` <ha-navigation-picker .hass="${0}" .label="${0}" .value="${0}" .required="${0}" .disabled="${0}" .helper="${0}" @value-changed="${0}"></ha-navigation-picker> `),this.hass,this.label,this.value,this.required,this.disabled,this.helper,this._valueChanged)}},{kind:"method",key:"_valueChanged",value:function(e){(0,l.B)(this,"value-changed",{value:e.detail.value})}}]}}),r.oi);i()}catch(u){i(u)}}))}}]);
//# sourceMappingURL=12.ffa1bdc0a98802fa.js.map
BIN
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js.br
Normal file
Binary file not shown.
BIN
supervisor/api/panel/frontend_es5/12.ffa1bdc0a98802fa.js.gz
Normal file
Binary file not shown.
@@ -0,0 +1 @@
{"version":3,"file":"12.ffa1bdc0a98802fa.js","sources":["https://raw.githubusercontent.com/home-assistant/frontend/20250221.0/src/components/ha-selector/ha-selector-navigation.ts"],"names":["HaNavigationSelector","_decorate","customElement","_initialize","_LitElement","F","constructor","args","d","kind","decorators","property","attribute","key","value","type","Boolean","reflect","html","_t","_","this","hass","label","required","disabled","helper","_valueChanged","ev","fireEvent","detail","LitElement"],"mappings":"mVAQaA,GAAoBC,EAAAA,EAAAA,GAAA,EADhCC,EAAAA,EAAAA,IAAc,4BAAyB,SAAAC,EAAAC,GAiCvC,OAAAC,EAjCD,cACiCD,EAAoBE,WAAAA,IAAAC,GAAA,SAAAA,GAAAJ,EAAA,QAApBK,EAAA,EAAAC,KAAA,QAAAC,WAAA,EAC9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,OAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,IAAS,CAAEC,WAAW,KAAQC,IAAA,WAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAE9BC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,QAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,OAAUE,IAAA,SAAAC,WAAA,IAAAL,KAAA,QAAAC,WAAA,EAEVC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,QAASC,SAAS,KAAOJ,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAK,IAAAL,KAAA,QAAAC,WAAA,EAElEC,EAAAA,EAAAA,IAAS,CAAEI,KAAMC,WAAUH,IAAA,WAAAC,KAAAA,GAAA,OAAmB,CAAI,IAAAL,KAAA,SAAAI,IAAA,SAAAC,MAEnD,WACE,OAAOI,EAAAA,EAAAA,IAAIC,IAAAA,EAAAC,CAAA,mKAECC,KAAKC,KACJD,KAAKE,MACLF,KAAKP,MACFO,KAAKG,SACLH,KAAKI,SACPJ,KAAKK,OACEL,KAAKM,cAG5B,GAAC,CAAAlB,KAAA,SAAAI,IAAA,gBAAAC,MAED,SAAsBc,IACpBC,EAAAA,EAAAA,GAAUR,KAAM,gBAAiB,CAAEP,MAAOc,EAAGE,OAAOhB,OACtD,IAAC,GA/BuCiB,EAAAA,I"}
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff